Updates from: 04/07/2021 03:08:02
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Howto Password Ban Bad On Premises Deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-password-ban-bad-on-premises-deploy.md
The following core requirements apply:
* The Key Distribution Service must be enabled on all domain controllers in the domain that run Windows Server 2012 and later versions. By default, this service is enabled via manual trigger start. * Network connectivity must exist between at least one domain controller in each domain and at least one server that hosts the proxy service for Azure AD Password Protection. This connectivity must allow the domain controller to access RPC endpoint mapper port 135 and the RPC server port on the proxy service.
- * By default, the RPC server port is a dynamic RPC port, but it can be configured to [use a static port](#static).
+ * By default, the RPC server port is a dynamic RPC port from the range (49152 - 65535), but it can be configured to [use a static port](#static).
* All machines where the Azure AD Password Protection Proxy service will be installed must have network access to the following endpoints: |**Endpoint**|**Purpose**|
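If the dynamic range is blocked between the domain controllers and the proxy server, the proxy can be pinned to a static RPC port instead. A minimal sketch, assuming the AzureADPasswordProtection PowerShell module is installed on the proxy server and that port 135 plus the chosen static port are open on the firewall (the port number and service name here are illustrative and worth confirming against the deployment steps):

```powershell
# Run on the server that hosts the Azure AD Password Protection Proxy service.
Import-Module AzureADPasswordProtection

# Pin the proxy's RPC listener to a static port instead of a dynamic one (49152-65535).
# 4557 is an arbitrary example; use any free port your firewall rules allow.
Set-AzureADPasswordProtectionProxyConfiguration -StaticPort 4557

# Restart the proxy service (service name may differ in your deployment) and verify.
Restart-Service -Name AzureADPasswordProtectionProxy
Get-AzureADPasswordProtectionProxyConfiguration
```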
active-directory Concept Conditional Access Cloud Apps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/concept-conditional-access-cloud-apps.md
Cloud apps or actions are a key signal in a Conditional Access policy. Conditional Access policies allow administrators to assign controls to specific applications or actions. - Administrators can choose from the list of applications that include built-in Microsoft applications and any [Azure AD integrated applications](../manage-apps/what-is-application-management.md) including gallery, non-gallery, and applications published through [Application Proxy](../manage-apps/what-is-application-proxy.md). -- Administrators may choose to define policy not based on a cloud application but on a user action. The only supported action is Register security information (preview), allowing Conditional Access to enforce controls around the [combined security information registration experience](../authentication/howto-registration-mfa-sspr-combined.md).
+- Administrators may choose to define policy not based on a cloud application but on a user action. We support two user actions:
+ - Register security information (preview) to enforce controls around the [combined security information registration experience](../authentication/howto-registration-mfa-sspr-combined.md)
+ - Register or join devices (preview) to enforce controls when users [register](../devices/concept-azure-ad-register.md) or [join](../devices/concept-azure-ad-join.md) devices to Azure AD.
![Define a Conditional Access policy and specify cloud apps](./media/concept-conditional-access-cloud-apps/conditional-access-cloud-apps-or-actions.png)
User actions are tasks that can be performed by a user. Currently, Conditional A
- **Register security information**: This user action allows Conditional Access policy to enforce when users who are enabled for combined registration attempt to register their security information. More information can be found in the article, [Combined security information registration](../authentication/concept-registration-mfa-sspr-combined.md). -- **Register or join devices (preview)**: This user action enables administrators to enforce Conditional Access policy when users [register](../devices/concept-azure-ad-register.md) or [join](../devices/concept-azure-ad-join.md) devices to Azure AD. There are two key considerations with this user action:
+- **Register or join devices (preview)**: This user action enables administrators to enforce Conditional Access policy when users [register](../devices/concept-azure-ad-register.md) or [join](../devices/concept-azure-ad-join.md) devices to Azure AD. It provides granular control for requiring multi-factor authentication when registering or joining devices, rather than relying on the existing tenant-wide policy. There are three key considerations with this user action:
- `Require multi-factor authentication` is the only access control available with this user action and all others are disabled. This restriction prevents conflicts with access controls that are either dependent on Azure AD device registration or not applicable to Azure AD device registration.
- - When a Conditional Access policy is enabled with this user action, you must set **Azure Active Directory** > **Devices** > **Device Settings** - `Devices to be Azure AD joined or Azure AD registered require Multi-Factor Authentication` to **No**. Otherwise, Conditional Access policy with this user action is not properly enforced. More information regarding this device setting can found in [Configure device settings](../devices/device-management-azure-portal.md#configure-device-settings). This user action provides flexibility to require multi-factor authentication for registering or joining devices for specific users and groups or conditions instead of having a tenant-wide policy in Device settings.
+ - `Client apps` and `Device state` conditions are not available with this user action since they are dependent on Azure AD device registration to enforce Conditional Access policies.
+ - When a Conditional Access policy is enabled with this user action, you must set **Azure Active Directory** > **Devices** > **Device Settings** - `Devices to be Azure AD joined or Azure AD registered require Multi-Factor Authentication` to **No**. Otherwise, the Conditional Access policy with this user action is not properly enforced. More information regarding this device setting can be found in [Configure device settings](../devices/device-management-azure-portal.md#configure-device-settings).
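As a rough illustration of the user action above (not an example from the article itself), the following Microsoft Graph PowerShell sketch creates a report-only policy that requires MFA for registering or joining devices. The group ID is a placeholder, and the `urn:user:registerdevice` identifier and request shape are assumptions to verify against the Graph conditionalAccessPolicy reference:

```powershell
# Sketch: require MFA for the "Register or join devices" (preview) user action.
Connect-MgGraph -Scopes "Policy.ReadWrite.ConditionalAccess"

$policy = @{
    displayName = "Require MFA to register or join devices"
    state       = "enabledForReportingButNotEnforced"   # report-only while validating
    conditions  = @{
        users        = @{ includeGroups = @("11111111-1111-1111-1111-111111111111") } # placeholder group
        applications = @{ includeUserActions = @("urn:user:registerdevice") }         # assumed identifier
    }
    grantControls = @{
        operator        = "OR"
        builtInControls = @("mfa")   # MFA is the only control supported for this user action
    }
}

New-MgIdentityConditionalAccessPolicy -BodyParameter $policy
```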
## Next steps
active-directory Msal Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-overview.md
MSAL can be used in many application scenarios, including the following:
| [MSAL for Android](https://github.com/AzureAD/microsoft-authentication-library-for-android)|Android| | [MSAL Angular](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-angular)| Single-page apps with Angular and Angular.js frameworks| | [MSAL for iOS and macOS](https://github.com/AzureAD/microsoft-authentication-library-for-objc)|iOS and macOS|
+| [MSAL Go (Preview)](https://github.com/AzureAD/microsoft-authentication-library-for-go)|Windows, macOS, Linux|
| [MSAL Java](https://github.com/AzureAD/microsoft-authentication-library-for-java)|Windows, macOS, Linux| | [MSAL.js](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-browser)| JavaScript/TypeScript frameworks such as Vue.js, Ember.js, or Durandal.js| | [MSAL.NET](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet)| .NET Framework, .NET Core, Xamarin Android, Xamarin iOS, Universal Windows Platform|
active-directory Quickstart V2 Nodejs Webapp Msal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-nodejs-webapp-msal.md
npm install @azure/msal-node
## Next steps > [!div class="nextstepaction"]
-> [Adding Auth to an existing web app - GitHub code sample >](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/samples/msal-node-samples/standalone-samples/auth-code)
+> [Adding Auth to an existing web app - GitHub code sample >](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/samples/msal-node-samples/auth-code)
active-directory Azureadjoin Plan https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/azureadjoin-plan.md
Users get SSO from Azure AD joined devices if the device has access to a domain
### On-premises network shares
-Your users have SSO from Azure AD joined devices when a device has access to an on-premises domain controller.
+Your users have SSO from Azure AD joined devices when a device has access to an on-premises domain controller. [Learn how this works](azuread-join-sso.md)
### Printers
-For printers, you need to deploy [hybrid cloud print](/windows-server/administration/hybrid-cloud-print/hybrid-cloud-print-deploy) for discovering printers on Azure AD joined devices.
-
-While printers can't be automatically discovered in a cloud only environment, your users can also use the printers' UNC path to directly add them.
+We recommend deploying [Universal Print](/universal-print/fundamentals/universal-print-whatis) to have a cloud based print management solution without any on-premises dependencies.
### On-premises applications relying on machine authentication
Choose your deployment approach or approaches by reviewing the table above and r
## Configure your device settings
-The Azure portal allows you to control the deployment of Azure AD joined devices in your organization. To configure the related settings, on the **Azure Active Directory page**, select `Devices > Device settings`.
+The Azure portal allows you to control the deployment of Azure AD joined devices in your organization. To configure the related settings, on the **Azure Active Directory page**, select `Devices > Device settings`. [Learn more](device-management-azure-portal.md)
### Users may join devices to Azure AD
Choose **Selected** and select the users you want to add to the local administr
![Additional local administrators on Azure AD joined devices](./media/azureadjoin-plan/02.png)
-### Require multi-factor Auth to join devices
+### Require multi-factor authentication (MFA) to join devices
Select **Yes** if you require users to perform MFA while joining devices to Azure AD. For the users joining devices to Azure AD using MFA, the device itself becomes a 2nd factor. ![Require multi-factor Auth to join devices](./media/azureadjoin-plan/03.png)
+**Recommendation:** Use the user action [Register or join devices](/conditional-access/concept-conditional-access-cloud-apps#user-actions) in Conditional Access for enforcing MFA for joining devices.
+ ## Configure your mobility settings Before you can configure your mobility settings, you may have to add an MDM provider, first.
active-directory Device Management Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/device-management-azure-portal.md
Both options allow administrators the ability to:
> [!TIP] > - Hybrid Azure AD Joined Windows 10 devices do not have an owner. If you are looking for a device by owner and didn't find it, search by the device ID. >
-> - If you see a device that is "Hybrid Azure AD joined" with a state "Pending" under the REGISTERED column, it indicates that the device has been synchronized from Azure AD connect and is waiting to complete registration from the client. Read more on how to [plan your Hybrid Azure AD join implementation](hybrid-azuread-join-plan.md). Additional information can be found in the article, [Devices frequently asked questions](faq.md).
+> - If you see a device that is "Hybrid Azure AD joined" with a state "Pending" under the REGISTERED column, it indicates that the device has been synchronized from Azure AD connect and is waiting to complete registration from the client. Read more on how to [plan your Hybrid Azure AD join implementation](hybrid-azuread-join-plan.md). Additional information can be found in the article, [Devices frequently asked questions](faq.yml).
> > - For some iOS devices, the device names containing apostrophes can potentially use different characters that look like apostrophes. So searching for such devices is a little tricky - if you are not seeing search results correctly, ensure that the search string contains matching apostrophe character.
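The device ID lookup mentioned in the tip can also be scripted. A hedged sketch using the MSOnline module (the device ID is a placeholder, and the property names are worth verifying against `Get-MsolDevice` output):

```powershell
# Find a specific device by the device ID shown in the Azure portal (placeholder GUID).
Connect-MsolService
Get-MsolDevice -DeviceId "aaaaaaaa-1111-2222-3333-bbbbbbbbbbbb" |
    Select-Object DisplayName, DeviceTrustType, ApproximateLastLogonTimestamp

# Or enumerate all devices and filter client-side, for example by display name.
Get-MsolDevice -All | Where-Object { $_.DisplayName -like "DESKTOP-*" }
```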
active-directory Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/faq.md
- Title: Azure Active Directory device management FAQ | Microsoft Docs
-description: Azure Active Directory device management FAQ.
----- Previously updated : 06/28/2019--------
-# Azure Active Directory device management FAQ
-
-## General FAQ
-
-### Q: I registered the device recently. Why can't I see the device under my user info in the Azure portal? Or why is the device owner marked as N/A for hybrid Azure Active Directory (Azure AD) joined devices?
-
-**A:** Windows 10 devices that are hybrid Azure AD joined don't show up under **USER devices**.
-Use the **All devices** view in the Azure portal. You can also use a PowerShell [Get-MsolDevice](/powershell/module/msonline/get-msoldevice) cmdlet.
-
-Only the following devices are listed under **USER devices**:
-- All personal devices that aren't hybrid Azure AD joined. -- All non-Windows 10 or Windows Server 2016 devices. -- All non-Windows devices. ---
-### Q: How do I know what the device registration state of the client is?
-
-**A:** In the Azure portal, go to **All devices**. Search for the device by using the device ID. Check the value under the join type column. Sometimes, the device might be reset or reimaged. So it's essential to also check the device registration state on the device:
-- For Windows 10 and Windows Server 2016 or later devices, run `dsregcmd.exe /status`. -- For down-level OS versions, run `%programFiles%\Microsoft Workplace Join\autoworkplace.exe`.-
-**A:** For troubleshooting information, see these articles:
-- [Troubleshooting devices using dsregcmd command](troubleshoot-device-dsregcmd.md) -- [Troubleshooting hybrid Azure Active Directory joined Windows 10 and Windows Server 2016 devices](troubleshoot-hybrid-join-windows-current.md) -- [Troubleshooting hybrid Azure Active Directory joined down-level devices](troubleshoot-hybrid-join-windows-legacy.md)---
-### Q: I see the device record under the USER info in the Azure portal. And I see the state as registered on the device. Am I set up correctly to use Conditional Access?
-
-**A:** The device join state, shown by **deviceID**, must match the state on Azure AD and meet any evaluation criteria for Conditional Access.
-For more information, see [Require managed devices for cloud app access with Conditional Access](../conditional-access/require-managed-devices.md).
---
-### Q: Why do my users see an error message saying "Your organization has deleted the device" or "Your organization has disabled the device" on their Windows 10 devices?
-
-**A:** On Windows 10 devices joined or registered with Azure AD, users are issued a [Primary refresh token (PRT)](concept-primary-refresh-token.md) which enables single sign on. The validity of the PRT is based on the validity of the device itself. Users see this message if the device is either deleted or disabled in Azure AD without initiating the action from the device itself. A device can be deleted or disabled in Azure AD in one of the following scenarios:
-- User disables the device from the My Apps portal. -- An administrator (or user) deletes or disables the device in the Azure portal or by using PowerShell -- Hybrid Azure AD joined only: An administrator removes the devices' OU out of sync scope resulting in the devices being deleted from Azure AD -- Upgrading Azure AD connect to the version 1.4.xx.x. [Understanding Azure AD Connect 1.4.xx.x and device disappearance](../hybrid/reference-connect-device-disappearance.md).--
-See below on how these actions can be rectified.
---
-### Q: I disabled or deleted my device in the Azure portal or by using Windows PowerShell. But the local state on the device says it's still registered. What should I do?
-
-**A:** This operation is by design. In this case, the device doesn't have access to resources in the cloud. Administrators can perform this action for stale, lost, or stolen devices to prevent unauthorized access. If this action was performed unintentionally, you'll need to re-enable or re-register the device as described below
--- If the device was disabled in Azure AD, an administrator with sufficient privileges can enable it from the Azure AD portal
- > [!NOTE]
- > If you are syncing devices using Azure AD Connect, hybrid Azure AD joined devices will be automatically re-enabled during the next sync cycle. So, if you need to disable a hybrid Azure AD joined device, you need to disable it from your on-premises AD
--
- To re-register hybrid Azure AD joined Windows 10 and Windows Server 2016/2019 devices, take the following steps:
-
- 1. Open the command prompt as an administrator.
- 1. Enter `dsregcmd.exe /debug /leave`.
- 1. Sign out and sign in to trigger the scheduled task that registers the device again with Azure AD.
-
- For down-level Windows OS versions that are hybrid Azure AD joined, take the following steps:
-
- 1. Open the command prompt as an administrator.
- 1. Enter `"%programFiles%\Microsoft Workplace Join\autoworkplace.exe /l"`.
- 1. Enter `"%programFiles%\Microsoft Workplace Join\autoworkplace.exe /j"`.
-
- For Azure AD joined Windows 10 devices, take the following steps:
-
- 1. Open the command prompt as an administrator
- 1. Enter `dsregcmd /forcerecovery` (Note: You need to be an administrator to perform this action).
- 1. Click "Sign in" in the dialog that opens up and continue with the sign in process.
- 1. Sign out and sign in back to the device to complete the recovery.
-
- For Azure AD registered Windows 10 devices, take the following steps:
-
- 1. Go to **Settings** > **Accounts** > **Access Work or School**.
- 1. Select the account and select **Disconnect**.
- 1. Click on "+ Connect" and register the device again by going through the sign in process.
---
-### Q: Why do I see duplicate device entries in the Azure portal?
-
-**A:**
-- For Windows 10 and Windows Server 2016, repeated tries to unjoin and rejoin the same device might cause duplicate entries. -- Each Windows user who uses **Add Work or School Account** creates a new device record with the same device name. -- For down-level Windows OS versions that are on-premises Active Directory domain joined, automatic registration creates a new device record with the same device name for each domain user who signs in to the device. -- An Azure AD joined machine that's wiped, reinstalled, and rejoined with the same name shows up as another record with the same device name.---
-### Q: Does Windows 10 device registration in Azure AD support TPMs in FIPS mode?
-
-**A:** Windows 10 device registration is only supported for FIPS-compliant TPM 2.0 and is not supported for TPM 1.2. If your devices have FIPS-compliant TPM 1.2, you must disable them before proceeding with Azure AD join or Hybrid Azure AD join. Microsoft does not provide any tools for disabling FIPS mode for TPMs as it is dependent on the TPM manufacturer. Contact your hardware OEM for support.
---
-**Q: Why can a user still access resources from a device I disabled in the Azure portal?**
-
-**A:** It takes up to an hour for a revoke to be applied from the time the Azure AD device is marked as disabled.
-
->[!NOTE]
->For enrolled devices, we recommend that you wipe the device to make sure users can't access the resources. For more information, see [What is device enrollment?](/mem/intune/user-help/use-managed-devices-to-get-work-done).
---
-### Q: Why are there devices marked as "Pending" under the REGISTERED column in the Azure portal?
-
-**A**: Pending indicates that the device is not registered. This state indicates that a device has been synchronized using Azure AD connect from an on-premises AD and is ready for device registration. These devices have the JOIN TYPE set to "Hybrid Azure AD joined". Learn more on [how to plan your hybrid Azure Active Directory join implementation](hybrid-azuread-join-plan.md).
-
->[!NOTE]
->A device can also change from having a registered state to "Pending"
->* If a device is deleted from Azure AD first and re-synchronized from an on-premises AD.
->* If a device is removed from a sync scope on Azure AD Connect and added back.
->
->In both cases, you must re-register the device manually on each of these devices. To review whether the device was previously registered, you can [troubleshoot devices using the dsregcmd command](troubleshoot-device-dsregcmd.md).
---
-### Q: I cannot add more than 3 Azure AD user accounts under the same user session on a Windows 10 device, why?
-
-**A**: Azure AD added support for multiple Azure AD accounts in Windows 10 1803 release. However, Windows 10 restricts the number of Azure AD accounts on a device to 3 to limit the size of token requests and enable reliable single sign on (SSO). Once 3 accounts have been added, users will see an error for subsequent accounts. The Additional problem information on the error screen provides the following message indicating the reason - "Add account operation is blocked because account limit is reached".
--
-## Azure AD join FAQ
-
-### Q: How do I unjoin an Azure AD joined device locally on the device?
-
-**A:** For pure Azure AD joined devices, make sure you have an offline local administrator account or create one. You can't sign in with any Azure AD user credentials. Next, go to **Settings** > **Accounts** > **Access Work or School**. Select your account and select **Disconnect**. Follow the prompts and provide the local administrator credentials when prompted. Reboot the device to finish the unjoin process.
---
-### Q: Can my users sign in to Azure AD joined devices that are deleted or disabled in Azure AD?
-
-**A:** Yes. Windows has a cached username and password capability that allows users who signed in previously to access the desktop quickly even without network connectivity.
-
-When a device is deleted or disabled in Azure AD, it's not known to the Windows device. So users who signed in previously continue to access the desktop with the cached username and password. But as the device is deleted or disabled, users can't access any resources protected by device-based Conditional Access.
-
-Users who didn't sign in previously can't access the device. There's no cached username and password enabled for them.
---
-### Q: Can a disabled or deleted user sign in to an Azure AD joined device?
-
-**A:** Yes, but only for a limited time. When a user is deleted or disabled in Azure AD, it's not immediately known to the Windows device. So users who signed in previously can access the desktop with the cached username and password.
-
-Typically, the device is aware of the user state in less than four hours. Then Windows blocks those users' access to the desktop. As the user is deleted or disabled in Azure AD, all their tokens are revoked. So they can't access any resources.
-
-Deleted or disabled users who didn't sign in previously can't access a device. There's no cached username and password enabled for them.
---
-### Q: Why do my users have issues on Azure AD joined devices after changing their UPN?
-
-**A:** Currently, UPN changes are not fully supported on Azure AD joined devices. So their authentication with Azure AD fails after their UPN changes. As a result, users have SSO and Conditional Access issues on their devices. At this time, users need to sign in to Windows through the "Other user" tile using their new UPN to resolve this issue. We are currently working on addressing this issue. However, users signing in with Windows Hello for Business do not face this issue.
-
-UPN changes are supported with Windows 10 2004 update. Users on devices with this update will not have any issues after changing their UPNs
---
-### Q: My users can't search printers from Azure AD joined devices. How can I enable printing from those devices?
-
-**A:** To deploy printers for Azure AD joined devices, see [Deploy Windows Server Hybrid Cloud Print with Pre-Authentication](/windows-server/administration/hybrid-cloud-print/hybrid-cloud-print-deploy). You need an on-premises Windows Server to deploy hybrid cloud print. Currently, cloud-based print service isn't available.
---
-### Q: How do I connect to a remote Azure AD joined device?
-
-**A:** See [Connect to remote Azure Active Directory-joined PC](/windows/client-management/connect-to-remote-aadj-pc).
---
-### Q: Why do my users see *You can't get there from here*?
-
-**A:** Did you configure certain Conditional Access rules to require a specific device state? If the device doesn't meet the criteria, users are blocked, and they see that message.
-Evaluate the Conditional Access policy rules. Make sure the device meets the criteria to avoid the message.
---
-### Q: Why don't some of my users get Azure AD Multi-Factor Authentication prompts on Azure AD joined devices?
-
-**A:** A user might join or register a device with Azure AD by using Multi-Factor Authentication. Then the device itself becomes a trusted second factor for that user. Whenever the same user signs in to the device and accesses an application, Azure AD considers the device as a second factor. It enables that user to seamlessly access applications without additional Multi-Factor Authentication prompts.
-
-This behavior:
-- Is applicable to Azure AD joined and Azure AD registered devices - but not for hybrid Azure AD joined devices. -- Isn't applicable to any other user who signs in to that device. So all other users who access that device get a Multi-Factor Authentication challenge. Then they can access applications that require Multi-Factor Authentication.---
-### Q: Why do I get a *username or password is incorrect* message for a device I just joined to Azure AD?
-
-**A:** Common reasons for this scenario are as follows:
-- Your user credentials are no longer valid. -- Your computer can't communicate with Azure Active Directory. Check for any network connectivity issues. -- Federated sign-ins require your federation server to support WS-Trust endpoints that are enabled and accessible. -- You enabled pass-through authentication. So your temporary password needs to be changed when you sign in.---
-### Q: Why do I see the *Oops… an error occurred!* dialog when I try to Azure AD join my PC?
-
-**A:** This error happens when you set up Azure Active Directory enrollment with Intune. Make sure that the user who tries to Azure AD join has the correct Intune license assigned. For more information, see [Set up enrollment for Windows devices](/intune/windows-enroll).
---
-### Q: Why did my attempt to Azure AD join a PC fail, although I didn't get any error information?
-
-**A:** A likely cause is that you signed in to the device by using the local built-in administrator account.
-Create a different local account before you use Azure Active Directory join to finish the setup.
---
-### Q: What are the MS-Organization-P2P-Access certificates present on our Windows 10 devices?
-
-**A:** The MS-Organization-P2P-Access certificates are issued by Azure AD to both, Azure AD joined and hybrid Azure AD joined devices. These certificates are used to enable trust between devices in the same tenant for remote desktop scenarios. One certificate is issued to the device and another is issued to the user. The device certificate is present in `Local Computer\Personal\Certificates` and is valid for one day. This certificate is renewed (by issuing a new certificate) if the device is still active in Azure AD. The user certificate is present in `Current User\Personal\Certificates` and this certificate is also valid for one day, but it is issued on-demand when a user attempts a remote desktop session to another Azure AD joined device. It is not renewed on expiry. Both these certificates are issued using the MS-Organization-P2P-Access certificate present in the `Local Computer\AAD Token Issuer\Certificates`. This certificate is issued by Azure AD during device registration.
---
-### Q: Why do I see multiple expired certificates issued by MS-Organization-P2P-Access on our Windows 10 devices? How can I delete them?
-
-**A:** There was an issue identified on Windows 10 version 1709 and lower where expired MS-Organization-P2P-Access certificates continued to exist on the computer store because of cryptographic issues. Your users could face issues with network connectivity, if you are using any VPN clients (for example, Cisco AnyConnect) that cannot handle the large number of expired certificates. This issue was fixed in Windows 10 1803 release to automatically delete any such expired MS-Organization-P2P-Access certificates. You can resolve this issue by updating your devices to Windows 10 1803. If you are unable to update, you can delete these certificates without any adverse impact.
---
-## Hybrid Azure AD join FAQ
-
-### Q: How do I unjoin a Hybrid Azure AD joined device locally on the device?
-
-**A:** For hybrid Azure AD joined devices, make sure to turn off automatic registration. Then the scheduled task doesn't register the device again. Next, open a command prompt as an administrator and enter `dsregcmd.exe /debug /leave`. Or run this command as a script across several devices to unjoin in bulk.
-
-### Q: Where can I find troubleshooting information to diagnose hybrid Azure AD join failures?
-
-**A:** For troubleshooting information, see these articles:
-- [Troubleshooting hybrid Azure Active Directory joined Windows 10 and Windows Server 2016 devices](troubleshoot-hybrid-join-windows-current.md) -- [Troubleshooting hybrid Azure Active Directory joined down-level devices](troubleshoot-hybrid-join-windows-legacy.md)
-
-### Q: Why do I see a duplicate Azure AD registered record for my Windows 10 hybrid Azure AD joined device in the Azure AD devices list?
-
-**A:** When your users add their accounts to apps on a domain-joined device, they might be prompted with **Add account to Windows?** If they enter **Yes** on the prompt, the device registers with Azure AD. The trust type is marked as Azure AD registered. After you enable hybrid Azure AD join in your organization, the device also gets hybrid Azure AD joined. Then two device states show up for the same device.
-
-In most cases, Hybrid Azure AD join takes precedence over the Azure AD registered state, resulting in your device being considered hybrid Azure AD joined for any authentication and Conditional Access evaluation. However, sometimes, this dual state can result in a non-deterministic evaluation of the device and cause access issues. We strongly recommend upgrading to Windows 10 version 1803 and above where we automatically clean up the Azure AD registered state. Learn how to [avoid or clean up this dual state on the Windows 10 machine](hybrid-azuread-join-plan.md#review-things-you-should-know).
---
-### Q: Why do my users have issues on Windows 10 hybrid Azure AD joined devices after changing their UPN?
-
-**A:** Currently UPN changes are not fully supported with hybrid Azure AD joined devices. While users can sign in to the device and access their on-premises applications, authentication with Azure AD fails after a UPN change. As a result, users have SSO and Conditional Access issues on their devices. At this time, you need to unjoin the device from Azure AD (run "dsregcmd /leave" with elevated privileges) and rejoin (happens automatically) to resolve the issue. We are currently working on addressing this issue. However, users signing in with Windows Hello for Business do not face this issue.
-
-UPN changes are supported with Windows 10 2004 update. Users on devices with this update will not have any issues after changing their UPNs
---
-### Q: Do Windows 10 hybrid Azure AD joined devices require line of sight to the domain controller to get access to cloud resources?
-
-**A:** No, except when the user's password is changed. After Windows 10 hybrid Azure AD join is complete, and the user has signed in at least once, the device doesn't require line of sight to the domain controller to access cloud resources. Windows 10 can get single sign-on to Azure AD applications from anywhere with an internet connection, except when a password is changed. Users who sign in with Windows Hello for Business continue to get single sign-on to Azure AD applications even after a password change, even if they don't have line of sight to their domain controller.
---
-### Q: What happens if a user changes their password and tries to login to their Windows 10 hybrid Azure AD joined device outside the corporate network?
-
-**A:**
-If a password is changed outside the corporate network (for example, by using Azure AD SSPR), then the user's sign-in with the new password will fail. For hybrid Azure AD joined devices, on-premises Active Directory is the primary authority. When a device does not have line of sight to the domain controller, it is unable to validate the new password. So, the user needs to establish a connection with the domain controller (either via VPN or being in the corporate network) before they're able to sign in to the device with their new password. Otherwise, they can only sign in with their old password because of the cached sign-in capability in Windows. However, the old password is invalidated by Azure AD during token requests and hence, prevents single sign-on and fails any device-based Conditional Access policies. This issue doesn't occur if you use Windows Hello for Business.
---
-## Azure AD register FAQ
-
-### Q: How do I remove an Azure AD registered state for a device locally?
-
-**A:**
- For Windows 10 Azure AD registered devices, go to **Settings** > **Accounts** > **Access Work or School**. Select your account and select **Disconnect**. Device registration is per user profile on Windows 10. -- For iOS and Android, you can use the Microsoft Authenticator application **Settings** > **Device Registration** and select **Unregister device**. -- For macOS, you can use the Microsoft Intune Company Portal application to unenroll the device from management and remove any registration. -
-For Windows 10 devices, this process can be automated with the [Workplace Join (WPJ) removal tool](https://download.microsoft.com/download/8/e/f/8ef13ae0-6aa8-48a2-8697-5b1711134730/WPJCleanUp.zip)
-
-> [!NOTE]
-> This tool removes all SSO accounts on the device. After this operation, all applications will lose SSO state, and the device will be unenrolled from management tools (MDM) and unregistered from the cloud. The next time an application tries to sign in, users will be asked to add the account again.
--
-### Q: How can I block users from adding additional work accounts (Azure AD registered) on my corporate Windows 10 devices?
-
-**A:**
-Set the following registry value to block your users from adding additional work accounts to your corporate domain joined, Azure AD joined, or hybrid Azure AD joined Windows 10 devices. This policy can also be used to block domain joined machines from inadvertently getting Azure AD registered with the same user account.
-
-`HKLM\SOFTWARE\Policies\Microsoft\Windows\WorkplaceJoin, "BlockAADWorkplaceJoin"=dword:00000001`
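A hedged PowerShell equivalent of setting that value (the key path and value name are taken from the answer above; run from an elevated session):

```powershell
# Create the policy key if needed, then set BlockAADWorkplaceJoin = 1 (DWORD).
$key = "HKLM:\SOFTWARE\Policies\Microsoft\Windows\WorkplaceJoin"
New-Item -Path $key -Force | Out-Null
New-ItemProperty -Path $key -Name "BlockAADWorkplaceJoin" -Value 1 -PropertyType DWord -Force | Out-Null
```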
--
-### Q: Can I register Android or iOS BYOD devices?
-
-**A:** Yes, but only with the Azure device registration service and for hybrid customers. It's not supported with the on-premises device registration service in Active Directory Federation Services (AD FS).
--
-### Q: How can I register a macOS device?
-
-**A:** Take the following steps:
-
-1. [Create a compliance policy](/intune/compliance-policy-create-mac-os)
-1. [Define a Conditional Access policy for macOS devices](../conditional-access/overview.md)
-
-**Remarks:**
--- The users included in your Conditional Access policy need a [supported version of Office for macOS](../conditional-access/concept-conditional-access-conditions.md) to access resources. -- During the first access try, your users are prompted to enroll the device by using the company portal.--
-## Next steps
-- Learn more about [Azure AD registered devices](concept-azure-ad-register.md) -- Learn more about [Azure AD joined devices](concept-azure-ad-join.md) -- Learn more about [hybrid Azure AD joined devices](concept-azure-ad-join-hybrid.md)
active-directory Manage Stale Devices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/manage-stale-devices.md
To clean up Azure AD:
- **Windows 7/8** - Disable or delete Windows 7/8 devices in your on-premises AD first. You can't use Azure AD Connect to disable or delete Windows 7/8 devices in Azure AD. Instead, after you make the change in your on-premises AD, you must also disable/delete the devices in Azure AD. > [!NOTE]
->* Deleting devices in your on-premises AD or Azure AD does not remove registration on the client. It will only prevent access to resources using device as an identity (e.g. Conditional Access). Read additional information on how to [remove registration on the client](faq.md#hybrid-azure-ad-join-faq).
+>* Deleting devices in your on-premises AD or Azure AD does not remove registration on the client. It will only prevent access to resources using device as an identity (e.g. Conditional Access). Read additional information on how to [remove registration on the client](faq.yml).
>* Deleting a Windows 10 device only in Azure AD will re-synchronize the device from your on-premises using Azure AD connect but as a new object in "Pending" state. A re-registration is required on the device. >* Removing the device from sync scope for Windows 10/Server 2016 devices will delete the Azure AD device. Adding it back to sync scope will place a new object in "Pending" state. A re-registration of the device is required. >* If you are not using Azure AD Connect for Windows 10 devices to synchronize (e.g. ONLY using AD FS for registration), you must manage the lifecycle similar to Windows 7/8 devices.
Disable or delete Azure AD joined devices in the Azure AD.
> [!NOTE] >* Deleting an Azure AD device does not remove registration on the client. It will only prevent access to resources using device as an identity (e.g. Conditional Access).
->* Read more on [how to unjoin on Azure AD](faq.md#azure-ad-join-faq)
+>* Read more on [how to unjoin on Azure AD](faq.yml)
### Azure AD registered devices
Disable or delete Azure AD registered devices in the Azure AD.
> [!NOTE] >* Deleting an Azure AD registered device in Azure AD does not remove registration on the client. It will only prevent access to resources using device as an identity (e.g. Conditional Access).
->* Read more on [how to remove a registration on the client](faq.md#azure-ad-register-faq)
+>* Read more on [how to remove a registration on the client](faq.yml)
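For any of these device types, the stale-device query itself can be scripted before you disable or delete anything. A hedged sketch using the AzureAD module; the 90-day threshold is only an example, and the delete step is left commented out so the list can be reviewed (and disabled first) before removal:

```powershell
# List devices whose activity timestamp is older than 90 days (example threshold).
Connect-AzureAD
$cutoff = (Get-Date).AddDays(-90)
$stale  = Get-AzureADDevice -All:$true |
    Where-Object { $_.ApproximateLastLogonTimeStamp -le $cutoff }

$stale | Select-Object DisplayName, DeviceOSType, ApproximateLastLogonTimeStamp

# After review (and typically a disable-first grace period):
# $stale | ForEach-Object { Remove-AzureADDevice -ObjectId $_.ObjectId }
```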
## Clean up stale devices in the Azure portal
active-directory Plan Device Deployment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/plan-device-deployment.md
The key benefits of giving your devices an Azure AD identity:
Video: [Conditional access with device controls](https://youtu.be/NcONUf-jeS4)
-FAQs: [Azure AD device management FAQ](faq.md) and [Settings and data roaming FAQ](enterprise-state-roaming-faqs.md)
+FAQs: [Azure AD device management FAQ](faq.yml) and [Settings and data roaming FAQ](enterprise-state-roaming-faqs.md)
## Plan the deployment project
active-directory Troubleshoot Device Dsregcmd https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/troubleshoot-device-dsregcmd.md
This section performs the prerequisite checks for the provisioning of Windows He
## Next steps
-For questions, see the [device management FAQ](faq.md)
+For questions, see the [device management FAQ](faq.yml)
active-directory Troubleshoot Hybrid Join Windows Current https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/troubleshoot-hybrid-join-windows-current.md
If the values are **NO**, it could be due to:
Continue [troubleshooting devices using the dsregcmd command](troubleshoot-device-dsregcmd.md)
-For questions, see the [device management FAQ](faq.md)
+For questions, see the [device management FAQ](faq.yml)
active-directory Troubleshoot Hybrid Join Windows Legacy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/troubleshoot-hybrid-join-windows-legacy.md
You can also find the status information in the event log under: **Applications
## Next steps
-For questions, see the [device management FAQ](faq.md)
+For questions, see the [device management FAQ](faq.yml)
active-directory Groups Lifecycle https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/enterprise-users/groups-lifecycle.md
For more information on permissions to restore a deleted group, see [Restore a d
> - When you first set up expiration, any groups that are older than the expiration interval are set to 35 days until expiration unless the group is automatically renewed or the owner renews it. > - When a dynamic group is deleted and restored, it's seen as a new group and re-populated according to the rule. This process can take up to 24 hours. > - Expiration notices for groups used in Teams appear in the Teams Owners feed.
+> - When you enable expiration for selected groups, you can add up to 500 groups to the list. If you need to add more than 500 groups, you can enable expiration for all your groups. In that scenario, the 500-group limitation doesn't apply.
## Email notifications
active-directory Direct Federation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/direct-federation.md
Previously updated : 03/02/2021 Last updated : 04/06/2021
After you set up direct federation with an organization, any new guest users you
- If you set up direct federation with a partner organization and invite guest users, and then the partner organization later moves to Azure AD, the guest users who have already redeemed invitations will continue to use direct federation, as long as the direct federation policy in your tenant exists. - If you delete direct federation with a partner organization, any guest users currently using direct federation will be unable to sign in.
-In any of these scenarios, you can update a guest user's authentication method by deleting the guest user account from your directory and reinviting them.
+In any of these scenarios, you can update a guest user's authentication method by [resetting their redemption status](reset-redemption-status.md).
Direct federation is tied to domain namespaces, such as contoso.com and fabrikam.com. When establishing a direct federation configuration with AD FS or a third-party IdP, organizations associate one or more domain namespaces to these IdPs.
When direct federation is established with a partner organization, it takes prec
### Does direct federation address sign-in issues due to a partially synced tenancy? No, the [email one-time passcode](one-time-passcode.md) feature should be used in this scenario. A "partially synced tenancy" refers to a partner Azure AD tenant where on-premises user identities aren't fully synced to the cloud. A guest whose identity doesn't yet exist in the cloud but who tries to redeem your B2B invitation won't be able to sign in. The one-time passcode feature would allow this guest to sign in. The direct federation feature addresses scenarios where the guest has their own IdP-managed organizational account, but the organization has no Azure AD presence at all. ### Once Direct Federation is configured with an organization, does each guest need to be sent and redeem an individual invitation?
-Setting up direct federation doesn't change the authentication method for guest users who have already redeemed an invitation from you. You can update a guest user's authentication method by deleting the guest user account from your directory and reinviting them.
+Setting up direct federation doesn't change the authentication method for guest users who have already redeemed an invitation from you. You can update a guest user's authentication method by [resetting their redemption status](reset-redemption-status.md).
## Step 1: Configure the partner organization's identity provider First, your partner organization needs to configure their identity provider with the required claims and relying party trusts.
Now test your direct federation setup by inviting a new B2B guest user. For deta
## How do I remove direct federation?
-You can remove your direct federation setup. If you do, direct federation guest users who have already redeemed their invitations won't be able to sign in. But you can give them access to your resources again by deleting them from the directory and reinviting them.
+You can remove your direct federation setup. If you do, direct federation guest users who have already redeemed their invitations won't be able to sign in. But you can give them access to your resources again by [resetting their redemption status](reset-redemption-status.md).
To remove direct federation with an identity provider in the Azure AD portal: 1. Go to the [Azure portal](https://portal.azure.com/). In the left pane, select **Azure Active Directory**.
active-directory Google Federation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/google-federation.md
Previously updated : 03/02/2021 Last updated : 04/06/2021
You'll now set the Google client ID and client secret. You can use the Azure por
> Use the client ID and client secret from the app you created in "Step 1: Configure a Google developer project." For more information, see [New-AzureADMSIdentityProvider](/powershell/module/azuread/new-azureadmsidentityprovider?view=azureadps-2.0-preview&preserve-view=true). ## How do I remove Google federation?
-You can delete your Google federation setup. If you do so, Google guest users who have already redeemed their invitation won't be able to sign in. But you can give them access to your resources again by deleting them from the directory and reinviting them.
+You can delete your Google federation setup. If you do so, Google guest users who have already redeemed their invitation won't be able to sign in. But you can give them access to your resources again by [resetting their redemption status](reset-redemption-status.md).
**To delete Google federation in the Azure AD portal** 1. Go to the [Azure portal](https://portal.azure.com). On the left pane, select **Azure Active Directory**.
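The `New-AzureADMSIdentityProvider` cmdlet referenced above can also add the Google configuration from PowerShell. A minimal sketch with placeholder client ID and secret (the cmdlet ships in the AzureADPreview module; double-check the parameter names against the cmdlet reference):

```powershell
# Add Google as a B2B identity provider using the values from the Google developer project.
Connect-AzureAD
New-AzureADMSIdentityProvider -Type Google -Name Google `
    -ClientId "000000000000.apps.googleusercontent.com" `
    -ClientSecret "<client-secret>"

# Confirm the identity providers configured in the tenant.
Get-AzureADMSIdentityProvider
```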
active-directory One Time Passcode https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/one-time-passcode.md
Previously updated : 03/02/2021 Last updated : 04/06/2021
You can see whether a guest user authenticates using one-time passcodes by viewi
![Screenshot showing a one-time passcode user with Source value of OTP](media/one-time-passcode/guest-user-properties.png) > [!NOTE]
-> When a user redeems a one-time passcode and later obtains an MSA, Azure AD account, or other federated account, they'll continue to be authenticated using a one-time passcode. If you want to update their authentication method, you can delete their guest user account and reinvite them.
+> When a user redeems a one-time passcode and later obtains an MSA, Azure AD account, or other federated account, they'll continue to be authenticated using a one-time passcode. If you want to update the user's authentication method, you can [reset their redemption status](reset-redemption-status.md).
### Example
Starting October 2021, the email one-time passcode feature will be turned on for
> [!NOTE] >
-> If the email one-time passcode feature has been enabled in your tenant and you turn it off, any guest users who have redeemed a one-time passcode will not be able to sign in. You can delete the guest user and reinvite them so they can sign in again using another authentication method.
+> If the email one-time passcode feature has been enabled in your tenant and you turn it off, any guest users who have redeemed a one-time passcode will not be able to sign in. You can [reset their redemption status](reset-redemption-status.md) so they can sign in again using another authentication method.
### To disable the email one-time passcode feature
active-directory Reset Redemption Status https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/reset-redemption-status.md
Previously updated : 02/03/2021 Last updated : 04/06/2021
-# Reset redemption status for a guest user
+# Reset redemption status for a guest user (Preview)
After a guest user has redeemed your invitation for B2B collaboration, there might be times when you'll need to update their sign-in information, for example when:
active-directory Delete Application Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/delete-application-portal.md
To delete an application from your Azure AD tenant:
## Clean up resources
-When your done with this quickstart series, consider deleting the app to clean up your test tenant. Deleting the app was covered in this quickstart.
+When you are done with this quickstart series, consider deleting the app to clean up your test tenant. Deleting the app was covered in this quickstart.
## Next steps
active-directory Tenant Restrictions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/tenant-restrictions.md
Previously updated : 2/23/2021 Last updated : 4/6/2021
This section describes the experience for both end users and admins.
An example user is on the Contoso network, but is trying to access the Fabrikam instance of a shared SaaS application like Outlook online. If Fabrikam is a non-permitted tenant for the Contoso instance, the user sees an access denial message, which says you're trying to access a resource that belongs to an organization unapproved by your IT department.
+![Tenant restrictions error message, from April 2021](./media/tenant-restrictions/error-message.png)
+ ### Admin experience While configuration of tenant restrictions is done on the corporate proxy infrastructure, admins can access the tenant restrictions reports in the Azure portal directly. To view the reports:
The report may contain limited information, such as target directory ID, when a
Like other reports in the Azure portal, you can use filters to specify the scope of your report. You can filter on a specific time interval, user, application, client, or status. If you select the **Columns** button, you can choose to display data with any combination of the following fields: -- **User** - this field can have personally identifiable information removed, where it will be set to `00000000-0000-0000-0000-000000000000`.
+- **User** - this field can have personal data removed, where it will be set to `00000000-0000-0000-0000-000000000000`.
- **Application** - **Status** - **Date** - **Date (UTC)** - where UTC is Coordinated Universal Time - **IP Address** - **Client** -- **Username** - this field can have personally identifiable information removed, where it will be set to `{PII Removed}@domain.com`
+- **Username** - this field can have personal data removed, where it will be set to `{PII Removed}@domain.com`
- **Location** - **Target tenant ID**
Some organizations attempt to fix this by blocking `login.live.com` in order to
### Configuration for consumer apps
-While the `Restrict-Access-To-Tenants` header functions as an allow-list, the Microsoft account (MSA) block works as a deny signal, telling the Microsoft account platform to not allow users to sign in to consumer applications. To send this signal, the `sec-Restrict-Tenant-Access-Policy` header is injected to traffic visiting `login.live.com` using the same corporate proxy or firewall as [above](#proxy-configuration-and-requirements). The value of the header must be `restrict-msa`. When the header is present and a consumer app is attempting to sign in a user directly, that sign in will be blocked.
+While the `Restrict-Access-To-Tenants` header functions as an allowlist, the Microsoft account (MSA) block works as a deny signal, telling the Microsoft account platform to not allow users to sign in to consumer applications. To send this signal, the `sec-Restrict-Tenant-Access-Policy` header is injected to traffic visiting `login.live.com` using the same corporate proxy or firewall as [above](#proxy-configuration-and-requirements). The value of the header must be `restrict-msa`. When the header is present and a consumer app is attempting to sign in a user directly, that sign in will be blocked.
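Taken together, the headers the proxy injects might look like the following; the tenant list is illustrative, and the allowlist value is a comma-separated list of permitted tenant domains or IDs:

```
Restrict-Access-To-Tenants: contoso.com,fabrikam.onmicrosoft.com
sec-Restrict-Tenant-Access-Policy: restrict-msa
```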
At this time, authentication to consumer applications does not appear in the [admin logs](#admin-experience), as login.live.com is hosted separately from Azure AD.
The `restrict-msa` policy blocks the use of consumer applications, but allows th
## Next steps - Read about [Updated Office 365 modern authentication](https://www.microsoft.com/microsoft-365/blog/2015/03/23/office-2013-modern-authentication-public-preview-announced/) -- Review the [Office 365 URLs and IP address ranges](https://support.office.com/article/Office-365-URLs-and-IP-address-ranges-8548a211-3fe7-47cb-abb1-355ea5aa88a2)
+- Review the [Office 365 URLs and IP address ranges](https://support.office.com/article/Office-365-URLs-and-IP-address-ranges-8548a211-3fe7-47cb-abb1-355ea5aa88a2)
active-directory Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/managed-identities-azure-resources/overview.md
# What are managed identities for Azure resources?
-A common challenge for developers is the management of secrets and credentials to secure communication between different services. On Azure, managed identities eliminate the need for developers having to manage credentials by providing an identity for the Azure resource in Azure AD and using it to obtain Azure Active Directory (Azure AD) tokens. This also helps accessing [Azure Key Vault](../../key-vault/general/overview.md) where developers can store credentials in a secure manner. Managed identities for Azure resources solves this problem by providing Azure services with an automatically managed identity in Azure AD.
+A common challenge for developers is the management of secrets and credentials used to secure communication between different components making up a solution. Managed identities eliminate the need for developers to manage credentials. Managed identities provide an identity for applications to use when connecting to resources that support Azure Active Directory (Azure AD) authentication. Applications may use the managed identity to obtain Azure AD tokens. For example, an application may use a managed identity to access resources like [Azure Key Vault](../../key-vault/general/overview.md) where developers can store credentials in a secure manner or to access storage accounts.
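As a concrete illustration of the Key Vault scenario, a hedged sketch using Azure PowerShell from a VM that has a system-assigned managed identity; the vault and secret names are placeholders, and the identity must already have been granted access to the vault:

```powershell
# Sign in as the VM's managed identity - no stored credentials or secrets involved.
Connect-AzAccount -Identity

# Read a secret the identity has been granted access to (placeholder names).
$secret = Get-AzKeyVaultSecret -VaultName "contoso-vault" -Name "sql-connection-string" -AsPlainText
```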
What can a managed identity be used for?
What can a managed identity be used for?
Here are some of the benefits of using managed identities: - You don't need to manage credentials. Credentials are not even accessible to you. -- You can use managed identities to authenticate to any resource that supports Azure Active Directory authentication including your own applications.
+- You can use managed identities to authenticate to any resource that supports [Azure Active Directory authentication](../authentication/overview-authentication.md) including your own applications.
- Managed identities can be used without any additional cost. > [!NOTE]
The table below shows the differences between the two types of managed identitie
## How can I use managed identities for Azure resources?
-![some examples of how a developer may use managed identities to get access to resources from their code without managing authentication information](media/overview/azure-managed-identities-examples.png)
+![some examples of how a developer may use managed identities to get access to resources from their code without managing authentication information](media/overview/when-use-managed-identities.png)
## What Azure services support the feature?<a name="which-azure-services-support-managed-identity"></a>
active-directory Pim Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/privileged-identity-management/pim-getting-started.md
Once Privileged Identity Management is set up, you can learn your way around.
| Task + Manage | Description | | | | | **My roles** | Displays a list of eligible and active roles assigned to you. This is where you can activate any assigned eligible roles. |
-| **My requests** | Displays your pending requests to activate eligible role assignments. |
+| **Pending requests** | Displays your pending requests to activate eligible role assignments. |
| **Approve requests** | Displays a list of requests to activate eligible roles by users in your directory that you are designated to approve. | | **Review access** | Lists active access reviews you are assigned to complete, whether you're reviewing access for yourself or someone else. | | **Azure AD roles** | Displays a dashboard and settings for Privileged role administrators to manage Azure AD role assignments. This dashboard is disabled for anyone who isn't a privileged role administrator. These users have access to a special dashboard titled My view. The My view dashboard only displays information about the user accessing the dashboard, not the entire organization. |
active-directory Pim How To Start Security Review https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/privileged-identity-management/pim-how-to-start-security-review.md
Previously updated : 3/16/2021 Last updated : 4/05/2021
To reduce the risk associated with stale role assignments, you should regularly
This article describes how to create one or more access reviews for privileged Azure AD roles.
+## Prerequisite license
++
+> [!Note]
+> Currently, you can scope an access review to service principals with access to Azure AD and Azure resource roles (Preview) with an Azure Active Directory Premium P2 edition active in your tenant. The licensing model for service principals will be finalized for general availability of this feature and additional licenses may be required.
+ ## Prerequisites [Privileged Role Administrator](../roles/permissions-reference.md#privileged-role-administrator)
This article describes how to create one or more access reviews for privileged A
1. Sign in to [Azure portal](https://portal.azure.com/) with a user that is a member of the Privileged role administrator role.
-1. Open **Azure AD Privileged Identity Management**.
-
-1. Select **Azure AD roles**.
+1. Select **Identity Governance**.
+
+1. Select **Azure AD roles** under **Azure AD Privileged Identity Management**.
+
+1. Select **Azure AD roles** again under **Manage**.
1. Under Manage, select **Access reviews**, and then select **New**.
Click **New** to create a new access review.
1. Use the **End** setting to specify how to end the recurring access review series. The series can end in three ways: it runs continuously to start reviews indefinitely, until a specific date, or after a defined number of occurrences has been completed. You, another User administrator, or another Global administrator can stop the series after creation by changing the date in **Settings**, so that it ends on that date.
-1. In the **Users** section, select one or more roles that you want to review membership of.
+1. In the **Users Scope** section, select the scope of the review. To review users and groups with access to the Azure AD role, select **Users and Groups**, or select **(Preview) Service Principals** to review the machine accounts with access to the Azure AD role.
![Users scope to review role membership of](./media/pim-how-to-start-security-review/users.png)
+1. Under **Review role membership**, select the privileged Azure AD roles to review.
+ > [!NOTE] > - Roles selected here include both [permanent and eligible roles](../privileged-identity-management/pim-how-to-add-role-to-user.md). > - Selecting more than one role will create multiple access reviews. For example, selecting five roles will create five separate access reviews.
Click **New** to create a new access review.
![Reviewers list of selected users or members (self)](./media/pim-how-to-start-security-review/reviewers.png)
- - **Selected users** - Use this option when you don't know who needs access. With this option, you can assign the review to a resource owner or group manager to complete.
- - **Members (self)** - Use this option to have the users review their own role assignments. Groups assigned to the role will not be a part of the review when this option is selected.
- - **Manager** – Use this option to have the user's manager review their role assignment. Upon selecting Manager, you will also have the option to specify a fallback reviewer. Fallback reviewers are asked to review a user when the user has no manager specified in the directory. Groups assigned to the role will be reviewed by the Fallback reviewer if one is selected.
+ - **Selected users** - Use this option to designate a specific user to complete the review. This option is available regardless of the scope of the review, and the selected reviewers can review users, groups, and service principals.
+ - **Members (self)** - Use this option to have the users review their own role assignments. Groups assigned to the role will not be a part of the review when this option is selected. This option is only available if the review is scoped to **Users and Groups**.
+ - **Manager** – Use this option to have the user's manager review their role assignment. This option is only available if the review is scoped to **Users and Groups**. Upon selecting Manager, you will also have the option to specify a fallback reviewer. Fallback reviewers are asked to review a user when the user has no manager specified in the directory. Groups assigned to the role will be reviewed by the Fallback reviewer if one is selected.
### Upon completion settings
active-directory Pim Resource Roles Start Access Review https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/privileged-identity-management/pim-resource-roles-start-access-review.md
na
ms.devlang: na Previously updated : 03/16/2021 Last updated : 04/05/2021
The need for access to privileged Azure resource roles by employees changes over time. To reduce the risk associated with stale role assignments, you should regularly review access. You can use Azure Active Directory (Azure AD) Privileged Identity Management (PIM) to create access reviews for privileged access to Azure resource roles. You can also configure recurring access reviews that occur automatically. This article describes how to create one or more access reviews.
+## Prerequisite license
++
+> [!Note]
+> Currently, you can scope an access review to service principals with access to Azure AD and Azure resource roles (Preview) with an Azure Active Directory Premium P2 edition active in your tenant. The licensing model for service principals will be finalized for general availability of this feature and additional licenses may be required.
+ ## Prerequisite role To create access reviews, you must be assigned to the [Owner](../../role-based-access-control/built-in-roles.md#owner) or [User Access Administrator](../../role-based-access-control/built-in-roles.md#user-access-administrator) Azure role for the resource.
The need for access to privileged Azure resource roles by employees changes over
1. Sign in to [Azure portal](https://portal.azure.com/) with a user that is assigned to one of the prerequisite roles.
-1. Open **Azure AD Privileged Identity Management**.
-
-1. In the left menu, select **Azure resources**.
+1. Select **Identity Governance**.
+
+1. In the left menu, select **Azure resources** under **Azure AD Privileged Identity Management**.
1. Select the resource you want to manage, such as a subscription.
The need for access to privileged Azure resource roles by employees changes over
1. Use the **End** setting to specify how to end the recurring access review series. The series can end in three ways: it runs continuously to start reviews indefinitely, until a specific date, or after a defined number of occurrences has been completed. You, another User administrator, or another Global administrator can stop the series after creation by changing the date in **Settings**, so that it ends on that date.
-1. In the **Users** section, select one or more roles that you want to review membership of.
+1. In the **Users** section, select the scope of the review. To review users, select **Users**, or select **(Preview) Service Principals** to review the machine accounts with access to the Azure role.
![Users scope to review role membership of](./media/pim-resource-roles-start-access-review/users.png) +
+1. Under **Review role membership**, select the privileged Azure roles to review.
+ > [!NOTE] > - Roles selected here include both [permanent and eligible roles](../privileged-identity-management/pim-how-to-add-role-to-user.md). > - Selecting more than one role will create multiple access reviews. For example, selecting five roles will create five separate access reviews.
The need for access to privileged Azure resource roles by employees changes over
![Reviewers list of selected users or members (self)](./media/pim-resource-roles-start-access-review/reviewers.png)
- - **Selected users** - Use this option when you don't know who needs access. With this option, you can assign the review to a resource owner or group manager to complete.
- - **Members (self)** - Use this option to have the users review their own role assignments.
- - **Manager** – Use this option to have the user's manager review their role assignment. Upon selecting Manager, you will also have the option to specify a fallback reviewer. Fallback reviewers are asked to review a user when the user has no manager specified in the directory.
+ - **Selected users** - Use this option to designate a specific user to complete the review. This option is available regardless of the scope of the review, and the selected reviewers can review users and service principals.
+ - **Members (self)** - Use this option to have the users review their own role assignments. This option is only available if the review is scoped to **Users**.
+ - **Manager** – Use this option to have the user's manager review their role assignment. This option is only available if the review is scoped to **Users**. Upon selecting Manager, you will also have the option to specify a fallback reviewer. Fallback reviewers are asked to review a user when the user has no manager specified in the directory.
### Upon completion settings
active-directory Subscription Requirements https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/privileged-identity-management/subscription-requirements.md
na
ms.devlang: na Previously updated : 08/06/2020 Last updated : 04/05/2021
To use Azure Active Directory (Azure AD) Privileged Identity Management (PIM), a
## Valid licenses
+You will need [!INCLUDE [Azure AD Premium P2 license](../../../includes/active-directory-p2-license.md)] to use PIM and all of its settings. Currently, you can scope an access review to service principals with access to Azure AD and Azure resource roles (Preview) with an Azure Active Directory Premium P2 edition active in your tenant. The licensing model for service principals will be finalized for general availability of this feature and additional licenses may be required.
## Licenses you must have
If an Azure AD Premium P2, EMS E5, or trial license expires, Privileged Identity
- [Deploy Privileged Identity Management](pim-deployment-plan.md) - [Start using Privileged Identity Management](pim-getting-started.md) - [Roles you can't manage in Privileged Identity Management](pim-roles.md)
+- [Create an access review of Azure resource roles in PIM](pim-resource-roles-start-access-review.md)
+- [Create an access review of Azure AD roles in PIM](pim-how-to-start-security-review.md)
active-directory Concept All Sign Ins https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/reports-monitoring/concept-all-sign-ins.md
The reporting architecture in Azure Active Directory (Azure AD) consists of the following components: - **Activity**
- - **Sign-ins** – Information about when users, applications, and managed resources sign in to Azure AD to and access resources.
+ - **Sign-ins** – Information about when users, applications, and managed resources sign in to Azure AD and access resources.
- **Audit logs** - [Audit logs](concept-audit-logs.md) provide system activity information about users and group management, managed applications, and directory activities. - **Security** - **Risky sign-ins** - A [risky sign-in](../identity-protection/overview-identity-protection.md) is an indicator for a sign-in attempt by someone who isn't the legitimate owner of a user account.
Each JSON download consists of four different files:
* [Sign-in activity report error codes](reference-sign-ins-error-codes.md) * [Azure AD data retention policies](reference-reports-data-retention.md)
-* [Azure AD report latencies](reference-reports-latencies.md)
+* [Azure AD report latencies](reference-reports-latencies.md)
active-directory Plan Monitoring And Reporting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/reports-monitoring/plan-monitoring-and-reporting.md
With Azure AD monitoring, you can route logs to:
* an Azure event hub where you can integrate with your existing SIEM tools such as Splunk, Sumologic, or QRadar. > [!NOTE]
-We recently started using the term Azure Monitor logs instead of Log Analytics. Log data is still stored in a Log Analytics workspace and is still collected and analyzed by the same Log Analytics service. We are updating the terminology to better reflect the role of [logs in Azure Monitor](../../azure-monitor/data-platform.md). See [Azure Monitor terminology changes](../../azure-monitor/terminology.md) for details.
+> We recently started using the term Azure Monitor logs instead of Log Analytics. Log data is still stored in a Log Analytics workspace and is still collected and analyzed by the same Log Analytics service. We are updating the terminology to better reflect the role of [logs in Azure Monitor](../../azure-monitor/data-platform.md). See [Azure Monitor terminology changes](../../azure-monitor/terminology.md) for details.
[Learn more about report retention policies](./reference-reports-data-retention.md).
Depending on the decisions you have made earlier using the design guidance above
Consider implementing [Privileged Identity Management](../privileged-identity-management/pim-configure.md)
-Consider implementing [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md)
+Consider implementing [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md)
active-directory Boxcryptor Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/boxcryptor-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure Boxcryptor for automatic user provisioning with Azure Active Directory | Microsoft Docs'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to Boxcryptor.
+
+documentationcenter: ''
+
+writer: Zhchia
++
+ms.assetid: 656de6d6-399e-4346-a07e-0e5fefb0b4ee
+++
+ na
+ms.devlang: na
+ Last updated : 04/02/2021+++
+# Tutorial: Configure Boxcryptor for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both Boxcryptor and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Boxcryptor](https://www.boxcryptor.com) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../manage-apps/user-provisioning.md).
++
+## Capabilities Supported
+> [!div class="checklist"]
+> * Create users in Boxcryptor
+> * Remove users in Boxcryptor when they do not require access anymore
+> * Keep user attributes synchronized between Azure AD and Boxcryptor
+> * Provision groups and group memberships in Boxcryptor
+> * [Single sign-on](https://docs.microsoft.com/azure/active-directory/saas-apps/boxcryptor-tutorial) to Boxcryptor (recommended)
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](https://docs.microsoft.com/azure/active-directory/develop/quickstart-create-new-tenant)
+* A user account in Azure AD with [permission](https://docs.microsoft.com/azure/active-directory/users-groups-roles/directory-assign-admin-roles) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* Boxcryptor Single sign-on enabled [subscription](https://www.boxcryptor.com/pricing/for-teams).
+
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](https://docs.microsoft.com/azure/active-directory/manage-apps/user-provisioning).
+2. Determine who will be in [scope for provisioning](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
+3. Determine what data to [map between Azure AD and Boxcryptor](https://docs.microsoft.com/azure/active-directory/manage-apps/customize-application-attributes).
+
+## Step 2. Configure Boxcryptor to support provisioning with Azure AD
+To configure provisioning on Boxcryptor, reach out to your Boxcryptor account manager or the [Boxcryptor support team](mailto:support@boxcryptor.com) who will enable provisioning on Boxcryptor and reach out to you with your Boxcryptor Tenant URL and Secret Token. These values will be entered in the **Tenant URL** and **Secret Token** field in the Provisioning tab of your Boxcryptor application in the Azure portal.
+
+## Step 3. Add Boxcryptor from the Azure AD application gallery
+
+Add Boxcryptor from the Azure AD application gallery to start managing provisioning to Boxcryptor. If you have previously set up Boxcryptor for SSO, you can use the same application. However, it is recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](https://docs.microsoft.com/azure/active-directory/manage-apps/add-gallery-app).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
+
+* When assigning users and groups to Boxcryptor, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](https://docs.microsoft.com/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps) to add additional roles.
+
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
++
+## Step 5. Configure automatic user provisioning to Boxcryptor
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in Boxcryptor based on user and/or group assignments in Azure AD.
+
+### To configure automatic user provisioning for Boxcryptor in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Enterprise applications blade](common/enterprise-applications.png)
+
+2. In the applications list, select **Boxcryptor**.
+
+ ![The Boxcryptor link in the Applications list](common/all-applications.png)
+
+3. Select the **Provisioning** tab.
+
+ ![Provisioning tab](common/provisioning.png)
+
+4. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Provisioning tab automatic](common/provisioning-automatic.png)
+
+5. Under the **Admin Credentials** section, input your Boxcryptor Tenant URL and Secret Token retrieved earlier in Step 2. Click **Test Connection** to ensure Azure AD can connect to Boxcryptor. If the connection fails, ensure your Boxcryptor account has Admin permissions and try again.
+
+ ![Token](common/provisioning-testconnection-tenanturltoken.png)
+
+6. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Notification Email](common/provisioning-notification-email.png)
+
+7. Select **Save**.
+
+8. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Boxcryptor**.
+
+9. Review the user attributes that are synchronized from Azure AD to Boxcryptor in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Boxcryptor for update operations. If you choose to change the [matching target attribute](https://docs.microsoft.com/azure/active-directory/manage-apps/customize-application-attributes), you will need to ensure that the Boxcryptor API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for Filtering|
+ ||||
+ |userName|String|&check;|
+ |preferredLanguage|String|
+ |name.givenName|String|
+ |name.familyName|String|
+ |externalId|String|
+ |addresses[type eq "work"].country|String|
+
+10. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Boxcryptor**.
+
+11. Review the group attributes that are synchronized from Azure AD to Boxcryptor in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Boxcryptor for update operations. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for Filtering|
+ ||||
+ |displayName|String|&check;|
+ |externalId|String|
+ |members|Reference|
+
+12. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../manage-apps/define-conditional-rules-for-provisioning-user-accounts.md).
+
+13. To enable the Azure AD provisioning service for Boxcryptor, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
+
+14. Define the users and/or groups that you would like to provision to Boxcryptor by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Provisioning Scope](common/provisioning-scope.png)
+
+15. When you are ready to provision, click **Save**.
+
+ ![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+1. Use the [provisioning logs](https://docs.microsoft.com/azure/active-directory/reports-monitoring/concept-provisioning-logs) to determine which users have been provisioned successfully or unsuccessfully
+2. Check the [progress bar](https://docs.microsoft.com/azure/active-directory/app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user) to see the status of the provisioning cycle and how close it is to completion
+3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](https://docs.microsoft.com/azure/active-directory/manage-apps/application-provisioning-quarantine-status).
+
+## Additional resources
+
+* [Managing user account provisioning for Enterprise Apps](../manage-apps/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../manage-apps/check-status-user-account-provisioning.md)
active-directory Github Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/github-provisioning-tutorial.md
The objective of this tutorial is to show you the steps you need to perform in GitHub and Azure AD to automatically provision and de-provision user accounts from Azure AD to GitHub.
+> [!NOTE]
+> The Azure AD provisioning integration relies on the [GitHub SCIM API](https://developer.github.com/v3/scim/), which is available to [GitHub Enterprise Cloud](https://help.github.com/articles/github-s-products/#github-enterprise) customers on the [GitHub Enterprise billing plan](https://help.github.com/articles/github-s-billing-plans/#billing-plans-for-organizations).
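For reference, the sketch below calls the GitHub SCIM API directly to list the identities that have already been provisioned to an organization; the organization name and token are placeholders, and the required token scopes are those described in GitHub's documentation.

```bash
# A sketch only: list the SCIM-provisioned identities for a GitHub organization.
# YOUR-ORG and YOUR-TOKEN are placeholders; the token needs organization admin scope.
curl -H "Authorization: Bearer YOUR-TOKEN" \
     -H "Accept: application/scim+json" \
     "https://api.github.com/scim/v2/organizations/YOUR-ORG/Users"
```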
+ ## Prerequisites The scenario outlined in this tutorial assumes that you already have the following items:
The scenario outlined in this tutorial assumes that you already have the followi
* SCIM provisioning to a single organization is supported only when SSO is enabled at the organization level > [!NOTE]
-> The Azure AD provisioning integration relies on the [GitHub SCIM API](https://developer.github.com/v3/scim/), which is available to [GitHub Enterprise Cloud](https://help.github.com/articles/github-s-products/#github-enterprise) customers on the [GitHub Enterprise billing plan](https://help.github.com/articles/github-s-billing-plans/#billing-plans-for-organizations).
+> This integration is also available to use from Azure AD US Government Cloud environment. You can find this application in the Azure AD US Government Cloud Application Gallery and configure it in the same way as you do from public cloud.
## Assigning users to GitHub
active-directory Multi Factor Authentication Setup Office Phone https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/user-help/multi-factor-authentication-setup-office-phone.md
You can set up your office phone to act as your two-factor verification method.
->[!Note]
-> If the Office phone option is greyed out, it's possible that your organization doesn't allow you to use an office phone number for verification. In this case, you'll need to select another method or contact your administrator for more help.
+> [!Note]
+> If the **Office phone** option isn't available to select, it's possible that your organization doesn't allow you to use an office phone number for verification. In this case, you'll need to select another method or contact your administrator for more help.
+>
+> Combined Registration users won't see an option to use an extension with the **Office phone** option.
## Set up your office phone number as your verification method
active-directory Issue Verify Verifiable Credentials Your Tenant https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/verifiable-credentials/issue-verify-verifiable-credentials-your-tenant.md
Now that we've issued the verifiable credential from our own tenant with claims
![new permission request](media/enable-your-tenant-verifiable-credentials/new-permission-request.png)
-8. You have no successfully verified your credential and the website should display your first and last name from your Azure AD's user account.
+8. You have now successfully verified your credential and the website should display your first and last name from your Azure AD's user account.
You have now completed the tutorial and are officially a Verified Credential Expert! Your sample app is using your DID for both issuing and verifying, while writing claims into a verifiable credential from your Azure AD. ## Next steps - Learn how to create [custom credentials](credential-design.md)-- Issuer service communication [examples](issuer-openid.md)
+- Issuer service communication [examples](issuer-openid.md)
aks Use Multiple Node Pools https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/use-multiple-node-pools.md
az aks nodepool add -g MyResourceGroup2 --cluster-name MyManagedCluster -n nodep
### Use a public IP prefix
-#### Install the `aks-preview` Azure CLI
-
-You will need the *aks-preview* Azure CLI extension. Install the *aks-preview* Azure CLI extension by using the [az extension add][az-extension-add] command. Or install any available updates by using the [az extension update][az-extension-update] command.
-
-```azurecli-interactive
-# Install the aks-preview extension
-az extension add --name aks-preview
-
-# Update the extension to make sure you have the latest version installed
-az extension update --name aks-preview
-```
- There are a number of [benefits to using a public IP prefix][public-ip-prefix-benefits]. AKS supports using addresses from an existing public IP prefix for your nodes by passing the resource ID with the flag `node-public-ip-prefix` when creating a new cluster or adding a node pool. First, create a public IP prefix using [az network public-ip prefix create][az-public-ip-prefix-create]:
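A minimal sketch of that first step is shown below; the resource names, location, and prefix length are example values only.

```azurecli-interactive
# Example values; adjust names, location, and prefix length for your environment.
az network public-ip prefix create \
    --resource-group myResourceGroup \
    --name myPublicIPPrefix \
    --location eastus2 \
    --length 28

# Capture the prefix resource ID to pass with the node pool flag described above.
az network public-ip prefix show \
    --resource-group myResourceGroup \
    --name myPublicIPPrefix \
    --query id --output tsv
```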
application-gateway Rewrite Http Headers Url https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/rewrite-http-headers-url.md
A rewrite rule set contains:
* **URL Query String**: The value to which the query string is to be rewritten to. * **Re-evaluate path map**: Used to determine whether the URL path map is to be re-evaluated or not. If kept unchecked, the original URL path will be used to match the path-pattern in the URL path map. If set to true, the URL path map will be re-evaluated to check the match with the rewritten path. Enabling this switch helps in routing the request to a different backend pool post rewrite.
+## Rewrite configuration common pitfall
+
+* Enabling 'Re-evaluate path map' is not allowed for basic request routing rules. This prevents an infinite evaluation loop for a basic routing rule.
+
+* For path-based routing rules, there must be at least one conditional rewrite rule, or one rewrite rule that does not have 'Re-evaluate path map' enabled, to prevent an infinite evaluation loop.
+
+* Incoming requests are terminated with a 500 error code if a loop is created dynamically based on client inputs. The Application Gateway continues to serve other requests without any degradation in such a scenario.
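As an illustration, the following hedged sketch adds a URL rewrite rule to an existing rewrite rule set with the Azure CLI; the resource names are placeholders, and the URL rewrite parameter names should be verified against your installed CLI version.

```azurecli
# A sketch only: add a URL rewrite rule to an existing rewrite rule set.
# Resource names are placeholders; confirm the URL rewrite parameters
# (--modified-path, --enable-reroute) with:
#   az network application-gateway rewrite-rule create --help
az network application-gateway rewrite-rule create \
    --resource-group MyResourceGroup \
    --gateway-name MyAppGateway \
    --rule-set-name MyRewriteRuleSet \
    --name MyUrlRewriteRule \
    --sequence 100 \
    --modified-path "/newpath/" \
    --enable-reroute false
```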
+ ### Using URL rewrite or Host header rewrite with Web Application Firewall (WAF_v2 SKU) When you configure URL rewrite or host header rewrite, the WAF evaluation will happen after the modification to the request header or URL parameters (post-rewrite). And when you remove the URL rewrite or host header rewrite configuration on your Application Gateway, the WAF evaluation will be done before the header rewrite (pre-rewrite). This order ensures that WAF rules are applied to the final request that would be received by your backend pool.
automanage Automanage Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automanage/automanage-virtual-machines.md
The Automanage account will be granted **Contributor** and **Resource Policy Con
## Participating services For the complete list of participating Azure services, as well as their supported environment, see the following: - [Automanage for Linux](automanage-linux.md)
availability-zones Az Region https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/availability-zones/az-region.md
description: To create highly available and resilient applications in Azure, Ava
Previously updated : 03/30/2021 Last updated : 04/06/2021
To achieve comprehensive business continuity on Azure, build your application ar
## Azure regions with Availability Zones-
+
| Americas | Europe | Africa | Asia Pacific | |--|-||-| | | | | | | Brazil South | France Central | South Africa North* | Australia East |
-| Canada Central | Germany West Central | | Japan East |
-| Central US | North Europe | | Korea Central* |
-| East US | UK South | | Southeast Asia |
-| East US 2 | West Europe | | |
+| Canada Central | Germany West Central | | Central India* |
+| Central US | North Europe | | Japan East |
+| East US | UK South | | Korea Central* |
+| East US 2 | West Europe | | Southeast Asia |
| South Central US | | | | | US Gov Virginia | | | | | West US 2 | | | |
To achieve comprehensive business continuity on Azure, build your application ar
**Mainstream services**
-| Products | Resiliency |
-|-|::|
-| App Service Environments | :large_blue_diamond: |
-| Azure Active Directory Domain Services | :large_blue_diamond: |
-| Azure Bastion | :large_blue_diamond: |
-| Azure Cache for Redis | :large_blue_diamond: |
-| Azure Cognitive
-| Azure Data Explorer | :large_blue_diamond: |
-| Azure Database for MySQL ΓÇô Flexible Server | :large_blue_diamond: |
-| Azure Database for PostgreSQL ΓÇô Flexible Server | :large_blue_diamond: |
-| Azure DDoS Protection | :large_blue_diamond: |
-| Azure Disk Encryption | :large_blue_diamond: |
-| Azure Firewall | :large_blue_diamond: |
-| Azure Firewall Manager | :large_blue_diamond: |
-| Azure Kubernetes Service (AKS) | :large_blue_diamond: |
-| Azure Private Link | :large_blue_diamond: |
-| Azure Red Hat OpenShift | :large_blue_diamond: |
-| Azure Site Recovery | :large_blue_diamond: |
-| Azure SQL: Virtual Machine | :large_blue_diamond: |
-| Azure Search | :large_blue_diamond: |
-| Azure Web Application Firewall | :large_blue_diamond: |
-| Container Registry | :large_blue_diamond: |
-| Event Grid | :large_blue_diamond: |
-| Network Watcher | :large_blue_diamond: |
-| Network Watcher: Traffic Analytics | :large_blue_diamond: |
-| Power BI Embedded | :large_blue_diamond: |
-| Premium Blob Storage | :large_blue_diamond: |
-| Storage: Azure Premium Files | :large_blue_diamond: |
-| Virtual Machines: Azure Dedicated Host | :large_blue_diamond: |
-| Virtual Machines: Ddsv4-Series | :large_blue_diamond: |
-| Virtual Machines: Ddv4-Series | :large_blue_diamond: |
-| Virtual Machines: Dsv4-Series | :large_blue_diamond: |
-| Virtual Machines: Dv4-Series | :large_blue_diamond: |
-| Virtual Machines: Edsv4-Series | :large_blue_diamond: |
-| Virtual Machines: Edv4-Series | :large_blue_diamond: |
-| Virtual Machines: Esv4-Series | :large_blue_diamond: |
-| Virtual Machines: Ev4-Series | :large_blue_diamond: |
-| Virtual Machines: Fsv2-Series | :large_blue_diamond: |
-| Virtual Machines: M-Series | :large_blue_diamond: |
-| Virtual WAN | :large_blue_diamond: |
-| Virtual WAN: ExpressRoute | :large_blue_diamond: |
-| Virtual WAN: Point-to-Site VPN Gateway | :large_blue_diamond: |
-| Virtual WAN: Site-to-Site VPN Gateway | :large_blue_diamond: |
+
+| Products | Resiliency |
+|--|:-:|
+| App Service Environments | :large_blue_diamond: |
+| Azure Active Directory Domain Services | :large_blue_diamond: |
+| Azure Bastion | :large_blue_diamond: |
+| Azure Cache for Redis | :large_blue_diamond: |
+| Azure Cognitive Search | :large_blue_diamond: |
+| Azure Cognitive
+| Azure Data Explorer | :large_blue_diamond: |
+| Azure Database for MySQL ΓÇô Flexible Server | :large_blue_diamond: |
+| Azure Database for PostgreSQL ΓÇô Flexible Server | :large_blue_diamond: |
+| Azure DDoS Protection | :large_blue_diamond: |
+| Azure Disk Encryption | :large_blue_diamond: |
+| Azure Firewall | :large_blue_diamond: |
+| Azure Firewall Manager | :large_blue_diamond: |
+| Azure Kubernetes Service (AKS) | :large_blue_diamond: |
+| Azure Private Link | :large_blue_diamond: |
+| Azure Site Recovery | :large_blue_diamond: |
+| Azure SQL: Virtual Machine | :large_blue_diamond: |
+| Azure Web Application Firewall | :large_blue_diamond: |
+| Container Registry | :large_blue_diamond: |
+| Event Grid | :large_blue_diamond: |
+| Network Watcher | :large_blue_diamond: |
+| Network Watcher: Traffic Analytics | :large_blue_diamond: |
+| Power BI Embedded | :large_blue_diamond: |
+| Premium Blob Storage | :large_blue_diamond: |
+| Storage: Azure Premium Files | :large_blue_diamond: |
+| Virtual Machines: Azure Dedicated Host | :large_blue_diamond: |
+| Virtual Machines: Ddsv4-Series | :large_blue_diamond: |
+| Virtual Machines: Ddv4-Series | :large_blue_diamond: |
+| Virtual Machines: Dsv4-Series | :large_blue_diamond: |
+| Virtual Machines: Dv4-Series | :large_blue_diamond: |
+| Virtual Machines: Edsv4-Series | :large_blue_diamond: |
+| Virtual Machines: Edv4-Series | :large_blue_diamond: |
+| Virtual Machines: Esv4-Series | :large_blue_diamond: |
+| Virtual Machines: Ev4-Series | :large_blue_diamond: |
+| Virtual Machines: Fsv2-Series | :large_blue_diamond: |
+| Virtual Machines: M-Series | :large_blue_diamond: |
+| Virtual WAN | :large_blue_diamond: |
+| Virtual WAN: ExpressRoute | :large_blue_diamond: |
+| Virtual WAN: Point-to-Site VPN Gateway | :large_blue_diamond: |
+| Virtual WAN: Site-to-Site VPN Gateway | :large_blue_diamond: |
++
+**Specialized Services**
+
+| Products | Resiliency |
+|--|:-:|
+| Azure Red Hat OpenShift | :large_blue_diamond: |
+| Cognitive
+| Cognitive
+| Storage: Ultra Disk | :large_blue_diamond: |
**Non-regional**
-| Products | Resiliency |
-|--|:-:|
-| Azure DNS | :globe_with_meridians: |
-| Azure Active Directory | :globe_with_meridians: |
-| Azure Advanced Threat Protection | :globe_with_meridians: |
-| Azure Advisor | :globe_with_meridians: |
-| Azure Blueprints | :globe_with_meridians: |
-| Azure Bot Services | :globe_with_meridians: |
-| Azure Front Door | :globe_with_meridians: |
-| Azure Defender for IoT | :globe_with_meridians: |
-| Azure Front Door | :globe_with_meridians: |
-| Azure Information Protection | :globe_with_meridians: |
-| Azure Lighthouse | :globe_with_meridians: |
-| Azure Managed Applications | :globe_with_meridians: |
-| Azure Maps | :globe_with_meridians: |
-| Azure Policy | :globe_with_meridians: |
-| Azure Resource Graph | :globe_with_meridians: |
-| Azure Sentinel | :globe_with_meridians: |
-| Azure Stack | :globe_with_meridians: |
-| Azure Stack Edge | :globe_with_meridians: |
-| Cloud Shell | :globe_with_meridians: |
-| Content Delivery Network | :globe_with_meridians: |
-| Cost Management | :globe_with_meridians: |
-| Customer Lockbox for Microsoft Azure | :globe_with_meridians: |
-| Intune | :globe_with_meridians: |
-| Microsoft Azure Peering Service | :globe_with_meridians: |
-| Microsoft Azure portal | :globe_with_meridians: |
-| Microsoft Cloud App Security | :globe_with_meridians: |
-| Microsoft Graph | :globe_with_meridians: |
-| Security Center | :globe_with_meridians: |
-| Traffic Manager | :globe_with_meridians: |
+| Products | Resiliency |
+|--|:-:|
+| Azure DNS | :globe_with_meridians: |
+| Azure Active Directory | :globe_with_meridians: |
+| Azure Advanced Threat Protection | :globe_with_meridians: |
+| Azure Advisor | :globe_with_meridians: |
+| Azure Blueprints | :globe_with_meridians: |
+| Azure Bot Services | :globe_with_meridians: |
+| Azure Front Door | :globe_with_meridians: |
+| Azure Defender for IoT | :globe_with_meridians: |
+| Azure Front Door | :globe_with_meridians: |
+| Azure Information Protection | :globe_with_meridians: |
+| Azure Lighthouse | :globe_with_meridians: |
+| Azure Managed Applications | :globe_with_meridians: |
+| Azure Maps | :globe_with_meridians: |
+| Azure Performance Diagnostics | :globe_with_meridians: |
+| Azure Policy | :globe_with_meridians: |
+| Azure Resource Graph | :globe_with_meridians: |
+| Azure Sentinel | :globe_with_meridians: |
+| Azure Stack | :globe_with_meridians: |
+| Azure Stack Edge | :globe_with_meridians: |
+| Cloud Shell | :globe_with_meridians: |
+| Content Delivery Network | :globe_with_meridians: |
+| Cost Management | :globe_with_meridians: |
+| Customer Lockbox for Microsoft Azure | :globe_with_meridians: |
+| Intune | :globe_with_meridians: |
+| Microsoft Azure Peering Service | :globe_with_meridians: |
+| Microsoft Azure portal | :globe_with_meridians: |
+| Microsoft Cloud App Security | :globe_with_meridians: |
+| Microsoft Graph | :globe_with_meridians: |
+| Security Center | :globe_with_meridians: |
+| Traffic Manager | :globe_with_meridians: |
## Pricing for VMs in Availability Zones
-There is no additional cost for virtual machines deployed in an Availability Zone. For more information, review the [Bandwidth pricing page](https://azure.microsoft.com/pricing/details/bandwidth/).
+Azure Availability Zones are available with your Azure subscription. For more information, see the [Bandwidth pricing page](https://azure.microsoft.com/pricing/details/bandwidth/).
## Get started with Availability Zones
azure-arc Create Data Controller Azure Data Studio https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/create-data-controller-azure-data-studio.md
You can create a data controller using Azure Data Studio through the deployment
[!INCLUDE [azure-arc-data-preview](../../../includes/azure-arc-data-preview.md)]
+At the current time, you can create a data controller using the method described in this article.
+ ## Prerequisites - You need access to a Kubernetes cluster and have your kubeconfig file configured to point to the Kubernetes cluster you want to deploy to.
azure-arc Create Data Controller Resource In Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/create-data-controller-resource-in-azure-portal.md
You can use the Azure portal to create an Azure Arc data controller.
Many of the creation experiences for Azure Arc start in the Azure portal even though the resource to be created or managed is outside of Azure infrastructure. The user experience pattern in these cases, especially when there is no direct connectivity between Azure and your environment, is to use the Azure portal to generate a script which can then be downloaded and executed in your environment to establish a secure connection back to Azure. For example, Azure Arc enabled servers follows this pattern to [create Arc enabled servers](../servers/onboard-portal.md).
-For now, given that the preview only supports the Indirect Connected mode of Azure Arc enabled data services, you can use the Azure portal to generate a notebook for you that can then be downloaded and run in Azure Data Studio against your Kubernetes cluster. In the future, when the Directly Connected mode is available, you will be able to provision the data controller directly from the Azure portal. You can read more about [connectivity modes](connectivity.md).
+When you use the indirect connect mode of Azure Arc enabled data services, you can use the Azure portal to generate a notebook for you that can then be downloaded and run in Azure Data Studio against your Kubernetes cluster.
+
+When you use direct connect mode, you can provision the data controller directly from the Azure portal. You can read more about [connectivity modes](connectivity.md).
## Use the Azure portal to create an Azure Arc data controller
azure-arc Delete Azure Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/delete-azure-resources.md
# Delete resources from Azure
-> [!NOTE]
-> The options to delete resources in this article are irreversible!
+This article describes how to delete resources from Azure.
-> [!NOTE]
-> Since the only connectivity mode that is offered for Azure Arc enabled data services currently is the Indirect Connected mode, deleting an instance from Kubernetes will not remove it from Azure and deleting an instance from Azure will not remove it from Kubernetes. For now deleting a resource is a two step process and this will be improved in the future. Going forward, Kubernetes will be the source of truth and Azure will be updated to reflect it.
+> [!WARNING]
+> When you delete resources as described in this article, these actions are irreversible.
-In some cases, you may need to manually delete Azure Arc enabled data services resources in Azure Resource Manager (ARM). You can delete these resources using any of the following options.
+In indirect connect mode, deleting an instance from Kubernetes will not remove it from Azure, and deleting an instance from Azure will not remove it from Kubernetes. For indirect connect mode, deleting a resource is currently a two-step process; this will be improved in the future. Kubernetes will be the source of truth and the portal will be updated to reflect it.
+
+In some cases, you may need to manually delete Azure Arc enabled data services resources in Azure. You can delete these resources using any of the following options.
- [Delete resources from Azure](#delete-resources-from-azure) - [Delete an entire resource group](#delete-an-entire-resource-group)
In some cases, you may need to manually delete Azure Arc enabled data services r
[!INCLUDE [azure-arc-data-preview](../../../includes/azure-arc-data-preview.md)] ## Delete an entire resource group+ If you have been using a specific and dedicated resource group for Azure Arc enabled data services and you want to delete *everything* inside of the resource group you can delete the resource group which will delete everything inside of it. You can delete a resource group in the Azure portal by doing the following: -- Browse to the Resource Group in the Azure portal where the Azure Arc enabled data services resources have been created.
+- Browse to the resource group in the Azure portal where the Azure Arc enabled data services resources have been created.
- Click the **Delete resource group** button. - Confirm the deletion by entering the resource group name and click **Delete**.
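Alternatively, a minimal sketch of deleting the resource group with the Azure CLI (the resource group name is a placeholder, and everything inside the group is removed):

```azurecli
# A sketch only: this deletes the resource group and every resource inside it.
az group delete --name <resource-group-name>
```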
You can delete a resource group in the Azure portal by doing the following:
You can delete specific Azure Arc enabled data services resources in a resource group in the Azure portal by doing the following: -- Browse to the Resource Group in the Azure portal where the Azure Arc enabled data services resources have been created.
+- Browse to the resource group in the Azure portal where the Azure Arc enabled data services resources have been created.
- Select all the resources to be deleted. - Click on the Delete button. - Confirm the deletion by typing 'yes' and click **Delete**.
azure-arc Deploy Data Controller Direct Mode Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/deploy-data-controller-direct-mode-prerequisites.md
+
+ Title: Prerequisites | Direct connect mode
+description: Prerequisites to deploy the data controller in direct connect mode.
++++++ Last updated : 03/31/2021+++
+# Deploy data controller - direct connect mode (prerequisites)
+
+This article describes how to prepare to deploy a data controller for Azure Arc enabled data services in direct connect mode.
++
+At a high level, the prerequisites include:
+
+1. Install tools
+1. Add extensions
+1. Create the service principal and configure roles for metrics
+1. Connect Kubernetes cluster to Azure using Azure Arc enabled Kubernetes
+
+After you have completed these prerequisites, you can [Deploy Azure Arc data controller | Direct connect mode](deploy-data-controller-direct-mode.md).
+
+The remaining sections of this article identify the prerequisites.
+
+## Install tools
+
+- Helm version 3.3+ ([install](https://helm.sh/docs/intro/install/))
+- Azure CLI ([install](/sql/azdata/install/deploy-install-azdata))
+
+## Add extensions for Azure CLI
+
+Additionally, the following Azure CLI extensions are required:
+- Azure CLI `k8s-extension` extension (0.2.0)
+- Azure CLI `customlocation` (0.1.0)
+
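A minimal sketch of adding these extensions with `az extension add` is shown below; version pinning is omitted for brevity.

```azurecli
# Add the required Azure CLI extensions named above.
az extension add --name k8s-extension
az extension add --name customlocation

# Update them later with az extension update if newer versions are released.
az extension update --name k8s-extension
az extension update --name customlocation
```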
+Sample output from `az version`, showing the CLI and its extensions:
+
+```console
+$ az version
+{
+ "azure-cli": "2.19.1",
+ "azure-cli-core": "2.19.1",
+ "azure-cli-telemetry": "1.0.6",
+ "extensions": {
+ "connectedk8s": "1.1.0",
+ "customlocation": "0.1.0",
+ "k8s-configuration": "1.0.0",
+ "k8s-extension": "0.2.0"
+ }
+}
+```
+
+## Create service principal and configure roles for metrics
+
+Follow the steps detailed in the [Upload metrics](upload-metrics-and-logs-to-azure-monitor.md) article to create a service principal and grant it the roles as described in that article.
+
+The SPN ClientID, TenantID, and Client Secret information will be required when you [deploy Azure Arc data controller](deploy-data-controller-direct-mode.md).
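As a hedged illustration, the sketch below creates a service principal and assigns it the Monitoring Metrics Publisher role at subscription scope; the service principal name is a placeholder, and the authoritative role assignments are the ones described in the Upload metrics article.

```azurecli
# A sketch only: create a service principal and note the appId, password (client
# secret), and tenant values from the output.
az ad sp create-for-rbac --name azure-arc-metrics

# Grant the role used for uploading metrics, at subscription scope.
az role assignment create \
    --assignee <appId-from-previous-output> \
    --role "Monitoring Metrics Publisher" \
    --scope /subscriptions/<subscription-id>
```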
+
+## Connect Kubernetes cluster to Azure using Azure Arc enabled Kubernetes
+
+To complete this task, follow the steps in [Connect an existing Kubernetes cluster to Azure arc](../kubernetes/quickstart-connect-cluster.md).
+
+## Next steps
+
+After you have completed these prerequisites, you can [Deploy Azure Arc data controller | Direct connect mode](deploy-data-controller-direct-mode.md).
azure-arc Deploy Data Controller Direct Mode https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/deploy-data-controller-direct-mode.md
+
+ Title: Deploy Azure Arc data controller | Direct connect mode
+description: Explains how to deploy the data controller in direct connect mode.
++++++ Last updated : 04/06/2021+++
+# Deploy Azure Arc data controller | Direct connect mode
+
+This article describes how to deploy the Azure Arc data controller in direct connect mode during the current preview of this feature.
+
+Currently, you can create the Azure Arc data controller from the Azure portal. Other tools for Azure Arc enabled data services do not support creating the data controller in direct connect mode. For details, see [Known issues - Azure Arc enabled data services (Preview)](known-issues.md).
++
+## Complete prerequisites
+
+Before you begin, verify that you have completed the prerequisites in [Deploy data controller - direct connect mode - prerequisites](deploy-data-controller-direct-mode-prerequisites.md).
+
+From a high level, this article explains how to complete these tasks:
+
+1. Create an Azure Arc enabled data services extension.
+1. Create a custom location.
+1. Deploy the data controller from the portal.
+
+## Create an Azure Arc enabled data services extension
+
+Use the k8s-extension CLI to create a data services extension.
+
+### Set environment variables
+
+Set the following environment variables, which will then be used in the next step.
+
+#### Linux
+
+``` terminal
+# where you want the connected cluster resource to be created in Azure
+export subscription=<Your subscription ID>
+export resourceGroup=<Your resource group>
+export resourceName=<name of your connected kubernetes cluster>
+export location=<Azure location>
+```
+
+#### Windows PowerShell
+``` PowerShell
+# where you want the connected cluster resource to be created in Azure
+$ENV:subscription="<Your subscription ID>"
+$ENV:resourceGroup="<Your resource group>"
+$ENV:resourceName="<name of your connected kubernetes cluster>"
+$ENV:location="<Azure location>"
+```
+
+### Create the Arc data services extension
+
+#### Linux
+```bash
+export ADSExtensionName=ads-extension
+export CustomLocationsRpOid=$(az ad sp list --filter "displayname eq 'Custom Locations RP'" --query '[].objectId' -o tsv)
++
+az k8s-extension create -c ${resourceName} -g ${resourceGroup} --name ${ADSExtensionName} --cluster-type connectedClusters --extension-type microsoft.arcdataservices --auto-upgrade false --scope cluster --release-namespace arc \
+ --config Microsoft.CustomLocation.ServiceAccount=sa-bootstrapper \
+ --config aad.customLocationObjectId=${CustomLocationsRpOid}
+
+az k8s-extension show -g ${resourceGroup} -c ${resourceName} --name ${ADSExtensionName} --cluster-type connectedclusters
+```
+
+#### Windows PowerShell
+```PowerShell
+$ENV:ADSExtensionName="ads-extension"
+$CustomLocationsRpOid = az ad sp list --filter "displayname eq 'Custom Locations RP'" --query [].objectId -o tsv
+
+az k8s-extension create -c "$ENV:resourceName" -g "$ENV:resourceGroup" --name "$ENV:ADSExtensionName" --cluster-type connectedClusters --extension-type microsoft.arcdataservices --auto-upgrade false --scope cluster --release-namespace arc --config Microsoft.CustomLocation.ServiceAccount=sa-bootstrapper --config aad.customLocationObjectId="$ENV:CustomLocationsRpOid"
+
+az k8s-extension show -g "$ENV:resourceGroup" -c "$ENV:resourceName" --name "$ENV:ADSExtensionName" --cluster-type connectedclusters
+```
+
+> [!NOTE]
+> The Arc data services extension install can take a couple of minutes to finish.
+
+### Verify the Arc data services extension is created
+
+You can verify if the Arc enabled data services extension is created either from the portal or by connecting directly to the Arc enabled Kubernetes cluster.
+
+#### Azure portal
+1. Log in to the Azure portal and browse to the resource group where the Kubernetes connected cluster resource is located.
+1. Select the Arc enabled kubernetes cluster (Type = "Kubernetes - Azure Arc") where the extension was deployed.
+1. In the navigation on the left side, under **Settings**, select "Extensions (preview)".
+1. You should see the extension that was just created earlier in an "Installed" state.
++
+#### kubectl CLI
+
+1. Connect to your Kubernetes cluster via a Terminal window.
+1. Run the command below and ensure that (1) the namespace specified when the extension was created exists and (2) the `bootstrapper` pod is in a 'running' state before proceeding to the next step.
+
+``` console
kubectl get pods -n <name of the namespace specified when the extension was created>
+```
+
+For example, the following gets the pods from `arc` namespace.
+
+```console
+#Example:
+kubectl get pods -n arc
+```
+
+## Create a custom location using custom location CLI extension
+
+A custom location is an Azure resource that is equivalent to a namespace in a Kubernetes cluster. Custom locations are used as a target to deploy resources to or from Azure. Learn more about custom locations in the [Custom locations on top of Azure Arc enabled Kubernetes documentation](../kubernetes/conceptual-custom-locations.md).
+
+### Set environment variables
+
+#### Linux
+
+```bash
+export clName=mycustomlocation
+export clNamespace=arc
+export hostClusterId=$(az connectedk8s show -g ${resourceGroup} -n ${resourceName} --query id -o tsv)
+export extensionId=$(az k8s-extension show -g ${resourceGroup} -c ${resourceName} --cluster-type connectedClusters --name ${ADSExtensionName} --query id -o tsv)
+
+az customlocation create -g ${resourceGroup} -n ${clName} --namespace ${clNamespace} \
+ --host-resource-id ${hostClusterId} \
+ --cluster-extension-ids ${extensionId} --location eastus2euap
+```
+
+#### Windows PowerShell
+```PowerShell
+$ENV:clName="mycustomlocation"
+$ENV:clNamespace="arc"
+$ENV:hostClusterId = az connectedk8s show -g "$ENV:resourceGroup" -n "$ENV:resourceName" --query id -o tsv
+$ENV:extensionId = az k8s-extension show -g "$ENV:resourceGroup" -c "$ENV:resourceName" --cluster-type connectedClusters --name "$ENV:ADSExtensionName" --query id -o tsv
+
+az customlocation create -g "$ENV:resourceGroup" -n "$ENV:clName" --namespace "$ENV:clNamespace" --host-resource-id "$ENV:hostClusterId" --cluster-extension-ids "$ENV:extensionId"
+```
+
+## Validate the custom location is created
+
+From the terminal, run the below command to list the custom locations, and validate that the **Provisioning State** shows Succeeded:
+
+```azurecli
+az customlocation list -o table
+```
+
+## Create the Azure Arc data controller
+
+After the extension and custom location are created, proceed to Azure portal to deploy the Azure Arc data controller.
+
+1. Log into the Azure portal.
+1. Search for "Azure Arc data controller" in the Azure Marketplace and initiate the Create flow.
+1. In the **Prerequisites** section, ensure that the Azure Arc enabled Kubernetes cluster (direct mode) is selected and proceed to the next step.
+1. In the **Data controller details** section, choose a subscription and resource group.
+1. Enter a name for the data controller.
+1. Choose a configuration profile based on the Kubernetes distribution provider you are deploying to.
+1. Choose the Custom Location that you created in the previous step.
+1. Provide details for the data controller administrator login and password.
+1. Provide details for ClientId, TenantId, and Client Secret for the Service Principal that will be used to create the Azure objects. See [Upload metrics](upload-metrics-and-logs-to-azure-monitor.md) for detailed instructions on creating a Service Principal account and the roles that need to be granted to the account.
+1. Click **Next**, review the summary page for all the details and click on **Create**.
+
+## Monitor the creation
+
+When the Azure portal deployment status shows the deployment was successful, you can check the status of the Arc data controller deployment on the cluster as follows:
+
+```console
+kubectl get datacontrollers -n arc
+```
+
+## Next steps
+
+[Create an Azure Arc enabled PostgreSQL Hyperscale server group](create-postgresql-hyperscale-server-group.md)
+
+[Create an Azure SQL managed instance on Azure Arc](create-sql-managed-instance.md)
azure-arc Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/known-issues.md
[!INCLUDE [azure-arc-data-preview](../../../includes/azure-arc-data-preview.md)]
+## March 2021
+
+### Data controller
+
+- You can create a data controller in direct connect mode with the Azure portal. Deployment with other Azure Arc enabled data services tools is not supported. Specifically, you can't deploy a data controller in direct connect mode with any of the following tools during this release.
+ - Azure Data Studio
+ - Azure Data CLI (`azdata`)
+ - Kubernetes native tools
+
+ [Deploy Azure Arc data controller | Direct connect mode](deploy-data-controller-direct-mode.md) explains how to create the data controller in the portal.
+
+### Azure Arc enabled PostgreSQL Hyperscale
+
+- Passing an invalid value to the `--extensions` parameter when editing the configuration of a server group to enable additional extensions incorrectly resets the list of enabled extensions to what it was when the server group was created, and prevents the user from enabling additional extensions. The only workaround available when that happens is to delete the server group and redeploy it.
+
## February 2021
-- Connected cluster mode is disabled
+### Data controller
+
+- Direct connect cluster mode is disabled
+
+### Azure Arc enabled PostgreSQL Hyperscale
+
+- Point in time restore is currently not supported on NFS storage.
+- It is not possible to enable and configure the `pg_cron` extension at the same time. You need to use two commands: one to enable it and one to configure it.
+
+ For example:
+ ```console
+ azdata arc postgres server edit -n myservergroup --extensions pg_cron
+ azdata arc postgres server edit -n myservergroup --engine-settings cron.database_name='postgres'
+ ```
+
+ The first command requires a restart of the server group. So, before executing the second command, make sure the state of the server group has transitioned from updating to ready. If you execute the second command before the restart has completed, it will fail. If that is the case, simply wait a few more moments and execute the second command again.
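+
+ One way to check the state before running the second command is to show the server group details (a sketch, assuming the server group name used in the example above):
+
+ ```console
+ azdata arc postgres server show -n myservergroup
+ ```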
## Introduced prior to February 2021
:::image type="content" source="media/release-notes/aks-zone-selector.png" alt-text="Clear the checkboxes for each zone to specify none.":::
-### PostgreSQL
--- Azure Arc enabled PostgreSQL Hyperscale returns an inaccurate error message when it cannot restore to the relative point in time you indicate. For example, if you specified a point in time to restore that is older than what your backups contain, the restore will fail with an error message like:
-`ERROR: (404). Reason: Not found. HTTP response body: {"code":404, "internalStatus":"NOT_FOUND", "reason":"Failed to restore backup for server...}`
-When this happens, restart the command after indicating a point in time that is within the range of dates for which you have backups. You will determine this range by listing your backups and by looking at the dates at which they were taken.
-- Point in time restore is supported only across server groups. The target server of a point in time restore operation cannot be the server from which you took the backup. It has to be a different server group. However, full restore is supported to the same server group.
-- A backup-id is required when doing a full restore. By default, if you are not indicating a backup-id the latest backup will be used. This does not work in this release.
## Next steps
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/overview.md
Title: What are Azure Arc enabled data services
-description: Introduces Azure Arc enabled data services
+description: Introduces Azure Arc enabled data services
+ Previously updated : 09/22/2020 Last updated : 03/31/2021 # Customer intent: As a data professional, I want to understand why my solutions would benefit from running with Azure Arc enabled data services so that I can leverage the capability of the feature.
Using familiar tools such as the Azure portal, Azure Data Studio, and the [!INCL
Many of the services such as self-service provisioning, automated backups/restore, and monitoring can run locally in your infrastructure with or without a direct connection to Azure. Connecting directly to Azure opens up additional options for integration with other Azure services such as Azure Monitor and the ability to use the Azure portal and Azure Resource Manager APIs from anywhere in the world to manage your Azure Arc enabled data services.
+## Supported regions
+
+The following table lists the Azure regions and connectivity modes that are currently supported for Azure Arc enabled data services.
+
+|Azure Regions |Direct connected mode |Indirect connected mode |
+|---|---|---|
+|East US|Available|Available|
+|West Europe |Available |Available|
+|North Europe|Available|Available|
+ ## Next steps > **Just want to try things out?**
azure-arc Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/release-notes.md
Previously updated : 03/02/2021 Last updated : 04/06/2021 # Customer intent: As a data professional, I want to understand why my solutions would benefit from running with Azure Arc enabled data services so that I can leverage the capability of the feature.
This article highlights capabilities, features, and enhancements recently releas
[!INCLUDE [azure-arc-data-preview](../../../includes/azure-arc-data-preview.md)]
+## March 2021
+
+The March 2021 release was introduced on April 6, 2021.
+
+Review limitations of this release in [Known issues - Azure Arc enabled data services (Preview)](known-issues.md).
+
+Azure Data CLI (`azdata`) version number: 20.3.2. Download at [https://aka.ms/azdata](https://aka.ms/azdata). You can install `azdata` from [Install Azure Data CLI (`azdata`)](/sql/azdata/install/deploy-install-azdata).
+
+### Data controller
+
+- Deploy Azure Arc enabled data services data controller in direct connect mode from the portal. Start from [Deploy data controller - direct connect mode - prerequisites](deploy-data-controller-direct-mode-prerequisites.md).
+
+### Azure Arc enabled PostgreSQL Hyperscale
+
+Both custom resource definitions (CRDs) for PostgreSQL have been consolidated into a single CRD, as shown in the following table.
+
+|Release |CRD |
+|--|--|
+|February 2021 and prior| postgresql-11s.arcdata.microsoft.com<br/>postgresql-12s.arcdata.microsoft.com |
+|Beginning March 2021 | postgresqls.arcdata.microsoft.com |
+
+Delete the previous CRDs as you clean up past installations. See [Cleanup from past installations](create-data-controller-using-kubernetes-native-tools.md#cleanup-from-past-installations).
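+
+For example, a sketch of removing the superseded CRDs with `kubectl`, once the cleanup steps in the linked article confirm they are no longer in use:
+
+```console
+kubectl delete crd postgresql-11s.arcdata.microsoft.com
+kubectl delete crd postgresql-12s.arcdata.microsoft.com
+```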
+
+### Azure Arc enabled managed instance
+
+- You can now restore a database to SQL Managed Instance with 3 replicas and it will be automatically added to the availability group.
+
+- You can now connect to a secondary read-only endpoint on SQL Managed Instances deployed with 3 replicas. Use `azdata arc sql endpoint list` to see the secondary read-only connection endpoint.
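+
+ For example (a sketch; it assumes `-n` takes the name of the managed instance):
+
+ ```console
+ azdata arc sql endpoint list -n <SQL managed instance name>
+ ```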
+
+### Known issues
+
+- In direct connected mode, upload of usage, metrics, and logs using `azdata arc dc upload` is currently blocked. Usage data is automatically uploaded. Upload for a data controller created in indirect connected mode should continue to work.
+- Deployment of the data controller in direct mode can only be done from the Azure portal, and is not available from client tools such as azdata, Azure Data Studio, or kubectl.
+- Deployment of Azure Arc enabled SQL Managed Instance in direct mode can only be done from the Azure portal, and is not available from tools such as azdata, Azure Data Studio, or kubectl.
+- Deployment of Azure Arc enabled PostgreSQL Hyperscale in direct mode is currently not available.
+- Automatic upload of usage data in direct connectivity mode will not succeed if you use a proxy via `--proxy-cert <path-to-cert-file>`.
+
## February 2021
### New capabilities and features
azdata arc dc create --profile-name azure-arc-aks-hci --namespace arc --name arc
:::image type="content" source="media/release-notes/aks-zone-selector.png" alt-text="Clear the checkboxes for each zone to specify none.":::
-#### PostgreSQL
--- Azure Arc enabled PostgreSQL Hyperscale returns an inaccurate error message when it cannot restore to the relative point in time you indicate. For example, if you specified a point in time to restore that is older than what your backups contain, the restore will fail with an error message like:
-ERROR: (404). Reason: Not found. HTTP response body: {"code":404, "internalStatus":"NOT_FOUND", "reason":"Failed to restore backup for server...}
-When this happens, restart the command after indicating a point in time that is within the range of dates for which you have backups. You will determine this range by listing your backups and by looking at the dates at which they were taken.
- Point in time restore is supported only across server groups. The target server of a point in time restore operation cannot be the server from which you took the backup. It has to be a different server group. However, full restore is supported to the same server group.
-- A backup-id is required when doing a full restore. By default, if you are not indicating a backup-id the latest backup will be used. This does not work in this release.
## October 2020
Azure Data CLI (`azdata`) version number: 20.2.3. Download at [https://aka.ms/azdata](https://aka.ms/azdata).
azure-functions Durable Functions Create First Csharp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/durable/durable-functions-create-first-csharp.md
You have used Visual Studio Code to create and publish a C# durable function app
::: zone pivot="code-editor-visualstudio"
-In this article, you learn how to Visual Studio 2019 to locally create and test a "hello world" durable function. This function orchestrates and chains-together calls to other functions. You then publish the function code to Azure. These tools are available as part of the Azure development workload in Visual Studio 2019.
+In this article, you learn how to use Visual Studio 2019 to locally create and test a "hello world" durable function. This function orchestrates and chains-together calls to other functions. You then publish the function code to Azure. These tools are available as part of the Azure development workload in Visual Studio 2019.
![Screenshot shows a Visual Studio 2019 window with a durable function.](./media/durable-functions-create-first-csharp/functions-vs-complete.png)
azure-functions Functions Deployment Slots https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-deployment-slots.md
The following reflect how functions are affected by swapping slots:
- Traffic redirection is seamless; no requests are dropped because of a swap. - If a function is running during a swap, execution continues and the next triggers are routed to the swapped app instance.
-> [!NOTE]
-> Slots are currently not available for the Linux Consumption plan.
- ## Why use slots? There are a number of advantages to using deployment slots. The following scenarios describe common uses for slots:
There are two levels of support for deployment slots:
| Windows Consumption | General availability |
| Windows Premium | General availability |
| Windows Dedicated | General availability |
-| Linux Consumption | Unsupported |
+| Linux Consumption | Preview |
| Linux Premium | General availability |
| Linux Dedicated | General availability |
azure-monitor Alerts Common Schema Definitions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/alerts-common-schema-definitions.md
Last updated 09/22/2020
This article describes the [common alert schema definitions](./alerts-common-schema.md) for Azure Monitor, including those for webhooks, Azure Logic Apps, Azure Functions, and Azure Automation runbooks. Any alert instance describes the resource that was affected and the cause of the alert. These instances are described in the common schema in the following sections:
-* **Essentials**: A set of standardized fields, common across all alert types, which describe what resource the alert is on, along with additional common alert metadata (for example, severity or description).
+* **Essentials**: A set of standardized fields, common across all alert types, which describe what resource the alert is on, along with additional common alert metadata (for example, severity or description). Definitions of severity can be found in the [alerts overview](alerts-overview.md#overview).
* **Alert context**: A set of fields that describes the cause of the alert, with fields that vary based on the alert type. For example, a metric alert includes fields like the metric name and metric value in the alert context, whereas an activity log alert has information about the event that generated the alert. **Sample alert payload**
Any alert instance describes the resource that was affected and the cause of the
## Next steps - Learn more about the [common alert schema](./alerts-common-schema.md).-- Learn [how to create a logic app that uses the common alert schema to handle all your alerts](./alerts-common-schema-integrations.md).
+- Learn [how to create a logic app that uses the common alert schema to handle all your alerts](./alerts-common-schema-integrations.md).
azure-monitor Javascript React Plugin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/javascript-react-plugin.md
The `useTrackEvent` Hook is used to track any custom event that an application m
import React, { useState, useEffect } from "react"; import { useAppInsightsContext, useTrackEvent } from "@microsoft/applicationinsights-react-js";
-const ProductCart = () => {
+const MyComponent = () => {
const appInsights = useAppInsightsContext();
- const trackCheckout = useTrackEvent(appInsights, "Checkout");
- const trackCartUpdate = useTrackEvent(appInsights, "Cart Updated");
const [cart, setCart] = useState([]);
-
+ const trackCheckout = useTrackEvent(appInsights, "Checkout", cart);
+ const trackCartUpdate = useTrackEvent(appInsights, "Cart Updated", cart);
useEffect(() => { trackCartUpdate({ cartCount: cart.length }); }, [cart]);
const ProductCart = () => {
return ( <div> <ul>
- <li>Product 1 <button onClick={() => setCart([...cart, "Product 1"])}>Add to Cart</button>
- <li>Product 2 <button onClick={() => setCart([...cart, "Product 2"])}>Add to Cart</button>
- <li>Product 3 <button onClick={() => setCart([...cart, "Product 3"])}>Add to Cart</button>
- <li>Product 4 <button onClick={() => setCart([...cart, "Product 4"])}>Add to Cart</button>
+ <li>Product 1 <button onClick={() => setCart([...cart, "Product 1"])}>Add to Cart</button></li>
+ <li>Product 2 <button onClick={() => setCart([...cart, "Product 2"])}>Add to Cart</button></li>
+ <li>Product 3 <button onClick={() => setCart([...cart, "Product 3"])}>Add to Cart</button></li>
+ <li>Product 4 <button onClick={() => setCart([...cart, "Product 4"])}>Add to Cart</button></li>
</ul> <button onClick={performCheckout}>Checkout</button> </div> ); }+ export default MyComponent; ```
azure-monitor Network Performance Monitor Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/insights/network-performance-monitor-faq.md
If a hop is red, it signifies that it is part of at least one unhealthy path. NP
### How does fault localization in Performance Monitor work?
NPM uses a probabilistic mechanism to assign fault-probabilities to each network path, network segment, and the constituent network hops based on the number of unhealthy paths they are a part of. As the network segments and hops become part of a greater number of unhealthy paths, the fault-probability associated with them increases. This algorithm works best when you have many nodes with the NPM agent connected to each other, because this increases the data points for calculating the fault-probabilities.
-### How can I create alerts in NPM?
-Currently, creating alerts from the NPM UI is failing due to a known issue. Please [create alerts manually](../alerts/alerts-log.md).
- ### What are the default Log Analytics queries for alerts Performance monitor query
azure-monitor Manage Cost Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/manage-cost-storage.md
Some suggestions for reducing the volume of logs collected include:
| Syslog | Change [syslog configuration](../agents/data-sources-syslog.md) to: <br> - Reduce the number of facilities collected <br> - Collect only required event levels. For example, do not collect *Info* and *Debug* level events | | AzureDiagnostics | Change [resource log collection](../essentials/diagnostic-settings.md#create-in-azure-portal) to: <br> - Reduce the number of resources send logs to Log Analytics <br> - Collect only required logs | | Solution data from computers that don't need the solution | Use [solution targeting](../insights/solution-targeting.md) to collect data from only required groups of computers. |
-| Application Insights | Review options for [https://docs.microsoft.com/azure/azure-monitor/app/pricing#managing-your-data-volume](managing Application Insights data volume) |
+| Application Insights | Review options for [managing Application Insights data volume](../app/pricing.md#managing-your-data-volume) |
| [SQL Analytics](../insights/azure-sql.md) | Use [Set-AzSqlServerAudit](/powershell/module/az.sql/set-azsqlserveraudit) to tune the auditing settings. | | Azure Sentinel | Review any [Sentinel data sources](../../sentinel/connect-data-sources.md) which you recently enabled as sources of additional data volume. |
There are some additional Log Analytics limits, some of which depend on the Log
- Change [performance counter configuration](../agents/data-sources-performance-counters.md). - To modify your event collection settings, review [event log configuration](../agents/data-sources-windows-events.md). - To modify your syslog collection settings, review [syslog configuration](../agents/data-sources-syslog.md).-- To modify your syslog collection settings, review [syslog configuration](../agents/data-sources-syslog.md).
+- To modify your syslog collection settings, review [syslog configuration](../agents/data-sources-syslog.md).
azure-monitor Roles Permissions Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/roles-permissions-security.md
Title: Roles, permissions, and security in Azure Monitor description: Learn how to use Azure Monitor's built-in roles and permissions to restrict access to monitoring resources.- - Last updated 11/27/2017- # Roles, permissions, and security in Azure Monitor
azure-monitor Vminsights Health Configure Dcr https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/vm/vminsights-health-configure-dcr.md
List of one or more strings that define which monitors in health hierarchy will
The following table lists the currently available monitor names.
| Type name | Name | Description |
-|:|:|:|
-| root | root | Top level monitor representing virtual machine health. | |
-| cpu-utilization | cpu-utilization | CPU utilization monitor. | |
-| logical-disks | logical-disks | Aggregate monitor for health state of all monitored disks on Windows virtual machine. | |
-| logical-disks\|* | logical-disks\|C:<br>logical-disks\|D: | Aggregate monitor tracking health of a given disk on Windows virtual machine. |
-| logical-disks\|*\|free-space | logical-disks\|C:\|free-space<br>logical-disks\|D:\|free-space | Disk free space monitor on Windows virtual machine. |
+|:-|:--|:|
+| root | root | Top level monitor representing virtual machine health. |
+| cpu-utilization | cpu-utilization | CPU utilization monitor. |
+| logical-disks | logical-disks | Aggregate monitor for health state of all monitored disks on Windows virtual machine. |
+| logical-disks\|\* | logical-disks\|C:<br>logical-disks\|D: | Aggregate monitor tracking health of a given disk on Windows virtual machine. |
+| logical-disks\|\*\|free-space | logical-disks\|C:\|free-space<br>logical-disks\|D:\|free-space | Disk free space monitor on Windows virtual machine. |
| filesystems | filesystems | Aggregate monitor for health of all filesystems on Linux virtual machine. |
-| filesystems\|* | filesystems\|/<br>filesystems\|/mnt | Aggregate monitor tracking health of a filesystem of Linux virtual machine. | filesystems|/var/log |
-| filesystems\|*\|free-space | filesystems\|/\|free-space<br>filesystems\|/mnt\|free-space | Disk free space monitor on Linux virtual machine filesystem. |
-| memory | memory | Aggregate monitor for health of virtual machine memory. | |
-| memory\|available| memory\|available | Monitor tracking available memory on the virtual machine. | |
+| filesystems\|\* | filesystems\|/<br>filesystems\|/mnt | Aggregate monitor tracking health of a filesystem of Linux virtual machine. |
+| filesystems\|\*\|free-space | filesystems\|/\|free-space<br>filesystems\|/mnt\|free-space | Disk free space monitor on Linux virtual machine filesystem. |
+| memory | memory | Aggregate monitor for health of virtual machine memory. |
+| memory\|available | memory\|available | Monitor tracking available memory on the virtual machine. |
## alertConfiguration element
For a sample data collection rule enabling guest monitoring, see [Enable a virtu
## Next steps -- Read more about [data collection rules](../agents/data-collection-rule-overview.md).
+- Read more about [data collection rules](../agents/data-collection-rule-overview.md).
azure-netapp-files Azure Netapp Files Create Volumes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azure-netapp-files-create-volumes.md
na ms.devlang: na Previously updated : 09/24/2020 Last updated : 04/05/2021 # Create an NFS volume for Azure NetApp Files
Azure NetApp Files supports creating volumes using NFS (NFSv3 and NFSv4.1), SMB3
Additional configurations are required if you use Kerberos with NFSv4.1. Follow the instructions in [Configure NFSv4.1 Kerberos encryption](configure-kerberos-encryption.md).
+ * If you want to enable Active Directory LDAP users and extended groups (up to 1024 groups) to access the volume, select the **LDAP** option. Follow instructions in [Configure ADDS LDAP with extended groups for NFS volume access](configure-ldap-extended-groups.md) to complete the required configurations.
+
* Optionally, [configure export policy for the NFS volume](azure-netapp-files-configure-export-policy.md). ![Specify NFS protocol](../media/azure-netapp-files/azure-netapp-files-protocol-nfs.png)
Azure NetApp Files supports creating volumes using NFS (NFSv3 and NFSv4.1), SMB3
* [Configure NFSv4.1 default domain for Azure NetApp Files](azure-netapp-files-configure-nfsv41-domain.md) * [Configure NFSv4.1 Kerberos encryption](configure-kerberos-encryption.md)
+* [Configure ADDS LDAP with extended groups for NFS volume access](configure-ldap-extended-groups.md)
* [Mount or unmount a volume for Windows or Linux virtual machines](azure-netapp-files-mount-unmount-volumes-for-virtual-machines.md) * [Configure export policy for an NFS volume](azure-netapp-files-configure-export-policy.md) * [Resource limits for Azure NetApp Files](azure-netapp-files-resource-limits.md)
azure-netapp-files Azure Netapp Files Faqs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azure-netapp-files-faqs.md
na ms.devlang: na Previously updated : 04/05/2021 Last updated : 04/06/2021 # FAQs About Azure NetApp Files
Azure NetApp Files is an Azure native service. All PUT, POST, and DELETE APIs ag
For the complete list of API operations, see [Azure NetApp Files REST API](/rest/api/netapp/).
-### How do I audit file access on Azure NetApp Files NFS (v3 and v4.1) volumes?
-
-You can configure audit logs on the client side. All read, write, and attribute changes are logged.
- ### Can I use Azure policies with Azure NetApp Files? Yes, you can create [custom Azure policies](../governance/policy/tutorials/create-custom-policy-definition.md).
azure-netapp-files Azure Netapp Files Solution Architectures https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azure-netapp-files-solution-architectures.md
na ms.devlang: na Previously updated : 03/29/2021 Last updated : 04/06/2021 # Solution architectures using Azure NetApp Files
This section provides references for Windows applications and SQL Server solutio
### SQL Server
+* [SQL Server on Azure Deployment Guide Using Azure NetApp Files](https://www.netapp.com/pdf.html?item=/media/27154-tr-4888.pdf)
+* [Benefits of using Azure NetApp Files for SQL Server deployment](solutions-benefits-azure-netapp-files-sql-server.md)
* [Deploy SQL Server Over SMB with Azure NetApp Files](https://www.youtube.com/watch?v=x7udfcYbibs) * [Deploy SQL Server Always-On Failover Cluster over SMB with Azure NetApp Files](https://www.youtube.com/watch?v=zuNJ5E07e8Q) * [Deploy Always-On Availability Groups with Azure NetApp Files](https://www.youtube.com/watch?v=y3VQmzzeyvc)
-* [Benefits of using Azure NetApp Files for SQL Server deployment](solutions-benefits-azure-netapp-files-sql-server.md)
## SAP on Azure solutions
azure-netapp-files Configure Ldap Extended Groups https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/configure-ldap-extended-groups.md
+
+ Title: Configure ADDS LDAP with extended groups for Azure NetApp Files NFS volume access | Microsoft Docs
+description: Describes the considerations and steps for enabling LDAP with extended groups when you create an NFS volume by using Azure NetApp Files.
+
+documentationcenter: ''
++
+editor: ''
+
+ms.assetid:
++
+ na
+ms.devlang: na
+ Last updated : 04/05/2021++
+# Configure ADDS LDAP with extended groups for NFS volume access
+
+When you [create an NFS volume](azure-netapp-files-create-volumes.md), you have the option to enable the LDAP with extended groups feature (the **LDAP** option) for the volume. This feature enables Active Directory LDAP users and extended groups (up to 1024 groups) to access the volume.
+
+This article explains the considerations and steps for enabling LDAP with extended groups when you create an NFS volume.
+
+## Considerations
+
+* LDAP over TLS must *not* be enabled if you are using Azure Active Directory Domain Services (AADDS).
+
+* If you enable the LDAP with extended groups feature, LDAP-enabled [Kerberos volumes](configure-kerberos-encryption.md) will not correctly display the file ownership for LDAP users. A file or directory created by an LDAP user will default to `root` as the owner instead of the actual LDAP user. However, the `root` account can manually change the file ownership by using the command `chown <username> <filename>`.
+
+* You cannot modify the LDAP option setting (enabled or disabled) after you have created the volume.
+
+* The following table describes the Time to Live (TTL) settings for the LDAP cache. You need to wait until the cache is refreshed before trying to access a file or directory through a client. Otherwise, an access denied message appears on the client.
+
+ | Cache | Default Timeout |
+ |-|-|
+ | Group membership list | 24-hour TTL |
+ | Unix groups | 24-hour TTL, 1-minute negative TTL |
+ | Unix users | 24-hour TTL, 1-minute negative TTL |
+
+ Caches have a specific timeout period called *Time to Live*. After the timeout period, entries age out so that stale entries do not linger. The *negative TTL* value is where a failed lookup resides to help avoid performance issues due to LDAP queries for objects that might not exist.
+
+## Steps
+
+1. The LDAP with extended groups feature is currently in preview. Before using this feature for the first time, you need to register the feature:
+
+ 1. Register the feature:
+
+ ```azurepowershell-interactive
+ Register-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFLdapExtendedGroups
+ ```
+
+ 2. Check the status of the feature registration:
+
+ > [!NOTE]
+ > The **RegistrationState** may be in the `Registering` state for up to 60 minutes before changing to `Registered`. Wait until the status is `Registered` before continuing.
+
+ ```azurepowershell-interactive
+ Get-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFLdapExtendedGroups
+ ```
+
+ You can also use [Azure CLI commands](/cli/azure/feature) `az feature register` and `az feature show` to register the feature and display the registration status.
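+
+ For example, a sketch using the Azure CLI equivalents:
+
+ ```azurecli-interactive
+ az feature register --namespace Microsoft.NetApp --name ANFLdapExtendedGroups
+ az feature show --namespace Microsoft.NetApp --name ANFLdapExtendedGroups
+ ```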
+
+2. LDAP volumes require an Active Directory configuration for LDAP server settings. Follow instructions in [Requirements for Active Directory connections](create-active-directory-connections.md#requirements-for-active-directory-connections) and [Create an Active Directory connection](create-active-directory-connections.md#create-an-active-directory-connection) to configure Active Directory connections on the Azure portal.
+
+3. Ensure that the Active Directory LDAP server is up and running on the Active Directory. You can do so by installing and configuring the [Active Directory Lightweight Directory Services (AD LDS)](/previous-versions/windows/it-pro/windows-server-2012-r2-and-2012/hh831593(v=ws.11)) role on the AD machine.
+
+4. LDAP NFS users need to have certain POSIX attributes on the LDAP server. Follow [Manage LDAP POSIX Attributes](create-volumes-dual-protocol.md#manage-ldap-posix-attributes) to set the required attributes.
+
+5. If you want to configure an LDAP-integrated Linux client, see [Configure an NFS client for Azure NetApp Files](configure-nfs-clients.md).
+
+6. Follow steps in [Create an NFS volume for Azure NetApp Files](azure-netapp-files-create-volumes.md) to create an NFS volume. During the volume creation process, under the **Protocol** tab, enable the **LDAP** option.
+
+ ![Screenshot that shows Create a Volume page with LDAP option.](../media/azure-netapp-files/create-nfs-ldap.png)
+
+7. Optional - You can enable local NFS client users not present on the Windows LDAP server to access an NFS volume that has LDAP with extended groups enabled. To do so, enable the **Allow local NFS users with LDAP** option as follows:
+ 1. Click **Active Directory connections**. On an existing Active Directory connection, click the context menu (the three dots `…`), and select **Edit**.
+ 2. On the **Edit Active Directory settings** window that appears, select the **Allow local NFS users with LDAP** option.
+
+ ![Screenshot that shows the Allow local NFS users with LDAP option](../media/azure-netapp-files/allow-local-nfs-users-with-ldap.png)
+
+## Next steps
+
+* [Create an NFS volume for Azure NetApp Files](azure-netapp-files-create-volumes.md)
+* [Create and manage Active Directory connections](create-active-directory-connections.md)
+* [Troubleshoot LDAP volume issues](troubleshoot-ldap-volumes.md)
azure-netapp-files Create Volumes Dual Protocol https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/create-volumes-dual-protocol.md
na ms.devlang: na Previously updated : 01/28/2020 Last updated : 04/05/2021 # Create a dual-protocol (NFSv3 and SMB) volume for Azure NetApp Files
Azure NetApp Files supports creating volumes using NFS (NFSv3 and NFSv4.1), SMB3
* Create a reverse lookup zone on the DNS server and then add a pointer (PTR) record of the AD host machine in that reverse lookup zone. Otherwise, the dual-protocol volume creation will fail. * Ensure that the NFS client is up to date and running the latest updates for the operating system. * Ensure that the Active Directory (AD) LDAP server is up and running on the AD. You can do so by installing and configuring the [Active Directory Lightweight Directory Services (AD LDS)](/previous-versions/windows/it-pro/windows-server-2012-r2-and-2012/hh831593(v=ws.11)) role on the AD machine.
-* Dual-protocol volumes do not currently support Azure Active Directory Domain Services (AADDS).
+* Dual-protocol volumes do not currently support Azure Active Directory Domain Services (AADDS). LDAP over TLS must not be enabled if you are using AADDS.
* The NFS version used by a dual-protocol volume is NFSv3. As such, the following considerations apply: * Dual protocol does not support the Windows ACLS extended attributes `set/get` from NFS clients. * NFS clients cannot change permissions for the NTFS security style, and Windows clients cannot change permissions for UNIX-style dual-protocol volumes.
Azure NetApp Files supports creating volumes using NFS (NFSv3 and NFSv4.1), SMB3
A volume inherits subscription, resource group, location attributes from its capacity pool. To monitor the volume deployment status, you can use the Notifications tab.
+## Allow local NFS users with LDAP to access a dual-protocol volume
+
+You can enable local NFS client users not present on the Windows LDAP server to access a dual-protocol volume that has LDAP with extended groups enabled. To do so, enable the **Allow local NFS users with LDAP** option as follows:
+
+1. Click **Active Directory connections**. On an existing Active Directory connection, click the context menu (the three dots `…`), and select **Edit**.
+
+2. On the **Edit Active Directory settings** window that appears, select the **Allow local NFS users with LDAP** option.
+
+ ![Screenshot that shows the Allow local NFS users with LDAP option](../media/azure-netapp-files/allow-local-nfs-users-with-ldap.png)
++ ## Manage LDAP POSIX Attributes You can manage POSIX attributes such as UID, Home Directory, and other values by using the Active Directory Users and Computers MMC snap-in. The following example shows the Active Directory Attribute Editor:
Follow instructions in [Configure an NFS client for Azure NetApp Files](configur
* [Configure an NFS client for Azure NetApp Files](configure-nfs-clients.md) * [Troubleshoot SMB or dual-protocol volumes](troubleshoot-dual-protocol-volumes.md)
+* [Troubleshoot LDAP volume issues](troubleshoot-ldap-volumes.md)
azure-netapp-files Troubleshoot Ldap Volumes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/troubleshoot-ldap-volumes.md
+
+ Title: Troubleshoot LDAP volume issues for Azure NetApp Files | Microsoft Docs
+description: Describes resolutions to error conditions that you might have when configuring LDAP volumes for Azure NetApp Files.
+
+documentationcenter: ''
++
+editor: ''
+
+ms.assetid:
++
+ na
+ms.devlang: na
+ Last updated : 04/05/2021++
+# Troubleshoot LDAP volume issues
+
+This article describes resolutions to error conditions you might have when configuring LDAP volumes.
+
+## Errors and resolutions for LDAP volumes
+
+| Error conditions | Resolutions |
+|-|-|
+| Error when creating an SMB volume with ldapEnabled as true: <br> `Error Message: ldapEnabled option is only supported with NFS protocol volume. ` | You cannot create an SMB volume with LDAP enabled. <br> Create SMB volumes with LDAP disabled. |
+| Error when updating the ldapEnabled parameter value for an existing volume: <br> `Error Message: ldapEnabled parameter is not allowed to update` | You cannot modify the LDAP option setting after creating a volume. <br> Do not update the LDAP option setting on a created volume. See [Configure ADDS LDAP with extended groups for NFS volume access](configure-ldap-extended-groups.md) for details. |
+| Error when creating an LDAP-enabled NFS volume: <br> `Could not query DNS server` <br> `Sample error message:` <br> `"log": time="2020-10-21 05:04:04.300" level=info msg=Res method=GET url=/v2/Volumes/070d0d72-d82c-c893-8ce3-17894e56cea3 x-correlation-id=9bb9e9fe-abb6-4eb5-a1e4-9e5fbb838813 x-request-id=c8032cb4-2453-05a9-6d61-31ca4a922d85 xresp="200: {\"created\":\"2020-10-21T05:02:55.000Z\",\"lifeCycleState\":\"error\",\"lifeCycleStateDetails\":\"Error when creating - Could not query DNS server. Verify that the network configuration is correct and that DNS servers are available.\",\"name\":\"smb1\",\"ownerId\ \":\"8c925a51-b913-11e9-b0de-9af5941b8ed0\",\"region\":\"westus2stage\",\"volumeId\":\"070d0d72-d82c-c893-8ce3-` | This error occurs because DNS is unreachable. <br> <ul><li> Check if you have configured the correct site (site scoping) for Azure NetApp Files. </li><li> The reason that DNS is unreachable might be an incorrect DNS IP address or networking issues. Check the DNS IP address entered in the AD connection to make sure that it is correct. </li><li> Make sure that the AD and the volume are in the same region and the same VNet. If they are in different VNets, ensure that VNet peering is established between the two VNets.</li></ul> |
+| Error when creating volume from a snapshot: <br> `Aggregate does not exist` | Azure NetApp Files doesn't support provisioning a new, LDAP-enabled volume from a snapshot that belongs to an LDAP-disabled volume. <br> Try creating a new LDAP-disabled volume from the given snapshot. |
+
+## Next steps
+
+* [Configure ADDS LDAP with extended groups for NFS volume access](configure-ldap-extended-groups.md)
+* [Create an NFS volume for Azure NetApp Files](azure-netapp-files-create-volumes.md)
+* [Create a dual-protocol (NFSv3 and SMB) volume for Azure NetApp Files](create-volumes-dual-protocol.md)
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/whats-new.md
na ms.devlang: na Previously updated : 03/19/2021 Last updated : 04/05/2021
Azure NetApp Files is updated regularly. This article provides a summary about the latest new features and enhancements.
+## April 2021
+
+* [Active Directory Domain Services (ADDS) LDAP user-mapping with NFS extended groups](configure-ldap-extended-groups.md) (Preview)
+
+ By default, Azure NetApp Files supports up to 16 group IDs when handling NFS user credentials, as defined in [RFC 5531](https://tools.ietf.org/html/rfc5531). With this new capability, you can now increase the maximum up to 1,024 if you have users who are members of more than the default number of groups. To support this capability, NFS volumes can now also be added to ADDS LDAP, which enables Active Directory LDAP users with extended group entries (up to 1,024 groups) to access the volume.
+ ## March 2021 * [SMB Continuous Availability (CA) shares](azure-netapp-files-create-volumes-smb.md#add-an-smb-volume) (Preview)
azure-resource-manager Resource Name Rules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/resource-name-rules.md
Title: Resource naming restrictions description: Shows the rules and restrictions for naming Azure resources. Previously updated : 01/27/2021 Last updated : 04/06/2021 # Naming rules and restrictions for Azure resources
This article summarizes naming rules and restrictions for Azure resources. For r
This article lists resources by resource provider namespace. For a list of how resource providers match Azure services, see [Resource providers for Azure services](azure-services-resource-providers.md).
-Resource names are case-insensitive unless specifically noted in the valid characters column.
+Resource names are case-insensitive unless noted in the valid characters column.
In the following tables, the term alphanumeric refers to:
In the following tables, the term alphanumeric refers to:
> | | | | | > | certificates | resource group | 1-260 | Can't use:<br>`/` <br><br>Can't end with space or period. | > | serverfarms | resource group | 1-40 | Alphanumerics and hyphens. |
-> | sites | global | 2-60 | Contains alphanumerics and hyphens.<br><br>Can't start or end with hyphen. |
+> | sites | global or per domain. See note below. | 2-60 | Contains alphanumerics and hyphens.<br><br>Can't start or end with hyphen. |
> | sites / slots | site | 2-59 | Alphanumerics and hyphens. | > [!NOTE]
+> A web site must have a globally unique URL. When you create a web site that uses a hosting plan, the URL is `http://<app-name>.azurewebsites.net`. The app name must be globally unique. When you create a web site that uses an App Service Environment, the app name must be unique within the [domain for the App Service Environment](../../app-service/environment/using-an-ase.md#app-access). For both cases, the URL of the site is globally unique.
+>
> Azure Functions has the same naming rules and restrictions as Microsoft.Web/sites. ## Next steps
azure-sql Block Crud Tsql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/block-crud-tsql.md
+
+ Title: Block T-SQL commands to create or modify Azure SQL resources
+
+description: This article details a feature allowing Azure administrators to block T-SQL commands to create or modify Azure SQL resources
++++++ Last updated : 03/31/2021++++
+# What is Block T-SQL CRUD feature?
++
+This feature allows Azure administrators to block the creation or modification of Azure SQL resources through T-SQL. This is enforced at the subscription level to block T-SQL commands from affecting SQL resources in any Azure SQL database or managed instance.
+
+## Overview
+
+To block creation or modification of resources through T-SQL and enforce resource management through an Azure Resource Manager template (ARM template) for a given subscription, the subscription-level preview features in the Azure portal can be used. This is particularly useful when you are using [Azure Policies](/azure/governance/policy/overview) to enforce organizational standards through ARM templates. Because T-SQL does not adhere to Azure Policies, a block on T-SQL create or modify operations can be applied. The blocked syntax includes create, alter, and drop statements for databases in Azure SQL, specifically the `CREATE DATABASE`, `ALTER DATABASE`, and `DROP DATABASE` statements.
+
+T-SQL CRUD operations can be blocked via Azure portal, [PowerShell](/powershell/module/az.resources/register-azproviderfeature), or [Azure CLI](/cli/azure/feature#az_feature_register).
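+
+As a sketch, the same registration can be scripted with the Azure CLI. The feature name below is a placeholder because this article only shows the portal flow; confirm the exact preview feature name in the portal's **Preview Features** blade:
+
+```azurecli
+# Placeholder feature name -- verify it under Preview Features in the portal before running.
+az feature register --namespace Microsoft.Sql --name <block-tsql-crud-feature-name>
+az feature show --namespace Microsoft.Sql --name <block-tsql-crud-feature-name>
+```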
+
+## Permissions
+
+In order to register or remove this feature, the Azure user must be a member of the Owner or Contributor role of the subscription.
+
+## Examples
+
+The following section describes how you can register or unregister a preview feature with the Microsoft.Sql resource provider in the Azure portal:
+
+### Register Block T-SQL CRUD
+
+1. Go to your subscription in the Azure portal.
+2. Select the **Preview Features** tab.
+3. Select **Block T-SQL CRUD**.
+4. After you select **Block T-SQL CRUD**, a new window opens. Select **Register** to register this block with the Microsoft.Sql resource provider.
+
+![Select "Block T-SQL CRUD" in the list of Preview Features](./media/block-tsql-crud/block-tsql-crud.png)
+
+![With "Block T-SQL CRUD" checked, select Register](./media/block-tsql-crud/block-tsql-crud-register.png)
+
+
+### Re-register the Microsoft.Sql resource provider
+After you register the block of T-SQL CRUD with Microsoft.Sql resource provider, you must re-register the Microsoft.Sql resource provider for the changes to take effect. To re-register the Microsoft.Sql resource provider:
+
+1. Go to your subscription in the Azure portal.
+2. Select the **Resource Providers** tab.
+3. Search for and select the **Microsoft.Sql** resource provider.
+4. Select **Re-register**.
+
+> [!NOTE]
+> The re-registration step is mandatory for the T-SQL block to be applied to your subscription.
+
+![Re-register the Microsoft.Sql resource provider](./media/block-tsql-crud/block-tsql-crud-re-register.png)
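+
+If you prefer the command line, re-registering the provider can also be done with the Azure CLI, which is equivalent to the portal steps above:
+
+```azurecli
+az provider register --namespace Microsoft.Sql
+```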
+
+### Removing Block T-SQL CRUD
+To remove the block on T-SQL create or modify operations from your subscription, first unregister the previously registered T-SQL block. Then, re-register the Microsoft.Sql resource provider as shown above for the removal of the T-SQL block to take effect.
++
+## Next steps
+
+- [An overview of Azure SQL Database security capabilities](security-overview.md)
+- [Azure SQL Database security best practices](security-best-practice.md)
azure-sql Design First Database Csharp Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/design-first-database-csharp-tutorial.md
--++ Last updated 07/29/2019
azure-sql Dynamic Data Masking Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/dynamic-data-masking-overview.md
You can use the REST API to programmatically manage data masking policy and rule
### Data masking policies -- [Create Or Update](/rest/api/sql/datamaskingpolicies/createorupdate): Creates or updates a database data masking policy.-- [Get](/rest/api/sql/datamaskingpolicies/get): Gets a database data masking policy.
+- [Create Or Update](/rest/api/sql/2014-04-01/datamaskingpolicies/createorupdate): Creates or updates a database data masking policy.
+- [Get](/rest/api/sql/2014-04-01/datamaskingpolicies/get): Gets a database data masking policy.
### Data masking rules -- [Create Or Update](/rest/api/sql/datamaskingrules/createorupdate): Creates or updates a database data masking rule.-- [List By Database](/rest/api/sql/datamaskingrules/listbydatabase): Gets a list of database data masking rules.
+- [Create Or Update](/rest/api/sql/2014-04-01/datamaskingrules/createorupdate): Creates or updates a database data masking rule.
+- [List By Database](/rest/api/sql/2014-04-01/datamaskingrules/listbydatabase): Gets a list of database data masking rules.
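+
+As a sketch, these operations can be invoked with `az rest`; the resource path and policy name (`Default`) shown here are assumptions that should be confirmed against the linked REST reference:
+
+```azurecli
+az rest --method get \
+  --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Sql/servers/<server-name>/databases/<database-name>/dataMaskingPolicies/Default?api-version=2014-04-01"
+```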
## Permissions
azure-sql Security Best Practice https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/security-best-practice.md
Most security standards address data availability in terms of operational contin
## Next steps -- See [An overview of Azure SQL Database security capabilities](security-overview.md)
+- See [An overview of Azure SQL Database security capabilities](security-overview.md)
azure-sql Service Tier Hyperscale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/service-tier-hyperscale.md
Enabled Regions:
- Australia Central - Brazil South - Canada Central
+- Canada East
- Central US - China East 2 - China North 2
azure-sql Db2 To Sql Database Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/database/db2-to-sql-database-guide.md
Title: "Db2 to Azure SQL Database: Migration guide"
-description: This guide teaches you to migrate your Db2 databases to Azure SQL Database using SQL Server Migration Assistant for Db2 (SSMA for Db2).
+description: This guide teaches you to migrate your IBM Db2 databases to Azure SQL Database, by using the SQL Server Migration Assistant for Db2 (SSMA for Db2).
Last updated 11/06/2020
-# Migration guide: Db2 to Azure SQL Database
+# Migration guide: IBM Db2 to Azure SQL Database
[!INCLUDE[appliesto-sqldb-sqlmi](../../includes/appliesto-sqldb.md)]
-This guide teaches you to migrate your Db2 databases to Azure SQL Database using SQL Server Migration Assistant for Db2.
+This guide teaches you to migrate your IBM Db2 databases to Azure SQL Database, by using the SQL Server Migration Assistant for Db2.
-For other migration guides, see [Database Migration](https://docs.microsoft.com/data-migration).
+For other migration guides, see [Azure Database Migration Guides](https://docs.microsoft.com/data-migration).
## Prerequisites To migrate your Db2 database to SQL Database, you need: -- To verify your [source environment is supported](/sql/ssma/db2/installing-ssma-for-db2-client-db2tosql#prerequisites).
+- To verify that your [source environment is supported](/sql/ssma/db2/installing-ssma-for-db2-client-db2tosql#prerequisites).
- To download [SQL Server Migration Assistant (SSMA) for Db2](https://www.microsoft.com/download/details.aspx?id=54254).-- A target [Azure SQL Database](../../database/single-database-create-quickstart.md).
+- A target database in [Azure SQL Database](../../database/single-database-create-quickstart.md).
- Connectivity and sufficient permissions to access both source and target. -- ## Pre-migration
-After you have met the prerequisites, you are ready to discover the topology of your environment and assess the feasibility of your migration.
+After you have met the prerequisites, you're ready to discover the topology of your environment and assess the feasibility of your migration.
### Assess and convert
-Use SQL Server Migration Assistant (SSMA) for DB2 to review database objects and data, and assess databases for migration.
+Use SSMA for DB2 to review database objects and data, and assess databases for migration.
To create an assessment, follow these steps:
-1. Open [SQL Server Migration Assistant (SSMA) for Db2](https://www.microsoft.com/download/details.aspx?id=54254).
-1. Select **File** and then choose **New Project**.
-1. Provide a project name, a location to save your project, and then select Azure SQL Database as the migration target from the drop-down. Select **OK**:
+1. Open [SSMA for Db2](https://www.microsoft.com/download/details.aspx?id=54254).
+1. Select **File** > **New Project**.
+1. Provide a project name and a location to save your project. Then select Azure SQL Database as the migration target from the drop-down list, and select **OK**.
- :::image type="content" source="media/db2-to-sql-database-guide/new-project.png" alt-text="Provide project details and select OK to save.":::
+ :::image type="content" source="media/db2-to-sql-database-guide/new-project.png" alt-text="Screenshot that shows project details to specify.":::
-1. Enter in values for the Db2 connection details on the **Connect to Db2** dialog box.
+1. On **Connect to Db2**, enter values for the Db2 connection details.
- :::image type="content" source="media/db2-to-sql-database-guide/connect-to-db2.png" alt-text="Connect to your Db2 instance":::
+ :::image type="content" source="media/db2-to-sql-database-guide/connect-to-db2.png" alt-text="Screenshot that shows options to connect to your Db2 instance.":::
-1. Right-click the Db2 schema you want to migrate, and then choose **Create report**. This will generate an HTML report. Alternatively, you can choose **Create report** from the navigation bar after selecting the schema:
+1. Right-click the Db2 schema you want to migrate, and then choose **Create report**. This will generate an HTML report. Alternatively, you can choose **Create report** from the navigation bar after selecting the schema.
- :::image type="content" source="media/db2-to-sql-database-guide/create-report.png" alt-text="Right-click the schema and choose create report":::
+ :::image type="content" source="media/db2-to-sql-database-guide/create-report.png" alt-text="Screenshot that shows how to create a report.":::
-1. Review the HTML report to understand conversion statistics and any errors or warnings. You can also open the report in Excel to get an inventory of Db2 objects and the effort required to perform schema conversions. The default location for the report is in the report folder within SSMAProjects.
+1. Review the HTML report to understand conversion statistics and any errors or warnings. You can also open the report in Excel to get an inventory of Db2 objects and the effort required to perform schema conversions. The default location for the report is in the report folder within *SSMAProjects*.
For example: `drive:\<username>\Documents\SSMAProjects\MyDb2Migration\report\report_<date>`.
- :::image type="content" source="media/db2-to-sql-database-guide/report.png" alt-text="Review the report to identify any errors or warnings":::
+ :::image type="content" source="media/db2-to-sql-database-guide/report.png" alt-text="Screenshot of the report that you review to identify any errors or warnings.":::
### Validate data types
-Validate the default data type mappings and change them based on requirements if necessary. To do so, follow these steps:
+Validate the default data type mappings, and change them based on requirements if necessary. To do so, follow these steps:
1. Select **Tools** from the menu. 1. Select **Project Settings**.
-1. Select the **Type mappings** tab:
+1. Select the **Type mappings** tab.
- :::image type="content" source="media/db2-to-sql-database-guide/type-mapping.png" alt-text="Select the schema and then type-mapping":::
+ :::image type="content" source="media/db2-to-sql-database-guide/type-mapping.png" alt-text="Screenshot that shows selecting the schema and type mapping.":::
-1. You can change the type mapping for each table by selecting the table in the **Db2 Metadata explorer**.
+1. You can change the type mapping for each table by selecting the table in the **Db2 Metadata Explorer**.
### Convert schema
To convert the schema, follow these steps:
1. (Optional) Add dynamic or ad-hoc queries to statements. Right-click the node, and then choose **Add statements**. 1. Select **Connect to Azure SQL Database**. 1. Enter connection details to connect your database in Azure SQL Database.
- 1. Choose your target SQL Database from the drop-down, or provide a new name, in which case a database will be created on the target server.
+ 1. Choose your target SQL Database from the drop-down list, or provide a new name, in which case a database will be created on the target server.
1. Provide authentication details.
- 1. Select **Connect**:
+ 1. Select **Connect**.
- :::image type="content" source="media/db2-to-sql-database-guide/connect-to-sql-database.png" alt-text="Fill in details to connect to the logical server in Azure":::
+ :::image type="content" source="media/db2-to-sql-database-guide/connect-to-sql-database.png" alt-text="Screenshot that shows the details needed to connect to the logical server in Azure.":::
-1. Right-click the schema and then choose **Convert Schema**. Alternatively, you can choose **Convert Schema** from the top navigation bar after selecting your schema:
+1. Right-click the schema, and then choose **Convert Schema**. Alternatively, you can choose **Convert Schema** from the top navigation bar after selecting your schema.
- :::image type="content" source="media/db2-to-sql-database-guide/convert-schema.png" alt-text="Right-click the schema and choose convert schema":::
+ :::image type="content" source="media/db2-to-sql-database-guide/convert-schema.png" alt-text="Screenshot that shows selecting the schema and converting it.":::
-1. After the conversion completes, compare and review the structure of the schema to identify potential problems and address them based on the recommendations:
+1. After the conversion completes, compare and review the structure of the schema to identify potential problems. Address the problems based on the recommendations.
- :::image type="content" source="media/db2-to-sql-database-guide/compare-review-schema-structure.png" alt-text="Compare and review the structure of the schema to identify potential problems and address them based on recommendations.":::
-
-1. Select **Review results** in the Output pane, and review errors in the **Error list** pane.
-1. Save the project locally for an offline schema remediation exercise. Select **Save Project** from the **File** menu. This gives you an opportunity to evaluate the source and target schemas offline and perform remediation before you can publish the schema to SQL Database.
+ :::image type="content" source="media/db2-to-sql-database-guide/compare-review-schema-structure.png" alt-text="Screenshot that shows comparing and reviewing the structure of the schema to identify potential problems.":::
+1. In the **Output** pane, select **Review results**. In the **Error list** pane, review errors.
+1. Save the project locally for an offline schema remediation exercise. From the **File** menu, select **Save Project**. This gives you an opportunity to evaluate the source and target schemas offline, and perform remediation before you can publish the schema to SQL Database.
## Migrate
After you have completed assessing your databases and addressing any discrepanci
To publish your schema and migrate your data, follow these steps:
-1. Publish the schema: Right-click the database from the **Databases** node in the **Azure SQL Database Metadata Explorer** and choose **Synchronize with Database**.
+1. Publish the schema. In **Azure SQL Database Metadata Explorer**, from the **Databases** node, right-click the database. Then select **Synchronize with Database**.
- :::image type="content" source="media/db2-to-sql-database-guide/synchronize-with-database.png" alt-text="Right-click the database and choose synchronize with database":::
+ :::image type="content" source="media/db2-to-sql-database-guide/synchronize-with-database.png" alt-text="Screenshot that shows the option to synchronize with database.":::
-1. Migrate the data: Right-click the database or object you want to migrate in **Db2 Metadata Explorer**, and choose **Migrate data**. Alternatively, you can select **Migrate Data** from the top-line navigation bar. To migrate data for an entire database, select the check box next to the database name. To migrate data from individual tables, expand the database, expand Tables, and then select the check box next to the table. To omit data from individual tables, clear the check box:
+1. Migrate the data. Right-click the database or object you want to migrate in **Db2 Metadata Explorer**, and choose **Migrate data**. Alternatively, you can select **Migrate Data** from the navigation bar. To migrate data for an entire database, select the check box next to the database name. To migrate data from individual tables, expand the database, expand **Tables**, and then select the check box next to the table. To omit data from individual tables, clear the check box.
- :::image type="content" source="media/db2-to-sql-database-guide/migrate-data.png" alt-text="Right-click the schema and choose migrate data":::
+ :::image type="content" source="media/db2-to-sql-database-guide/migrate-data.png" alt-text="Screenshot that shows selecting the schema and choosing to migrate data.":::
-1. Provide connection details for both the Db2 and Azure SQL Database.
-1. After migration completes, view the **Data Migration Report**:
+1. Provide connection details for both Db2 and Azure SQL Database.
+1. After migration completes, view the **Data Migration Report**.
- :::image type="content" source="media/db2-to-sql-database-guide/data-migration-report.png" alt-text="Review the data migration report":::
+ :::image type="content" source="media/db2-to-sql-database-guide/data-migration-report.png" alt-text="Screenshot that shows where to review the data migration report.":::
-1. Connect to your database in Azure SQL Database by using [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms) and validate the migration by reviewing the data and schema:
+1. Connect to your database in Azure SQL Database by using [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms). Validate the migration by reviewing the data and schema.
- :::image type="content" source="media/db2-to-sql-database-guide/compare-schema-in-ssms.png" alt-text="Compare the schema in SSMS":::
+ :::image type="content" source="media/db2-to-sql-database-guide/compare-schema-in-ssms.png" alt-text="Screenshot that shows comparing the schema in SQL Server Management Studio.":::
## Post-migration
-After you have successfully completed the Migration stage, you need to go through a series of post-migration tasks to ensure that everything is functioning as smoothly and efficiently as possible.
+After the migration is complete, you need to go through a series of post-migration tasks to ensure that everything is functioning as smoothly and efficiently as possible.
### Remediate applications
After the data is migrated to the target environment, all the applications that formerly consumed the source need to start consuming the target. Accomplishing this will in some cases require changes to the applications.
### Perform tests
-The test approach for database migration consists of the following activities:
+Testing consists of the following activities:
1. **Develop validation tests**: To test database migration, you need to use SQL queries. You must create the validation queries to run against both the source and the target databases. Your validation queries should cover the scope you have defined. A sample row-count query follows this list.
-1. **Set up test environment**: The test environment should contain a copy of the source database and the target database. Be sure to isolate the test environment.
+1. **Set up the test environment**: The test environment should contain a copy of the source database and the target database. Be sure to isolate the test environment.
1. **Run validation tests**: Run the validation tests against the source and the target, and then analyze the results.
-1. **Run performance tests**: Run performance test against the source and the target, and then analyze and compare the results.
+1. **Run performance tests**: Run performance tests against the source and the target, and then analyze and compare the results.
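As a minimal illustration of a validation test, the following T-SQL collects a per-table row count on the target database; the same counts can be gathered on the Db2 source and compared. This is only a sketch, and which tables you cover is an assumption you would adapt to your own scope.

```sql
-- Sketch of a target-side validation query: row counts for every user table.
-- Compare the output with equivalent counts collected from the Db2 source.
SELECT s.name AS schema_name,
       t.name AS table_name,
       SUM(p.rows) AS row_count
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
JOIN sys.partitions AS p ON p.object_id = t.object_id
                        AND p.index_id IN (0, 1)   -- heap or clustered index only
GROUP BY s.name, t.name
ORDER BY s.name, t.name;
```

Row counts are a coarse check; add column-level comparisons for the tables that matter most to your workload.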
-
-## Leverage advanced features
+## Advanced features
Be sure to take advantage of the advanced cloud-based features offered by SQL Database, such as [built-in high availability](../../database/high-availability-sla.md), [threat detection](../../database/azure-defender-for-sql.md), and [monitoring and tuning your workload](../../database/monitor-tune-overview.md).
-Some SQL Server features are only available once the [database compatibility level](/sql/relational-databases/databases/view-or-change-the-compatibility-level-of-a-database) is changed to the latest compatibility level (150).
+Some SQL Server features are only available when the [database compatibility level](/sql/relational-databases/databases/view-or-change-the-compatibility-level-of-a-database) is changed to the latest compatibility level.
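For example, you can check and raise the compatibility level with T-SQL similar to the following sketch. The database name `MyMigratedDb` and the level `150` are placeholders; confirm the level that is current for your target before changing it.

```sql
-- Check the current compatibility level of the migrated database.
SELECT name, compatibility_level
FROM sys.databases
WHERE name = N'MyMigratedDb';

-- Raise it (150 corresponds to the SQL Server 2019 behavior set).
ALTER DATABASE [MyMigratedDb] SET COMPATIBILITY_LEVEL = 150;
```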
## Migration assets
For additional assistance, see the following resources, which were developed in support of a real-world migration project engagement.
|||
|[Data workload assessment model and tool](https://github.com/Microsoft/DataMigrationTeam/tree/master/Data%20Workload%20Assessment%20Model%20and%20Tool)| This tool provides suggested "best fit" target platforms, cloud readiness, and application/database remediation level for a given workload. It offers simple, one-click calculation and report generation that helps to accelerate large estate assessments by providing an automated and uniform target platform decision process.|
|[Db2 zOS data assets discovery and assessment package](https://github.com/microsoft/DataMigrationTeam/tree/master/DB2%20zOS%20Data%20Assets%20Discovery%20and%20Assessment%20Package)|After running the SQL script on a database, you can export the results to a file on the file system. Several file formats are supported, including *.csv, so that you can capture the results in external tools such as spreadsheets. This method can be useful if you want to easily share results with teams that do not have the workbench installed.|
-|[IBM Db2 LUW inventory scripts and artifacts](https://github.com/Microsoft/DataMigrationTeam/tree/master/IBM%20Db2%20LUW%20Inventory%20Scripts%20and%20Artifacts)|This asset includes a SQL query that hits IBM Db2 LUW version 11.1 system tables and provides a count of objects by schema and object type, a rough estimate of 'Raw Data' in each schema, and the sizing of tables in each schema, with results stored in a CSV format.|
-|[Db2 LUW pure scale on Azure - setup guide](https://github.com/Microsoft/DataMigrationTeam/blob/master/Whitepapers/Db2%20PureScale%20on%20Azure.pdf)|This guide serves as a starting point for a Db2 implementation plan. While business requirements will differ, the same basic pattern applies. This architectural pattern may also be used for OLAP applications on Azure.|
+|[IBM Db2 LUW inventory scripts and artifacts](https://github.com/Microsoft/DataMigrationTeam/tree/master/IBM%20Db2%20LUW%20Inventory%20Scripts%20and%20Artifacts)|This asset includes a SQL query that hits IBM Db2 LUW version 11.1 system tables and provides a count of objects by schema and object type, a rough estimate of "raw data" in each schema, and the sizing of tables in each schema, with results stored in a CSV format.|
+|[Db2 LUW pure scale on Azure - setup guide](https://github.com/Microsoft/DataMigrationTeam/blob/master/Whitepapers/Db2%20PureScale%20on%20Azure.pdf)|This guide serves as a starting point for a Db2 implementation plan. Although business requirements will differ, the same basic pattern applies. This architectural pattern can also be used for OLAP applications on Azure.|
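The published inventory scripts above are the authoritative versions. Purely as an illustration of the kind of catalog query they run, a Db2 LUW query over `SYSCAT.TABLES` might look like the following; the schema filter and selected columns are assumptions, not the asset's actual script.

```sql
-- Illustrative Db2 LUW catalog query: object counts and approximate row counts
-- per schema. CARD is -1 for tables without current statistics.
SELECT TABSCHEMA,
       TYPE,
       COUNT(*)  AS object_count,
       SUM(CARD) AS approx_rows
FROM SYSCAT.TABLES
WHERE TABSCHEMA NOT LIKE 'SYS%'
GROUP BY TABSCHEMA, TYPE
ORDER BY TABSCHEMA, TYPE;
```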
The Data SQL Engineering team developed these resources. This team's core charter is to unblock and accelerate complex modernization for data platform migration projects to Microsoft's Azure data platform.
## Next steps
-- For a matrix of the Microsoft and third-party services and tools that are available to assist you with various database and data migration scenarios as well as specialty tasks, see [Service and tools for data migration](../../../dms/dms-tools-matrix.md).
+- For Microsoft and third-party services and tools to assist you with various database and data migration scenarios, see [Service and tools for data migration](../../../dms/dms-tools-matrix.md).
-- To learn more about Azure SQL Database see:
+- To learn more about Azure SQL Database, see:
- [An overview of SQL Database](../../database/sql-database-paas-overview.md)
- - [Azure total Cost of Ownership Calculator](https://azure.microsoft.com/pricing/tco/calculator/)
-
+ - [Azure total cost of ownership calculator](https://azure.microsoft.com/pricing/tco/calculator/)
-- To learn more about the framework and adoption cycle for Cloud migrations, see
+- To learn more about the framework and adoption cycle for cloud migrations, see:
- [Cloud Adoption Framework for Azure](/azure/cloud-adoption-framework/migrate/azure-best-practices/contoso-migration-scale)
- - [Best practices for costing and sizing workloads migrate to Azure](/azure/cloud-adoption-framework/migrate/azure-best-practices/migrate-best-practices-costs)
+ - [Best practices for costing and sizing workloads migrated to Azure](/azure/cloud-adoption-framework/migrate/azure-best-practices/migrate-best-practices-costs)
-- To assess the Application access layer, see [Data Access Migration Toolkit (Preview)](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit)
-- For details on how to perform Data Access Layer A/B testing see [Database Experimentation Assistant](/sql/dea/database-experimentation-assistant-overview).
+- To assess the application access layer, see [Data Access Migration Toolkit](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit).
+- For details on how to perform data access layer A/B testing, see [Database Experimentation Assistant](/sql/dea/database-experimentation-assistant-overview).
azure-sql Oracle To Sql Database Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/database/oracle-to-sql-database-guide.md
Title: "Oracle to Azure SQL Database: Migration guide"
-description: This guide teaches you to migrate your Oracle schema to Azure SQL Database using SQL Server Migration Assistant for Oracle (SSMA for Oracle).
+description: In this guide, you learn how to migrate your Oracle schema to Azure SQL Database by using SQL Server Migration Assistant for Oracle.
Last updated 08/25/2020
# Migration guide: Oracle to Azure SQL Database
+ [!INCLUDE[appliesto-sqldb-sqlmi](../../includes/appliesto-sqldb.md)]
-This guide teaches you to migrate your Oracle schemas to Azure SQL Database using SQL Server Migration Assistant for Oracle.
+In this guide, you learn how to migrate your Oracle schemas to Azure SQL Database by using SQL Server Migration Assistant for Oracle (SSMA for Oracle).
-For other migration guides, see [Database Migration](https://docs.microsoft.com/data-migration).
+For other migration guides, see [Azure Database Migration Guides](https://docs.microsoft.com/data-migration).
## Prerequisites
-To migrate your Oracle schema to SQL Database you need:
+Before you begin migrating your Oracle schema to SQL Database:
-- To verify your source environment is supported.
-- To download [SQL Server Migration Assistant (SSMA) for Oracle](https://www.microsoft.com/en-us/download/details.aspx?id=54258).
-- A target [Azure SQL Database](../../database/single-database-create-quickstart.md).
-- The [necessary permissions for SSMA for Oracle](/sql/ssma/oracle/connecting-to-oracle-database-oracletosql) and [provider](/sql/ssma/oracle/connect-to-oracle-oracletosql).
+- Verify that your source environment is supported.
+- Download [SSMA for Oracle](https://www.microsoft.com/en-us/download/details.aspx?id=54258).
+- Have a target [SQL Database](../../database/single-database-create-quickstart.md) instance.
+- Obtain the [necessary permissions for SSMA for Oracle](/sql/ssma/oracle/connecting-to-oracle-database-oracletosql) and [provider](/sql/ssma/oracle/connect-to-oracle-oracletosql).
- ## Pre-migration
-After you have met the prerequisites, you are ready to discover the topology of your environment and assess the feasibility of your migration. This part of the process involves conducting an inventory of the databases that you need to migrate, assessing those databases for potential migration issues or blockers, and then resolving any items you might have uncovered.
---
-### Assess
+After you've met the prerequisites, you're ready to discover the topology of your environment and assess the feasibility of your migration. This part of the process involves conducting an inventory of the databases that you need to migrate, assessing those databases for potential migration issues or blockers, and then resolving any items you might have uncovered.
+### Assess
-Use the SQL Server Migration Assistant (SSMA) for Oracle to review database objects and data, assess databases for migration, migrate database objects to Azure SQL Database, and then finally migrate data to the database.
+By using SSMA for Oracle, you can review database objects and data, assess databases for migration, migrate database objects to SQL Database, and then finally migrate data to the database.
-To create an assessment, follow these steps:
+To create an assessment:
-1. Open [SQL Server Migration Assistant for Oracle](https://www.microsoft.com/en-us/download/details.aspx?id=54258).
-1. Select **File** and then choose **New Project**.
-1. Provide a project name, a location to save your project, and then select Azure SQL Database as the migration target from the drop-down. Select **OK**:
+1. Open [SSMA for Oracle](https://www.microsoft.com/en-us/download/details.aspx?id=54258).
+1. Select **File**, and then select **New Project**.
+1. Enter a project name and a location to save your project. Then select **Azure SQL Database** as the migration target from the drop-down list and select **OK**.
- ![New Project](./media/oracle-to-sql-database-guide/new-project.png)
+ ![Screenshot that shows Connect to Oracle.](./media/oracle-to-sql-database-guide/connect-to-oracle.png)
-1. Select **Connect to Oracle**. Enter in values for Oracle connection details on the **Connect to Oracle** dialog box:
+1. Select **Connect to Oracle**. Enter values for Oracle connection details in the **Connect to Oracle** dialog box.
- ![Connect to Oracle](./media/oracle-to-sql-database-guide/connect-to-oracle.png)
+1. Select the Oracle schemas you want to migrate.
- Select the Oracle schema(s) you want to migrate:
+ ![Screenshot that shows selecting Oracle schema.](./media/oracle-to-sql-database-guide/select-schema.png)
- ![Select Oracle schema](./media/oracle-to-sql-database-guide/select-schema.png)
+1. In **Oracle Metadata Explorer**, right-click the Oracle schema you want to migrate and then select **Create Report** to generate an HTML report. Alternatively, you can select a database and then select the **Create Report** tab.
-1. Right-click the Oracle schema you want to migrate in the **Oracle Metadata Explorer**, and then choose **Create report**. This will generate an HTML report. Alternatively, you can choose **Create report** from the navigation bar after selecting the database:
-
- ![Create Report](./media/oracle-to-sql-database-guide/create-report.png)
+ ![Screenshot that shows Create Report.](./media/oracle-to-sql-database-guide/create-report.png)
1. Review the HTML report to understand conversion statistics and any errors or warnings. You can also open the report in Excel to get an inventory of Oracle objects and the effort required to perform schema conversions. The default location for the report is in the report folder within SSMAProjects.
- For example: `drive:\<username>\Documents\SSMAProjects\MyOracleMigration\report\report_2020_11_12T02_47_55\`
-
- ![Assessment Report](./media/oracle-to-sql-database-guide/assessment-report.png)
-
+ For example, see `drive:\<username>\Documents\SSMAProjects\MyOracleMigration\report\report_2020_11_12T02_47_55\`.
+ ![Screenshot that shows an Assessment report.](./media/oracle-to-sql-database-guide/assessment-report.png)
-### Validate data types
+### Validate the data types
Validate the default data type mappings and change them based on requirements if necessary. To do so, follow these steps:
-1. Select **Tools** from the menu.
-1. Select **Project Settings**.
-1. Select the **Type mappings** tab:
+1. In SSMA for Oracle, select **Tools**, and then select **Project Settings**.
+1. Select the **Type Mapping** tab.
- ![Type Mappings](./media/oracle-to-sql-database-guide/type-mappings.png)
+ ![Screenshot that shows Type Mapping.](./media/oracle-to-sql-database-guide/type-mappings.png)
-1. You can change the type mapping for each table by selecting the table in the **Oracle Metadata Explorer**.
+1. You can change the type mapping for each table by selecting the table in **Oracle Metadata Explorer**.
-### Convert schema
+### Convert the schema
-To convert the schema, follow these steps:
+To convert the schema:
-1. (Optional) Add dynamic or ad-hoc queries to statements. Right-click the node, and then choose **Add statements**.
-1. Select **Connect to Azure SQL Database**.
- 1. Enter connection details to connect your database in Azure SQL Database.
- 1. Choose your target SQL Database from the drop-down, or provide a new name, in which case a database will be created on the target server.
- 1. Provide authentication details.
- 1. Select **Connect**:
+1. (Optional) Add dynamic or ad-hoc queries to statements. Right-click the node, and then select **Add statements**.
+1. Select the **Connect to Azure SQL Database** tab.
+ 1. In **SQL Database**, enter connection details to connect your database.
+ 1. Select your target SQL Database instance from the drop-down list, or enter a new name, in which case a database will be created on the target server.
+ 1. Enter authentication details, and select **Connect**.
- ![Connect to SQL Database](./media/oracle-to-sql-database-guide/connect-to-sql-database.png)
+ ![Screenshot that shows Connect to Azure SQL Database.](./media/oracle-to-sql-database-guide/connect-to-sql-database.png)
+1. In **Oracle Metadata Explorer**, right-click the Oracle schema and then select **Convert Schema**. Alternatively, you can select your schema and then select the **Convert Schema** tab.
-1. Right-click the Oracle schema in the **Oracle Metadata Explorer** and then choose **Convert Schema**. Alternatively, you can choose **Convert Schema** from the top navigation bar after selecting your schema:
+ ![Screenshot that shows Convert Schema.](./media/oracle-to-sql-database-guide/convert-schema.png)
- ![Convert Schema](./media/oracle-to-sql-database-guide/convert-schema.png)
+1. After the conversion finishes, compare and review the converted objects to the original objects to identify potential problems and address them based on the recommendations.
-1. After the conversion completes, compare and review the converted objects to the original objects to identify potential problems and address them based on the recommendations:
+ ![Screenshot that shows the Review recommendations schema.](./media/oracle-to-sql-database-guide/table-mapping.png)
- ![Review recommendations schema](./media/oracle-to-sql-database-guide/table-mapping.png)
+1. Compare the converted Transact-SQL text to the original stored procedures, and review the recommendations.
- Compare the converted Transact-SQL text to the original stored procedures and review the recommendations:
+ ![Screenshot that shows the Review recommendations.](./media/oracle-to-sql-database-guide/procedure-comparison.png)
- ![Review recommendations](./media/oracle-to-sql-database-guide/procedure-comparison.png)
-
-1. Select **Review results** in the Output pane, and review errors in the **Error list** pane.
-1. Save the project locally for an offline schema remediation exercise. Select **Save Project** from the **File** menu. This gives you an opportunity to evaluate the source and target schemas offline and perform remediation before you can publish the schema to SQL Database.
+1. In the output pane, select **Review results** and review the errors in the **Error List** pane.
+1. Save the project locally for an offline schema remediation exercise. On the **File** menu, select **Save Project**. This step gives you an opportunity to evaluate the source and target schemas offline and perform remediation before you publish the schema to SQL Database.
## Migrate
-After you have completed assessing your databases and addressing any discrepancies, the next step is to execute the migration process. Migration involves two steps – publishing the schema and migrating the data.
-
-To publish your schema and migrate your data, follow these steps:
+After you've assessed your databases and addressed any discrepancies, the next step is to run the migration process. Migration involves two steps: publishing the schema and migrating the data.
-1. Publish the schema: Right-click the database from the **Databases** node in the **Azure SQL Database Metadata Explorer** and choose **Synchronize with Database**:
+To publish your schema and migrate your data:
- ![Synchronize with Database](./media/oracle-to-sql-database-guide/synchronize-with-database.png)
+1. Publish the schema by right-clicking the database from the **Databases** node in **Azure SQL Database Metadata Explorer** and selecting **Synchronize with Database**.
- Review the mapping between your source project and your target:
+ ![Screenshot that shows Synchronize with Database.](./media/oracle-to-sql-database-guide/synchronize-with-database.png)
- ![Synchronize with Database review](./media/oracle-to-sql-database-guide/synchronize-with-database-review.png)
+1. Review the mapping between your source project and your target.
+ ![Screenshot that shows Synchronize with the Database review.](./media/oracle-to-sql-database-guide/synchronize-with-database-review.png)
-1. Migrate the data: Right-click the database or object you want to migrate in **Oracle Metadata Explorer**, and choose **Migrate data**. Alternatively, you can select **Migrate Data** from the top-line navigation bar. To migrate data for an entire database, select the check box next to the database name. To migrate data from individual tables, expand the database, expand Tables, and then select the check box next to the table. To omit data from individual tables, clear the check box:
+1. Migrate the data by right-clicking the database or object you want to migrate in **Oracle Metadata Explorer** and selecting **Migrate Data**. Alternatively, you can select the **Migrate Data** tab. To migrate data for an entire database, select the check box next to the database name. To migrate data from individual tables, expand the database, expand **Tables**, and then select the check boxes next to the tables. To omit data from individual tables, clear the check boxes.
- ![Migrate Data](./media/oracle-to-sql-database-guide/migrate-data.png)
+ ![Screenshot that shows Migrate Data.](./media/oracle-to-sql-database-guide/migrate-data.png)
-1. Provide connection details for both Oracle and Azure SQL Database.
-1. After migration completes, view the **Data Migration Report**:
+1. Enter connection details for both Oracle and SQL Database.
+1. After the migration is completed, view the **Data Migration Report**.
- ![Data Migration Report](./media/oracle-to-sql-database-guide/data-migration-report.png)
+ ![Screenshot that shows the Data Migration Report.](./media/oracle-to-sql-database-guide/data-migration-report.png)
-1. Connect to your Azure SQL Database by using [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms) and validate the migration by reviewing the data and schema:
+1. Connect to your SQL Database instance by using [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms), and validate the migration by reviewing the data and schema.
- ![Validate in SSMA](./media/oracle-to-sql-database-guide/validate-data.png)
+ ![Screenshot that shows validation in SQL Server Management Studio.](./media/oracle-to-sql-database-guide/validate-data.png)
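One quick check you might run in SQL Server Management Studio after the migration is an object inventory of the target database, to compare against the inventory taken from the source. This is a sketch, not an exhaustive validation.

```sql
-- Count user objects by type in the migrated database and compare the totals
-- with the object inventory collected from the Oracle source.
SELECT type_desc, COUNT(*) AS object_count
FROM sys.objects
WHERE is_ms_shipped = 0
GROUP BY type_desc
ORDER BY type_desc;
```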
-Alternatively, you can also use SQL Server Integration Services (SSIS) to perform the migration. To learn more, see:
+Alternatively, you can also use SQL Server Integration Services to perform the migration. To learn more, see:
-- [Getting Started with SQL Server Integration Services](/sql/integration-services/sql-server-integration-services)
-- [SQL Server Integration Services for Azure and Hybrid Data Movement](https://download.microsoft.com/download/D/2/0/D20E1C5F-72EA-4505-9F26-FEF9550EFD44/SSIS%20Hybrid%20and%20Azure.docx)
+- [Getting started with SQL Server Integration Services](/sql/integration-services/sql-server-integration-services)
+- [SQL Server Integration Services for Azure and Hybrid Data Movement](https://download.microsoft.com/download/D/2/0/D20E1C5F-72EA-4505-9F26-FEF9550EFD44/SSIS%20Hybrid%20and%20Azure.docx)
+## Post-migration
-## Post-migration
-
-After you have successfully completed the **Migration** stage, you need to go through a series of post-migration tasks to ensure that everything is functioning as smoothly and efficiently as possible.
+After you've successfully completed the *migration* stage, you need to complete a series of post-migration tasks to ensure that everything is functioning as smoothly and efficiently as possible.
### Remediate applications
-After the data is migrated to the target environment, all the applications that formerly consumed the source need to start consuming the target. Accomplishing this will in some cases require changes to the applications.
-
-The [Data Access Migration Toolkit](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) is an extension for Visual Studio Code that allows you to analyze your Java source code and detect data access API calls and queries, providing you with a single-pane view of what needs to be addressed to support the new database back end. To learn more, see the [Migrate our Java application from Oracle](https://techcommunity.microsoft.com/t5/microsoft-data-migration/migrate-your-java-applications-from-oracle-to-sql-server-with/ba-p/368727) blog.
-
+After the data is migrated to the target environment, all the applications that formerly consumed the source need to start consuming the target. Accomplishing this task will require changes to the applications in some cases.
+The [Data Access Migration Toolkit](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) is an extension for Visual Studio Code that allows you to analyze your Java source code and detect data access API calls and queries. The toolkit provides you with a single-pane view of what needs to be addressed to support the new database back end. To learn more, see the [Migrate your Java applications from Oracle](https://techcommunity.microsoft.com/t5/microsoft-data-migration/migrate-your-java-applications-from-oracle-to-sql-server-with/ba-p/368727) blog post.
### Perform tests
-The test approach for database migration consists of performing the following activities:
-
-1. **Develop validation tests**. To test database migration, you need to use SQL queries. You must create the validation queries to run against both the source and the target databases. Your validation queries should cover the scope you have defined.
-
-2. **Set up test environment**. The test environment should contain a copy of the source database and the target database. Be sure to isolate the test environment.
-
-3. **Run validation tests**. Run the validation tests against the source and the target, and then analyze the results.
-
-4. **Run performance tests**. Run performance test against the source and the target, and then analyze and compare the results.
+The test approach to database migration consists of the following activities:
+1. **Develop validation tests**: To test the database migration, you need to use SQL queries. You must create the validation queries to run against both the source and the target databases. Your validation queries should cover the scope you've defined.
+1. **Set up a test environment**: The test environment should contain a copy of the source database and the target database. Be sure to isolate the test environment.
+1. **Run validation tests**: Run validation tests against the source and the target, and then analyze the results. A sample checksum comparison follows this list.
+1. **Run performance tests**: Run performance tests against the source and the target, and then analyze and compare the results.
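As a sketch of a target-side validation test, the following T-SQL computes a row count and an aggregate checksum over a few key columns; an equivalent aggregation on the Oracle source gives you values to compare. The table and column names (`dbo.Orders`, `OrderID`, `CustomerID`, `OrderDate`) are placeholders, and checksums can collide, so treat a match as a strong signal rather than proof of equality.

```sql
-- Sketch of a checksum-based comparison on the target (names are placeholders).
SELECT COUNT(*) AS row_count,
       CHECKSUM_AGG(BINARY_CHECKSUM(OrderID, CustomerID, OrderDate)) AS table_checksum
FROM dbo.Orders;
```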
### Optimize
-The post-migration phase is crucial for reconciling any data accuracy issues and verifying completeness, as well as addressing performance issues with the workload.
+The post-migration phase is crucial for reconciling any data accuracy issues, verifying completeness, and addressing performance issues with the workload.
> [!NOTE]
-> For additional detail about these issues and specific steps to mitigate them, see the [Post-migration Validation and Optimization Guide](/sql/relational-databases/post-migration-validation-and-optimization-guide).
+> For more information about these issues and the steps to mitigate them, see the [Post-migration validation and optimization guide](/sql/relational-databases/post-migration-validation-and-optimization-guide).
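A minimal sketch of the kind of housekeeping that often follows a migration is shown below: refreshing optimizer statistics and making sure Query Store is capturing the workload. The database name is a placeholder, and Query Store is typically already enabled on new Azure SQL Database databases, so check the current setting before changing it.

```sql
-- Refresh statistics so the optimizer has current information about the
-- newly loaded data.
EXEC sp_updatestats;

-- Ensure Query Store is capturing workload history for later tuning.
ALTER DATABASE [MyMigratedDb]
SET QUERY_STORE = ON (OPERATION_MODE = READ_WRITE);
```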
+## Migration assets
-## Migration assets
-
-For additional assistance with completing this migration scenario, please see the following resources, which were developed in support of a real-world migration project engagement.
+For more assistance with completing this migration scenario, see the following resources. They were developed in support of a real-world migration project engagement.
| **Title/link** | **Description** |
| - | -- |
-| [Data Workload Assessment Model and Tool](https://github.com/Microsoft/DataMigrationTeam/tree/master/Data%20Workload%20Assessment%20Model%20and%20Tool) | This tool provides suggested "best fit" target platforms, cloud readiness, and application/database remediation level for a given workload. It offers simple, one-click calculation and report generation that greatly helps to accelerate large estate assessments by providing and automated and uniform target platform decision process. |
-| [Oracle Inventory Script Artifacts](https://github.com/Microsoft/DataMigrationTeam/tree/master/Oracle%20Inventory%20Script%20Artifacts) | This asset includes a PL/SQL query that hits Oracle system tables and provides a count of objects by schema type, object type, and status. It also provides a rough estimate of 'Raw Data' in each schema and the sizing of tables in each schema, with results stored in a CSV format. |
-| [Automate SSMA Oracle Assessment Collection & Consolidation](https://github.com/microsoft/DataMigrationTeam/tree/master/IP%20and%20Scripts/Automate%20SSMA%20Oracle%20Assessment%20Collection%20%26%20Consolidation) | This set of resource uses a .csv file as entry (sources.csv in the project folders) to produce the xml files that are needed to run SSMA assessment in console mode. The source.csv is provided by the customer based on an inventory of existing Oracle instances. The output files are AssessmentReportGeneration_source_1.xml, ServersConnectionFile.xml, and VariableValueFile.xml.|
-| [SSMA for Oracle Common Errors and how to fix them](https://aka.ms/dmj-wp-ssma-oracle-errors) | With Oracle, you can assign a non-scalar condition in the WHERE clause. However, SQL Server doesn't support this type of condition. As a result, SQL Server Migration Assistant (SSMA) for Oracle doesn't convert queries with a non-scalar condition in the WHERE clause, instead generating an error O2SS0001. This white paper provides more details on the issue and ways to resolve it. |
-| [Oracle to SQL Server Migration Handbook](https://github.com/microsoft/DataMigrationTeam/blob/master/Whitepapers/Oracle%20to%20SQL%20Server%20Migration%20Handbook.pdf) | This document focuses on the tasks associated with migrating an Oracle schema to the latest version of SQL Server base. If the migration requires changes to features/functionality, then the possible impact of each change on the applications that use the database must be considered carefully. |
+| [Data Workload Assessment Model and Tool](https://github.com/Microsoft/DataMigrationTeam/tree/master/Data%20Workload%20Assessment%20Model%20and%20Tool) | This tool provides suggested "best fit" target platforms, cloud readiness, and application or database remediation level for a given workload. It offers simple, one-click calculation and report generation that helps to accelerate large estate assessments by providing an automated and uniform target platform decision process. |
+| [Oracle Inventory Script Artifacts](https://github.com/Microsoft/DataMigrationTeam/tree/master/Oracle%20Inventory%20Script%20Artifacts) | This asset includes a PL/SQL query that hits Oracle system tables and provides a count of objects by schema type, object type, and status. It also provides a rough estimate of raw data in each schema and the sizing of tables in each schema, with results stored in a CSV format. |
+| [Automate SSMA Oracle Assessment Collection & Consolidation](https://github.com/microsoft/DataMigrationTeam/tree/master/IP%20and%20Scripts/Automate%20SSMA%20Oracle%20Assessment%20Collection%20%26%20Consolidation) | This set of resources uses a .csv file as entry (sources.csv in the project folders) to produce the xml files that are needed to run an SSMA assessment in console mode. The source.csv is provided by the customer based on an inventory of existing Oracle instances. The output files are AssessmentReportGeneration_source_1.xml, ServersConnectionFile.xml, and VariableValueFile.xml.|
+| [SSMA for Oracle Common Errors and How to Fix Them](https://aka.ms/dmj-wp-ssma-oracle-errors) | With Oracle, you can assign a nonscalar condition in the WHERE clause. However, SQL Server doesn't support this type of condition. As a result, SSMA for Oracle doesn't convert queries with a nonscalar condition in the WHERE clause. Instead, it generates the error O2SS0001. This white paper provides more details on the issue and ways to resolve it. |
+| [Oracle to SQL Server Migration Handbook](https://github.com/microsoft/DataMigrationTeam/blob/master/Whitepapers/Oracle%20to%20SQL%20Server%20Migration%20Handbook.pdf) | This document focuses on the tasks associated with migrating an Oracle schema to the latest version of SQL Server Database. If the migration requires changes to features or functionality, the possible impact of each change on the applications that use the database must be considered carefully. |
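The listed assets contain the supported scripts. Purely as an illustration of the kind of dictionary query they rely on, an Oracle inventory query might resemble the following; it assumes access to `DBA_OBJECTS` and is not the asset's actual script.

```sql
-- Illustrative Oracle dictionary query: count objects by owner, type, and status
-- to help scope the migration. Requires privileges on DBA_OBJECTS.
SELECT owner, object_type, status, COUNT(*) AS object_count
FROM dba_objects
GROUP BY owner, object_type, status
ORDER BY owner, object_type, status;
```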
The Data SQL Engineering team developed these resources. This team's core charter is to unblock and accelerate complex modernization for data platform migration projects to Microsoft's Azure data platform.
## Next steps
-- For a matrix of the Microsoft and third-party services and tools that are available to assist you with various database and data migration scenarios as well as specialty tasks, see the article [Service and tools for data migration](../../../dms/dms-tools-matrix.md).
+- For a matrix of Microsoft and third-party services and tools that are available to assist you with various database and data migration scenarios and specialty tasks, see [Services and tools for data migration](../../../dms/dms-tools-matrix.md).
-- To learn more about Azure SQL Database, see:
+- To learn more about SQL Database, see:
- [An overview of Azure SQL Database](../../database/sql-database-paas-overview.md)
- [Azure Total Cost of Ownership (TCO) Calculator](https://azure.microsoft.com/en-us/pricing/tco/calculator/)
-- To learn more about the framework and adoption cycle for Cloud migrations, see
+- To learn more about the framework and adoption cycle for cloud migrations, see:
- [Cloud Adoption Framework for Azure](/azure/cloud-adoption-framework/migrate/azure-best-practices/contoso-migration-scale)
- - [Best practices for costing and sizing workloads migrate to Azure](/azure/cloud-adoption-framework/migrate/azure-best-practices/migrate-best-practices-costs)
+ - [Best practices for costing and sizing workloads for migration to Azure](/azure/cloud-adoption-framework/migrate/azure-best-practices/migrate-best-practices-costs)
-- For video content, see:
- - [Overview of the migration journey and the tools/services recommended for performing assessment and migration](https://azure.microsoft.com/resources/videos/overview-of-migration-and-recommended-tools-services/)
+- For video content, see:
+ - [Overview of the migration journey and the tools and services recommended for performing assessment and migration](https://azure.microsoft.com/resources/videos/overview-of-migration-and-recommended-tools-services/)
azure-sql Db2 To Managed Instance Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/managed-instance/db2-to-managed-instance-guide.md
Title: "Db2 to Azure SQL Managed Instance: Migration guide"
-description: This guide teaches you to migrate your Db2 databases to Azure SQL Managed Instance using SQL Server Migration Assistant for Db2.
+description: This guide teaches you to migrate your IBM Db2 databases to Azure SQL Managed Instance, by using SQL Server Migration Assistant for Db2.
Last updated 11/06/2020
-# Migration guide: Db2 to Azure SQL Managed Instance
+# Migration guide: IBM Db2 to Azure SQL Managed Instance
[!INCLUDE[appliesto-sqldb-sqlmi](../../includes/appliesto-sqlmi.md)]
-This guide teaches you to migrate your Db2 databases to Azure SQL Managed Instance using the SQL Server Migration Assistant for Db2.
+This guide teaches you to migrate your IBM Db2 databases to Azure SQL Managed Instance, by using the SQL Server Migration Assistant for Db2.
-For other migration guides, see [Database Migration](https://docs.microsoft.com/data-migration).
+For other migration guides, see [Azure Database Migration Guides](https://docs.microsoft.com/data-migration).
## Prerequisites
To migrate your Db2 database to SQL Managed Instance, you need:
-- to verify your [source environment is supported](/sql/ssma/db2/installing-ssma-for-db2-client-db2tosql#prerequisites).
-- to download [SQL Server Migration Assistant (SSMA) for Db2](https://www.microsoft.com/download/details.aspx?id=54254).
-- a target [Azure SQL Managed Instance](../../managed-instance/instance-create-quickstart.md).
+- To verify that your [source environment is supported](/sql/ssma/db2/installing-ssma-for-db2-client-db2tosql#prerequisites).
+- To download [SQL Server Migration Assistant (SSMA) for Db2](https://www.microsoft.com/download/details.aspx?id=54254).
+- A target instance of [Azure SQL Managed Instance](../../managed-instance/instance-create-quickstart.md).
- Connectivity and sufficient permissions to access both source and target.
## Pre-migration
-After you have met the prerequisites, you are ready to discover the topology of your environment and assess the feasibility of your migration.
+After you have met the prerequisites, you're ready to discover the topology of your environment and assess the feasibility of your migration.
### Assess and convert
-Create an assessment using SQL Server Migration Assistant (SSMA).
+Create an assessment by using SQL Server Migration Assistant.
To create an assessment, follow these steps:
-1. Open SQL Server Migration Assistant (SSMA) for Db2.
-1. Select **File** and then choose **New Project**.
-1. Provide a project name, a location to save your project, and then select Azure SQL Managed Instance as the migration target from the drop-down. Select **OK**:
+1. Open [SSMA for Db2](https://www.microsoft.com/download/details.aspx?id=54254).
+1. Select **File** > **New Project**.
+1. Provide a project name and a location to save your project. Then select Azure SQL Managed Instance as the migration target from the drop-down list, and select **OK**.
- :::image type="content" source="media/db2-to-managed-instance-guide/new-project.png" alt-text="Provide project details and select OK to save.":::
+ :::image type="content" source="media/db2-to-managed-instance-guide/new-project.png" alt-text="Screenshot that shows project details to specify.":::
-1. Enter in values for the Db2 connection details on the **Connect to Db2** dialog box:
+1. On **Connect to Db2**, enter values for the Db2 connection details.
- :::image type="content" source="media/db2-to-managed-instance-guide/connect-to-db2.png" alt-text="Connect to your Db2 instance":::
+ :::image type="content" source="media/db2-to-managed-instance-guide/connect-to-db2.png" alt-text="Screenshot that shows options to connect to your Db2 instance.":::
-1. Right-click the Db2 schema you want to migrate, and then choose **Create report**. This will generate an HTML report. Alternatively, you can choose **Create report** from the navigation bar after selecting the schema:
+1. Right-click the Db2 schema you want to migrate, and then choose **Create report**. This will generate an HTML report. Alternatively, you can choose **Create report** from the navigation bar after selecting the schema.
- :::image type="content" source="media/db2-to-managed-instance-guide/create-report.png" alt-text="Right-click the schema and choose create report":::
+ :::image type="content" source="media/db2-to-managed-instance-guide/create-report.png" alt-text="Screenshot that shows how to create a report.":::
-1. Review the HTML report to understand conversion statistics and any errors or warnings. You can also open the report in Excel to get an inventory of Db2 objects and the effort required to perform schema conversions. The default location for the report is in the report folder within SSMAProjects.
+1. Review the HTML report to understand conversion statistics and any errors or warnings. You can also open the report in Excel to get an inventory of Db2 objects and the effort required to perform schema conversions. The default location for the report is in the report folder within *SSMAProjects*.
For example: `drive:\<username>\Documents\SSMAProjects\MyDb2Migration\report\report_<date>`.
- :::image type="content" source="media/db2-to-managed-instance-guide/report.png" alt-text="Review the report to identify any errors or warnings":::
+ :::image type="content" source="media/db2-to-managed-instance-guide/report.png" alt-text="Screenshot of the report that you review to identify any errors or warnings":::
### Validate data types
-Validate the default data type mappings and change them based on requirements if necessary. To do so, follow these steps:
+Validate the default data type mappings, and change them based on requirements if necessary. To do so, follow these steps:
1. Select **Tools** from the menu.
1. Select **Project Settings**.
-1. Select the **Type mappings** tab:
+1. Select the **Type mappings** tab.
- :::image type="content" source="media/db2-to-managed-instance-guide/type-mapping.png" alt-text="Select the schema and then type-mapping":::
+ :::image type="content" source="media/db2-to-managed-instance-guide/type-mapping.png" alt-text="Screenshot that shows selecting the schema and type mapping.":::
-1. You can change the type mapping for each table by selecting the table in the **Db2 Metadata explorer**.
+1. You can change the type mapping for each table by selecting the table in the **Db2 Metadata Explorer**.
-### Schema conversion
+### Convert schema
To convert the schema, follow these steps:
1. (Optional) Add dynamic or ad-hoc queries to statements. Right-click the node, and then choose **Add statements**.
1. Select **Connect to Azure SQL Managed Instance**.
- 1. Enter connection details to connect to your Azure SQL Managed Instance.
- 1. Choose your target database from the drop-down, or provide a new name, in which case a database will be created on the target server.
+ 1. Enter connection details to connect to Azure SQL Managed Instance.
+ 1. Choose your target database from the drop-down list, or provide a new name, in which case a database will be created on the target server.
1. Provide authentication details.
- 1. Select **Connect**:
+ 1. Select **Connect**.
- :::image type="content" source="media/db2-to-managed-instance-guide/connect-to-sql-managed-instance.png" alt-text="Fill in details to connect to SQL Server":::
+ :::image type="content" source="media/db2-to-managed-instance-guide/connect-to-sql-managed-instance.png" alt-text="Screenshot that shows the details needed to connect to SQL Server.":::
-1. Right-click the schema and then choose **Convert Schema**. Alternatively, you can choose **Convert Schema** from the top navigation bar after selecting your schema:
+1. Right-click the schema, and then choose **Convert Schema**. Alternatively, you can choose **Convert Schema** from the top navigation bar after selecting your schema.
- :::image type="content" source="media/db2-to-managed-instance-guide/convert-schema.png" alt-text="Right-click the schema and choose convert schema":::
+ :::image type="content" source="media/db2-to-managed-instance-guide/convert-schema.png" alt-text="Screenshot that shows selecting the schema and converting it.":::
-1. After the conversion completes, compare and review the structure of the schema to identify potential problems and address them based on the recommendations:
+1. After the conversion completes, compare and review the structure of the schema to identify potential problems. Address the problems based on the recommendations.
- :::image type="content" source="media/db2-to-managed-instance-guide/compare-review-schema-structure.png" alt-text="Compare and review the structure of the schema to identify potential problems and address them based on recommendations.":::
-
-1. Select **Review results** in the Output pane, and review errors in the **Error list** pane.
-1. Save the project locally for an offline schema remediation exercise. Select **Save Project** from the **File** menu. This gives you an opportunity to evaluate the source and target schemas offline and perform remediation before you can publish the schema to SQL Managed Instance.
+ :::image type="content" source="media/db2-to-managed-instance-guide/compare-review-schema-structure.png" alt-text="Screenshot that shows comparing and reviewing the structure of the schema to identify potential problems.":::
+1. In the **Output** pane, select **Review results**. In the **Error list** pane, review errors.
+1. Save the project locally for an offline schema remediation exercise. From the **File** menu, select **Save Project**. This gives you an opportunity to evaluate the source and target schemas offline, and perform remediation before you publish the schema to SQL Managed Instance.
## Migrate
After you have completed assessing your databases and addressing any discrepancies, the next step is to execute the migration process. Migration involves two steps: publishing the schema and migrating the data.
To publish your schema and migrate your data, follow these steps:
-1. Publish the schema: Right-click the database from the **Databases** node in the **Azure SQL Managed Instance Metadata Explorer** and choose **Synchronize with Database**:
+1. Publish the schema. In **Azure SQL Managed Instance Metadata Explorer**, from the **Databases** node, right-click the database. Then select **Synchronize with Database**.
- :::image type="content" source="media/db2-to-managed-instance-guide/synchronize-with-database.png" alt-text="Right-click the database and choose synchronize with database":::
+ :::image type="content" source="media/db2-to-managed-instance-guide/synchronize-with-database.png" alt-text="Screenshot that shows the option to synchronize with database.":::
-1. Migrate the data: Right-click the database or object you want to migrate in **Db2 Metadata Explorer**, and choose **Migrate data**. Alternatively, you can select **Migrate Data** from the top-line navigation bar. To migrate data for an entire database, select the check box next to the database name. To migrate data from individual tables, expand the database, expand Tables, and then select the check box next to the table. To omit data from individual tables, clear the check box:
+1. Migrate the data. Right-click the database or object you want to migrate in **Db2 Metadata Explorer**, and choose **Migrate data**. Alternatively, you can select **Migrate Data** from the navigation bar. To migrate data for an entire database, select the check box next to the database name. To migrate data from individual tables, expand the database, expand **Tables**, and then select the check box next to the table. To omit data from individual tables, clear the check box.
- :::image type="content" source="media/db2-to-managed-instance-guide/migrate-data.png" alt-text="Right-click the schema and choose migrate data":::
+ :::image type="content" source="media/db2-to-managed-instance-guide/migrate-data.png" alt-text="Screenshot that shows selecting the schema and choosing to migrate data.":::
1. Provide connection details for both Db2 and SQL Managed Instance.
-1. After migration completes, view the **Data Migration Report**:
+1. After migration completes, view the **Data Migration Report**.
- :::image type="content" source="media/db2-to-managed-instance-guide/data-migration-report.png" alt-text="Review the data migration report":::
+ :::image type="content" source="media/db2-to-managed-instance-guide/data-migration-report.png" alt-text="Screenshot that shows where to review the data migration report.":::
-1. Connect to your Azure SQL Managed Instance by using [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms) and validate the migration by reviewing the data and schema:
+1. Connect to your instance of Azure SQL Managed Instance by using [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms). Validate the migration by reviewing the data and schema:
- :::image type="content" source="media/db2-to-managed-instance-guide/compare-schema-in-ssms.png" alt-text="Compare the schema in SSMS":::
+ :::image type="content" source="media/db2-to-managed-instance-guide/compare-schema-in-ssms.png" alt-text="Screenshot that shows comparing the schema in SQL Server Management Studio.":::
## Post-migration
-After you have successfully completed the Migration stage, you need to go through a series of post-migration tasks to ensure that everything is functioning as smoothly and efficiently as possible.
+After the migration is complete, you need to go through a series of post-migration tasks to ensure that everything is functioning as smoothly and efficiently as possible.
### Remediate applications
After the data is migrated to the target environment, all the applications that formerly consumed the source need to start consuming the target. Accomplishing this will in some cases require changes to the applications.
### Perform tests
-The test approach for database migration consists of the following activities:
+Testing consists of the following activities:
1. **Develop validation tests**: To test database migration, you need to use SQL queries. You must create the validation queries to run against both the source and the target databases. Your validation queries should cover the scope you have defined.
-1. **Set up test environment**: The test environment should contain a copy of the source database and the target database. Be sure to isolate the test environment.
+1. **Set up the test environment**: The test environment should contain a copy of the source database and the target database. Be sure to isolate the test environment.
1. **Run validation tests**: Run the validation tests against the source and the target, and then analyze the results.
-1. **Run performance tests**: Run performance test against the source and the target, and then analyze and compare the results.
-
+1. **Run performance tests**: Run performance tests against the source and the target, and then analyze and compare the results.
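As a sketch of a simple performance test on the target, you can time a representative query with the statistics settings shown below and compare the measurement against the same workload run on the Db2 source. The query against `dbo.Orders` is a placeholder for whatever workload you choose to benchmark.

```sql
-- Capture elapsed time and I/O for a representative query on the target.
SET STATISTICS TIME ON;
SET STATISTICS IO ON;

SELECT COUNT(*) FROM dbo.Orders;  -- placeholder workload query

SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;
```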
-## Leverage advanced features
+## Advanced features
Be sure to take advantage of the advanced cloud-based features offered by Azure SQL Managed Instance, such as [built-in high availability](../../database/high-availability-sla.md), [threat detection](../../database/azure-defender-for-sql.md), and [monitoring and tuning your workload](../../database/monitor-tune-overview.md).
-Some SQL Server features are only available once the [database compatibility level](/sql/relational-databases/databases/view-or-change-the-compatibility-level-of-a-database) is changed to the latest compatibility level (150).
+Some SQL Server features are only available when the [database compatibility level](/sql/relational-databases/databases/view-or-change-the-compatibility-level-of-a-database) is changed to the latest compatibility level.
## Migration assets
For additional assistance, see the following resources, which were developed in support of a real-world migration project engagement.
|||
|[Data workload assessment model and tool](https://github.com/Microsoft/DataMigrationTeam/tree/master/Data%20Workload%20Assessment%20Model%20and%20Tool)| This tool provides suggested "best fit" target platforms, cloud readiness, and application/database remediation level for a given workload. It offers simple, one-click calculation and report generation that helps to accelerate large estate assessments by providing an automated and uniform target platform decision process.|
|[Db2 zOS data assets discovery and assessment package](https://github.com/microsoft/DataMigrationTeam/tree/master/DB2%20zOS%20Data%20Assets%20Discovery%20and%20Assessment%20Package)|After running the SQL script on a database, you can export the results to a file on the file system. Several file formats are supported, including *.csv, so that you can capture the results in external tools such as spreadsheets. This method can be useful if you want to easily share results with teams that do not have the workbench installed.|
-|[IBM Db2 LUW inventory scripts and artifacts](https://github.com/Microsoft/DataMigrationTeam/tree/master/IBM%20Db2%20LUW%20Inventory%20Scripts%20and%20Artifacts)|This asset includes a SQL query that hits IBM Db2 LUW version 11.1 system tables and provides a count of objects by schema and object type, a rough estimate of 'Raw Data' in each schema, and the sizing of tables in each schema, with results stored in a CSV format.|
-|[Db2 LUW pure scale on Azure - setup guide](https://github.com/Microsoft/DataMigrationTeam/blob/master/Whitepapers/Db2%20PureScale%20on%20Azure.pdf)|This guide serves as a starting point for a Db2 implementation plan. While business requirements will differ, the same basic pattern applies. This architectural pattern may also be used for OLAP applications on Azure.|
+|[IBM Db2 LUW inventory scripts and artifacts](https://github.com/Microsoft/DataMigrationTeam/tree/master/IBM%20Db2%20LUW%20Inventory%20Scripts%20and%20Artifacts)|This asset includes a SQL query that hits IBM Db2 LUW version 11.1 system tables and provides a count of objects by schema and object type, a rough estimate of "raw data" in each schema, and the sizing of tables in each schema, with results stored in a CSV format.|
+|[Db2 LUW pure scale on Azure - setup guide](https://github.com/Microsoft/DataMigrationTeam/blob/master/Whitepapers/Db2%20PureScale%20on%20Azure.pdf)|This guide serves as a starting point for a Db2 implementation plan. Although business requirements will differ, the same basic pattern applies. This architectural pattern can also be used for OLAP applications on Azure.|
The Data SQL Engineering team developed these resources. This team's core charter is to unblock and accelerate complex modernization for data platform migration projects to Microsoft's Azure data platform.
## Next steps
-- For a matrix of the Microsoft and third-party services and tools that are available to assist you with various database and data migration scenarios as well as specialty tasks, see [Service and tools for data migration](../../../dms/dms-tools-matrix.md).
+- For Microsoft and third-party services and tools to assist you with various database and data migration scenarios, see [Service and tools for data migration](../../../dms/dms-tools-matrix.md).
-- To learn more about Azure SQL Managed Instance see:
+- To learn more about Azure SQL Managed Instance, see:
- [An overview of SQL Managed Instance](../../managed-instance/sql-managed-instance-paas-overview.md)
- - [Azure total Cost of Ownership Calculator](https://azure.microsoft.com/pricing/tco/calculator/)
+ - [Azure total cost of ownership calculator](https://azure.microsoft.com/pricing/tco/calculator/)
-- To learn more about the framework and adoption cycle for Cloud migrations, see
+- To learn more about the framework and adoption cycle for cloud migrations, see:
- [Cloud Adoption Framework for Azure](/azure/cloud-adoption-framework/migrate/azure-best-practices/contoso-migration-scale)
- - [Best practices for costing and sizing workloads migrate to Azure](/azure/cloud-adoption-framework/migrate/azure-best-practices/migrate-best-practices-costs)
+ - [Best practices for costing and sizing workloads migrated to Azure](/azure/cloud-adoption-framework/migrate/azure-best-practices/migrate-best-practices-costs)
-- To assess the Application access layer, see [Data Access Migration Toolkit (Preview)](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit)-- For details on how to perform Data Access Layer A/B testing see [Database Experimentation Assistant](/sql/dea/database-experimentation-assistant-overview).
+- To assess the application access layer, see [Data Access Migration Toolkit](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit).
+- For details on how to perform data access layer A/B testing, see [Database Experimentation Assistant](/sql/dea/database-experimentation-assistant-overview).
azure-sql Oracle To Managed Instance Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/managed-instance/oracle-to-managed-instance-guide.md
Title: "Oracle to Azure SQL Managed Instance: Migration guide"
-description: This guide teaches you to migrate your Oracle schemas to Azure SQL Managed Instance using SQL Server Migration Assistant for Oracle.
+description: In this guide, you learn how to migrate your Oracle schemas to Azure SQL Managed Instance by using SQL Server Migration Assistant for Oracle.
Last updated 11/06/2020 # Migration guide: Oracle to Azure SQL Managed Instance+ [!INCLUDE[appliesto-sqldb-sqlmi](../../includes/appliesto-sqlmi.md)]
-This guide teaches you to migrate your Oracle schemas to Azure SQL Managed Instance using SQL Server Migration Assistant for Oracle.
+In this guide, you learn how to migrate your Oracle schemas to Azure SQL Managed Instance by using SQL Server Migration Assistant for Oracle (SSMA for Oracle).
-For other migration guides, see [Database Migration](https://docs.microsoft.com/data-migration).
+For other migration guides, see [Azure Database Migration Guides](https://docs.microsoft.com/data-migration).
## Prerequisites
-To migrate your Oracle schema to SQL Managed Instance you need:
+Before you begin migrating your Oracle schema to SQL Managed Instance:
-- To verify your source environment is supported. -- To download [SQL Server Migration Assistant (SSMA) for Oracle](https://www.microsoft.com/en-us/download/details.aspx?id=54258). -- A target [Azure SQL Managed Instance](../../managed-instance/instance-create-quickstart.md). -- The [necessary permissions for SSMA for Oracle](/sql/ssma/oracle/connecting-to-oracle-database-oracletosql) and [provider](/sql/ssma/oracle/connect-to-oracle-oracletosql).
+- Verify your source environment is supported.
+- Download [SSMA for Oracle](https://www.microsoft.com/en-us/download/details.aspx?id=54258).
+- Have a [SQL Managed Instance](../../managed-instance/instance-create-quickstart.md) target.
+- Obtain the [necessary permissions for SSMA for Oracle](/sql/ssma/oracle/connecting-to-oracle-database-oracletosql) and [provider](/sql/ssma/oracle/connect-to-oracle-oracletosql).
- ## Pre-migration
-After you have met the prerequisites, you are ready to discover the topology of your environment and assess the feasibility of your migration. This part of the process involves conducting an inventory of the databases that you need to migrate, assessing those databases for potential migration issues or blockers, and then resolving any items you might have uncovered.
--
+After you've met the prerequisites, you're ready to discover the topology of your environment and assess the feasibility of your migration. This part of the process involves conducting an inventory of the databases that you need to migrate, assessing those databases for potential migration issues or blockers, and then resolving any items you might have uncovered.
-### Assess
+### Assess
-Use the SQL Server Migration Assistant (SSMA) for Oracle to review database objects and data, assess databases for migration, migrate database objects to Azure SQL Managed Instance, and then finally migrate data to the database.
+By using SSMA for Oracle, you can review database objects and data, assess databases for migration, migrate database objects to SQL Managed Instance, and then finally migrate data to the database.
-To create an assessment, follow these steps:
+To create an assessment:
+1. Open [SSMA for Oracle](https://www.microsoft.com/en-us/download/details.aspx?id=54258).
+1. Select **File**, and then select **New Project**.
+1. Enter a project name and a location to save your project. Then select **Azure SQL Managed Instance** as the migration target from the drop-down list and select **OK**.
-1. Open [SQL Server Migration Assistant for Oracle](https://www.microsoft.com/en-us/download/details.aspx?id=54258).
-1. Select **File** and then choose **New Project**.
-1. Provide a project name, a location to save your project, and then select Azure SQL Managed Instance as the migration target from the drop-down. Select **OK**:
+ ![Screenshot that shows New Project.](./media/oracle-to-managed-instance-guide/new-project.png)
- ![New Project](./media/oracle-to-managed-instance-guide/new-project.png)
+1. Select **Connect to Oracle**. Enter values for Oracle connection details in the **Connect to Oracle** dialog box.
-1. Select **Connect to Oracle**. Enter in values for Oracle connection details on the **Connect to Oracle** dialog box:
+ ![Screenshot that shows Connect to Oracle.](./media/oracle-to-managed-instance-guide/connect-to-oracle.png)
- ![Connect to Oracle](./media/oracle-to-managed-instance-guide/connect-to-oracle.png)
+1. Select the Oracle schemas you want to migrate.
- Select the Oracle schema(s) you want to migrate:
+ ![Screenshot that shows selecting Oracle schema.](./media/oracle-to-managed-instance-guide/select-schema.png)
- ![Choose Oracle schema](./media/oracle-to-managed-instance-guide/select-schema.png)
+1. In **Oracle Metadata Explorer**, right-click the Oracle schema you want to migrate and then select **Create Report** to generate an HTML report. Alternatively, you can select a database and then select the **Create Report** tab.
-1. Right-click the Oracle schema you want to migrate in the **Oracle Metadata Explorer**, and then choose **Create report**. This will generate an HTML report. Alternatively, you can choose **Create report** from the navigation bar after selecting the database:
-
- ![Create Report](./media/oracle-to-managed-instance-guide/create-report.png)
+ ![Screenshot that shows Create Report.](./media/oracle-to-managed-instance-guide/create-report.png)
1. Review the HTML report to understand conversion statistics and any errors or warnings. You can also open the report in Excel to get an inventory of Oracle objects and the effort required to perform schema conversions. The default location for the report is in the report folder within SSMAProjects.
- For example: `drive:\<username>\Documents\SSMAProjects\MyOracleMigration\report\report_2020_11_12T02_47_55\`
-
- ![Assessment Report](./media/oracle-to-managed-instance-guide/assessment-report.png)
+ For example, see `drive:\<username>\Documents\SSMAProjects\MyOracleMigration\report\report_2020_11_12T02_47_55\`.
+ ![Screenshot that shows an Assessment report.](./media/oracle-to-managed-instance-guide/assessment-report.png)
-### Validate data types
+### Validate the data types
Validate the default data type mappings and change them based on requirements if necessary. To do so, follow these steps:
-1. Select **Tools** from the menu.
-1. Select **Project Settings**.
-1. Select the **Type mappings** tab:
+1. In SSMA for Oracle, select **Tools**, and then select **Project Settings**.
+1. Select the **Type Mapping** tab.
- ![Type Mappings](./media/oracle-to-managed-instance-guide/type-mappings.png)
+ ![Screenshot that shows Type Mapping.](./media/oracle-to-managed-instance-guide/type-mappings.png)
-1. You can change the type mapping for each table by selecting the table in the **Oracle Metadata Explorer**.
+1. You can change the type mapping for each table by selecting the table in **Oracle Metadata Explorer**.
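If you want to double-check how those mappings come through later, after the schema has been converted, one option is to query the converted schema on the target. The following is a minimal sketch, assuming a converted schema named `HR` on the SQL Managed Instance target; the schema name is illustrative.

```sql
-- List the SQL Server data types that SSMA produced for each column in the
-- converted schema, so they can be compared against the Oracle source
-- definitions and the Type Mapping settings.
SELECT TABLE_SCHEMA,
       TABLE_NAME,
       COLUMN_NAME,
       DATA_TYPE,
       CHARACTER_MAXIMUM_LENGTH,
       NUMERIC_PRECISION,
       NUMERIC_SCALE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_SCHEMA = N'HR'   -- illustrative schema name
ORDER BY TABLE_NAME, ORDINAL_POSITION;
```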
-### Convert schema
+### Convert the schema
-To convert the schema, follow these steps:
+To convert the schema:
-1. (Optional) Add dynamic or ad-hoc queries to statements. Right-click the node, and then choose **Add statements**.
-1. Select **Connect to Azure SQL Managed Instance**.
- 1. Enter connection details to connect your database in Azure SQL Managed Instance.
- 1. Choose your target database from the drop-down, or provide a new name, in which case a database will be created on the target server.
- 1. Provide authentication details.
- 1. Select **Connect**:
+1. (Optional) Add dynamic or ad-hoc queries to statements. Right-click the node, and then select **Add statements**.
+1. Select the **Connect to Azure SQL Managed Instance** tab.
+ 1. Enter connection details to connect to your database in Azure SQL Managed Instance.
+ 1. Select your target database from the drop-down list, or enter a new name, in which case a database will be created on the target server.
+ 1. Enter authentication details, and select **Connect**.
- ![Connect to SQL Managed Instance](./media/oracle-to-managed-instance-guide/connect-to-sql-managed-instance.png)
+ ![Screenshot that shows Connect to Azure SQL Managed Instance.](./media/oracle-to-managed-instance-guide/connect-to-sql-managed-instance.png)
-1. Right-click the Oracle schema in the **Oracle Metadata Explorer** and then choose **Convert Schema**. Alternatively, you can choose **Convert Schema** from the top navigation bar after selecting your schema:
+1. In **Oracle Metadata Explorer**, right-click the Oracle schema and then select **Convert Schema**. Alternatively, you can select your schema and then select the **Convert Schema** tab.
- ![Convert Schema](./media/oracle-to-managed-instance-guide/convert-schema.png)
+ ![Screenshot that shows Convert Schema.](./media/oracle-to-managed-instance-guide/convert-schema.png)
-1. After the conversion completes, compare and review the converted objects to the original objects to identify potential problems and address them based on the recommendations:
+1. After the conversion finishes, compare and review the converted objects to the original objects to identify potential problems and address them based on the recommendations.
- ![Compare table recommendations](./media/oracle-to-managed-instance-guide/table-comparison.png)
+ ![Screenshot that shows comparing table recommendations.](./media/oracle-to-managed-instance-guide/table-comparison.png)
- Compare the converted Transact-SQL text to the original code and review the recommendations:
+1. Compare the converted Transact-SQL text to the original code, and review the recommendations.
- ![Compare procedure recommendations](./media/oracle-to-managed-instance-guide/procedure-comparison.png)
+ ![Screenshot that shows comparing procedure recommendations.](./media/oracle-to-managed-instance-guide/procedure-comparison.png)
-1. Select **Review results** in the Output pane, and review errors in the **Error list** pane.
-1. Save the project locally for an offline schema remediation exercise. Select **Save Project** from the **File** menu. This gives you an opportunity to evaluate the source and target schemas offline and perform remediation before you can publish the schema to SQL Managed Instance.
+1. In the output pane, select **Review results** and review the errors in the **Error List** pane.
+1. Save the project locally for an offline schema remediation exercise. On the **File** menu, select **Save Project**. This step gives you an opportunity to evaluate the source and target schemas offline and perform remediation before you publish the schema to SQL Managed Instance.
## Migrate
-After you have completed assessing your databases and addressing any discrepancies, the next step is to execute the migration process. Migration involves two steps – publishing the schema and migrating the data.
-
-To publish your schema and migrate your data, follow these steps:
+After you've completed assessing your databases and addressing any discrepancies, the next step is to run the migration process. Migration involves two steps: publishing the schema and migrating the data.
-1. Publish the schema: Right-click the schema or object you want to migrate in **Oracle Metadata Explorer**, and choose **Migrate data**. Alternatively, you can select **Migrate Data** from the top-line navigation bar. To migrate data for an entire database, select the check box next to the database name. To migrate data from individual tables, expand the database, expand Tables, and then select the check box next to the table. To omit data from individual tables, clear the check box:
+To publish your schema and migrate your data:
+1. Publish the schema by right-clicking the database from the **Databases** node in **Azure SQL Managed Instance Metadata Explorer** and selecting **Synchronize with Database**.
- ![Synchronize with Database](./media/oracle-to-managed-instance-guide/synchronize-with-database.png)
+ ![Screenshot that shows Synchronize with Database.](./media/oracle-to-managed-instance-guide/synchronize-with-database.png)
+
- Review the mapping between your source project and your target:
+1. Review the mapping between your source project and your target.
- ![Synchronize with Database Review](./media/oracle-to-managed-instance-guide/synchronize-with-database-review.png)
+ ![Screenshot that shows Synchronize with the Database review.](./media/oracle-to-managed-instance-guide/synchronize-with-database-review.png)
-1. Migrate the data: Right-click the schema from the **Oracle Metadata Explorer** and choose **Migrate Data**.
+1. Migrate the data by right-clicking the schema or object you want to migrate in **Oracle Metadata Explorer** and selecting **Migrate Data**. Alternatively, you can select the **Migrate Data** tab. To migrate data for an entire database, select the check box next to the database name. To migrate data from individual tables, expand the database, expand **Tables**, and then select the check boxes next to the tables. To omit data from individual tables, clear the check boxes.
- ![Migrate Data](./media/oracle-to-managed-instance-guide/migrate-data.png)
+ ![Screenshot that shows Migrate Data.](./media/oracle-to-managed-instance-guide/migrate-data.png)
-1. Provide connection details for both Oracle and Azure SQL Managed Instance.
-1. After migration completes, view the **Data Migration Report**:
+1. Enter connection details for both Oracle and SQL Managed Instance.
+1. After the migration is completed, view the **Data Migration Report**.
- ![Data Migration Report](./media/oracle-to-managed-instance-guide/data-migration-report.png)
+ ![Screenshot that shows Data Migration Report.](./media/oracle-to-managed-instance-guide/data-migration-report.png)
-1. Connect to your Azure SQL Managed Instance by using [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms) and validate the migration by reviewing the data and schema:
+1. Connect to your instance of SQL Managed Instance by using [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms), and validate the migration by reviewing the data and schema.
- ![Validate in SSMA](./media/oracle-to-managed-instance-guide/validate-data.png)
+ ![Screenshot that shows validation in SSMA for Oracle.](./media/oracle-to-managed-instance-guide/validate-data.png)
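Beyond a visual review in SQL Server Management Studio, a lightweight sanity check is to count the migrated objects and rows on the target and compare the numbers against your source inventory. This is a minimal sketch, assuming the converted schema is named `HR` (an illustrative name):

```sql
-- Count converted objects by type in the target schema.
SELECT s.name AS schema_name,
       o.type_desc,
       COUNT(*) AS object_count
FROM sys.objects AS o
JOIN sys.schemas AS s
    ON s.schema_id = o.schema_id
WHERE s.name = N'HR'          -- illustrative schema name
GROUP BY s.name, o.type_desc
ORDER BY o.type_desc;

-- Approximate row counts for the migrated tables.
SELECT s.name AS schema_name,
       t.name AS table_name,
       SUM(p.rows) AS row_count
FROM sys.tables AS t
JOIN sys.schemas AS s
    ON s.schema_id = t.schema_id
JOIN sys.partitions AS p
    ON p.object_id = t.object_id
   AND p.index_id IN (0, 1)
WHERE s.name = N'HR'
GROUP BY s.name, t.name
ORDER BY t.name;
```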
+Alternatively, you can use SQL Server Integration Services to perform the migration. To learn more, see:
-Alternatively, you can also use SQL Server Integration Services (SSIS) to perform the migration. To learn more, see:
+- [Getting started with SQL Server Integration Services](/sql/integration-services/sql-server-integration-services)
+- [SQL Server Integration Services for Azure and Hybrid Data Movement](https://download.microsoft.com/download/D/2/0/D20E1C5F-72EA-4505-9F26-FEF9550EFD44/SSIS%20Hybrid%20and%20Azure.docx)
-- [Getting Started with SQL Server Integration Services](/sql/integration-services/sql-server-integration-services)-- [SQL Server Integration
+## Post-migration
-## Post-migration
-
-After you have successfully completed the **Migration** stage, you need to go through a series of post-migration tasks to ensure that everything is functioning as smoothly and efficiently as possible.
+After you've successfully completed the *migration* stage, you need to complete a series of post-migration tasks to ensure that everything is functioning as smoothly and efficiently as possible.
### Remediate applications
-After the data is migrated to the target environment, all the applications that formerly consumed the source need to start consuming the target. Accomplishing this will in some cases require changes to the applications.
+After the data is migrated to the target environment, all the applications that formerly consumed the source need to start consuming the target. Accomplishing this step will require changes to the applications in some cases.
-The [Data Access Migration Toolkit](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) is an extension for Visual Studio Code that allows you to analyze your Java source code and detect data access API calls and queries, providing you with a single-pane view of what needs to be addressed to support the new database back end. To learn more, see the [Migrate our Java application from Oracle](https://techcommunity.microsoft.com/t5/microsoft-data-migration/migrate-your-java-applications-from-oracle-to-sql-server-with/ba-p/368727) blog.
+The [Data Access Migration Toolkit](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) is an extension for Visual Studio Code that allows you to analyze your Java source code and detect data access API calls and queries. The toolkit provides you with a single-pane view of what needs to be addressed to support the new database back end. To learn more, see the [Migrate your Java applications from Oracle](https://techcommunity.microsoft.com/t5/microsoft-data-migration/migrate-your-java-applications-from-oracle-to-sql-server-with/ba-p/368727) blog post.
### Perform tests
-The test approach for database migration consists of performing the following activities:
-
-1. **Develop validation tests**. To test database migration, you need to use SQL queries. You must create the validation queries to run against both the source and the target databases. Your validation queries should cover the scope you have defined.
-
-2. **Set up test environment**. The test environment should contain a copy of the source database and the target database. Be sure to isolate the test environment.
+The test approach to database migration consists of the following activities:
-3. **Run validation tests**. Run the validation tests against the source and the target, and then analyze the results.
-
-4. **Run performance tests**. Run performance test against the source and the target, and then analyze and compare the results.
+1. **Develop validation tests**: To test the database migration, you need to use SQL queries. You must create the validation queries to run against both the source and the target databases. Your validation queries should cover the scope you've defined. A sketch of one such query follows this list.
+2. **Set up a test environment**: The test environment should contain a copy of the source database and the target database. Be sure to isolate the test environment.
+3. **Run validation tests**: Run validation tests against the source and the target, and then analyze the results.
+4. **Run performance tests**: Run performance tests against the source and the target, and then analyze and compare the results.
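A hedged sketch of one such validation query follows; the schema and table names are illustrative, and the same count and aggregate would be computed on the Oracle source for comparison.

```sql
-- Row count on the migrated target table; compare with SELECT COUNT(*)
-- run against the same table on the Oracle source.
SELECT COUNT(*) AS target_row_count
FROM HR.EMPLOYEES;            -- illustrative schema and table name

-- Spot-check an aggregate that should match between source and target.
SELECT DEPARTMENT_ID,
       COUNT(*) AS employees_per_department
FROM HR.EMPLOYEES
GROUP BY DEPARTMENT_ID
ORDER BY DEPARTMENT_ID;
```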
### Optimize
-The post-migration phase is crucial for reconciling any data accuracy issues and verifying completeness, as well as addressing performance issues with the workload.
+The post-migration phase is crucial for reconciling any data accuracy issues, verifying completeness, and addressing performance issues with the workload.
> [!NOTE]
-> For additional detail about these issues and specific steps to mitigate them, see the [Post-migration Validation and Optimization Guide](/sql/relational-databases/post-migration-validation-and-optimization-guide).
-
+> For more information about these issues and the steps to mitigate them, see the [Post-migration validation and optimization guide](/sql/relational-databases/post-migration-validation-and-optimization-guide).
-## Migration assets
+## Migration assets
-For additional assistance with completing this migration scenario, please see the following resources, which were developed in support of a real-world migration project engagement.
+For more assistance with completing this migration scenario, see the following resources. They were developed in support of a real-world migration project engagement.
| **Title/link** | **Description** | | - | -- |
-| [Data Workload Assessment Model and Tool](https://github.com/Microsoft/DataMigrationTeam/tree/master/Data%20Workload%20Assessment%20Model%20and%20Tool) | This tool provides suggested "best fit" target platforms, cloud readiness, and application/database remediation level for a given workload. It offers simple, one-click calculation and report generation that greatly helps to accelerate large estate assessments by providing and automated and uniform target platform decision process. |
-| [Oracle Inventory Script Artifacts](https://github.com/Microsoft/DataMigrationTeam/tree/master/Oracle%20Inventory%20Script%20Artifacts) | This asset includes a PL/SQL query that hits Oracle system tables and provides a count of objects by schema type, object type, and status. It also provides a rough estimate of 'Raw Data' in each schema and the sizing of tables in each schema, with results stored in a CSV format. |
-| [Automate SSMA Oracle Assessment Collection & Consolidation](https://github.com/microsoft/DataMigrationTeam/tree/master/IP%20and%20Scripts/Automate%20SSMA%20Oracle%20Assessment%20Collection%20%26%20Consolidation) | This set of resource uses a .csv file as entry (sources.csv in the project folders) to produce the xml files that are needed to run SSMA assessment in console mode. The source.csv is provided by the customer based on an inventory of existing Oracle instances. The output files are AssessmentReportGeneration_source_1.xml, ServersConnectionFile.xml, and VariableValueFile.xml.|
-| [SSMA for Oracle Common Errors and how to fix them](https://aka.ms/dmj-wp-ssma-oracle-errors) | With Oracle, you can assign a non-scalar condition in the WHERE clause. However, SQL Server doesn't support this type of condition. As a result, SQL Server Migration Assistant (SSMA) for Oracle doesn't convert queries with a non-scalar condition in the WHERE clause, instead generating an error O2SS0001. This white paper provides more details on the issue and ways to resolve it. |
-| [Oracle to SQL Server Migration Handbook](https://github.com/microsoft/DataMigrationTeam/blob/master/Whitepapers/Oracle%20to%20SQL%20Server%20Migration%20Handbook.pdf) | This document focuses on the tasks associated with migrating an Oracle schema to the latest version of SQL Server base. If the migration requires changes to features/functionality, then the possible impact of each change on the applications that use the database must be considered carefully. |
+| [Data Workload Assessment Model and Tool](https://github.com/Microsoft/DataMigrationTeam/tree/master/Data%20Workload%20Assessment%20Model%20and%20Tool) | This tool provides suggested "best fit" target platforms, cloud readiness, and application or database remediation level for a given workload. It offers simple, one-click calculation and report generation that helps to accelerate large estate assessments by providing an automated and uniform target platform decision process. |
+| [Oracle Inventory Script Artifacts](https://github.com/Microsoft/DataMigrationTeam/tree/master/Oracle%20Inventory%20Script%20Artifacts) | This asset includes a PL/SQL query that hits Oracle system tables and provides a count of objects by schema type, object type, and status. It also provides a rough estimate of raw data in each schema and the sizing of tables in each schema, with results stored in a CSV format. |
+| [Automate SSMA Oracle Assessment Collection & Consolidation](https://github.com/microsoft/DataMigrationTeam/tree/master/IP%20and%20Scripts/Automate%20SSMA%20Oracle%20Assessment%20Collection%20%26%20Consolidation) | This set of resources uses a .csv file as entry (sources.csv in the project folders) to produce the xml files that are needed to run an SSMA assessment in console mode. The source.csv is provided by the customer based on an inventory of existing Oracle instances. The output files are AssessmentReportGeneration_source_1.xml, ServersConnectionFile.xml, and VariableValueFile.xml.|
+| [SSMA for Oracle Common Errors and How to Fix Them](https://aka.ms/dmj-wp-ssma-oracle-errors) | With Oracle, you can assign a nonscalar condition in the WHERE clause. However, SQL Server doesn't support this type of condition. As a result, SSMA for Oracle doesn't convert queries with a nonscalar condition in the WHERE clause. Instead, it generates the error O2SS0001. This white paper provides more details on the issue and ways to resolve it. |
+| [Oracle to SQL Server Migration Handbook](https://github.com/microsoft/DataMigrationTeam/blob/master/Whitepapers/Oracle%20to%20SQL%20Server%20Migration%20Handbook.pdf) | This document focuses on the tasks associated with migrating an Oracle schema to the latest version of SQL Server. If the migration requires changes to features or functionality, the possible impact of each change on the applications that use the database must be considered carefully. |
The Data SQL Engineering team developed these resources. This team's core charter is to unblock and accelerate complex modernization for data platform migration projects to Microsoft's Azure data platform. ## Next steps -- For a matrix of the Microsoft and third-party services and tools that are available to assist you with various database and data migration scenarios as well as specialty tasks, see the article [Service and tools for data migration](../../../dms/dms-tools-matrix.md).
+- For a matrix of Microsoft and third-party services and tools that are available to assist you with various database and data migration scenarios and specialty tasks, see [Services and tools for data migration](../../../dms/dms-tools-matrix.md).
-- To learn more about Azure SQL Managed Instance, see:
+- To learn more about SQL Managed Instance, see:
- [An overview of Azure SQL Managed Instance](../../managed-instance/sql-managed-instance-paas-overview.md) - [Azure Total Cost of Ownership (TCO) Calculator](https://azure.microsoft.com/en-us/pricing/tco/calculator/) --- To learn more about the framework and adoption cycle for Cloud migrations, see
+- To learn more about the framework and adoption cycle for cloud migrations, see:
- [Cloud Adoption Framework for Azure](/azure/cloud-adoption-framework/migrate/azure-best-practices/contoso-migration-scale)
- - [Best practices for costing and sizing workloads migrate to Azure](/azure/cloud-adoption-framework/migrate/azure-best-practices/migrate-best-practices-costs)
+ - [Best practices for costing and sizing workloads for migration to Azure](/azure/cloud-adoption-framework/migrate/azure-best-practices/migrate-best-practices-costs)
-- For video content, see:
- - [Overview of the migration journey and the tools/services recommended for performing assessment and migration](https://azure.microsoft.com/resources/videos/overview-of-migration-and-recommended-tools-services/)
+- For video content, see:
+ - [Overview of the migration journey and the tools and services recommended for performing assessment and migration](https://azure.microsoft.com/resources/videos/overview-of-migration-and-recommended-tools-services/)
azure-sql Db2 To Sql On Azure Vm Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/virtual-machines/db2-to-sql-on-azure-vm-guide.md
Title: "Db2 to SQL Server on Azure VMs: Migration guide"
-description: This guide teaches you to migrate your Db2 database to SQL Server on Azure VMs using SQL Server Migration Assistant for Db2.
+ Title: "Db2 to SQL Server on Azure VM: Migration guide"
+description: This guide teaches you to migrate your IBM Db2 databases to SQL Server on Azure VM, by using SQL Server Migration Assistant for Db2.
Last updated 11/06/2020
-# Migration guide: Db2 to SQL Server on Azure VMs
+# Migration guide: IBM Db2 to SQL Server on Azure VM
[!INCLUDE[appliesto--sqlmi](../../includes/appliesto-sqlvm.md)]
-This migration guide teaches you to migrate your user databases from Db2 to SQL Server on Azure VMs using the SQL Server Migration Assistant for Db2.
-
-For other migration guides, see [Database Migration](https://docs.microsoft.com/data-migration).
+This guide teaches you to migrate your user databases from IBM Db2 to SQL Server on Azure VM, by using the SQL Server Migration Assistant for Db2.
+For other migration guides, see [Azure Database Migration Guides](https://docs.microsoft.com/data-migration).
## Prerequisites To migrate your Db2 database to SQL Server, you need: -- to verify your [source environment is supported](/sql/ssma/db2/installing-ssma-for-Db2-client-Db2tosql#prerequisites).
+- To verify that your [source environment is supported](/sql/ssma/db2/installing-ssma-for-Db2-client-Db2tosql#prerequisites).
- [SQL Server Migration Assistant (SSMA) for Db2](https://www.microsoft.com/download/details.aspx?id=54254). - [Connectivity](../../virtual-machines/windows/ways-to-connect-to-sql.md) between your source environment and your SQL Server VM in Azure. - A target [SQL Server on Azure VM](../../virtual-machines/windows/create-sql-vm-portal.md). -- ## Pre-migration
-After you have met the prerequisites, you are ready to discover the topology of your environment and assess the feasibility of your migration.
-
+After you have met the prerequisites, you're ready to discover the topology of your environment and assess the feasibility of your migration.
### Assess
-Use SQL Server Migration Assistant (SSMA) for DB2 to review database objects and data, and assess databases for migration.
+Use SSMA for Db2 to review database objects and data, and assess databases for migration.
To create an assessment, follow these steps:
-1. Open [SQL Server Migration Assistant (SSMA) for Db2](https://www.microsoft.com/download/details.aspx?id=54254).
-1. Select **File** and then choose **New Project**.
-1. Provide a project name, a location to save your project, and then select a SQL Server migration target from the drop-down. Select **OK**:
+1. Open [SSMA for Db2](https://www.microsoft.com/download/details.aspx?id=54254).
+1. Select **File** > **New Project**.
+1. Provide a project name and a location to save your project. Then select a SQL Server migration target from the drop-down list, and select **OK**.
- :::image type="content" source="media/db2-to-sql-on-azure-vm-guide/new-project.png" alt-text="Provide project details and select OK to save.":::
+ :::image type="content" source="media/db2-to-sql-on-azure-vm-guide/new-project.png" alt-text="Screenshot that shows project details to specify.":::
-1. Enter in values for the Db2 connection details on the **Connect to Db2** dialog box:
+1. On **Connect to Db2**, enter values for the Db2 connection details.
- :::image type="content" source="media/db2-to-sql-on-azure-vm-guide/connect-to-Db2.png" alt-text="Connect to your Db2 instance":::
+ :::image type="content" source="media/db2-to-sql-on-azure-vm-guide/connect-to-Db2.png" alt-text="Screenshot that shows options to connect to your Db2 instance.":::
-1. Right-click the Db2 schema you want to migrate, and then choose **Create report**. This will generate an HTML report. Alternatively, you can choose **Create report** from the navigation bar after selecting the schema:
+1. Right-click the Db2 schema you want to migrate, and then choose **Create report**. This will generate an HTML report. Alternatively, you can choose **Create report** from the navigation bar after selecting the schema.
- :::image type="content" source="media/db2-to-sql-on-azure-vm-guide/create-report.png" alt-text="Right-click the schema and choose create report":::
+ :::image type="content" source="media/db2-to-sql-on-azure-vm-guide/create-report.png" alt-text="Screenshot that shows how to create a report.":::
-1. Review the HTML report to understand conversion statistics and any errors or warnings. You can also open the report in Excel to get an inventory of Db2 objects and the effort required to perform schema conversions. The default location for the report is in the report folder within SSMAProjects.
+1. Review the HTML report to understand conversion statistics and any errors or warnings. You can also open the report in Excel to get an inventory of Db2 objects and the effort required to perform schema conversions. The default location for the report is in the report folder within *SSMAProjects*.
For example: `drive:\<username>\Documents\SSMAProjects\MyDb2Migration\report\report_<date>`.
- :::image type="content" source="media/db2-to-sql-on-azure-vm-guide/report.png" alt-text="Review the report to identify any errors or warnings":::
+ :::image type="content" source="media/db2-to-sql-on-azure-vm-guide/report.png" alt-text="Screenshot of the report that you review to identify any errors or warnings.":::
### Validate data types
-Validate the default data type mappings and change them based on requirements if necessary. To do so, follow these steps:
+Validate the default data type mappings, and change them based on requirements if necessary. To do so, follow these steps:
1. Select **Tools** from the menu. 1. Select **Project Settings**.
-1. Select the **Type mappings** tab:
+1. Select the **Type mappings** tab.
- :::image type="content" source="media/db2-to-sql-on-azure-vm-guide/type-mapping.png" alt-text="Select the schema and then type-mapping":::
+ :::image type="content" source="media/db2-to-sql-on-azure-vm-guide/type-mapping.png" alt-text="Screenshot that shows selecting the schema and type mapping.":::
-1. You can change the type mapping for each table by selecting the table in the **Db2 Metadata explorer**.
+1. You can change the type mapping for each table by selecting the table in the **Db2 Metadata Explorer**.
### Convert schema To convert the schema, follow these steps:
-1. (Optional) Add dynamic or ad-hoc queries to statements. Right-click the node, and then choose **Add statements**.
+1. (Optional) Add dynamic or ad hoc queries to statements. Right-click the node, and then choose **Add statements**.
1. Select **Connect to SQL Server**.
- 1. Enter connection details to connect to your SQL Server instance on your Azure VM.
+ 1. Enter connection details to connect to your instance of SQL Server on your Azure VM.
1. Choose to connect to an existing database on the target server, or provide a new name to create a new database on the target server. 1. Provide authentication details.
- 1. Select **Connect**:
+ 1. Select **Connect**.
- :::image type="content" source="../../../../includes/media/virtual-machines-sql-server-connection-steps/rm-ssms-connect.png" alt-text="Connect to your SQL Server on Azure VM":::
+ :::image type="content" source="../../../../includes/media/virtual-machines-sql-server-connection-steps/rm-ssms-connect.png" alt-text="Screenshot that shows the details needed to connect to your SQL Server on Azure VM.":::
-1. Right-click the schema and then choose **Convert Schema**. Alternatively, you can choose **Convert Schema** from the top navigation bar after selecting your schema:
+1. Right-click the schema and then choose **Convert Schema**. Alternatively, you can choose **Convert Schema** from the top navigation bar after selecting your schema.
- :::image type="content" source="media/db2-to-sql-on-azure-vm-guide/convert-schema.png" alt-text="Right-click the schema and choose convert schema":::
+ :::image type="content" source="media/db2-to-sql-on-azure-vm-guide/convert-schema.png" alt-text="Screenshot that shows selecting the schema and converting it.":::
-1. After the conversion completes, compare and review the structure of the schema to identify potential problems and address them based on the recommendations:
+1. After the conversion finishes, compare and review the structure of the schema to identify potential problems. Address the problems based on the recommendations.
- :::image type="content" source="media/db2-to-sql-on-azure-vm-guide/compare-review-schema-structure.png" alt-text="Compare and review the structure of the schema to identify potential problems and address them based on recommendations.":::
-
-1. Select **Review results** in the Output pane, and review errors in the **Error list** pane.
-1. Save the project locally for an offline schema remediation exercise. Select **Save Project** from the **File** menu. This gives you an opportunity to evaluate the source and target schemas offline and perform remediation before you can publish the schema to SQL Server on Azure VM.
+ :::image type="content" source="media/db2-to-sql-on-azure-vm-guide/compare-review-schema-structure.png" alt-text="Screenshot that shows comparing and reviewing the structure of the schema to identify potential problems.":::
+1. In the **Output** pane, select **Review results**. In the **Error list** pane, review errors.
+1. Save the project locally for an offline schema remediation exercise. From the **File** menu, select **Save Project**. This gives you an opportunity to evaluate the source and target schemas offline, and perform remediation before you publish the schema to SQL Server on Azure VM.
## Migrate
After you have completed assessing your databases and addressing any discrepanci
To publish your schema and migrate your data, follow these steps:
-1. Publish the schema: Right-click the database from the **Databases** node in the **SQL Server Metadata Explorer** and choose **Synchronize with Database**:
+1. Publish the schema. In **SQL Server Metadata Explorer**, from the **Databases** node, right-click the database. Then select **Synchronize with Database**.
- :::image type="content" source="media/db2-to-sql-on-azure-vm-guide/synchronize-with-database.png" alt-text="Right-click the database and choose synchronize with database":::
+ :::image type="content" source="media/db2-to-sql-on-azure-vm-guide/synchronize-with-database.png" alt-text="Screenshot that shows the option to synchronize with database.":::
-1. Migrate the data: Right-click the database or object you want to migrate in **Db2 Metadata Explorer**, and choose **Migrate data**. Alternatively, you can select **Migrate Data** from the top-line navigation bar. To migrate data for an entire database, select the check box next to the database name. To migrate data from individual tables, expand the database, expand Tables, and then select the check box next to the table. To omit data from individual tables, clear the check box:
+1. Migrate the data. Right-click the database or object you want to migrate in **Db2 Metadata Explorer**, and choose **Migrate data**. Alternatively, you can select **Migrate Data** from the navigation bar. To migrate data for an entire database, select the check box next to the database name. To migrate data from individual tables, expand the database, expand **Tables**, and then select the check box next to the table. To omit data from individual tables, clear the check box.
- :::image type="content" source="media/db2-to-sql-on-azure-vm-guide/migrate-data.png" alt-text="Right-click the schema and choose migrate data":::
+ :::image type="content" source="media/db2-to-sql-on-azure-vm-guide/migrate-data.png" alt-text="Screenshot that shows selecting the schema and choosing to migrate data.":::
1. Provide connection details for both the Db2 and SQL Server instances.
-1. After migration completes, view the **Data Migration Report**:
+1. After migration finishes, view the **Data Migration Report**:
- :::image type="content" source="media/db2-to-sql-on-azure-vm-guide/data-migration-report.png" alt-text="Review the data migration report":::
+ :::image type="content" source="media/db2-to-sql-on-azure-vm-guide/data-migration-report.png" alt-text="Screenshot that shows where to review the data migration report.":::
-1. Connect to your SQL Server on Azure VM instance by using [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms) and validate the migration by reviewing the data and schema:
+1. Connect to your instance of SQL Server on Azure VM by using [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms). Validate the migration by reviewing the data and schema.
- :::image type="content" source="media/db2-to-sql-on-azure-vm-guide/compare-schema-in-ssms.png" alt-text="Compare the schema in SSMS":::
+ :::image type="content" source="media/db2-to-sql-on-azure-vm-guide/compare-schema-in-ssms.png" alt-text="Screenshot that shows comparing the schema in SQL Server Management Studio.":::
## Post-migration
-After you have successfully completed the Migration stage, you need to go through a series of post-migration tasks to ensure that everything is functioning as smoothly and efficiently as possible.
+After the migration is complete, you need to go through a series of post-migration tasks to ensure that everything is functioning as smoothly and efficiently as possible.
### Remediate applications
After the data is migrated to the target environment, all the applications that
### Perform tests
-The test approach for database migration consists of the following activities:
+Testing consists of the following activities:
1. **Develop validation tests**: To test database migration, you need to use SQL queries. You must create the validation queries to run against both the source and the target databases. Your validation queries should cover the scope you have defined. A sketch of one such check follows this list.
-1. **Set up test environment**: The test environment should contain a copy of the source database and the target database. Be sure to isolate the test environment.
+1. **Set up the test environment**: The test environment should contain a copy of the source database and the target database. Be sure to isolate the test environment.
1. **Run validation tests**: Run the validation tests against the source and the target, and then analyze the results.
-1. **Run performance tests**: Run performance test against the source and the target, and then analyze and compare the results.
-
+1. **Run performance tests**: Run performance tests against the source and the target, and then analyze and compare the results.
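One hedged example of such a validation test on the SQL Server target is a per-table row count plus an aggregate checksum, compared against an equivalent aggregate computed on the Db2 source. The table name is illustrative, and checksum comparisons are approximate, so treat them as a spot check rather than proof of equality.

```sql
-- Approximate data comparison for a migrated table: row count plus an
-- aggregate checksum. Compute an equivalent count and aggregate on the
-- Db2 source and compare the results.
SELECT COUNT_BIG(*)              AS row_count,
       CHECKSUM_AGG(CHECKSUM(*)) AS table_checksum
FROM dbo.ORDERS;                 -- illustrative table name
```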
## Migration assets
For additional assistance, see the following resources, which were developed in
||| |[Data workload assessment model and tool](https://github.com/Microsoft/DataMigrationTeam/tree/master/Data%20Workload%20Assessment%20Model%20and%20Tool)| This tool provides suggested "best fit" target platforms, cloud readiness, and application/database remediation level for a given workload. It offers simple, one-click calculation and report generation that helps to accelerate large estate assessments by providing an automated and uniform target platform decision process.| |[Db2 zOS data assets discovery and assessment package](https://github.com/microsoft/DataMigrationTeam/tree/master/DB2%20zOS%20Data%20Assets%20Discovery%20and%20Assessment%20Package)|After running the SQL script on a database, you can export the results to a file on the file system. Several file formats are supported, including *.csv, so that you can capture the results in external tools such as spreadsheets. This method can be useful if you want to easily share results with teams that do not have the workbench installed.|
-|[IBM Db2 LUW inventory scripts and artifacts](https://github.com/Microsoft/DataMigrationTeam/tree/master/IBM%20Db2%20LUW%20Inventory%20Scripts%20and%20Artifacts)|This asset includes a SQL query that hits IBM Db2 LUW version 11.1 system tables and provides a count of objects by schema and object type, a rough estimate of 'Raw Data' in each schema, and the sizing of tables in each schema, with results stored in a CSV format.|
-|[Db2 LUW pure scale on Azure - setup guide](https://github.com/Microsoft/DataMigrationTeam/blob/master/Whitepapers/db2%20PureScale%20on%20Azure.pdf)|This guide serves as a starting point for a Db2 implementation plan. While business requirements will differ, the same basic pattern applies. This architectural pattern may also be used for OLAP applications on Azure.|
+|[IBM Db2 LUW inventory scripts and artifacts](https://github.com/Microsoft/DataMigrationTeam/tree/master/IBM%20Db2%20LUW%20Inventory%20Scripts%20and%20Artifacts)|This asset includes a SQL query that hits IBM Db2 LUW version 11.1 system tables and provides a count of objects by schema and object type, a rough estimate of "raw data" in each schema, and the sizing of tables in each schema, with results stored in a CSV format.|
+|[Db2 LUW pure scale on Azure - setup guide](https://github.com/Microsoft/DataMigrationTeam/blob/master/Whitepapers/db2%20PureScale%20on%20Azure.pdf)|This guide serves as a starting point for a Db2 implementation plan. Although business requirements will differ, the same basic pattern applies. This architectural pattern can also be used for OLAP applications on Azure.|
The Data SQL Engineering team developed these resources. This team's core charter is to unblock and accelerate complex modernization for data platform migration projects to Microsoft's Azure data platform.
The Data SQL Engineering team developed these resources. This team's core charte
After migration, review the [Post-migration validation and optimization guide](/sql/relational-databases/post-migration-validation-and-optimization-guide).
-For a matrix of the Microsoft and third-party services and tools that are available to assist you with various database and data migration scenarios, as well as specialty tasks, see [Data migration services and tools](../../../dms/dms-tools-matrix.md).
-
-For other migration guides, see [Database Migration](https://datamigration.microsoft.com/).
+For Microsoft and third-party services and tools that are available to assist you with various database and data migration scenarios, see [Data migration services and tools](../../../dms/dms-tools-matrix.md).
-For video content, see:
-- [Overview of the migration journey](https://azure.microsoft.com/resources/videos/overview-of-migration-and-recommended-tools-services/)
+For video content, see [Overview of the migration journey](https://azure.microsoft.com/resources/videos/overview-of-migration-and-recommended-tools-services/).
azure-sql Sql Server To Sql On Azure Vm Individual Databases Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/virtual-machines/sql-server-to-sql-on-azure-vm-individual-databases-guide.md
Title: "SQL Server to SQL Server on Azure VMs: Migration guide"
-description: This guide teaches you to migrate your individual SQL Server databases to SQL Server on Azure VMs.
+ Title: "SQL Server to SQL Server on Azure Virtual Machines: Migration guide"
+description: In this guide, you learn how to migrate your individual SQL Server databases to SQL Server on Azure Virtual Machines.
Last updated 03/19/2021
-# Migration guide: SQL Server to SQL Server on Azure VMs
+# Migration guide: SQL Server to SQL Server on Azure Virtual Machines
+ [!INCLUDE[appliesto--sqlmi](../../includes/appliesto-sqlvm.md)]
-This migration guide teaches you to **discover**, **assess**, and **migrate** your user databases from SQL Server to an instance of SQL Server on Azure Virtual Machines (VMs) by using the backup and restore and log shipping utilizing the [Database Migration Assistant (DMA)](/sql/dma/dma-overview) for assessment.
+In this guide, you learn how to *discover*, *assess*, and *migrate* your user databases from SQL Server to an instance of SQL Server on Azure Virtual Machines by using backup and restore and log shipping, with [Data Migration Assistant](/sql/dma/dma-overview) for assessment.
You can migrate SQL Server running on-premises or on: -- SQL Server on Virtual Machines -- Amazon Web Services (AWS) EC2 -- Amazon Relational Database Service (AWS RDS) -- Compute Engine (Google Cloud Platform - GCP)
+- SQL Server on virtual machines (VMs).
+- Amazon Web Services (AWS) EC2.
+- Amazon Relational Database Service (AWS RDS).
+- Compute Engine (Google Cloud Platform [GCP]).
-For information about additional migration strategies, see the [SQL Server VM migration overview](sql-server-to-sql-on-azure-vm-migration-overview.md). For other migration guides, see [Database Migration](https://docs.microsoft.com/data-migration).
+For information about extra migration strategies, see the [SQL Server VM migration overview](sql-server-to-sql-on-azure-vm-migration-overview.md). For other migration guides, see [Azure Database Migration Guides](https://docs.microsoft.com/data-migration).
## Prerequisites
-Migrating to SQL Server on Azure VMs requires the following:
+Migrating to SQL Server on Azure Virtual Machines requires the following resources:
-- [Database Migration Assistant (DMA)](https://www.microsoft.com/download/details.aspx?id=53595).
+- [Data Migration Assistant](https://www.microsoft.com/download/details.aspx?id=53595).
- An [Azure Migrate project](../../../migrate/create-manage-projects.md).-- A prepared target [SQL Server on Azure VM](../../virtual-machines/windows/create-sql-vm-portal.md) that is the same or greater version than the source SQL Server.
+- A prepared target [SQL Server on Azure Virtual Machines](../../virtual-machines/windows/create-sql-vm-portal.md) instance that's the same or greater version than the SQL Server source.
- [Connectivity between Azure and on-premises](/azure/architecture/reference-architectures/hybrid-networking). - [Choosing an appropriate migration strategy](sql-server-to-sql-on-azure-vm-migration-overview.md#migrate). ## Pre-migration
-Before you begin your migration, discover the topology of your SQL environment and assess the feasibility of your intended migration.
+Before you begin your migration, you need to discover the topology of your SQL environment and assess the feasibility of your intended migration.
### Discover
-Azure Migrate assesses migration suitability of on-premises computers, performs performance-based sizing, and provides cost estimations for running on-premises. To plan for the migration, use Azure Migrate to [identify existing data sources and details about the features](../../../migrate/concepts-assessment-calculation.md) your SQL Server instances use. This process involves scanning the network to identify all of your SQL Server instances in your organization with the version and features in use.
+Azure Migrate assesses migration suitability of on-premises computers, performs performance-based sizing, and provides cost estimations for running on-premises. To plan for the migration, use Azure Migrate to [identify existing data sources and details about the features](../../../migrate/concepts-assessment-calculation.md) your SQL Server instances use. This process involves scanning the network to identify all of your SQL Server instances in your organization with the version and features in use.
> [!IMPORTANT]
-> When choosing a target Azure virtual machine for your SQL Server instance, be sure to consider the [Performance guidelines for SQL Server on Azure VMs](../../virtual-machines/windows/performance-guidelines-best-practices.md).
-
-For additional discovery tools, see [Services and tools](../../../dms/dms-tools-matrix.md#business-justification-phase) available for data migration scenarios.
+> When you choose a target Azure virtual machine for your SQL Server instance, be sure to consider the [Performance guidelines for SQL Server on Azure Virtual Machines](../../virtual-machines/windows/performance-guidelines-best-practices.md).
+For more discovery tools, see the [services and tools](../../../dms/dms-tools-matrix.md#business-justification-phase) available for data migration scenarios.
### Assess [!INCLUDE [assess-estate-with-azure-migrate](../../../../includes/azure-migrate-to-assess-sql-data-estate.md)]
-After you've discovered all of the data sources, use the [Data Migration Assistant (DMA)](/sql/dma/dma-overview) to assess on-premises SQL Server instance(s) migrating to an instance of SQL Server on Azure VM to understand the gaps between the source and target instances.
-
+After you've discovered all the data sources, use [Data Migration Assistant](/sql/dma/dma-overview) to assess on-premises SQL Server instances migrating to an instance of SQL Server on Azure Virtual Machines to understand the gaps between the source and target instances.
> [!NOTE]
-> If you're _not_ upgrading the version of SQL Server, skip this step and move to the [migrate](#migrate) section.
--
-#### Assess user databases
+> If you're _not_ upgrading the version of SQL Server, skip this step and move to the [Migrate](#migrate) section.
-The Data Migration Assistant (DMA) assists your migration to a modern data platform by detecting compatibility issues that can impact database functionality in your new version of SQL Server. DMA recommends performance and reliability improvements for your target environment and also allows you to move your schema, data, and login objects from your source server to your target server.
+#### Assess user databases
-See [assessment](/sql/dma/dma-migrateonpremsql) to learn more.
+Data Migration Assistant assists your migration to a modern data platform by detecting compatibility issues that can affect database functionality in your new version of SQL Server. Data Migration Assistant recommends performance and reliability improvements for your target environment and also allows you to move your schema, data, and login objects from your source server to your target server.
+To learn more, see [Assessment](/sql/dma/dma-migrateonpremsql).
> [!IMPORTANT]
->Based on the type of assessment, the permissions required on the source SQL Server can be different.
- > - For the **feature parity** advisor, the credentials provided to connect to source SQL Server database must be a member of the *sysadmin* server role.
- > - For the compatibility issues advisor, the credentials provided must have at least `CONNECT SQL`, `VIEW SERVER STATE` and `VIEW ANY DEFINITION` permissions.
- > - DMA will highlight the permissions required for the chosen advisor before running the assessment.
+>Based on the type of assessment, the permissions required on the source SQL Server can be different:
+ > - For the *feature parity* advisor, the credentials provided to connect to the source SQL Server database must be a member of the *sysadmin* server role.
+ > - For the *compatibility issues* advisor, the credentials provided must have at least `CONNECT SQL`, `VIEW SERVER STATE`, and `VIEW ANY DEFINITION` permissions.
+ > - Data Migration Assistant will highlight the permissions required for the chosen advisor before running the assessment.
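For the *compatibility issues* advisor, the server-level permissions called out in the note can be granted to a dedicated assessment login ahead of time. This is a minimal sketch; the login name is illustrative.

```sql
-- Grant the minimum server-level permissions the compatibility issues
-- advisor needs to an assessment login. Server-level permissions must be
-- granted from the master database.
USE master;
GO
GRANT CONNECT SQL         TO [dma_assessment_login];   -- illustrative login name
GRANT VIEW SERVER STATE   TO [dma_assessment_login];
GRANT VIEW ANY DEFINITION TO [dma_assessment_login];
GO
```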
+#### Assess the applications
-#### Assess applications
+Typically, an application layer accesses user databases to persist and modify data. Data Migration Assistant can assess the data access layer of an application in two ways:
-Typically, an application layer accesses user databases to persist and modify data. DMA can assess the data access layer of an application in two ways:
+- By using captured [extended events](/sql/relational-databases/extended-events/extended-events) or [SQL Server Profiler traces](/sql/tools/sql-server-profiler/create-a-trace-sql-server-profiler) of your user databases. You can also use the [Database Experimentation Assistant](/sql/dea/database-experimentation-assistant-capture-trace) to create a trace log that can also be used for A/B testing. A sketch of a capture session appears at the end of this section.
+- By using the [Data Access Migration Toolkit (preview)](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit), which provides discovery and assessment of SQL queries within the code and is used to migrate application source code from one database platform to another. This tool supports popular file types like C#, Java, XML, and plain text. For a guide on how to perform a Data Access Migration Toolkit assessment, see the [Use Data Migration Assistant](https://techcommunity.microsoft.com/t5/microsoft-data-migration/using-data-migration-assistant-to-assess-an-application-s-data/ba-p/990430) blog post.
-- Using captured [extended events](/sql/relational-databases/extended-events/extended-events) or [SQL Server Profiler traces ](/sql/tools/sql-server-profiler/create-a-trace-sql-server-profiler) of your user databases. You can also use the [Database Experimentation Assistant](/sql/dea/database-experimentation-assistant-capture-trace) to create a trace log that can also be used for A/B testing.
+During the assessment of user databases, use Data Migration Assistant to [import](/sql/dma/dma-assesssqlonprem#add-databases-and-extended-events-trace-to-assess) captured trace files or Data Access Migration Toolkit files.
-- Using the [Data Access Migration Toolkit (preview)](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) (DAMT), which provides discovery and assessment of SQL queries within the code and is used to migrate application source code from one database platform to another. This tool supports a variety of popular file types including C# and Java, XML, and Plaint Text. For a guide on how to perform a DAMT assessment see the [Use DMAT](https://techcommunity.microsoft.com/t5/microsoft-data-migration/using-data-migration-assistant-to-assess-an-application-s-data/ba-p/990430) blog.
+#### Assessments at scale
-Use DMA to [import](/sql/dma/dma-assesssqlonprem#add-databases-and-extended-events-trace-to-assess) captured trace files or DAMT files during the assessment of user databases.
+If you have multiple servers that require a Data Migration Assistant assessment, you can automate the process by using the [command-line interface](/sql/dma/dma-commandline). Using the interface, you can prepare assessment commands in advance for each SQL Server instance in the scope for migration.
+For summary reporting across large estates, Data Migration Assistant assessments can now be [consolidated into Azure Migrate](/sql/dma/dma-assess-sql-data-estate-to-sqldb).
-#### Scale assessments
+#### Refactor databases with Data Migration Assistant
-If you have multiple servers that require a DMA assessment, you can automate the process through the [command line interface](/sql/dma/dma-commandline). Using the interface, you can prepare assessment commands in advance for each SQL Server instance in the scope for migration.
+Based on the Data Migration Assistant assessment results, you might have a series of recommendations to ensure your user databases perform and function correctly after migration. Data Migration Assistant provides details on the affected objects, along with resources that describe how to resolve each issue. Make sure to resolve all breaking changes and behavior changes before you start production migration.
-For summary reporting across large estates, Data Migration Assistant (DMA) assessments can now be [consolidated into Azure Migrate](/sql/dma/dma-assess-sql-data-estate-to-sqldb).
+For deprecated features, you can choose to run your user databases in their original [compatibility](/sql/t-sql/statements/alter-database-transact-sql-compatibility-level) mode if you want to avoid making these changes and speed up migration. This action will prevent [upgrading your database compatibility](/sql/database-engine/install-windows/compatibility-certification#compatibility-levels-and-database-engine-upgrades) until the deprecated items have been resolved.
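For example, the following T-SQL sketch shows how to check a database's current compatibility level and, once the deprecated items are resolved, raise it. The database name and target level are assumptions; confirm the level that your target SQL Server version supports.

```sql
-- Sketch only: check and change the database compatibility level (name and level are examples).
SELECT name, compatibility_level
FROM sys.databases
WHERE name = N'MyUserDb';

-- After resolving deprecated items, raise the level (150 = SQL Server 2019).
ALTER DATABASE [MyUserDb] SET COMPATIBILITY_LEVEL = 150;
```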
-#### Refactor databases with DMA
-
-Based on the DMA assessment results, you may have a series of recommendations to ensure your user database(s) perform and function correctly after migration. DMA provides details on the impacted objects as well as resources for how to resolve each issue. It is recommended that all breaking changes, and behavior changes are resolved before production migration.
-
-For deprecated features, you can choose to run your user database(s) in their original [compatibility](/sql/t-sql/statements/alter-database-transact-sql-compatibility-level) mode if you wish to avoid making these changes and speed up migration. However, this will prevent [upgrading your database compatibility](/sql/database-engine/install-windows/compatibility-certification#compatibility-levels-and-database-engine-upgrades) until the deprecated items have been resolved.
-
-It is highly recommended that all DMA fixes are scripted and applied to the target SQL Server database during [post-migration](#post-migration).
+You need to script all Data Migration Assistant fixes and apply them to the target SQL Server database during the [post-migration](#post-migration) phase.
> [!CAUTION]
-> Not all SQL Server versions support all compatibility modes. Check that your [target SQL Server version](/sql/t-sql/statements/alter-database-transact-sql-compatibility-level) supports your chosen database compatibility. For example, SQL Server 2019 does not support databases with level 90 compatibility (which is SQL Server 2005). These databases would require, at least, an upgrade to compatibility level 100.
+> Not all SQL Server versions support all compatibility modes. Check that your [target SQL Server version](/sql/t-sql/statements/alter-database-transact-sql-compatibility-level) supports your chosen database compatibility. For example, SQL Server 2019 doesn't support databases with level 90 compatibility (which is SQL Server 2005). These databases would require, at least, an upgrade to compatibility level 100.
>

## Migrate
-After you have completed the pre-migration steps, you are ready to migrate the user databases and components. Migrate your databases using your preferred [migration method](sql-server-to-sql-on-azure-vm-migration-overview.md#migrate).
+After you've completed the pre-migration steps, you're ready to migrate the user databases and components. Migrate your databases by using your preferred [migration method](sql-server-to-sql-on-azure-vm-migration-overview.md#migrate).
-The following provides steps for performing either a migration using backup and restore, or a minimal downtime migration using backup and restore along with log shipping.
+The following sections provide steps for performing either a migration by using backup and restore or a minimal downtime migration by using backup and restore along with log shipping.
### Backup and restore
-To perform a standard migration using backup and restore, follow these steps:
+To perform a standard migration by using backup and restore:
-1. Set up connectivity to the target SQL Server on Azure VM, based on your requirements. See [Connect to a SQL Server Virtual Machine on Azure (Resource Manager)](../../virtual-machines/windows/ways-to-connect-to-sql.md).
-1. Pause/stop any applications that are using databases intended for migration.
-1. Ensure user database(s) are inactive using [single user mode](/sql/relational-databases/databases/set-a-database-to-single-user-mode).
+1. Set up connectivity to SQL Server on Azure Virtual Machines based on your requirements. For more information, see [Connect to a SQL Server virtual machine on Azure (Resource Manager)](../../virtual-machines/windows/ways-to-connect-to-sql.md).
+1. Pause or stop any applications that are using databases intended for migration.
+1. Ensure user databases are inactive by using [single user mode](/sql/relational-databases/databases/set-a-database-to-single-user-mode).
1. Perform a full database backup to an on-premises location.
-1. Copy your on-premises backup file(s) to your VM using remote desktop, [Azure Data Explorer](/azure/data-explorer/data-explorer-overview), or the [AZCopy command line utility](../../../storage/common/storage-use-azcopy-v10.md) (> 2-TB backups recommended).
-1. Restore full database backup(s) to the SQL Server on Azure VM.
-
-### Log shipping (minimize downtime)
+1. Copy your on-premises backup files to your VM by using remote desktop, [Azure Data Explorer](/azure/data-explorer/data-explorer-overview), or the [AZCopy command-line utility](../../../storage/common/storage-use-azcopy-v10.md). (AzCopy is recommended for backups larger than 2 TB.)
+1. Restore the full database backups to SQL Server on Azure Virtual Machines. (A T-SQL sketch of the backup and restore steps follows this list.)
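The following T-SQL is a minimal sketch of the single-user, backup, and restore steps above. The database name, file paths, and logical file names are assumptions; adjust them to your environment, and run the restore on the target VM after the backup file has been copied.

```sql
-- On the source server: quiesce the database and take a full backup (names and paths are examples).
ALTER DATABASE [MyUserDb] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
BACKUP DATABASE [MyUserDb]
    TO DISK = N'D:\Backups\MyUserDb.bak'
    WITH COMPRESSION, CHECKSUM, STATS = 10;

-- On the target SQL Server on Azure VM, after copying the .bak file (for example to F:\Backups):
RESTORE DATABASE [MyUserDb]
    FROM DISK = N'F:\Backups\MyUserDb.bak'
    WITH MOVE N'MyUserDb' TO N'F:\Data\MyUserDb.mdf',
         MOVE N'MyUserDb_log' TO N'G:\Log\MyUserDb_log.ldf',
         RECOVERY, STATS = 10;
```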
-To perform a minimal downtime migration using backup, restore, and log shipping, follow these steps:
+### Log shipping (minimize downtime)
-1. Set up connectivity to target SQL Server on Azure VM, based on your requirements. See [Connect to a SQL Server Virtual Machine on Azure (Resource Manager)](../../virtual-machines/windows/ways-to-connect-to-sql.md).
-1. Ensure on-premise User Database(s) to be migrated are in full or bulk-logged recovery model.
-1. Perform a full database backup to an on-premises location and modify any existing full database backups jobs to use [COPY_ONLY](/sql/relational-databases/backup-restore/copy-only-backups-sql-server) keyword to preserve the log chain.
-1. Copy your on-premises backup file(s) to your VM using remote desktop, [Azure Data Explorer](/azure/data-explorer/data-explorer-overview), or the [AZCopy command line utility](../../../storage/common/storage-use-azcopy-v10.md) (>1-TB backups recommended).
-1. Restore Full Database backup(s) on the SQL Server on Azure VM.
-1. Set up [log shipping](/sql/database-engine/log-shipping/configure-log-shipping-sql-server) between on-premise database and target SQL Server on Azure VM. Be sure not to reinitialize the database(s) as this has already been completed in the previous steps.
-1. **Cut over** to the target server.
- 1. Pause/stop applications using databases to be migrated.
- 1. Ensure user database(s) are inactive using [single user mode](/sql/relational-databases/databases/set-a-database-to-single-user-mode).
- 1. When ready, perform a log shipping [controlled fail-over](/sql/database-engine/log-shipping/fail-over-to-a-log-shipping-secondary-sql-server) of on-premise database(s) to target SQL Server on Azure VM.
+To perform a minimal downtime migration by using backup and restore and log shipping:
+1. Set up connectivity to SQL Server on Azure Virtual Machines based on your requirements. For more information, see [Connect to a SQL Server virtual machine on Azure (Resource Manager)](../../virtual-machines/windows/ways-to-connect-to-sql.md).
+1. Ensure the on-premises user databases to be migrated use the full or bulk-logged recovery model.
+1. Perform a full database backup to an on-premises location, and modify any existing full database backup jobs to use the [COPY_ONLY](/sql/relational-databases/backup-restore/copy-only-backups-sql-server) keyword to preserve the log chain. (A T-SQL sketch of these backups follows this list.)
+1. Copy your on-premises backup files to your VM by using remote desktop, [Azure Data Explorer](/azure/data-explorer/data-explorer-overview), or the [AZCopy command-line utility](../../../storage/common/storage-use-azcopy-v10.md). (AzCopy is recommended for backups larger than 1 TB.)
+1. Restore full database backups on SQL Server on Azure Virtual Machines.
+1. Set up [log shipping](/sql/database-engine/log-shipping/configure-log-shipping-sql-server) between the on-premises database and SQL Server on Azure Virtual Machines. Be sure not to reinitialize the databases because this task was already completed in the previous steps.
+1. Cut over to the target server.
+ 1. Pause or stop applications that are using the databases to be migrated.
+ 1. Ensure user databases are inactive by using [single user mode](/sql/relational-databases/databases/set-a-database-to-single-user-mode).
+ 1. When you're ready, perform a log shipping [controlled failover](/sql/database-engine/log-shipping/fail-over-to-a-log-shipping-secondary-sql-server) of on-premises databases to SQL Server on Azure Virtual Machines.
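The recovery-model check and the backups from steps 2 and 3 might look like the following sketch. The database name and backup paths are assumptions.

```sql
-- Sketch: confirm the recovery model before setting up log shipping (names and paths are examples).
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = N'MyUserDb';   -- expect FULL or BULK_LOGGED

-- Full backup used to initialize the secondary on the Azure VM.
BACKUP DATABASE [MyUserDb]
    TO DISK = N'D:\Backups\MyUserDb_full.bak'
    WITH COMPRESSION, CHECKSUM, STATS = 10;

-- Any other scheduled full backups that run during the migration window should add COPY_ONLY
-- so they don't interfere with the backup chain used for log shipping.
BACKUP DATABASE [MyUserDb]
    TO DISK = N'D:\Backups\MyUserDb_copyonly.bak'
    WITH COPY_ONLY, COMPRESSION, CHECKSUM, STATS = 10;
```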
+### Migrate objects outside user databases
-### Migrating objects outside user database(s)
+More SQL Server objects might be required for the seamless operation of your user databases post migration.
-There may be additional SQL Server objects that are required for the seamless operation of your user databases post migration.
+The following table provides a list of components and recommended migration methods that can be completed before or after migration of your user databases.
-The following table provides a list components and recommended migration methods that can be completed before or after migration of your User databases:
--
-| **Feature** | **Component** | **Migration Method(s)** |
+| **Feature** | **Component** | **Migration methods** |
| | | |
-| **Databases** | Model | Script with SQL Server Management Studio |
-|| TempDB | Plan to move TempDB onto [Azure VM temporary disk (SSD](../../virtual-machines/windows/performance-guidelines-best-practices.md#temporary-disk)) for best performance. Be sure to pick a VM size that has a sufficient local SSD to accommodate your TempDB. |
-|| User databases with Filestream | Use the [Backup and restore](../../virtual-machines/windows/migrate-to-vm-from-sql-server.md#back-up-and-restore) methods for migration. DMA does not support databases with Filestream. |
-| **Security** | SQL Server and Windows Logins | Use DMA to [migrate user logins](/sql/dma/dma-migrateserverlogins). |
-|| SQL Server roles | Script with SQL Server Management Studio |
-|| Cryptographic providers | Recommend [converting to use Azure Key Vault Service](../../virtual-machines/windows/azure-key-vault-integration-configure.md). This procedure uses the [SQL VM resource provider](../../virtual-machines/windows/sql-agent-extension-manually-register-single-vm.md). |
-| **Server objects** | Backup devices | Replace with database backup using [Azure Backup Service](../../../backup/backup-sql-server-database-azure-vms.md) or write backups to [Azure Storage](../../virtual-machines/windows/azure-storage-sql-server-backup-restore-use.md) (SQL Server 2012 SP1 CU2 +). This procedure uses the [SQL VM resource provider](../../virtual-machines/windows/sql-agent-extension-manually-register-single-vm.md).|
-|| Linked Servers | Script with SQL Server Management Studio. |
-|| Server Triggers | Script with SQL Server Management Studio. |
-| **Replication** | Local Publications | Script with SQL Server Management Studio. |
-|| Local Subscribers | Script with SQL Server Management Studio. |
-| **Polybase** | Polybase | Script with SQL Server Management Studio. |
-| **Management** | Database Mail | Script with SQL Server Management Studio. |
+| **Databases** | Model | Script with SQL Server Management Studio. |
+|| TempDB | Plan to move tempdb onto the [Azure VM temporary disk (SSD)](../../virtual-machines/windows/performance-guidelines-best-practices.md#temporary-disk) for best performance. Be sure to pick a VM size that has a sufficient local SSD to accommodate your tempdb. (A T-SQL sketch for moving the tempdb files follows this table.) |
+|| User databases with FileStream | Use the [Backup and restore](../../virtual-machines/windows/migrate-to-vm-from-sql-server.md#back-up-and-restore) methods for migration. Data Migration Assistant doesn't support databases with FileStream. |
+| **Security** | SQL Server and Windows logins | Use Data Migration Assistant to [migrate user logins](/sql/dma/dma-migrateserverlogins). |
+|| SQL Server roles | Script with SQL Server Management Studio. |
+|| Cryptographic providers | Recommend [converting to use Azure Key Vault](../../virtual-machines/windows/azure-key-vault-integration-configure.md). This procedure uses the [SQL VM resource provider](../../virtual-machines/windows/sql-agent-extension-manually-register-single-vm.md). |
+| **Server objects** | Backup devices | Replace with database backup by using [Azure Backup](../../../backup/backup-sql-server-database-azure-vms.md), or write backups to [Azure Storage](../../virtual-machines/windows/azure-storage-sql-server-backup-restore-use.md) (SQL Server 2012 SP1 CU2 +). This procedure uses the [SQL VM resource provider](../../virtual-machines/windows/sql-agent-extension-manually-register-single-vm.md).|
+|| Linked servers | Script with SQL Server Management Studio. |
+|| Server triggers | Script with SQL Server Management Studio. |
+| **Replication** | Local publications | Script with SQL Server Management Studio. |
+|| Local subscribers | Script with SQL Server Management Studio. |
+| **PolyBase** | PolyBase | Script with SQL Server Management Studio. |
+| **Management** | Database mail | Script with SQL Server Management Studio. |
| **SQL Server Agent** | Jobs | Script with SQL Server Management Studio. |
|| Alerts | Script with SQL Server Management Studio. |
|| Operators | Script with SQL Server Management Studio. |
|| Proxies | Script with SQL Server Management Studio. |
-| **Operating System** | Files, file shares | Make a note of any additional files or file shares that are used by your SQL Servers and replicate on the Azure VM target. |
-
+| **Operating system** | Files, file shares | Make a note of any other files or file shares that are used by your SQL Server instances and replicate them on the Azure Virtual Machines target. |
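If you do relocate tempdb to the local SSD (typically the D: drive on an Azure VM), the move is a matter of updating the file locations and restarting the SQL Server service. The default logical file names (tempdev, templog) and the D:\ path below are assumptions; check sys.master_files for the names in your instance.

```sql
-- Sketch: point the tempdb files at the Azure VM local SSD. The change takes effect after the
-- SQL Server service restarts; make sure the target folder exists (or is re-created at startup).
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = N'D:\TempDb\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = N'D:\TempDb\templog.ldf');
```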
## Post-migration
-After you have successfully completed the migration stage, go through a series of post-migration tasks to ensure that everything is functioning as smoothly and efficiently as possible.
+After you've successfully completed the migration stage, you need to complete a series of post-migration tasks to ensure that everything is functioning as smoothly and efficiently as possible.
### Remediate applications
-After the data is migrated to the target environment, all the applications that formerly consumed the source need to start consuming the target. Accomplishing this may in some cases require changes to the applications.
+After the data is migrated to the target environment, all the applications that formerly consumed the source need to start consuming the target. Accomplishing this task might require changes to the applications in some cases.
-Apply any Database Migration Assistant recommended fixes to user database(s). It is recommended these are scripted to ensure consistency and to allow for automation.
+Apply any fixes recommended by Data Migration Assistant to user databases. You need to script these fixes to ensure consistency and allow for automation.
### Perform tests
-The test approach for database migration consists of performing the following activities:
+The test approach to database migration consists of the following activities:
-1. **Develop validation tests.** Use SQL queries to test database migrations. Create validation queries to run against both the source and target databases. Your validation queries should cover the scope you have defined.
-2. **Set up test environment.** The test environment should contain a copy of the source database and the target database. Be sure to isolate the test environment.
-3. **Run validation tests.** Run the validation tests against the source and the target, and then analyze the results.
-4. **Run performance tests.** Run performance test against the source and target, and then analyze and compare the results.
+1. **Develop validation tests**: To test the database migration, you need to use SQL queries. Create validation queries to run against both the source and target databases. Your validation queries should cover the scope you've defined. (A minimal example follows this list.)
+1. **Set up a test environment**: The test environment should contain a copy of the source database and the target database. Be sure to isolate the test environment.
+1. **Run validation tests**: Run validation tests against the source and the target, and then analyze the results.
+1. **Run performance tests**: Run performance tests against the source and target, and then analyze and compare the results.
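For example, a simple validation query can compare row counts and a rough content checksum for a table between the source and target databases. Run the same statement on both sides and compare the output; the table name is a placeholder, and CHECKSUM is a coarse check that can miss some differences.

```sql
-- Minimal validation sketch: run against both the source and the target, then compare the results.
-- dbo.SalesOrder is a placeholder table name.
SELECT
    COUNT_BIG(*)              AS row_count,
    CHECKSUM_AGG(CHECKSUM(*)) AS content_checksum
FROM dbo.SalesOrder;
```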
> [!TIP]
-> Use the [Database Experimentation Assistant (DEA)](/sql/dea/database-experimentation-assistant-overview) to assist with evaluating the target SQL Server performance.
-
+> Use the [Database Experimentation Assistant](/sql/dea/database-experimentation-assistant-overview) to assist with evaluating the target SQL Server performance.
### Optimize
-The post migration phase is crucial for reconciling any issues with data accuracy and completeness, as well as addressing potential performance issues with the workload.
+The post-migration phase is crucial for reconciling any data accuracy issues, verifying completeness, and addressing potential performance issues with the workload.
-For more information about these issues and specific steps to mitigate them, see the following resources:
+For more information about these issues and the steps to mitigate them, see:
-- [Post-migration Validation and Optimization Guide.](/sql/relational-databases/post-migration-validation-and-optimization-guide)-- [Tuning performance in Azure SQL Virtual Machines](../../virtual-machines/windows/performance-guidelines-best-practices.md).-- [Azure cost optimization center](https://azure.microsoft.com/overview/cost-optimization/).
+- [Post-migration validation and optimization guide](/sql/relational-databases/post-migration-validation-and-optimization-guide)
+- [Tuning performance in Azure SQL virtual machines](../../virtual-machines/windows/performance-guidelines-best-practices.md)
+- [Azure cost optimization center](https://azure.microsoft.com/overview/cost-optimization/)
## Next steps -- To check the availability of services applicable to SQL Server see the [Azure Global infrastructure center](https://azure.microsoft.com/global-infrastructure/services/?regions=all&amp;products=synapse-analytics,virtual-machines,sql-database)--- For a matrix of the Microsoft and third-party services and tools that are available to assist you with various database and data migration scenarios as well as specialty tasks, see the article [Service and tools for data migration.](../../../dms/dms-tools-matrix.md)--- To learn more about Azure SQL see:
+- To check the availability of services that apply to SQL Server, see the [Azure global infrastructure center](https://azure.microsoft.com/global-infrastructure/services/?regions=all&amp;products=synapse-analytics,virtual-machines,sql-database).
+- For a matrix of Microsoft and third-party services and tools that are available to assist you with various database and data migration scenarios and specialty tasks, see [Services and tools for data migration](../../../dms/dms-tools-matrix.md).
+- To learn more about Azure SQL, see:
- [Deployment options](../../azure-sql-iaas-vs-paas-what-is-overview.md)
- - [SQL Server on Azure VMs](../../virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview.md)
- - [Azure total Cost of Ownership Calculator](https://azure.microsoft.com/pricing/tco/calculator/)
+ - [SQL Server on Azure Virtual Machines](../../virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview.md)
+ - [Azure Total Cost of Ownership (TCO) Calculator](https://azure.microsoft.com/pricing/tco/calculator/)
+- To learn more about the framework and adoption cycle for cloud migrations, see:
+ - [Cloud Adoption Framework for Azure](/azure/cloud-adoption-framework/migrate/azure-best-practices/contoso-migration-scale)
+ - [Best practices for costing and sizing workloads for migration to Azure](/azure/cloud-adoption-framework/migrate/azure-best-practices/migrate-best-practices-costs)
-- To learn more about the framework and adoption cycle for Cloud migrations, see
- - [Cloud Adoption Framework for Azure](/azure/cloud-adoption-framework/migrate/azure-best-practices/contoso-migration-scale)
- - [Best practices for costing and sizing workloads migrate to Azure](/azure/cloud-adoption-framework/migrate/azure-best-practices/migrate-best-practices-costs)
--- For information about licensing, see
+- For information about licensing, see:
- [Bring your own license with the Azure Hybrid Benefit](../../virtual-machines/windows/licensing-model-azure-hybrid-benefit-ahb-change.md) - [Get free extended support for SQL Server 2008 and SQL Server 2008 R2](../../virtual-machines/windows/sql-server-2008-extend-end-of-support.md) --- To assess the Application access layer, see [Data Access Migration Toolkit (Preview)](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit)-- For details on how to perform Data Access Layer A/B testing see [Database Experimentation Assistant](/sql/dea/database-experimentation-assistant-overview).
+- To assess the application access layer, see [Data Access Migration Toolkit (preview)](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit).
+- For information about how to perform A/B testing for the data access layer, see [Overview of Database Experimentation Assistant](/sql/dea/database-experimentation-assistant-overview).
backup Backup Azure File Folder Backup Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-file-folder-backup-faq.md
Title: Microsoft Azure Recovery Services (MARS) Agent – FAQ description: Addresses common questions about backing up files and folders with Azure Backup. Previously updated : 07/29/2019 Last updated : 04/05/2021
This warning can appear even though you've configured a backup policy, when the
* When the server or the settings have been recovered to a known good state, backup schedules can become unsynchronized. * If you receive this warning, [configure](backup-azure-manage-windows-server.md) the backup policy again, and then run an on-demand backup to resynchronize the local server with Azure.
+### I see a few jobs are stuck in the In Progress state for a long time under Backup Jobs in the Azure portal. How can I resolve these?
+
+This can happen if a job was unable to complete for reasons such as network connectivity issues, machine shutdown, or process termination. No user action is required here. These jobs will automatically be marked as **Failed** after 30 days. [Learn more](backup-windows-with-mars-agent.md#run-an-on-demand-backup) about running an on-demand backup job by using the MARS agent.
+ ## Manage the backup cache folder ### What's the minimum size requirement for the cache folder?
As a safety measure, Azure Backup will preserve the most recent recovery point,
If an ongoing restore job is canceled, the restore process stops. All files restored before the cancellation stay in configured destination (original or alternate location), without any rollbacks.
-### Does the MARS agent back up and restore ACLs set on files, folders, and volumes?
+### Does the MARS agent backup and restore ACLs set on files, folders, and volumes?
* The MARS agent backs up ACLs set on files, folders, and volumes * For Volume Restore recovery option, the MARS agent provides an option to skip restoring ACL permissions to the file or folder being recovered
bastion Quickstart Host Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/bastion/quickstart-host-portal.md
You can connect to a virtual machine (VM) through your browser using the Azure p
* Required VM ports: * Inbound ports: RDP (3389)
+ >[!NOTE]
+ >The use of Azure Bastion with Azure Private DNS Zones is not supported at this time. Before you begin, please make sure that the virtual network where you plan to deploy your Bastion resource is not linked to a private DNS zone.
+ >
+ ### <a name="values"></a>Example values You can use the following example values when creating this configuration, or you can substitute your own.
batch Batch Quota Limit https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-quota-limit.md
Title: Service quotas and limits description: Learn about default Azure Batch quotas, limits, and constraints, and how to request quota increases Previously updated : 01/28/2021 Last updated : 04/06/2021
Keep these quotas in mind as you design and scale up your Batch workloads. For e
You can run multiple Batch workloads in a single Batch account, or distribute your workloads among Batch accounts that are in the same subscription but in different Azure regions.
-If you plan to run production workloads in Batch, you may need to increase one or more of the quotas above the default. If you want to raise a quota, you can open an online [customer support request](#increase-a-quota) at no charge.
+If you plan to run production workloads in Batch, you may need to increase one or more of the quotas above the default. To raise a quota, you can [request a quota increase](#increase-a-quota) at no charge.
## Resource quotas
Also note that quotas are not guaranteed values. Quotas can vary based on change
### Cores quotas in Batch service mode
-Core quotas exist for each VM series supported by Batch and are displayed on the **Quotas** page in the portal. VM series quota limits can be updated with a support request, as detailed below. For dedicated nodes, Batch enforces a core quota limit for each VM series as well as a total core quota limit for the entire Batch account. For low priority nodes, Batch enforces only a total core quota for the Batch account without any distinction between different VM series.
+Core quotas exist for each VM series supported by Batch. These core quotas are displayed on the **Quotas** page in the Azure portal. VM series quota limits can be updated with a support request, as detailed below. For dedicated nodes, Batch enforces a core quota limit for each VM series, as well as a total core quota limit for the entire Batch account. For low priority nodes, Batch enforces only a total core quota for the Batch account without any distinction between different VM series.
### Cores quotas in user subscription mode
To view your Batch account quotas in the [Azure portal](https://portal.azure.com
## Increase a quota
-You can request a quota increase for your Batch account or your subscription using the [Azure portal](https://portal.azure.com). The type of quota increase depends on the pool allocation mode of your Batch account. To request a quota increase, you must include the VM series you would like to increase the quota for. When the quota increase is applied, it is applied to all series of VMs.
+You can request a quota increase for your Batch account or your subscription using the [Azure portal](https://portal.azure.com) or by using the [Azure Quota REST API](#azure-quota-rest-api).
-1. Select the **Help + support** tile on your portal dashboard, or the question mark (**?**) in the upper-right corner of the portal.
-1. Select **New support request** > **Basics**.
-1. In **Basics**:
+The type of quota increase depends on the pool allocation mode of your Batch account. To request a quota increase, you must include the VM series you would like to increase the quota for. When the quota increase is applied, it is applied to all series of VMs.
- 1. **Issue Type** > **Service and subscription limits (quotas)**
+Once you've submitted your support request, Azure support will contact you. Quota requests may be completed within a few minutes or up to two business days.
- 1. Select your subscription.
+### Azure portal
- 1. **Quota type** > **Batch**
+1. From the **Quotas** page, select **Request quota increase**. Alternately, you can select the **Help + support** tile on your portal dashboard (or from the question mark (**?**) in the upper-right corner of the portal), and then select **New support request.**
- Select **Next**.
+1. In **Basics**:
+
+ 1. For **Issue Type**, select **Service and subscription limits (quotas)**.
+ 1. Select your subscription.
+ 1. For **Quota type**, select **Batch**.
+ 1. Select **Next** to continue.
1. In **Details**:
- 1. In **Provide details**, specify the location, quota type, and Batch account.
+ 1. In the **Provide details** section, specify the location, quota type, and Batch account (if applicable), then select the quota(s) to increase.
:::image type="content" source="media/batch-quota-limit/quota-increase.png" alt-text="Screenshot of the Quota details screen when requesting a quota increase."::: Quota types include:
- * **Per Batch account**
- Values specific to a single Batch account, including dedicated and low-priority cores, and number of jobs and pools.
+ - **Per Batch account**
+ Use this option to request quota increases specific to a single Batch account, including dedicated and low-priority cores, and the number of jobs and pools.
- * **Per region**
- Values that apply to all Batch accounts in a region and includes the number of Batch accounts per region per subscription.
+ If you select this option, specify the Batch account to which this request should apply, and then select the quota(s) you'd like to update. Provide the new limit you are requesting for each resource.
- Low-priority quota is a single value across all VM series. If you need constrained SKUs, you must select **Low-priority cores** and include the VM families to request.
+ Low-priority quota is a single value across all VM series. If you need constrained SKUs, you must select **Low-priority cores** and include the VM families to request.
- 1. Select a **Severity** according to your [business impact](https://aka.ms/supportseverity).
+ - **All accounts in this region**
+ Use this option to request quota increases that apply to all Batch accounts in a region, such as the number of Batch accounts per region per subscription.
- Select **Next**.
+ 1. In **Support method**, select a **Severity** according to your [business impact](https://aka.ms/supportseverity) and your preferred contact method and support language.
-1. In **Contact information**:
+ 1. In **Contact information**, verify and enter the required contact details.
- 1. Select a **Preferred contact method**.
+1. Select **Review + create**, then select **Create** to submit the support request.
- 1. Verify and enter the required contact details.
+### Azure Quota REST API
- Select **Create** to submit the support request.
+You can use the Azure Quota REST API to request a quota increase at the subscription level or at the Batch account level.
-Once you've submitted your support request, Azure support will contact you. Quota requests may be completed within a few minutes or up to two business days.
+For details and examples, see [Request a quota increase using the Azure Support REST API](/rest/api/support/quota-payload#azure-batch).
## Related quotas for VM pools
-Batch pools in the Virtual Machine Configuration deployed in an Azure virtual network automatically allocate additional Azure networking resources. The following resources are needed for each 50 pool nodes in a virtual network:
+[Batch pools in the Virtual Machine Configuration deployed in an Azure virtual network](batch-virtual-network.md) automatically allocate additional Azure networking resources. These resources are created in the subscription that contains the virtual network supplied when creating the Batch pool.
+
+The following resources are created for each 100 pool nodes in a virtual network:
- One [network security group](../virtual-network/network-security-groups-overview.md#network-security-groups) - One [public IP address](../virtual-network/public-ip-addresses.md) - One [load balancer](../load-balancer/load-balancer-overview.md)
-These resources are allocated in the subscription that contains the virtual network supplied when creating the Batch pool. These resources are limited by the subscription's [resource quotas](../azure-resource-manager/management/azure-subscription-service-limits.md). If you plan large pool deployments in a virtual network, check the subscription's quotas for these resources. If needed, request an increase in the Azure portal by selecting **Help + support**.
+These resources are limited by the subscription's [resource quotas](../azure-resource-manager/management/azure-subscription-service-limits.md). If you plan large pool deployments in a virtual network, you may need to request a quota increase for one or more of these resources.
## Next steps
-* [Create an Azure Batch account using the Azure portal](batch-account-create-portal.md).
-* Learn about the [Batch service workflow and primary resources](batch-service-workflow-features.md) such as pools, nodes, jobs, and tasks.
-* Learn about [Azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md).
-
-[account_quotas]: ./media/batch-quota-limit/accountquota_portal.png
-[quota_increase]: ./media/batch-quota-limit/quota-increase.png
+- Learn about the [Batch service workflow and primary resources](batch-service-workflow-features.md) such as pools, nodes, jobs, and tasks.
+- Learn about [Azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md).
cdn Cdn How Caching Works https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cdn/cdn-how-caching-works.md
Azure CDN supports the following HTTP cache-directive headers, which define cach
- When used in an HTTP request from the client to the CDN POP, `Cache-Control` is ignored by all Azure CDN profiles, by default. - When used in an HTTP response from the client to the CDN POP: - **Azure CDN Standard/Premium from Verizon** and **Azure CDN Standard from Microsoft** support all `Cache-Control` directives.
+ - **Azure CDN Standard/Premium from Verizon** and **Azure CDN Standard from Microsoft** honor the caching behaviors for Cache-Control directives in [RFC 7234 - Hypertext Transfer Protocol (HTTP/1.1): Caching (ietf.org)](https://tools.ietf.org/html/rfc7234#section-5.2.2.8).
- **Azure CDN Standard from Akamai** supports only the following `Cache-Control` directives; all others are ignored: - `max-age`: A cache can store the content for the number of seconds specified. For example, `Cache-Control: max-age=5`. This directive specifies the maximum amount of time the content is considered to be fresh. - `no-cache`: Cache the content, but validate the content every time before delivering it from the cache. Equivalent to `Cache-Control: max-age=0`.
The following table describes the default caching behavior for the Azure CDN pro
## Next steps - To learn how to customize and override the default caching behavior on the CDN through caching rules, see [Control Azure CDN caching behavior with caching rules](cdn-caching-rules.md). -- To learn how to use query strings to control caching behavior, see [Control Azure CDN caching behavior with query strings](cdn-query-string.md).
+- To learn how to use query strings to control caching behavior, see [Control Azure CDN caching behavior with query strings](cdn-query-string.md).
cognitive-services Luis Reference Prebuilt Sentiment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/LUIS/luis-reference-prebuilt-sentiment.md
Previously updated : 07/01/2020 Last updated : 04/06/2021 # Sentiment analysis
If Sentiment analysis is configured, the LUIS json response includes sentiment a
LUIS uses Text Analytics V2.
+Sentiment Analysis is configured when publishing your application. See [how to publish an app](./luis-how-to-publish-app.md) for more information.
+ ## Resolution for sentiment Sentiment data is a score between 1 and 0 indicating the positive (closer to 1) or negative (closer to 0) sentiment of the data.
For all other cultures, the response is:
## Next steps
-Learn more about the [V3 prediction endpoint](luis-migration-api-v3.md).
+Learn more about the [V3 prediction endpoint](luis-migration-api-v3.md).
cognitive-services How To Audio Content Creation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-audio-content-creation.md
After signing up for the Azure account, you need to create a Speech resource und
It takes a few moments to deploy your new Speech resource. Once the deployment is complete, you can start the Audio Content Creation journey. >[!NOTE]
- > If you plan to use neural voices, make sure that you create your resource in [a region that supports neural voices](regions.md#standard-and-neural-voices).
+ > If you plan to use neural voices, make sure that you create your resource in [a region that supports neural voices](regions.md#neural-and-standard-voices).
### Step 3 - Log into the Audio Content Creation with your Azure account and Speech resource
cognitive-services How To Develop Custom Commands Application https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-develop-custom-commands-application.md
Another way to customize Custom Commands responses is to select an output voice.
> ![Screenshot showing sample sentences and parameters.](media/custom-commands/select-custom-voice.png) > [!NOTE]
-> For public voices, neural types are available only for specific regions. For more information, see [Speech service supported regions](./regions.md#standard-and-neural-voices).
+> For public voices, neural types are available only for specific regions. For more information, see [Speech service supported regions](./regions.md#neural-and-standard-voices).
> > You can create custom voices on the **Custom Voice** project page. For more information, see [Get started with Custom Voice](./how-to-custom-voice.md).
cognitive-services How To Use Logging https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-use-logging.md
# Enable logging in the Speech SDK
-Logging to file is an optional feature for the Speech SDK. During development logging provides additional information and diagnostics from the Speech SDK's core components. It can be enabled by setting the property `Speech_LogFilename` on a speech configuration object to the location and name of the log file. Logging will be activated globally once a recognizer is created from that configuration and can't be disabled afterwards. You can't change the name of a log file during a running logging session.
+Logging to file is an optional feature for the Speech SDK. During development, logging provides additional information and diagnostics from the Speech SDK's core components. It can be enabled by setting the property `Speech_LogFilename` on a speech configuration object to the location and name of the log file. Logging is handled by a static class in the Speech SDK's native library. You can turn on logging for any Speech SDK recognizer or synthesizer instance. All instances in the same process write log entries to the same log file.
> [!NOTE] > Logging is available since Speech SDK version 1.4.0 in all supported Speech SDK programming languages, with the exception of JavaScript.
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/language-support.md
Below neural voices are in public preview.
> [!IMPORTANT] > Voices in public preview are only available in 3 service regions: East US, West Europe and Southeast Asia.
-For more information about regional availability, see [regions](regions.md#standard-and-neural-voices).
+For more information about regional availability, see [regions](regions.md#neural-and-standard-voices).
To learn how you can configure and adjust neural voices, such as Speaking Styles, see [Speech Synthesis Markup Language](speech-synthesis-markup.md#adjust-speaking-styles).
To learn how you can configure and adjust neural voices, such as Speaking Styles
### Standard voices
-More than 75 standard voices are available in over 45 languages and locales, which allow you to convert text into synthesized speech. For more information about regional availability, see [regions](regions.md#standard-and-neural-voices).
+More than 75 standard voices are available in over 45 languages and locales, which allow you to convert text into synthesized speech. For more information about regional availability, see [regions](regions.md#neural-and-standard-voices).
> [!NOTE] > With two exceptions, standard voices are created from samples that use a 16 khz sample rate.
cognitive-services Tutorial Voice Enable Your Bot Speech Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/tutorial-voice-enable-your-bot-speech-sdk.md
The client app that you'll create in this tutorial uses a handful of Azure servi
If you'd like to use a different region for this tutorial these factors may limit your choices: * Ensure that you use a [supported Azure region](regions.md#voice-assistants).
-* The Direct Line Speech channel uses the text-to-speech service, which has standard and neural voices. Neural voices are [limited to specific Azure regions](regions.md#standard-and-neural-voices).
+* The Direct Line Speech channel uses the text-to-speech service, which has neural and standard voices. Neural and standard voices are available in these [Azure regions](regions.md#neural-and-standard-voices).
For more information about regions, see [Azure locations](https://azure.microsoft.com/global-infrastructure/locations/).
If you get an error message in your main app window, use this table to identify
|Error (AuthenticationFailure) : WebSocket Upgrade failed with an authentication error (401). Check for correct subscription key (or authorization token) and region name| In the Settings page of the app, make sure you entered the Speech Subscription key and its region correctly.<br>Make sure your speech key and key region were entered correctly. | |Error (ConnectionFailure) : Connection was closed by the remote host. Error code: 1011. Error details: We could not connect to the bot before sending a message | Make sure you [checked the "Enable Streaming Endpoint"](#register-the-direct-line-speech-channel) box and/or [toggled **Web sockets**](#enable-web-sockets) to On.<br>Make sure your Azure App Service is running. If it is, try restarting your App Service.| |Error (ConnectionFailure) : Connection was closed by the remote host. Error code: 1002. Error details: The server returned status code '503' when status code '101' was expected | Make sure you [checked the "Enable Streaming Endpoint"](#register-the-direct-line-speech-channel) box and/or [toggled **Web sockets**](#enable-web-sockets) to On.<br>Make sure your Azure App Service is running. If it is, try restarting your App Service.|
-|Error (ConnectionFailure) : Connection was closed by the remote host. Error code: 1011. Error details: Response status code does not indicate success: 500 (InternalServerError)| Your bot specified a neural voice in its output Activity [Speak](https://github.com/microsoft/botframework-sdk/blob/master/specs/botframework-activity/botframework-activity.md#speak) field, but the Azure region associated with your Speech subscription key does not support neural voices. See [Standard and neural voices](./regions.md#standard-and-neural-voices).|
+|Error (ConnectionFailure) : Connection was closed by the remote host. Error code: 1011. Error details: Response status code does not indicate success: 500 (InternalServerError)| Your bot specified a neural voice in its output Activity [Speak](https://github.com/microsoft/botframework-sdk/blob/master/specs/botframework-activity/botframework-activity.md#speak) field, but the Azure region associated with your Speech subscription key does not support neural voices. See [Neural and standard voices](./regions.md#neural-and-standard-voices).|
If your issue isn't addressed in the table, see [Voice assistants: Frequently asked questions](faq-voice-assistants.md). If you're still not able to resolve your issue after following all the steps in this tutorial, please enter a new issue on the [Voice Assistant GitHub page](https://github.com/Azure-Samples/Cognitive-Services-Voice-Assistant/issues).
If you're not going to continue using the echo-bot deployed in this tutorial, yo
## See also * Deploying to an [Azure region near you](https://azure.microsoft.com/global-infrastructure/locations/) to see bot response time improvement
-* Deploying to an [Azure region that supports high quality Neural TTS voices](./regions.md#standard-and-neural-voices)
+* Deploying to an [Azure region that supports high quality Neural TTS voices](./regions.md#neural-and-standard-voices)
* Pricing associated with Direct Line Speech channel: * [Bot Service pricing](https://azure.microsoft.com/pricing/details/bot-service/) * [Speech service](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/)
cognitive-services Concept Layout https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/concept-layout.md
The Layout API extracts text, tables, selection marks, and structure information
To try out the Form Recognizer Layout Service, go to the online sample UI tool: > [!div class="nextstepaction"]
-> [Form OCR Test Tool (FOTT)](https://fott-preview.azurewebsites.net)
+> [Try Form Recognizer](https://fott-preview.azurewebsites.net)
You will need an Azure subscription ([create one for free](https://azure.microsoft.com/free/cognitive-services)) and a [Form Recognizer resource](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) endpoint and key to try out the Form Recognizer Layout API.
First, call the [Analyze Layout](https://westcentralus.dev.cognitive.microsoft.c
|Response header| Result URL | |:--|:-|
-|Operation-Location | `https://cognitiveservice/formrecognizer/v2.1-preview.3/prebuilt/layout/analyzeResults/44a436324-fc4b-4387-aa06-090cfbf0064f` |
+|Operation-Location | `https://cognitiveservice/formrecognizer/v2.1-preview.3/layout/analyzeResults/44a436324-fc4b-4387-aa06-090cfbf0064f` |
### Natural reading order output (Latin only)
cosmos-db Data Residency https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/data-residency.md
In Azure Cosmos DB, you must explicitly configure the cross-region data replicat
**Continuous mode Backups**: These backups are resident by default as they are stored in either locally redundant or zone redundant storage. To learn more, see the [continuous backup](continuous-backup-restore-portal.md) article.
-**Periodic mode Backups**: For periodic backup modes, you can configure data redundancy at the account level. There are three redundancy options for the backup storage. They are local redundancy, zone redundancy, or geo redundancy. To learn more, see how to [configure backup redundancy](configure-periodic-backup-restore.md#configure-backup-interval-retention) using portal.
+**Periodic mode Backups**: By default, periodic mode account backups are stored in geo-redundant storage. For the periodic backup mode, you can configure data redundancy at the account level. There are three redundancy options for the backup storage: local redundancy, zone redundancy, and geo redundancy. To learn more, see how to [configure backup redundancy](configure-periodic-backup-restore.md#configure-backup-interval-retention) by using the portal.
## Use Azure Policy to enforce the residency requirements
cosmos-db Plan Manage Costs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/plan-manage-costs.md
Previously updated : 11/19/2020 Last updated : 04/05/2021 # Plan and manage costs for Azure Cosmos DB
Azure Cosmos DB supports two types of capacity modes: [provisioned throughput](s
Cost analysis in Cost Management supports most Azure account types, but not all of them. To view the full list of supported account types, see [Understand Cost Management data](../cost-management-billing/costs/understand-cost-mgt-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). To view cost data, you need at least read access for an Azure account. For information about assigning access to Azure Cost Management data, see [Assign access to data](../cost-management-billing/costs/assign-access-acm-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
-## Estimating provisioned throughput costs before using Azure Cosmos DB
+## Estimate costs before using Azure Cosmos DB
+
+Azure Cosmos DB is available in two different capacity modes: provisioned throughput and serverless. You can perform the exact same database operations in both modes, but the way you get billed for these operations is different.
+
+### Estimate provisioned throughput costs
If you plan to use Azure Cosmos DB in provisioned throughput mode, use the [Azure Cosmos DB capacity calculator](https://cosmos.azure.com/capacitycalculator/) to estimate costs before you create the resources in an Azure Cosmos account. The capacity calculator is used to get an estimate of the required throughput and cost of your workload. Configuring your Azure Cosmos databases and containers with the right amount of provisioned throughput, or [Request Units (RU/s)](request-units.md), for your workload is essential to optimize the cost and performance. You have to input details such as API type, number of regions, item size, read/write requests per second, total data stored to get a cost estimate. To learn more about the capacity calculator, see the [estimate](estimate-ru-with-capacity-planner.md) article.
The following screenshot shows the throughput and cost estimation by using the c
:::image type="content" source="./media/plan-manage-costs/capacity-calculator-cost-estimate.png" alt-text="Cost estimate in Azure Cosmos DB capacity calculator":::
-## <a id="estimating-serverless-costs"></a> Estimating serverless costs before using Azure Cosmos DB
+### <a id="estimating-serverless-costs"></a> Estimate serverless costs
If you plan to use Azure Cosmos DB in serverless mode, you need to estimate how many [Request Units](request-units.md) and GB of storage you may consume on a monthly basis. You can estimate the required amount of Request Units by evaluating the number of database operations that would be issued in a month, and multiply their amount by their corresponding RU cost. The following table lists estimated RU charges for common database operations:
Once you have computed the total number of Request Units and GB of storage you'r
> [!NOTE] > The costs shown in the previous example are for demonstration purposes only. See the [pricing page](https://azure.microsoft.com/pricing/details/cosmos-db/) for the latest pricing information.
+## Understand the full billing model
+
+Azure Cosmos DB runs on Azure infrastructure that accrues costs when you deploy new resources. It's important to understand that there could be other additional infrastructure costs that might accrue.
+
+### How you're charged for Azure Cosmos DB
+
+When you create or use Azure Cosmos DB resources, you might get charged for the following meters:
+
+* **Database operations** - You're charged based on the request units (RU/s) provisioned or consumed:
+ * Standard (manual) provisioned throughput - You are billed an hourly rate for the RU/s provisioned on your container or database.
+ * Autoscale provisioned throughput - You are billed based on the maximum number of RU/s the system scaled up to in each hour.
+
+* **Consumed storage** - You're charged based on the total amount of storage (in GB) consumed by your data and indexes for a given hour.
+
+There are additional charges if you use Azure Cosmos DB features such as backup storage, analytical storage, availability zones, and multi-region writes. At the end of your billing cycle, the charges for each meter are summed. Your bill or invoice shows a section for all Azure Cosmos DB costs. There's a separate line item for each meter. To learn more, see the [Pricing model](how-pricing-works.md) article.
+
+### Using Azure Prepayment
+
+You can pay for Azure Cosmos DB charges with your Azure Prepayment credit. However, you can't use Azure Prepayment credit to pay for charges for third party products and services including those from the Azure Marketplace.
+ ## Review estimated costs in the Azure portal As you start using Azure Cosmos DB resources from Azure portal, you can see the estimated costs. Use the following steps to review the cost estimate:
You can pay for Azure Cosmos DB charges with your Azure Prepayment (previously c
As you use resources with Azure Cosmos DB, you incur costs. Resource usage unit costs vary by time intervals (seconds, minutes, hours, and days) or by request unit usage. As soon as usage of Azure Cosmos DB starts, costs are incurred and you can see them in the [cost analysis](../cost-management-billing/costs/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) pane in the Azure portal.
-When you use cost analysis, you can view the Azure Cosmos DB costs in graphs and tables for different time intervals. Some examples are by day, current, prior month, and year. You can also view costs against budgets and forecasted costs. Switching to longer views over time can help you identify spending trends and see where overspending might have occurred. If you've created budgets, you can also easily see where they exceeded.
+When you use cost analysis, you can view the Azure Cosmos DB costs in graphs and tables for different time intervals. Some examples are by day, current, prior month, and year. You can also view costs against budgets and forecasted costs. Switching to longer views over time can help you identify spending trends and see where overspending might have occurred. If you've created budgets, you can also easily see where they exceeded.
To view Azure Cosmos DB costs in cost analysis:
To view Azure Cosmos DB costs in cost analysis:
1. By default, costs for all services are shown in the first donut chart. Select the area in the chart labeled "Azure Cosmos DB". 1. To narrow costs for a single service such as Azure Cosmos DB, select **Add filter** and then select **Service name**. Then, choose **Azure Cosmos DB** from the list. Here's an example showing costs for just Azure Cosmos DB:
-
+ :::image type="content" source="./media/plan-manage-costs/cost-analysis-pane.png" alt-text="Monitor costs with Cost Analysis pane"::: In the preceding example, you see the current cost for Azure Cosmos DB for the month of February. The charts also contain Azure Cosmos DB costs by location and by resource group.
Budgets can be created with filters for specific resources or services in Azure
You can also [export your cost data](../cost-management-billing/costs/tutorial-export-acm-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) to a storage account. This is helpful when you or others need to do additional data analysis on costs. For example, a finance team can analyze the data using Excel or Power BI. You can export your costs on a daily, weekly, or monthly schedule and set a custom date range. Exporting cost data is the recommended way to retrieve cost datasets.
+## Other ways to manage and reduce costs
+
+The following are some best practices you can use to reduce the costs:
+
+* [Optimize provisioned throughput cost](optimize-cost-throughput.md) - This article details the best practices to optimize your throughput cost. It describes when to provision throughput at the container level versus at the database level based on your workload type.
+
+* [Optimize request cost](optimize-cost-reads-writes.md) - This article describes how read and write requests translate into request units and how to optimize the cost of these requests.
+
+* [Optimize storage cost](optimize-cost-storage.md) - Storage cost is billed on a consumption basis. Learn how to optimize your storage cost by tuning item size and indexing policy, and by using features like change feed and time to live.
+
+* [Optimize multi-region cost](optimize-cost-regions.md) - If you have one or more under-utilized read regions, you can take steps to make maximum use of the RUs in those read regions, for example by consuming the change feed from a read region, or move workloads to another secondary region if one is over-utilized.
+
+* [Optimize development/testing cost](optimize-dev-test.md) - Learn how to optimize your development cost by using the local emulator, the Azure Cosmos DB free tier, the Azure free account, and a few other options.
+
+* [Optimize cost with reserved capacity](cosmos-db-reserved-capacity.md) - Learn how to use reserved capacity to save money by committing to a reservation for Azure Cosmos DB resources for either one year or three years.
+ ## Next steps See the following articles to learn more on how pricing works in Azure Cosmos DB: * [Pricing model in Azure Cosmos DB](how-pricing-works.md)
-* [Optimize provisioned throughput cost in Azure Cosmos DB](optimize-cost-throughput.md)
-* [Optimize query cost in Azure Cosmos DB](./optimize-cost-reads-writes.md)
-* [Optimize storage cost in Azure Cosmos DB](optimize-cost-storage.md)
* Learn [how to optimize your cloud investment with Azure Cost Management](../cost-management-billing/costs/cost-mgt-best-practices.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). * Learn more about managing costs with [cost analysis](../cost-management-billing/costs/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). * Learn about how to [prevent unexpected costs](../cost-management-billing/cost-management-billing-overview.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
cosmos-db Troubleshoot Bad Request https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/troubleshoot-bad-request.md
+
+ Title: Troubleshoot Azure Cosmos DB bad request exceptions
+description: Learn how to diagnose and fix bad request exceptions such as input content or partition key is invalid, partition key doesn't match in Azure Cosmos DB.
+++ Last updated : 04/06/2021+++++
+# Diagnose and troubleshoot bad request exceptions in Azure Cosmos DB
+
+An HTTP status code 400 indicates that the request contains invalid data or is missing required parameters.
+
+## <a name="missing-id-property"></a>Missing the ID property
+In this scenario, it's common to see the error:
+
+*The input content is invalid because the required properties - 'id; ' - are missing*
+
+A response with this error means the JSON document being sent to the service is missing the required `id` property.
+
+### Solution
+Specify an `id` property with a string value in your document, as per the [REST specification](/rest/api/cosmos-db/documents). The SDKs don't autogenerate values for this property.
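+
+For illustration, the following sketch builds a document payload that satisfies the requirement. It's a minimal PowerShell sketch meant only to show the payload shape; the property names other than `id` are arbitrary examples.
+
+```powershell
+# Hedged example: a document body that includes the required 'id' string.
+# 'myPartitionKey' and 'category' are arbitrary illustration properties.
+$document = @{
+    id             = [guid]::NewGuid().ToString()  # required, must be a string
+    myPartitionKey = "electronics"
+    category       = "widgets"
+}
+
+# The JSON sent to the service now contains the required 'id' property.
+$document | ConvertTo-Json
+```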
+
+## <a name="invalid-partition-key-type"></a>Invalid partition key type
+In this scenario, it's common to see errors like:
+
+*Partition key ... is invalid.*
+
+A response with this error means the partition key value is of an invalid type.
+
+### Solution
+The value of the partition key should be a string or a number. Make sure the value is of the expected type.
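+
+As an illustration only, the following sketch checks a few candidate values against the supported types; the values are arbitrary examples, not SDK code.
+
+```powershell
+# Hedged example: valid vs. invalid partition key value types.
+$candidates = @(
+    "electronics",          # string  -> valid
+    42,                     # integer -> valid
+    3.14,                   # double  -> valid
+    @{ region = "west" }    # object  -> invalid, triggers the 400 error
+)
+
+foreach ($key in $candidates) {
+    $isValid = ($key -is [string]) -or ($key -is [int]) -or ($key -is [double])
+    "{0,-10} is a valid partition key type: {1}" -f $key.GetType().Name, $isValid
+}
+```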
+
+## <a name="wrong-partition-key-value"></a>Wrong partition key value
+In this scenario, it's common to see the error:
+
+*PartitionKey extracted from document doesn't match the one specified in the header*
+
+A response with this error means you're executing an operation and passing a partition key value that doesn't match the value of the corresponding property in the document body. For example, if the collection's partition key path is `/myPartitionKey`, the document contains a property called `myPartitionKey` whose value doesn't match the partition key value provided when calling the SDK method.
+
+### Solution
+Send a partition key value that matches the value of the corresponding property in the document.
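+
+The following sketch illustrates the rule in plain PowerShell, assuming a container whose partition key path is `/myPartitionKey`; the values are arbitrary examples.
+
+```powershell
+# Hedged example: the partition key passed with the operation must equal the
+# value of the document property at the container's partition key path.
+$document = @{
+    id             = "item-001"
+    myPartitionKey = "electronics"   # value at the /myPartitionKey path
+}
+
+$partitionKeyValue = "electronics"   # value passed alongside the operation
+
+if ($partitionKeyValue -eq $document.myPartitionKey) {
+    "Partition key matches the document: the request is well formed."
+}
+else {
+    "Mismatch: this combination returns the 400 'PartitionKey extracted from document' error."
+}
+```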
+
+## Next steps
+* [Diagnose and troubleshoot](troubleshoot-dot-net-sdk.md) issues when you use the Azure Cosmos DB .NET SDK.
+* Learn about performance guidelines for [.NET v3](performance-tips-dotnet-sdk-v3-sql.md) and [.NET v2](performance-tips.md).
+* [Diagnose and troubleshoot](troubleshoot-java-sdk-v4-sql.md) issues when you use the Azure Cosmos DB Java v4 SDK.
+* Learn about performance guidelines for [Java v4 SDK](performance-tips-java-sdk-v4-sql.md).
cost-management-billing Assign Roles Azure Service Principals https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/assign-roles-azure-service-principals.md
tags: billing
Previously updated : 03/07/2021 Last updated : 04/05/2021
For the next steps, you give permission to the Azure AD app to do actions using
| Role | Actions allowed | Role definition ID | | | | | | EnrollmentReader | Can view usage and charges across all accounts and subscriptions. Can view the Azure Prepayment (previously called monetary commitment) balance associated with the enrollment. | 24f8edb6-1668-4659-b5e2-40bb5f3a7d7e |
+| EA purchaser | Purchase reservation orders and view reservation transactions. Can view usage and charges across all accounts and subscriptions. Can view the Azure Prepayment (previously called monetary commitment) balance associated with the enrollment. | da6647fb-7651-49ee-be91-c43c4877f0c4 |
| DepartmentReader | Download the usage details for the department they administer. Can view the usage and charges associated with their department. | db609904-a47f-4794-9be8-9bd86fbffd8a | | SubscriptionCreator | Create new subscriptions in the given scope of Account. | a0bcee42-bf30-4d1b-926a-48d21664ef71 | - An enrollment reader can be assigned to an SPN only by a user with enrollment writer role. - A department reader can be assigned to an SPN only by a user that has enrollment writer role or department writer role.-- A subscription creator role can be assigned to an SPN only by a user that is the Account Owner of the enrollment account.
+- A subscription creator role can be assigned to an SPN only by a user that is the Account Owner of the enrollment account. The role isn't shown in the EA portal. It's only created by programmatic means and is only for programmatic use.
+- The EA purchaser role isn't shown in the EA portal. It's only created by programmatic means and is only for programmatic use.
## Assign enrollment account role permission to the SPN
A `200 OK` response shows that the SPN was successfully added.
Now you can use the SPN (Azure AD App with the object ID) to access EA APIs in an automated manner. The SPN has the EnrollmentReader role.
+## Assign EA Purchaser role permission to the SPN
+
+For the EA purchaser role, use the same steps as for the enrollment reader. Specify the `roleDefinitionId` by using the following example.
+
+`"/providers/Microsoft.Billing/billingAccounts/1111111/billingRoleDefinitions/da6647fb-7651-49ee-be91-c43c4877f0c4"`
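+
+For orientation, here's a hedged PowerShell sketch of how that `roleDefinitionId` could be plugged into the role assignment request used for the enrollment reader. The endpoint, `api-version`, and property names shown are assumptions that follow the billing role assignment REST pattern; verify them against the REST API article referenced in the enrollment reader steps before use.
+
+```powershell
+# Hedged sketch only: endpoint, api-version, and property names are assumptions;
+# confirm them against the billing role assignment REST reference before use.
+$billingAccountName = "1111111"                     # EA enrollment number
+$assignmentName     = [guid]::NewGuid().ToString()  # new role assignment ID
+$uri = "https://management.azure.com/providers/Microsoft.Billing/billingAccounts/" +
+       "$billingAccountName/billingRoleAssignments/$assignmentName" +
+       "?api-version=2019-10-01-preview"
+
+$body = @{
+    properties = @{
+        principalId       = "<SPN object ID>"
+        principalTenantId = "<Azure AD tenant ID>"
+        roleDefinitionId  = "/providers/Microsoft.Billing/billingAccounts/$billingAccountName/billingRoleDefinitions/da6647fb-7651-49ee-be91-c43c4877f0c4"
+    }
+} | ConvertTo-Json -Depth 5
+
+Invoke-RestMethod -Method Put -Uri $uri -Body $body -ContentType "application/json" `
+    -Headers @{ Authorization = "Bearer <access token>" }
+```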
+
+
+ ## Assign the department reader role to the SPN Before you begin, read the [Enrollment Department Role Assignments - Put](/rest/api/billing/2019-10-01-preview/enrollmentdepartmentroleassignments/put) REST API article.
cost-management-billing Understand Ea Roles https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/understand-ea-roles.md
Previously updated : 12/10/2020 Last updated : 04/05/2021
To help manage your organization's usage and spend, Azure customers with an Ente
- Enterprise Administrator - Enterprise Administrator (read only)<sup>1</sup>
+- EA purchaser
- Department Administrator - Department Administrator (read only) - Account Owner<sup>2</sup>
The following diagram illustrates simple Azure EA hierarchies.
The following administrative user roles are part of your enterprise enrollment: - Enterprise administrator
+- EA purchaser
- Department administrator - Account owner - Service administrator
Users with this role have the highest level of access. They can:
- Manage other enterprise administrators. - Manage department administrators. - Manage notification contacts.
+- Purchase Azure services, including reservations.
- View usage across all accounts. - View unbilled charges across all accounts. - View and manage all reservation orders and reservations that apply to the Enterprise Agreement.
Users with this role have the highest level of access. They can:
You can have multiple enterprise administrators in an enterprise enrollment. You can grant read-only access to enterprise administrators. They all inherit the department administrator role.
+### EA purchaser
+
+Users with this role have permissions to purchase Azure services, but are not allowed to manage accounts. They can:
+
+- Purchase Azure services, including reservations.
+- View usage across all accounts.
+- View unbilled charges across all accounts.
+- View and manage all reservation orders and reservations that apply to the Enterprise Agreement.
+
+The EA purchaser role is currently enabled only for SPN-based access. To learn how to assign the role to a service principal name, see [Assign roles to Azure Enterprise Agreement service principal names](assign-roles-azure-service-principals.md).
+ ### Department administrator Users with this role can:
The following sections describe the limitations and capabilities of each role.
||| |Enterprise Administrator|Unlimited| |Enterprise Administrator (read only)|Unlimited|
+| EA purchaser assigned to an SPN | Unlimited |
|Department Administrator|Unlimited| |Department Administrator (read only)|Unlimited| |Account Owner|1 per account<sup>3</sup>|
The following sections describe the limitations and capabilities of each role.
## Organization structure and permissions by role
-|Tasks| Enterprise Administrator|Enterprise Administrator (read only)|Department Administrator|Department Administrator (read only)|Account Owner| Partner|
-|---|---|---|---|---|---|---|
-|View Enterprise Administrators|✔|✔|✘|✘|✘|✔|
-|Add or remove Enterprise Administrators|✔|✘|✘|✘|✘|✘|
-|View Notification Contacts<sup>4</sup> |✔|✔|✘|✘|✘|✔|
-|Add or remove Notification Contacts<sup>4</sup> |✔|✘|✘|✘|✘|✘|
-|Create and manage Departments |✔|✘|✘|✘|✘|✘|
-|View Department Administrators|✔|✔|✔|✔|✘|✔|
-|Add or remove Department Administrators|✔|✘|✔|✘|✘|✘|
-|View Accounts in the enrollment |✔|✔|✔<sup>5</sup>|✔<sup>5</sup>|✘|✔|
-|Add Accounts to the enrollment and change Account Owner|✔|✘|✔<sup>5</sup>|✘|✘|✘|
-|Create and manage subscriptions and subscription permissions|✘|✘|✘|✘|✔|✘|
+|Tasks| Enterprise Administrator|Enterprise Administrator (read only)| EA Purchaser | Department Administrator|Department Administrator (read only)|Account Owner| Partner|
+|---|---|---|---|---|---|---|---|
+|View Enterprise Administrators|✔|✔|✔|✘|✘|✘|✔|
+|Add or remove Enterprise Administrators|✔|✘|✘|✘|✘|✘|✘|
+|View Notification Contacts<sup>4</sup> |✔|✔|✔|✘|✘|✘|✔|
+|Add or remove Notification Contacts<sup>4</sup> |✔|✘|✘|✘|✘|✘|✘|
+|Create and manage Departments |✔|✘|✘|✘|✘|✘|✘|
+|View Department Administrators|✔|✔|✔|✔|✔|✘|✔|
+|Add or remove Department Administrators|✔|✘|✘|✔|✘|✘|✘|
+|View Accounts in the enrollment |✔|✔|✔|✔<sup>5</sup>|✔<sup>5</sup>|✘|✔|
+|Add Accounts to the enrollment and change Account Owner|✔|✘|✘|✔<sup>5</sup>|✘|✘|✘|
+|Purchase reservations|✔|✘|✔|✘|✘|✘|✘|
+|Create and manage subscriptions and subscription permissions|✘|✘|✘|✘|✘|✔|✘|
- <sup>4</sup> Notification contacts are sent email communications about the Azure Enterprise Agreement. - <sup>5</sup> Task is limited to accounts in your department.
For more information about adding a department admin, see [Create an Azure EA de
## Usage and costs access by role
-|Tasks| Enterprise Administrator|Enterprise Administrator (read only)|Department Administrator|Department Administrator (read only) |Account Owner| Partner|
-|---|---|---|---|---|---|---|
-|View credit balance including Azure Prepayment|✔|✔|✘|✘|✘|✔|
-|View department spending quotas|✔|✔|✘|✘|✘|✔|
-|Set department spending quotas|✔|✘|✘|✘|✘|✘|
-|View organization's EA price sheet|✔|✔|✘|✘|✘|✔|
-|View usage and cost details|✔|✔|✔<sup>6</sup>|✔<sup>6</sup>|✔<sup>7</sup>|✔|
-|Manage resources in Azure portal|✘|✘|✘|✘|✔|✘|
+|Tasks| Enterprise Administrator|Enterprise Administrator (read only)|EA Purchaser|Department Administrator|Department Administrator (read only) |Account Owner| Partner|
+|---|---|---|---|---|---|---|---|
+|View credit balance including Azure Prepayment|✔|✔|✔|✘|✘|✘|✔|
+|View department spending quotas|✔|✔|✔|✘|✘|✘|✔|
+|Set department spending quotas|✔|✘|✘|✘|✘|✘|✘|
+|View organization's EA price sheet|✔|✔|✔|✘|✘|✘|✔|
+|View usage and cost details|✔|✔|✔|✔<sup>6</sup>|✔<sup>6</sup>|✔<sup>7</sup>|✔|
+|Manage resources in Azure portal|✘|✘|✘|✘|✘|✔|✘|
- <sup>6</sup> Requires that the Enterprise Administrator enable **DA view charges** policy in the Enterprise portal. The Department Administrator can then see cost details for the department. - <sup>7</sup> Requires that the Enterprise Administrator enable **AO view charges** policy in the Enterprise portal. The Account Owner can then see cost details for the account.
The following table shows the relationship between the Enterprise Agreement admi
You set the Enterprise admin role and view charges policies in the Enterprise portal. The Azure role can be updated in the Azure portal. For more information, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md). -- ## Next steps - [Manage access to billing information for Azure](manage-billing-access.md)
data-factory Source Control https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/source-control.md
By default, the Azure Data Factory user interface experience (UX) authors direct
To provide a better authoring experience, Azure Data Factory allows you to configure a Git repository with either Azure Repos or GitHub. Git is a version control system that allows for easier change tracking and collaboration. This article will outline how to configure and work in a git repository along with highlighting best practices and a troubleshooting guide. > [!NOTE]
-> For Azure Government Cloud, only GitHub Enterprise is available.
+> For Azure Government Cloud, only *GitHub Enterprise Server* is available.
To learn more about how Azure Data Factory integrates with Git, view the 15-minute tutorial video below:
databox-online Azure Stack Edge Gpu Connect Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-connect-resource-manager.md
Set the Azure Resource Manager environment and verify that your device to client
An alternative way to log in is to use the `login-AzureRmAccount` cmdlet.
- `login-AzureRMAccount -EnvironmentName <Environment Name>` -TenantId c0257de7-538f-415c-993a-1b87a031879d
+ `login-AzureRMAccount -EnvironmentName <Environment Name> -TenantId c0257de7-538f-415c-993a-1b87a031879d`
Here is a sample output of the command.
databox-online Azure Stack Edge Gpu Create Virtual Switch Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-create-virtual-switch-powershell.md
+
+ Title: Create a new virtual switch in Azure Stack Edge via PowerShell
+description: Describes how to create a virtual switch on an Azure Stack Edge device by using PowerShell.
++++++ Last updated : 04/06/2021+++
+# Create a new virtual switch in Azure Stack Edge Pro GPU via PowerShell
++
+This article describes how to create a new virtual switch on your Azure Stack Edge Pro GPU device. For example, you would create a new virtual switch if you want your virtual machines to connect through a different physical network port.
+
+## Virtual switch creation workflow
+
+1. Connect to the PowerShell interface on your device.
+2. Query available physical network interfaces.
+3. Create a virtual switch.
+4. Verify the virtual network and subnet that are automatically created.
+
+## Prerequisites
+
+Before you begin, make sure that:
+
+- You have access to a client machine that can access the PowerShell interface of your device. See [Connect to the PowerShell interface](azure-stack-edge-gpu-connect-powershell-interface.md#connect-to-the-powershell-interface).
+
+ The client machine should be running a [Supported OS](azure-stack-edge-gpu-system-requirements.md#supported-os-for-clients-connected-to-device).
+
+- Use the local UI to enable compute on one of the physical network interfaces on your device, as per the instructions in [Enable compute network](azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md#enable-compute-network).
++
+## Connect to the PowerShell interface
+
+[Connect to the PowerShell interface of your device](azure-stack-edge-gpu-connect-powershell-interface.md#connect-to-the-powershell-interface).
+
+## Query available network interfaces
+
+1. Use the following command to display a list of physical network interfaces on which you can create a new virtual switch. You will select one of these network interfaces.
+
+ ```powershell
+ Get-NetAdapter -Physical
+ ```
+ Here is an example output:
+
+ ```powershell
+ [10.100.10.10]: PS>Get-NetAdapter -Physical
+
+ Name InterfaceDescription ifIndex Status MacAddress LinkSpeed
+ - -- - - --
+ Port2 QLogic 2x1GE+2x25GE QL41234HMCU NIC ... 12 Up 34-80-0D-05-26-EA ...ps
+ Ethernet Remote NDIS Compatible Device 11 Up F4-02-70-CD-41-39 ...ps
+ Port1 QLogic 2x1GE+2x25GE QL41234HMCU NI...#3 9 Up 34-80-0D-05-26-EB ...ps
+ Port5 Mellanox ConnectX-4 Lx Ethernet Ad...#2 8 Up 0C-42-A1-C0-E3-99 ...ps
+ Port3 QLogic 2x1GE+2x25GE QL41234HMCU NI...#4 7 Up 34-80-0D-05-26-E9 ...ps
+ Port6 Mellanox ConnectX-4 Lx Ethernet Adapter 6 Up 0C-42-A1-C0-E3-98 ...ps
+ Port4 QLogic 2x1GE+2x25GE QL41234HMCU NI...#2 4 Up 34-80-0D-05-26-E8 ...ps
+
+ [10.100.10.10]: PS>
+ ```
+2. Choose a network interface that is:
+
+ - In the **Up** status.
+ - Not used by any existing virtual switches. Currently, only one vswitch can be configured per network interface.
+
+ To check the existing virtual switch and network interface association, run the `Get-HcsExternalVirtualSwitch` command.
+
+ Here is an example output.
+
+ ```powershell
+ [10.100.10.10]: PS>Get-HcsExternalVirtualSwitch
+
+ Name : vSwitch1
+ InterfaceAlias : {Port2}
+ EnableIov : True
+ MacAddressPools :
+ IPAddressPools : {}
+ ConfigurationSource : Dsc
+ EnabledForCompute : True
+ SupportsAcceleratedNetworking : False
+ DbeDhcpHostVnicName : f4a92de8-26ed-4597-a141-cb233c2ba0aa
+ Type : External
+
+ [10.100.10.10]: PS>
+ ```
+ In this instance, Port 2 is associated with an existing virtual switch and shouldn't be used.
+
+## Create a virtual switch
+
+Use the following cmdlet to create a new virtual switch on your specified network interface. After this operation is complete, your compute instances can use the new virtual network.
+
+```powershell
+Add-HcsExternalVirtualSwitch -InterfaceAlias <Network interface name> -WaitForSwitchCreation $true
+```
+
+Use the `Get-HcsExternalVirtualSwitch` command to identify the newly created switch. The new switch is named `vswitch-<InterfaceAlias>`.
+
+Here is an example output:
+
+```powershell
+[10.100.10.10]: P> Add-HcsExternalVirtualSwitch -InterfaceAlias Port5 -WaitForSwitchCreation $true
+[10.100.10.10]: PS>Get-HcsExternalVirtualSwitch
+
+Name : vSwitch1
+InterfaceAlias : {Port2}
+EnableIov : True
+MacAddressPools :
+IPAddressPools : {}
+ConfigurationSource : Dsc
+EnabledForCompute : True
+SupportsAcceleratedNetworking : False
+DbeDhcpHostVnicName : f4a92de8-26ed-4597-a141-cb233c2ba0aa
+Type : External
+
+Name : vswitch-Port5
+InterfaceAlias : {Port5}
+EnableIov : True
+MacAddressPools :
+IPAddressPools :
+ConfigurationSource : Dsc
+EnabledForCompute : False
+SupportsAcceleratedNetworking : False
+DbeDhcpHostVnicName : 9b301c40-3daa-49bf-a20b-9f7889820129
+Type : External
+
+[10.100.10.10]: PS>
+```
+
+## Verify network, subnet
+
+After you have created the new virtual switch, Azure Stack Edge Pro GPU automatically creates a virtual network and subnet that correspond to it. You can use this virtual network when creating VMs.
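+
+To confirm the virtual network from your client, you can list the virtual networks exposed by the local Azure Resource Manager. This is a minimal sketch, assuming you're already signed in to the device's local Azure Resource Manager as described in [Connect to Azure Resource Manager for your device](azure-stack-edge-gpu-connect-resource-manager.md); the name of the automatically created network can vary by device.
+
+```powershell
+# Minimal sketch: assumes you're already signed in to the device's local
+# Azure Resource Manager (Login-AzureRMAccount -EnvironmentName <Environment Name>).
+# Lists the virtual networks and their subnets, including the one created for the new switch.
+Get-AzureRmVirtualNetwork | Select-Object Name, ResourceGroupName,
+    @{ Name = "Subnets"; Expression = { ($_.Subnets | ForEach-Object { $_.Name }) -join ", " } }
+```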
+
+<!--To identify the virtual network and subnet associated with the new switch that you created, use the `Get-HcsVirtualNetwork` command. This cmdlet will be released in April some time. -->
+
+## Next steps
+
+- [Deploy VMs on your Azure Stack Edge Pro GPU device via Azure PowerShell](azure-stack-edge-gpu-deploy-virtual-machine-powershell.md)
+
+- [Deploy VMs on your Azure Stack Edge Pro GPU device via the Azure portal](azure-stack-edge-gpu-deploy-virtual-machine-portal.md)
databox-online Azure Stack Edge Gpu Deploy Vm Specialized Image Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-deploy-vm-specialized-image-powershell.md
+
+ Title: Create VM images from specialized image of Windows VHD for your Azure Stack Edge Pro GPU device
+description: Describes how to create VM images from specialized images starting from a Windows VHD or a VHDX. Use this specialized image to create VM images to use with VMs deployed on your Azure Stack Edge Pro GPU device.
++++++ Last updated : 03/30/2021+
+#Customer intent: As an IT admin, I need to understand how to create and upload Azure VM images that I can use with my Azure Stack Edge Pro device so that I can deploy VMs on the device.
++
+# Deploy a VM from a specialized image on your Azure Stack Edge Pro device via Azure PowerShell
++
+This article describes the steps required to deploy a virtual machine (VM) on your Azure Stack Edge Pro device from a specialized image.
+
+## About specialized images
+
+A Windows VHD or VHDX can be used to create a *specialized* image or a *generalized* image. The following table summarizes key differences between the *specialized* and the *generalized* images.
++
+|Image type |Generalized |Specialized |
+|---|---|---|
+|Target |Deployed on any system | Targeted to a specific system |
+|Setup after boot | Setup required at first boot of the VM. | Setup not needed. <br> Platform turns on the VM. |
+|Configuration |Hostname, admin-user, and other VM-specific settings required. |Pre-configured. |
+|Used to |Create multiple new VMs from the same image. |Migrate a specific machine or restore a VM from a previous backup. |
++
+This article covers steps required to deploy from a specialized image. To deploy from a generalized image, see [Use generalized Windows VHD](azure-stack-edge-gpu-prepare-windows-vhd-generalized-image.md) for your device.
++
+## VM image workflow
+
+The high-level workflow to deploy a VM from a specialized image is:
+
+1. Copy the VHD to a local storage account on your Azure Stack Edge Pro GPU device.
+1. Create a new managed disk from the VHD.
+1. Create a new virtual machine from the managed disk and attach the managed disk.
++
+## Prerequisites
+
+Before you can deploy a VM on your device via PowerShell, make sure that:
+
+- You have access to a client that you'll use to connect to your device.
+ - Your client runs a [Supported OS](azure-stack-edge-gpu-system-requirements.md#supported-os-for-clients-connected-to-device).
+ - Your client is configured to connect to the local Azure Resource Manager of your device as per the instructions in [Connect to Azure Resource Manager for your device](azure-stack-edge-gpu-connect-resource-manager.md).
+
+## Verify the local Azure Resource Manager connection
+
+Verify that your client can connect to the local Azure Resource Manager.
+
+1. Call local device APIs to authenticate:
+
+ ```powershell
+ Login-AzureRMAccount -EnvironmentName <Environment Name>
+ ```
+
+2. Provide the username `EdgeArmUser` and the password to connect via Azure Resource Manager. If you do not recall the password, [Reset the password for Azure Resource Manager](azure-stack-edge-gpu-set-azure-resource-manager-password.md) and use this password to sign in.
+
+
+## Deploy VM from specialized image
+
+The following sections contain step-by-step instructions to deploy a VM from a specialized image.
+
+## Copy VHD to local storage account on device
+
+Follow these steps to copy VHD to local storage account:
+
+1. Copy the source VHD to a local blob storage account on your Azure Stack Edge.
+
+1. Take note of the resulting URI. You'll use this URI in a later step.
+
+ To create and access a local storage account, see the sections [Create a storage account](azure-stack-edge-gpu-deploy-virtual-machine-powershell.md#create-a-storage-account) through [Upload a VHD](azure-stack-edge-gpu-deploy-virtual-machine-powershell.md#upload-a-vhd) in the article: [Deploy VMs on your Azure Stack Edge device via Azure PowerShell](azure-stack-edge-gpu-deploy-virtual-machine-powershell.md).
+
+## Create a managed disk from VHD
+
+Follow these steps to create a managed disk from a VHD that you uploaded to the storage account earlier:
+
+1. Set some parameters.
+
+ ```powershell
+ $VhdURI = <URI of VHD in local storage account>
+ $DiskRG = <managed disk resource group>
+ $DiskName = <managed disk name>
+ ```
+ Here is an example output.
+
+ ```powershell
+ PS C:\WINDOWS\system32> $VHDURI = "https://myasevmsa.blob.myasegpudev.wdshcsso.com/vhds/WindowsServer2016Datacenter.vhd"
+ PS C:\WINDOWS\system32> $DiskRG = "myasevm1rg"
+ PS C:\WINDOWS\system32> $DiskName = "myasemd1"
+ ```
+1. Create a new managed disk.
+
+ ```powershell
+ $DiskConfig = New-AzureRmDiskConfig -Location DBELocal -CreateOption Import -SourceUri $VhdURI
+ $disk = New-AzureRMDisk -ResourceGroupName $DiskRG -DiskName $DiskName -Disk $DiskConfig
+ ```
+
+ Here is an example output. The location here is set to the location of the local storage account and is always `DBELocal` for all local storage accounts on your Azure Stack Edge Pro GPU device.
+
+ ```powershell
+ PS C:\WINDOWS\system32> $DiskConfig = New-AzureRmDiskConfig -Location DBELocal -CreateOption Import -SourceUri $VHDURI
+ PS C:\WINDOWS\system32> $disk = New-AzureRMDisk -ResourceGroupName $DiskRG -DiskName $DiskName -Disk $DiskConfig
+ PS C:\WINDOWS\system32>
+ ```
+## Create a VM from managed disk
+
+Follow these steps to create a VM from a managed disk:
+
+1. Set some parameters.
+
+ ```powershell
+ $NicRG = <NIC resource group>
+ $NicName = <NIC name>
+ $IPConfigName = <IP config name>
+ $PrivateIP = <IP address> #Optional
+
+ $VMRG = <VM resource group>
+ $VMName = <VM name>
+ $VMSize = <VM size>
+ ```
+
+ >[!NOTE]
+ > The `PrivateIP` parameter is optional. Use this parameter to assign a static IP. Otherwise, the default is a dynamic IP assigned via DHCP.
+
+ Here is an example output. In this example, the same resource group is specified for all the VM resources, though you can create and specify separate resource groups for the resources if needed.
+
+ ```powershell
+ PS C:\WINDOWS\system32> $NicRG = "myasevm1rg"
+ PS C:\WINDOWS\system32> $NicName = "myasevmnic1"
+ PS C:\WINDOWS\system32> $IPConfigName = "myaseipconfig1"
+
+ PS C:\WINDOWS\system32> $VMRG = "myasevm1rg"
+ PS C:\WINDOWS\system32> $VMName = "myasetestvm1"
+ PS C:\WINDOWS\system32> $VMSize = "Standard_D1_v2"
+ ```
+
+1. Get the virtual network information and create a new network interface.
+
+ This sample assumes you are creating a single network interface on the default virtual network `ASEVNET` that is associated with the default resource group `ASERG`. If needed, you could specify an alternate virtual network, or create multiple network interfaces. For more information, see [Add a network interface to a VM via the Azure portal](azure-stack-edge-gpu-manage-virtual-machine-network-interfaces-portal.md).
+
+ ```powershell
+ $armVN = Get-AzureRMVirtualNetwork -Name ASEVNET -ResourceGroupName ASERG
+ $ipConfig = New-AzureRmNetworkInterfaceIpConfig -Name $IPConfigName -SubnetId $armVN.Subnets[0].Id [-PrivateIpAddress $PrivateIP]
+ $nic = New-AzureRmNetworkInterface -Name $NicName -ResourceGroupName $NicRG -Location DBELocal -IpConfiguration $ipConfig
+ ```
+
+ Here is an example output.
+
+ ```powershell
+ PS C:\WINDOWS\system32> $armVN = Get-AzureRMVirtualNetwork -Name ASEVNET -ResourceGroupName ASERG
+ PS C:\WINDOWS\system32> $ipConfig = New-AzureRmNetworkInterfaceIpConfig -Name $IPConfigName -SubnetId $armVN.Subnets[0].Id
+ PS C:\WINDOWS\system32> $nic = New-AzureRmNetworkInterface -Name $NicName -ResourceGroupName $NicRG -Location DBELocal -IpConfiguration $ipConfig
+ WARNING: The output object type of this cmdlet will be modified in a future release.
+ PS C:\WINDOWS\system32>
+ ```
+1. Create a new VM configuration object.
+
+ ```powershell
+ $vmConfig = New-AzureRmVMConfig -VMName $VMName -VMSize $VMSize
+ ```
+
+
+1. Add the network interface to the VM.
+
+ ```powershell
+ $vm = Add-AzureRmVMNetworkInterface -VM $vmConfig -Id $nic.Id
+ ```
+
+1. Set the OS disk properties on the VM.
+
+ ```powershell
+ $vm = Set-AzureRmVMOSDisk -VM $vm -ManagedDiskId $disk.Id -StorageAccountType StandardLRS -CreateOption Attach -[Windows/Linux]
+ ```
+ The last flag in this command will be either `-Windows` or `-Linux` depending on which OS you are using for your VM.
+
+1. Create the VM.
+
+ ```powershell
+ New-AzureRmVM -ResourceGroupName $VMRG -Location DBELocal -VM $vm
+ ```
+
+ Here is an example output.
+
+ ```powershell
+ PS C:\WINDOWS\system32> $vmConfig = New-AzureRmVMConfig -VMName $VMName -VMSize $VMSize
+ PS C:\WINDOWS\system32> $vm = Add-AzureRmVMNetworkInterface -VM $vmConfig -Id $nic.Id
+ PS C:\WINDOWS\system32> $vm = Set-AzureRmVMOSDisk -VM $vm -ManagedDiskId $disk.Id -StorageAccountType StandardLRS -CreateOption Attach -Windows
+ PS C:\WINDOWS\system32> New-AzureRmVM -ResourceGroupName $VMRG -Location DBELocal -VM $vm
+ WARNING: Since the VM is created using premium storage or managed disk, existing standard storage account, myasevmsa, is used for
+ boot diagnostics.
+ RequestId IsSuccessStatusCode StatusCode ReasonPhrase
+ - -
+ True OK OK
+ PS C:\WINDOWS\system32>
+ ```
+
+## Delete VM and resources
+
+This article used only one resource group to create all the VM resources. Deleting that resource group deletes the VM and all the associated resources.
+
+1. First view all the resources created under the resource group.
+
+ ```powershell
+ Get-AzureRmResource -ResourceGroupName <Resource group name>
+ ```
+ Here is an example output.
+
+ ```powershell
+ PS C:\WINDOWS\system32> Get-AzureRmResource -ResourceGroupName myasevm1rg
+
+
+ Name : myasemd1
+ ResourceGroupName : myasevm1rg
+ ResourceType : Microsoft.Compute/disks
+ Location : dbelocal
+ ResourceId : /subscriptions/992601bc-b03d-4d72-598e-d24eac232122/resourceGroups/myasevm1rg/providers/Microsoft.Compute/disk
+ s/myasemd1
+
+ Name : myasetestvm1
+ ResourceGroupName : myasevm1rg
+ ResourceType : Microsoft.Compute/virtualMachines
+ Location : dbelocal
+ ResourceId : /subscriptions/992601bc-b03d-4d72-598e-d24eac232122/resourceGroups/myasevm1rg/providers/Microsoft.Compute/virt
+ ualMachines/myasetestvm1
+
+ Name : myasevmnic1
+ ResourceGroupName : myasevm1rg
+ ResourceType : Microsoft.Network/networkInterfaces
+ Location : dbelocal
+ ResourceId : /subscriptions/992601bc-b03d-4d72-598e-d24eac232122/resourceGroups/myasevm1rg/providers/Microsoft.Network/netw
+ orkInterfaces/myasevmnic1
+
+ Name : myasevmsa
+ ResourceGroupName : myasevm1rg
+ ResourceType : Microsoft.Storage/storageaccounts
+ Location : dbelocal
+ ResourceId : /subscriptions/992601bc-b03d-4d72-598e-d24eac232122/resourceGroups/myasevm1rg/providers/Microsoft.Storage/stor
+ ageaccounts/myasevmsa
+
+ PS C:\WINDOWS\system32>
+ ```
+
+1. Delete the resource group and all the associated resources.
+
+ ```powershell
+ Remove-AzureRmResourceGroup -ResourceGroupName <Resource group name>
+ ```
+ Here is an example output.
+
+ ```powershell
+ PS C:\WINDOWS\system32> Remove-AzureRmResourceGroup -ResourceGroupName myasevm1rg
+
+ Confirm
+ Are you sure you want to remove resource group 'myasevm1rg'
+ [Y] Yes [N] No [S] Suspend [?] Help (default is "Y"): Y
+ True
+ PS C:\WINDOWS\system32>
+ ```
+
+1. Verify that the resource group has been deleted. Get all the resource groups that exist on the device.
+
+ ```powershell
+ Get-AzureRmResourceGroup
+ ```
+ Here is an example output.
+
+ ```powershell
+ PS C:\WINDOWS\system32> Get-AzureRmResourceGroup
+
+ ResourceGroupName : ase-image-resourcegroup
+ Location : dbelocal
+ ProvisioningState : Succeeded
+ Tags :
+ ResourceId : /subscriptions/992601bc-b03d-4d72-598e-d24eac232122/resourceGroups/ase-image-resourcegroup
+
+ ResourceGroupName : ASERG
+ Location : dbelocal
+ ProvisioningState : Succeeded
+ Tags :
+ ResourceId : /subscriptions/992601bc-b03d-4d72-598e-d24eac232122/resourceGroups/ASERG
+
+ ResourceGroupName : myaserg
+ Location : dbelocal
+ ProvisioningState : Succeeded
+ Tags :
+ ResourceId : /subscriptions/992601bc-b03d-4d72-598e-d24eac232122/resourceGroups/myaserg
+
+ PS C:\WINDOWS\system32>
+ ```
+
+## Next steps
+
+Depending on the nature of deployment, you can choose one of the following procedures.
+
+- [Deploy a VM from a generalized image via Azure PowerShell](azure-stack-edge-gpu-deploy-virtual-machine-powershell.md)
+- [Deploy a VM via Azure portal](azure-stack-edge-gpu-deploy-virtual-machine-portal.md)
databox-online Azure Stack Edge Gpu Manage Virtual Machine Tags Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-manage-virtual-machine-tags-powershell.md
+
+ Title: Manage VM Tags on Azure Stack Edge Pro GPU device via Azure PowerShell
+description: Describes how to create and manage virtual machine tags for virtual machines running in Azure Stack Edge by using Azure PowerShell.
++++++ Last updated : 04/06/2021+
+#Customer intent: As an IT admin, XXXX.
++
+# Manage VM Tags on Azure Stack Edge via Azure PowerShell
+
+This article describes how to tag virtual machines (VMs) running on your Azure Stack Edge Pro GPU devices using Azure PowerShell.
+
+## About tags
+
+Tags are user-defined key-value pairs that can be assigned to a resource or a resource group. You can apply tags to VMs running on your device to logically organize them into a taxonomy. You can place tags on a resource at the time of creation or add them to an existing resource. For example, you can apply the name `Organization` and the value `Engineering` to all VMs that are used by the Engineering department in your organization.
+
+For more information on tags, see how to [Manage tags via AzureRM PowerShell](/powershell/module/azurerm.tags/?view=azurermps-6.13.0&preserve-view=true).
+
+## Prerequisites
+
+Before you can deploy a VM on your device via PowerShell, make sure that:
+
+- You have access to a client that you'll use to connect to your device.
+ - Your client runs a [Supported OS](azure-stack-edge-gpu-system-requirements.md#supported-os-for-clients-connected-to-device).
+ - Your client is configured to connect to the local Azure Resource Manager of your device as per the instructions in [Connect to Azure Resource Manager for your device](azure-stack-edge-gpu-connect-resource-manager.md).
++
+## Verify connection to local Azure Resource Manager
+
+Make sure that the following steps can be used to access the device from your client.
+
+Verify that your client can connect to the local Azure Resource Manager.
+
+1. Call local device APIs to authenticate:
+
+ ```powershell
+ login-AzureRMAccount -EnvironmentName <Environment Name> -TenantId c0257de7-538f-415c-993a-1b87a031879d
+ ```
+
+1. Provide the username `EdgeArmUser` and the password to connect via Azure Resource Manager. If you do not recall the password, [Reset the password for Azure Resource Manager](azure-stack-edge-gpu-set-azure-resource-manager-password.md) and use this password to sign in.
++
+## Add a tag to a VM
+
+1. Set some parameters.
+
+ ```powershell
+ $VMName = <VM Name>
+ $VMRG = <VM Resource Group>
+ $TagName = <Tag Name>
+ $TagValue = <Tag Value>
+ ```
+
+ Here is an example output:
+
+ ```powershell
+ PS C:\WINDOWS\system32> $VMName = "myasetestvm1"
+ PS C:\WINDOWS\system32> $VMRG = "myaserg2"
+ PS C:\WINDOWS\system32> $TagName = "Organization"
+ PS C:\WINDOWS\system32> $TagValue = "Sales"
+ ```
+
+2. Get the VM object and its tags.
+
+ ```powershell
+ $VirtualMachine = Get-AzureRmVM -ResourceGroupName $VMRG -Name $VMName
+ $tags = $VirtualMachine.Tags
+ ```
+
+3. Add the tag and update the VM. Updating the VM may take a few minutes.
+
+ You can use the optional **-Force** flag to run the command without user confirmation.
+
+ ```powershell
+ $tags.Add($TagName, $TagValue)
+ Set-AzureRmResource -ResourceId $VirtualMachine.Id -Tag $tags [-Force]
+ ```
+
+ Here is an example output:
+
+ ```powershell
+ PS C:\WINDOWS\system32> $VirtualMachine = Get-AzureRMVM -ResourceGroupName $VMRG -Name $VMName
+ PS C:\WINDOWS\system32> $tags = $VirtualMachine.Tags
+ PS C:\WINDOWS\system32> $tags.Add($TagName, $TagValue)
+ PS C:\WINDOWS\system32> Set-AzureRmResource -ResourceID $VirtualMachine.ID -Tag $tags -Force
+
+ Name : myasetestvm1
+ ResourceId : /subscriptions/992601bc-b03d-4d72-598e-d24eac232122/resourceGroups/myaserg2/providers/Microsoft.Compute/virtua
+ lMachines/myasetestvm1
+ ResourceName : myasetestvm1
+ ResourceType : Microsoft.Compute/virtualMachines
+ ResourceGroupName : myaserg2
+ Location : dbelocal
+ SubscriptionId : 992601bc-b03d-4d72-598e-d24eac232122
+ Tags : {Organization}
+ Properties : @{vmId=958c0baa-e143-4d8a-82bd-9c6b1ba45e86; hardwareProfile=; storageProfile=; osProfile=; networkProfile=;
+ provisioningState=Succeeded}
+
+ PS C:\WINDOWS\system32>
+ ```
+
+For more information, see [Add-AzureRMTag](/powershell/module/azurerm.tags/remove-azurermtag?view=azurermps-6.13.0&preserve-view=true).
+
+## View tags of a VM
+
+You can view the tags applied to a specific virtual machine running on your device.
+
+1. Define the parameters associated with the VM whose tags you want to view.
+
+ ```powershell
+ $VMName = <VM Name>
+ $VMRG = <VM Resource Group>
+ ```
+ Here is an example output:
+
+ ```powershell
+ PS C:\WINDOWS\system32> $VMName = "myasetestvm1"
+ PS C:\WINDOWS\system32> $VMRG = "myaserg2"
+ PS C:\WINDOWS\system32> $TagName = "Organization"
+ PS C:\WINDOWS\system32> $TagValue = "Sales"
+ ```
+1. Get the VM object and view its tags.
+
+ ```powershell
+ $VirtualMachine = Get-AzureRmVM -ResourceGroupName $VMRG -Name $VMName
+ $VirtualMachine.Tags
+ ```
+ Here is an example output:
+
+ ```powershell
+ PS C:\WINDOWS\system32> $VirtualMachine = Get-AzureRMVM -ResourceGroupName $VMRG -Name $VMName
+ PS C:\WINDOWS\system32> $VirtualMachine
+
+ ResourceGroupName : myaserg2
+ Id : /subscriptions/992601bc-b03d-4d72-598e-d24eac232122/resourceGroups/myaserg2/providers/Microsoft.Compute/virtua
+ lMachines/myasetestvm1
+ VmId : 958c0baa-e143-4d8a-82bd-9c6b1ba45e86
+ Name : myasetestvm1
+ Type : Microsoft.Compute/virtualMachines
+ Location : dbelocal
+ Tags : {"Organization":"Sales"}
+ HardwareProfile : {VmSize}
+ NetworkProfile : {NetworkInterfaces}
+ OSProfile : {ComputerName, AdminUsername, LinuxConfiguration, Secrets}
+ ProvisioningState : Succeeded
+ StorageProfile : {ImageReference, OsDisk, DataDisks}
+
+ PS C:\WINDOWS\system32>
+ ```
+## View tags for all resources
+
+To view the current list of tags for all the resources in the local Azure Resource Manager subscription (different from your Azure subscription) of your device, use the `Get-AzureRMTag` command.
++
+Here is an example output when multiple VMs are running on your device and you want to view all the tags on all the VMs.
+
+```powershell
+PS C:\WINDOWS\system32> Get-AzureRMTag
+
+Name Count
+- --
+Organization 3
+
+PS C:\WINDOWS\system32>
+```
+
+The preceding output indicates that there are three `Organization` tags on the VMs running on your device.
+
+To view further details, use the `-Detailed` parameter.
+
+```powershell
+PS C:\WINDOWS\system32> Get-AzureRMTag -Detailed |fl
+
+Name : Organization
+ValuesTable :
+ Name Count
+ =========== =====
+ Engineering 2
+ Sales 1
+
+Count : 3
+Values : {Engineering, Sales}
+
+PS C:\WINDOWS\system32>
+```
+
+The preceding output indicates that, of the three tagged VMs, two are tagged as `Engineering` and one as `Sales`.
+
+## Remove a tag from a VM
+
+1. Set some parameters.
+
+ ```powershell
+ $VMName = <VM Name>
+ $VMRG = <VM Resource Group>
+ $TagName = <Tag Name>
+ ```
+ Here is an example output:
+
+ ```powershell
+ PS C:\WINDOWS\system32> $VMName = "myaselinuxvm1"
+ PS C:\WINDOWS\system32> $VMRG = "myaserg1"
+ PS C:\WINDOWS\system32> $TagName = "Organization"
+ ```
+2. Get the VM object.
+
+ ```powershell
+ $VirtualMachine = Get-AzureRmVM -ResourceGroupName $VMRG -Name $VMName
+ $VirtualMachine
+ ```
+
+ Here is an example output:
+
+ ```powershell
+ PS C:\WINDOWS\system32> $VirtualMachine = Get-AzureRMVM -ResourceGroupName $VMRG -Name $VMName
+ ResourceGroupName : myaserg1
+ Id : /subscriptions/992601bc-b03d-4d72-598e-d24eac232122/resourceGroups/myaserg1/providers/Microsoft.Compute/virtualMachines/myaselinuxvm1
+ VmId : 290b3fdd-0c99-4905-9ea1-cf93cd6f25ee
+ Name : myaselinuxvm1
+ Type : Microsoft.Compute/virtualMachines
+ Location : dbelocal
+ Tags : {"Organization":"Engineering"}
+ HardwareProfile : {VmSize}
+ NetworkProfile : {NetworkInterfaces}
+ OSProfile : {ComputerName, AdminUsername, LinuxConfiguration, Secrets}
+ ProvisioningState : Succeeded
+ StorageProfile : {ImageReference, OsDisk, DataDisks}
+ PS C:\WINDOWS\system32>
+ ```
+3. Remove the tag and update the VM. Use the optional `-Force` flag to run the command without user confirmation.
+
+ ```powershell
+ $tags = $VirtualMachine.Tags
+ $tags.Remove($TagName)
+ Set-AzureRmResource -ResourceId $VirtualMachine.Id -Tag $tags [-Force]
+ ```
+ Here is an example output:
+
+ ```powershell
+ PS C:\WINDOWS\system32> $tags = $Virtualmachine.Tags
+ PS C:\WINDOWS\system32> $tags
+ Key Value
+ --
+ Organization Engineering
+ PS C:\WINDOWS\system32> $tags.Remove($TagName)
+ True
+ PS C:\WINDOWS\system32> Set-AzureRMResource -ResourceID $VirtualMachine.ID -Tag $tags -Force
+ Name : myaselinuxvm1
+ ResourceId : /subscriptions/992601bc-b03d-4d72-598e-d24eac232122/resourceGrou
+ ps/myaserg1/providers/Microsoft.Compute/virtualMachines/myaselin
+ uxvm1
+ ResourceName : myaselinuxvm1
+ ResourceType : Microsoft.Compute/virtualMachines
+ ResourceGroupName : myaserg1
+ Location : dbelocal
+ SubscriptionId : 992601bc-b03d-4d72-598e-d24eac232122
+ Tags : {}
+ Properties : @{vmId=290b3fdd-0c99-4905-9ea1-cf93cd6f25ee; hardwareProfile=;
+ storageProfile=; osProfile=; networkProfile=;
+ provisioningState=Succeeded}
+ PS C:\WINDOWS\system32>
+ ```
++
+## Next steps
+
+Learn how to [Manage tags via AzureRM PowerShell](/powershell/module/azurerm.tags/?view=azurermps-6.13.0&preserve-view=true).
defender-for-iot How To Activate And Set Up Your On Premises Management Console https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-activate-and-set-up-your-on-premises-management-console.md
Title: Activate and set up your on-premises management console description: Activating the management console ensures that sensors are registered with Azure and send information to the on-premises management console, and that the on-premises management console carries out management tasks on connected sensors. Previously updated : 3/18/2021 Last updated : 4/6/2021
To set up a site:
6. [Assign sensor to site zones](#assign-sensors-to-zones).
+### Delete a site
+
+If you no longer need a site, you can delete it from your on-premises management console.
+ To delete a site: 1. In the **Site Management** window, select :::image type="icon" source="media/how-to-activate-and-set-up-your-on-premises-management-console/expand-view-icon.png" border="false"::: from the bar that contains the site name, and then select **Delete Site**. The confirmation box appears, verifying that you want to delete the site.
event-hubs Event Hubs For Kafka Ecosystem Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-for-kafka-ecosystem-overview.md
bootstrap.servers=NAMESPACENAME.servicebus.windows.net:9093
security.protocol=SASL_SSL sasl.mechanism=OAUTHBEARER sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required;
-sasl.login.callback.handler.class=CustomAuthenticateCallbackHandler;
+sasl.login.callback.handler.class=CustomAuthenticateCallbackHandler
``` #### Shared Access Signature (SAS)
The listed services and frameworks can generally acquire event streams and refer
If you must use the Kafka Streams framework on Azure, [Apache Kafka on HDInsight](../hdinsight/kafk) will provide you with that option. Apache Kafka on HDInsight provides full control over all configuration aspects of Apache Kafka, while being fully integrated with various aspects of the Azure platform, from fault/update domain placement to network isolation to monitoring integration. ## Next steps
-This article provided an introduction to Event Hubs for Kafka. To learn more, see [Apache Kafka developer guide for Azure Event Hubs](apache-kafka-developer-guide.md).
+This article provided an introduction to Event Hubs for Kafka. To learn more, see [Apache Kafka developer guide for Azure Event Hubs](apache-kafka-developer-guide.md).
expressroute Expressroute About Virtual Network Gateways https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-about-virtual-network-gateways.md
Title: About ExpressRoute virtual network gateways - Azure| Microsoft Docs
description: Learn about virtual network gateways for ExpressRoute. This article includes information about gateway SKUs and types. - Previously updated : 10/14/2019 Last updated : 04/05/2021
The following table shows the gateway types and the estimated performances. This
[!INCLUDE [expressroute-table-aggthroughput](../../includes/expressroute-table-aggtput-include.md)] > [!IMPORTANT]
-> Application performance depends on multiple factors, such as the end-to-end latency, and the number of traffic flows the application opens. The numbers in the table represent the upper limit that the application can theoretically achieve in an ideal environment.
->
+> * The number of VMs in the virtual network also includes VMs in peered virtual networks that use the remote ExpressRoute gateway.
+> * Application performance depends on multiple factors, such as the end-to-end latency, and the number of traffic flows the application opens. The numbers in the table represent the upper limit that the application can theoretically achieve in an ideal environment.
> ## <a name="gwsub"></a>Gateway subnet
expressroute Expressroute Faqs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-faqs.md
Title: FAQ - Azure ExpressRoute | Microsoft Docs
-description: The ExpressRoute FAQ contains information about Supported Azure Services, Cost, Data and Connections, SLA, Providers and Locations, Bandwidth, and additional Technical Details.
+description: The ExpressRoute FAQ contains information about Supported Azure Services, Cost, Data and Connections, SLA, Providers and Locations, Bandwidth, and other Technical Details.
## What is ExpressRoute?
-ExpressRoute is an Azure service that lets you create private connections between Microsoft datacenters and infrastructure that's on your premises or in a colocation facility. ExpressRoute connections do not go over the public Internet, and offer higher security, reliability, and speeds with lower latencies than typical connections over the Internet.
+ExpressRoute is an Azure service that lets you create private connections between Microsoft datacenters and infrastructure that's on your premises or in a colocation facility. ExpressRoute connections don't go over the public Internet, and offer higher security, reliability, and speeds with lower latencies than typical connections over the Internet.
### What are the benefits of using ExpressRoute and private network connections?
-ExpressRoute connections do not go over the public Internet. They offer higher security, reliability, and speeds, with lower and consistent latencies than typical connections over the Internet. In some cases, using ExpressRoute connections to transfer data between on-premises devices and Azure can yield significant cost benefits.
+ExpressRoute connections don't go over the public Internet. They offer higher security, reliability, and speeds, with lower and consistent latencies than typical connections over the Internet. In some cases, using ExpressRoute connections to transfer data between on-premises devices and Azure can yield significant cost benefits.
### Where is the service available?
No. You can purchase a private connection of any speed from your service provide
### If I pay for an ExpressRoute circuit of a given bandwidth, do I have the ability to use more than my procured bandwidth?
-Yes, you may use up to two times the bandwidth limit you procured by using the bandwidth available on the secondary connection of your ExpressRoute circuit. The built-in redundancy of your circuit is configured using primary and secondary connections, each of the procured bandwidth, to two Microsoft Enterprise Edge routers (MSEEs). The bandwidth available through your secondary connection can be used for additional traffic if necessary. Because the secondary connection is meant for redundancy, however, it is not guaranteed and should not be used for additional traffic for a sustained period of time. To learn more about how to use both connnections to transmit traffic, see [Use AS PATH prepending](./expressroute-optimize-routing.md#solution-use-as-path-prepending).
+Yes, you may use up to two times the bandwidth limit you procured by using the bandwidth available on the secondary connection of your ExpressRoute circuit. The built-in redundancy of your circuit is configured using primary and secondary connections, each of the procured bandwidth, to two Microsoft Enterprise Edge routers (MSEEs). The bandwidth available through your secondary connection can be used for additional traffic if necessary. Because the secondary connection is meant for redundancy, however, it is not guaranteed and should not be used for additional traffic for a sustained period of time. To learn more about how to use both connections to transmit traffic, see [Use AS PATH prepending](./expressroute-optimize-routing.md#solution-use-as-path-prepending).
If you plan to use only your primary connection to transmit traffic, the bandwidth for the connection is fixed and attempting to oversubscribe it will result in increased packet drops. If traffic flows through an ExpressRoute Gateway, the bandwidth for the Gateway SKU is fixed and not burstable. For the bandwidth of each Gateway SKU, see [About ExpressRoute virtual network gateways](expressroute-about-virtual-network-gateways.md#aggthroughput).
If your ExpressRoute circuit is enabled for Azure Microsoft peering, you can acc
* Azure Active Directory * [Azure DevOps](https://blogs.msdn.microsoft.com/devops/2018/10/23/expressroute-for-azure-devops/) (Azure Global Services community) * Azure Public IP addresses for IaaS (Virtual Machines, Virtual Network Gateways, Load Balancers, etc.)
-* Most of the other Azure services are also supported. Please check directly with the service that you want to use to verify support.
+* Most of the other Azure services are also supported. Check directly with the service that you want to use to verify support.
**Not supported:**
If your ExpressRoute circuit is enabled for Azure Microsoft peering, you can acc
### Public peering
-Public peering has been disabled on new ExpressRoute circuits. Azure services are now available on Microsoft peering. If you a circuit that was created prior to public peering being deprecated, you can choose to use Microsoft peering or public peering, depending on the services that you want.
+Public peering has been disabled on new ExpressRoute circuits. Azure services are now available on Microsoft peering. If you have a circuit that was created before public peering was deprecated, you can choose to use Microsoft peering or public peering, depending on the services that you want.
For more information and configuration steps for public peering, see [ExpressRoute public peering](about-public-peering.md).
Refer to [pricing details](https://azure.microsoft.com/pricing/details/expressro
Yes. ExpressRoute premium charges apply on top of ExpressRoute circuit charges and charges required by the connectivity provider. ## ExpressRoute Local+ ### What is ExpressRoute Local?
-ExpressRoute Local is a SKU of ExpressRoute circuit, in addition to the Standard SKU and the Premium SKU. A key feature of Local is that a Local circuit at an ExpressRoute peering location gives you access only to one or two Azure regions in or near the same metro. In contrast, a Standard circuit gives you access to all Azure regions in a geopolitical area and a Premium circuit to all Azure regions globally.
+
+ExpressRoute Local is an ExpressRoute circuit SKU, in addition to the Standard SKU and the Premium SKU. A key feature of Local is that a Local circuit at an ExpressRoute peering location gives you access only to one or two Azure regions in or near the same metro. In contrast, a Standard circuit gives you access to all Azure regions in a geopolitical area and a Premium circuit to all Azure regions globally. Specifically, with a Local SKU you can only advertise routes (over Microsoft and private peering) from the corresponding local region of the ExpressRoute circuit. You won't be able to receive routes for regions other than the defined local region.
### What are the benefits of ExpressRoute Local?+ While you need to pay egress data transfer for your Standard or Premium ExpressRoute circuit, you don't pay egress data transfer separately for your ExpressRoute Local circuit. In other words, the price of ExpressRoute Local includes data transfer fees. ExpressRoute Local is a more economical solution if you have massive amount of data to transfer and you can bring your data over a private connection to an ExpressRoute peering location near your desired Azure regions. ### What features are available and what are not on ExpressRoute Local?+ Compared to a Standard ExpressRoute circuit, a Local circuit has the same set of features except: * Scope of access to Azure regions as described above * ExpressRoute Global Reach is not available on Local
Compared to a Standard ExpressRoute circuit, a Local circuit has the same set of
ExpressRoute Local also has the same limits on resources (e.g. the number of VNets per circuit) as Standard. ### Where is ExpressRoute Local available and which Azure regions is each peering location mapped to?+ ExpressRoute Local is available at the peering locations where one or two Azure regions are close-by. It is not available at a peering location where there is no Azure region in that state or province or country/region. Please see the exact mappings on [the Locations page](expressroute-locations-providers.md). ## ExpressRoute for Microsoft 365
expressroute Expressroute Howto Expressroute Direct Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-howto-expressroute-direct-cli.md
Once enrolled, verify that the **Microsoft.Network** resource provider is regist
} ```
+## <a name="resources"></a>Generate the Letter of Authorization (LOA)
+
+Provide the name of the ExpressRoute Direct resource you created, its resource group name, and the customer name that the LOA should be written for, and (optionally) define a file location to store the document. If a file path is not specified, the document downloads to the current directory.
+
+```azurecli
+az network express-route port generate-loa -n Contoso-Direct -g Contoso-Direct-rg --customer-name Contoso --destination C:\Users\SampleUser\Downloads\LOA.pdf
+```
+ ## <a name="state"></a>Change AdminState for links Use this process to conduct a layer 1 test. Ensure that each cross-connection is properly patched into each router in the primary and secondary ports.
expressroute Expressroute Troubleshooting Expressroute Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-troubleshooting-expressroute-overview.md
In the ExpressRoute Essentials, *Circuit status* indicates the status of the cir
For an ExpressRoute circuit to be operational, the *Circuit status* must be *Enabled* and the *Provider status* must be *Provisioned*. > [!NOTE]
-> After configuring an ExpressRoute circuit, if the *Circuit status* is struck in not enabled status, contact [Microsoft Support][Support]. On the other hand, if the *Provider status* is struck in not provisioned status, contact your service provider.
+> After configuring an ExpressRoute circuit, if the *Circuit status* is stuck in not enabled status, contact [Microsoft Support][Support]. On the other hand, if the *Provider status* is stuck in not provisioned status, contact your service provider.
> >
ServiceProviderProvisioningState : Provisioned
``` > [!NOTE]
-> After configuring an ExpressRoute circuit, if the *Circuit status* is struck in not enabled status, contact [Microsoft Support][Support]. On the other hand, if the *Provider status* is struck in not provisioned status, contact your service provider.
+> After configuring an ExpressRoute circuit, if the *Circuit status* is stuck in not enabled status, contact [Microsoft Support][Support]. On the other hand, if the *Provider status* is stuck in not provisioned status, contact your service provider.
> >
For more information or help, check out the following links:
[CreatePeering]: ./expressroute-howto-routing-portal-resource-manager.md [ARP]: ./expressroute-troubleshooting-arp-resource-manager.md [HA]: ./designing-for-high-availability-with-expressroute.md
-[DR-Pvt]: ./designing-for-disaster-recovery-with-expressroute-privatepeering.md
+[DR-Pvt]: ./designing-for-disaster-recovery-with-expressroute-privatepeering.md
frontdoor Front Door Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/front-door-custom-domain.md
To create a CNAME record with the afdverify subdomain:
||-|| | afdverify.www.contoso.com | CNAME | afdverify.contoso-frontend.azurefd.net |
- - Source: Enter your custom domain name, including the afdverify subdomain, in the following format: afdverify._&lt;custom domain name&gt;_. For example, afdverify.www.contoso.com.
+ - Source: Enter your custom domain name, including the afdverify subdomain, in the following format: afdverify._&lt;custom domain name&gt;_. For example, afdverify.www.contoso.com. If you are mapping a wildcard domain, like \*.contoso.com, the source value is the same as it would be without the wildcard: afdverify.contoso.com.
- Type: Enter *CNAME*.
In this tutorial, you learned how to:
To learn how to enable HTTPS for your custom domain, continue to the next tutorial. > [!div class="nextstepaction"]
-> [Enable HTTPS for a custom domain](front-door-custom-domain-https.md)
+> [Enable HTTPS for a custom domain](front-door-custom-domain-https.md)
frontdoor Front Door Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/front-door-diagnostics.md
To configure diagnostic logs for your Front Door:
3. Select **Turn on diagnostics**. Archive diagnostic logs along with metrics to a storage account, stream them to an event hub, or send them to Azure Monitor logs.
-Front Door currently provides diagnostic logs (batched hourly). Diagnostic logs provide individual API requests with each entry having the following schema:
+Front Door currently provides diagnostic logs. Diagnostic logs provide individual API requests with each entry having the following schema:
| Property | Description | | - | - |
frontdoor Front Door Health Probes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/front-door-health-probes.md
To determine the health and proximity of each backend for a given Front Door env
> [!WARNING] > Since Front Door has many edge environments globally, health probe volume for your backends can be quite high - ranging from 25 requests every minute to as high as 1200 requests per minute, depending on the health probe frequency configured. With the default probe frequency of 30 seconds, the probe volume on your backend should be about 200 requests per minute.
+> [!NOTE]
+> Front Door HTTP/HTTPS probes are sent with the `User-Agent` header set to the value `Edge Health Probes`.
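+A backend can use this value to tell probe traffic apart from user traffic, for example to keep probes out of request analytics. A minimal sketch, assuming a hypothetical Python Flask backend with a `/health` route (neither is part of this article):
+
+```python
+from flask import Flask, request
+
+app = Flask(__name__)
+
+@app.route("/health")
+def health():
+    # Front Door health probes arrive with this User-Agent value.
+    if request.headers.get("User-Agent", "") == "Edge Health Probes":
+        app.logger.debug("Front Door health probe received")
+    # Return 200 so Front Door considers this backend healthy.
+    return "OK", 200
+```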
+ ## Supported protocols Front Door supports sending probes over either HTTP or HTTPS protocols. These probes are sent over the same TCP ports configured for routing client requests, and cannot be overridden.
frontdoor Front Door Lb With Azure App Delivery Suite https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/front-door-lb-with-azure-app-delivery-suite.md
ms.devlang: na
na Previously updated : 09/28/2020 Last updated : 04/06/2021
When you combine these global and regional services, your application will benef
## Global load balancing **Traffic Manager** provides global DNS load balancing. It looks at incoming DNS requests and responds with a healthy backend, following the routing policy the customer has selected. Options for routing methods are:-- **Performance routing sends requests to the closest backend with the least latency.-- **Priority routing** direct all traffic to a backend, with other backends as backup.
+- **Performance routing** sends requests to the closest backend with the least latency.
+- **Priority routing** directs all traffic to a backend, with other backends as backup.
- **Weighted round-robin routing** distributes traffic based on the weighting that is assigned to each backend. - **Geographic routing** ensures requests that get sourced from specific geographical regions get handled by backends mapped for those regions. (For example, all requests from Spain should be directed to the France Central Azure region) - **Subnet routing** allows you to map IP address ranges to backends, so that incoming requests for those IPs will be sent to the specific backend. (For example, any users that connect from your corporate HQ's IP address range should get different web content than the general users)
frontdoor Front Door Security Headers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/front-door-security-headers.md
In this tutorial, you learn how to:
1. Add the header name: **Content-Security-Policy** and define the values this header should accept. In this scenario, we choose *"script-src 'self' https://apiphany.portal.azure-api.net."*
+ > [!NOTE]
+ > Header values are limited to 128 characters.
+ 1. Once you've added all of the rules you'd like to your configuration, don't forget to go to your preferred route and associate your Rules Engine configuration to your Route Rule. This step is required to enable the rule to work. ![portal sample](./media/front-door-rules-engine/rules-engine-security-header-example.png)
frontdoor Concept Private Link https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/standard-premium/concept-private-link.md
When you enable Private Link to your origin in Azure Front Door Premium configur
## Limitations
-Azure Front Door private endpoints are available in the following regions during public preview: East US, West 2 US, and South Central US.
+Azure Front Door private endpoints are available in the following regions during public preview: East US, West 2 US, South Central US, and UK South.
For the best latency, you should always pick an Azure region closest to your origin when choosing to enable Front Door private link endpoint.
frontdoor Create Front Door Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/standard-premium/create-front-door-portal.md
ms.devlang: na
na Previously updated : 02/18/2021 Last updated : 04/16/2021
Configure Azure Front Door Standard/Premium (Preview) to direct user traffic bas
:::image type="content" source="../media/create-front-door-portal/front-door-custom-create-add-endpoint.png" alt-text="Screenshot of add an endpoint.":::
-1. Next, add an Origin Group that contains your two web apps. Select **+ Add** to open **Add an origin group** page. For Name, enter *myOrignGroup*, then select **+ Add an origin**.
+1. Next, add an Origin Group that contains your two web apps. Select **+ Add** to open **Add an origin group** page. For Name, enter *myOriginGroup*, then select **+ Add an origin**.
:::image type="content" source="../media/create-front-door-portal/front-door-custom-create-add-origin-group.png" alt-text="Screenshot of add an origin group.":::
governance Supported Tables Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/resource-graph/reference/supported-tables-resources.md
Title: Supported Azure Resource Manager resource types description: Provide a list of the Azure Resource Manager resource types supported by Azure Resource Graph and Change History. Previously updated : 03/24/2021 Last updated : 04/06/2021
part of a **table** in Resource Graph.
- Citrix.Services/XenDesktopEssentials (Citrix Virtual Desktops Essentials) - Conexlink.MyCloudIt/accounts (MyCloudIT - Azure Desktop Hosting) - Crypteron.DataSecurity/apps (Crypteron)-- github.enterprise/accounts
+- GitHub.Enterprise/accounts (GitHub AE)
- gridpro.evops/accounts - gridpro.evops/accounts/eventrules - gridpro.evops/accounts/requesttemplates
part of a **table** in Resource Graph.
- microsoft.aadiam/azureadmetrics - microsoft.aadiam/privateLinkForAzureAD (Private Link for Azure AD) - microsoft.aadiam/tenants-- microsoft.agfoodplatform/farmbeats
+- Microsoft.AgFoodPlatform/farmBeats (Azure FarmBeats PaaS)
- microsoft.aisupercomputer/accounts - microsoft.aisupercomputer/accounts/jobgroups - microsoft.aisupercomputer/accounts/jobgroups/jobs
part of a **table** in Resource Graph.
- Microsoft.Cognition/syntheticsAccounts (Synthetics Accounts) - Microsoft.CognitiveServices/accounts (Cognitive Services) - Microsoft.Compute/availabilitySets (Availability sets)-- microsoft.compute/capacityreservationgroups
+- Microsoft.Compute/capacityReservationGroups (Capacity Reservation Groups)
- microsoft.compute/capacityreservationgroups/capacityreservations - microsoft.compute/capacityreservations - Microsoft.Compute/cloudServices (Cloud services (extended support))
part of a **table** in Resource Graph.
- Microsoft.ConnectedCache/cacheNodes (Connected Cache Resources) - microsoft.connectedvehicle/platformaccounts - microsoft.connectedvmwarevsphere/resourcepools-- microsoft.connectedvmwarevsphere/vcenters
+- Microsoft.connectedVMwareVSphere/vCenters (VMware vCenters)
- Microsoft.ConnectedVMwarevSphere/VirtualMachines (VMware + AVS virtual machines) - microsoft.connectedvmwarevsphere/virtualmachines/extensions - microsoft.connectedvmwarevsphere/virtualmachinetemplates
part of a **table** in Resource Graph.
- microsoft.containerregistry/registries/taskruns - microsoft.containerregistry/registries/tasks - Microsoft.ContainerRegistry/registries/webhooks (Container registry webhooks)-- Microsoft.ContainerService/containerServices (Container services (deprecated))
+- microsoft.containerservice/containerservices
- Microsoft.ContainerService/managedClusters (Kubernetes services) - microsoft.containerservice/openshiftmanagedclusters - microsoft.contoso/clusters
part of a **table** in Resource Graph.
- microsoft.datamigration/slots - microsoft.datamigration/sqlmigrationservices - Microsoft.DataProtection/BackupVaults (Backup vaults)
+- microsoft.dataprotection/resourceguards
- microsoft.dataprotection/resourceoperationgatekeepers - Microsoft.DataShare/accounts (Data Shares) - Microsoft.DBforMariaDB/servers (Azure Database for MariaDB servers)
part of a **table** in Resource Graph.
- Microsoft.DomainRegistration/domains (App Service Domains) - microsoft.edgeorder/addresses - microsoft.edgeorder/ordercollections-- microsoft.edgeorder/orders
+- Microsoft.EdgeOrder/orders (Azure Edge)
- Microsoft.Elastic/monitors (Elasticsearch) - microsoft.enterpriseknowledgegraph/services - Microsoft.EventGrid/domains (Event Grid Domains)
part of a **table** in Resource Graph.
- Microsoft.EventHub/clusters (Event Hubs Clusters) - Microsoft.EventHub/namespaces (Event Hubs Namespaces) - Microsoft.Experimentation/experimentWorkspaces (Experiment Workspaces)-- Microsoft.ExtendedLocation/CustomLocations (Custom Locations)
+- Microsoft.ExtendedLocation/CustomLocations (Custom locations)
- microsoft.falcon/namespaces - microsoft.footprintmonitoring/profiles - microsoft.gaming/titles
part of a **table** in Resource Graph.
- Microsoft.HanaOnAzure/hanaInstances (SAP HANA on Azure) - Microsoft.HanaOnAzure/sapMonitors (Azure Monitors for SAP Solutions) - microsoft.hardwaresecuritymodules/dedicatedhsms
+- microsoft.hdinsight/clusterpools
+- microsoft.hdinsight/clusterpools/clusters
- Microsoft.HDInsight/clusters (HDInsight clusters) - Microsoft.HealthBot/healthBots (Azure Health Bot) - Microsoft.HealthcareApis/services (Azure API for FHIR)
part of a **table** in Resource Graph.
- microsoft.machinelearningservices/workspaces/batchendpoints/deployments - microsoft.machinelearningservices/workspaces/inferenceendpoints - microsoft.machinelearningservices/workspaces/inferenceendpoints/deployments-- Microsoft.MachineLearningServices/workspaces/onlineEndpoints (ML Apps)-- Microsoft.MachineLearningServices/workspaces/onlineEndpoints/deployments (ML App Deployments)
+- Microsoft.MachineLearningServices/workspaces/onlineEndpoints (Machine learning online endpoints)
+- Microsoft.MachineLearningServices/workspaces/onlineEndpoints/deployments (Machine learning online deployments)
- Microsoft.Maintenance/maintenanceConfigurations (Maintenance Configurations) - microsoft.maintenance/maintenancepolicies - microsoft.managedidentity/groups
part of a **table** in Resource Graph.
- microsoft.media/mediaservices/liveevents (Live events) - microsoft.media/mediaservices/streamingEndpoints (Streaming Endpoints) - microsoft.media/mediaservices/transforms-- microsoft.media/videoanalyzers
+- microsoft.media/videoanalyzers (Video Analyzers)
- microsoft.microservices4spring/appclusters - microsoft.migrate/assessmentprojects - microsoft.migrate/migrateprojects
part of a **table** in Resource Graph.
- Microsoft.OperationalInsights/workspaces (Log Analytics workspaces) - Microsoft.OperationsManagement/solutions (Solutions) - microsoft.operationsmanagement/views-- microsoft.orbital/contactprofiles-- microsoft.orbital/spacecrafts
+- Microsoft.Orbital/contactProfiles (ContactProfiles)
+- Microsoft.Orbital/spacecrafts (Spacecrafts)
- Microsoft.Peering/peerings (Peerings) - Microsoft.Peering/peeringServices (Peering Services) - Microsoft.Portal/dashboards (Shared dashboards)
part of a **table** in Resource Graph.
- microsoft.powerbidedicated/autoscalevcores - Microsoft.PowerBIDedicated/capacities (Power BI Embedded) - microsoft.powerplatform/enterprisepolicies-- Microsoft.ProjectBabylon/Accounts (Babylon accounts)
+- microsoft.projectbabylon/accounts
- Microsoft.Purview/Accounts (Purview accounts) - Microsoft.Quantum/Workspaces (Quantum Workspaces) - Microsoft.RecoveryServices/vaults (Recovery Services vaults)
part of a **table** in Resource Graph.
- Microsoft.Synapse/privateLinkHubs (Azure Synapse Analytics (private link hubs)) - Microsoft.Synapse/workspaces (Azure Synapse Analytics) - Microsoft.Synapse/workspaces/bigDataPools (Apache Spark pools)
+- microsoft.synapse/workspaces/kustopools
- microsoft.synapse/workspaces/sqldatabases - Microsoft.Synapse/workspaces/sqlPools (Dedicated SQL pools) - microsoft.terraformoss/providerregistrations
internet-peering Walkthrough Communications Services Partner https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/internet-peering/walkthrough-communications-services-partner.md
+
+ Title: Azure Internet peering for Communications Services walkthrough
+
+description: Azure Internet peering for Communications Services walkthrough
++++ Last updated : 03/30/2021+++
+# Azure Internet peering for Communications Services walkthrough
+
+This section explains the steps a Communications Services Provider needs to follow to establish a Direct interconnect with Microsoft.
+
+**Communications Services Providers:** Communications Services Providers are the organizations that offer communication services (communications, messaging, conferencing, and so on) and are looking to integrate their communications services infrastructure (SBC/SIP Gateway, and so on) with Azure Communication Services and Microsoft Teams.
+
+Azure Internet peering enables Communications Services Providers to establish a direct interconnect with Microsoft at any of its edge sites (POP locations). The list of all the public edge sites is available in [PeeringDB](https://www.peeringdb.com/net/694).
+
+Azure Internet peering provides a highly reliable, QoS (Quality of Service) enabled interconnect for Communications services to ensure high-quality and performance-centric services.
+
+## Technical Requirements
+The technical requirements to establish a direct interconnect for Communications Services are as follows:
+- The Peer MUST provide its own Autonomous System Number (ASN), which MUST be public.
+- The Peer MUST have redundant Interconnect (PNI) at each interconnect location to ensure local redundancy.
+- The Peer MUST have geo-redundancy in place to ensure failover in the event of site failures in a region/metro.
+- The Peer MUST configure the BGP sessions as Active-Active to ensure high availability and faster convergence; they should not be provisioned as primary and backup.
+- The Peer MUST maintain a 1:1 ratio of Peer peering routers to peering circuits, with no rate limiting applied.
+- The Peer MUST supply and advertise their own publicly routable IPv4 address space used by the Peer's communications service endpoints (for example, SBC).
+- The Peer MUST supply details of what class of traffic and endpoints are housed in each advertised subnet.
+- The Peer MUST run BGP over Bi-directional Forwarding Detection (BFD) to facilitate sub-second route convergence.
+- All communications infrastructure prefixes are registered in the Azure portal and advertised with community string 8075:8007.
+- The Peer MUST NOT terminate peering on a device running a stateful firewall.
+- Microsoft will configure all the interconnect links as LAG (link bundles) by default, so the Peer MUST support LACP (Link Aggregation Control Protocol) on the interconnect links.
+
+## Establishing Direct Interconnect with Microsoft for Communications Services
+
+To establish a direct interconnect using Azure Internet peering, follow these steps:
+
+**1. Associate Peer public ASN to the Azure Subscription:**
+
+If the Peer has already associated a public ASN with the Azure subscription, skip this step.
+
+[Associate peer ASN to Azure subscription using the portal - Azure | Microsoft Docs](https://docs.microsoft.com/azure/internet-peering/howto-subscription-association-portal)
+
+The next step is to create a Direct peering connection for Peering Service.
+
+> [!NOTE]
+> Once ASN association is approved, email us at peeringservices@microsoft.com with your ASN and subscription ID to associate your subscription with Communications services.
+
+**2. Create Direct peering connection for Peering Service:**
+
+Follow the instructions to [Create or modify a Direct peering using the portal](https://docs.microsoft.com/azure/internet-peering/howto-direct-portal)
+
+Ensure that it meets the high-availability requirements.
+
+Ensure that you select the following options on the "Create a Peering" page:
+
+Peering Type: **Direct**
+
+Microsoft Network: **8075**
+
+SKU: **Premium Free**
++
+On the "Direct Peering Connection" page, select the following options:
+
+Session Address provider: **Microsoft**
+
+Use for Peering
+
+> [!NOTE]
+> Ignore the following message while selecting for activating for Peering Services.
+> *Do not enable unless you have contacted peering@microsoft.com about becoming a MAPS provider.*
++
+ **2a. Use Existing Direct peering connection for Peering Services**
+
+If you have an existing Direct peering that you want to use to support Peering Service, you can activate it in the Azure portal.
+1. Follow the instructions to [Convert a legacy Direct peering to Azure resource using the portal](https://docs.microsoft.com/azure/internet-peering/howto-legacy-direct-portal).
+As required, order additional circuits to meet the high-availability requirements.
+
+2. Follow steps to [Enable Peering Service](https://docs.microsoft.com/azure/internet-peering/howto-peering-service-portal) on a Direct peering using the portal.
++++
+**3. Register your prefixes for Optimized Routing**
+
+For optimized routing for your Communication services infrastructure prefixes, you should register all your prefixes with your peering interconnects.
+[Register Azure Peering Service - Azure portal | Microsoft Docs](https://docs.microsoft.com/azure/peering-service/azure-portal)
+
+The Prefix key is auto-populated for Communications Services Partners, so the partner doesn't need to provide a prefix key to register.
+
+Ensure that the prefixes being registered are announced over the direct interconnects established for the region.
++
+## FAQs:
+
+**Q.** I have smaller subnets (smaller than /24) for my Communications services. Can the smaller subnets also be routed?
+
+**A.** Yes, Microsoft Azure Peering Service also supports routing of smaller prefixes. Ensure that you register the smaller prefixes for routing and that they are announced over the interconnects.
+
+**Q.** What Microsoft routes will we receive over these interconnects?
+
+**A.** Microsoft announces all of Microsoft's public service prefixes over these interconnects. This ensures that not only Communications services but also other cloud services are accessible from the same interconnect.
+
+**Q.** I need to set the prefix limit. How many routes will Microsoft announce?
+
+**A.** Microsoft announces roughly 280 prefixes on the internet, and that number may increase by 10-15% in the future. So, a safe limit of 400-500 is a good value to set as the "Max prefix count".
+
+**Q.** Will Microsoft re-advertise the Peer prefixes to the Internet?
+
+**A.** No.
+
+**Q.** Is there a fee for this service?
+
+**A.** No. However, the Peer is expected to carry the site cross-connect costs.
+
+**Q.** What is the minimum link speed for an interconnect?
+
+**A.** 10 Gbps.
+
+**Q.** Is the Peer bound to an SLA?
+
+**A.** Yes. Once utilization reaches 40%, a 45-60 day LAG augmentation process must begin.
+
+**Q.** What is the advantage of this service over current direct peering or express route?
+
+**A.** It is settlement-free, the entire path is optimized for voice traffic over the Microsoft WAN, and convergence is tuned to sub-second with BFD.
+
+**Q.** How long does it take to complete the onboarding process?
+
+**A.** The time varies depending on the number and location of sites, and on whether the Peer is migrating existing private peerings or establishing new cabling. The carrier should plan for 3+ weeks.
+
+**Q.** Can we use APIs for onboarding?
+
+**A.** Currently there is no API support; configuration must be performed via the web portal.
iot-central Concepts Device Templates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/concepts-device-templates.md
A device model defines how a device interacts with your IoT Central application.
A solution developer can also export a JSON file that contains the device model. A device developer can use this JSON document to understand how the device should communicate with the IoT Central application.
-The JSON file that defines the device model uses the [Digital Twin Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md). IoT Central expects the JSON file to contain the device model with the interfaces defined inline, rather than in separate files.
+The JSON file that defines the device model uses the [Digital Twin Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md). IoT Central expects the JSON file to contain the device model with the interfaces defined inline, rather than in separate files. To learn more, see [IoT Plug and Play modeling guide](../../iot-pnp/concepts-modeling-guide.md).
A typical IoT device is made up of: - Custom parts, which are the things that make your device unique. - Standard parts, which are things that are common to all devices.
-These parts are called _interfaces_ in a device model. Interfaces define the details of each part your device implements. Interfaces are reusable across device models. In the DTDL, a component refers to an interface defined in a separate DTDL file.
+These parts are called _interfaces_ in a device model. Interfaces define the details of each part your device implements. Interfaces are reusable across device models. In DTDL, a component refers to another interface, which may be defined in a separate DTDL file or in a separate section of the file.
-The following example shows the outline of device model for a temperature controller device. The default component includes definitions for `workingSet`, `serialNumber`, and `reboot`. The device model also includes to the `thermostat` and `deviceInformation` interfaces:
+The following example shows the outline of a device model for a [temperature controller device](https://github.com/Azure/iot-plugandplay-models/blob/main/dtmi/com/example/temperaturecontroller-2.json). The default component includes definitions for `workingSet`, `serialNumber`, and `reboot`. The device model also includes two `thermostat` components and a `deviceInformation` component. The contents of the three components have been removed for the sake of brevity:
```json
-{
- "@context": "dtmi:dtdl:context;2",
- "@id": "dtmi:com:example:TemperatureController;1",
- "@type": "Interface",
- "displayName": "Temperature Controller",
- "description": "Device with two thermostats and remote reboot.",
- "contents": [
- {
- "@type": [
- "Telemetry", "DataSize"
- ],
- "name": "workingSet",
- "displayName": "Working Set",
- "description": "Current working set of the device memory in KiB.",
- "schema": "double",
- "unit" : "kibibyte"
- },
- {
- "@type": "Property",
- "name": "serialNumber",
- "displayName": "Serial Number",
- "description": "Serial number of the device.",
- "schema": "string"
- },
- {
- "@type": "Command",
- "name": "reboot",
- "displayName": "Reboot",
- "description": "Reboots the device after waiting the number of seconds specified.",
- "request": {
- "name": "delay",
- "displayName": "Delay",
- "description": "Number of seconds to wait before rebooting the device.",
- "schema": "integer"
+[
+ {
+ "@context": [
+ "dtmi:iotcentral:context;2",
+ "dtmi:dtdl:context;2"
+ ],
+ "@id": "dtmi:com:example:TemperatureController;2",
+ "@type": "Interface",
+ "contents": [
+ {
+ "@type": [
+ "Telemetry",
+ "DataSize"
+ ],
+ "description": {
+ "en": "Current working set of the device memory in KiB."
+ },
+ "displayName": {
+ "en": "Working Set"
+ },
+ "name": "workingSet",
+ "schema": "double",
+ "unit": "kibibit"
+ },
+ {
+ "@type": "Property",
+ "displayName": {
+ "en": "Serial Number"
+ },
+ "name": "serialNumber",
+ "schema": "string",
+ "writable": false
+ },
+ {
+ "@type": "Command",
+ "commandType": "synchronous",
+ "description": {
+ "en": "Reboots the device after waiting the number of seconds specified."
+ },
+ "displayName": {
+ "en": "Reboot"
+ },
+ "name": "reboot",
+ "request": {
+ "@type": "CommandPayload",
+ "description": {
+ "en": "Number of seconds to wait before rebooting the device."
+ },
+ "displayName": {
+ "en": "Delay"
+ },
+ "name": "delay",
+ "schema": "integer"
+ }
+ },
+ {
+ "@type": "Component",
+ "displayName": {
+ "en": "thermostat1"
+ },
+ "name": "thermostat1",
+ "schema": "dtmi:com:example:Thermostat;2"
+ },
+ {
+ "@type": "Component",
+ "displayName": {
+ "en": "thermostat2"
+ },
+ "name": "thermostat2",
+ "schema": "dtmi:com:example:Thermostat;2"
+ },
+ {
+ "@type": "Component",
+ "displayName": {
+ "en": "DeviceInfo"
+ },
+ "name": "deviceInformation",
+ "schema": "dtmi:azure:DeviceManagement:DeviceInformation;1"
}
- },
- {
- "@type" : "Component",
- "schema": "dtmi:com:example:Thermostat;1",
- "name": "thermostat",
- "displayName": "Thermostat",
- "description": "Thermostat One."
- },
- {
- "@type": "Component",
- "schema": "dtmi:azure:DeviceManagement:DeviceInformation;1",
- "name": "deviceInformation",
- "displayName": "Device Information interface",
- "description": "Optional interface with basic device hardware information."
+ ],
+ "displayName": {
+ "en": "Temperature Controller"
}
- ]
-}
+ },
+ {
+ "@context": "dtmi:dtdl:context;2",
+ "@id": "dtmi:com:example:Thermostat;2",
+ "@type": "Interface",
+ "displayName": "Thermostat",
+ "description": "Reports current temperature and provides desired temperature control.",
+ "contents": [
+ ...
+ ]
+ },
+ {
+ "@context": "dtmi:dtdl:context;2",
+ "@id": "dtmi:azure:DeviceManagement:DeviceInformation;1",
+ "@type": "Interface",
+ "displayName": "Device Information",
+ "contents": [
+ ...
+ ]
+ }
+]
``` An interface has some required fields:
The following example shows the thermostat interface definition:
```json { "@context": "dtmi:dtdl:context;2",
- "@id": "dtmi:com:example:Thermostat;1",
+ "@id": "dtmi:com:example:Thermostat;2",
"@type": "Interface", "displayName": "Thermostat", "description": "Reports current temperature and provides desired temperature control.",
The following example shows the thermostat interface definition:
"Temperature" ], "name": "temperature",
- "displayName" : "Temperature",
- "description" : "Temperature in degrees Celsius.",
+ "displayName": "Temperature",
+ "description": "Temperature in degrees Celsius.",
"schema": "double", "unit": "degreeCelsius" },
The following example shows the thermostat interface definition:
"schema": "double", "displayName": "Target Temperature", "description": "Allows to remotely specify the desired target temperature.",
- "unit" : "degreeCelsius",
+ "unit": "degreeCelsius",
"writable": true }, {
The following example shows the thermostat interface definition:
], "name": "maxTempSinceLastReboot", "schema": "double",
- "unit" : "degreeCelsius",
+ "unit": "degreeCelsius",
"displayName": "Max temperature since last reboot.", "description": "Returns the max temperature since last device reboot." },
The following example shows the thermostat interface definition:
"schema": "dateTime" }, "response": {
- "name" : "tempReport",
+ "name": "tempReport",
"displayName": "Temperature Report", "schema": { "@type": "Object",
The following example shows the thermostat interface definition:
"schema": "double" }, {
- "name" : "avgTemp",
+ "name": "avgTemp",
"displayName": "Average Temperature", "schema": "double" }, {
- "name" : "startTime",
+ "name": "startTime",
"displayName": "Start Time", "schema": "dateTime" }, {
- "name" : "endTime",
+ "name": "endTime",
"displayName": "End Time", "schema": "dateTime" }
Optional fields, such as display name and description, let you add more details
By default, properties are read-only. Read-only properties mean that the device reports property value updates to your IoT Central application. Your IoT Central application can't set the value of a read-only property.
-You can also mark a property as writeable on an interface. A device can receive an update to a writeable property from your IoT Central application as well as reporting property value updates to your application.
+You can also mark a property as writable on an interface. A device can receive an update to a writable property from your IoT Central application as well as reporting property value updates to your application.
-Devices don't need to be connected to set property values. The updated values are transferred when the device next connects to the application. This behavior applies to both read-only and writeable properties.
+Devices don't need to be connected to set property values. The updated values are transferred when the device next connects to the application. This behavior applies to both read-only and writable properties.
Don't use properties to send telemetry from your device. For example, a readonly property such as `temperatureSetting=80` should mean that the device temperature has been set to 80, and the device is trying to get to, or stay at, this temperature.
iot-central Concepts Get Connected https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/concepts-get-connected.md
The following table summarizes how Azure IoT Central device features map on to I
| Telemetry | Device-to-cloud messaging | | Offline commands | Cloud-to-device messaging | | Property | Device twin reported properties |
-| Property (writeable) | Device twin desired and reported properties |
+| Property (writable) | Device twin desired and reported properties |
| Command | Direct methods | ### Protocols
iot-central Concepts Telemetry Properties Commands https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/concepts-telemetry-properties-commands.md
IoT Central lets you view the raw data that a device sends to an application. Th
## Telemetry
+### Telemetry in components
+
+If the telemetry is defined in a component, add a custom message property called `$.sub` with the name of the component as defined in the device model. To learn more, see [Tutorial: Create and connect a client application to your Azure IoT Central application](tutorial-connect-device.md).
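+As an illustration, the following sketch, which assumes the Python `azure-iot-device` SDK and a hypothetical device connection string, shows one way a device might attach the `$.sub` property when sending telemetry defined in a component named `thermostat1`:
+
+```python
+import json
+from azure.iot.device import IoTHubDeviceClient, Message
+
+# Hypothetical connection string, for illustration only.
+client = IoTHubDeviceClient.create_from_connection_string("<device-connection-string>")
+client.connect()
+
+# Telemetry defined in the thermostat1 component of the device model.
+msg = Message(json.dumps({"temperature": 21.5}))
+msg.content_encoding = "utf-8"
+msg.content_type = "application/json"
+
+# The $.sub custom message property names the component.
+msg.custom_properties["$.sub"] = "thermostat1"
+
+client.send_message(msg)
+client.disconnect()
+```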
+ ### Primitive types This section shows examples of primitive telemetry types that a device streams to an IoT Central application.
A device client should send the state as JSON that looks like the following exam
> [!NOTE] > The payload formats for properties applies to applications created on or after 07/14/2020.
+### Properties in components
+
+If the property is defined in a component, wrap the property in the component name. The following example sets the `maxTempSinceLastReboot` in the `thermostat2` component. The marker `__t` indicates that this is a component:
+
+```json
+{
+ "thermostat2" : {
+ "__t" : "c",
+ "maxTempSinceLastReboot" : 38.7
+ }
+}
+```
+
+To learn more, see [Tutorial: Create and connect a client application to your Azure IoT Central application](tutorial-connect-device.md).
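+On the device side, a sketch that assumes the Python `azure-iot-device` SDK and a hypothetical connection string (the article doesn't prescribe an SDK) could report the component-wrapped value like this:
+
+```python
+from azure.iot.device import IoTHubDeviceClient
+
+# Hypothetical connection string, for illustration only.
+client = IoTHubDeviceClient.create_from_connection_string("<device-connection-string>")
+client.connect()
+
+# Wrap the property in the component name; __t marks it as a component.
+client.patch_twin_reported_properties({
+    "thermostat2": {
+        "__t": "c",
+        "maxTempSinceLastReboot": 38.7
+    }
+})
+client.disconnect()
+```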
+ ### Primitive types This section shows examples of primitive property types that a device sends to an IoT Central application.
A device client should send a JSON payload that looks like the following example
} ```
-### Writeable property types
+### Writable property types
-This section shows examples of writeable property types that a device receives from an IoT Central application.
+This section shows examples of writable property types that a device receives from an IoT Central application.
-IoT Central expects a response from the device to writeable property updates. The response message should include the `ac` and `av` fields. The `ad` field is optional. See the following snippets for examples.
+If the writable property is defined in a component, the desired property message includes the component name. The following example shows the message requesting the device to update the `targetTemperature` in the `thermostat2` component. The marker `__t` indicates that this is a component:
+
+```json
+{
+ "thermostat2": {
+ "targetTemperature": {
+ "value": 57
+ },
+ "__t": "c"
+ },
+ "$version": 3
+}
+```
+
+To learn more, see [Tutorial: Create and connect a client application to your Azure IoT Central application](tutorial-connect-device.md).
+
+IoT Central expects a response from the device to writable property updates. The response message should include the `ac` and `av` fields. The `ad` field is optional. See the following snippets for examples.
`ac` is a numeric field that uses the values in the following table:
IoT Central expects a response from the device to writeable property updates. Th
`ad` is an optional string description.
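+As a sketch of how a device might acknowledge the component-scoped `targetTemperature` update shown earlier, assuming the Python `azure-iot-device` SDK and a hypothetical connection string:
+
+```python
+from azure.iot.device import IoTHubDeviceClient
+
+# Hypothetical connection string, for illustration only.
+client = IoTHubDeviceClient.create_from_connection_string("<device-connection-string>")
+client.connect()
+
+# Acknowledge the desired property update; __t marks the component.
+client.patch_twin_reported_properties({
+    "thermostat2": {
+        "__t": "c",
+        "targetTemperature": {
+            "value": 57,        # value the device applied
+            "ac": 200,          # status code
+            "av": 3,            # $version of the desired property update
+            "ad": "completed"   # optional description
+        }
+    }
+})
+client.disconnect()
+```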
-The following snippet from a device model shows the definition of a writeable `string` property type:
+The following snippet from a device model shows the definition of a writable `string` property type:
```json {
The device should send the following JSON payload to IoT Central after it proces
} ```
-The following snippet from a device model shows the definition of a writeable `Enum` property type:
+The following snippet from a device model shows the definition of a writable `Enum` property type:
```json {
The device should send the following JSON payload to IoT Central after it proces
## Commands
+If the command is defined in a component, the name of the command the device receives includes the component name. For example, if the command is called `getMaxMinReport` and the component is called `thermostat2`, the device receives a request to execute a command called `thermostat2*getMaxMinReport`.
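+As a sketch, assuming the Python `azure-iot-device` SDK and a hypothetical connection string, a device could dispatch on the component-prefixed command name like this:
+
+```python
+from azure.iot.device import IoTHubDeviceClient, MethodResponse
+
+# Hypothetical connection string, for illustration only.
+client = IoTHubDeviceClient.create_from_connection_string("<device-connection-string>")
+client.connect()
+
+def handle_method_request(request):
+    # Commands defined in a component arrive as <component>*<command>.
+    if request.name == "thermostat2*getMaxMinReport":
+        payload = {"maxTemp": 38.7, "minTemp": 17.2}  # illustrative report
+        response = MethodResponse.create_from_method_request(request, 200, payload)
+    else:
+        response = MethodResponse.create_from_method_request(request, 404, None)
+    client.send_method_response(response)
+
+client.on_method_request_received = handle_method_request
+```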
+ The following snippet from a device model shows the definition of a command that has no parameters and that doesn't expect the device to return anything: ```json
iot-central Howto Configure Rules Advanced https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-configure-rules-advanced.md
Use this action to update cloud property values for a specific device.
### Update device properties
-Use this action to update writeable property values for a specific device.
+Use this action to update writable property values for a specific device.
| Field | Description | | -- | -- | | Application | Choose from your list of IoT Central applications. | | Device | The unique ID of the device to delete. | | Device Template | Choose from the list of device templates in your IoT Central application. |
-| Writeable properties | After you choose a device template, a field is added for each writeable property defined in the template. |
+| Writable properties | After you choose a device template, a field is added for each writable property defined in the template. |
## Next steps
iot-central Howto Manage Devices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-manage-devices.md
To delete either a real or simulated device from your Azure IoT Central applicat
## Change a property
-Cloud properties are the device metadata associated with the device, such as city and serial number. Cloud properties only exist in the IoT Central application and aren't synchronized to your devices. Writeable properties control the behavior of a device and let you set the state of a device remotely, for example by setting the target temperature of a thermostat device. Device properties are set by the device and are read-only within IoT Central. You can view and update properties on the **Device Details** views for your device.
+Cloud properties are the device metadata associated with the device, such as city and serial number. Cloud properties only exist in the IoT Central application and aren't synchronized to your devices. Writable properties control the behavior of a device and let you set the state of a device remotely, for example by setting the target temperature of a thermostat device. Device properties are set by the device and are read-only within IoT Central. You can view and update properties on the **Device Details** views for your device.
1. Choose **Devices** on the left pane. 1. Choose the device template of the device whose properties you want to change and select the target device.
-1. Choose the view that contains properties for your device, this view enables you to input values and select **Save** at the top of the page. Here you see the properties your device has and their current values. Cloud properties and writeable properties have editable fields, while device properties are read-only. For writeable properties, you can see their sync status at the bottom of the field.
+1. Choose the view that contains properties for your device. This view enables you to input values and select **Save** at the top of the page. Here you see the properties your device has and their current values. Cloud properties and writable properties have editable fields, while device properties are read-only. For writable properties, you can see their sync status at the bottom of the field.
1. Modify the properties to the values you need. You can modify multiple properties at a time and update them all at the same time.
-1. Choose **Save**. If you saved writeable properties, the values are sent to your device. When the device confirms the change for the writeable property, the status returns back to **synced**. If you saved a cloud property, the value is updated.
+1. Choose **Save**. If you saved writable properties, the values are sent to your device. When the device confirms the change for the writable property, the status returns back to **synced**. If you saved a cloud property, the value is updated.
## Next steps
iot-central Howto Use Properties https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-use-properties.md
The following table shows the configuration settings for a property capability.
| Capability type | Property. | | Semantic type | The semantic type of the property, such as temperature, state, or event. The choice of semantic type determines which of the following fields are available. | | Schema | The property data type, such as double, string, or vector. The available choices are determined by the semantic type. Schema isn't available for the event and state semantic types. |
-| Writable | If the property isn't writeable, the device can report property values to Azure IoT Central. If the property is writeable, the device can report property values to Azure IoT Central. Then Azure IoT Central can send property updates to the device. |
+| Writable | If the property isn't writable, the device can report property values to Azure IoT Central. If the property is writable, the device can report property values to Azure IoT Central. Then Azure IoT Central can send property updates to the device. |
| Severity | Only available for the event semantic type. The severities are **Error**, **Information**, or **Warning**. | | State values | Only available for the state semantic type. Define the possible state values, each of which has display name, name, enumeration type, and value. | | Unit | A unit for the property value, such as **mph**, **%**, or **&deg;C**. |
This example shows two properties. These properties relate to the property defin
* `@type` specifies the type of capability: `Property`. The previous example also shows the semantic type `Temperature` for both properties. * `name` for the property. * `schema` specifies the data type for the property. This value can be a primitive type, such as double, integer, Boolean, or string. Complex object types and maps are also supported.
-* `writable` By default, properties are read-only. You can mark a property as writeable by using this field.
+* `writable` By default, properties are read-only. You can mark a property as writable by using this field.
Optional fields, such as display name and description, let you add more details to the interface and capabilities.
The following snippet from a device model shows the definition of a writable pro
} ```
-To define and handle the writeable properties your device responds to, you can use the following code:
+To define and handle the writable properties your device responds to, you can use the following code:
``` javascript hubClient.getTwin((err, twin) => {
The response message should include the `ac` and `av` fields. The `ad` field is
For more information on device twins, see [Configure your devices from a back-end service](../../iot-hub/tutorial-device-twins.md).
-When the operator sets a writeable property in the Azure IoT Central application, the application uses a device twin desired property to send the value to the device. The device then responds by using a device twin reported property. When Azure IoT Central receives the reported property value, it updates the property view with a status of **Accepted**.
+When the operator sets a writable property in the Azure IoT Central application, the application uses a device twin desired property to send the value to the device. The device then responds by using a device twin reported property. When Azure IoT Central receives the reported property value, it updates the property view with a status of **Accepted**.
The following view shows the writable properties. When you enter the value and select **Save**, the initial status is **Pending**. When the device accepts the change, the status changes to **Accepted**.
iot-central Overview Iot Central https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/overview-iot-central.md
The web UI lets you monitor device conditions, create rules, and manage millions
This article outlines, for IoT Central: -- The typical personas associated with a project.
+- The typical user roles associated with a project.
- How to create your application. - How to connect your devices to your application - How to manage your application. - Azure IoT Edge capabilities in IoT Central. - How to connect your Azure IoT Edge runtime powered devices to your application.
-## Personas
+## User roles
-The IoT Central documentation refers to four personas who interact with an IoT Central application:
+The IoT Central documentation refers to four user roles that interact with an IoT Central application:
- A _solution builder_ is responsible for [creating an application](quick-deploy-iot-central.md), [configuring rules and actions](quick-configure-rules.md), [defining integrations with other services](howto-export-data.md), and further customizing the application for operators and device developers. - An _operator_ [manages the devices](howto-manage-devices.md) connected to the application.
iot-central Tutorial Connect Device https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/tutorial-connect-device.md
zone_pivot_groups: programming-languages-set-twenty-six
*This article applies to solution builders and device developers.*
-This tutorial shows you how, as a device developer, to connect a client application to your Azure IoT Central application. The application simulates the behavior of a thermostat device. When the application connects to IoT Central, it sends the model ID of the thermostat device model. IoT Central uses the model ID to retrieve the device model and create a device template for you. You add customizations and views to the device template to enable an operator to interact with a device.
+This tutorial shows you how, as a device developer, to connect a client application to your Azure IoT Central application. The application simulates the behavior of a temperature controller device. When the application connects to IoT Central, it sends the model ID of the temperature controller device model. IoT Central uses the model ID to retrieve the device model and create a device template for you. You add customizations and views to the device template to enable an operator to interact with a device.
In this tutorial, you learn how to:
As a device developer, you can use the **Raw data** view to examine the raw data
:::image type="content" source="media/tutorial-connect-device/raw-data.png" alt-text="The raw data view":::
-On this view, you can select the columns to display and set a time range to view. The **Unmodeled data** column shows data from the device that doesn't match any property or telemetry definitions in the device template.
+On this view, you can select the columns to display and set a time range to view. The **Unmodeled data** column shows device data that doesn't match any property or telemetry definitions in the device template.
## Clean up resources
iot-central Tutorial Define Gateway Device Type https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/tutorial-define-gateway-device-type.md
In this tutorial, you create a **Smart Building** gateway device template. A **S
As well as enabling downstream devices to communicate with your IoT Central application, a gateway device can also: * Send its own telemetry, such as temperature.
-* Respond to writeable property updates made by an operator. For example, an operator could changes the telemetry send interval.
+* Respond to writable property updates made by an operator. For example, an operator could change the telemetry send interval.
* Respond to commands, such as rebooting the device. > [!div class="checklist"]
iot-hub Iot Hub Devguide File Upload https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-devguide-file-upload.md
To use the file upload functionality, you must first link an Azure Storage accou
The [Upload files from your device to the cloud with IoT Hub](iot-hub-csharp-csharp-file-upload.md) how-to guides provide a complete walkthrough of the file upload process. These how-to guides show you how to use the Azure portal to associate a storage account with an IoT hub. > [!NOTE]
-> The [Azure IoT SDKs](iot-hub-devguide-sdks.md) automatically handle retrieving the SAS URI, uploading the file, and notifying IoT Hub of a completed upload.
+> The [Azure IoT SDKs](iot-hub-devguide-sdks.md) automatically handle retrieving the shared access signature URI, uploading the file, and notifying IoT Hub of a completed upload. If a firewall blocks access to the Blob Storage endpoint but allows access to the IoT Hub endpoint, the file upload process fails and shows the following error for the IoT C# device SDK:
+>
+> `> System.Net.Http.HttpRequestException: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond`
+>
+> For the file upload feature to work, access to both the IoT Hub endpoint and the Blob Storage endpoint must be available to the device.
+>
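+For illustration, the following sketch, assuming the Python `azure-iot-device` and `azure-storage-blob` packages and a hypothetical file name, walks through the three steps the SDKs perform: request the SAS information, upload the blob, and notify IoT Hub of the result:
+
+```python
+from azure.iot.device import IoTHubDeviceClient
+from azure.storage.blob import BlobClient
+
+# Hypothetical connection string and file name, for illustration only.
+client = IoTHubDeviceClient.create_from_connection_string("<device-connection-string>")
+client.connect()
+
+# 1. Ask IoT Hub for the storage (SAS) information for this blob.
+info = client.get_storage_info_for_blob("sample.txt")
+sas_url = "https://{}/{}/{}{}".format(
+    info["hostName"], info["containerName"], info["blobName"], info["sasToken"]
+)
+
+# 2. Upload the file to the linked storage account; this step needs network
+#    access to the Blob Storage endpoint.
+with open("sample.txt", "rb") as data:
+    BlobClient.from_blob_url(sas_url).upload_blob(data, overwrite=True)
+
+# 3. Notify IoT Hub that the upload finished so it can raise the file upload notification.
+client.notify_blob_upload_status(info["correlationId"], True, 200, "upload succeeded")
+client.disconnect()
+```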
+ ## Initialize a file upload IoT Hub has an endpoint specifically for devices to request a SAS URI for storage to upload a file. To start the file upload process, the device sends a POST request to `{iot hub}.azure-devices.net/devices/{deviceId}/files` with the following JSON body:
Now you've learned how to upload files from devices using IoT Hub, you may be in
To try out some of the concepts described in this article, see the following IoT Hub tutorial:
-* [How to upload files from devices to the cloud with IoT Hub](iot-hub-csharp-csharp-file-upload.md)
+* [How to upload files from devices to the cloud with IoT Hub](iot-hub-csharp-csharp-file-upload.md)
iot-pnp Tutorial Configure Tsi https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-pnp/tutorial-configure-tsi.md
Title: Tutorial - Use Azure Time Series Insights to store and analyze your Azure IoT Plug and Play device telemetry
+ Title: Tutorial - Use Azure Time Series Insights to store and analyze your Azure IoT Plug and Play device telemetry
description: Tutorial - Set up a Time Series Insights environment and connect your IoT hub to view and analyze telemetry from your IoT Plug and Play devices.--+++ Last updated 10/14/2020
In this tutorial, you
> * Use the [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl) sample model files that you used for the temperature controller and thermostat devices. > [!NOTE]
-> This integration between Time Series Insights and IoT Plug and Play is in preview. The way that DTDL device models map to the Time Series Insights Time Series Model might change.
+> This integration between Time Series Insights and IoT Plug and Play is in preview. The way that DTDL device models map to the Time Series Insights Time Series Model might change.
## Prerequisites
Next, you translate your DTDL device model to the asset model in Azure Time Seri
### Define your types
-You can begin ingesting data into Azure Time Series Insights Gen2 without having predefined a model. When telemetry arrives, Time Series Insights attempts to automatically resolve time series instances based on your Time Series ID property values. All instances are assigned the *default type*. You need to manually create a new type to correctly categorize your instances.
+You can begin ingesting data into Azure Time Series Insights Gen2 without having predefined a model. When telemetry arrives, Time Series Insights attempts to automatically resolve time series instances based on your Time Series ID property values. All instances are assigned the *default type*. You need to manually create a new type to correctly categorize your instances.
The following details outline the simplest method to synchronize your device DTDL models with your Time Series Model types:
The following details outline the simplest method to synchronize your device DTD
|--||-| | `@id` | `id` | `dtmi:com:example:TemperatureController;1` | | `displayName` | `name` | `Temperature Controller` |
-| `description` | `description` | `Device with two thermostats and remote reboot.` |
+| `description` | `description` | `Device with two thermostats and remote reboot.` |
|`contents` (array)| `variables` (object) | See the following example. ![Screenshot showing D T D L to Time Series Model type.](./media/tutorial-configure-tsi/DTDL-to-TSM-Type.png)
Open a text editor and save the following JSON to your local drive.
"kind": "numeric", "value": { "tsx": "coalesce($event.workingSet.Long, toLong($event.workingSet.Double))"
- },
+ },
"aggregation": { "tsx": "avg($value)" }
key-vault Soft Delete Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/general/soft-delete-overview.md
Last updated 12/15/2020
> You must enable soft-delete on your key vaults immediately. The ability to opt out of soft-delete will be deprecated soon. See full details [here](soft-delete-change.md) > [!IMPORTANT]
-> Soft-deleted vault triggers delete settings for integrated with Key Vault services i.e. Azure RBAC roles assignments, Event Grid subscriptions, Azure Monitor diagnostics settings. After recovery of soft-deleted Key Vault settings for integrated services will need to be manually recreated.
+> Soft-deleting a vault also triggers the deletion of settings for services integrated with Key Vault, such as Azure RBAC role assignments and Event Grid subscriptions. After recovery of a soft-deleted key vault, the settings for integrated services need to be manually recreated.
Key Vault's soft-delete feature allows recovery of the deleted vaults and deleted key vault objects (for example, keys, secrets, certificates), known as soft-delete. Specifically, we address the following scenarios: This safeguard offer the following protections:
The following two guides offer the primary usage scenarios for using soft-delete
- [How to use Key Vault soft-delete with Portal](./key-vault-recovery.md?tabs=azure-portal) - [How to use Key Vault soft-delete with PowerShell](./key-vault-recovery.md) -- [How to use Key Vault soft-delete with CLI](./key-vault-recovery.md)
+- [How to use Key Vault soft-delete with CLI](./key-vault-recovery.md)
lighthouse Publish Managed Services Offers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lighthouse/how-to/publish-managed-services-offers.md
Title: Publish a Managed Service offer to Azure Marketplace description: Learn how to publish a Managed Service offer that onboards customers to Azure Lighthouse. Previously updated : 02/17/2021 Last updated : 03/31/2021
The following table can help determine whether to onboard customers by publishin
|Requires customer acceptance in Azure portal |Yes |No |
|Can use automation to onboard multiple subscriptions, resource groups, or customers |No |Yes |
|Immediate access to new built-in roles and Azure Lighthouse features |Not always (generally available after some delay) |Yes |
+|Customers can review and accept updated offers in the Azure portal | Yes | No |
> [!NOTE] > Managed Service offers may not be available in Azure Government and other national clouds.
After a customer adds your offer, they'll be able to [delegate one or more speci
Once the customer delegates a subscription (or one or more resource groups within a subscription), the **Microsoft.ManagedServices** resource provider will be registered for that subscription, and users in your tenant will be able to access the delegated resources according to the authorizations in your offer.
+If you publish an updated version of your offer, the customer can [review the changes in the Azure portal and accept the new version](view-manage-service-providers.md#update-service-provider-offers).
+ ## Next steps - Learn about the [Commercial Marketplace](../../marketplace/overview.md).
logic-apps Logic Apps Limits And Config https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/logic-apps-limits-and-config.md
ms.suite: integration Previously updated : 03/18/2021 Last updated : 04/05/2021 # Limits and configuration information for Azure Logic Apps
Here are the limits for a single logic app run:
| Until timeout | - Default: PT1H (1 hour) | The maximum amount of time that the "Until" loop can run before exiting, specified in [ISO 8601 format](https://en.wikipedia.org/wiki/ISO_8601). The timeout value is evaluated for each loop cycle. If any action in the loop takes longer than the timeout limit, the current cycle doesn't stop. However, the next cycle doesn't start because the limit condition isn't met. <p><p>To change this limit, in the "Until" loop shape, select **Change limits**, and specify the value for the **Timeout** property. | ||||
+<a name="concurrency-debatching"></a>
+ ### Concurrency and debatching | Name | Limit | Notes | | - | -- | -- |
-| Trigger concurrency | With concurrency off: Unlimited <p><p>With concurrency on, which you can't undo after enabling: <p><p>- Default: 25 <br>- Min: 1 <br>- Max: 50 | This limit is the maximum number of logic app instances that can run at the same time, or in parallel. <p><p>**Note**: When concurrency is turned on, the SplitOn limit is reduced to 100 items for [debatching arrays](../logic-apps/logic-apps-workflow-actions-triggers.md#split-on-debatch). <p><p>To change this limit, see [Change trigger concurrency limit](../logic-apps/logic-apps-workflow-actions-triggers.md#change-trigger-concurrency) or [Trigger instances sequentially](../logic-apps/logic-apps-workflow-actions-triggers.md#sequential-trigger). |
+| Trigger concurrency | With concurrency off: Unlimited <p><p>With concurrency on, which you can't undo after enabling: <p><p>- Default: 25 <br>- Min: 1 <br>- Max: 100 | This limit is the maximum number of logic app instances that can run at the same time, or in parallel. <p><p>**Note**: When concurrency is turned on, the SplitOn limit is reduced to 100 items for [debatching arrays](../logic-apps/logic-apps-workflow-actions-triggers.md#split-on-debatch). <p><p>To change this limit, see [Change trigger concurrency limit](../logic-apps/logic-apps-workflow-actions-triggers.md#change-trigger-concurrency) or [Trigger instances sequentially](../logic-apps/logic-apps-workflow-actions-triggers.md#sequential-trigger). |
| Maximum waiting runs | With concurrency off: <p><p>- Min: 1 <br>- Max: 50 <p><p>With concurrency on: <p><p>- Min: 10 plus the number of concurrent runs (trigger concurrency) <br>- Max: 100 | This limit is the maximum number of logic app instances that can wait to run when your logic app is already running the maximum concurrent instances. <p><p>To change this limit, see [Change waiting runs limit](../logic-apps/logic-apps-workflow-actions-triggers.md#change-waiting-runs). | | SplitOn items | With concurrency off: 100,000 <p><p>With concurrency on: 100 | For triggers that return an array, you can specify an expression that uses a 'SplitOn' property that [splits or debatches array items into multiple workflow instances](../logic-apps/logic-apps-workflow-actions-triggers.md#split-on-debatch) for processing, rather than use a "Foreach" loop. This expression references the array to use for creating and running a workflow instance for each array item. <p><p>**Note**: When concurrency is turned on, the SplitOn limit is reduced to 100 items. | ||||
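For reference, the trigger concurrency limit above corresponds to the `runtimeConfiguration.concurrency.runs` property in the trigger definition. The following sketch uses Python only to emit that JSON shape; the recurrence trigger and its interval are placeholder values, not part of the limits table.

```python
import json

# Placeholder recurrence trigger with concurrency control turned on.
trigger_definition = {
    "Recurrence": {
        "type": "Recurrence",
        "recurrence": {"frequency": "Minute", "interval": 3},
        "runtimeConfiguration": {
            # Keep this value within the trigger concurrency limits listed above.
            "concurrency": {"runs": 10}
        },
    }
}

print(json.dumps(trigger_definition, indent=2))
```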
logic-apps Logic Apps Workflow Actions Triggers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/logic-apps-workflow-actions-triggers.md
ms.suite: integration Previously updated : 09/22/2020 Last updated : 04/05/2021
By default, logic app workflow instances all run at the same time (concurrently
When you turn on the trigger's concurrency control, trigger instances run in parallel up to the [default limit](../logic-apps/logic-apps-limits-and-config.md#looping-debatching-limits). To change this default concurrency limit, you can use either the code view editor or Logic Apps Designer because changing the concurrency setting through the designer adds or updates the `runtimeConfiguration.concurrency.runs` property in the underlying trigger definition and vice versa. This property controls the maximum number of new workflow instances that can run in parallel.
-Here are some considerations for when you want to enable concurrency on a trigger:
-
-* When concurrency is enabled, the [SplitOn limit](../logic-apps/logic-apps-limits-and-config.md#looping-debatching-limits) is significantly reduced for [debatching arrays](#split-on-debatch). If the number of items exceeds this limit, the SplitOn capability is disabled.
+Here are some considerations to review before you enable concurrency on a trigger:
* You can't disable concurrency after you enable the concurrency control.
+* When concurrency is enabled, the [SplitOn limit](../logic-apps/logic-apps-limits-and-config.md#looping-debatching-limits) is significantly reduced for [debatching arrays](#split-on-debatch). If the number of items exceeds this limit, the SplitOn capability is disabled.
+* When concurrency is enabled, a long-running logic app instance might cause new logic app instances to enter a waiting state. This state prevents Azure Logic Apps from creating new instances and happens even when the number of concurrent runs is less than the specified maximum number of concurrent runs.
+* To interrupt this state, cancel the earliest instances that are *still running*.
Here are some considerations for when you want to enable concurrency on a trigge
#### Edit in code view
-In the underlying trigger definition, add the `runtimeConfiguration.concurrency.runs` property, which can have a value that ranges from `1` to `50`.
+In the underlying trigger definition, add the `runtimeConfiguration.concurrency.runs` property, and set the value based on the [trigger concurrency limits](logic-apps-limits-and-config.md#concurrency-debatching). To run your workflow sequentially, set the property value to `1`.
-Here is an example that limits concurrent runs to 10 instances:
+This example limits concurrent runs to 10 instances:
```json "<trigger-name>": {
machine-learning How To Data Prep Synapse Spark Pool https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-data-prep-synapse-spark-pool.md
df = spark.read.option("header", "true").csv("wasbs://demo@dprepdata.blob.core.w
The following code demonstrates how to read data in from **Azure Data Lake Storage Generation 1 (ADLS Gen 1)** with your service principal credentials. ```python
+%%synapse
# setup service principal which has access to the data
sc._jsc.hadoopConfiguration().set("fs.adl.account.<storage account name>.oauth2.access.token.provider.type","ClientCredential")
sc._jsc.hadoopConfiguration().set("fs.adl.account.<storage account name>.oauth2.
sc._jsc.hadoopConfiguration().set("fs.adl.account.<storage account name>.oauth2.credential", "<client secret>")
sc._jsc.hadoopConfiguration().set("fs.adl.account.<storage account name>.oauth2.refresh.url",
-https://login.microsoftonline.com/<tenant id>/oauth2/token)
+"https://login.microsoftonline.com/<tenant id>/oauth2/token")
df = spark.read.csv("adl://<storage account name>.azuredatalakestore.net/<path>")
df = spark.read.csv("adl://<storage account name>.azuredatalakestore.net/<path>"
The following code demonstrates how to read data in from **Azure Data Lake Storage Generation 2 (ADLS Gen 2)** with your service principal credentials. ```python
+%%synapse
+
# setup service principal which has access to the data
sc._jsc.hadoopConfiguration().set("fs.azure.account.auth.type.<storage account name>.dfs.core.windows.net","OAuth")
sc._jsc.hadoopConfiguration().set("fs.azure.account.oauth.provider.type.<storage account name>.dfs.core.windows.net", "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider")
sc._jsc.hadoopConfiguration().set("fs.azure.account.oauth2.client.id.<storage account name>.dfs.core.windows.net", "<client id>")
sc._jsc.hadoopConfiguration().set("fs.azure.account.oauth2.client.secret.<storage account name>.dfs.core.windows.net", "<client secret>")
sc._jsc.hadoopConfiguration().set("fs.azure.account.oauth2.client.endpoint.<storage account name>.dfs.core.windows.net",
-https://login.microsoftonline.com/<tenant id>/oauth2/token)
-
+"https://login.microsoftonline.com/<tenant id>/oauth2/token")
df = spark.read.csv("abfss://<container name>@<storage account>.dfs.core.windows.net/<path>")
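
# Illustrative continuation (an assumption, not taken from the article): once the CSV
# is loaded, light preparation can run in the same %%synapse cell before the data is
# handed to training. The column names below are placeholders.
df = df.dropna()
df = df.select("feature1", "feature2", "label")
df.show(10)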
input1 = train_ds.as_mount()
```
-## Example notebook
+## Example notebooks
+
+Once your data is prepared, learn how to [leverage a Synapse Spark cluster as a compute target for model training](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/azure-synapse/spark_job_on_synapse_spark_pool.ipynb).
-See this [end to end notebook](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/azure-synapse/spark_session_on_synapse_spark_pool.ipynb) for a detailed code example of how to perform data preparation and model training from a single notebook with Azure Synapse Analytics and Azure Machine Learning.
+See this [example notebook](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/azure-synapse/spark_session_on_synapse_spark_pool.ipynb) for additional concepts and demonstrations of the Azure Synapse Analytics and Azure Machine Learning integration capabilities.
## Next steps
mariadb Concepts Planned Maintenance Notification https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mariadb/concepts-planned-maintenance-notification.md
Title: Planned maintenance notification - Azure Database for MariaDB description: This article describes the Planned maintenance notification feature in Azure Database for MariaDB--++ Last updated 10/21/2020
mariadb Howto Auto Grow Storage Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mariadb/howto-auto-grow-storage-cli.md
Title: Auto grow storage - Azure CLI - Azure Database for MariaDB description: This article describes how you can enable auto grow storage using the Azure CLI in Azure Database for MariaDB.--++ Last updated 3/18/2020
mariadb Howto Auto Grow Storage Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mariadb/howto-auto-grow-storage-portal.md
Title: Auto grow storage - Azure portal - Azure Database for MariaDB description: This article describes how you can enable auto grow storage for Azure Database for MariaDB using Azure portal--++ Last updated 3/18/2020
marketplace Azure Vm Create https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-vm-create.md
To enable a test drive, select the **Enable a test drive** check box. You will c
## Configure customer leads management
-When you're publishing an offer to the commercial marketplace with Partner Center, connect it to your Customer Relationship Management (CRM) system. This lets you receive customer contact information as soon as someone expresses interest in or uses your product. Connecting to a CRM is required if you want to enable a test drive (see the preceding section). Otherwise, connecting to a CRM is optional.
-
-1. Under **Customer leads**, select the **Connect** link.
-1. In the **Connection details** dialog box, select a lead destination.
-1. Complete the fields that appear. For detailed steps, see the following articles:
-
- - [Configure your offer to send leads to the Azure table](./partner-center-portal/commercial-marketplace-lead-management-instructions-azure-table.md#configure-your-offer-to-send-leads-to-the-azure-table)
- - [Configure your offer to send leads to Dynamics 365 Customer Engagement](./partner-center-portal/commercial-marketplace-lead-management-instructions-dynamics.md#configure-your-offer-to-send-leads-to-dynamics-365-customer-engagement) (formerly Dynamics CRM Online)
- - [Configure your offer to send leads to HTTPS endpoint](./partner-center-portal/commercial-marketplace-lead-management-instructions-https.md#configure-your-offer-to-send-leads-to-the-https-endpoint)
- - [Configure your offer to send leads to Marketo](./partner-center-portal/commercial-marketplace-lead-management-instructions-marketo.md#configure-your-offer-to-send-leads-to-marketo)
- - [Configure your offer to send leads to Salesforce](./partner-center-portal/commercial-marketplace-lead-management-instructions-salesforce.md#configure-your-offer-to-send-leads-to-salesforce)
-
-1. To validate the configuration you provided, select the **Validate** link.
-1. Select **Connect**.
Select **Save draft** before continuing to the next tab in the left-nav menu, **Properties**.
marketplace Create New Saas Offer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/create-new-saas-offer.md
On the **Offer setup** tab, under **Setup details**, youΓÇÖll choose whether to
+ To provide a 30-day free trial, select **Free trial**, and then in the **Trial URL** box that appears, enter the URL (beginning with *http* or *https*) where customers can access your free trial through [one-click authentication by using Azure Active Directory (Azure AD)](azure-ad-saas.md). For example, `https://contoso.com/trial/saas-app`. + To have potential customers contact you to purchase your offer, select **Contact me**.
-### Enable a test drive (optional)
+## Enable a test drive (optional)
A test drive is a great way to showcase your offer to potential customers by giving them access to a preconfigured environment for a fixed number of hours. Offering a test drive results in an increased conversion rate and generates highly qualified leads. To Learn more about test drives, see [What is a test drive?](./what-is-test-drive.md).
A test drive is a great way to showcase your offer to potential customers by giv
1. Under **Test drive**, select the **Enable a test drive** check box. 1. Select the test drive type from the list that appears.
-### Configure lead management
+## Configure lead management
Connect your customer relationship management (CRM) system with your commercial marketplace offer so you can receive customer contact information when a customer expresses interest or deploys your product. You can modify this connection at any time during or after you create the offer.

> [!NOTE]
> You must configure lead management if you're selling your offer through Microsoft or you selected the **Contact Me** listing option. For detailed guidance, see [Customer leads from your commercial marketplace offer](partner-center-portal/commercial-marketplace-get-customer-leads.md).
-#### To configure the connection details in Partner Center
+### Configure the connection details in Partner Center
1. Under **Customer leads**, select the **Connect** link. 1. In the **Connection details** dialog box, select a lead destination from the list.
Connect your customer relationship management (CRM) system with your commercial
1. To validate the configuration you provided, select the **Validate** link. 1. To close the dialog box, select **OK**.
+## Configure Microsoft 365 App integration
+
+You can enable [unified discovery and delivery](./plan-SaaS-offer.md) of your SaaS offer and any related Microsoft 365 App consumption clients by linking them.
+
+### Integrate with Microsoft API
+
+1. If your SaaS offer does not integrate with Microsoft Graph API, select **No**. Continue to Link published Microsoft 365 App consumption clients.
+
+1. If your SaaS offer integrates with Microsoft Graph API, select **Yes**, and then provide the Azure Active Directory App ID you have created and registered to integrate with Microsoft Graph API.
+
+### Link published Microsoft 365 App consumption clients
+
+1. If you do not have published Office add-ins, Teams apps, or SharePoint Framework solutions that work with your SaaS offer, select **No**.
+
+1. If you have published Office add-ins, Teams apps, or SharePoint Framework solutions that work with your SaaS offer, select **Yes**, and then select **+Add another AppSource link** to add new links.
+
+1. Provide a valid AppSource link.
+
+1. Continue adding links by selecting **+Add another AppSource link** and providing a valid AppSource link for each one.
+
+1. The Rank value indicates the order in which the linked products are shown on the listing page of the SaaS offer. To change the order, select and hold the = icon and move it up or down the list.
+
+1. To delete a linked product, select **Delete** in the product row.
++
+> [!IMPORTANT]
+> If you stop-sell a linked product, it won't be automatically unlinked from the SaaS offer. You must delete it from the list of linked products and resubmit the SaaS offer.
+
+
+ ## Next steps - [How to configure your SaaS offer properties](create-new-saas-offer-properties.md)
marketplace Marketplace Commercial Transaction Capabilities And Considerations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/marketplace-commercial-transaction-capabilities-and-considerations.md
description: This article describes pricing, billing, invoicing, and payout cons
Previously updated : 11/18/2020 Last updated : 04/06/2021
Either the publisher or Microsoft is responsible for managing software license t
### Contact me, free trial, and BYOL options
-Publishers can choose the _Contact me_ and _Free trial_, options for promotional and user acquisition purposes. For some offer types, publishers can choose the bring your own license (BYOL) option to enable customers to purchase a subscription to your offer using a license that theyΓÇÖve purchased directly from you. With these options, Microsoft doesn't participate directly in the publisher's software license transactions and there's no associated transaction fee.
+Publishers can choose the _Contact me_ and _Free trial_ options for promotional and user acquisition purposes. For some offer types, publishers can choose the _bring your own license_ (BYOL) option to enable customers to purchase a subscription to your offer using a license they've purchased directly from you. With these options, Microsoft doesn't participate directly in the publisher's software license transactions and there's no associated transaction fee, so publishers keep all of that revenue.
-Publishers are responsible for supporting all aspects of the software license transaction. This includes but is not limited to order, fulfillment, metering, billing, invoicing, payment, and collection. With the Contact me listing option, publishers keep 100% of publisher software licensing fees collected from the customer.
+With these options, publishers are responsible for supporting all aspects of the software license transaction. This includes but is not limited to order, fulfillment, metering, billing, invoicing, payment, and collection. With the Contact me listing option, publishers keep 100% of publisher software licensing fees collected from the customer.
### Transact publishing option
-Choosing to sell through Microsoft takes advantage of Microsoft commerce capabilities and provides an end-to-end experience from discovery and evaluation to purchase and implementation. An offer thatΓÇÖs transactable is one in which Microsoft facilitates the exchange of money for a software license on the publisherΓÇÖs behalf. Transactable offers are billed against an existing Microsoft subscription or a credit card, allowing Microsoft to host cloud marketplace transactions on behalf of the publisher.
+Choosing to sell through Microsoft takes advantage of Microsoft commerce capabilities and provides an end-to-end experience from discovery and evaluation to purchase and implementation. A _transactable_ offer is one in which Microsoft facilitates the exchange of money for a software license on the publisher's behalf. Transact offers are billed against an existing Microsoft subscription or credit card, allowing Microsoft to host cloud marketplace transactions on behalf of the publisher.
-You choose the transact option when you create a new offer in Partner Center. This option will show only if transact is available for your offer type.
+You choose the transact option when you create a new offer in Partner Center. This option will appear only if transact is available for your offer type.
## Transact overview
-When using the transact option, Microsoft enables the sale of third-party software and deployment of some offer types to the customer's Azure subscription. You the publisher must consider the billing of infrastructure fees and your own software licensing fees when selecting a pricing model for an offer.
+When using the transact option, Microsoft enables the sale of third-party software and deployment of some offer types to the customer's Azure subscription. You, the publisher, must consider the billing of infrastructure fees and your own software licensing fees when selecting a pricing model for an offer.
The transact publishing option is currently supported for the following offer types: -- Virtual machines-- Azure applications-- SaaS applications
+| Offer type | Billing cadence | Metered billing | Pricing model |
+| | - | - | - |
+| Azure Application<br>(Managed application) | Monthly | Yes | Usage-based |
+| Azure Virtual Machine | Monthly * | No | Usage-based, BYOL |
+| Software as a service (SaaS) | Monthly and annual | Yes | Flat rate, per user, usage-based. |
+|||||
+
+`*` Azure Virtual Machine offers support usage-based billing plans. These plans are billed monthly for hourly use of the subscription based on per core, per core size, or per market and core size usage.
+
+### Metered billing
+
+The _Marketplace metering service_ lets you specify pay-as-you-go (consumption-based) charges in addition to monthly or annual charges included in the contract (entitlement). You can charge usage costs for marketplace metering service dimensions that you specify such as bandwidth, tickets, or emails processed. For more information about metered billing for SaaS offers, see [Metered billing for SaaS using the commercial marketplace metering service](./partner-center-portal/saas-metered-billing.md). For more information about metered billing for Azure Application offers, see [Managed application metered billing](./partner-center-portal/azure-app-metered-billing.md).
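
To make the reporting flow concrete, here is a rough Python sketch of posting a single usage event, assuming the endpoint, API version, and field names described in the linked metering service documentation; the bearer token, resource ID, dimension, and plan ID are placeholders.

```python
import requests

# Placeholder usage event for a custom meter dimension defined in the offer's plan.
usage_event = {
    "resourceId": "<purchased-resource-guid>",      # identifies the customer's purchased resource
    "dimension": "emails",                          # custom meter dimension configured in the plan
    "quantity": 150,                                # units consumed during the reported hour
    "effectiveStartTime": "2021-04-06T00:00:00Z",   # start of the usage hour being reported
    "planId": "<plan-id>",
}

response = requests.post(
    "https://marketplaceapi.microsoft.com/api/usageEvent",
    params={"api-version": "2018-08-31"},
    headers={"Authorization": "Bearer <publisher-access-token>"},
    json=usage_event,
)
print(response.status_code, response.json())
```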
### Billing infrastructure costs

For **virtual machines** and **Azure applications**, Azure infrastructure usage fees are billed to the customer's Azure subscription. Infrastructure usage fees are priced and presented separately from the software provider's licensing fees on the customer's invoice.
-For **SaaS Apps**, you the publisher must account for Azure infrastructure usage fees and software licensing fees as a single cost item. It is represented as a flat fee to the customer. The Azure infrastructure usage is managed and billed to the publisher directly. Actual infrastructure usage fees are not seen by the customer. Publishers typically opt to bundle Azure infrastructure usage fees into their software license pricing. Software licensing fees aren't metered or based on user consumption.
+For **SaaS Apps**, the publisher must account for Azure infrastructure usage fees and software licensing fees as a single cost item. It is represented as a flat fee to the customer. The Azure infrastructure usage is managed and billed to the publisher directly. Actual infrastructure usage fees are not seen by the customer. Publishers typically opt to bundle Azure infrastructure usage fees into their software license pricing. Software licensing fees aren't metered or based on user consumption.
## Pricing models Depending on the transaction option used, subscription charges are as follows: -- **Get it now (Free)** ΓÇô No charge for software licenses. Customers are not charged Azure Marketplace fees for using a free offer. Free offers canΓÇÖt be converted to a paid offer. Customers must order a paid offer.-- **Bring your own license** (BYOL) ΓÇô Any applicable charges for software licenses are managed directly between the publisher and customer. Microsoft only passes through Azure infrastructure usage fees. If an offer is listed in the commercial marketplace, customers who obtain access or use of the offer outside of the commercial marketplace are not charged commercial marketplace fees.-- **Subscription pricing** ΓÇô Software license fees are presented as a monthly or annual, recurring subscription fee billed as a flat rate or per-seat. Recurrent subscription fees are not prorated for mid-term customer cancellations, or unused services. Recurrent subscription fees may be prorated if the customer upgrades or downgrades their subscription in the middle of the subscription term.-- **Usage-based pricing** ΓÇô For Azure Virtual Machine offers, customers are charged based on the extent of their use of the offer. For Virtual Machine Images, customers are charged an hourly Azure Marketplace fee, as set by publishers, for use of virtual machines deployed from the VM images. The hourly fee may be uniform or varied across virtual machine sizes. Partial hours are charged by the minute. Plans are billed monthly.-- **Metered pricing** ΓÇô For Azure Application offers and SaaS offers, publishers can use the [Marketplace metering service](./partner-center-portal/marketplace-metering-service-apis.md) to bill for consumption based on the meter dimensions they choose. For example, bandwidth, tickets, or emails processed. Publishers can define one or more meter dimensions for each plan. Publishers are responsible for tracking individual customersΓÇÖ usage, with each meter defined in the offer. Events should be reported to Microsoft within an hour. Microsoft charges customers based on the usage information reported by publishers for the applicable billing period.-- **Free trial** ΓÇô No charge for software licenses that range from 30 days up to six months, depending on the offer type. If publishers provide a free trial on multiple plans within the same offer, customers can switch to a free trial on another plan but the trial period does not restart. For virtual machine offers, customers are charged Azure infrastructure costs for using the offer during a trial period. Upon expiration of the trial period, customers are automatically charged for the last plan they tried based on standard rates unless they cancel before the end of the trial period.
+- **Get it now (Free)**: No charge for software licenses. Free offers can't be converted to a paid offer. Customers must order a paid offer.
+- **Bring your own license** (BYOL): If an offer is listed in the commercial marketplace, any applicable charges for software licenses are managed directly between the publisher and customer. Microsoft only charges applicable Azure infrastructure usage fees to the customerΓÇÖs Azure subscription account.
+- **Subscription pricing**: Software license fees are presented as a monthly or annual, recurring subscription fee billed as a flat rate or per-seat. Recurrent subscription fees are not prorated for mid-term customer cancellations, or unused services. Recurrent subscription fees may be prorated if the customer upgrades or downgrades their subscription in the middle of the subscription term.
+- **Usage-based pricing**: For Azure Virtual Machine offers, customers are charged based on the extent of their use of the offer. For Virtual Machine images, customers are charged an hourly Azure Marketplace fee, as set by the publisher, for use of virtual machines deployed from the VM images. The hourly fee may be uniform or varied across virtual machine sizes. Partial hours are charged by the minute. Plans are billed monthly.
+- **Metered pricing**: For Azure Application offers and SaaS offers, publishers can use the [Marketplace metering service](./partner-center-portal/marketplace-metering-service-apis.md) to bill for consumption based on the custom meter dimensions they configure. These changes are in addition to monthly or annual charges included in the contract (entitlement). Examples of custom meter dimensions are bandwidth, tickets, or emails processed. Publishers can define one or more metered dimensions for each plan but a maximum of 30 per offer. Publishers are responsible for tracking individual customer usage, with each meter defined in the offer. Events should be reported to Microsoft within an hour. Microsoft charges customers based on the usage information reported by publishers for the applicable billing period.
+- **Free trial**: No charge for software licenses that range from 30 days up to six months, depending on the offer type. If publishers provide a free trial on multiple plans within the same offer, customers can switch to a free trial on another plan, but the trial period does not restart. For virtual machine offers, customers are charged Azure infrastructure costs for using the offer during a trial period. Upon expiration of the trial period, customers are automatically charged for the last plan they tried based on standard rates unless they cancel before the end of the trial period.
> [!NOTE] > Offers that are billed according to consumption after a solution has been used are not eligible for refunds.
Publishers who want to change the usage fees associated with an offer, should fi
### Free, Contact me, and bring-your-own-license (BYOL) pricing
-When publishing an offer with the Get it now (Free), Contact me, or BYOL option, Microsoft does not play a role in facilitating the sales transaction for your software license fees. Like the list and free trial publishing options, the publisher keeps 100% of software license fees.
+When publishing an offer with the Get it now (Free), Contact me, or BYOL option, Microsoft does not play a role in facilitating the sales transaction for your software license fees. The publisher keeps 100% of the software license fees.
### Usage-based and subscription pricing
-When publishing an offer a a user-based or subscription transaction, Microsoft provides the technology and services to process software license purchases, returns, and chargebacks. In this scenario, the publisher authorizes Microsoft to act as an agent for these purposes. The publisher allows Microsoft to facilitate the software licensing transaction, while retaining their designation as the seller, provider, distributor, and licensor.
+When publishing an offer as a usage-based or subscription transaction, Microsoft provides the technology and services to process software license purchases, returns, and chargebacks. In this scenario, the publisher authorizes Microsoft to act as an agent for these purposes. The publisher allows Microsoft to facilitate the software licensing transaction. You, the publisher, retain your designation as the seller, provider, distributor, and licensor.
-Microsoft enables customers to order, license, and use your software, subject to the terms and conditions of both Microsoft's commercial marketplace and your end-user licensing agreement. You must provide your own end-user licensing agreement or select the [Standard Contract](./standard-contract.md) when creating the offer.
+Microsoft enables customers to order, license, and use your software, subject to the terms and conditions of both Microsoft's commercial marketplace and your end-user licensing agreement. You must either provide your own end-user licensing agreement or select the [Standard Contract](./standard-contract.md) when creating the offer.
### Free software trials
-For transact publishing scenarios, you can make a software license available free for 30 to 120 days, depending on the subscription. This discounting capability does not include the cost of Azure infrastructure usage driven by use of the partner solution.
-
-### Private offers
-
-In addition to using offer types and billing models to monetize an offer, you can transact a private offer, complete with negotiated, deal-specific pricing, or custom configurations. Private offers are supported by all three transact publishing options.
-
-This option allows higher or lower pricing than the publicly available offering. You can use private offers to discount or add a premium to an offer. You can make Private offers available to one or more customers by allowlisting their Azure subscription at the offer level.
-
-### Commercial marketplace service fees
-
-We charge a 20% standard store service fee when customers purchase your transact offer from the commercial marketplace. For details of this fee, see section 5c of the [Microsoft Publisher Agreement](https://go.microsoft.com/fwlink/?LinkID=699560).
-
-For certain transactable offers that you publish to the commercial marketplace, you may qualify for a reduced store service fee of 10%. For an offer to qualify, it must have been designated by Microsoft as Azure IP Co-sell incentivized. Eligibility must be met at least five business days before the end of each calendar month to receive the Reduced Marketplace Service Fee. Once eligibility is met, the reduced service fee is awarded to all transactions effective the first day of the following month until Azure IP Co-sell incentivized status is lost. For details about IP co-sell eligibility, see [Requirements for co-sell status](/legal/marketplace/certification-policies#3000-requirements-for-co-sell-status).
-
-The Reduced Marketplace Service Fee applies to Azure IP Co-sell incentivized SaaS, VMs, Managed apps, and any other qualified transactable IaaS solutions made available through the commercial marketplace. Paid SaaS offers associated with one Microsoft Teams app or at least two Microsoft 365 add-ins (Excel, PowerPoint, Word, Outlook, and SharePoint) and published to Microsoft AppSource also receive this discount.
+For transact publishing scenarios, you can make a software license available free for 30 to 120 days, depending on the subscription. Customers will be charged for applicable Azure infrastructure usage.
-### Examples
+### Examples of pricing and store fees
-**Usage-based**
+**Usage-based**
Usage-based pricing has the following cost structure:
-|Your license cost | $1.00 per hour |
+| **Your license cost** | **$1.00 per hour** |
|||
-|Azure usage cost (D1/1-Core) | $0.14 per hour |
-|*Customer is billed by Microsoft* | *$1.14 per hour* |
+| Azure usage cost (D1/1-Core) | $0.14 per hour |
+| _Customer is billed by Microsoft_ | _$1.14 per hour_ |
|| In this scenario, Microsoft bills $1.14 per hour for use of your published VM image.
-|Microsoft bills | $1.14 per hour |
+| **Microsoft bills** | **$1.14 per hour** |
|||
-|Microsoft pays you 80% of your license cost| $0.80 per hour |
-|Microsoft keeps 20% of your license cost | $0.20 per hour |
-|Microsoft keeps 100% of the Azure usage cost | $0.14 per hour |
+| Microsoft pays you 80% of your license cost | $0.80 per hour |
+| Microsoft keeps 20% of your license cost | $0.20 per hour |
+| Microsoft keeps 100% of the Azure usage cost | $0.14 per hour |
|| **Bring Your Own License (BYOL)** BYOL has the following cost structure:
-|Your license cost | License fee negotiated and billed by you |
+| **Your license cost** | **License fee negotiated and billed by you** |
||| |Azure usage cost (D1/1-Core) | $0.14 per hour |
-|*Customer is billed by Microsoft* | *$0.14 per hour* |
+| _Customer is billed by Microsoft_ | _$0.14 per hour_ |
|| In this scenario, Microsoft bills $0.14 per hour for use of your published VM image.
-|Microsoft bills | $0.14 per hour |
+| **Microsoft bills** | **$0.14 per hour** |
|||
-|Microsoft keeps the Azure usage cost | $0.14 per hour |
-|Microsoft keeps 0% of your license cost | $0.00 per hour |
+| Microsoft keeps the Azure usage cost | $0.14 per hour |
+| Microsoft keeps 0% of your license cost | $0.00 per hour |
|| **SaaS app subscription**
-This option must be configured to sell through Microsoft and can be priced at a flat rate or per user on a monthly or annual basis. If you enable the **Sell through Microsoft** option for a SaaS offer, you have the following cost structure:
+SaaS subscriptions can be priced at a flat rate or per user on a monthly or annual basis. If you enable the **Sell through Microsoft** option for a SaaS offer, you have the following cost structure:
-| Your license cost | $100.00 per month |
+| **Your license cost** | **$100.00 per month** |
|--||
-| Azure usage cost (D1/1-Core) | Billed directly to the publisher, not the customer |
-| *Customer is billed by Microsoft* | *$100.00 per month (publisher must account for any incurred or pass-through infrastructure costs in the license fee)* |
+| Azure usage cost (D1/1-Core) | Billed directly to the publisher, not the customer |
+| _Customer is billed by Microsoft_ | _$100.00 per month (publisher must account for any incurred or pass-through infrastructure costs in the license fee)_ |
||
-In this scenario, Microsoft bills $100.00 for your software license and pays out $80.00 to the publisher.
-
-In this scenario, Microsoft bills $100.00 for your software license and pays out $90.00 to the publisher:
+In this scenario, Microsoft bills $100.00 for your software license and pays out $80.00 or $90.00 to you depending on whether the offer qualifies for a reduced store service fee.
-|Microsoft bills | $100.00 per month |
+| **Microsoft bills** | **$100.00 per month** |
|||
-|Microsoft pays you 80% of your license cost <br> \* Microsoft pays you 90% of your license cost for any qualified SaaS apps | $80.00 per month <br> \* $90.00 per month |
-|Microsoft keeps 20% of your license cost <br> \* Microsoft keeps 10% of your license cost for any qualified SaaS apps. | $20.00 per month <br> \* $10.00 |
+| Microsoft pays you 80% of your license cost <br> \* Microsoft pays you 90% of your license cost for any qualified SaaS apps | $80.00 per month <br> \* $90.00 per month |
+| Microsoft keeps 20% of your license cost <br> \* Microsoft keeps 10% of your license cost for any qualified SaaS apps. | $20.00 per month <br> \* $10.00 |
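
Restating the arithmetic behind these examples, here is a minimal sketch (illustration only, not publisher tooling) that computes the payout for a given license fee and store service fee rate; the reduced 10% fee is described in the next section.

```python
def publisher_payout(license_fee: float, store_service_fee_rate: float = 0.20) -> float:
    """Return the amount paid out to the publisher after the store service fee."""
    return license_fee * (1 - store_service_fee_rate)

print(publisher_payout(100.00))        # 80.0 -> standard 20% store service fee
print(publisher_payout(100.00, 0.10))  # 90.0 -> reduced 10% fee for qualified offers
```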
+
+### Commercial marketplace service fees
+
+We charge a 20% standard store service fee when customers purchase your transact offer from the commercial marketplace. For details of this fee, see section 5c of the [Microsoft Publisher Agreement](https://go.microsoft.com/fwlink/?LinkID=699560).
+
+For certain transact offers that you publish to the commercial marketplace, you may qualify for a reduced store service fee of 10%. For an offer to qualify, it must have been designated by Microsoft as _Azure IP Co-sell incentivized_. Eligibility must be met at least five business days before the end of each calendar month to receive the Reduced Marketplace Service Fee. Once eligibility is met, the reduced service fee is awarded to all transactions effective the first day of the following month until _Azure IP Co-sell incentivized_ status is lost. For details about IP co-sell eligibility, see [Requirements for co-sell status](/legal/marketplace/certification-policies#3000-requirements-for-co-sell-status).
+
+The Reduced Marketplace Service Fee applies to Azure IP Co-sell incentivized SaaS, VMs, Managed apps, and any other qualified transactable IaaS solutions made available through the commercial marketplace. Paid SaaS offers associated with one Microsoft Teams app or at least two Microsoft 365 add-ins (Excel, PowerPoint, Word, Outlook, and SharePoint) and published to Microsoft AppSource can also qualify for this discount.
### Customer invoicing, payment, billing, and collections
-**Invoicing and payment** ΓÇô You can use the customer's preferred invoicing method to deliver subscription or PAYGO software license fees.
+**Invoicing and payment**: You can use the customer's preferred invoicing method to deliver subscription or [PAYGO](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go/) software license fees.
-**Enterprise Agreement** ΓÇô If the customer's preferred invoicing method is the Microsoft Enterprise Agreement, your software license fees will be billed using this invoicing method as an itemized cost, separate from any Azure-specific usage costs.
+**Enterprise Agreement**: If the customer's preferred invoicing method is the Microsoft Enterprise Agreement, your software license fees will be billed using this invoicing method as an itemized cost, separate from any Azure-specific usage costs.
-**Credit cards and monthly invoice** ΓÇô Customers can also pay using a credit card and a monthly invoice. In this case, your software license fees will be billed just like the Enterprise Agreement scenario, as an itemized cost, separate from any Azure-specific usage costs.
+**Credit cards and monthly invoice**: Customers can pay using a credit card and a monthly invoice. In this case, your software license fees will be billed just like the Enterprise Agreement scenario, as an itemized cost, separate from any Azure-specific usage costs.
-**Free credits and Azure Prepayment** ΓÇô Some customers elect to prepay Azure with Azure Prepayment (previously called monetary commitment) in the Enterprise Agreement or have been provided free credits for use with Azure. Although these credits can be used to pay for Azure usage, they can't be used to pay for publisher software license fees.
+**Free credits and monetary commitment**: Some customers choose to prepay Azure with a monetary commitment in the Enterprise Agreement or have been provided free credits to use for Azure usage. Although these credits can be used to pay for Azure usage, they can't be used to pay for publisher software license fees.
-**Billing and collections** ΓÇô Publisher software license billing is presented using the customer-selected method of invoicing and follows the invoicing timeline. Customers without an Enterprise Agreement in place are billed monthly for marketplace software licenses. Customers with an Enterprise Agreement are billed monthly via an invoice that is presented quarterly.
+**Billing and collections**: Publisher software license billing is presented using the customer-selected method of invoicing and follows the invoicing timeline. Customers without an Enterprise Agreement in place are billed monthly for marketplace software licenses. Customers with an Enterprise Agreement are billed monthly via an invoice that is presented quarterly.
-When subscription or Pay-as-You-Go pricing models are selected, Microsoft acts as the agent of the publisher and is responsible for all aspects of billing, payment, and collection.
+When subscription or Pay-as-You-Go (also called usage-based) pricing models are selected, Microsoft acts as the agent of the publisher and is responsible for all aspects of billing, payment, and collection.
### Publisher payout and reporting
Customers typically purchase using the Enterprise Agreement or a credit-card ena
#### Billing questions and support
-For more information and legal policies, see the [Microsoft Publisher Agreement](https://go.microsoft.com/fwlink/?LinkID=699560) (available in Partner Center).
-
-For help on billing questions, contact [commercial marketplace publisher support](https://aka.ms/marketplacepublishersupport).
+For more information and legal policies, see the [Microsoft Publisher Agreement](https://go.microsoft.com/fwlink/?LinkID=699560). For help with billing questions, contact [commercial marketplace publisher support](https://aka.ms/marketplacepublishersupport).
## Transact requirements
This section covers transact requirements for different offer types.
- A Microsoft account and financial information are required for the transact publishing option, regardless of the offer's pricing model. - Mandatory financial information includes payout account and tax profile.-- The publisher must live in a [supported country or region](sell-from-countries.md). For more information on setting up these accounts, see [Manage your commercial marketplace account in Partner Center](partner-center-portal/manage-account.md). ### Requirements for specific offer types
-The transact publishing option is only available for use with the following marketplace offer types:
+The ability to transact through Microsoft is available for the following commercial marketplace offer types only. This list provides the requirements for making these offer types transactable in the commercial marketplace.
+
+- **Azure application (solution template and managed application plans)**: In some cases, Azure infrastructure usage fees are passed to the customer separately from software license fees, but on the same billing statement. However, if you configure a managed app plan for ISV infrastructure charges, the Azure resources are billed to the publisher, and the customer receives a flat fee that includes the cost of infrastructure, software licenses, and management services.
+
+- **Azure Virtual Machine**: Select from free, BYOL, or usage-based pricing models. On the customer's Azure bill, Microsoft presents the publisher software license fees separately from the underlying Azure infrastructure fees. Azure infrastructure fees are driven by use of the publisher's software.
+
+- **SaaS application**: Must be a multitenant solution, use [Azure Active Directory](https://azure.microsoft.com/services/active-directory/) for authentication, and integrate with the [SaaS Fulfillment APIs](partner-center-portal/pc-saas-fulfillment-api-v2.md). Azure infrastructure usage is managed and billed directly to you (the publisher), so you must account for Azure infrastructure usage fees and software licensing fees as a single cost item. For detailed guidance, see [How to plan a SaaS offer for the commercial marketplace](plan-saas-offer.md#plans).
-- **Azure Virtual Machine** ΓÇô Select from free, bring-your-own-license, or usage-based pricing models and present as plans defined at the offer level. On the customer's Azure bill, Microsoft presents the publisher software license fees separately from the underlying Azure infrastructure fees. Azure infrastructure fees are driven by use of the publisher software.
+## Private plans
-- **Azure application: solution template or managed app** ΓÇô In some cases, Azure infrastructure usage fees are passed to the customer separately from software license fees, but on the same billing statement. However, if you configure a managed app offering for ISV infrastructure charges, the Azure resources are billed to the publisher, and the customer receives a flat fee that includes the cost of infrastructure, software licenses, and management services.
+You can create a private plan for an offer, complete with negotiated, deal-specific pricing, or custom configurations.
-- **SaaS application** - Must be a multitenant solution, use [Azure Active Directory](https://azure.microsoft.com/services/active-directory/) for authentication, and integrate with the [SaaS Fulfillment APIs](partner-center-portal/pc-saas-fulfillment-api-v2.md). Azure infrastructure usage is managed and billed directly to you (the partner), so you must account for Azure infrastructure usage fees and software licensing fees as a single cost item. For detailed guidance, see [Create a new SaaS offer in the commercial marketplace](./create-new-saas-offer.md).
+Private plans enable you to provide higher or lower pricing to specific customers than the publicly available offering. Private plans can be used to discount or add a premium to an offer. Private plans can be made available to one or more customers by listing their Azure subscription at the plan-level.
## Next steps -- Review the eligibility requirements in the publishing options by offer type section to finalize the selection and configuration of your offer. - Review the publishing patterns by online store for examples on how your solution maps to an offer type and configuration.
+- [Publishing guide by offer type](publisher-guide-by-offer-type.md).
marketplace Plan Saas Offer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/plan-saas-offer.md
Last updated 03/26/2021
# How to plan a SaaS offer for the commercial marketplace
-This article explains the different options and requirements for publishing software as a service (SaaS) offers to the Microsoft commercial marketplace. SaaS offers let you deliver and license software solutions to your customers via online subscriptions. As a SaaS publisher, you manage and pay for the infrastructure required to support your customers' use of your offer. This article will help you prepare your offer for publishing to the commercial marketplace with Partner Center.
+This article explains the different options and requirements for publishing software as a service (SaaS) offers to the Microsoft commercial marketplace. SaaS offers let you deliver and license software solutions to your customers via online subscriptions. As a SaaS publisher, you manage and pay for the infrastructure required to support your customers' use of your offer. This article will help you prepare your offer for publishing to the commercial marketplace with Partner Center.
## Listing options
If you choose to use the standard contract, you have the option to add universal
> [!NOTE] > After you publish an offer using the standard contract for the commercial marketplace, you cannot use your own custom terms and conditions. It is an "or" scenario. You either offer your solution under the standard contract or your own terms and conditions. If you want to modify the terms of the standard contract you can do so through Standard Contract Amendments. +
+## Microsoft 365 integration
+
+Integration with Microsoft 365 allows your SaaS offer to provide a connected experience across multiple Microsoft 365 App surfaces through related free add-ins like Teams apps, Office add-ins, and SharePoint Framework solutions. You can help your customers easily discover all facets of your E2E solution (web service + related add-ins) and deploy them within one process by providing the following information.
+ - If your SaaS offer integrates with Microsoft Graph, then provide the Azure Active Directory (AAD) App ID used by your SaaS offer for the integration. Administrators can review access permissions required for the proper functioning of your SaaS offer as set on the AAD App ID and grant access if advanced admin permission is needed at deployment time.
+
+ If you choose to sell your offer through Microsoft, then this is the same AAD App ID that you have registered to use on your landing page to get basic user information needed to complete customer subscription activation. For detailed guidance, see [Build the landing page for your transactable SaaS offer in the commercial marketplace](azure-ad-transactable-saas-landing-page.md).
+
+ - Provide a list of related add-ins that work with your SaaS offer you want to link. Customers will be able to discover your E2E solution on AppSource and administrators can deploy both the SaaS and all the related add-ins you have linked in the same process via Microsoft 365 admin center.
+
+ To link related add-ins, you need to provide the AppSource link of the add-in, which means the add-in must first be published to AppSource. Supported add-in types you can link are Teams apps, Office add-ins, and SharePoint Framework (SPFx) solutions. Each linked add-in must be unique for a SaaS offer.
+
+For linked products, a search on AppSource returns one result that includes both the SaaS offer and all linked add-ins. Customers can navigate between the product detail pages of the SaaS offer and the linked add-ins.
+IT admins can review and deploy both the SaaS and linked add-ins within the same process through an integrated and connected experience within the Microsoft 365 admin center. To learn more, see [Test and deploy Microsoft 365 Apps - Microsoft 365 admin](/microsoft-365/admin/manage/test-and-deploy-microsoft-365-apps).
+
+### Microsoft 365 integration support limitations
+Discovery as a single E2E solution is supported on AppSource in all cases; however, simplified deployment of the E2E solution as described above via the Microsoft 365 admin center is not supported for the following scenarios:
+
+ - The same add-in is linked to more than one SaaS offer.
+ - The SaaS offer is linked to add-ins, but it does not integrate with Microsoft Graph and no AAD App ID is provided.
+ - The SaaS offer is linked to add-ins, but AAD App ID provided for Microsoft Graph integration is shared across multiple SaaS offers.
+
+
## Offer listing details When you [create a new SaaS offer](create-new-saas-offer.md) in Partner Center, you will enter text, images, optional videos, and other details on the **Offer listing** page. This is the information that customers will see when they discover your offer listing in the commercial marketplace, as shown in the following example.
marketplace Publisher Guide By Offer Type https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/publisher-guide-by-offer-type.md
Previously updated : 10/06/2020 Last updated : 04/06/2021 # Publishing guide by offer type
This article describes the offer types that are available in the commercial mark
After you [decide on a publishing option](determine-your-listing-type.md), you must choose an offer type before you start creating your offer in Partner Center. The offer type will correspond to the type of solution, app, or service offer that you wish to publish, as well as its alignment to Microsoft products and services.
+> [!NOTE]
+> After you select an offer type, you can't change the offer to another type. To create a different offer type, you need to create a new offer.
+ You can configure a single offer type in different ways to enable different publishing options, listing option, provisioning, or pricing. The publishing option and configuration of the offer type also align to the offer eligibility and technical requirements. Be sure to review the online store and offer type eligibility requirements and the technical publishing requirements before creating your offer.
media-services Configure Connect Dotnet Howto https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/configure-connect-dotnet-howto.md
namespace ConsoleApp1
- [Tutorial: Analyze videos with Media Services v3 - .NET](analyze-videos-tutorial.md) - [Create a job input from a local file - .NET](job-input-from-local-file-how-to.md) - [Create a job input from an HTTPS URL - .NET](job-input-from-http-how-to.md)-- [Encode with a custom Transform - .NET](encode-custom-presets-how-to.md)
+- [Encode with a custom Transform - .NET](transform-custom-presets-how-to.md)
- [Use AES-128 dynamic encryption and the key delivery service - .NET](drm-playready-license-template-concept.md) - [Use DRM dynamic encryption and license delivery service - .NET](drm-protect-with-drm-tutorial.md) - [Get a signing key from the existing policy - .NET](drm-get-content-key-policy-dotnet-how-to.md)
media-services Encode Concept https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/encode-concept.md
To encode with Media Services v3, you need to create a [Transform](/rest/api/med
When encoding with Media Services, you use presets to tell the encoder how the input media files should be processed. In Media Services v3, you use Standard Encoder to encode your files. For example, you can specify the video resolution and/or the number of audio channels you want in the encoded content.
-You can get started quickly with one of the recommended built-in presets based on industry best practices or you can choose to build a custom preset to target your specific scenario or device requirements. For more information, see [Encode with a custom Transform](encode-custom-presets-how-to.md).
+You can get started quickly with one of the recommended built-in presets based on industry best practices or you can choose to build a custom preset to target your specific scenario or device requirements. For more information, see [Encode with a custom Transform](transform-custom-presets-how-to.md).
Starting with January 2019, when encoding with the Standard Encoder to produce MP4 file(s), a new .mpi file is generated and added to the output Asset. This MPI file is intended to improve performance for [dynamic packaging](encode-dynamic-packaging-concept.md) and streaming scenarios.
Media Services fully supports customizing all values in presets to meet your spe
#### Examples -- [Customize presets with .NET](encode-custom-presets-how-to.md)-- [Customize presets with CLI](encode-custom-preset-cli-how-to.md)-- [Customize presets with REST](encode-custom-preset-rest-how-to.md)
+- [Customize presets with .NET](transform-custom-presets-how-to.md)
+- [Customize presets with CLI](transform-custom-preset-cli-how-to.md)
+- [Customize presets with REST](transform-custom-preset-rest-how-to.md)
## Preset schema
Check out the [Azure Media Services community](media-services-community.md) arti
* [Upload, encode, and stream using Media Services](stream-files-tutorial-with-api.md). * [Encode from an HTTPS URL using built-in presets](job-input-from-http-how-to.md). * [Encode a local file using built-in presets](job-input-from-local-file-how-to.md).
-* [Build a custom preset to target your specific scenario or device requirements](encode-custom-presets-how-to.md).
+* [Build a custom preset to target your specific scenario or device requirements](transform-custom-presets-how-to.md).
media-services Encode Dynamic Packaging Concept https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/encode-dynamic-packaging-concept.md
The following articles show examples of [how to encode a video with Media Servic
* [Encode from an HTTPS URL by using built-in presets](job-input-from-http-how-to.md). * [Encode a local file by using built-in presets](job-input-from-local-file-how-to.md).
-* [Build a custom preset to target your specific scenario or device requirements](encode-custom-presets-how-to.md).
+* [Build a custom preset to target your specific scenario or device requirements](transform-custom-presets-how-to.md).
See the list of Standard Encoder [formats and codecs](encode-media-encoder-standard-formats-reference.md).
media-services Encode Media Encoder Standard Formats Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/encode-media-encoder-standard-formats-reference.md
[!INCLUDE [media services api v3 logo](./includes/v3-hr.md)]
-This article contains a list of the most common import and export file formats that you can use with [StandardEncoderPreset](/rest/api/medi).
+This article contains a list of the most common import and export file formats that you can use with [StandardEncoderPreset](/rest/api/medi).
## Input container/file formats
The following table lists the codecs and file formats that are supported for exp
## Next steps
-[Create a transform with a custom preset](encode-custom-presets-how-to.md)
+[Create a transform with a custom preset](transform-custom-presets-how-to.md)
media-services Media Services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/media-services-overview.md
How-to guides contain code samples that demonstrate how to complete a task. In t
* [Encode with HTTPS as job input - .NET](job-input-from-http-how-to.md) * [Monitor events - Portal](monitoring/monitor-events-portal-how-to.md) * [Encrypt dynamically with multi-DRM - .NET](drm-protect-with-drm-tutorial.md)
-* [How to encode with a custom transform - CLI](encode-custom-preset-cli-how-to.md)
+* [How to encode with a custom transform - CLI](transform-custom-preset-cli-how-to.md)
## Ask questions, give feedback, get updates
media-services Migrate V 2 V 3 Migration Scenario Based Content Protection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/migrate-v-2-v-3-migration-scenario-based-content-protection.md
Previously updated : 03/26/2021 Last updated : 04/05/2021
This article provides you with details and guidance on the migration of content
Use the support for [Multi-key](architecture-design-multi-drm-system.md) features in the new v3 API.
-See content protection concepts, tutorials and how to guides below for specific steps.
+See content protection concepts, tutorials and how to guides at the end of this article for specific steps.
-## Visibility of v2 Assets, StreamingLocators, and properties in the v3 API for content protection scenarios
+> [!NOTE]
+> The rest of this article discusses how you can migrate your v2 content protection to v3 with .NET. If you need instructions or sample code for a different language or method, please create a GitHub issue for this page.
-During migration to the v3 API, you will find that you need to access some properties or content keys from your v2 Assets. One key difference is that the v2 API would use the **AssetId** as the primary identification key and the new v3 API uses the Azure Resource Management name of the entity as the primary identifier. The v2 **Asset.Name** property is not typically used as a unique identifier, so when migrating to v3 you will find that your v2 Asset names now appear in the **Asset.Description** field.
+## v3 visibility of v2 Assets, StreamingLocators, and properties
-For example, if you previously had a v2 Asset with the ID of **"nb:cid:UUID:8cb39104-122c-496e-9ac5-7f9e2c2547b8"**, then you will find when listing the old v2 assets through the v3 API, the name will now be the GUID part at the end (in this case, **"8cb39104-122c-496e-9ac5-7f9e2c2547b8"**.)
+In the v2 API, `Assets`, `StreamingLocators`, and `ContentKeys` were used to protect your streaming content. When migrating to the v3 API, your v2 API `Assets`, `StreamingLocators`, and `ContentKeys` are all exposed automatically in the v3 API and all of the data on them is available for you to access.
-You can query the **StreamingLocators** associated with the Assets created in the v2 API using the new v3 method [ListStreamingLocators](https://docs.microsoft.com/rest/api/media/assets/liststreaminglocators) on the Asset entity. Also reference the .NET client SDK version of [ListStreamingLocatorsAsync](https://docs.microsoft.com/dotnet/api/microsoft.azure.management.media.assetsoperationsextensions.liststreaminglocatorsasync?view=azure-dotnet&preserve-view=true)
+However, you cannot *update* any properties on v2 entities through the v3 API that were created in v2.
-The results of the **ListStreamingLocators** method will provide you the **Name** and **StreamingLocatorId** of the locator along with the **StreamingPolicyName**.
+If you need to update or change content stored on v2 entities, update them with the v2 API or create new v3 API entities to migrate them.
-To find the **ContentKeys** used in your **StreamingLocators** for content protection, you can call the [StreamingLocator.ListContentKeysAsync](https://docs.microsoft.com/dotnet/api/microsoft.azure.management.media.streaminglocatorsoperationsextensions.listcontentkeysasync?view=azure-dotnet&preserve-view=true) method.
+## Asset identifier differences
-Any **Assets** that were created and published using the v2 API will have both a [Content Key Policy](https://docs.microsoft.com/azure/media-services/latest/drm-content-key-policy-concept) and a Content Key defined on them in the v3 API, instead of using a default content key policy on the [Streaming Policy](https://docs.microsoft.com/azure/media-services/latest/stream-streaming-policy-concept).
+To migrate, you'll need to access properties or content keys from your v2 Assets. It's important to understand that the v2 API uses the `AssetId` as the primary identification key but the new v3 API uses the *Azure Resource Management name* of the entity as the primary identifier. (The v2 `Asset.Name` property is not used as a unique identifier.) With the v3 API, your v2 Asset name now appears as the `Asset.Description`.
-For more information on content protection in the v3 API, see the article [Protect your content with Media Services dynamic encryption.](https://docs.microsoft.com/azure/media-services/latest/drm-content-protection-concept)
+For example, if you previously had a v2 Asset with the ID of `nb:cid:UUID:8cb39104-122c-496e-9ac5-7f9e2c2547b8`, the identifier is now the GUID at the end, `8cb39104-122c-496e-9ac5-7f9e2c2547b8`. You'll see this when listing your v2 assets through the v3 API.
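+
+If you want a quick way to see this mapping, one option is to list your assets with the Azure CLI and compare the `name` and `description` columns. This is only an illustrative sketch; the account and resource group names are placeholders:
+
+```azurecli-interactive
+az ams asset list --account-name <your-ams-account> --resource-group <your-resource-group> --query "[].{name:name, description:description}" --output table
+```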
+
+Any Assets that were created and published using the v2 API will have both a `ContentKeyPolicy` and a `ContentKey` in the v3 API instead of a default content key policy on the `StreamingPolicy`.
+
+For more information, see the [Content key policy](https://docs.microsoft.com/azure/media-services/latest/drm-content-key-policy-concept) documentation and the [Streaming Policy](https://docs.microsoft.com/azure/media-services/latest/stream-streaming-policy-concept) documentation.
+
+## Use Azure Media Services Explorer (AMSE) v2 and AMSE v3 tools side by side
-## How to list your v2 Assets and content protection settings using the v3 API
+Use the [v2 Azure Media Services Explorer tool](https://github.com/Azure/Azure-Media-Services-Explorer/releases/tag/v4.3.15.0) along with the [v3 Azure Media Services Explorer tool](https://github.com/Azure/Azure-Media-Services-Explorer) to compare the data side by side for an Asset created and published via v2 APIs. The properties should all be visible, but in different locations.
-In the v2 API, you would commonly use **Assets**, **StreamingLocators**, and **ContentKeys** to protect your streaming content.
-When migrating to the v3 API, your v2 API Assets, StreamingLocators, and ContentKeys are all exposed automatically in the v3 API and all of the data on them is available for you to access.
+## Use the .NET content protection migration sample
-## Can I update v2 properties using the v3 API?
+You can find a code sample to compare the differences in Asset identifiers using the [v2tov3MigrationSample](https://github.com/Azure-Samples/media-services-v3-dotnet/tree/main/ContentProtection/v2tov3Migration) under ContentProtection in the Media Services code samples.
-No, you cannot update any properties on v2 entities through the v3 API that were created using StreamingLocators, StreamingPolicies, Content Key Policies, and Content Keys in v2.
-If you need to update, change or alter content stored on v2 entities, you will need to update it via the v2 API or create new v3 API entities to migrate them forward.
+## List the Streaming Locators
-## How do I change the ContentKeyPolicy used for a v2 Asset that is published and keep the same content key?
+You can query the `StreamingLocators` associated with the Assets created in the v2 API using the new v3 method [ListStreamingLocators](https://docs.microsoft.com/rest/api/media/assets/liststreaminglocators) on the Asset entity. Also reference the .NET client SDK version, [ListStreamingLocatorsAsync](https://docs.microsoft.com/dotnet/api/microsoft.azure.management.media.assetsoperationsextensions.liststreaminglocatorsasync?view=azure-dotnet&preserve-view=true).
-In this situation, you should first unpublish (remove all Streaming Locators) on the Asset via the v2 SDK (delete the locator, unlink the Content Key Authorization Policy, unlink the Asset Delivery Policy, unlink the Content Key, delete the Content Key) then create a new **[StreamingLocator](https://docs.microsoft.com/azure/media-services/latest/stream-streaming-locators-concept)** in v3 using a v3 [StreamingPolicy](https://docs.microsoft.com/azure/media-services/latest/stream-streaming-policy-concept) and [ContentKeyPolicy](https://docs.microsoft.com/azure/media-services/latest/drm-content-key-policy-concept).
+The results of the `ListStreamingLocators` method will provide you the `Name` and `StreamingLocatorId` of the locator along with the `StreamingPolicyName`.
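+
+If you're working from the Azure CLI rather than REST or .NET, a comparable listing is available through `az ams asset list-streaming-locators` (this is an assumption about your installed CLI version; check `az ams asset list-streaming-locators --help`). The account, resource group, and asset names below are placeholders:
+
+```azurecli-interactive
+az ams asset list-streaming-locators --account-name <your-ams-account> --resource-group <your-resource-group> --name <asset-name>
+```
+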
-You would need to specify the specific content key identifier and key value needed when you are creating the **[StreamingLocator](https://docs.microsoft.com/azure/media-services/latest/stream-streaming-locators-concept)**.
+## Find the content keys
+
+To find the `ContentKeys` used with your `StreamingLocators`, you can call the [StreamingLocator.ListContentKeysAsync](https://docs.microsoft.com/dotnet/api/microsoft.azure.management.media.streaminglocatorsoperationsextensions.listcontentkeysasync?view=azure-dotnet&preserve-view=true) method.
+
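+A rough Azure CLI equivalent, if you prefer not to use .NET, might look like the following, assuming your CLI version includes the `az ams streaming-locator list-content-keys` command; all names are placeholders:
+
+```azurecli-interactive
+az ams streaming-locator list-content-keys --account-name <your-ams-account> --resource-group <your-resource-group> --name <streaming-locator-name>
+```
+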
+For more information on content protection in the v3 API, see the article [Protect your content with Media Services dynamic encryption.](https://docs.microsoft.com/azure/media-services/latest/drm-content-protection-concept)
-Note that it is possible to delete the v2 locator using the v3 API, but this will not remove the content key or the content key policy used if they were created in the v2 API.
+## Change the v2 ContentKeyPolicy keeping the same ContentKey
-## Using AMSE v2 and AMSE v3 side by side
+You should first unpublish (remove all Streaming Locators) on the Asset via the v2 SDK. Here's how:
-When migrating your content from v2 to v3, it is advised to install the [v2 Azure Media Services Explorer tool](https://github.com/Azure/Azure-Media-Services-Explorer/releases/tag/v4.3.15.0) along with the [v3 Azure Media Services Explorer tool](https://github.com/Azure/Azure-Media-Services-Explorer) to help compare the data that they show side by side for an Asset that is created and published via v2 APIs. The properties should all be visible, but in slightly different locations now.
+1. Delete the locator.
+1. Unlink the `ContentKeyAuthorizationPolicy`.
+1. Unlink the `AssetDeliveryPolicy`.
+1. Unlink the `ContentKey`.
+1. Delete the `ContentKey`.
+1. Create a new `StreamingLocator` in v3 using a v3 `StreamingPolicy` and `ContentKeyPolicy`, specifying the specific content key identifier and key value needed.
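+
+The following Azure CLI sketch illustrates the last step only. It assumes your CLI version supports the `--content-keys` parameter and that `contentkeys.json` contains the existing content key identifier and key value; every name shown is a placeholder, not a prescribed value.
+
+```azurecli-interactive
+# contentkeys.json (placeholder) example:
+# [{"id": "<existing-content-key-guid>", "labelReferenceInStreamingPolicy": "<label>", "value": "<base64-key-value>"}]
+az ams streaming-locator create \
+  --account-name <your-ams-account> \
+  --resource-group <your-resource-group> \
+  --name <new-streaming-locator-name> \
+  --asset-name <asset-name> \
+  --streaming-policy-name <v3-streaming-policy> \
+  --default-content-key-policy-name <v3-content-key-policy> \
+  --content-keys @contentkeys.json
+```
+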
+> [!NOTE]
+> It is possible to delete the v2 locator using the v3 API, but this won't remove the content key or the content key policy if they were created in the v2 API.
## Content protection concepts, tutorials and how to guides
When migrating your content from v2 to v3, it is advised to install the [v2 Azur
## Samples
-You can also [compare the V2 and V3 code in the code samples](migrate-v-2-v-3-migration-samples.md).
+- [v2tov3MigrationSample](https://github.com/Azure-Samples/media-services-v3-dotnet/tree/main/ContentProtection/v2tov3Migration)
+- You can also [compare the V2 and V3 code in the code samples](migrate-v-2-v-3-migration-samples.md).
## Tools
media-services Migrate V 2 V 3 Migration Scenario Based Encoding https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/migrate-v-2-v-3-migration-scenario-based-encoding.md
For customers using the Indexer v1 processor in the v2 API, you need to create a
- [Create a job input from a local file](job-input-from-local-file-how-to.md) - [Create a basic audio transform](transform-create-basic-audio-how-to.md) - With .NET
- - [How to encode with a custom transform - .NET](encode-custom-presets-how-to.md)
+ - [How to encode with a custom transform - .NET](transform-custom-presets-how-to.md)
- [How to create an overlay with Media Encoder Standard](transform-create-overlay-how-to.md) - [How to generate thumbnails using Encoder Standard with .NET](transform-generate-thumbnails-dotnet-how-to.md) - With Azure CLI
- - [How to encode with a custom transform - Azure CLI](encode-custom-preset-cli-how-to.md)
+ - [How to encode with a custom transform - Azure CLI](transform-custom-preset-cli-how-to.md)
- With REST
- - [How to encode with a custom transform - REST](encode-custom-preset-rest-how-to.md)
+ - [How to encode with a custom transform - REST](transform-custom-preset-rest-how-to.md)
- [How to generate thumbnails using Encoder Standard with REST](transform-generate-thumbnails-rest-how-to.md) - [Subclip a video when encoding with Media Services - .NET](transform-subclip-video-dotnet-how-to.md) - [Subclip a video when encoding with Media Services - REST](transform-subclip-video-rest-how-to.md)
media-services Stream Files Tutorial With Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/stream-files-tutorial-with-api.md
When encoding or processing content in Media Services, it's a common pattern to
When creating a new [Transform](/rest/api/medi).
-You can use a built-in EncoderNamedPreset or use custom presets. For more information, see [How to customize encoder presets](encode-custom-presets-how-to.md).
+You can use a built-in EncoderNamedPreset or use custom presets. For more information, see [How to customize encoder presets](transform-custom-presets-how-to.md).
When creating a [Transform](/rest/api/media/transforms), you should first check if one already exists using the **Get** method, as shown in the code that follows. In Media Services v3, **Get** methods on entities return **null** if the entity doesn't exist (a case-insensitive check on the name).
media-services Transform Create Overlay How To https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/transform-create-overlay-how-to.md
The Media Encoder Standard allows you to overlay an image, audio file, or anothe
If you aren't already familiar with Transforms, it is recommended that you complete the following activities: * Read [Encoding video and audio with Media Services](encode-concept.md)
-* Read [How to encode with a custom transform - .NET](encode-custom-presets-how-to.md). Follow the steps in that article to set up the .NET needed to work with transforms, then return here to try out an overlays preset sample.
+* Read [How to encode with a custom transform - .NET](transform-custom-presets-how-to.md). Follow the steps in that article to set up the .NET needed to work with transforms, then return here to try out an overlays preset sample.
* See the [Transforms reference document](/rest/api/media/transforms). Once you are familiar with Transforms, download the overlays sample.
media-services Transform Create Transform How To https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/transform-create-transform-how-to.md
The Azure CLI script in this article shows how to create a transform. Transforms
## [CLI](#tab/cli/) > [!NOTE]
-> You can only specify a path to a custom Standard Encoder preset JSON file for [StandardEncoderPreset](/rest/api/medi) example.
+> You can only specify a path to a custom Standard Encoder preset JSON file for [StandardEncoderPreset](/rest/api/medi) example.
> > You cannot pass a file name when using [BuiltInStandardEncoderPreset](/rest/api/media/transforms/createorupdate#builtinstandardencoderpreset).
media-services Transform Custom Preset Cli How To https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/transform-custom-preset-cli-how-to.md
+
+ Title: Encode a custom transform CLI
+description: This topic shows how to use Azure Media Services v3 to encode a custom transform using Azure CLI.
+
+documentationcenter: ''
++
+editor: ''
++++ Last updated : 08/31/2020+++
+# How to encode with a custom transform - Azure CLI
++
+When encoding with Azure Media Services, you can get started quickly with one of the recommended built-in presets, based on industry best practices, as demonstrated in the [Streaming files](stream-files-cli-quickstart.md#create-a-transform-for-adaptive-bitrate-encoding) quickstart. You can also build a custom preset to target your specific scenario or device requirements.
+
+## Considerations
+
+When creating custom presets, the following considerations apply:
+
+* All values for height and width on AVC content must be a multiple of 4.
+* In Azure Media Services v3, all of the encoding bitrates are in bits per second. This is different from the presets with our v2 APIs, which used kilobits/second as the unit. For example, if the bitrate in v2 was specified as 128 (kilobits/second), in v3 it would be set to 128000 (bits/second).
+
+## Prerequisites
+
+[Create a Media Services account](./account-create-how-to.md).
+
+Make sure to remember the resource group name and the Media Services account name.
+
+## Define a custom preset
+
+The following example defines the request body of a new Transform. We define a set of outputs that we want to be generated when this Transform is used.
+
+In this example, we first add an AacAudio layer for the audio encoding and two H264Video layers for the video encoding. In the video layers, we assign labels so that they can be used in the output file names. Next, we want the output to also include thumbnails. In the example below, we specify images in PNG format, generated at 50% of the resolution of the input video, and at three timestamps - {25%, 50%, 75%} of the length of the input video. Lastly, we specify the format for the output files - one for video + audio, and another for the thumbnails. Since we have multiple H264Layers, we have to use macros that produce unique names per layer. We can use either a `{Label}` or `{Bitrate}` macro; the example shows the former.
+
+We are going to save this transform in a file. In this example, we name the file `customPreset.json`.
+
+```json
+{
+ "@odata.type": "#Microsoft.Media.StandardEncoderPreset",
+ "codecs": [
+ {
+ "@odata.type": "#Microsoft.Media.AacAudio",
+ "channels": 2,
+ "samplingRate": 48000,
+ "bitrate": 128000,
+ "profile": "AacLc"
+ },
+ {
+ "@odata.type": "#Microsoft.Media.H264Video",
+ "keyFrameInterval": "PT2S",
+ "stretchMode": "AutoSize",
+ "sceneChangeDetection": false,
+ "complexity": "Balanced",
+ "layers": [
+ {
+ "width": "1280",
+ "height": "720",
+ "label": "HD",
+ "bitrate": 3400000,
+ "maxBitrate": 3400000,
+ "bFrames": 3,
+ "slices": 0,
+ "adaptiveBFrame": true,
+ "profile": "Auto",
+ "level": "auto",
+ "bufferWindow": "PT5S",
+ "referenceFrames": 3,
+ "entropyMode": "Cabac"
+ },
+ {
+ "width": "640",
+ "height": "360",
+ "label": "SD",
+ "bitrate": 1000000,
+ "maxBitrate": 1000000,
+ "bFrames": 3,
+ "slices": 0,
+ "adaptiveBFrame": true,
+ "profile": "Auto",
+ "level": "auto",
+ "bufferWindow": "PT5S",
+ "referenceFrames": 3,
+ "entropyMode": "Cabac"
+ }
+ ]
+ },
+ {
+ "@odata.type": "#Microsoft.Media.PngImage",
+ "stretchMode": "AutoSize",
+ "start": "25%",
+ "step": "25%",
+ "range": "80%",
+ "layers": [
+ {
+ "width": "50%",
+ "height": "50%"
+ }
+ ]
+ }
+ ],
+ "formats": [
+ {
+ "@odata.type": "#Microsoft.Media.Mp4Format",
+ "filenamePattern": "Video-{Basename}-{Label}-{Bitrate}{Extension}",
+ "outputFiles": []
+ },
+ {
+ "@odata.type": "#Microsoft.Media.PngFormat",
+ "filenamePattern": "Thumbnail-{Basename}-{Index}{Extension}"
+ }
+ ]
+}
+```
+
+## Create a new transform
+
+In this example, we create a **Transform** that is based on the custom preset we defined earlier. When creating a Transform, you should first check if one already exists. If the Transform exists, reuse it. The following `show` command returns the `customTransformName` transform if it exists:
+
+```azurecli-interactive
+az ams transform show -a amsaccount -g amsResourceGroup -n customTransformName
+```
+
+The following Azure CLI command creates the Transform based on the custom preset (defined earlier).
+
+```azurecli-interactive
+az ams transform create -a amsaccount -g amsResourceGroup -n customTransformName --description "Basic Transform using a custom encoding preset" --preset customPreset.json
+```
+
+For Media Services to apply the Transform to the specified video or audio, you need to submit a Job under that Transform. For a complete example that shows how to submit a job under a transform, see [Quickstart: Stream video files - Azure CLI](stream-files-cli-quickstart.md).
+
+## See also
+
+[Azure CLI](/cli/azure/ams)
media-services Transform Custom Preset Rest How To https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/transform-custom-preset-rest-how-to.md
+
+ Title: Encode a custom transform REST
+description: This topic shows how to use Azure Media Services v3 to encode a custom transform using REST.
+
+documentationcenter: ''
++
+editor: ''
++++ Last updated : 08/31/2020+++
+# How to encode with a custom transform - REST
++
+When encoding with Azure Media Services, you can get started quickly with one of the recommended built-in presets, based on industry best practices, as demonstrated in the [Streaming files](stream-files-tutorial-with-rest.md#create-a-transform) tutorial. You can also build a custom preset to target your specific scenario or device requirements.
+
+## Considerations
+
+When creating custom presets, the following considerations apply:
+
+* All values for height and width on AVC content must be a multiple of 4.
+* In Azure Media Services v3, all of the encoding bitrates are in bits per second. This is different from the presets with our v2 APIs, which used kilobits/second as the unit. For example, if the bitrate in v2 was specified as 128 (kilobits/second), in v3 it would be set to 128000 (bits/second).
+
+## Prerequisites
+
+- [Create a Media Services account](./account-create-how-to.md). <br/>Make sure to remember the resource group name and the Media Services account name.
+- [Configure Postman for Azure Media Services REST API calls](setup-postman-rest-how-to.md).<br/>Make sure to follow the last step in the topic [Get Azure AD Token](setup-postman-rest-how-to.md#get-azure-ad-token).
+
+## Define a custom preset
+
+The following example defines the request body of a new Transform. We define a set of outputs that we want to be generated when this Transform is used.
+
+In this example, we first add an AacAudio layer for the audio encoding and two H264Video layers for the video encoding. In the video layers, we assign labels so that they can be used in the output file names. Next, we want the output to also include thumbnails. In the example below, we specify images in PNG format, generated at 50% of the resolution of the input video, and at three timestamps - {25%, 50%, 75%} of the length of the input video. Lastly, we specify the format for the output files - one for video + audio, and another for the thumbnails. Since we have multiple H264Layers, we have to use macros that produce unique names per layer. We can use either a `{Label}` or `{Bitrate}` macro; the example shows the former.
+
+```json
+{
+ "properties": {
+ "description": "Basic Transform using a custom encoding preset",
+ "outputs": [
+ {
+ "onError": "StopProcessingJob",
+ "relativePriority": "Normal",
+ "preset": {
+ "@odata.type": "#Microsoft.Media.StandardEncoderPreset",
+ "codecs": [
+ {
+ "@odata.type": "#Microsoft.Media.AacAudio",
+ "channels": 2,
+ "samplingRate": 48000,
+ "bitrate": 128000,
+ "profile": "AacLc"
+ },
+ {
+ "@odata.type": "#Microsoft.Media.H264Video",
+ "keyFrameInterval": "PT2S",
+ "stretchMode": "AutoSize",
+ "sceneChangeDetection": false,
+ "complexity": "Balanced",
+ "layers": [
+ {
+ "width": "1280",
+ "height": "720",
+ "label": "HD",
+ "bitrate": 3400000,
+ "maxBitrate": 3400000,
+ "bFrames": 3,
+ "slices": 0,
+ "adaptiveBFrame": true,
+ "profile": "Auto",
+ "level": "auto",
+ "bufferWindow": "PT5S",
+ "referenceFrames": 3,
+ "entropyMode": "Cabac"
+ },
+ {
+ "width": "640",
+ "height": "360",
+ "label": "SD",
+ "bitrate": 1000000,
+ "maxBitrate": 1000000,
+ "bFrames": 3,
+ "slices": 0,
+ "adaptiveBFrame": true,
+ "profile": "Auto",
+ "level": "auto",
+ "bufferWindow": "PT5S",
+ "referenceFrames": 3,
+ "entropyMode": "Cabac"
+ }
+ ]
+ },
+ {
+ "@odata.type": "#Microsoft.Media.PngImage",
+ "stretchMode": "AutoSize",
+ "start": "25%",
+ "step": "25%",
+ "range": "80%",
+ "layers": [
+ {
+ "width": "50%",
+ "height": "50%"
+ }
+ ]
+ }
+ ],
+ "formats": [
+ {
+ "@odata.type": "#Microsoft.Media.Mp4Format",
+ "filenamePattern": "Video-{Basename}-{Label}-{Bitrate}{Extension}",
+ "outputFiles": []
+ },
+ {
+ "@odata.type": "#Microsoft.Media.PngFormat",
+ "filenamePattern": "Thumbnail-{Basename}-{Index}{Extension}"
+ }
+ ]
+ }
+ }
+ ]
+ }
+}
+
+```
+
+## Create a new transform
+
+In this example, we create a **Transform** that is based on the custom preset we defined earlier. When creating a Transform, you should first use [Get](/rest/api/media/transforms/get) to check if one already exists. If the Transform exists, reuse it.
+
+In the Postman's collection that you downloaded, select **Transforms and Jobs**->**Create or Update Transform**.
+
+The **PUT** HTTP request method is similar to:
+
+```
+PUT https://management.azure.com/subscriptions/:subscriptionId/resourceGroups/:resourceGroupName/providers/Microsoft.Media/mediaServices/:accountName/transforms/:transformName?api-version={{api-version}}
+```
+
+Select the **Body** tab and replace the body with the JSON code you [defined earlier](#define-a-custom-preset).
+
+Select **Send**.
+
+For Media Services to apply the Transform to the specified video or audio, you need to submit a Job under that Transform. For a complete example that shows how to submit a job under a transform, see [Tutorial: Stream video files - REST](stream-files-tutorial-with-rest.md).
+
+## Next steps
+
+See [other REST operations](/rest/api/media/)
media-services Transform Custom Presets How To https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/transform-custom-presets-how-to.md
+
+ Title: Encode custom transform .NET
+description: This topic shows how to use Azure Media Services v3 to encode a custom transform using .NET.
+
+documentationcenter: ''
++
+editor: ''
+++ Last updated : 08/31/2020++++
+# How to encode with a custom transform - .NET
++
+When encoding with Azure Media Services, you can get started quickly with one of the recommended built-in presets based on industry best practices as demonstrated in the [Streaming files](stream-files-tutorial-with-api.md) tutorial. You can also build a custom preset to target your specific scenario or device requirements.
+
+## Considerations
+
+When creating custom presets, the following considerations apply:
+
+* All values for height and width on AVC content must be a multiple of 4.
+* In Azure Media Services v3, all of the encoding bitrates are in bits per second. This is different from the presets with our v2 APIs, which used kilobits/second as the unit. For example, if the bitrate in v2 was specified as 128 (kilobits/second), in v3 it would be set to 128000 (bits/second).
+
+## Prerequisites
+
+[Create a Media Services account](./account-create-how-to.md)
+
+## Download the sample
+
+Clone a GitHub repository that contains the full .NET Core sample to your machine using the following command:
+
+ ```bash
+ git clone https://github.com/Azure-Samples/media-services-v3-dotnet.git
+ ```
+
+The custom preset sample is located in the [Encoding with a custom preset using .NET](https://github.com/Azure-Samples/media-services-v3-dotnet/tree/main/VideoEncoding/EncodingWithMESCustomPreset_H264) folder.
+
+## Create a transform with a custom preset
+
+When creating a new [Transform](/rest/api/media/transforms), you need to specify what you want it to produce as an output. The required parameter is a [TransformOutput](/rest/api/media/transforms/createorupdate#transformoutput) object, as shown in the code below. Each **TransformOutput** contains a **Preset**. The **Preset** describes the step-by-step instructions of video and/or audio processing operations that are to be used to generate the desired **TransformOutput**. The following **TransformOutput** creates custom codec and layer output settings.
+
+When creating a [Transform](/rest/api/media/transforms), you should first check if one already exists using the **Get** method, as shown in the code that follows. In Media Services v3, **Get** methods on entities return **null** if the entity doesn't exist (a case-insensitive check on the name).
+
+### Example
+
+The following example defines a set of outputs that we want to be generated when this Transform is used. We first add an AacAudio layer for the audio encoding and two H264Video layers for the video encoding. In the video layers, we assign labels so that they can be used in the output file names. Next, we want the output to also include thumbnails. In the example below, we specify images in PNG format, generated at 50% of the resolution of the input video, and at three timestamps - {25%, 50%, 75%} of the length of the input video. Lastly, we specify the format for the output files - one for video + audio, and another for the thumbnails. Since we have multiple H264Layers, we have to use macros that produce unique names per layer. We can use either a `{Label}` or `{Bitrate}` macro; the example shows the former.
+
+[!code-csharp[Main](../../../media-services-v3-dotnet/VideoEncoding/EncodingWithMESCustomPreset_H264/Program.cs#EnsureTransformExists)]
+
+## Next steps
+
+[Streaming files](stream-files-tutorial-with-api.md)
media-services Transform Generate Thumbnails Dotnet How To https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/transform-generate-thumbnails-dotnet-how-to.md
You can use Media Encoder Standard to generate one or more thumbnails from your
## Recommended reading and practice
-It is recommended that you become familiar with custom transforms by reading [How to encode with a custom transform - .NET](encode-custom-presets-how-to.md).
+It is recommended that you become familiar with custom transforms by reading [How to encode with a custom transform - .NET](transform-custom-presets-how-to.md).
## Transform code example
private static Transform EnsureTransformExists(IAzureMediaServicesClient client,
return transform; } ```-
-## Next steps
-
-[Generate thumbnails using REST](transform-generate-thumbnails-rest-how-to.md)
media-services Transform Generate Thumbnails Rest How To https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/transform-generate-thumbnails-rest-how-to.md
You can use Media Encoder Standard to generate one or more thumbnails from your
## Recommended reading and practice
-It is recommended that you become familiar with custom transforms by reading [How to encode with a custom transform - REST](encode-custom-preset-rest-how-to.md).
+It is recommended that you become familiar with custom transforms by reading [How to encode with a custom transform - REST](transform-custom-preset-rest-how-to.md).
## Thumbnail parameters
media-services Transform Subclip Video Dotnet How To https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/transform-subclip-video-dotnet-how-to.md
private static async Task<Job> JobWithBuiltInStandardEncoderWithSingleClipAsync(
## Next steps
-[How to encode with a custom transform](encode-custom-presets-how-to.md)
+[How to encode with a custom transform](transform-custom-presets-how-to.md)
media-services Transform Subclip Video Rest How To https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/transform-subclip-video-rest-how-to.md
To complete the steps described in this topic, you have to:
## Next steps
-[How to encode with a custom transform](encode-custom-preset-rest-how-to.md)
+[How to encode with a custom transform](transform-custom-preset-rest-how-to.md)
media-services Media Services Custom Mes Presets With Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/previous/media-services-custom-mes-presets-with-dotnet.md
namespace CustomizeMESPresests
## See also -- [How to encode with a custom transform by using CLI](../latest/encode-custom-preset-cli-how-to.md)
+- [How to encode with a custom transform by using CLI](../latest/transform-custom-preset-cli-how-to.md)
- [Encoding with Media Services v3](../latest/encode-concept.md) ## Media Services learning paths
mysql Concepts Networking https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/flexible-server/concepts-networking.md
Learn how to enable and manage public access (allowed IP addresses) using the [A
### Troubleshooting public access issues Consider the following points when access to the Microsoft Azure Database for MySQL Server service does not behave as you expect:
-* **Changes to the allow list have not taken effect yet:** There may be as much as a five-minute delay for changes to the Azure Database for MySQL Server firewall configuration to take effect.
+* **Changes to the allowlist have not taken effect yet:** There may be as much as a five-minute delay for changes to the Azure Database for MySQL Server firewall configuration to take effect.
* **Authentication failed:** If a user does not have permissions on the Azure Database for MySQL server or the password used is incorrect, the connection to the Azure Database for MySQL server is denied. Creating a firewall setting only provides clients with an opportunity to attempt connecting to your server. Each client must still provide the necessary security credentials.
Example
## TLS and SSL
-Azure Database for MySQL Flexible Server supports connecting your client applications to the MySQL service using Transport Layer Security (TLS). TLS is an industry standard protocol that ensures encrypted network connections between your database server and client applications. TLS is an updated protocol of Secure Sockets Layer (SSL).
+Azure Database for MySQL Flexible Server supports connecting your client applications to the MySQL server using Secure Sockets Layer (SSL) with Transport Layer Security (TLS) encryption. TLS is an industry standard protocol that ensures encrypted network connections between your database server and client applications, allowing you to adhere to compliance requirements.
-Azure Database for MySQL Flexible Server only supports encrypted connections using Transport Layer Security (TLS 1.2). All incoming connections with TLS 1.0 and TLS 1.1 will be denied. You cannot disable or change the TLS version for connecting to Azure Database for MySQL Flexible Server. Review how to [connect using SSL/TLS](how-to-connect-tls-ssl.md) to learn more.
+Azure Database for MySQL Flexible Server supports encrypted connections using Transport Layer Security (TLS 1.2) by default, and all incoming connections with TLS 1.0 and TLS 1.1 are denied by default. You can configure the enforcement of encrypted connections and change the TLS version on your flexible server.
+
+Following are the different configurations of SSL and TLS settings you can have for your flexible server:
+
+| Scenario | Server parameter settings | Description |
+||--||
+|Disable SSL (encrypted connections) | require_secure_transport = OFF |If your legacy application doesn't support encrypted connections to the MySQL server, you can disable enforcement of encrypted connections to your flexible server by setting require_secure_transport=OFF.|
+|Enforce SSL with TLS version < 1.2 | require_secure_transport = ON and tls_version = TLSV1 or TLSV1.1| If your legacy application supports encrypted connections but requires TLS version < 1.2, you can enable encrypted connections but configure your flexible server to allow connections with the TLS version (v1.0 or v1.1) supported by your application.|
+|Enforce SSL with TLS version = 1.2 (default configuration)|require_secure_transport = ON and tls_version = TLSV1.2| This is the recommended and default configuration for flexible server.|
+|Enforce SSL with TLS version = 1.3 (supported with MySQL v8.0 and above)| require_secure_transport = ON and tls_version = TLSV1.3| This is useful and recommended for new application development.|
+
+> [!Note]
+> Changes to the SSL cipher on flexible server are not supported. FIPS cipher suites are enforced by default when tls_version is set to TLS version 1.2. For TLS versions other than version 1.2, the SSL cipher is set to the default settings that come with the MySQL community installation.
+
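+If you'd rather use the Azure CLI than the portal to check how your server is currently configured, you can read these server parameters with `az mysql flexible-server parameter show` (assuming the `az mysql flexible-server` command group, which is in preview, is available in your CLI version). The resource group and server names are placeholders:
+
+```azurecli-interactive
+az mysql flexible-server parameter show --resource-group <resource-group> --server-name <server-name> --name require_secure_transport
+az mysql flexible-server parameter show --resource-group <resource-group> --server-name <server-name> --name tls_version
+```
+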
+Review how to [connect using SSL/TLS](how-to-connect-tls-ssl.md) to learn more.
## Next steps
mysql Concepts Server Parameters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/flexible-server/concepts-server-parameters.md
Refer to the following sections below to learn more about the limits of the seve
### log_bin_trust_function_creators
-In Azure Database for MySQL Flexible Server, binary logs are always enabled (that is, `log_bin` is set to ON). In case you want to use triggers you will get error similar to *you do not have the SUPER privilege and binary logging is enabled (you might want to use the less safe `log_bin_trust_function_creators` variable)*.
+In Azure Database for MySQL Flexible Server, binary logs are always enabled (that is, `log_bin` is set to ON). `log_bin_trust_function_creators` is set to ON by default in flexible servers.
-The binary logging format is always **ROW** and all connections to the server **ALWAYS** use row-based binary logging. With row-based binary logging, security issues do not exist and binary logging cannot break, so you can safely set [`log_bin_trust_function_creators`](https://dev.mysql.com/doc/refman/5.7/en/replication-options-binary-log.html#sysvar_log_bin_trust_function_creators) to **TRUE**.
+The binary logging format is always **ROW** and all connections to the server **ALWAYS** use row-based binary logging. With row-based binary logging, security issues do not exist and binary logging cannot break, so you can safely allow [`log_bin_trust_function_creators`](https://dev.mysql.com/doc/refman/5.7/en/replication-options-binary-log.html#sysvar_log_bin_trust_function_creators) to remain **ON**.
+
+If `log_bin_trust_function_creators` is set to OFF and you try to create triggers, you may get errors similar to *you do not have the SUPER privilege and binary logging is enabled (you might want to use the less safe `log_bin_trust_function_creators` variable)*.
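+
+If the parameter has been turned off on your server and you want to restore the default behavior, a minimal Azure CLI sketch is shown below; the resource group and server names are placeholders:
+
+```azurecli-interactive
+az mysql flexible-server parameter set --resource-group <resource-group> --server-name <server-name> --name log_bin_trust_function_creators --value ON
+```
+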
### innodb_buffer_pool_size
mysql Connect Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/flexible-server/connect-azure-cli.md
Your preference of are now saved to local context. To learn more, type in `az l
## Next Steps > [!div class="nextstepaction"]
-> [Manage the server](./how-to-manage-server-cli.md)
+* [Connect to Azure Database for MySQL - Flexible Server with encrypted connections](how-to-connect-tls-ssl.md)
+* [Manage the server](./how-to-manage-server-cli.md)
mysql Connect Workbench https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/flexible-server/connect-workbench.md
Title: 'Quickstart: Connect - MySQL Workbench - Azure Database for MySQL - Flexible Server' description: This Quickstart provides the steps to use MySQL Workbench to connect and query data from Azure Database for MySQL - Flexible Server.--++
mysql How To Connect Tls Ssl https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/flexible-server/how-to-connect-tls-ssl.md
Last updated 09/21/2020
-# Connect to Azure Database for MySQL - Flexible Server over TLS1.2/SSL
+# Connect to Azure Database for MySQL - Flexible Server with encrypted connections
> [!IMPORTANT] > Azure Database for MySQL Flexible Server is currently in public preview
-Azure Database for MySQL Flexible Server supports connecting your client applications to the MySQL service using Transport Layer Security (TLS), previously known as Secure Sockets Layer (SSL). TLS is an industry standard protocol that ensures encrypted network connections between your database server and client applications, allowing you to adhere to compliance requirements.
+Azure Database for MySQL Flexible Server supports connecting your client applications to the MySQL server using Secure Sockets Layer (SSL) with Transport Layer Security (TLS) encryption. TLS is an industry standard protocol that ensures encrypted network connections between your database server and client applications, allowing you to adhere to compliance requirements.
-Azure Database for MySQL Flexible Server only supports encrypted connections using Transport Layer Security (TLS 1.2) and all incoming connections with TLS 1.0 and TLS 1.1 will be denied. For all flexible servers enforcement of TLS connections is enabled and you cannot disable TLS/SSL for connecting to flexible server.
+Azure Database for MySQL Flexible Server supports encrypted connections using Transport Layer Security (TLS 1.2) by default, and all incoming connections with TLS 1.0 and TLS 1.1 are denied by default. You can change the enforcement of encrypted connections and the TLS version configuration on your flexible server, as discussed in this article.
-## Download the public SSL certificate
-To use with your appliations,please download the [public SSL certificate](https://dl.cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem).
+Following are the different configurations of SSL and TLS settings you can have for your flexible server:
-Save the certificate file to your preferred location. For example, this tutorial uses `c:\ssl` or `\var\www\html\bin` on your local environment or the client environment where your application is hosted. This will allow applications to connect securely to the database over SSL.
+| Scenario | Server parameter settings | Description |
+||--||
+|Disable SSL (encrypted connections) | require_secure_transport = OFF |If your legacy application doesn't support encrypted connections to the MySQL server, you can disable enforcement of encrypted connections to your flexible server by setting require_secure_transport=OFF.|
+|Enforce SSL with TLS version < 1.2 | require_secure_transport = ON and tls_version = TLSV1 or TLSV1.1| If your legacy application supports encrypted connections but requires TLS version < 1.2, you can enable encrypted connections but configure your flexible server to allow connections with the TLS version (v1.0 or v1.1) supported by your application.|
+|Enforce SSL with TLS version = 1.2 (default configuration)|require_secure_transport = ON and tls_version = TLSV1.2| This is the recommended and default configuration for flexible server.|
+|Enforce SSL with TLS version = 1.3 (supported with MySQL v8.0 and above)| require_secure_transport = ON and tls_version = TLSV1.3| This is useful and recommended for new application development.|
-### Connect using mysql command-line client with TLS/SSL
+> [!Note]
+> Changes to the SSL cipher on flexible server are not supported. FIPS cipher suites are enforced by default when tls_version is set to TLS version 1.2. For TLS versions other than version 1.2, the SSL cipher is set to the default settings that come with the MySQL community installation.
+
+In this article, you will learn how to:
+* Configure your flexible server
+ * With SSL disabled
+ * With SSL enforced with TLS version < 1.2
+* Connect to your flexible server using mysql command-line
+ * With encrypted connections disabled
+ * With encrypted connections enabled
+* Verify encryption status for your connection
+* Connect to your flexible server with encrypted connections using various application frameworks
+
+## Disable SSL on your flexible server
+If your client application doesn't support encrypted connections, you will need to disable enforcement of encrypted connections on your flexible server. To do so, set the `require_secure_transport` server parameter to OFF, as shown in the screenshot, and save the server parameter configuration for it to take effect. `require_secure_transport` is a **dynamic server parameter**, so the change takes effect immediately and doesn't require a server restart.
+
+> :::image type="content" source="./media/how-to-connect-tls-ssl/disable-ssl.png" alt-text="Screenshot showing how to disable SSL with Azure Database for MySQL flexible server.":::
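+
+If you prefer the Azure CLI to the portal, a minimal sketch of the same change is shown below; the resource group and server names are placeholders:
+
+```azurecli-interactive
+az mysql flexible-server parameter set --resource-group <resource-group> --server-name <server-name> --name require_secure_transport --value OFF
+```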
+
+### Connect using mysql command-line client with SSL disabled
+
+The following example shows how to connect to your server using the mysql command-line interface. Use the `--ssl-mode=DISABLED` connection string setting to disable the TLS/SSL connection from the mysql client. Replace values with your actual server name and password.
+
+```bash
+ mysql.exe -h mydemoserver.mysql.database.azure.com -u myadmin -p --ssl-mode=DISABLED
+```
+It is important to note that setting require_secure_transport to OFF doesn't mean encrypted connections are not supported on the server side. If you set require_secure_transport to OFF on flexible server and the client connects with an encrypted connection, the connection is still accepted. The following connection from the mysql client to a flexible server configured with require_secure_transport=OFF also works, as shown below.
+
+```bash
+ mysql.exe -h mydemoserver.mysql.database.azure.com -u myadmin -p --ssl-mode=REQUIRED
+```
+```output
+Welcome to the MySQL monitor. Commands end with ; or \g.
+Your MySQL connection id is 17
+Server version: 5.7.29-log MySQL Community Server (GPL)
+
+Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.
+
+Oracle is a registered trademark of Oracle Corporation and/or its
+affiliates. Other names may be trademarks of their respective
+owners.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+mysql> show global variables like '%require_secure_transport%';
++--------------------------+-------+
+| Variable_name            | Value |
++--------------------------+-------+
+| require_secure_transport | OFF   |
++--------------------------+-------+
+1 row in set (0.02 sec)
+```
+
+In summary, the require_secure_transport=OFF setting relaxes the enforcement of encrypted connections on flexible server and allows unencrypted connections from the client in addition to encrypted connections.
+
+## Enforce SSL with TLS version < 1.2
+
+If your application supports connections to the MySQL server with SSL but only supports TLS version < 1.2, you will need to set the TLS versions server parameter on your flexible server. To set the TLS versions that you want your flexible server to support, set the `tls_version` server parameter to TLSV1, TLSV1.1, or both TLSV1 and TLSV1.1, as shown in the screenshot, and save the server parameter configuration for it to take effect. `tls_version` is a **static server parameter**, so a server restart is required for the change to take effect.
+
+> :::image type="content" source="./media/how-to-connect-tls-ssl/tls-version.png" alt-text="Screenshot showing how to set the TLS version for an Azure Database for MySQL flexible server.":::
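+
+If you prefer the Azure CLI to the portal, a minimal sketch is shown below. It sets `tls_version` to TLSV1.1 as an example and then restarts the server because the parameter is static; the resource group and server names are placeholders:
+
+```azurecli-interactive
+az mysql flexible-server parameter set --resource-group <resource-group> --server-name <server-name> --name tls_version --value TLSV1.1
+az mysql flexible-server restart --resource-group <resource-group> --name <server-name>
+```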
+
+## Connect using mysql command-line client with TLS/SSL
+
+### Download the public SSL certificate
+To use encrypted connections with your client applications, you will need to download the [public SSL certificate](https://dl.cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem), which is also available on the Azure portal **Networking** blade, as shown in the screenshot below.
+
+> :::image type="content" source="./media/how-to-connect-tls-ssl/download-ssl.png" alt-text="Screenshot showing how to download public SSL certificate from Azure portal.":::
+
+Save the certificate file to your preferred location. For example, this tutorial uses `c:\ssl` or `/var/www/html/bin` on your local environment or the client environment where your application is hosted. This will allow applications to connect securely to the database over SSL.
If you created your flexible server with *Private access (VNet Integration)*, you will need to connect to your server from a resource within the same VNet as your server. You can create a virtual machine and add it to the VNet created with your flexible server.
You can choose either [mysql.exe](https://dev.mysql.com/doc/refman/8.0/en/mysql.
The following example shows how to connect to your server using the mysql command-line interface. Use the `--ssl-mode=REQUIRED` connection string setting to enforce TLS/SSL certificate verification. Pass the local certificate file path to the `--ssl-ca` parameter. Replace values with your actual server name and password. ```bash
- mysql.exe -h mydemoserver.mysql.database.azure.com -u myadmin -p --ssl-mode=REQUIRED --ssl-ca=c:\ssl\DigiCertGlobalRootCA.crt.pem
+sudo apt-get install mysql-client
+wget --no-check-certificate https://dl.cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem
+mysql -h mydemoserver.mysql.database.azure.com -u mydemouser -p --ssl-mode=REQUIRED --ssl-ca=DigiCertGlobalRootCA.crt.pem
``` > [!Note] > Confirm that the value passed to `--ssl-ca` matches the file path for the certificate you saved.
-### Verify the TLS/SSL connection
+If you try to connect to your server with an unencrypted connection, you will see an error stating that connections using insecure transport are prohibited, similar to the one below:
+
+```output
+ERROR 3159 (HY000): Connections using insecure transport are prohibited while --require_secure_transport=ON.
+```
+
+## Verify the TLS/SSL connection
Execute the mysql **status** command to verify that you have connected to your MySQL server using TLS/SSL: ```dos mysql> status ```
-Confirm the connection is encrypted by reviewing the output, which should show: **SSL: Cipher in use is AES256-SHA**. This cipher suite shows an example and based on the client, you can see a different cipher suite.
-
-## Ensure your application or framework supports TLS connections
+Confirm the connection is encrypted by reviewing the output, which should show an **SSL: Cipher in use** line listing the negotiated cipher suite. The cipher suite shown varies based on the client.
-Some application frameworks that use MySQL for their database services do not enable TLS by default during installation. Your MySQL server enforces TLS connections but if the application is not configured for TLS, the application may fail to connect to your database server. Consult your application's documentation to learn how to enable TLS connections.
+## Connect to your flexible server with encrypted connections using various application frameworks
-## Sample code
Connection strings that are pre-defined in the "Connection Strings" page available for your server in the Azure portal include the required parameters for common languages to connect to your database server using TLS/SSL. The TLS/SSL parameter varies based on the connector. For example, "useSSL=true", "sslmode=required", or "ssl_verify_cert=true" and other variations. To establish an encrypted connection to your flexible server over TLS/SSL from your application, refer to the following code samples:
mysql How To Troubleshoot Common Connection Issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/flexible-server/how-to-troubleshoot-common-connection-issues.md
In this article, we will discuss how you can troubleshoot some of the common err
If the application persistently fails to connect to Azure Database for MySQL Flexible Server, it usually indicates an issue with one of the following:
-* Encrypted connection using TLS/SSL: Flexible Server only supports encrypted connections using Transport Layer Security (TLS 1.2) and all **incoming connections with TLS 1.0 and TLS 1.1 will be denied**. You cannot disable or change the TLS version. Learn more about [Encrypted connectivity using Transport Layer Security (TLS 1.2) in Azure Database for MySQL - Flexible Server](./how-to-connect-tls-ssl.md).
+* Encrypted connection using TLS/SSL: Flexible Server supports encrypted connections using Transport Layer Security (TLS 1.2) and all **incoming connections with TLS 1.0 and TLS 1.1 will be denied by default**. You can disable enforcement of encrypted connections or change the TLS version. Learn more about [Encrypted connectivity using Transport Layer Security (TLS 1.2) in Azure Database for MySQL - Flexible Server](./how-to-connect-tls-ssl.md).
- Flexible Server in *Private access (VNet Integration)*: Make sure you are connecting from within the same virtual network as the flexible server. Refer to [virtual network in Azure Database for MySQL Flexible Server]<!--(./concepts-networking-virtual-network.md)--> - Flexible Server with *Public access (allowed IP addresses)*, make sure that the firewall is configured to allow connections from your client. Refer to [Create and manage flexible server firewall rules using the Azure portal](./how-to-manage-firewall-portal.md). * Client firewall configuration: The firewall on your client must allow connections to your database server. IP addresses and ports of the server that you connect to must be allowed as well as application names such as MySQL in some firewalls.
mysql Quickstart Create Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/flexible-server/quickstart-create-server-cli.md
az login
Select the specific subscription under your account by using the [az account set](/cli/azure/account#az-account-set) command. Make a note of the **id** value from the **az login** output to use as the value for the **subscription** argument in the command. If you have multiple subscriptions, choose the appropriate subscription in which the resource should be billed. To get all your subscriptions, use [az account list](/cli/azure/account#az-account-list).
-```azurecli
+```azurecli-interactive
az account set --subscription <subscription id> ```
az group create --name myresourcegroup --location eastus2
Create a flexible server with the `az mysql flexible-server create` command. A server can contain multiple databases. The following command creates a server using service defaults and values from your Azure CLI's [local context](/cli/azure/local-context):
-```azurecli
+```azurecli-interactive
az mysql flexible-server create ```
Make a note of your password. If you forget, you would have to reset your passwo
If you'd like to change any defaults, please refer to the Azure CLI [reference documentation](/cli/azure/mysql/flexible-server) for the complete list of configurable CLI parameters.
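For example, a sketch of a fully specified create command, with placeholder values and parameters that should be checked against the reference documentation for your CLI version, might look like this:

```azurecli-interactive
az mysql flexible-server create \
  --resource-group myresourcegroup \
  --name mydemoserver \
  --location eastus2 \
  --admin-user mydemouser \
  --admin-password <server_admin_password> \
  --tier Burstable \
  --sku-name Standard_B1ms \
  --storage-size 32 \
  --version 5.7
```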
+## Create a database
+Run the following command to create a database named **newdatabase** if you have not already created one.
+
+```azurecli-interactive
+az mysql flexible-server db create -d newdatabase
+```
+ > [!NOTE] > Connections to Azure Database for MySQL communicate over port 3306. If you try to connect from within a corporate network, outbound traffic over port 3306 might not be allowed. If this is the case, you can't connect to your server unless your IT department opens port 3306.
The result is in JSON format. Make a note of the **fullyQualifiedDomainName** an
} ```
+## Connect and test the connection using Azure CLI
+
+Azure Database for MySQL Flexible Server enables you to connect to your MySQL server with the Azure CLI ```az mysql flexible-server connect``` command. This command allows you to test connectivity to your database server, create a quick starter database, and run queries directly against your server without having to install mysql.exe or MySQL Workbench. You can also run the command in interactive mode to run multiple queries.
+
+Run the following script to test and validate the connection to the database from your development environment.
+
+```azurecli-interactive
+az mysql flexible-server connect -n <servername> -u <username> -p <password> -d <databasename>
+```
+**Example:**
+```azurecli-interactive
+az mysql flexible-server connect -n mysqldemoserver1 -u dbuser -p "dbpassword" -d newdatabase
+```
+You should see the following output for a successful connection:
+
+```output
+Command group 'mysql flexible-server' is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus
+Connecting to newdatabase database.
+Successfully connected to mysqldemoserver1.
+```
+If the connection fails, try these solutions:
+- Check that port 3306 is open on your client machine.
+- Verify that your server administrator user name and password are correct.
+- Verify that you have configured a firewall rule for your client machine.
+- If you configured your server with private access in a virtual network, make sure your client machine is in the same virtual network.
+
+Run the following command to execute a single query by using the ```--querytext``` argument (short form ```-q```).
+
+```azurecli-interactive
+az mysql flexible-server connect -n <server-name> -u <username> -p "<password>" -d <database-name> --querytext "<query text>"
+```
+
+**Example:**
+```azurecli-interactive
+az mysql flexible-server connect -n mysqldemoserver1 -u dbuser -p "dbpassword" -d newdatabase -q "select * from table1;" --output table
+```
+To learn more about using the ```az mysql flexible-server connect``` command, refer to the [connect and query](connect-azure-cli.md) documentation.
+ ## Connect using mysql command-line client
-As the flexible server was created with *Private access (VNet Integration)*, you will need to connect to your server from a resource within the same VNet as your server. You can create a virtual machine and add it to the virtual network created.
+If you created your flexible server by using private access (VNet Integration), you'll need to connect to your server from a resource within the same virtual network as your server. You can create a virtual machine and add it to the virtual network created with your flexible server. To learn more, refer to the [private access documentation](how-to-manage-virtual-network-portal.md).
+
+If you created your flexible server by using public access (allowed IP addresses), you can add your local IP address to the list of firewall rules on your server. Refer to the [create or manage firewall rules documentation](how-to-manage-firewall-portal.md) for step-by-step guidance.
-Once your VM is created, you can SSH into the machine and install the popular client tool, **[mysql.exe](https://dev.mysql.com/downloads/)** command-line tool.
+You can use either [mysql.exe](https://dev.mysql.com/doc/refman/8.0/en/mysql.html) or [MySQL Workbench](./connect-workbench.md) to connect to the server from your local environment. Azure Database for MySQL Flexible Server supports connecting your client applications to the MySQL service using Transport Layer Security (TLS), previously known as Secure Sockets Layer (SSL). TLS is an industry-standard protocol that ensures encrypted network connections between your database server and client applications, allowing you to adhere to compliance requirements. To connect to your MySQL flexible server, you will need to download the [public SSL certificate](https://dl.cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem) for certificate authority verification. To learn more about connecting with encrypted connections or disabling SSL, refer to the [Connect to Azure Database for MySQL - Flexible Server with encrypted connections](how-to-connect-tls-ssl.md) documentation.
-With mysql.exe, connect using the below command. Replace values with your actual server name and password.
+The following example shows how to connect to your flexible server by using the mysql command-line client. First, install the mysql client if it is not already installed, and download the DigiCertGlobalRootCA certificate required for SSL connections. Use the --ssl-mode=REQUIRED connection string setting to enforce TLS/SSL certificate verification, and pass the local certificate file path to the --ssl-ca parameter. Replace the values with your actual server name and password.
```bash
- mysql -h mydemoserver.mysql.database.azure.com -u mydemouser -p
+sudo apt-get install mysql-client
+wget --no-check-certificate https://dl.cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem
+mysql -h mydemoserver.mysql.database.azure.com -u mydemouser -p --ssl-mode=REQUIRED --ssl-ca=DigiCertGlobalRootCA.crt.pem
```
+If you have provisioned your flexible server using **public access**, you can also use [Azure Cloud Shell](https://shell.azure.com/bash) to connect to your flexible server by using the pre-installed mysql client, as shown below:
+
+In order to use Azure Cloud Shell to connect to your flexible server, you will need to allow networking access from Azure Cloud Shell to your flexible server. To achieve this, go to the **Networking** blade in the Azure portal for your MySQL flexible server, select the check box under the **Firewall** section that says "Allow public access from any Azure service within Azure to this server", as shown in the screenshot below, and then click **Save** to persist the setting.
+
+ > :::image type="content" source="./media/quickstart-create-server-portal/allow-access-to-any-azure-service.png" alt-text="Screenshot that shows how to allow Azure Cloud Shell access to MySQL flexible server for public access network configuration.":::
+
+
+> [!NOTE]
+> Checking the **Allow public access from any Azure service within Azure to this server** should be used for development or testing only. It configures the firewall to allow connections from IP addresses allocated to any Azure service or asset, including connections from the subscriptions of other customers.
+
+Click **Try it** to launch Azure Cloud Shell, and then use the following commands to connect to your flexible server. Use your server name, user name, and password in the command.
+
+```azurecli-interactive
+wget --no-check-certificate https://dl.cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem
+mysql -h mydemoserver.mysql.database.azure.com -u mydemouser -p --ssl=true --ssl-ca=DigiCertGlobalRootCA.crt.pem
+```
+> [!IMPORTANT]
+> When connecting to your flexible server from Azure Cloud Shell, use the --ssl=true parameter instead of --ssl-mode=REQUIRED.
+> The reason is that Azure Cloud Shell comes with the mysql client from the MariaDB distribution pre-installed, which requires the --ssl parameter, while the mysql client from Oracle's distribution requires the --ssl-mode parameter.
+
+If you see the following error message when connecting to your flexible server by using the earlier command, you either missed selecting the "Allow public access from any Azure service within Azure to this server" firewall option mentioned earlier or the setting wasn't saved. Set the firewall option again and retry the connection.
+
+```output
+ERROR 2002 (HY000): Can't connect to MySQL server on <servername> (115)
+```
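If you prefer to script that firewall setting rather than use the portal, a hedged Azure CLI sketch (the 0.0.0.0 start and end addresses are the documented convention for allowing Azure services; resource group, server, and rule names are placeholders) is:

```azurecli-interactive
# Sketch only: allow connections from Azure services, including Azure Cloud Shell.
az mysql flexible-server firewall-rule create \
  --resource-group myresourcegroup \
  --name mydemoserver \
  --rule-name AllowAllAzureServices \
  --start-ip-address 0.0.0.0 \
  --end-ip-address 0.0.0.0
```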
+ ## Clean up resources If you don't need these resources for another quickstart/tutorial, you can delete them by running the following command:
az mysql flexible-server delete --resource-group myresourcegroup --name mydemose
## Next steps
-> [!div class="nextstepaction"]
->[Build a PHP (Laravel) web app with MySQL](tutorial-php-database-app.md)
+>[!div class="nextstepaction"]
+> [Connect and query using Azure CLI](connect-azure-cli.md)
+> [Connect to Azure Database for MySQL - Flexible Server with encrypted connections](how-to-connect-tls-ssl.md)
+> [Build a PHP (Laravel) web app with MySQL](tutorial-php-database-app.md)
mysql Quickstart Create Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/flexible-server/quickstart-create-server-portal.md
Complete these steps to create a flexible server:
Subscription|Your subscription name|The Azure subscription that you want to use for your server. If you have multiple subscriptions, choose the subscription in which you want to be billed for the resource.| Resource group|**myresourcegroup**| A new resource group name or an existing one from your subscription.| Server name |**mydemoserver**|A unique name that identifies your flexible server. The domain name `mysql.database.azure.com` is appended to the server name you provide. The server name can contain only lowercase letters, numbers, and the hyphen (-) character. It must contain between 3 and 63 characters.|
+ Region|The region closest to your users| The location that's closest to your users.|
+ Workload type| Development | For production workloads, you can choose Small/Medium-size or Large-size depending on your [max_connections](concepts-server-parameters.md#max_connections) requirements.|
+ Availability zone| No preference | If your application, hosted in Azure VMs, virtual machine scale sets, or an AKS instance, is provisioned in a specific availability zone, you can place your flexible server in the same availability zone to collocate the application and database and improve performance by cutting down network latency across zones.|
+ High Availability| Default | For production servers, enabling zone-redundant high availability (HA) is highly recommended for business continuity and protection against zone failures.|
+ MySQL version|**5.7**| A MySQL major version.|
Admin username |**mydemouser**| Your own sign-in account to use when you connect to the server. The admin user name can't be **azure_superuser**, **admin**, **administrator**, **root**, **guest**, or **public**.| Password |Your password| A new password for the server admin account. It must contain between 8 and 128 characters. It must also contain characters from three of the following categories: English uppercase letters, English lowercase letters, numbers (0 through 9), and non-alphanumeric characters (!, $, #, %, and so on).|
- Region|The region closest to your users| The location that's closest to your users.|
- Version|**5.7**| A MySQL major version.|
- Compute + storage | **Burstable**, **Standard_B1ms**, **10 GiB**, **7 days** | The compute, storage, and backup configurations for your new server. Select **Configure server**. **Burstable**, **Standard_B1ms**, **10 GiB**, and **7 days** are the default values for **Compute tier**, **Compute size**, **Storage size**, and backup **Retention period**. You can leave those values as is or adjust them. To save the compute and storage selection, select **Save** to continue with the configuration. The following screenshot shows the compute and storage options.|
+ Compute + storage | **Burstable**, **Standard_B1ms**, **10 GiB**, **100 IOPS**, **7 days** | The compute, storage, IOPS, and backup configurations for your new server. Select **Configure server**. **Burstable**, **Standard_B1ms**, **10 GiB**, **100 IOPS**, and **7 days** are the default values for **Compute tier**, **Compute size**, **Storage size**, **IOPS**, and backup **Retention period**. You can leave those values as is or adjust them. For faster data loads during migration, it is recommended to increase the IOPS to the maximum supported by the compute size and later scale it back to save cost. To save the compute and storage selection, select **Save** to continue with the configuration. The following screenshot shows the compute and storage options.|
> :::image type="content" source="./media/quickstart-create-server-portal/compute-storage.png" alt-text="Screenshot that shows compute and storage options.":::
If you created your flexible server by using private access (VNet Integration),
If you created your flexible server by using public access (allowed IP addresses), you can add your local IP address to the list of firewall rules on your server. Refer [create or manage firewall rules documentation](how-to-manage-firewall-portal.md) for step by step guidance.
-You can use either [mysql.exe](https://dev.mysql.com/doc/refman/8.0/en/mysql.html) or [MySQL Workbench](./connect-workbench.md) to connect to the server from your local environment.
+You can use either [mysql.exe](https://dev.mysql.com/doc/refman/8.0/en/mysql.html) or [MySQL Workbench](./connect-workbench.md) to connect to the server from your local environment. Azure Database for MySQL Flexible Server supports connecting your client applications to the MySQL service using Transport Layer Security (TLS), previously known as Secure Sockets Layer (SSL). TLS is an industry-standard protocol that ensures encrypted network connections between your database server and client applications, allowing you to adhere to compliance requirements. To connect to your MySQL flexible server, you will need to download the [public SSL certificate](https://dl.cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem) for certificate authority verification.
+
+The following example shows how to connect to your flexible server by using the mysql command-line client. First, install the mysql client if it is not already installed, and download the DigiCertGlobalRootCA certificate required for SSL connections. Use the --ssl-mode=REQUIRED connection string setting to enforce TLS/SSL certificate verification, and pass the local certificate file path to the --ssl-ca parameter. Replace the values with your actual server name and password.
```bash
+sudo apt-get install mysql-client
wget --no-check-certificate https://dl.cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem
-mysql -h mydemoserver.mysql.database.azure.com -u mydemouser -p --ssl=true --ssl-ca=DigiCertGlobalRootCA.crt.pem
+mysql -h mydemoserver.mysql.database.azure.com -u mydemouser -p --ssl-mode=REQUIRED --ssl-ca=DigiCertGlobalRootCA.crt.pem
``` If you have provisioned your flexible server using **public access**, you can also use [Azure Cloud Shell](https://shell.azure.com/bash) to connect to your flexible server using pre-installed mysql client as shown below:
-In order to use Azure Cloud Shell to connect to your flexible server, you will need to allow networking access from Azure Cloud Shell to your flexible server. To achieve this, you can go to **Networking** blade on Azure portal for your MySQL flexible server and check the box under **Firewall** section which says, "Allow public access from any Azure service within Azure to this server" and click Save to persist the setting.
+In order to use Azure Cloud Shell to connect to your flexible server, you will need to allow networking access from Azure Cloud Shell to your flexible server. To achieve this, go to the **Networking** blade in the Azure portal for your MySQL flexible server, select the check box under the **Firewall** section that says "Allow public access from any Azure service within Azure to this server", as shown in the screenshot below, and then click **Save** to persist the setting.
+
+ > :::image type="content" source="./media/quickstart-create-server-portal/allow-access-to-any-azure-service.png" alt-text="Screenshot that shows how to allow Azure Cloud Shell access to MySQL flexible server for public access network configuration.":::
> [!NOTE] > Checking the **Allow public access from any Azure service within Azure to this server** should be used for development or testing only. It configures the firewall to allow connections from IP addresses allocated to any Azure service or asset, including connections from the subscriptions of other customers.
Click on **Try it** to launch the Azure Cloud Shell and using the following comm
wget --no-check-certificate https://dl.cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem mysql -h mydemoserver.mysql.database.azure.com -u mydemouser -p --ssl=true --ssl-ca=DigiCertGlobalRootCA.crt.pem ```
+> [!IMPORTANT]
+> When connecting to your flexible server from Azure Cloud Shell, use the --ssl=true parameter instead of --ssl-mode=REQUIRED.
+> The reason is that Azure Cloud Shell comes with the mysql client from the MariaDB distribution pre-installed, which requires the --ssl parameter, while the mysql client from Oracle's distribution requires the --ssl-mode parameter.
If you see the following error message when connecting to your flexible server by using the earlier command, you either missed selecting the "Allow public access from any Azure service within Azure to this server" firewall option mentioned earlier or the setting wasn't saved. Set the firewall option again and retry the connection.
mysql Sample Scripts Java Connection Pooling https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/sample-scripts-java-connection-pooling.md
Title: Java samples to illustrate connection pooling description: This article lists java samples to illustrate connection pooling.-++ - Last updated 02/28/2018
network-watcher Connection Monitor Create Using Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/network-watcher/connection-monitor-create-using-portal.md
description: This article describes how to create a monitor in Connection Monitor by using the Azure portal. documentationcenter: na-+ ms.devlang: na
In the Azure portal, to create a test group in a connection monitor, you specify
* To choose on-premises agents, select the **Non-Azure endpoints** tab. By default, agents are grouped into workspaces by region. All these workspaces have the Network Performance Monitor configured.
- If you need to add Network Performance Monitor to your workspace, get it from [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/solarwinds.solarwinds-orion-network-performance-monitor?tab=Overview). For information about how to add Network Performance Monitor, see [Monitoring solutions in Azure Monitor](../azure-monitor/insights/solutions.md).
+ If you need to add Network Performance Monitor to your workspace, get it from [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/solarwinds.solarwinds-orion-network-performance-monitor?tab=Overview). For information about how to add Network Performance Monitor, see [Monitoring solutions in Azure Monitor](../azure-monitor/insights/solutions.md). For information about how to configure agents for on-premises machines, see [Agents for on-premises machines](connection-monitor-overview.md#agents-for-on-premises-machines).
Under **Create Connection Monitor**, on the **Basics** tab, the default region is selected. If you change the region, you can choose agents from workspaces in the new region. You can select one or more agents or subnets. In the **Subnet** view, you can select specific IPs for monitoring. If you add multiple subnets, a custom on-premises network named **OnPremises_Network_1** will be created. You can also change the **Group by** selector to group by agents.
In the Azure portal, to create a test group in a connection monitor, you specify
* To choose non-Azure agents as destinations, select the **Non-Azure endpoints** tab. By default, agents are grouped into workspaces by region. All these workspaces have Network Performance Monitor configured.
- If you need to add Network Performance Monitor to your workspace, get it from Azure Marketplace. For information about how to add Network Performance Monitor, see [Monitoring solutions in Azure Monitor](../azure-monitor/insights/solutions.md).
+ If you need to add Network Performance Monitor to your workspace, get it from Azure Marketplace. For information about how to add Network Performance Monitor, see [Monitoring solutions in Azure Monitor](../azure-monitor/insights/solutions.md). For information about how to configure agents for on-premises machines, see [Agents for on-premises machines](connection-monitor-overview.md#agents-for-on-premises-machines).
Under **Create Connection Monitor**, on the **Basics** tab, the default region is selected. If you change the region, you can choose agents from workspaces in the new region. You can select one or more agents or subnets. In the **Subnet** view, you can select specific IPs for monitoring. If you add multiple subnets, a custom on-premises network named **OnPremises_Network_1** will be created.
network-watcher Connection Monitor Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/network-watcher/connection-monitor-overview.md
For networks whose sources are Azure VMs, the following issues can be detected:
* The tunnel between two gateways is disconnected or missing. * The second gateway wasn't found by the tunnel. * No peering info was found.
+> [!NOTE]
+> If there are two connected gateways and one of them is not in the same region as the source endpoint, Connection Monitor identifies it as 'no route learned' for the topology view. Connectivity is not impacted. This is a known issue, and a fix is in progress.
* Route was missing in Microsoft Edge. * Traffic stopped because of system routes or UDR. * BGP isn't enabled on the gateway connection.
networking Disaster Recovery Dns Traffic Manager https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/networking/disaster-recovery-dns-traffic-manager.md
ms.devlang: na
na Previously updated : 06/08/2018 Last updated : 04/06/2021 # Disaster recovery using Azure DNS and Traffic Manager
-Disaster recovery focuses on recovering from a severe loss of application functionality. In order to choose a disaster recovery solution, business and technology owners must first determine the level of functionality that is required during a disaster, such as - unavailable, partially available via reduced functionality, or delayed availability, or fully available.
+Disaster recovery focuses on recovering from a severe loss of application functionality. In order to choose a disaster recovery solution, business and technology owners must first determine the level of functionality that is required during a disaster, such as unavailable, partially available via reduced functionality, delayed availability, or fully available.
Most enterprise customers are choosing a multi-region architecture for resiliency against an application or infrastructure level failover. Customers can choose several approaches in the quest to achieve failover and high availability via redundant architecture. Here are some of the popular approaches: -- **Active-passive with cold standby**: In this failover solution, the VMs and other appliances that are running in the standby region are not active until there is a need for failover. However, the production environment is replicated in the form of backups, VM images, or Resource Manager templates, to a different region. This failover mechanism is cost-effective but takes a longer time to undertake a complete failover.
+- **Active-passive with cold standby**: In this failover solution, the VMs and other appliances that are running in the standby region aren't active until there's a need for failover. However, the production environment gets replicated in the form of backups, VM images, or Resource Manager templates, to a different region. This failover mechanism is cost-effective but takes a longer time to undertake a complete failover.
![Active/Passive with cold standby](./media/disaster-recovery-dns-traffic-manager/active-passive-with-cold-standby.png) *Figure - Active/Passive with cold standby disaster recovery configuration* -- **Active/Passive with pilot light**: In this failover solution, the standby environment is set up with a minimal configuration. The setup has only the necessary services running to support only a minimal and critical set of applications. In its native form, this scenario can only execute minimal functionality but can scale up and spawn additional services to take bulk of the production load if a failover occurs.
+- **Active/Passive with pilot light**: In this failover solution, the standby environment is set up with a minimal configuration. The setup has only the necessary services running to support only a minimal and critical set of applications. In its native form, this scenario can only execute minimal functionality but can scale up and spawn more services to take bulk of the production load if a failover occurs.
![Active/Passive with pilot light](./media/disaster-recovery-dns-traffic-manager/active-passive-with-pilot-light.png) *Figure: Active/Passive with pilot light disaster recovery configuration* -- **Active/Passive with warm standby**: In this failover solution, the standby region is pre-warmed and is ready to take the base load, auto scaling is turned on, and all the instances are up and running. This solution is not scaled to take the full production load but is functional, and all services are up and running. This solution is an augmented version of the pilot light approach.
+- **Active/Passive with warm standby**: In this failover solution, the standby region gets pre-warmed and is ready to take the base load, auto scaling gets turned on, and all the instances are up and running. This solution isn't scaled to take the full production load but is functional, and all services are up and running. This solution is an augmented version of the pilot light approach.
![Active/Passive with warm standby](./media/disaster-recovery-dns-traffic-manager/active-passive-with-warm-standby.png)
There are two technical aspects towards setting up your disaster recovery archit
This article is limited to approaches via Network and Web traffic redirection. For instructions to set up Azure Site Recovery, see [Azure Site Recovery Documentation](../site-recovery/index.yml). DNS is one of the most efficient mechanisms to divert network traffic because DNS is often global and external to the data center and is insulated from any regional or availability zone (AZ) level failures. One can use a DNS-based failover mechanism and in Azure, two DNS services can accomplish the same in some fashion - Azure DNS (authoritative DNS) and Azure Traffic Manager (DNS-based smart traffic routing).
-It is important to understand few concepts in DNS that are extensively used to discuss the solutions provided in this article:
+It's important to understand a few concepts in DNS that are used extensively to discuss the solutions provided in this article:
- **DNS A Record** - A Records are pointers that point a domain to an IPv4 address. - **CNAME or Canonical name** - This record type is used to point to another DNS record. CNAME doesn't respond with an IP address but rather the pointer to the record that contains the IP address. - **Weighted Routing** - one can choose to associate a weight to service endpoints and then distribute the traffic based on the assigned weights. This routing method is one of the four traffic routing mechanisms available within Traffic Manager. For more information, see [Weighted routing method](../traffic-manager/traffic-manager-routing-methods.md#weighted). - **Priority Routing** - Priority routing is based on health checks of endpoints. By default, Azure Traffic manager sends all traffic to the highest priority endpoint, and upon a failure or disaster, Traffic Manager routes the traffic to the secondary endpoint. For more information, see [Priority routing method](../traffic-manager/traffic-manager-routing-methods.md#priority-traffic-routing-method). ## Manual failover using Azure DNS
-The Azure DNS manual failover solution for disaster recovery uses the standard DNS mechanism to failover to the backup site. The manual option via Azure DNS works best when used in conjunction with the cold standby or the pilot light approach.
+The Azure DNS manual failover solution for disaster recovery uses the standard DNS mechanism to fail over to the backup site. The manual option via Azure DNS works best when used in conjunction with the cold standby or the pilot light approach.
![Manual failover using Azure DNS](./media/disaster-recovery-dns-traffic-manager/manual-failover-using-dns.png)
Within this zone create three records (for example - www\.contoso.com, prod.cont
*Figure - Create DNS zone records in Azure* In this scenario, the site www\.contoso.com has a TTL of 30 mins, which is well below the stated RTO, and is pointing to the production site prod.contoso.com. This configuration is in place during normal business operations. The TTL of prod.contoso.com and dr.contoso.com has been set to 300 seconds, or 5 mins.
-You can use an Azure monitoring service such as Azure Monitor or Azure App Insights, or, any partner monitoring solutions such as Dynatrace, You can even use home grown solutions that can monitor or detect application or virtual infrastructure level failures.
+You can use an Azure monitoring service such as Azure Monitor or Azure App Insights, or any partner monitoring solution such as Dynatrace. You can even use home-grown solutions that can monitor or detect application or virtual infrastructure level failures.
### Step 3: Update the CNAME record
You can also run the following Azure CLI command to change the CNAME value:
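The exact command isn't reproduced in this excerpt; a hedged sketch, assuming the contoso.com zone is hosted in Azure DNS in resource group myresourcegroup and the www record should now point at the disaster recovery site, is:

```azurecli
az network dns record-set cname set-record \
  --resource-group myresourcegroup \
  --zone-name contoso.com \
  --record-set-name www \
  --cname dr.contoso.com
```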
This step can be executed manually or via automation. It can be done manually via the console or by the Azure CLI. The Azure SDK and API can be used to automate the CNAME update so that no manual intervention is required. Automation can be built via Azure functions or within a third-party monitoring application or even from on- premises. ### How manual failover works using Azure DNS
-Since the DNS server is outside the failover or disaster zone, it is insulated against any downtime. This enables user to architect a simple failover scenario that is cost effective and will work all the time assuming that the operator has network connectivity during disaster and can make the flip. If the solution is scripted, then one must ensure that the server or service running the script should be insulated against the problem affecting the production environment. Also, keep in mind the low TTL that was set against the zone so that no resolver around the world keeps the endpoint cached for long and customers can access the site within the RTO. For a cold standby and pilot light, since some prewarming and other administrative activity may be required ΓÇô one should also give enough time before making the flip.
+Since the DNS server is outside the failover or disaster zone, it's insulated against any downtime. This enables the user to architect a simple failover scenario that is cost-effective and will work all the time, assuming that the operator has network connectivity during a disaster and can make the flip. If the solution is scripted, then one must ensure that the server or service running the script is insulated against the problem affecting the production environment. Also, keep in mind the low TTL that was set against the zone so that no resolver around the world keeps the endpoint cached for long and customers can access the site within the RTO. For a cold standby and pilot light, since some prewarming and other administrative activity may be required, one should also give enough time before making the flip.
## Automatic failover using Azure Traffic Manager When you have complex architectures and multiple sets of resources capable of performing the same function, you can configure Azure Traffic Manager (based on DNS) to check the health of your resources and route the traffic from the non-healthy resource to the healthy resource.
In the following example, both the primary region and the secondary region have
*Figure - Automatic failover using Azure Traffic Manager* However, only the primary region is actively handling network requests from the users. The secondary region becomes active only when the primary region experiences a service disruption. In that case, all new network requests route to the secondary region. Since the backup of the database is near instantaneous, both the load balancers have IPs that can be health checked, and the instances are always up and running, this topology provides an option for going in for a low RTO and failover without any manual intervention. The secondary failover region must be ready to go-live immediately after failure of the primary region.
-This scenario is ideal for the use of Azure Traffic Manager that has inbuilt probes for various types of health checks including http / https and TCP. Azure Traffic manager also has a rule engine that can be configured to failover when a failure occurs as described below. LetΓÇÖs consider the following solution using Traffic
+This scenario is ideal for the use of Azure Traffic Manager, which has inbuilt probes for various types of health checks, including HTTP/HTTPS and TCP. Azure Traffic Manager also has a rule engine that can be configured to fail over when a failure occurs, as described below. Let's consider the following solution using Traffic
- Customer has the Region #1 endpoint known as prod.contoso.com with a static IP as 100.168.124.44 and a Region #2 endpoint known as dr.contoso.com with a static IP as 100.168.124.43. - Each of these environments is fronted via a public facing property like a load balancer. The load balancer can be configured to have a DNS-based endpoint or a fully qualified domain name (FQDN) as shown above.-- All the instances in Region 2 are in near real-time replication with Region 1. Furthermore, the machine images are up-to-date, and all software/configuration data is patched and are in line with Region 1.
+- All the instances in Region 2 are in near real-time replication with Region 1. Furthermore, the machine images are up to date, and all software/configuration data is patched and are in line with Region 1.
- Autoscaling is preconfigured in advance. The steps taken to configure the failover with Azure Traffic Manager are as follows:
If you have a pre-existing resource group that you want to associate with, then
### Step 2: Create endpoints within the Traffic Manager profile
-In this step, you create endpoints that point to the production and disaster recovery sites. Here, choose the **Type** as an external endpoint, but if the resource is hosted in Azure, then you can choose **Azure endpoint** as well. If you choose **Azure endpoint**, then select a **Target resource** that is either an **App Service** or a **Public IP** that is allocated by Azure. The priority is set as **1** since it is the primary service for Region 1.
+In this step, you create endpoints that point to the production and disaster recovery sites. Here, choose the **Type** as an external endpoint, but if the resource is hosted in Azure, then you can choose **Azure endpoint** as well. If you choose **Azure endpoint**, then select a **Target resource** that is either an **App Service** or a **Public IP** that is allocated by Azure. The priority is set as **1** since it's the primary service for Region 1.
Similarly, create the disaster recovery endpoint within Traffic Manager as well. ![Create disaster recovery endpoints](./media/disaster-recovery-dns-traffic-manager/create-disaster-recovery-endpoint.png)
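If you prefer to script Steps 1 and 2 instead of using the portal, a hedged Azure CLI sketch (profile name, resource group, and the unique DNS prefix are placeholder values) that creates a priority-routed profile and both external endpoints is:

```azurecli
# Sketch only: priority-routed profile with the production and disaster recovery endpoints.
az network traffic-manager profile create \
  --resource-group myresourcegroup \
  --name contoso-failover \
  --routing-method Priority \
  --unique-dns-name contoso-failover-demo \
  --ttl 10 --protocol HTTPS --port 443 --path "/"

az network traffic-manager endpoint create \
  --resource-group myresourcegroup \
  --profile-name contoso-failover \
  --name prod-endpoint \
  --type externalEndpoints \
  --target prod.contoso.com \
  --priority 1

az network traffic-manager endpoint create \
  --resource-group myresourcegroup \
  --profile-name contoso-failover \
  --name dr-endpoint \
  --type externalEndpoints \
  --target dr.contoso.com \
  --priority 2
```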
Similarly, create the disaster recovery endpoint within Traffic Manager as well.
### Step 3: Set up health check and failover configuration
-In this step, you set the DNS TTL to 10 seconds, which is honored by most internet-facing recursive resolvers. This configuration means that no DNS resolver will cache the information for more than 10 seconds. For the endpoint monitor settings, the path is current set at / or root, but you can customize the endpoint settings to evaluate a path, for example, prod.contoso.com/index. The example below shows the **https** as the probing protocol. However, you can choose **http** or **tcp** as well. The choice of protocol depends upon the end application. The probing interval is set to 10 seconds, which enables fast probing, and the retry is set to 3. As a result, Traffic Manager will failover to the second endpoint if three consecutive intervals register a failure. The following formula defines the total time for an automated failover:
+In this step, you set the DNS TTL to 10 seconds, which is honored by most internet-facing recursive resolvers. This configuration means that no DNS resolver will cache the information for more than 10 seconds. For the endpoint monitor settings, the path is currently set at / or root, but you can customize the endpoint settings to evaluate a path, for example, prod.contoso.com/index. The example below shows **https** as the probing protocol. However, you can choose **http** or **tcp** as well. The choice of protocol depends upon the end application. The probing interval is set to 10 seconds, which enables fast probing, and the retry is set to 3. As a result, Traffic Manager will fail over to the second endpoint if three consecutive intervals register a failure. The following formula defines the total time for an automated failover:
Time for failover = TTL + Retry * Probing interval And in this case, the value is 10 + 3 * 10 = 40 seconds (Max). If the Retry is set to 1 and TTL is set to 10 secs, then the time for failover is 10 + 1 * 10 = 20 seconds. Set the Retry to a value greater than **1** to eliminate chances of failovers due to false positives or any minor network blips.
If the Retry is set to 1 and TTL is set to 10 secs, then the time for failover 1
### How automatic failover works using Traffic Manager
-During a disaster, the primary endpoint gets probed and the status changes to **degraded** and the disaster recovery site remains **Online**. By default, Traffic Manager sends all traffic to the primary (highest-priority) endpoint. If the primary endpoint appears degraded, Traffic Manager routes the traffic to the second endpoint as long as it remains healthy. One has the option to configure more endpoints within Traffic Manager that can serve as additional failover endpoints, or, as load balancers sharing the load between endpoints.
+During a disaster, the primary endpoint gets probed, its status changes to **degraded**, and the disaster recovery site remains **Online**. By default, Traffic Manager sends all traffic to the primary (highest-priority) endpoint. If the primary endpoint appears degraded, Traffic Manager routes the traffic to the second endpoint as long as it remains healthy. One can configure more endpoints within Traffic Manager that can serve as extra failover endpoints, or as load balancers sharing the load between endpoints.
## Next steps - Learn more about [Azure Traffic Manager](../traffic-manager/traffic-manager-overview.md).
networking Disaster Recovery Dns Traffic Manager https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/partner-twilio-python-how-to-use-voice-sms.md
Now that you have learned the basics of the Twilio service, follow these links t
[twilio_on_github]: https://github.com/twilio [twilio_support]: https://www.twilio.com/help/contact [twilio_quickstarts]: https://www.twilio.com/docs/quickstart
+[azure_ips]: https://docs.microsoft.com/azure/virtual-network/virtual-network-public-ip-address
+[azure_vm_setup]: https://docs.microsoft.com/azure/virtual-machines/linux/quick-create-portal
+[azure_nsg]: https://docs.microsoft.com/azure/virtual-network/manage-network-security-group
postgresql Concepts Planned Maintenance Notification https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/concepts-planned-maintenance-notification.md
Title: Planned maintenance notification - Azure Database for PostgreSQL - Single Server description: This article describes the Planned maintenance notification feature in Azure Database for PostgreSQL - Single Server--++ Last updated 10/21/2020
postgresql Connect Rust https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/connect-rust.md
+
+ Title: 'Quickstart: Connect with Rust - Azure Database for PostgreSQL - Single Server'
+description: This quickstart provides Rust code samples that you can use to connect and query data from Azure Database for PostgreSQL - Single Server.
+++
+ms.devlang: rust
+ Last updated : 03/26/2021++
+# Quickstart: Use Rust to connect and query data in Azure Database for PostgreSQL - Single Server
+
+In this article, you will learn how to use the [PostgreSQL driver for Rust](https://github.com/sfackler/rust-postgres) to interact with Azure Database for PostgreSQL by exploring CRUD (create, read, update, delete) operations implemented in the sample code. Finally, you can run the application locally to see it in action.
+
+## Prerequisites
+For this quickstart you need:
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
+- A recent version of [Rust](https://www.rust-lang.org/tools/install) installed.
+- An Azure Database for PostgreSQL single server - create one using [Azure portal](./quickstart-create-server-database-portal.md) <br/> or [Azure CLI](./quickstart-create-server-database-azure-cli.md).
+- Based on whether you are using public or private access, complete **ONE** of the actions below to enable connectivity.
+
+ |Action| Connectivity method|How-to guide|
+ |: |: |: |
+ | **Configure firewall rules** | Public | [Portal](./howto-manage-firewall-using-portal.md) <br/> [CLI](./howto-manage-firewall-using-cli.md)|
+ | **Configure Service Endpoint** | Public | [Portal](./howto-manage-vnet-using-portal.md) <br/> [CLI](./howto-manage-vnet-using-cli.md)|
+ | **Configure private link** | Private | [Portal](./howto-configure-privatelink-portal.md) <br/> [CLI](./howto-configure-privatelink-cli.md) |
+
+- [Git](https://git-scm.com/downloads) installed.
+
+## Get database connection information
+Connecting to an Azure Database for PostgreSQL database requires the fully qualified server name and login credentials. You can get this information from the Azure portal.
+
+1. In the [Azure portal](https://portal.azure.com/), search for and select your Azure Database for PostgreSQL server name.
+1. On the server's **Overview** page, copy the fully qualified **Server name** and the **Admin username**. The fully qualified **Server name** is always of the form *\<my-server-name>.postgres.database.azure.com*, and the **Admin username** is always of the form *\<my-admin-username>@\<my-server-name>*.
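If you would rather script this step, a hedged Azure CLI sketch (resource group and server names are placeholders) that prints the same two values is:

```azurecli
# Sketch only: show the fully qualified server name and the admin login.
# Remember that the connection user name takes the form <administratorLogin>@<server-name>.
az postgres server show \
  --resource-group myresourcegroup \
  --name mydemoserver \
  --query "{fqdn:fullyQualifiedDomainName, admin:administratorLogin}" \
  --output table
```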
+
+## Review the code (optional)
+
+If you're interested in learning how the code works, you can review the following snippets. Otherwise, feel free to skip ahead to [Run the application](#run-the-application).
+
+### Connect
+
+The `main` function starts by connecting to Azure Database for PostgreSQL, and it depends on the following environment variables for connectivity information: `POSTGRES_HOST`, `POSTGRES_USER`, `POSTGRES_PASSWORD`, and `POSTGRES_DBNAME`. By default, the PostgreSQL database service is configured to require `TLS` connections. You can choose to disable requiring `TLS` if your client application does not support `TLS` connectivity. For details, refer to [Configure TLS connectivity in Azure Database for PostgreSQL - Single Server](./concepts-ssl-connection-security.md).
+
+The sample application in this article uses TLS with the [postgres-openssl crate](https://crates.io/crates/postgres-openssl/). The [postgres::Client::connect](https://docs.rs/postgres/0.19.0/postgres/struct.Client.html#method.connect) function is used to initiate the connection, and the program exits if this fails.
+
+```rust
+fn main() {
+ let pg_host = std::env::var("POSTGRES_HOST").expect("missing environment variable POSTGRES_HOST");
+ let pg_user = std::env::var("POSTGRES_USER").expect("missing environment variable POSTGRES_USER");
+ let pg_password = std::env::var("POSTGRES_PASSWORD").expect("missing environment variable POSTGRES_PASSWORD");
+ let pg_dbname = std::env::var("POSTGRES_DBNAME").unwrap_or("postgres".to_string());
+
+ let builder = SslConnector::builder(SslMethod::tls()).unwrap();
+ let tls_connector = MakeTlsConnector::new(builder.build());
+
+ let url = format!(
+ "host={} port=5432 user={} password={} dbname={} sslmode=require",
+ pg_host, pg_user, pg_password, pg_dbname
+ );
+ let mut pg_client = postgres::Client::connect(&url, tls_connector).expect("failed to connect to postgres");
+...
+}
+```
+
+### Drop and create table
+
+The sample application uses a simple `inventory` table to demonstrate the CRUD (create, read, update, delete) operations.
+
+```sql
+CREATE TABLE inventory (id serial PRIMARY KEY, name VARCHAR(50), quantity INTEGER);
+```
+
+The `drop_create_table` function initially tries to `DROP` the `inventory` table before creating a new one. This makes it easier for learning/experimentation, as you always start with a known (clean) state. The [execute](https://docs.rs/postgres/0.19.0/postgres/struct.Client.html#method.execute) method is used for create and drop operations.
+
+```rust
+const CREATE_QUERY: &str =
+ "CREATE TABLE inventory (id serial PRIMARY KEY, name VARCHAR(50), quantity INTEGER);";
+
+const DROP_TABLE: &str = "DROP TABLE inventory";
+
+fn drop_create_table(pg_client: &mut postgres::Client) {
+ let res = pg_client.execute(DROP_TABLE, &[]);
+ match res {
+ Ok(_) => println!("dropped table"),
+ Err(e) => println!("failed to drop table {}", e),
+ }
+ pg_client
+ .execute(CREATE_QUERY, &[])
+ .expect("failed to create 'inventory' table");
+}
+```
+
+### Insert data
+
+`insert_data` adds entries to the `inventory` table. It creates a [prepared statement](https://docs.rs/postgres/0.19.0/postgres/struct.Statement.html) with the [prepare](https://docs.rs/postgres/0.19.0/postgres/struct.Client.html#method.prepare) function.
++
+```rust
+const INSERT_QUERY: &str = "INSERT INTO inventory (name, quantity) VALUES ($1, $2) RETURNING id;";
+
+fn insert_data(pg_client: &mut postgres::Client) {
+
+ let prep_stmt = pg_client
+ .prepare(&INSERT_QUERY)
+ .expect("failed to create prepared statement");
+
+ let row = pg_client
+ .query_one(&prep_stmt, &[&"item-1", &42])
+ .expect("insert failed");
+
+ let id: i32 = row.get(0);
+ println!("inserted item with id {}", id);
+...
+}
+```
+
+Also note the usage of the [prepare_typed](https://docs.rs/postgres/0.19.0/postgres/struct.Client.html#method.prepare_typed) method, which allows the types of query parameters to be explicitly specified.
+
+```rust
+...
+let typed_prep_stmt = pg_client
+ .prepare_typed(&INSERT_QUERY, &[Type::VARCHAR, Type::INT4])
+ .expect("failed to create prepared statement");
+
+let row = pg_client
+ .query_one(&typed_prep_stmt, &[&"item-2", &43])
+ .expect("insert failed");
+
+let id: i32 = row.get(0);
+println!("inserted item with id {}", id);
+...
+```
+
+Finally, a `for` loop is used to add `item-3`, `item-4`, and `item-5`, with a randomly generated quantity for each.
+
+```rust
+...
+ for n in 3..=5 {
+ let row = pg_client
+ .query_one(
+ &typed_prep_stmt,
+ &[
+ &("item-".to_owned() + &n.to_string()),
+ &rand::thread_rng().gen_range(10..=50),
+ ],
+ )
+ .expect("insert failed");
+
+ let id: i32 = row.get(0);
+ println!("inserted item with id {} ", id);
+ }
+...
+```
+
+### Query data
+
+The `query_data` function demonstrates how to retrieve data from the `inventory` table. The [query_one](https://docs.rs/postgres/0.19.0/postgres/struct.Client.html#method.query_one) method is used to get an item by its `id`.
+
+```rust
+const SELECT_ALL_QUERY: &str = "SELECT * FROM inventory;";
+const SELECT_BY_ID: &str = "SELECT name, quantity FROM inventory where id=$1;";
+
+fn query_data(pg_client: &mut postgres::Client) {
+
+ let prep_stmt = pg_client
+ .prepare_typed(&SELECT_BY_ID, &[Type::INT4])
+ .expect("failed to create prepared statement");
+
+ let item_id = 1;
+
+ let c = pg_client
+ .query_one(&prep_stmt, &[&item_id])
+ .expect("failed to query item");
+
+ let name: String = c.get(0);
+ let quantity: i32 = c.get(1);
+ println!("quantity for item {} = {}", name, quantity);
+...
+}
+```
+
+All rows in the inventory table are fetched using a `select * from` query with the [query](https://docs.rs/postgres/0.19.0/postgres/struct.Client.html#method.query) method. The returned rows are iterated over to extract the value for each column using [get](https://docs.rs/postgres/0.19.0/postgres/row/struct.Row.html#method.get).
+
+> [!TIP]
+> Note how `get` makes it possible to specify the column either by its numeric index in the row, or by its column name.
+
+```rust
+...
+ let items = pg_client
+ .query(SELECT_ALL_QUERY, &[])
+ .expect("select all failed");
+
+ println!("listing items...");
+
+ for item in items {
+ let id: i32 = item.get("id");
+ let name: String = item.get("name");
+ let quantity: i32 = item.get("quantity");
+ println!(
+ "item info: id = {}, name = {}, quantity = {} ",
+ id, name, quantity
+ );
+ }
+...
+```
+
+### Update data
+
+The `update_data` function randomly updates the quantity for all the items. Since the `insert_data` function added `5` rows, the same is taken into account in the `for` loop - `for id in 1..=5`
+
+> [!TIP]
+> Note that we use `query_one` instead of `execute` since we intend to get back the `id` and the newly generated `quantity` (using [RETURNING clause](https://www.postgresql.org/docs/current/dml-returning.html)).
+
+```rust
+const UPDATE_QUERY: &str = "UPDATE inventory SET quantity = $1 WHERE name = $2 RETURNING quantity;";
+
+fn update_data(pg_client: &mut postgres::Client) {
+ let stmt = pg_client
+ .prepare_typed(&UPDATE_QUERY, &[Type::INT4, Type::VARCHAR])
+ .expect("failed to create prepared statement");
+
+ for id in 1..=5 {
+ let row = pg_client
+ .query_one(
+ &stmt,
+ &[
+ &rand::thread_rng().gen_range(10..=50),
+ &("item-".to_owned() + &id.to_string()),
+ ],
+ )
+ .expect("update failed");
+
+ let quantity: i32 = row.get("quantity");
+ println!("updated item id {} to quantity = {}", id, quantity);
+ }
+}
+```
+
+### Delete data
+
+Finally, the `delete` function demonstrates how to remove an item from the `inventory` table by its `id`. The `id` is chosen randomly - it's a random integer between `1` and `5` (`5` inclusive), since the `insert_data` function added `5` rows to start with.
+
+> [!TIP]
+> Note that we use `query_one` instead of `execute` since we intend to get back the info about the item we just deleted (using [RETURNING clause](https://www.postgresql.org/docs/current/dml-returning.html)).
+
+```rust
+const DELETE_QUERY: &str = "DELETE FROM inventory WHERE id = $1 RETURNING id, name, quantity;";
+
+fn delete(pg_client: &mut postgres::Client) {
+ let stmt = pg_client
+ .prepare_typed(&DELETE_QUERY, &[Type::INT4])
+ .expect("failed to create prepared statement");
+
+ let item = pg_client
+ .query_one(&stmt, &[&rand::thread_rng().gen_range(1..=5)])
+ .expect("delete failed");
+
+ let id: i32 = item.get(0);
+ let name: String = item.get(1);
+ let quantity: i32 = item.get(2);
+ println!(
+ "deleted item info: id = {}, name = {}, quantity = {} ",
+ id, name, quantity
+ );
+}
+```
+
+## Run the application
+
+1. To begin with, run the following command to clone the sample repository:
+
+ ```bash
+ git clone https://github.com/Azure-Samples/azure-postgresql-rust-quickstart.git
+ ```
+
+2. Set the required environment variables with the values you copied from the Azure portal:
+
+ ```bash
+ export POSTGRES_HOST=<server name e.g. my-server.postgres.database.azure.com>
+ export POSTGRES_USER=<admin username e.g. my-admin-user@my-server>
+ export POSTGRES_PASSWORD=<admin password>
+ export POSTGRES_DBNAME=<database name. it is optional and defaults to postgres>
+ ```
+
+3. To run the application, change into the directory where you cloned it and execute `cargo run`:
+
+ ```bash
+ cd azure-postgresql-rust-quickstart
+ cargo run
+ ```
+
+ You should see an output similar to this:
+
+ ```bash
+ dropped 'inventory' table
+ inserted item with id 1
+ inserted item with id 2
+ inserted item with id 3
+ inserted item with id 4
+ inserted item with id 5
+ quantity for item item-1 = 42
+ listing items...
+ item info: id = 1, name = item-1, quantity = 42
+ item info: id = 2, name = item-2, quantity = 43
+ item info: id = 3, name = item-3, quantity = 11
+ item info: id = 4, name = item-4, quantity = 32
+ item info: id = 5, name = item-5, quantity = 24
+ updated item id 1 to quantity = 27
+ updated item id 2 to quantity = 14
+ updated item id 3 to quantity = 31
+ updated item id 4 to quantity = 16
+ updated item id 5 to quantity = 10
+ deleted item info: id = 4, name = item-4, quantity = 16
+ ```
+
+4. To confirm, you can also connect to Azure Database for PostgreSQL [using psql](./quickstart-create-server-database-portal.md#connect-to-the-server-with-psql) and run queries against the database, for example:
+
+ ```sql
+ select * from inventory;
+ ```
+
+[Having issues? Let us know](https://aka.ms/postgres-doc-feedback)
+
+## Clean up resources
+
+To clean up all resources used during this quickstart, delete the resource group using the following command:
+
+```azurecli
+az group delete \
+ --name $AZ_RESOURCE_GROUP \
+ --yes
+```
+
+## Next steps
+> [!div class="nextstepaction"]
+> [Manage Azure Database for PostgreSQL server using Portal](./howto-create-manage-server-portal.md)<br/>
+
+> [!div class="nextstepaction"]
+> [Manage Azure Database for PostgreSQL server using CLI](./how-to-manage-server-cli.md)<br/>
+
+[Cannot find what you are looking for? Let us know.](https://aka.ms/postgres-doc-feedback)
postgresql Connect Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/flexible-server/connect-python.md
Title: 'Quickstart: Connect using Python - Azure Database for PostgreSQL - Flexible Server' description: This quickstart provides several Python code samples you can use to connect and query data from Azure Database for PostgreSQL - Flexible Server.--++ ms.devlang: python
postgresql How To Connect Tls Ssl https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/flexible-server/how-to-connect-tls-ssl.md
Title: Encrypted connectivity using TLS/SSL in Azure Database for PostgreSQL - Flexible Server description: Instructions and information on how to connect using TLS/SSL in Azure Database for PostgreSQL - Flexible Server.--++ Last updated 09/22/2020
postgresql How To Manage Firewall Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/flexible-server/how-to-manage-firewall-cli.md
Title: Manage firewall rules - Azure CLI - Azure Database for PostgreSQL - Flexible Server description: Create and manage firewall rules for Azure Database for PostgreSQL - Flexible Server using Azure CLI command line.--++ ms.devlang: azurecli
postgresql How To Manage Firewall Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/flexible-server/how-to-manage-firewall-portal.md
Title: Manage firewall rules - Azure portal - Azure Database for PostgreSQL - Flexible Server description: Create and manage firewall rules for Azure Database for PostgreSQL - Flexible Server using the Azure portal--++ Last updated 09/22/2020
postgresql How To Manage Virtual Network Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/flexible-server/how-to-manage-virtual-network-cli.md
Title: Manage virtual networks - Azure CLI - Azure Database for PostgreSQL - Flexible Server description: Create and manage virtual networks for Azure Database for PostgreSQL - Flexible Server using the Azure CLI--++ Last updated 09/22/2020
postgresql How To Manage Virtual Network Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/flexible-server/how-to-manage-virtual-network-portal.md
Title: Manage virtual networks - Azure portal - Azure Database for PostgreSQL - Flexible Server description: Create and manage virtual networks for Azure Database for PostgreSQL - Flexible Server using the Azure portal--++ Last updated 09/22/2020
postgresql Howto Auto Grow Storage Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/howto-auto-grow-storage-cli.md
Title: Auto-grow storage - Azure CLI - Azure Database for PostgreSQL - Single Server description: This article describes how you can configure storage auto-grow using the Azure CLI in Azure Database for PostgreSQL - Single Server.--++ Last updated 8/7/2019
postgresql Howto Auto Grow Storage Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/howto-auto-grow-storage-portal.md
Title: Auto grow storage - Azure portal - Azure Database for PostgreSQL - Single Server description: This article describes how you can configure storage auto-grow using the Azure portal in Azure Database for PostgreSQL - Single Server--++ Last updated 5/29/2019
postgresql Howto Auto Grow Storage Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/howto-auto-grow-storage-powershell.md
Title: Auto grow storage - Azure PowerShell - Azure Database for PostgreSQL description: This article describes how you can enable auto grow storage using PowerShell in Azure Database for PostgreSQL.--++ Last updated 06/08/2020
postgresql Howto Troubleshoot Common Connection Issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/howto-troubleshoot-common-connection-issues.md
Title: Troubleshoot connections - Azure Database for PostgreSQL - Single Server description: Learn how to troubleshoot connection issues to Azure Database for PostgreSQL - Single Server. keywords: postgresql connection,connection string,connectivity issues,transient error,connection error--+++ Last updated 5/6/2019
# Troubleshoot connection issues to Azure Database for PostgreSQL - Single Server
-Connection problems may be caused by a variety of things, including:
+Connection problems may be caused by various things, including:
* Firewall settings * Connection time-out
Transient errors occur when maintenance is performed, the system encounters an e
If the application persistently fails to connect to Azure Database for PostgreSQL, it usually indicates an issue with one of the following: * Server firewall configuration: Make sure that the Azure Database for PostgreSQL server firewall is configured to allow connections from your client, including proxy servers and gateways.
-* Client firewall configuration: The firewall on your client must allow connections to your database server. IP addresses and ports of the server that you cannot to must be allowed as well as application names such as PostgreSQL in some firewalls.
+* Client firewall configuration: The firewall on your client must allow connections to your database server. The IP address and port of the server that you can't connect to must be allowed, as well as application names such as PostgreSQL in some firewalls.
* User error: You might have mistyped connection parameters, such as the server name in the connection string or a missing *\@servername* suffix in the user name.
-* If you see the error _Server is not configured to allow ipv6 connections_, note that the Basic tier doesn't support VNet service endpoints. You have to remove the Microsoft.Sql endpoint from the subnet that is attempting to connect to the Basic server.
-* If you see the connection error _sslmode value "***" invalid when SSL support is not compiled in_ error, it means your PostgreSQL client does not support SSL. Most probably, the client-side libpq has not been compiled with the "--with-openssl" flag. Please try connecting with a PostgreSQL client that has SSL support.
+* If you see the error _Server isn't configured to allow ipv6 connections_, note that the Basic tier doesn't support VNet service endpoints. You have to remove the Microsoft.Sql endpoint from the subnet that is attempting to connect to the Basic server.
+* If you see the connection error _sslmode value "***" invalid when SSL support is not compiled in_, it means your PostgreSQL client doesn't support SSL. Most likely, the client-side libpq wasn't compiled with the "--with-openssl" flag. Try connecting with a PostgreSQL client that has SSL support.
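As a quick way to separate user errors from firewall issues, a short connectivity probe can help. The following C# sketch uses the Npgsql driver with hypothetical server and credential values; note the @servername suffix on the username:

```csharp
using System;
using Npgsql;

class ConnectionProbe
{
    static void Main()
    {
        // Hypothetical values: replace the server, user, and password with your own.
        // For Single Server, the username requires the "@servername" suffix.
        var connectionString =
            "Host=my-server.postgres.database.azure.com;" +
            "Username=my-admin-user@my-server;" +
            "Password=<admin password>;" +
            "Database=postgres;" +
            "SSL Mode=Require;Trust Server Certificate=true";

        try
        {
            using var connection = new NpgsqlConnection(connectionString);
            connection.Open();
            Console.WriteLine($"Connected. Server version: {connection.PostgreSqlVersion}");
        }
        catch (Exception ex)
        {
            // A persistent failure here usually points to firewall rules or mistyped parameters.
            Console.WriteLine($"Connection failed: {ex.Message}");
        }
    }
}
```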
### Steps to resolve persistent connectivity issues
search Search Pagination Page Layout https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-pagination-page-layout.md
Previously updated : 12/09/2020 Last updated : 04/06/2021 # How to work with search results in Azure Cognitive Search
Services created after July 15, 2020 will provide a different highlighting exper
With the new behavior:
-* Only phrases that match the full phrase query will be returned. The query "super bowl" will return highlights like this:
++ Only phrases that match the full phrase query will be returned. The query phrase "super bowl" will return highlights like this:
- ```html
- '<em>super bowl</em> is super awesome with a bowl of chips'
- ```
- Note that the term *bowl of chips* does not have any highlighting because it does not match the full phrase.
+ ```json
+    "@search.highlights": {
+      "sentence": [
+        "The <em>super</em> <em>bowl</em> is super awesome with a bowl of chips"
+      ]
+    }
+    ```
+
+ Note that other instances of *super* and *bowl* do not have any highlighting because those instances do not match the full phrase.
When you write client code that implements hit highlighting, be aware of this change. It will not affect you unless you create a new search service.
security-center Defender For Kubernetes Azure Arc https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/defender-for-kubernetes-azure-arc.md
Title: Protect hybrid and multicloud Kubernetes deployments with Azure Defender for Kubernetes
-description: Use Azure Defender for Kubernetes with your on-premises and multicloud Kubernetes clusters
+ Title: Protect hybrid and multi-cloud Kubernetes deployments with Azure Defender for Kubernetes
+description: Use Azure Defender for Kubernetes with your on-premises and multi-cloud Kubernetes clusters
Last updated 04/05/2021
-# Defend Azure Arc enabled Kubernetes clusters running in on-premises and multicloud environments
+# Defend Azure Arc enabled Kubernetes clusters running in on-premises and multi-cloud environments
To defend your on-premises clusters with the same threat detection capabilities offered today for Azure Kubernetes Service clusters, enable Azure Arc on the clusters and deploy the **Azure Defender for Kubernetes cluster extension**
security-center Enable Azure Defender https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/enable-azure-defender.md
Title: Enable Azure Security Center's integrated workload protections
-description: Learn how to enable Azure Defender to extend the protections of Azure Security Center to your hybrid and multicloud resources
+description: Learn how to enable Azure Defender to extend the protections of Azure Security Center to your hybrid and multi-cloud resources
security-center Recommendations Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/recommendations-reference.md
description: This article lists Azure Security Center's security recommendations
Previously updated : 03/22/2021 Last updated : 04/06/2021
impact on your secure score.
## Deprecated recommendations
-|Recommendation|Description & related policy|Severity|Quick fix enabled?([Learn more](security-center-remediate-recommendations.md#quick-fix-remediation))|Resource type|
-|-|-|-|-|-|
-|**Access to App Services should be restricted**|Restrict access to your App Services by changing the networking configuration, to deny inbound traffic from ranges that are too broad.<br>(Related policy: [Preview]: Access to App Services should be restricted)|High|N|App service|
-|**The rules for web applications on IaaS NSGs should be hardened**|Harden the network security group (NSG) of your virtual machines that are running web applications, with NSG rules that are overly permissive with regard to web application ports.<br>(Related policy: The NSGs rules for web applications on IaaS should be hardened)|High|N|Virtual machine|
-|**Pod Security Policies should be defined to reduce the attack vector by removing unnecessary application privileges (Preview)**|Define Pod Security Policies to reduce the attack vector by removing unnecessary application privileges. It is recommended to configure pod security policies so pods can only access resources which they are allowed to access.<br>(Related policy: [Preview]: Pod Security Policies should be defined on Kubernetes Services)|Medium|N|Compute resources (Containers)|
-|**Install Azure Security Center for IoT security module to get more visibility into your IoT devices**|Install Azure Security Center for IoT security module to get more visibility into your IoT devices.|Low|N|IoT device|
+|Recommendation|Description & related policy|Severity|
+|-|-|-|
+|Access to App Services should be restricted|Restrict access to your App Services by changing the networking configuration, to deny inbound traffic from ranges that are too broad.<br>(Related policy: [Preview]: Access to App Services should be restricted)|High|
+|The rules for web applications on IaaS NSGs should be hardened|Harden the network security group (NSG) of your virtual machines that are running web applications, with NSG rules that are overly permissive with regard to web application ports.<br>(Related policy: The NSGs rules for web applications on IaaS should be hardened)|High|
+|Pod Security Policies should be defined to reduce the attack vector by removing unnecessary application privileges (Preview)|Define Pod Security Policies to reduce the attack vector by removing unnecessary application privileges. It is recommended to configure pod security policies so pods can only access resources which they are allowed to access.<br>(Related policy: [Preview]: Pod Security Policies should be defined on Kubernetes Services)|Medium|
+|Install Azure Security Center for IoT security module to get more visibility into your IoT devices|Install Azure Security Center for IoT security module to get more visibility into your IoT devices.|Low|
+|Your machines should be restarted to apply system updates|Restart your machines to apply the system updates and secure the machine from vulnerabilities. (Related policy: System updates should be installed on your machines)|Medium|
+|Monitoring agent should be installed on your machines|This action installs a monitoring agent on the selected virtual machines. Select a workspace for the agent to report to. (No related policy)|High|
+||||
## Next steps
security-center Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/release-notes.md
Previously updated : 04/05/2021 Last updated : 04/06/2021
To learn about *planned* changes that are coming soon to Security Center, see [I
Updates in April include: - [Four new recommendations related to guest configuration (preview)](#four-new-recommendations-related-to-guest-configuration-preview)
+- [Use Azure Defender for Kubernetes to protect hybrid and multi-cloud Kubernetes deployments (preview)](#use-azure-defender-for-kubernetes-to-protect-hybrid-and-multi-cloud-kubernetes-deployments-preview)
- [11 Azure Defender alerts deprecated](#11-azure-defender-alerts-deprecated)
+- [Two recommendations from "Apply system updates" security control were deprecated](#two-recommendations-from-apply-system-updates-security-control-were-deprecated)
### Four new recommendations related to guest configuration (preview)
We've added four new recommendations to Security Center to make the most of this
Learn more in [Understand Azure Policy's Guest Configuration](../governance/policy/concepts/guest-configuration.md).
+### Use Azure Defender for Kubernetes to protect hybrid and multi-cloud Kubernetes deployments (preview)
+
+Azure Defender for Kubernetes is expanding its threat protection capabilities to defend your clusters wherever they're deployed. This has been enabled by integrating with [Azure Arc enabled Kubernetes](../azure-arc/kubernetes/overview.md) and its new extensions capabilities.
+
+When you've enabled Azure Arc on your non-Azure Kubernetes clusters, a new recommendation from Azure Security Center offers to deploy the Azure Defender extension to them with only a few clicks.
+
+Use the recommendation (**Azure Arc enabled Kubernetes clusters should have Azure Defender's extension installed**) and the extension to protect Kubernetes clusters deployed in other cloud providers, although not on their managed Kubernetes services.
+
+This integration between Azure Security Center, Azure Defender, and Azure Arc enabled Kubernetes brings:
+
+- Easy provisioning of the Azure Defender extension to unprotected Azure Arc enabled Kubernetes clusters (manually and at-scale)
+- Monitoring of the Azure Defender extension and its provisioning state from the Azure Arc Portal
+- Security recommendations from Security Center are reported in the new Security page of the Azure Arc Portal
+- Identified security threats from Azure Defender are reported in the new Security page of the Azure Arc Portal
+- Azure Arc enabled Kubernetes clusters are integrated into the Azure Security Center platform and experience
+
+Learn more in [Use Azure Defender for Kubernetes with your on-premises and multi-cloud Kubernetes clusters](defender-for-kubernetes-azure-arc.md).
### 11 Azure Defender alerts deprecated
The eleven Azure Defender alerts listed below have been deprecated.
> [!TIP] > These nine IPC alerts were never Security Center alerts. They're part of the Azure Active Directory (AAD) Identity Protection connector (IPC) that was sending them to Security Center. For the last two years, the only customers who've been seeing those alerts are organizations who configured the export (from the connector to ASC) in 2019 or earlier. AAD IPC has continued to show them in its own alerts systems and they've continued to be available in Azure Sentinel. The only change is that they're no longer appearing in Security Center.
+### Two recommendations from "Apply system updates" security control were deprecated
+
+The following two recommendations were deprecated and the changes might result in a slight impact on your secure score:
+
+- **Your machines should be restarted to apply system updates**
+- **Monitoring agent should be installed on your machines**. This recommendation relates to on-premises machines only and some of its logic will be transferred to another recommendation, **Log Analytics agent health issues should be resolved on your machines**
+
+We recommend checking your continuous export and workflow automation configurations to see whether these recommendations are included in them. Also, any dashboards or other monitoring tools that might be using them should be updated accordingly.
+
+Learn more about these recommendations in the [security recommendations reference page](recommendations-reference.md).
+ ## March 2021
Updates in December include:
Azure Security Center offers two Azure Defender plans for SQL Servers: - **Azure Defender for Azure SQL database servers** - defends your Azure-native SQL Servers -- **Azure Defender for SQL servers on machines** - extends the same protections to your SQL servers in hybrid, multicloud, and on-premises environments
+- **Azure Defender for SQL servers on machines** - extends the same protections to your SQL servers in hybrid, multi-cloud, and on-premises environments
With this announcement, **Azure Defender for SQL** now protects your databases and their data wherever they're located.
security-center Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/upcoming-changes.md
Previously updated : 04/04/2021 Last updated : 04/06/2021
If you're looking for the latest release notes, you'll find them in the [What's
| Planned change | Estimated date for change | |--||
-| [Two recommendations from "Apply system updates" security control being deprecated](#two-recommendations-from-apply-system-updates-security-control-being-deprecated) | April 2021 |
| [21 recommendations moving between security controls](#21-recommendations-moving-between-security-controls) | April 2021 |
-| [Two further recommendations from "Apply system updates" security control being deprecated](#two-further-recommendations-from-apply-system-updates-security-control-being-deprecated) | April 2021 |
+| [Two recommendations from "Apply system updates" security control being deprecated](#two-recommendations-from-apply-system-updates-security-control-being-deprecated) | April 2021 |
| [Recommendations from AWS will be released for general availability (GA)](#recommendations-from-aws-will-be-released-for-general-availability-ga) | April 2021 | | [Enhancements to SQL data classification recommendation](#enhancements-to-sql-data-classification-recommendation) | Q2 2021 | | | |
-### Two recommendations from "Apply system updates" security control being deprecated
-
-**Estimated date for change:** April 2021
-
-The following two recommendations are scheduled to be deprecated in April 2021:
--- **Your machines should be restarted to apply system updates**. This might result in a slight impact on your secure score.-- **Monitoring agent should be installed on your machines**. This recommendation relates to on-premises machines only and some of its logic will be transferred to another recommendation, **Log Analytics agent health issues should be resolved on your machines**. This might result in a slight impact on your secure score.-
-We recommend checking your continuous export and workflow automation configurations to see whether these recommendations are included in them. Also, any dashboards or other monitoring tools that might be using them should be updated accordingly.
-
-Learn more about these recommendations in the [security recommendations reference page](recommendations-reference.md).
- ### 21 recommendations moving between security controls **Estimated date for change:** April 2021
Learn which recommendations are in each security control in Security controls an
|||
-### Two further recommendations from "Apply system updates" security control being deprecated
+### Two recommendations from "Apply system updates" security control being deprecated
**Estimated date for change:** April 2021
sentinel Tutorial Monitor Your Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/tutorial-monitor-your-data.md
ms.devlang: na
na Previously updated : 05/04/2020 Last updated : 04/04/2021 # Tutorial: Visualize and monitor your data --
-Once you have [connected your data sources](quickstart-onboard.md) to Azure Sentinel, you can visualize and monitor the data using the Azure Sentinel adoption of Azure Monitor Workbooks, which provides versatility in creating custom dashboards. While the Workbooks are displayed differently in Azure Sentinel, it may be useful for you to see how to [create interactive reports with Azure Monitor Workbooks](../azure-monitor/visualize/workbooks-overview.md). Azure Sentinel allows you to create custom workbooks across your data, and also comes with built-in workbook templates to allow you to quickly gain insights across your data as soon as you connect a data source.
-
+Once you have [connected your data sources](quickstart-onboard.md) to Azure Sentinel, you can visualize and monitor the data using the Azure Sentinel adoption of Azure Monitor Workbooks, which provides versatility in creating custom dashboards. While the Workbooks are displayed differently in Azure Sentinel, it may be useful for you to see how to [create interactive reports with Azure Monitor Workbooks](../azure-monitor/visualize/workbooks-overview.md). Azure Sentinel allows you to create custom workbooks across your data, and also comes with built-in workbook templates to allow you to quickly gain insights across your data as soon as you connect a data source.
This tutorial helps you visualize your data in Azure Sentinel. > [!div class="checklist"]
This tutorial helps you visualize your data in Azure Sentinel.
## Prerequisites -- You must have at least Workbook reader or Workbook contributor permissions on the resource group of the Azure Sentinel workspace.
+You must have at least **Workbook reader** or **Workbook contributor** permissions on the resource group of the Azure Sentinel workspace.
> [!NOTE] > The workbooks that you can see in Azure Sentinel are saved within the Azure Sentinel workspace's resource group and are tagged by the workspace in which they were created. ## Use built-in workbooks
-1. Go to **Workbooks** and then select **Templates** to see the full list of Azure Sentinel built-in workbooks. To see which are relevant to the data types you have connected, the **Required data types** field in each workbook will list the data type next to a green check mark if you already stream relevant data to Azure Sentinel.
- ![go to workbooks](./media/tutorial-monitor-data/access-workbooks.png)
-1. Click **View template** to see the template populated with your data.
-
-1. To edit the workbook, select **Save**, and then select the location where you want to save the JSON file for the template.
+1. Go to **Workbooks** and then select **Templates** to see the full list of Azure Sentinel built-in workbooks.
+
+ To see which are relevant to the data types you have connected, the **Required data types** field in each workbook will list the data type next to a green check mark if you already stream relevant data to Azure Sentinel.
+
+ [ ![Go to workbooks.](media/tutorial-monitor-data/access-workbooks.png) ](media/tutorial-monitor-data/access-workbooks.png#lightbox)
+
+1. Select **View template** to see the template populated with your data.
+
+1. To edit the workbook, select **Save**, and then select the location where you want to save the JSON file for the template.
> [!NOTE] > This creates an Azure resource based on the relevant template and saves the JSON file of the workbook and not the data.
-1. Select **View saved workbook**. Then, click the **Edit** button at the top. You can now edit the workbook and customize it according to your needs. For more information on how to customize the workbook, see how to [Create interactive reports with Azure Monitor Workbooks](../azure-monitor/visualize/workbooks-overview.md).
-![view workbooks](./media/tutorial-monitor-data/workbook-graph.png)
-1. After you make your changes, you can save the workbook.
+1. Select **View saved workbook**.
-1. You can also clone the workbook: Select **Edit** and then **Save as**, making sure to save it with another name, under the same subscription and resource group. These cloned workbooks are displayed under the **My workbooks** tab.
+ [ ![View workbooks.](media/tutorial-monitor-data/workbook-graph.png) ](media/tutorial-monitor-data/workbook-graph.png#lightbox)
+ Select the **Edit** button in the workbook toolbar to customize the workbook according to your needs. When you're done, select **Save** to save your changes.
+ For more information, see how to [Create interactive reports with Azure Monitor Workbooks](../azure-monitor/visualize/workbooks-overview.md).
+
+> [!TIP]
+> To clone your workbook, select **Edit** and then **Save as**, making sure to save it with another name, under the same subscription and resource group.
+> Cloned workbooks are displayed under the **My workbooks** tab.
+>
## Create new workbook 1. Go to **Workbooks** and then select **Add workbook** to create a new workbook from scratch.
- ![Screenshot that shows the New workbook screen.](./media/tutorial-monitor-data/create-workbook.png)
+
+ [ ![New workbook.](media/tutorial-monitor-data/create-workbook.png) ](media/tutorial-monitor-data/create-workbook.png#lightbox)
1. To edit the workbook, select **Edit**, and then add text, queries, and parameters as necessary. For more information on how to customize the workbook, see how to [Create interactive reports with Azure Monitor Workbooks](../azure-monitor/visualize/workbooks-overview.md).
This tutorial helps you visualize your data in Azure Sentinel.
1. If you want to let others in your organization use the workbook, under **Save to** select **Shared reports**. If you want this workbook to be available only to you, select **My reports**.
-1. To switch between workbooks in your workspace, you can select **Open** ![Icon for opening a workbook.](./media/tutorial-monitor-data/switch.png)in the top pane of any workbook. On the window that opens to the right, switch between workbooks.
+1. To switch between workbooks in your workspace, select **Open** ![Icon for opening a workbook.](./media/tutorial-monitor-data/switch.png) in the toolbar of any workbook. A list of the other workbooks you can switch to is displayed.
+
+ Select the workbook you want to open:
+
+ [ ![Switch workbooks.](media/tutorial-monitor-data/switch-workbooks.png) ](media/tutorial-monitor-data/switch-workbooks.png#lightbox)
+
+## Refresh your workbook data
+
+Refresh your workbook to display updated data. In the toolbar, select one of the following options:
+
+- :::image type="icon" source="media/whats-new/manual-refresh-button.png" border="false"::: **Refresh**, to manually refresh your workbook data.
+
+- :::image type="icon" source="media/whats-new/auto-refresh-workbook.png" border="false"::: **Auto refresh**, to set your workbook to automatically refresh at a configured interval.
+
+ - Supported auto refresh intervals range from **5 minutes** to **1 day**.
+
+ - Auto refresh is paused while you're editing a workbook, and intervals are restarted each time you switch back to view mode from edit mode.
- ![Switch workbooks](./media/tutorial-monitor-data/switch-workbooks.png)
+ - Auto refresh intervals are also restarted if you manually refresh your data.
+ > [!TIP]
+ > By default, auto refresh is turned off. To optimize performance, auto refresh is also turned off each time you close a workbook, and does not run in the background. Turn auto refresh back on as needed the next time you open the workbook.
+ >
## Print a workbook or save as PDF
To print a workbook, or save it as a PDF, use the options menu to the right of t
For example:
+[ ![Print your workbook or save as PDF.](media/whats-new/print-workbook.png) ](media/whats-new/print-workbook.png#lightbox)
## How to delete workbooks
sentinel Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/whats-new.md
Noted features are currently in PREVIEW. The [Azure Preview Supplemental Terms](
## March 2021
+- [Set workbooks to automatically refresh while in view mode](#set-workbooks-to-automatically-refresh-while-in-view-mode)
- [New detections for Azure Firewall](#new-detections-for-azure-firewall) - [Automation rules and incident-triggered playbooks](#automation-rules-and-incident-triggered-playbooks) (including all-new playbook documentation) - [New alert enrichments: enhanced entity mapping and custom details](#new-alert-enrichments-enhanced-entity-mapping-and-custom-details)
Noted features are currently in PREVIEW. The [Azure Preview Supplemental Terms](
- [Microsoft 365 Defender incident integration (Public preview)](#microsoft-365-defender-incident-integration-public-preview) - [New Microsoft service connectors using Azure Policy](#new-microsoft-service-connectors-using-azure-policy)
+### Set workbooks to automatically refresh while in view mode
+
+Azure Sentinel users can now use the new [Azure Monitor ability](https://techcommunity.microsoft.com/t5/azure-monitor/azure-workbooks-set-it-to-auto-refresh/ba-p/2228555) to automatically refresh workbook data during a view session.
+
+In each workbook or workbook template, select :::image type="icon" source="media/whats-new/auto-refresh-workbook.png" border="false"::: **Auto refresh** to display your interval options. Select the option you want to use for the current view session, and select **Apply**.
+
+- Supported refresh intervals range from **5 minutes** to **1 day**.
+- By default, auto refresh is turned off. To optimize performance, auto refresh is also turned off each time you close a workbook, and does not run in the background. Turn auto refresh back on as needed the next time you open the workbook.
+- Auto refresh is paused while you're editing a workbook, and auto refresh intervals are restarted each time you switch back to view mode from edit mode.
+
+ Intervals are also restarted if you manually refresh the workbook by selecting the :::image type="icon" source="media/whats-new/manual-refresh-button.png" border="false"::: **Refresh** button.
+
+For more information, see [Tutorial: Visualize and monitor your data](tutorial-monitor-your-data.md) and the [Azure Monitor documentation](../azure-monitor/visualize/workbooks-overview.md).
+ ### New detections for Azure Firewall Several out-of-the-box detections for Azure Firewall have been added to the [Analytics](import-threat-intelligence.md#analytics-puts-your-threat-indicators-to-work-detecting-potential-threats) area in Azure Sentinel. These new detections allow security teams to get alerts if machines on the internal network attempt to query or connect to internet domain names or IP addresses that are associated with known IOCs, as defined in the detection rule query.
service-fabric Service Fabric Diagnostics Event Generation Operational https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/service-fabric-diagnostics-event-generation-operational.md
More details on cluster upgrades can be found [here](service-fabric-cluster-upgr
| 29630 | ClusterUpgradeRollbackCompleted | Upgrade | A cluster upgrade has completed rolling back | CM | Warning | | 29631 | ClusterUpgradeDomainCompleted | Upgrade | An upgrade domain has finished upgrading during a cluster upgrade | CM | Informational |
-**Placement events**
-| EventId | Name | Category | Description |Source (Task) | Level |
-| | | | | | |
-| 17616 | Decision |StateTransition | Placement operation was scheduled to decide on placement of new replicas. | CRM | Informational |
-- ## Node events **Node lifecycle events**
spring-cloud How To Access Data Plane Azure Ad Rbac https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/spring-cloud/how-to-access-data-plane-azure-ad-rbac.md
After the Azure Spring Cloud Data Reader role is assigned, customers can access
* *https://SERVICE_NAME.svc.azuremicroservices.io/eureka/actuator/* * *https://SERVICE_NAME.svc.azuremicroservices.io/config/actuator/*
+>[!NOTE]
+> If you are using Azure China, replace `*.azuremicroservices.io` with `*.microservices.azure.cn`. For more information, see [Check endpoints in Azure](https://docs.microsoft.com/azure/china/resources-developer-guide#check-endpoints-in-azure).
+ 3. Access the composed endpoint with the access token. Put the access token in a header to provide authorization. Only the "GET" method is supported. For example, access an endpoint like *https://SERVICE_NAME.svc.azuremicroservices.io/eureka/actuator/health* to see the health status of eureka.
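As a rough illustration (not part of the original article), the following C# sketch calls such an actuator endpoint with the access token supplied as a bearer token; the service name and token value are placeholders:

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class ActuatorProbe
{
    static async Task Main()
    {
        // Placeholder values: substitute your service name and a valid access token.
        string endpoint = "https://SERVICE_NAME.svc.azuremicroservices.io/eureka/actuator/health";
        string accessToken = "<access-token>";

        using var client = new HttpClient();

        // Pass the access token as a bearer token in the Authorization header.
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", accessToken);

        // Only GET requests are supported against these endpoints.
        HttpResponseMessage response = await client.GetAsync(endpoint);
        Console.WriteLine($"{(int)response.StatusCode} {response.ReasonPhrase}");
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}
```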
storage Network File System Protocol Support How To https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/network-file-system-protocol-support-how-to.md
As you configure the account, choose these values:
|Setting | Premium performance | Standard performance |-|||
-|Location|All available regions |One of the following regions: Australia East, Korea Central, and South Central US
+|Location|All available regions |One of the following regions: Australia East, Korea Central, East US, and South Central US
|Performance|Premium| Standard |Account kind|BlockBlobStorage| General-purpose V2 |Replication|Locally-redundant storage (LRS)| Locally-redundant storage (LRS)
storage Network File System Protocol Support Performance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/network-file-system-protocol-support-performance.md
Blob storage now supports the Network File System (NFS) 3.0 protocol. This article contains recommendations that help you to optimize the performance of your storage requests. To learn more about NFS 3.0 support in Azure Blob Storage, see [Network File System (NFS) 3.0 protocol support in Azure Blob storage (preview)](network-file-system-protocol-support.md). > [!NOTE]
-> NFS 3.0 protocol support in Azure Blob storage is in public preview. It supports GPV2 storage accounts with standard tier performance in the following regions: Australia East, Korea Central, and South Central US. The preview also supports block blob with premium performance tier in all public regions.
+> NFS 3.0 protocol support in Azure Blob storage is in public preview. It supports GPV2 storage accounts with standard tier performance in the following regions: Australia East, Korea Central, East US, and South Central US. The preview also supports block blob with premium performance tier in all public regions.
## Add clients to increase throughput
storage Network File System Protocol Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/network-file-system-protocol-support.md
Blob storage now supports the Network File System (NFS) 3.0 protocol. This support provides Linux file system compatibility at object storage scale and prices and enables Linux clients to mount a container in Blob storage from an Azure Virtual Machine (VM) or a computer on-premises. > [!NOTE]
-> NFS 3.0 protocol support in Azure Blob storage is in public preview. It supports GPV2 storage accounts with standard tier performance in the following regions: Australia East, Korea Central, and South Central US. The preview also supports block blob with premium performance tier in all public regions.
+> NFS 3.0 protocol support in Azure Blob storage is in public preview. It supports GPV2 storage accounts with standard tier performance in the following regions: Australia East, Korea Central, East US, and South Central US. The preview also supports block blob with premium performance tier in all public regions.
It's always been a challenge to run large-scale legacy workloads, such as High Performance Computing (HPC) in the cloud. One reason is that applications often use traditional file protocols such as NFS or Server Message Block (SMB) to access data. Also, native cloud storage services focused on object storage that have a flat namespace and extensive metadata instead of file systems that provide a hierarchical namespace and efficient metadata operations.
storage Soft Delete Blob Enable https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/soft-delete-blob-enable.md
Title: Enable and manage soft delete for blobs
+ Title: Enable soft delete for blobs
-description: Enable soft delete for blobs to more easily recover your data when it is erroneously modified or deleted.
+description: Enable soft delete for blobs to protect blob data from accidental deletes or overwrites.
Previously updated : 07/15/2020 Last updated : 03/27/2021 -+
-# Enable and manage soft delete for blobs
+# Enable soft delete for blobs
-Blob soft delete protects your data from being accidentally or erroneously modified or deleted. When blob soft delete is enabled for a storage account, blobs, blob versions, and snapshots in that storage account may be recovered after they are deleted, within a retention period that you specify.
+Blob soft delete protects an individual blob and its versions, snapshots, and metadata from accidental deletes or overwrites by maintaining the deleted data in the system for a specified period of time. During the retention period, you can restore the blob to its state at deletion. After the retention period has expired, the blob is permanently deleted. For more information about blob soft delete, see [Soft delete for blobs](soft-delete-blob-overview.md).
-If there is a possibility that your data may accidentally be modified or deleted by an application or another storage account user, Microsoft recommends turning on blob soft delete. This article shows how to enable soft delete for blobs. For more details about blob soft delete, see [Soft delete for blobs](soft-delete-blob-overview.md).
-
-To learn how to also enable soft delete for containers, see [Enable and manage soft delete for containers](soft-delete-container-enable.md).
+Blob soft delete is part of a comprehensive data protection strategy for blob data. To learn more about Microsoft's recommendations for data protection, see [Data protection overview](data-protection-overview.md).
## Enable blob soft delete
+Blob soft delete is disabled by default for a new storage account. You can enable or disable soft delete for a storage account at any time by using the Azure portal, PowerShell, or Azure CLI.
+ # [Portal](#tab/azure-portal)
-Enable soft delete for blobs on your storage account by using Azure portal:
+To enable blob soft delete for your storage account by using the Azure portal, follow these steps:
1. In the [Azure portal](https://portal.azure.com/), navigate to your storage account. 1. Locate the **Data Protection** option under **Blob service**.
-1. Set the **Blob soft delete** property to *Enabled*.
-1. Under **Retention policies**, specify how long soft-deleted blobs are retained by Azure Storage.
+1. In the **Recovery** section, select **Turn on soft delete for blobs**.
+1. Specify a retention period between 1 and 365 days. Microsoft recommends a minimum retention period of seven days.
1. Save your changes.
-![Screenshot of the Azure Portal with the Data Protection blob service elected.](media/soft-delete-blob-enable/storage-blob-soft-delete-portal-configuration.png)
-
-To view soft deleted blobs, select the **Show deleted blobs** checkbox.
-
-![Screenshot of the Data Protection blob service page with the Show deleted blobs option highlighted.](media/soft-delete-blob-enable/storage-blob-soft-delete-portal-view-soft-deleted.png)
-
-To view soft deleted snapshots for a given blob, select the blob then click **View snapshots**.
-
-![Screenshot of the Data Protection blob service page with the View snapshots option highlighted.](media/soft-delete-blob-enable/storage-blob-soft-delete-portal-view-soft-deleted-snapshots.png)
-
-Make sure the **Show deleted snapshots** checkbox is selected.
-
-![Screenshot of the View snapshots page with the Show deleted blobs option highlighted.](media/soft-delete-blob-enable/storage-blob-soft-delete-portal-view-soft-deleted-snapshots-check.png)
-
-When you click on a soft deleted blob or snapshot, notice the new blob properties. They indicate when the object was deleted, and how many days are left until the blob or blob snapshot is permanently expired. If the soft deleted object is not a snapshot, you will also have the option to undelete it.
-
-![Screenshot of the details of a soft deleted object.](media/soft-delete-blob-enable/storage-blob-soft-delete-portal-properties.png)
-
-Remember that undeleting a blob will also undelete all associated snapshots. To undelete soft deleted snapshots for an active blob, click on the blob and select **Undelete all snapshots**.
-
-![Screenshot of the details of a soft deleted blob.](media/soft-delete-blob-enable/storage-blob-soft-delete-portal-undelete-all-snapshots.png)
-
-Once you undelete a blob's snapshots, you can click **Promote** to copy a snapshot over the root blob, thereby restoring the blob to the snapshot.
-
-![Screenshot of the View snapshots page with the Promote option highlighted.](media/soft-delete-blob-enable/storage-blob-soft-delete-portal-promote-snapshot.png)
# [PowerShell](#tab/azure-powershell) -
-To enable soft delete, update a blob client's service properties. The following example enables soft delete for a subset of accounts in a subscription:
+To enable blob soft delete with PowerShell, call the [Enable-AzStorageBlobDeleteRetentionPolicy](/powershell/module/az.storage/enable-azstorageblobdeleteretentionpolicy) command, specifying the retention period in days.
-```powershell
-Set-AzContext -Subscription "<subscription-name>"
-$MatchingAccounts = Get-AzStorageAccount | where-object{$_.StorageAccountName -match "<matching-regex>"}
-$MatchingAccounts | Enable-AzStorageDeleteRetentionPolicy -RetentionDays 7
-```
-
-You can verify that soft delete was turned on by using the following command:
+The following example enables blob soft delete and sets the retention period to seven days. Remember to replace the placeholder values in brackets with your own values:
-```powershell
-$MatchingAccounts | $account = Get-AzStorageAccount -ResourceGroupName myresourcegroup -Name storageaccount
- Get-AzStorageServiceProperty -ServiceType Blob -Context $account.Context | Select-Object -ExpandProperty DeleteRetentionPolicy
+```azurepowershell
+Enable-AzStorageBlobDeleteRetentionPolicy -ResourceGroupName <resource-group> `
+ -StorageAccountName <storage-account> `
+ -RetentionDays 7
```
-To recover blobs that were accidentally deleted, call **Undelete Blob** on those blobs. Remember that calling **Undelete Blob**, both on active and soft deleted blobs, will restore all associated soft deleted snapshots as active. The following example calls **Undelete Blob** on all soft deleted and active blobs in a container:
-
-```powershell
-# Create a context by specifying storage account name and key
-$ctx = New-AzStorageContext -StorageAccountName $StorageAccountName -StorageAccountKey $StorageAccountKey
+To check the current settings for blob soft delete, call the [Get-AzStorageBlobServiceProperty](/powershell/module/az.storage/get-azstorageblobserviceproperty) command:
-# Get the blobs in a given container and show their properties
-$Blobs = Get-AzStorageBlob -Container $StorageContainerName -Context $ctx -IncludeDeleted
-$Blobs.ICloudBlob.Properties
-
-# Undelete the blobs
-$Blobs.ICloudBlob.Undelete()
-```
-To find the current soft delete retention policy, use the following command:
-
-```azurepowershell-interactive
- $account = Get-AzStorageAccount -ResourceGroupName myresourcegroup -Name storageaccount
- Get-AzStorageServiceProperty -ServiceType Blob -Context $account.Context
+```azurepowershell
+$properties = Get-AzStorageBlobServiceProperty -ResourceGroupName <resource-group> `
+ -StorageAccountName <storage-account>
+$properties.DeleteRetentionPolicy.Enabled
+$properties.DeleteRetentionPolicy.Days
``` # [CLI](#tab/azure-CLI)
-To enable soft delete, update a blob client's service properties:
-
-```azurecli-interactive
-az storage blob service-properties delete-policy update --days-retained 7 --account-name mystorageaccount --enable true
-```
+To enable blob soft delete with Azure CLI, call the [az storage account blob-service-properties update](/cli/azure/ext/storage-blob-preview/storage/account/blob-service-properties#ext_storage_blob_preview_az_storage_account_blob_service_properties_update) command, specifying the retention period in days.
-To verify soft delete is turned on, use the following command:
+The following example enables blob soft delete and sets the retention period to seven days. Remember to replace the placeholder values in brackets with your own values:
```azurecli-interactive
-az storage blob service-properties delete-policy show --account-name mystorageaccount
+az storage account blob-service-properties update --account-name <storage-account> \
+ --resource-group <resource-group> \
+ --enable-delete-retention true \
+ --delete-retention-days 7
```
-# [Python](#tab/python)
+To check the current settings for blob soft delete, call the [az storage account blob-service-properties show](/cli/azure/ext/storage-blob-preview/storage/account/blob-service-properties#ext_storage_blob_preview_az_storage_account_blob_service_properties_show) command:
-To enable soft delete, update a blob client's service properties:
-
-```python
-# Make the requisite imports
-from azure.storage.blob import BlockBlobService
-from azure.storage.common.models import DeleteRetentionPolicy
-
-# Initialize a block blob service
-block_blob_service = BlockBlobService(
- account_name='<enter your storage account name>', account_key='<enter your storage account key>')
-
-# Set the blob client's service property settings to enable soft delete
-block_blob_service.set_blob_service_properties(
- delete_retention_policy=DeleteRetentionPolicy(enabled=True, days=7))
-```
-
-# [.NET v12](#tab/dotnet)
-
-To enable soft delete, update a blob client's service properties:
--
-To recover blobs that were accidentally deleted, call Undelete on those blobs. Remember that calling **Undelete**, both on active and soft deleted blobs, will restore all associated soft deleted snapshots as active. The following example calls Undelete on all soft deleted and active blobs in a container:
--
-To recover to a specific blob version, first call Undelete on a blob, then copy the desired snapshot over the blob. The following example recovers a block blob to its most recently generated snapshot:
--
-# [.NET v11](#tab/dotnet11)
-
-To enable soft delete, update a blob client's service properties:
-
-```csharp
-// Get the blob client's service property settings
-ServiceProperties serviceProperties = blobClient.GetServiceProperties();
-
-// Configure soft delete
-serviceProperties.DeleteRetentionPolicy.Enabled = true;
-serviceProperties.DeleteRetentionPolicy.RetentionDays = RetentionDays;
-
-// Set the blob client's service property settings
-blobClient.SetServiceProperties(serviceProperties);
-```
-
-To recover blobs that were accidentally deleted, call **Undelete Blob** on those blobs. Remember that calling **Undelete Blob**, both on active and soft deleted blobs, will restore all associated soft deleted snapshots as active. The following example calls **Undelete Blob** on all soft-deleted and active blobs in a container:
-
-```csharp
-// Recover all blobs in a container
-foreach (CloudBlob blob in container.ListBlobs(useFlatBlobListing: true, blobListingDetails: BlobListingDetails.Deleted))
-{
- await blob.UndeleteAsync();
-}
+```azurecli-interactive
+az storage account blob-service-properties show --account-name <storage-account> \
+ --resource-group <resource-group>
```
-To recover to a specific blob version, first call the **Undelete Blob** operation, then copy the desired snapshot over the blob. The following example recovers a block blob to its most recently generated snapshot:
-
-```csharp
-// Undelete
-await blockBlob.UndeleteAsync();
-
-// List all blobs and snapshots in the container prefixed by the blob name
-IEnumerable<IListBlobItem> allBlobVersions = container.ListBlobs(
- prefix: blockBlob.Name, useFlatBlobListing: true, blobListingDetails: BlobListingDetails.Snapshots);
-
-// Restore the most recently generated snapshot to the active blob
-CloudBlockBlob copySource = allBlobVersions.First(version => ((CloudBlockBlob)version).IsSnapshot &&
- ((CloudBlockBlob)version).Name == blockBlob.Name) as CloudBlockBlob;
-blockBlob.StartCopy(copySource);
-```
- ## Next steps -- [Soft delete for Blob storage](./soft-delete-blob-overview.md)-- [Blob versioning](versioning-overview.md)
+- [Soft delete for blobs](soft-delete-blob-overview.md)
+- [Manage and restore soft-deleted blobs](soft-delete-blob-manage.md)
storage Soft Delete Blob Manage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/soft-delete-blob-manage.md
+
+ Title: Manage and restore soft-deleted blobs
+
+description: Manage and restore soft-deleted blobs and snapshots with the Azure portal or with the Azure Storage client libraries.
+++++ Last updated : 03/27/2021+++++
+# Manage and restore soft-deleted blobs
+
+Blob soft delete protects an individual blob and its versions, snapshots, and metadata from accidental deletes or overwrites by maintaining the deleted data in the system for a specified period of time. During the retention period, you can restore the blob to its state at deletion. After the retention period has expired, the blob is permanently deleted. For more information about blob soft delete, see [Soft delete for blobs](soft-delete-blob-overview.md).
+
+Blob soft delete is part of a comprehensive data protection strategy for blob data. To learn more about Microsoft's recommendations for data protection, see [Data protection overview](data-protection-overview.md).
+
+## Manage soft-deleted blobs with the Azure portal
+
+You can use the Azure portal to view and restore soft-deleted blobs and snapshots.
+
+### View deleted blobs
+
+When blobs are soft-deleted, they are invisible in the Azure portal by default. To view soft-deleted blobs, navigate to the **Overview** page for the container and toggle the **Show deleted blobs** setting. Soft-deleted blobs are displayed with a status of **Deleted**.
++
+Next, select the deleted blob from the list of blobs to display its properties. Under the **Overview** tab, notice that the blob's status is set to **Deleted**. The portal also displays the number of days until the blob is permanently deleted.
++
+### View deleted snapshots
+
+Deleting a blob also deletes any snapshots associated with the blob. If a soft-deleted blob has snapshots, the deleted snapshots can also be displayed in the portal. Display the soft-deleted blob's properties, then navigate to the **Snapshots** tab, and toggle **Show deleted snapshots**.
++
+### Restore soft-deleted objects when versioning is disabled
+
+To restore a soft-deleted blob in the Azure portal when blob versioning is not enabled, first display the blob's properties, then select the **Undelete** button on the **Overview** tab. Restoring a blob also restores any snapshots that were deleted during the soft-delete retention period.
++
+To promote a soft-deleted snapshot to the base blob, first make sure that the blob's soft-deleted snapshots have been restored. Select the **Undelete** button to restore the blob's soft-deleted snapshots, even if the base blob itself has not been soft-deleted. Next, select the snapshot to promote and use the **Promote snapshot** button to overwrite the base blob with the contents of the snapshot.
++
+### Restore soft-deleted blobs when versioning is enabled
+
+To restore a soft-deleted blob in the Azure portal when versioning is enabled, select the soft-deleted blob to display its properties, then select the **Versions** tab. Select the version that you want to promote to be the current version, then select **Make current version**.
++
+To restore deleted versions or snapshots when versioning is enabled, display the blob's properties, then select the **Undelete** button on the **Overview** tab.
+
+> [!NOTE]
+> When versioning is enabled, selecting the **Undelete** button on a deleted blob restores any soft-deleted versions or snapshots, but does not restore the base blob. To restore the base blob, you must promote a previous version.
+
+## Manage soft-deleted blobs with code
+
+You can use the Azure Storage client libraries to restore a soft-deleted blob or snapshot. The following examples show how to use the .NET client library.
+
+### Restore soft-deleted objects when versioning is disabled
+
+# [.NET v12](#tab/dotnet)
+
+To restore deleted blobs when versioning is not enabled, call the [Undelete Blob](/rest/api/storageservices/undelete-blob) operation on those blobs. The **Undelete Blob** operation restores soft-deleted blobs and any deleted snapshots associated with those blobs.
+
+Calling **Undelete Blob** on a blob that has not been deleted has no effect. The following example calls **Undelete Blob** on all blobs in a container, and restores the soft-deleted blobs and their snapshots:
++
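A minimal sketch of this pattern with the v12 client library, assuming an existing `BlobContainerClient` named `container`, might look like the following:

```csharp
using System.Threading.Tasks;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

public static async Task UndeleteBlobsAsync(BlobContainerClient container)
{
    // List all blobs in the container, including soft-deleted ones.
    await foreach (BlobItem item in container.GetBlobsAsync(states: BlobStates.Deleted))
    {
        // Undelete restores a soft-deleted blob and its soft-deleted snapshots.
        // Calling it on a blob that isn't deleted has no effect.
        await container.GetBlobClient(item.Name).UndeleteAsync();
    }
}
```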
+To restore a specific version, first call the **Undelete Blob** operation on the base blob or version, then copy the desired version over the base blob. The following example restores a block blob to the most recently saved version:
++
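The versioning-enabled scenario is covered in the next section. The following is a rough, snapshot-based sketch that parallels the v11 example below; it assumes an existing `BlobContainerClient` named `container` and a blob with at least one snapshot:

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

public static async Task RestoreLatestSnapshotAsync(BlobContainerClient container, string blobName)
{
    BlobClient baseBlob = container.GetBlobClient(blobName);

    // Restore the soft-deleted blob and its soft-deleted snapshots.
    await baseBlob.UndeleteAsync();

    // List the blob's snapshots and keep only those belonging to this blob.
    var snapshots = new List<BlobItem>();
    await foreach (BlobItem item in container.GetBlobsAsync(
        states: BlobStates.Snapshots, prefix: blobName))
    {
        if (item.Name == blobName && item.Snapshot != null)
        {
            snapshots.Add(item);
        }
    }

    // Snapshot identifiers are timestamps, so the highest value is the most recent snapshot.
    string latestSnapshot = snapshots.OrderByDescending(s => s.Snapshot).First().Snapshot;

    // Copy the most recent snapshot over the base blob to restore its contents.
    await baseBlob.StartCopyFromUriAsync(baseBlob.WithSnapshot(latestSnapshot).Uri);
}
```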
+# [.NET v11](#tab/dotnet11)
+
+To restore deleted blobs when versioning is not enabled, call the [Undelete Blob](/rest/api/storageservices/undelete-blob) operation on those blobs. The **Undelete Blob** operation restores soft-deleted blobs and any deleted snapshots associated with those blobs.
+
+Calling **Undelete Blob** on a blob that has not been deleted has no effect. The following example calls **Undelete Blob** on all blobs in a container, and restores the soft-deleted blobs and their snapshots:
+
+```csharp
+// Restore all blobs in a container.
+foreach (CloudBlob blob in container.ListBlobs(useFlatBlobListing: true, blobListingDetails: BlobListingDetails.Deleted))
+{
+ await blob.UndeleteAsync();
+}
+```
+
+To restore a specific snapshot, first call the **Undelete Blob** operation on the base blob, then copy the desired snapshot over the base blob. The following example restores a block blob to its most recently generated snapshot:
+
+```csharp
+// Restore the block blob.
+await blockBlob.UndeleteAsync();
+
+// List all blobs and snapshots in the container, prefixed by the blob name.
+IEnumerable<IListBlobItem> allBlobSnapshots = container.ListBlobs(
+ prefix: blockBlob.Name, useFlatBlobListing: true, blobListingDetails: BlobListingDetails.Snapshots);
+
+// Copy the most recently generated snapshot to the base blob.
+CloudBlockBlob copySource = allBlobSnapshots.First(snapshot => ((CloudBlockBlob)snapshot).IsSnapshot &&
+ ((CloudBlockBlob)snapshot).Name == blockBlob.Name) as CloudBlockBlob;
+blockBlob.StartCopy(copySource);
+```
+++
+### Restore soft-deleted blobs when versioning is enabled
+
+To restore a soft-deleted blob when versioning is enabled, copy a previous version over the base blob with a [Copy Blob](/rest/api/storageservices/copy-blob) or [Copy Blob From URL](/rest/api/storageservices/copy-blob-from-url) operation.
+
+# [.NET v12](#tab/dotnet)
++
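The following is a minimal sketch with a recent v12 client library that supports blob versioning; it assumes an existing `BlobContainerClient` named `container` and that the most recent version holds the state you want to restore:

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

public static async Task RestoreLatestVersionAsync(BlobContainerClient container, string blobName)
{
    BlobClient baseBlob = container.GetBlobClient(blobName);

    // List the blob's versions.
    var versions = new List<BlobItem>();
    await foreach (BlobItem item in container.GetBlobsAsync(
        states: BlobStates.Version, prefix: blobName))
    {
        if (item.Name == blobName && item.VersionId != null)
        {
            versions.Add(item);
        }
    }

    // Version IDs are timestamps, so the highest value is the most recent version.
    string latestVersionId = versions.OrderByDescending(v => v.VersionId).First().VersionId;

    // Copy the chosen version over the base blob to make it the current version again.
    await baseBlob.StartCopyFromUriAsync(baseBlob.WithVersion(latestVersionId).Uri);
}
```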
+# [.NET v11](#tab/dotnet11)
+
+Not applicable. Blob versioning is supported only in the Azure Storage client libraries version 12.x and higher.
+++
+## Next steps
+
+- [Soft delete for Blob storage](./soft-delete-blob-overview.md)
+- [Enable soft delete for blobs](soft-delete-blob-enable.md)
+- [Blob versioning](versioning-overview.md)
storage Soft Delete Blob Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/soft-delete-blob-overview.md
Previously updated : 02/09/2021 Last updated : 03/27/2021 # Soft delete for blobs
-Soft delete for blobs protects your data from being accidentally or erroneously modified or deleted. When soft delete for blobs is enabled for a storage account, blobs, blob versions, and snapshots in that storage account may be recovered after they are deleted, within a retention period that you specify.
+Blob soft delete protects an individual blob, snapshot, or version from accidental deletes or overwrites by maintaining the deleted data in the system for a specified period of time. During the retention period, you can restore a soft-deleted object to its state at the time it was deleted. After the retention period has expired, the object is permanently deleted.
-If there is a possibility that your data may accidentally be modified or deleted by an application or another storage account user, Microsoft recommends turning on soft delete. For more information about enabling soft delete, see [Enable and manage soft delete for blobs](./soft-delete-blob-enable.md).
+## Recommended data protection configuration
-
-## About soft delete for blobs
-
-When soft delete for blobs is enabled on a storage account, you can recover objects after they have been deleted, within the specified data retention period. This protection extends to any blobs (block blobs, append blobs, or page blobs) that are erased as the result of an overwrite.
-
-The following diagram shows how a deleted blob can be restored when blob soft delete is enabled:
--
-If data in an existing blob or snapshot is deleted while blob soft delete is enabled but blob versioning is not enabled, then a soft deleted snapshot is generated to save the state of the overwritten data. After the specified retention period has expired, the object is permanently deleted.
+Blob soft delete is part of a comprehensive data protection strategy for blob data. For optimal protection for your blob data, Microsoft recommends enabling all of the following data protection features:
-If blob versioning and blob soft delete are both enabled on the storage account, then deleting a blob creates a new version instead of a soft-deleted snapshot. The new version is not soft-deleted and is not removed when the soft-delete retention period expires. Soft-deleted versions of a blob can be restored within the retention period by calling the [Undelete Blob](/rest/api/storageservices/undelete-blob) operation. The blob can subsequently be restored from one of its versions by calling the [Copy Blob](/rest/api/storageservices/copy-blob) operation. For more information about using blob versioning and soft delete together, see [Blob versioning and soft delete](versioning-overview.md#blob-versioning-and-soft-delete).
+- Container soft delete, to restore a container that has been deleted. To learn how to enable container soft delete, see [Enable and manage soft delete for containers](soft-delete-container-enable.md).
+- Blob versioning, to automatically maintain previous versions of a blob. When blob versioning is enabled, you can restore an earlier version of a blob to recover your data if it is erroneously modified or deleted. To learn how to enable blob versioning, see [Enable and manage blob versioning](versioning-enable.md).
+- Blob soft delete, to restore a blob, snapshot, or version that has been deleted. To learn how to enable blob soft delete, see [Enable and manage soft delete for blobs](soft-delete-blob-enable.md).
-Soft deleted objects are invisible unless explicitly listed.
+To learn more about Microsoft's recommendations for data protection, see [Data protection overview](data-protection-overview.md).
-Blob soft delete is backwards compatible, so you don't have to make any changes to your applications to take advantage of the protections this feature affords. However, [data recovery](#recovery) introduces a new **Undelete Blob** API.
-
-Blob soft delete is available for both new and existing general-purpose v2, general-purpose v1, and Blob storage accounts. Both standard and premium account types are supported. Blob soft delete is available for all storage tiers including hot, cool, and archive. Soft delete is available for unmanaged disks, which are page blobs under the covers, but is not available for managed disks.
-
-### Configuration settings
-
-When you create a new account, soft delete is disabled by default. Soft delete is also disabled by default for existing storage accounts. You can enable or disable soft delete for a storage account at any time.
-When you enable soft delete, you must configure the retention period. The retention period indicates the amount of time that soft deleted data is stored and available for recovery. For objects that are explicitly deleted, the retention period clock starts when the data is deleted. For soft deleted versions or snapshots generated by the soft delete feature when data is overwritten, the clock starts when the version or snapshot is generated. The retention period may be between 1 and 365 days.
+## How blob soft delete works
-You can change the soft delete retention period at any time. An updated retention period applies only to newly deleted data. Previously deleted data expires based on the retention period that was configured when that data was deleted. Attempting to delete a soft deleted object does not affect its expiry time.
+When you enable blob soft delete for a storage account, you specify a retention period for deleted objects that is between 1 and 365 days. The retention period indicates how long the data remains available after it is deleted or overwritten. The retention period clock starts as soon as an object is deleted or overwritten.
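
Soft delete is typically enabled through the Azure portal, PowerShell, or Azure CLI, as described in [Enable and manage soft delete for blobs](soft-delete-blob-enable.md). For illustration, the following sketch sets the delete retention policy through the Blob service properties with the .NET v12 client library; the connection string and the seven-day retention period are placeholder values.

```csharp
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

// Enable blob soft delete with a 7-day retention period on the storage account.
BlobServiceClient serviceClient = new BlobServiceClient("<connection-string>");

BlobServiceProperties properties = serviceClient.GetProperties().Value;
properties.DeleteRetentionPolicy = new BlobRetentionPolicy
{
    Enabled = true,
    Days = 7    // Any value from 1 through 365.
};
serviceClient.SetProperties(properties);
```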
-If you disable soft delete, you can continue to access and recover soft deleted data in your storage account that was saved while the feature was enabled.
+While the retention period is active, you can restore a deleted blob, together with its snapshots, or a deleted version by calling the [Undelete Blob](/rest/api/storageservices/undelete-blob) operation. The following diagram shows how a deleted object can be restored when blob soft delete is enabled:
-### Saving deleted data
-Soft delete preserves your data in many cases where objects are deleted or overwritten.
+You can change the soft delete retention period at any time. An updated retention period applies only to data that was deleted after the retention period was changed. Any data that was deleted before the retention period was changed is subject to the retention period that was in effect when it was deleted.
-When a blob is overwritten using **Put Blob**, **Put Block List**, or **Copy Blob**, a version or snapshot of the blob's state prior to the write operation is automatically generated. This object is invisible unless soft-deleted objects are explicitly listed. See the [Recovery](#recovery) section to learn how to list soft deleted objects.
+Attempting to delete a soft-deleted object does not affect its expiry time.
-![A diagram showing how snapshots of blobs are stored as they are overwritten using Put Blob, Put Block List, or Copy Blob.](media/soft-delete-blob-overview/storage-blob-soft-delete-overwrite.png)
+If you disable blob soft delete, you can continue to access and recover soft-deleted objects in your storage account until the soft delete retention period has elapsed.
-*Soft deleted data is grey, while active data is blue. More recently written data appears beneath older data. When B0 is overwritten with B1, a soft deleted snapshot of B0 is generated. When B1 is overwritten with B2, a soft deleted snapshot of B1 is generated.*
+Blob soft delete is available for general-purpose v2, block blob, and Blob storage accounts. Storage accounts with a hierarchical namespace enabled for use with Azure Data Lake Storage Gen2 are not currently supported.
-> [!NOTE]
-> Soft delete only affords overwrite protection for copy operations when it is turned on for the destination blob's account.
+Version 2017-07-29 and higher of the Azure Storage REST API support blob soft delete.
-> [!NOTE]
-> Soft delete does not afford overwrite protection for blobs in the archive tier. If a blob in archive is overwritten with a new blob in any tier, the overwritten blob is permanently expired.
+> [!IMPORTANT]
+> You can use blob soft delete only to restore an individual blob, snapshot, or version. To restore a container and its contents, container soft delete must also be enabled for the storage account. Microsoft recommends enabling container soft delete and blob versioning together with blob soft delete to ensure complete protection for blob data. For more information, see [Data protection overview](data-protection-overview.md).
+>
+> Blob soft delete does not protect against the deletion of a storage account. To protect a storage account from deletion, configure a lock on the storage account resource. For more information about locking a storage account, see [Apply an Azure Resource Manager lock to a storage account](../common/lock-account-resource.md).
-When **Delete Blob** is called on a snapshot, that snapshot is marked as soft deleted. A new snapshot is not generated.
+### How deletions are handled when soft delete is enabled
-![A diagram showing how snapshots of blobs are soft deleted when using Delete Blob.](media/soft-delete-blob-overview/storage-blob-soft-delete-explicit-delete-snapshot.png)
+When blob soft delete is enabled, deleting a blob marks that blob as soft-deleted. No snapshot is created. When the retention period expires, the soft-deleted blob is permanently deleted.
-*Soft deleted data is grey, while active data is blue. More recently written data appears beneath older data. When **Snapshot Blob** is called, B0 becomes a snapshot and B1 is the active state of the blob. When the B0 snapshot is deleted, it is marked as soft deleted.*
+If a blob has snapshots, the blob cannot be deleted unless the snapshots are also deleted. When you delete a blob and its snapshots, both the blob and snapshots are marked as soft-deleted. No new snapshots are created.
-When **Delete Blob** is called on a base blob (any blob that is not itself a snapshot), that blob is marked as soft deleted. Consistent with previous behavior, calling **Delete Blob** on a blob that has active snapshots returns an error. Calling **Delete Blob** on a blob with soft deleted snapshots does not return an error. You can still delete a blob and all its snapshots in single operation when soft delete is turned on. Doing so marks the base blob and snapshots as soft deleted.
+You can also delete one or more active snapshots without deleting the base blob. In this case, those snapshots are soft-deleted.
-![A diagram showing what happens when Delete Blog is called on a base blob.](media/soft-delete-blob-overview/storage-blob-soft-delete-explicit-include.png)
+Soft-deleted objects are invisible unless they are explicitly displayed or listed. For more information about how to list soft-deleted objects, see [Manage and restore soft-deleted blobs](soft-delete-blob-manage.md).
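
For example, with the .NET v12 client library, a listing that includes soft-deleted blobs and snapshots might look like the following sketch; the connection string and container name are placeholders.

```csharp
using System;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

// List all blobs in a container, including soft-deleted blobs and their snapshots.
BlobContainerClient container = new BlobContainerClient("<connection-string>", "sample-container");

foreach (BlobItem item in container.GetBlobs(BlobTraits.None, BlobStates.Deleted | BlobStates.Snapshots))
{
    Console.WriteLine($"{item.Name} (deleted: {item.Deleted}, snapshot: {item.Snapshot})");
}
```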
-*Soft deleted data is grey, while active data is blue. More recently written data appears beneath older data. Here, a **Delete Blob** call is made to delete B2 and all associated snapshots. The active blob, B2, and all associated snapshots are marked as soft deleted.*
+### How overwrites are handled when soft delete is enabled
-> [!NOTE]
-> When a soft deleted blob is overwritten, a soft deleted snapshot of the blob's state prior to the write operation is automatically generated. The new blob inherits the tier of the overwritten blob.
+Calling an operation such as [Put Blob](/rest/api/storageservices/put-blob), [Put Block List](/rest/api/storageservices/put-block-list), or [Copy Blob](/rest/api/storageservices/copy-blob) overwrites the data in a blob. When blob soft delete is enabled, overwriting a blob automatically creates a soft-deleted snapshot of the blob's state prior to the write operation. When the retention period expires, the soft-deleted snapshot is permanently deleted.
-Soft delete does not save your data in cases of container or account deletion, nor when blob metadata and blob properties are overwritten. To protect a storage account from deletion, you can configure a lock using the Azure Resource Manager. For more information, see the Azure Resource Manager article [Lock resources to prevent unexpected changes](../../azure-resource-manager/management/lock-resources.md). To protect containers from accidental deletion, configure container soft delete for the storage account. For more information, see [Soft delete for containers (preview)](soft-delete-container-overview.md).
+Soft-deleted snapshots are invisible unless soft-deleted objects are explicitly displayed or listed. For more information about how to list soft-deleted objects, see [Manage and restore soft-deleted blobs](soft-delete-blob-manage.md).
-The following table details expected behavior when soft delete is turned on:
+To protect a copy operation, blob soft delete must be enabled for the destination storage account.
-| REST API operation | Resource type | Description | Change in behavior |
-|--||-|--|
-| [Delete](/rest/api/storagerp/StorageAccounts/Delete) | Account | Deletes the storage account, including all containers and blobs that it contains. | No change. Containers and blobs in the deleted account are not recoverable. |
-| [Delete Container](/rest/api/storageservices/delete-container) | Container | Deletes the container, including all blobs that it contains. | No change. Blobs in the deleted container are not recoverable. |
-| [Put Blob](/rest/api/storageservices/put-blob) | Block, append, and page blobs | Creates a new blob or replaces an existing blob within a container | If used to replace an existing blob, a snapshot of the blob's state prior to the call is automatically generated. This also applies to a previously soft deleted blob if and only if it is replaced by a blob of the same type (Block, append, or Page). If it is replaced by a blob of a different type, all existing soft deleted data will be permanently expired. |
-| [Delete Blob](/rest/api/storageservices/delete-blob) | Block, append, and page blobs | Marks a blob or blob snapshot for deletion. The blob or snapshot is later deleted during garbage collection | If used to delete a blob snapshot, that snapshot is marked as soft deleted. If used to delete a blob, that blob is marked as soft deleted. |
-| [Copy Blob](/rest/api/storageservices/copy-blob) | Block, append, and page blobs | Copies a source blob to a destination blob in the same storage account or in another storage account. | If used to replace an existing blob, a snapshot of the blob's state prior to the call is automatically generated. This also applies to a previously soft deleted blob if and only if it is replaced by a blob of the same type (Block, append, or Page). If it is replaced by a blob of a different type, all existing soft deleted data will be permanently expired. |
-| [Put Block](/rest/api/storageservices/put-block) | Block blobs | Creates a new block to be committed as part of a block blob. | If used to commit a block to a blob that is active, there is no change. If used to commit a block to a blob that is soft deleted, a new blob is created and a snapshot is automatically generated to capture the state of the soft deleted blob. |
-| [Put Block List](/rest/api/storageservices/put-block-list) | Block blobs | Commits a blob by specifying the set of block IDs that comprise the block blob. | If used to replace an existing blob, a snapshot of the blob's state prior to the call is automatically generated. This also applies to a previously soft deleted blob if and only if it is a block blob. If it is replaced by a blob of a different type, all existing soft deleted data will be permanently expired. |
-| [Put Page](/rest/api/storageservices/put-page) | Page blobs | Writes a range of pages to a page blob. | No change. Page blob data that is overwritten or cleared using this operation is not saved and is not recoverable. |
-| [Append Block](/rest/api/storageservices/append-block) | Append Blobs | Writes a block of data to the end of an append blob | No change. |
-| [Set Blob Properties](/rest/api/storageservices/set-blob-properties) | Block, append, and page blobs | Sets values for system properties defined for a blob. | No change. Overwritten blob properties are not recoverable. |
-| [Set Blob Metadata](/rest/api/storageservices/set-blob-metadata) | Block, append, and page blobs | Sets user-defined metadata for the specified blob as one or more name-value pairs. | No change. Overwritten blob metadata is not recoverable. |
+Blob soft delete does not protect against operations to write blob metadata or properties. No soft-deleted snapshot is created when a blob's metadata or properties are updated.
-It is important to notice that calling **Put Page** to overwrite or clear ranges of a page blob will not automatically generate snapshots. Virtual machine disks are backed by page blobs and use **Put Page** to write data.
+Blob soft delete does not afford overwrite protection for blobs in the archive tier. If a blob in the archive tier is overwritten with a new blob in any tier, then the overwritten blob is permanently deleted.
-### Recovery
+For premium storage accounts, soft-deleted snapshots do not count toward the per-blob limit of 100 snapshots.
-Calling the [Undelete Blob](/rest/api/storageservices/undelete-blob) operation on a soft deleted base blob restores it and all associated soft deleted snapshots as active. Calling the **Undelete Blob** operation on an active base blob restores all associated soft deleted snapshots as active. When snapshots are restored as active, they look like user-generated snapshots; they do not overwrite the base blob.
+### Restoring soft-deleted objects
-To restore a blob to a specific soft deleted snapshot, you can call **Undelete Blob** on the base blob. Then, you can copy the snapshot over the now-active blob. You can also copy the snapshot to a new blob.
+You can restore soft-deleted blobs by calling the [Undelete Blob](/rest/api/storageservices/undelete-blob) operation within the retention period. The **Undelete Blob** operation restores a blob and any soft-deleted snapshots associated with it. Any snapshots that were deleted during the retention period are restored.
-![A diagram showing what happens when Undelete blob is used.](media/soft-delete-blob-overview/storage-blob-soft-delete-recover.png)
+Calling **Undelete Blob** on a blob that is not soft-deleted will restore any soft-deleted snapshots that are associated with the blob. If the blob has no snapshots and is not soft-deleted, then calling **Undelete Blob** has no effect.
-*Soft deleted data is grey, while active data is blue. More recently written data appears beneath older data. Here, **Undelete Blob** is called on blob B, thereby restoring the base blob, B1, and all associated snapshots, here just B0, as active. In the second step, B0 is copied over the base blob. This copy operation generates a soft deleted snapshot of B1.*
+To promote a soft-deleted snapshot to the base blob, first call **Undelete Blob** on the base blob to restore the blob and its snapshots. Next, copy the desired snapshot over the base blob. You can also copy the snapshot to a new blob.
-To view soft deleted blobs and blob snapshots, you can choose to include deleted data in **List Blobs**. You can choose to view only soft deleted base blobs, or to include soft deleted blob snapshots as well. For all soft deleted data, you can view the time when the data was deleted as well as the number of days before the data will be permanently expired.
+Data in a soft-deleted blob or snapshot cannot be read until the object has been restored.
-### Example
+For more information on how to restore soft-deleted objects, see [Manage and restore soft-deleted blobs](soft-delete-blob-manage.md).
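
For illustration, restoring a single soft-deleted blob with the .NET v12 client library might look like the following sketch; the connection string, container name, and blob name are placeholders.

```csharp
using Azure.Storage.Blobs;

// Restore a soft-deleted blob. Soft-deleted snapshots associated with the blob are restored with it.
BlobClient blob = new BlobClient("<connection-string>", "sample-container", "sample-blob");
blob.Undelete();
```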
-The following is the console output of a .NET script that uploads, overwrites, snapshots, deletes, and restores a blob named *HelloWorld* when soft delete is turned on:
+## Blob soft delete and versioning
-```bash
-Upload:
-- HelloWorld (is soft deleted: False, is snapshot: False)
+If blob versioning and blob soft delete are both enabled for a storage account, then overwriting a blob automatically creates a new version. The new version is not soft-deleted and is not removed when the soft-delete retention period expires. No soft-deleted snapshots are created. When you delete a blob, the current version of the blob becomes a previous version, and there is no longer a current version. No new version is created and no soft-deleted snapshots are created.
-Overwrite:
-- HelloWorld (is soft deleted: True, is snapshot: True)-- HelloWorld (is soft deleted: False, is snapshot: False)
+Enabling soft delete and versioning together protects blob versions from deletion. When soft delete is enabled, deleting a version creates a soft-deleted version. You can use the **Undelete Blob** operation to restore a soft-deleted version, as long as there is a current version of the blob. If there is no current version, then you must copy a previous version to the current version before calling the **Undelete Blob** operation.
-Snapshot:
-- HelloWorld (is soft deleted: True, is snapshot: True)-- HelloWorld (is soft deleted: False, is snapshot: True)-- HelloWorld (is soft deleted: False, is snapshot: False)
+> [!NOTE]
+> Calling the **Undelete Blob** operation on a deleted blob when versioning is enabled restores any soft-deleted versions or snapshots, but does not restore the base blob. To restore the base blob, promote a previous version by copying it to the base blob.
-Delete (including snapshots):
-- HelloWorld (is soft deleted: True, is snapshot: True)-- HelloWorld (is soft deleted: True, is snapshot: True)-- HelloWorld (is soft deleted: True, is snapshot: False)
+Microsoft recommends enabling both versioning and blob soft delete for your storage accounts for optimal data protection. For more information about using blob versioning and soft delete together, see [Blob versioning and soft delete](versioning-overview.md#blob-versioning-and-soft-delete).
-Undelete:
-- HelloWorld (is soft deleted: False, is snapshot: True)-- HelloWorld (is soft deleted: False, is snapshot: True)-- HelloWorld (is soft deleted: False, is snapshot: False)
+## Blob soft delete protection by operation
-Copy a snapshot over the base blob:
-- HelloWorld (is soft deleted: False, is snapshot: True)-- HelloWorld (is soft deleted: False, is snapshot: True)-- HelloWorld (is soft deleted: True, is snapshot: True)-- HelloWorld (is soft deleted: False, is snapshot: False)
-```
+The following table describes the expected behavior for delete and write operations when blob soft delete is enabled, either with or without blob versioning:
-See the [Next steps](#next-steps) section for a pointer to the application that produced this output.
+| REST API operations | Soft delete enabled | Soft delete and versioning enabled |
+|--|--|--|
+| [Delete Storage Account](/rest/api/storagerp/storageaccounts/delete) | No change. Containers and blobs in the deleted account are not recoverable. | No change. Containers and blobs in the deleted account are not recoverable. |
+| [Delete Container](/rest/api/storageservices/delete-container) | No change. Blobs in the deleted container are not recoverable. | No change. Blobs in the deleted container are not recoverable. |
+| [Delete Blob](/rest/api/storageservices/delete-blob) | If used to delete a blob, that blob is marked as soft deleted. <br /><br /> If used to delete a blob snapshot, the snapshot is marked as soft deleted. | If used to delete a blob, the current version becomes a previous version, and there is no longer a current version. No new version is created and no soft-deleted snapshots are created.<br /><br /> If used to delete a blob version, the version is marked as soft deleted. |
+| [Undelete Blob](/rest/api/storageservices/undelete-blob) | Restores a blob and any snapshots that were deleted within the retention period. | Restores a blob and any versions that were deleted within the retention period. |
+| [Put Blob](/rest/api/storageservices/put-blob)<br />[Put Block List](/rest/api/storageservices/put-block-list)<br />[Copy Blob](/rest/api/storageservices/copy-blob)<br />[Copy Blob from URL](/rest/api/storageservices/copy-blob-from-url) | If called on an active blob, then a snapshot of the blob's state prior to the operation is automatically generated. <br /><br /> If called on a soft-deleted blob, then a snapshot of the blob's prior state is generated only if it is being replaced by a blob of the same type. If the blob is of a different type, then all existing soft-deleted data is permanently deleted. | A new version that captures the blob's state prior to the operation is automatically generated. |
+| [Put Block](/rest/api/storageservices/put-block) | If used to commit a block to an active blob, there is no change.<br /><br />If used to commit a block to a blob that is soft-deleted, a new blob is created and a snapshot is automatically generated to capture the state of the soft-deleted blob. | No change. |
+| [Put Page](/rest/api/storageservices/put-page)<br />[Put Page from URL](/rest/api/storageservices/put-page-from-url) | No change. Page blob data that is overwritten or cleared using this operation is not saved and is not recoverable. | No change. Page blob data that is overwritten or cleared using this operation is not saved and is not recoverable. |
+| [Append Block](/rest/api/storageservices/append-block)<br />[Append Block from URL](/rest/api/storageservices/append-block-from-url) | No change. | No change. |
+| [Set Blob Properties](/rest/api/storageservices/set-blob-properties) | No change. Overwritten blob properties are not recoverable. | No change. Overwritten blob properties are not recoverable. |
+| [Set Blob Metadata](/rest/api/storageservices/set-blob-metadata) | No change. Overwritten blob metadata is not recoverable. | A new version that captures the blob's state prior to the operation is automatically generated. |
+| [Set Blob Tier](/rest/api/storageservices/set-blob-tier) | The base blob is moved to the new tier. Any active or soft-deleted snapshots remain in the original tier. No soft-deleted snapshot is created. | The base blob is moved to the new tier. Any active or soft-deleted versions remain in the original tier. No new version is created. |
## Pricing and billing
-All soft deleted data is billed at the same rate as active data. You will not be charged for data that is permanently deleted after the configured retention period. For a deeper dive into snapshots and how they accrue charges, see [Understanding how snapshots accrue charges](./snapshots-overview.md).
-
-You will not be billed for the transactions related to the automatic generation of snapshots. You will be billed for **Undelete Blob** transactions at the rate for write operations.
+All soft deleted data is billed at the same rate as active data. You will not be charged for data that is permanently deleted after the retention period elapses.
-For more details on prices for Azure Blob Storage in general, check out the [Azure Blob Storage Pricing Page](https://azure.microsoft.com/pricing/details/storage/blobs/).
-
-When you initially turn on soft delete, Microsoft recommends using a short retention period to better understand how the feature will affect your bill.
+When you enable soft delete, Microsoft recommends using a short retention period to better understand how the feature will affect your bill. The minimum recommended retention period is seven days.
Enabling soft delete for frequently overwritten data may result in increased storage capacity charges and increased latency when listing blobs. You can mitigate this additional cost and latency by storing the frequently overwritten data in a separate storage account where soft delete is disabled.
-## FAQ
-
-### Can I use the Set Blob Tier API to tier blobs with soft deleted snapshots?
-
-Yes. The soft deleted snapshots will remain in the original tier, but the base blob will move to the new tier.
-
-### Premium storage accounts have a per blob snapshot limit of 100. Do soft deleted snapshots count toward this limit?
-
-No, soft deleted snapshots do not count toward this limit.
-
-### If I delete an entire account or container with soft delete turned on, will all associated blobs be saved?
-
-No, if you delete an entire account or container, all associated blobs will be permanently deleted. For more information about protecting a storage account from being accidentally deleted, see [Lock Resources to Prevent Unexpected Changes](../../azure-resource-manager/management/lock-resources.md).
-
-### Can I view capacity metrics for deleted data?
-
-Soft deleted data is included as a part of your total storage account capacity. For more information on tracking and monitoring storage capacity, see [Storage Analytics](../common/storage-analytics.md).
-
-### Can I read and copy out soft deleted snapshots of my blob?
-
-Yes, but you must call Undelete on the blob first.
-
-### Is soft delete available for virtual machine disks?
+You are not billed for transactions related to the automatic generation of snapshots or versions when a blob is overwritten or deleted. You are billed for calls to the **Undelete Blob** operation at the transaction rate for write operations.
-Soft delete is available for both premium and standard unmanaged disks, which are page blobs under the covers. Soft delete will only help you recover data deleted by **Delete Blob**, **Put Blob**, **Put Block List**, and **Copy Blob** operations. Data overwritten by a call to **Put Page** is not recoverable.
+For more information on pricing for Blob Storage, see the [Blob Storage pricing](https://azure.microsoft.com/pricing/details/storage/blobs/) page.
-An Azure virtual machine writes to an unmanaged disk using calls to **Put Page**, so using soft delete to undo writes to an unmanaged disk from an Azure VM is not a supported scenario.
+## Blob soft delete and virtual machine disks
-### Do I need to change my existing applications to use soft delete?
+Blob soft delete is available for both premium and standard unmanaged disks, which are page blobs under the covers. Soft delete can help you recover data deleted or overwritten by the **Delete Blob**, **Put Blob**, **Put Block List**, and **Copy Blob** operations only.
-It is possible to take advantage of soft delete regardless of the API version you are using. However, to list and recover soft deleted blobs and blob snapshots, you will need to use version 2017-07-29 of the [Azure Storage REST API](/rest/api/storageservices/Versioning-for-the-Azure-Storage-Services) or greater. Microsoft recommends always using the latest version of the Azure Storage API.
+Data that is overwritten by a call to **Put Page** is not recoverable. An Azure virtual machine writes to an unmanaged disk using calls to **Put Page**, so using soft delete to undo writes to an unmanaged disk from an Azure VM is not a supported scenario.
## Next steps - [Enable soft delete for blobs](./soft-delete-blob-enable.md)
+- [Manage and restore soft-deleted blobs](soft-delete-blob-manage.md)
- [Blob versioning](versioning-overview.md)
storage Soft Delete Container Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/soft-delete-container-overview.md
# Soft delete for containers (preview)
-Soft delete for containers (preview) protects your data from being accidentally or maliciously deleted. When container soft delete is enabled for a storage account, any deleted container and their contents are retained in Azure Storage for the period that you specify. During the retention period, you can restore previously deleted containers. Restoring a container restores any blobs within that container when it was deleted.
+Soft delete for containers (preview) protects your data from being accidentally or maliciously deleted. When container soft delete is enabled for a storage account, a deleted container and its contents are retained in Azure Storage for the period that you specify. During the retention period, you can restore previously deleted containers. Restoring a container restores any blobs within that container when it was deleted.
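
For illustration, restoring soft-deleted containers with the .NET v12 client library might look like the following sketch. It assumes a library version that supports the container restore preview (Azure.Storage.Blobs 12.8 or later); the connection string is a placeholder.

```csharp
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

// List containers, including soft-deleted ones, and restore each deleted container.
BlobServiceClient serviceClient = new BlobServiceClient("<connection-string>");

foreach (BlobContainerItem item in serviceClient.GetBlobContainers(
    BlobContainerTraits.None, BlobContainerStates.Deleted))
{
    if (item.IsDeleted == true)
    {
        serviceClient.UndeleteBlobContainer(item.Name, item.VersionId);
    }
}
```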
For end to end protection for your blob data, Microsoft recommends enabling the following data protection features:
When you enable container soft delete, you can specify a retention period for de
When you restore a container, the container's blobs and any blob versions are also restored. However, you can only use container soft delete to restore blobs if the container itself was deleted. To restore a deleted blob when its parent container has not been deleted, you must use blob soft delete or blob versioning. > [!WARNING]
-> Container soft delete can restore only whole containers and the blobs they contained at the time of deletion. You cannot restore a deleted blob within a container by using container soft delete.
+> Container soft delete can restore only whole containers and their contents at the time of deletion. You cannot restore a deleted blob within a container by using container soft delete. Microsoft recommends also enabling blob soft delete and blob versioning to protect individual blobs in a container.
The following diagram shows how a deleted container can be restored when container soft delete is enabled:
After the retention period has expired, the container is permanently deleted fro
Disabling container soft delete does not result in permanent deletion of containers that were previously soft-deleted. Any soft-deleted containers will be permanently deleted at the expiration of the retention period that was in effect at the time that the container was deleted. > [!IMPORTANT]
-> Container soft delete does not protect against the deletion of a storage account, but only against the deletion of containers in that account. To protect a storage account from deletion, configure a lock on the storage account resource. For more information about locking Azure Resource Manager resources, see [Lock resources to prevent unexpected changes](../../azure-resource-manager/management/lock-resources.md).
+> Container soft delete does not protect against the deletion of a storage account. It protects only against the deletion of containers in that account. To protect a storage account from deletion, configure a lock on the storage account resource. For more information about locking a storage account, see [Apply an Azure Resource Manager lock to a storage account](../common/lock-account-resource.md).
## About the preview
storage Storage Blob Change Feed https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/storage-blob-change-feed.md
Here's a few things to keep in mind when you enable the change feed.
Enable change feed on your storage account by using Azure portal: 1. In the [Azure portal](https://portal.azure.com/), select your storage account.
+1. Navigate to the **Data protection** option under **Blob service**.
+1. Under **Tracking**, select **Turn on blob change feed**.
+1. Choose the **Save** button to confirm your data protection settings.
-2. Navigate to the **Data Protection** option under **Blob Service**.
-
-3. Click **Enabled** under **Blob change feed**.
-
-4. Choose the **Save** button to confirm your **Data Protection** settings.
-
- ![Screenshot that shows the data protection settings.](media/soft-delete-blob-enable/storage-blob-soft-delete-portal-configuration.png)
+ :::image type="content" source="media/storage-blob-change-feed/change-feed-enable-portal.png" alt-text="Screenshot showing how to enable change feed in Azure portal":::
### [PowerShell](#tab/azure-powershell)
storage Storage Files Migration Nas Cloud Databox https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/files/storage-files-migration-nas-cloud-databox.md
Follow the steps in the Azure DataBox documentation:
The linked DataBox documentation specifies a RoboCopy command. However, the command is not suitable to preserve the full file and folder fidelity. Use this command instead:
+```console
+Robocopy /MT:32 /NP /NFL /NDL /B /MIR /IT /COPY:DATSO /DCOPY:DAT /UNILOG:<FilePathAndName> <SourcePath> <Dest.Path>
+```
+* To learn more about the details of the individual RoboCopy flags, check out the table in the upcoming [RoboCopy section](#robocopy).
+* To learn more about how to appropriately size the thread count `/MT:n`, optimize RoboCopy speed, and make RoboCopy a good neighbor in your data center, take a look at the [RoboCopy troubleshooting section](#troubleshoot).
+ ## Phase 7: Catch-up RoboCopy from your NAS
You can try to run a few of these copies in parallel. We recommend processing th
## Troubleshoot
-Speed and success rate of a given RoboCopy run will depend on several factors:
-
-* IOPS on the source and target storage
-* the available network bandwidth between them
-* the ability to quickly process files and folders in a namespace
-* the number of changes between RoboCopy runs
--
-### IOPS and Bandwidth considerations
-
-In this category you need to consider abilities of the **source** (your NAS), the **target** (Azure DataBox and later Azure file share), and the **network** connecting them. The maximum possible throughput is determined by the slowest of these three components. A standard DataBox comes with dual 10-Gbps network interfaces. Depending on your NAS, you may be able to match that. Make sure your network infrastructure is configured to support optimal transfer speeds to its best abilities.
-
-> [!CAUTION]
-> While copying as fast as possible is often most desirable, consider the utilization of your local network and NAS appliance for other, often business-critical tasks.
-
-Copying as fast as possible might not be desirable when there is a risk that the migration could monopolize available resources.
-
-* Consider when it's best in your environment to run migrations: during the day, off-hours, or during weekends.
-* Also consider networking QoS on a Windows Server to throttle the RoboCopy speed and thus the impact on NAS and network.
-* Avoid unnecessary work for the migration tools.
-
-RoboCopy itself also has the ability to insert inter-packet delays by specifying the `/IPG:n` switch where `n` is measured in milliseconds between RoboCopy packets. Using this switch can help avoid monopolization of resources on both IO-constrained NAS devices and highly utilized network links.
-
-`/IPG:n` cannot be used for precise network throttling to a certain Mbps. Use Windows Server Network QoS instead. RoboCopy entirely relies on the SMB protocol for all networking and thus doesn't have the ability to influence the network throughput itself, but it can slow down its utilization.
-
-A similar line of thought applies to the IOPS observed on the NAS. The cluster size on the NAS volume, packet sizes, and an array of other factors influence the observed IOPS. Introducing inter-packet delay is often the easiest way to control the load on the NAS. Test multiple values, for instance from about 20 milliseconds (n=20) to multiples of that, to see how much delay allows your other requirements to be serviced while keeping the RoboCopy speed at its maximum for your constraints.
-
-### Processing speed
-
-RoboCopy will traverse the namespace it is pointed to and evaluate each file and folder for copy. Every file will be evaluated during an initial copy, such as a copy over the local network to a DataBox, and even during catch-up copies over the WAN link to an Azure file share.
-
-We often default to considering bandwidth as the most limiting factor in a migration - and that can be true. But the ability to enumerate a namespace can influence the total time to copy even more for larger namespaces with smaller files. Consider that copying 1 TiB of small files will take considerably longer than copying 1 TiB of fewer but larger files - granted that all other variables are the same.
-
-The cause for this difference is the processing power needed to walk through a namespace. RoboCopy supports multi-threaded copies through the `/MT:n` parameter where n stands for the number of processor threads. So when provisioning a machine specifically for RoboCopy, consider the number of processor cores and their relationship to the thread count they provide. Most common are two threads per core. The core and thread count of a machine is an important data point to decide what multi-thread values `/MT:n` you should specify. Also consider how many RoboCopy jobs you plan to run in parallel on a given machine.
-
-More threads will copy our 1 TiB example of small files considerably faster than fewer threads. At the same time, there is a decreasing return on investment for our 1 TiB of larger files. They will still copy faster the more threads you assign, but you become more likely to be network bandwidth or IO constrained.
-
-### Avoid unnecessary work
-
-Avoid large-scale changes in your namespace. That includes moving files between directories, changing properties at a large scale, or changing permissions (NTFS ACLs) because they often have a cascading change effect when folder ACLs closer to the root of a share are changed. Consequences can be:
-
-* extended RoboCopy job run time due to each file and folder affected by an ACL change needing to be updated
-* effectiveness of using DataBox in the first place can decrease when folder structures change after files had been copied to a DataBox. A RoboCopy job will not be able to "play back" a namespace change and rather will need to purge the files transported to an Azure file share and upload the files in the new folder structure again to Azure.
-
-Another important aspect is to use the RoboCopy tool effectively. With the recommended RoboCopy script, you will create and save a log file for errors. Copy errors can occur - that is normal. These errors often make it necessary to run multiple rounds of a copy tool like RoboCopy. An initial run, say from NAS to DataBox, and one or more extra ones with the /MIR switch to catch and retry files that didn't get copied.
-
-You should be prepared to run multiple rounds of RoboCopy against a given namespace scope. Successive runs will finish faster as they have less to copy but are constrained increasingly by the speed of processing the namespace. When you run multiple rounds, you can speed up each round by not having RoboCopy try unreasonably hard to copy everything at first attempt. These RoboCopy switches can make a significant difference:
-
-* `/R:n` n = how often you retry to copy a failed file and
-* `/W:n` n = how many seconds to wait between retries
-
-`/R:5 /W:5` is a reasonable setting that you can adjust to your liking. In this example, a failed file will be retried five times, with five-second wait time between retries. If the file still fails to copy, the next RoboCopy job will try again and often files that failed because they are in use or because of timeout issues might eventually be copied successfully this way.
- ## Next steps
synapse-analytics Get Started Add Admin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/get-started-add-admin.md
So far in the get started guide, we've focused on activities *you* do in the wor
## Azure RBAC: Owner role for the workspace
+Assign `ryan@contoso.com` to the Azure RBAC **Owner** role on the workspace.
1. Open the Azure portal and open your Synapse workspace. 1. On the left side, select **Access Control (IAM)**.
-1. Click **Add > Add role assignment**.
-1. For **Role**, select **Owner**.
-1. Pick the user you want to assign. In this example, we will use `ryan@contoso.com`.
-1. Click Save.
+1. Add `ryan@contoso.com` to the **Owner** role.
+1. Click **Save**.
## Synapse RBAC: Synapse Administrator role for the workspace+
+Assign `ryan@contoso.com` to the Synapse RBAC **Synapse Administrator** role on the workspace.
+ 1. Open your workspace in Synapse Studio. 1. On the left side, click **Manage** to open the Manage hub. 1. Under **Security**, click **Access control**. 1. Click **Add**.
-1. Leave **Scope** set to Workspace.
-1. For **Role**, choose **Synapse Administrator**.
-1. Then select the user `ryan@contoso.com`.
+1. Leave **Scope** set to **Workspace**.
+1. Add `ryan@contoso.com` to the **Synapse Administrator** role.
1. Then click **Apply**.
-## Primary Storage account: Storage Read/Write permissions
-You need to grant access to the Administrator to use that filesystem
+## Azure RBAC: Role assignments on the primary storage account
+
+Assign `ryan@contoso.com` to the **Owner** role on the workspace's primary storage account.
+Assign `ryan@contoso.com` to the **Azure Storage Blob Data Contributor** role on the workspace's primary storage account.
1. Open the workspace's primary storage account in the Azure portal. 1. On the left side, click **Access Control (IAM)**. 1. Add `ryan@contoso.com` to the **Owner** role.
-3. Add `ryan@contoso.com` to the **Azure Storage Blob Data Contributor** role
+1. Add `ryan@contoso.com` to the **Azure Storage Blob Data Contributor** role.
-## Dedicated SQL pools: dbowner role
+## Dedicated SQL pools: db_owner role
-For all dedicated SQL pools, run the following T-SQL script against the corresponding SQL database.
+Assign `ryan@contoso.com` to the **db_owner** role on each dedicated SQL pool in the workspace. Run the following script on each dedicated SQL pool database:
```
CREATE USER [ryan@contoso.com] FROM EXTERNAL PROVIDER;
EXEC sp_addrolemember 'db_owner', 'ryan@contoso.com';
```
synapse-analytics How To Monitor Using Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/monitoring/how-to-monitor-using-azure-monitor.md
Sign in to the Azure portal and select **Monitor** > **Alerts** to create alerts
Here are the logs emitted by Azure Synapse Analytics workspaces:
-| Log Analytics table name | Log category name | Description |
-|-|-|-|
-| SynapseGatewayApiRequests | GatewayApiRequests | Azure Synapse gateway API requests. |
-| SynapseRbacOperations | SynapseRbacOperations | Azure Synapse role-based access control (SRBAC) operations. |
+| Log Analytics table name | Log category name | Description |
+|--|--|-|
+| SynapseGatewayApiRequests | GatewayApiRequests | Azure Synapse gateway API requests. |
+| SynapseRbacOperations | SynapseRbacOperations | Azure Synapse role-based access control (SRBAC) operations. |
+| SynapseBuiltinSqlReqsEnded | BuiltinSqlReqsEnded | Azure Synapse built-in serverless SQL pool ended requests. |
+| SynapseIntegrationPipelineRuns | IntegrationPipelineRuns | Azure Synapse integration pipeline runs. |
+| SynapseIntegrationActivityRuns | IntegrationActivityRuns | Azure Synapse integration activity runs. |
+| SynapseIntegrationTriggerRuns | IntegrationTriggerRuns | Azure Synapse integration trigger runs. |
### Dedicated SQL pool logs
synapse-analytics Develop Storage Files Storage Access Control https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/develop-storage-files-storage-access-control.md
When accessing storage that is protected with the firewall, you can use **User I
#### User Identity To access storage that is protected with the firewall via User Identity, you can use the Az.Storage PowerShell module.
+#### Configuration via Azure portal
+
+1. Search for your storage account in the Azure portal.
+1. Under **Settings**, select **Networking**.
+1. In the **Resource instances** section, add an exception for your Synapse workspace.
+1. Select **Microsoft.Synapse/workspaces** as the **Resource type**.
+1. Select the name of your workspace as the **Instance name**.
+1. Select **Save**.
+ #### Configuration via PowerShell Follow these steps to configure your storage account firewall and add an exception for Synapse workspace.
time-series-insights Concepts Ingestion Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/time-series-insights/concepts-ingestion-overview.md
Title: 'Ingestion overview - Azure Time Series Insights Gen2 | Microsoft Docs' description: Learn about data ingestion into Azure Time Series Insights Gen2.---+++
time-series-insights Concepts Json Flattening Escaping Rules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/time-series-insights/concepts-json-flattening-escaping-rules.md
Title: 'JSON flattening and escaping rules - Azure Time Series Insights Gen2 | Microsoft Docs' description: Learn about JSON flattening, escaping, and array handling in Azure Time Series Insights Gen2.---+++
The configuration and payload above will produce four columns and six events
| `2020-01-22T16:38:09Z` |`9336971` | ``100231-A-A1`` | 20.560796 | | `2020-01-22T16:38:09Z` | `9336971` | ``100231-A-A9`` | 177 | | `2020-01-22T16:38:09Z` | `9336971` | ``100231-A-A8`` | 420 |
-| `2020-01-22T16:42:14Z` | `9336971` | ``100231-A-A7`` | -30.9918 |
+| `2020-01-22T16:42:14Z` | `9336971` | ``100231-A-A7`` | -30.9918 |
| `2020-01-22T16:42:14Z` | `9336971` | ``100231-A-A4`` | 19.960796 | ### Example C
Time Series ID and timestamp are at the object root\
**Result in Parquet file:**\ The configuration and payload above will produce three columns and one event
-| timestamp | id_string | datapoints_dynamic
+| timestamp | id_string | datapoints_dynamic
| - | - | - | | `2020-11-01T10:00:00.000Z` | `800500054755`| ``[{"value": 120},{"value":124}]`` |
time-series-insights Concepts Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/time-series-insights/concepts-storage.md
Title: 'Storage overview - Azure Time Series Insights Gen2 | Microsoft Docs' description: Learn about data storage in Azure Time Series Insights Gen2.---+++
Don't delete your Azure Time Series Insights Gen2 files. Manage related data fro
### Parquet file format and folder structure
-Parquet is an open-source columnar file format designed for efficient storage and performance. Azure Time Series Insights Gen2 uses Parquet to enable Time Series ID-based query performance at scale.
+Parquet is an open-source columnar file format designed for efficient storage and performance. Azure Time Series Insights Gen2 uses Parquet to enable Time Series ID-based query performance at scale.
For more information about the Parquet file type, read the [Parquet documentation](https://parquet.apache.org/documentation/latest/).
time-series-insights Concepts Streaming Ingestion Event Sources