Updates from: 03/27/2021 04:08:01
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Partner Dynamics 365 Fraud Protection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/partner-dynamics-365-fraud-protection.md
# Tutorial: Configure Microsoft Dynamics 365 Fraud Protection with Azure Active Directory B2C
-In this sample tutorial, we provide guidance on how to integrate [Microsoft Dynamics 365 Fraud Protection](/dynamics365/fraud-protection/overview) (DFP) with the Azure Active Directory (AD) B2C.
+In this sample tutorial, we provide guidance on how to integrate [Microsoft Dynamics 365 Fraud Protection](https://docs.microsoft.com/dynamics365/fraud-protection/overview) (DFP) with the Azure Active Directory (AD) B2C.
Microsoft DFP provides clients with the capability to assess whether attempts to create new accounts or to sign in to a client's ecosystem are fraudulent. Customers can use the Microsoft DFP assessment to block or challenge suspicious attempts to create new fake accounts or to compromise existing accounts. Account protection includes artificial intelligence empowered device fingerprinting, APIs for real-time risk assessment, a rule and list experience to optimize the risk strategy to the client's business needs, and a scorecard to monitor fraud protection effectiveness and trends in the client's ecosystem.
Configure the application settings in the [App service in Azure](../app-service/
|FraudProtectionSettings:InstanceId | Microsoft DFP Configuration | |
|FraudProtectionSettings:DeviceFingerprintingCustomerId | Your Microsoft device fingerprinting customer ID | |
| FraudProtectionSettings:ApiBaseUrl | Your Base URL from Microsoft DFP Portal | Remove '-int' to call the production API instead|
-| TokenProviderConfig: Resource | | Remove '-int' to call the production API instead|
+| TokenProviderConfig: Resource | Your Base URL - https://api.dfp.dynamics-int.com | Remove '-int' to call the production API instead|
| TokenProviderConfig:ClientId | Your Fraud Protection merchant Azure AD client app ID | |
| TokenProviderConfig:Authority | https://login.microsoftonline.com/<directory_ID> | Your Fraud Protection merchant Azure AD tenant authority |
| TokenProviderConfig:CertificateThumbprint* | The thumbprint of the certificate to use to authenticate against your merchant Azure AD client app | |
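As an illustrative sketch only (the setting names and the `-int` base URL come from the table above; the resource group, app name, and placeholder values are assumptions), the same settings could be applied with the Az PowerShell module:

```powershell
# Hedged sketch: apply the DFP-related app settings to an App Service app.
# Note: depending on module version, -AppSettings may replace the app's existing
# app-settings collection, so include any settings you need to keep.
Set-AzWebApp -ResourceGroupName "my-resource-group" -Name "my-dfp-sample-app" -AppSettings @{
    "FraudProtectionSettings:InstanceId"                     = "<your Microsoft DFP instance ID>"
    "FraudProtectionSettings:DeviceFingerprintingCustomerId" = "<your device fingerprinting customer ID>"
    "FraudProtectionSettings:ApiBaseUrl"                     = "https://api.dfp.dynamics-int.com"
    "TokenProviderConfig:Resource"                           = "https://api.dfp.dynamics-int.com"
    "TokenProviderConfig:ClientId"                           = "<your merchant Azure AD client app ID>"
    "TokenProviderConfig:Authority"                          = "https://login.microsoftonline.com/<directory_ID>"
    "TokenProviderConfig:CertificateThumbprint"              = "<certificate thumbprint>"
}
```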
active-directory Concept Authentication Oath Tokens https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/concept-authentication-oath-tokens.md
Previously updated : 03/25/2021 Last updated : 03/26/2021
Once tokens are acquired they must be uploaded in a comma-separated values (CSV)
```csv upn,serial number,secret key,time interval,manufacturer,model
-Helga@contoso.com,1234567,2234567abcdef1234567abcdef,60,Contoso,HardwareKey
+Helga@contoso.com,1234567,2234567abcdef2234567abcdef,60,Contoso,HardwareKey
``` > [!NOTE]
active-directory Concept Conditional Access Cloud Apps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/concept-conditional-access-cloud-apps.md
Cloud apps or actions are a key signal in a Conditional Access policy. Condition
Many of the existing Microsoft cloud applications are included in the list of applications you can select from.
-Administrators can assign a Conditional Access policy to the following cloud apps from Microsoft. Some apps like Office 365 and Microsoft Azure Management include multiple related child apps or services. The following list is not exhaustive and is subject to change.
+Administrators can assign a Conditional Access policy to the following cloud apps from Microsoft. Some apps like Office 365 and Microsoft Azure Management include multiple related child apps or services. We continually add more apps, so the following list is not exhaustive and is subject to change.
- [Office 365](#office-365) - Azure Analysis Services
Administrators can assign a Conditional Access policy to the following cloud app
- Virtual Private Network (VPN) - Windows Defender ATP
+Applications that are available to Conditional Access have gone through an onboarding and validation process. This does not include all Microsoft apps, as many are backend services and not meant to have policy directly applied to them. If you are looking for an application that is missing, you can contact the specific application team or make a request on [UserVoice](https://feedback.azure.com/forums/169401-azure-active-directory?category_id=167259).
+ ### Office 365 Microsoft 365 provides cloud-based productivity and collaboration services like Exchange, SharePoint, and Microsoft Teams. Microsoft 365 cloud services are deeply integrated to ensure smooth and collaborative experiences. This integration can cause confusion when creating policies as some apps such as Microsoft Teams have dependencies on others such as SharePoint or Exchange.
active-directory Msal Android Shared Devices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-android-shared-devices.md
Title: Shared device mode for Android devices
-description: Learn how to enable shared device mode to allow Frontline Workers to share an Android device
+description: Learn how to enable shared device mode to allow frontline workers to share an Android device
# Shared device mode for Android devices
-Frontline Workers such as retail associates, flight crew members, and field service workers often use a shared mobile device to do their work. That becomes problematic when they start sharing passwords or pin numbers to access customer and business data on the shared device.
+Frontline workers such as retail associates, flight crew members, and field service workers often use a shared mobile device to do their work. That becomes problematic when they start sharing passwords or pin numbers to access customer and business data on the shared device.
Shared device mode allows you to configure an Android device so that it can be easily shared by multiple employees. Employees can sign in and access customer information quickly. When they are finished with their shift or task, they can sign out of the device and it will be immediately ready for the next employee to use.
active-directory Msal Ios Shared Devices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-ios-shared-devices.md
Title: Shared device mode for iOS devices
-description: Learn how to enable shared device mode to allow Frontline Workers to share an iOS device
+description: Learn how to enable shared device mode to allow frontline workers to share an iOS device
>[!IMPORTANT] > This feature [!INCLUDE [PREVIEW BOILERPLATE](../../../includes/active-directory-develop-preview.md)]
-Frontline Workers such as retail associates, flight crew members, and field service workers often use a shared mobile device to perform their work. These shared devices can present security risks if your users share their passwords or PINs, intentionally or not, to access customer and business data on the shared device.
+Frontline workers such as retail associates, flight crew members, and field service workers often use a shared mobile device to perform their work. These shared devices can present security risks if your users share their passwords or PINs, intentionally or not, to access customer and business data on the shared device.
Shared device mode allows you to configure an iOS 13 or higher device to be more easily and securely shared by employees. Employees can sign in and access customer information quickly. When they're finished with their shift or task, they can sign out of the device and it's immediately ready for use by the next employee.
On a user change, you should ensure both the previous user's data is cleared and
### Detect shared device mode
-Detecting shared device mode is important for your application. Many applications will require a change in their user experience (UX) when the application is used on a shared device. For example, your application might have a "Sign-Up" feature, which isn't appropriate for a Frontline Worker because they likely already have an account. You may also want to add extra security to your application's handling of data if it's in shared device mode.
+Detecting shared device mode is important for your application. Many applications will require a change in their user experience (UX) when the application is used on a shared device. For example, your application might have a "Sign-Up" feature, which isn't appropriate for a frontline worker because they likely already have an account. You may also want to add extra security to your application's handling of data if it's in shared device mode.
Use the `getDeviceInformationWithParameters:completionBlock:` API in the `MSALPublicClientApplication` to determine if an app is running on a device in shared device mode.
signoutParameters.signoutFromBrowser = YES; // Only needed for Public Preview.
## Next steps
-To see shared device mode in action, the following code sample on GitHub includes an example of running a Frontline Worker app on an iOS device in shared device mode:
+To see shared device mode in action, the following code sample on GitHub includes an example of running a frontline worker app on an iOS device in shared device mode:
[MSAL iOS Swift Microsoft Graph API Sample](https://github.com/Azure-Samples/ms-identity-mobile-apple-swift-objc)
active-directory Msal Shared Devices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-shared-devices.md
Title: Shared device mode overview
-description: Learn about shared device mode to enable device sharing for your Frontline Workers.
+description: Learn about shared device mode to enable device sharing for your frontline workers.
# Overview of shared device mode
-Shared device mode is a feature of Azure Active Directory that allows you to build applications that support Frontline Workers and enable shared device mode on the devices deployed to them.
+Shared device mode is a feature of Azure Active Directory that allows you to build applications that support frontline workers and enable shared device mode on the devices deployed to them.
>[!IMPORTANT] > Shared device mode for iOS [!INCLUDE [PREVIEW BOILERPLATE](../../../includes/active-directory-develop-preview.md)]
-## What are Frontline Workers?
+## What are frontline workers?
-Frontline Workers are retail employees, maintenance and field agents, medical personnel, and other users that don't sit in front of a computer or use corporate email for collaboration. The following sections introduce the aspects and challenges of supporting Frontline Workers, followed by an introduction to the features provided by Microsoft that enable your application for use by an organization's Frontline Workers.
+Frontline workers are retail employees, maintenance and field agents, medical personnel, and other users that don't sit in front of a computer or use corporate email for collaboration. The following sections introduce the aspects and challenges of supporting frontline workers, followed by an introduction to the features provided by Microsoft that enable your application for use by an organization's frontline workers.
-### Challenges of supporting Frontline Workers
+### Challenges of supporting frontline workers
-Enabling Frontline Worker workflows includes challenges not usually presented by typical information workers. Such challenges can include high turnover rate and less familiarity with an organization's core productivity tools. To empower their Frontline Workers, organizations are adopting different strategies. Some are adopting a bring-your-own-device (BYOD) strategy in which their employees use business apps on their personal phone, while others provide their employees with shared devices like iPads or Android tablets.
+Enabling frontline worker workflows includes challenges not usually presented by typical information workers. Such challenges can include high turnover rate and less familiarity with an organization's core productivity tools. To empower their frontline workers, organizations are adopting different strategies. Some are adopting a bring-your-own-device (BYOD) strategy in which their employees use business apps on their personal phone, while others provide their employees with shared devices like iPads or Android tablets.
### Supporting multiple users on devices designed for one user
Azure Active Directory enables these scenarios with a feature called **shared de
As mentioned, shared device mode is a feature of Azure Active Directory that enables you to:
-* Build applications that support Frontline Workers
-* Deploy devices to Frontline Workers and turn on shared device mode
+* Build applications that support frontline workers
+* Deploy devices to frontline workers and turn on shared device mode
-### Build applications that support Frontline Workers
+### Build applications that support frontline workers
-You can support Frontline Workers in your applications by using the Microsoft Authentication Library (MSAL) and [Microsoft Authenticator app](../user-help/user-help-auth-app-overview.md) to enable a device state called *shared device mode*. When a device is in shared device mode, Microsoft provides your application with information to allow it to modify its behavior based on the state of the user on the device, protecting user data.
+You can support frontline workers in your applications by using the Microsoft Authentication Library (MSAL) and [Microsoft Authenticator app](../user-help/user-help-auth-app-overview.md) to enable a device state called *shared device mode*. When a device is in shared device mode, Microsoft provides your application with information to allow it to modify its behavior based on the state of the user on the device, protecting user data.
Supported features are:
Your users depend on you to ensure their data isn't leaked to another user. Shar
For details on how to modify your applications to support shared device mode, see the [Next steps](#next-steps) section at the end of this article.
-### Deploy devices to Frontline Workers and turn on shared device mode
+### Deploy devices to frontline workers and turn on shared device mode
-Once your applications support shared device mode and include the required data and security changes, you can advertise them as being usable by Frontline Workers.
+Once your applications support shared device mode and include the required data and security changes, you can advertise them as being usable by frontline workers.
An organization's device administrators are able to deploy their devices and your applications to their stores and workplaces through a mobile device management (MDM) solution like Microsoft Intune. Part of the provisioning process is marking the device as a *Shared Device*. Administrators configure shared device mode by deploying the [Microsoft Authenticator app](../user-help/user-help-auth-app-overview.md) and setting shared device mode through configuration parameters. After performing these steps, all applications that support shared device mode will use the Microsoft Authenticator application to manage their user state and provide security features for the device and organization. ## Next steps
-We support iOS and Android platforms for shared device mode. Review the documentation below for your platform to begin supporting Frontline Workers in your applications.
+We support iOS and Android platforms for shared device mode. Review the documentation below for your platform to begin supporting frontline workers in your applications.
* [Supporting shared device mode for iOS](msal-ios-shared-devices.md) * [Supporting shared device mode for Android](msal-android-shared-devices.md)
active-directory Quickstart Register App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-register-app.md
Follow these steps to create the app registration:
:::image type="content" source="media/quickstart-register-app/portal-02-app-reg-01.png" alt-text="Screenshot of the Azure portal in a web browser, showing the Register an application pane.":::
-When registration finishes, the Azure portal displays the app registration's **Overview** pane. You see the **Application (client) ID**. Also called the *client ID*, this value uniquely identifies your application in the Microsoft identity platform.
+When registration finishes, the Azure portal displays the app registration's **Overview** pane. You see the **Application (client) ID**. Also called the *client ID*, this value uniquely identifies your application in the Microsoft identity platform.
+
+> [!IMPORTANT]
+> New app registrations are hidden from users by default. When you are ready for users to see the app on their [My Apps page](../user-help/my-apps-portal-end-user-access.md), you can enable it. To enable the app, in the Azure portal navigate to **Azure Active Directory** > **Enterprise applications** and select the app. Then, on the **Properties** page, set **Visible to users?** to **Yes**.
Your application's code, or more typically an authentication library used in your application, also uses the client ID. The ID is used as part of validating the security tokens it receives from the identity platform.
active-directory Service Accounts Principal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/service-accounts-principal.md
For more information see [Get-AzureADServicePrincipal](/powershell/module/azurea
To assess the security of your service principals, ensure you evaluate privileges and credential storage. Mitigate potential challenges using the following information.+ |Challenges | Mitigations| | - | - | | Detect the user that consented to a multi-tenant appΓÇï, and detect illicit consent grants to a multi-tenant app | Run the following PowerShell to find multi-tenant apps.<br>`Get-AzureADServicePrincipal -All:$true ? {$_.Tags -eq WindowsAzureActiveDirectoryIntegratedApp"}`<br>Disable user consent. ΓÇï<br>Allow user consent from verified publishers, for selected permissions (recommended) <br> Use conditional access to block service principals from untrusted locations. Configure them under the user context, and their tokens should be used to trigger the service principal.|
When using Microsoft Graph, check the documentation of the specific API, [like i
[Governing Azure service accounts](service-accounts-governing-azure.md)
-[Introduction to on-premises service accounts](service-accounts-on-premises.md)
+[Introduction to on-premises service accounts](service-accounts-on-premises.md)
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/whats-new.md
Customers can work around this requirement for testing purposes by using a featu
-### Public Preview - Customize and configure Android shared devices for Frontline Workers at scale
+### Public Preview - Customize and configure Android shared devices for frontline workers at scale
**Type:** New feature **Service category:** Device Registration and Management **Product capability:** Identity Security & Protection
-Azure AD and Microsoft Endpoint Manager teams have combined to bring the capability to customize, scale, and secure your Frontline Worker devices.
+Azure AD and Microsoft Endpoint Manager teams have combined to bring the capability to customize, scale, and secure your frontline worker devices.
The following preview capabilities will allow you to:
- Provision Android shared devices at scale with Microsoft Endpoint Manager
- Secure your access for shift workers using device-based conditional access
- Customize sign-in experiences for the shift workers with Managed Home Screen
-To learn more, refer to [Customize and configure shared devices for Frontline Workers at scale](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/customize-and-configure-shared-devices-for-firstline-workers-at/ba-p/1751708).
+To learn more, refer to [Customize and configure shared devices for frontline workers at scale](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/customize-and-configure-shared-devices-for-firstline-workers-at/ba-p/1751708).
active-directory How To Connect Emergency Ad Fs Certificate Rotation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-emergency-ad-fs-certificate-rotation.md
+
+ Title: Emergency Rotation of the AD FS certificates | Microsoft Docs
+description: This article explains how to revoke and update AD FS certificates immediately.
+
+documentationcenter: ''
+Last updated : 03/22/2021
+# Emergency Rotation of the AD FS certificates
+If you need to rotate the AD FS certificates immediately, follow the steps outlined in this section.
+
+> [!IMPORTANT]
+> Conducting the steps below in the AD FS environment will revoke the old certificates immediately. Because this is done immediately, the time usually allowed for your federation partners to consume your new certificate is bypassed. This might result in a service outage as trusts update to use the new certificates. The outage should resolve once all of the federation partners have the new certificates.
+
+> [!NOTE]
+> Microsoft highly recommends using a Hardware Security Module (HSM) to protect and secure certificates.
+> For more information see [Hardware Security Module](https://docs.microsoft.com/windows-server/identity/ad-fs/deployment/best-practices-securing-ad-fs#hardware-security-module-hsm) under best practices for securing AD FS.
+
+## Determine your Token Signing Certificate thumbprint
+To revoke the old token signing certificate that AD FS is currently using, you need to determine its thumbprint. To do this, use the following steps:
+
+ 1. Connect to the Microsoft Online Service
+`PS C:\>Connect-MsolService`
+ 2. Document both your on-premises and cloud token signing certificate thumbprints and expiration dates.
+`PS C:\>Get-MsolFederationProperty -DomainName <domain>`
+ 3. Copy down the thumbprint. It will be used later to remove the existing certificates.
+
+You can also get the thumbprint by using AD FS Management: navigate to **Service** > **Certificates**, right-click the certificate, select **View Certificate**, and then select the **Details** tab.
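Alternatively, as a hedged sketch (assuming the ADFS PowerShell module is available on the federation server), you can list the token-signing certificates and their thumbprints directly:

```powershell
# Sketch only: show the current token-signing certificates, which one is primary, and their expiry.
Get-AdfsCertificate -CertificateType Token-Signing |
    Select-Object IsPrimary, Thumbprint, @{ Name = "NotAfter"; Expression = { $_.Certificate.NotAfter } }
```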
+
+## Determine whether AD FS renews the certificates automatically
+By default, AD FS is configured to generate token signing and token decryption certificates automatically, both at the initial configuration time and when the certificates are approaching their expiration date.
+
+You can run the following Windows PowerShell command: `PS C:\>Get-AdfsProperties | FL AutoCert*, Certificate*`.
+
+The AutoCertificateRollover property describes whether AD FS is configured to renew token signing and token decrypting certificates automatically. If AutoCertificateRollover is set to TRUE, follow the instructions outlined below in [Generating new self-signed certificate if AutoCertificateRollover is set to TRUE](#generating-new-self-signed-certificate-if-autocertificaterollover-is-set-to-true). If AutoCertificateRollover is set to FALSE, follow the instructions outlined below in [Generating new certificates manually if AutoCertificateRollover is set to FALSE](#generating-new-certificates-manually-if-autocertificaterollover-is-set-to-false).
++
+## Generating new self-signed certificate if AutoCertificateRollover is set to TRUE
+In this section, you will be creating **two** token-signing certificates. The first will use the **-urgent** flag, which will replace the current primary certificate immediately. The second will be used for the secondary certificate.
+
+>[!IMPORTANT]
+>The reason we are creating two certificates is because Azure holds on to information regarding the previous certificate. By creating a second one, we are forcing Azure to release information about the old certificate and replace it with information about the second certificate.
+>
+>If you do not create the second certificate and update Azure with it, it may be possible for the old token-signing certificate to authenticate users.
+
+You can use the following steps to generate the new token-signing certificates.
+
+ 1. Ensure that you are logged on to the primary AD FS server.
+ 2. Open Windows PowerShell as an administrator.
+ 3. Check to make sure that your AutoCertificateRollover is set to True.
+`PS C:\>Get-AdfsProperties | FL AutoCert*, Certificate*`
+ 4. To generate a new token signing certificate: `Update-ADFSCertificate -CertificateType token-signing -Urgent`.
+ 5. Verify the update by running the following command: `Get-ADFSCertificate -CertificateType token-signing`
+ 6. Now generate the second token signing certificate: `Update-ADFSCertificate -CertificateType token-signing`.
+ 7. You can verify the update by running the following command again: `Get-ADFSCertificate -CertificateType token-signing`
++
+## Generating new certificates manually if AutoCertificateRollover is set to FALSE
+If you are not using the default automatically generated, self-signed token signing and token decryption certificates, you must renew and configure these certificates manually. This involves creating two new token-signing certificates and importing them. Then you promote one to primary, revoke the old certificate and configure the second certificate as the secondary certificate.
+
+First, you must obtain two new certificates from your certificate authority and import them into the local machine personal certificate store on each federation server. For instructions, see the [Import a Certificate](https://technet.microsoft.com/library/cc754489.aspx) article.
+
+>[!IMPORTANT]
+>The reason we are creating two certificates is because Azure holds on to information regarding the previous certificate. By creating a second one, we are forcing Azure to release information about the old certificate and replace it with information about the second certificate.
+>
+>If you do not create the second certificate and update Azure with it, it may be possible for the old token-signing certificate to authenticate users.
+
+### To configure a new certificate as a secondary certificate
+You must configure one certificate as the secondary AD FS token signing or decryption certificate and then promote it to the primary certificate.
+
+1. Once you have imported the certificate, open the **AD FS Management** console.
+2. Expand **Service** and then select **Certificates**.
+3. In the Actions pane, click **Add Token-Signing Certificate**.
+4. Select the new certificate from the list of displayed certificates, and then click OK.
+
+### To promote the new certificate from secondary to primary
+Now that the new certificate has been imported and configured in AD FS, you need to set it as the primary certificate.
+1. Open the **AD FS Management** console.
+2. Expand **Service** and then select **Certificates**.
+3. Click the secondary token signing certificate.
+4. In the **Actions** pane, click **Set As Primary**. Click Yes at the confirmation prompt.
+5. Once you have promoted the new certificate to primary, you should remove the old certificate because it can still be used. See the [Remove your old certificates](#remove-your-old-certificates) section below.
+
+### To configure the second certificate as a secondary certificate
+Now that you have added the first certificate, made it primary, and removed the old one, import the second certificate. Then configure it as the secondary AD FS token signing certificate.
+
+1. Once you have imported the certificate, open the **AD FS Management** console.
+2. Expand **Service** and then select **Certificates**.
+3. In the Actions pane, click **Add Token-Signing Certificate**.
+4. Select the new certificate from the list of displayed certificates, and then click OK.
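If you prefer PowerShell over the AD FS Management console, a hedged equivalent of the steps above (assuming the new certificates are already imported into the local machine store and `<new-thumbprint-1>`/`<new-thumbprint-2>` are their thumbprints) might be:

```powershell
# Sketch only: add the first new certificate, promote it to primary, then add the second as secondary.
Add-AdfsCertificate -CertificateType Token-Signing -Thumbprint "<new-thumbprint-1>"
Set-AdfsCertificate -CertificateType Token-Signing -Thumbprint "<new-thumbprint-1>" -IsPrimary
Add-AdfsCertificate -CertificateType Token-Signing -Thumbprint "<new-thumbprint-2>"
# Remove the old certificate as described in the "Remove your old certificates" section.
```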
+
+## Update Azure AD with the new token-signing certificate
+Open the Microsoft Azure Active Directory Module for Windows PowerShell. Alternatively, open Windows PowerShell and then run the command `Import-Module msonline`
+
+Connect to Azure AD by running the following command: `Connect-MsolService`, and then enter your global administrator credentials.
+
+>[!Note]
+> If you are running these commands on a computer that is not the primary federation server, enter the following command first: `Set-MsolADFSContext -Computer <servername>`. Replace `<servername>` with the name of the AD FS server. Then enter the administrator credentials for the AD FS server when prompted.
+
+Optionally, verify whether an update is required by checking the current certificate information in Azure AD. To do so, run the following command: `Get-MsolFederationProperty`. Enter the name of the Federated domain when prompted.
+
+To update the certificate information in Azure AD, run the following command: `Update-MsolFederatedDomain` and then enter the domain name when prompted.
+
+>[!Note]
+> If you see an error when running this command, run the following command: `Update-MsolFederatedDomain -SupportMultipleDomain`, and then enter the domain name when prompted.
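Putting those commands together, a hedged end-to-end sketch (assuming the MSOnline module is installed and `contoso.com` stands in for your federated domain) looks like this:

```powershell
# Sketch only: update Azure AD with the new token-signing certificate information.
Import-Module MSOnline
Connect-MsolService                                    # sign in with global administrator credentials
# Set-MsolADFSContext -Computer <servername>           # only needed when not on the primary federation server
Get-MsolFederationProperty -DomainName "contoso.com"   # optional: compare on-premises and cloud certificate info
Update-MsolFederatedDomain -DomainName "contoso.com"   # add -SupportMultipleDomain if you see an error
```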
+
+## Replace SSL certificates
+In the event that you need to replace your token-signing certificate because of a compromise, you should also revoke and replace the SSL certificates for AD FS and your WAP servers.
+
+Revoking your SSL certificates must be done at the certificate authority (CA) that issued the certificate. These certificates are often issued by third-party providers such as GoDaddy. For an example, see "Revoke a certificate" in the GoDaddy help documentation (SSL Certificates - GoDaddy Help US). For more information, see [How Certificate Revocation Works](https://docs.microsoft.com/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/ee619754(v=ws.10)?redirectedfrom=MSDN).
+
+Once the old SSL certificate has been revoked and a new one issued, you can replace the SSL certificates. For more information, see [Replacing the SSL certificate for AD FS](https://docs.microsoft.com/windows-server/identity/ad-fs/operations/manage-ssl-certificates-ad-fs-wap#replacing-the-ssl-certificate-for-ad-fs).
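As a hedged sketch (assuming the replacement SSL certificate is already installed in the local machine store on each server and `<ssl-thumbprint>` is its thumbprint), the replacement typically uses the following cmdlets:

```powershell
# Sketch only: bind the new SSL certificate on the AD FS and Web Application Proxy servers.
Set-AdfsSslCertificate -Thumbprint "<ssl-thumbprint>"                  # run on each AD FS server
Set-WebApplicationProxySslCertificate -Thumbprint "<ssl-thumbprint>"   # run on each WAP server
```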
++
+## Remove your old certificates
+Once you have replaced your old certificates, you should remove the old certificates because they can still be used. To do this, follow the steps below:
+
+1. Ensure that you are logged on to the primary AD FS server.
+2. Open Windows PowerShell as an administrator.
+3. To remove the old token signing certificate: `Remove-ADFSCertificate -CertificateType token-signing -Thumbprint <thumbprint>`.
+
+## Updating federation partners who can consume Federation Metadata
+If you have renewed and configured a new token signing or token decryption certificate, you must make sure that all your federation partners (resource organization or account organization partners that are represented in your AD FS by relying party trusts and claims provider trusts) have picked up the new certificates.
+
+## Updating federation partners who can NOT consume Federation Metadata
+If your federation partners cannot consume your federation metadata, you must manually send them the public key of your new token-signing / token-decrypting certificate. Send your new certificate public key (.cer file or .p7b if you wish to include the entire chain) to all of your resource organization or account organization partners (represented in your AD FS by relying party trusts and claims provider trusts). Have the partners implement changes on their side to trust the new certificates.
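To hand the public key to those partners, one hedged option (assuming the new certificate is in the local machine personal store and `<new-thumbprint>` is its thumbprint) is to export it as a .cer file:

```powershell
# Sketch only: export the public portion of the new token-signing certificate for federation partners.
$cert = Get-ChildItem Cert:\LocalMachine\My | Where-Object { $_.Thumbprint -eq "<new-thumbprint>" }
Export-Certificate -Cert $cert -FilePath "C:\temp\new-token-signing.cer"
```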
+++
+## Revoke refresh tokens via PowerShell
+Now you want to revoke refresh tokens for users who may have them and force them to sign in again to get new tokens. This signs users out of their phones, current webmail sessions, and other apps that are using tokens and refresh tokens. For more information, see [Revoke-AzureADUserAllRefreshToken](https://docs.microsoft.com/powershell/module/azuread/revoke-azureaduserallrefreshtoken?view=azureadps-2.0&preserve-view=true), and you can also reference how to [Revoke user access in Azure Active Directory](../../active-directory/enterprise-users/users-revoke-access.md).
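A hedged sketch of the revocation itself (assuming the AzureAD PowerShell module is installed and `user@contoso.com` is a placeholder for an affected user) follows:

```powershell
# Sketch only: revoke all refresh tokens for a single user, forcing them to sign in again.
Connect-AzureAD
$user = Get-AzureADUser -ObjectId "user@contoso.com"
Revoke-AzureADUserAllRefreshToken -ObjectId $user.ObjectId
```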
+
+## Next steps
+
+- [Managing SSL Certificates in AD FS and WAP in Windows Server 2016](https://docs.microsoft.com/windows-server/identity/ad-fs/operations/manage-ssl-certificates-ad-fs-wap#replacing-the-ssl-certificate-for-ad-fs)
+- [Obtain and Configure Token Signing and Token Decryption Certificates for AD FS](https://docs.microsoft.com/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/dn781426(v=ws.11)#updating-federation-partners)
+- [Renew federation certificates for Microsoft 365 and Azure Active Directory](how-to-connect-fed-o365-certs.md)
active-directory How To Connect Fed O365 Certs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-fed-o365-certs.md
## Overview For successful federation between Azure Active Directory (Azure AD) and Active Directory Federation Services (AD FS), the certificates used by AD FS to sign security tokens to Azure AD should match what is configured in Azure AD. Any mismatch can lead to broken trust. Azure AD ensures that this information is kept in sync when you deploy AD FS and Web Application Proxy (for extranet access).
+> [!NOTE]
+> This article provides information on managing your federation certificates. For information on emergency rotation, see [Emergency Rotation of the AD FS certificates](how-to-connect-emergency-ad-fs-certificate-rotation.md).
+ This article provides you additional information to manage your token signing certificates and keep them in sync with Azure AD, in the following cases:
* You are not deploying the Web Application Proxy, and therefore the federation metadata is not available in the extranet.
* You are not using the default configuration of AD FS for token signing certificates.
* You are using a third-party identity provider.
+> [!IMPORTANT]
+> Microsoft highly recommends using a Hardware Security Module (HSM) to protect and secure certificates.
+> For more information see [Hardware Security Module](https://docs.microsoft.com/windows-server/identity/ad-fs/deployment/best-practices-securing-ad-fs#hardware-security-module-hsm) under best practices for securing AD FS.
+ ## Default configuration of AD FS for token signing certificates The token signing and token decrypting certificates are usually self-signed certificates, and are good for one year. By default, AD FS includes an auto-renewal process called **AutoCertificateRollover**. If you are using AD FS 2.0 or later, Microsoft 365 and Azure AD automatically update your certificate before it expires.
active-directory Reference Connect Health Version History https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/reference-connect-health-version-history.md
The Azure Active Directory team regularly updates Azure AD Connect Health with n
Azure AD Connect Health for Sync is integrated with Azure AD Connect installation. Read more about [Azure AD Connect release history](./reference-connect-version-history.md) For feature feedback, vote at [Connect Health User Voice channel](https://feedback.azure.com/forums/169401-azure-active-directory/filters/new?category_id=165591)
+## March 2021
+**Agent Update**
+
+- Azure AD Connect Health agent for AD FS (version 3.1.95.0)
+
+ - Fix to resolve NT4 formatted username to a UPN during sign-in events.
+ - Fix to identify incorrect application identifier scenarios with a dedicated error code.
+ - Changes to add a new property for OAuth client identifier.
+ - Fix to display correct values in the **Protocol** and **Authentication Type** fields in Azure AD Sign-In Report for certain sign-in scenarios.
+ - Fix to display IP addresses in Azure AD Sign-In Report's IP chain field in order of the request.
+ - Changes to introduce a new field to differentiate if secondary authentication was requested during a sign-in.
+ - Fix for AD FS application identifier property to display in Azure AD Sign-In Report.
+ ## April 2020 **Agent Update** - Azure AD Connect Health agent for AD FS (version 3.1.77.0)
- 1. Bug fix for "Invalid Service Principal Name (SPN) for AD FS service" alert, for which the alert was reporting incorrectly.
 + - Bug fix for "Invalid Service Principal Name (SPN) for AD FS service" alert, for which the alert was reporting incorrectly.
## July 2019
For feature feedback, vote at [Connect Health User Voice channel](https://feedba
* Simpler Agent Deployment using Azure AD Global Admin credentials. ## Next steps
-Learn more about [Monitor your on-premises identity infrastructure and synchronization services in the cloud](./whatis-azure-ad-connect.md).
+Learn more about [Monitor your on-premises identity infrastructure and synchronization services in the cloud](./whatis-azure-ad-connect.md).
active-directory Application Proxy Integrate With Power Bi https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-proxy-integrate-with-power-bi.md
Now you're ready to configure Azure AD Application Proxy.
1. Publish Report Services through Application Proxy with the following settings. For step-by-step instructions on how to publish an application through Application Proxy, see [Publishing applications using Azure AD Application Proxy](application-proxy-add-on-premises-application.md#add-an-on-premises-app-to-azure-ad). - **Internal URL**: Enter the URL to the Report Server that the connector can reach in the corporate network. Make sure this URL is reachable from the server the connector is installed on. A best practice is using a top-level domain such as `https://servername/` to avoid issues with subpaths published through Application Proxy. For example, use `https://servername/` and not `https://servername/reports/` or `https://servername/reportserver/`. > [!NOTE]
- > We recommend using a secure HTTPS connection to the Report Server. See [Configure SSL connections on a native mode report server](/sql/reporting-services/security/configure-ssl-connections-on-a-native-mode-report-server?view=sql-server-2017) for information how to.
+ > We recommend using a secure HTTPS connection to the Report Server. See [Configure SSL connections on a native mode report server](/sql/reporting-services/security/configure-ssl-connections-on-a-native-mode-report-server) for information how to.
- **External URL**: Enter the public URL the Power BI mobile app will connect to. For example, it may look like `https://reports.contoso.com` if a custom domain is used. To use a custom domain, upload a certificate for the domain, and point a DNS record to the default msappproxy.net domain for your application. For detailed steps, see [Working with custom domains in Azure AD Application Proxy](application-proxy-configure-custom-domain.md). - **Pre-authentication Method**: Azure Active Directory
active-directory Iqualify Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/iqualify-tutorial.md
To configure the integration of iQualify LMS into Azure AD, you need to add iQua
1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
- ![The Azure Active Directory button](common/select-azuread.png)
+ ![The Azure Active Directory button](common/select-azuread.png)
2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
- ![The Enterprise applications blade](common/enterprise-applications.png)
+ ![The Enterprise applications blade](common/enterprise-applications.png)
3. To add new application, click **New application** button on the top of dialog.
- ![The New application button](common/add-new-app.png)
+ ![The New application button](common/add-new-app.png)
4. In the search box, type **iQualify LMS**, select **iQualify LMS** from result panel then click **Add** button to add the application.
- ![iQualify LMS in the results list](common/search-new-app.png)
+ ![iQualify LMS in the results list](common/search-new-app.png)
## Configure and test Azure AD single sign-on
To configure Azure AD single sign-on with iQualify LMS, perform the following st
1. In the [Azure portal](https://portal.azure.com/), on the **iQualify LMS** application integration page, select **Single sign-on**.
- ![Configure single sign-on link](common/select-sso.png)
+ ![Configure single sign-on link](common/select-sso.png)
2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
- ![Single sign-on select mode](common/select-saml-option.png)
+ ![Single sign-on select mode](common/select-saml-option.png)
3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
4. On the **Basic SAML Configuration** section, If you wish to configure the application in **IDP** initiated mode, perform the following steps:
- ![Screenshot shows the Basic SAML Configuration, where you can enter Identifier, Reply U R L, and select Save.](common/idp-intiated.png)
+ ![Screenshot shows the Basic SAML Configuration, where you can enter Identifier, Reply U R L, and select Save.](common/idp-intiated.png)
+
+ 1. In the **Identifier** text box, type a URL using the following pattern:
- a. In the **Identifier** text box, type a URL using the following pattern:
- | |
- |--|--|
- | Production Environment: `https://<yourorg>.iqualify.com/`|
- | Test Environment: `https://<yourorg>.iqualify.io`|
+ * Production Environment: `https://<yourorg>.iqualify.com/`
+ * Test Environment: `https://<yourorg>.iqualify.io`
- b. In the **Reply URL** text box, type a URL using the following pattern:
- | |
- |--|--|
- | Production Environment: `https://<yourorg>.iqualify.com/auth/saml2/callback` |
- | Test Environment: `https://<yourorg>.iqualify.io/auth/saml2/callback` |
+ 2. In the **Reply URL** text box, type a URL using the following pattern:
+
+ * Production Environment: `https://<yourorg>.iqualify.com/auth/saml2/callback`
+ * Test Environment: `https://<yourorg>.iqualify.io/auth/saml2/callback`
5. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
- ![Screenshot shows Set additional U R Ls where you can enter a Sign on U R L.](common/metadata-upload-additional-signon.png)
+ ![Screenshot shows Set additional U R Ls where you can enter a Sign on U R L.](common/metadata-upload-additional-signon.png)
+
+ In the **Sign-on URL** text box, type a URL using the following pattern:
- In the **Sign-on URL** text box, type a URL using the following pattern:
- | |
- |--|--|
- | Production Environment: `https://<yourorg>.iqualify.com/login` |
- | Test Environment: `https://<yourorg>.iqualify.io/login` |
+ * Production Environment: `https://<yourorg>.iqualify.com/login`
+ * Test Environment: `https://<yourorg>.iqualify.io/login`
- > [!NOTE]
- > These values are not real. Update these values with the actual Identifier, Reply URL and Sign-on URL. Contact [iQualify LMS Client support team](https://www.iqualify.com/) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign-on URL. Contact [iQualify LMS Client support team](https://www.iqualify.com/) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
6. Your iQualify LMS application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes. Click **Edit** icon to open **User Attributes** dialog.
- ![Screenshot shows User Attributes with the Edit icon selected.](common/edit-attribute.png)
+ ![Screenshot shows User Attributes with the Edit icon selected.](common/edit-attribute.png)
7. In the **User Claims** section on the **User Attributes** dialog, edit the claims by using **Edit icon** or add the claims by using **Add new claim** to configure SAML token attribute as shown in the image above and perform the following steps:
- | Name | Source Attribute|
- | | |
- | email | user.userprincipalname |
- | first_name | user.givenname |
- | last_name | user.surname |
- | person_id | "your attribute" |
+ | Name | Source Attribute|
+ | | |
+ | email | user.userprincipalname |
+ | first_name | user.givenname |
+ | last_name | user.surname |
+ | person_id | "your attribute" |
- a. Click **Add new claim** to open the **Manage user claims** dialog.
+ a. Click **Add new claim** to open the **Manage user claims** dialog.
- ![Screenshot shows User claims with the option to Add new claim.](common/new-save-attribute.png)
+ ![Screenshot shows User claims with the option to Add new claim.](common/new-save-attribute.png)
- ![Screenshot shows the Manage user claims dialog box where you can enter the values described.](common/new-attribute-details.png)
+ ![Screenshot shows the Manage user claims dialog box where you can enter the values described.](common/new-attribute-details.png)
- b. In the **Name** textbox, type the attribute name shown for that row.
+ b. In the **Name** textbox, type the attribute name shown for that row.
- c. Leave the **Namespace** blank.
+ c. Leave the **Namespace** blank.
- d. Select Source as **Attribute**.
+ d. Select Source as **Attribute**.
- e. From the **Source attribute** list, type the attribute value shown for that row.
+ e. From the **Source attribute** list, type the attribute value shown for that row.
- f. Click **Ok**
+ f. Click **Ok**
- g. Click **Save**.
+ g. Click **Save**.
- > [!Note]
- > The **person_id** attribute is **Optional**
+ > [!Note]
+ > The **person_id** attribute is **Optional**
8. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Certificate (Base64)** from the given options as per your requirement and save it on your computer.
- ![The Certificate download link](common/certificatebase64.png)
+ ![The Certificate download link](common/certificatebase64.png)
9. On the **Set up iQualify LMS** section, copy the appropriate URL(s) as per your requirement.
- ![Copy configuration URLs](common/copy-configuration-urls.png)
+ ![Copy configuration URLs](common/copy-configuration-urls.png)
- a. Login URL
+ a. Login URL
- b. Azure AD Identifier
+ b. Azure AD Identifier
- c. Logout URL
+ c. Logout URL
### Configure iQualify LMS Single Sign-On
To configure Azure AD single sign-on with iQualify LMS, perform the following st
1. Once you are logged in, click on your avatar at the top right, then click on **Account settings**
- ![Account settings](./media/iqualify-tutorial/setting1.png)
+ ![Account settings](./media/iqualify-tutorial/setting1.png)
1. In the account settings area, click on the ribbon menu on the left and click on **INTEGRATIONS**
- ![INTEGRATIONS](./media/iqualify-tutorial/setting2.png)
+ ![INTEGRATIONS](./media/iqualify-tutorial/setting2.png)
1. Under INTEGRATIONS, click on the **SAML** icon.
- ![SAML icon](./media/iqualify-tutorial/setting3.png)
+ ![SAML icon](./media/iqualify-tutorial/setting3.png)
1. In the **SAML Authentication Settings** dialog box, perform the following steps: ![SAML Authentication Settings](./media/iqualify-tutorial/setting4.png)
- a. In the **SAML SINGLE SIGN-ON SERVICE URL** box, paste the **Login URL** value copied from the Azure AD application configuration window.
+ a. In the **SAML SINGLE SIGN-ON SERVICE URL** box, paste the **Login URL** value copied from the Azure AD application configuration window.
- b. In the **SAML LOGOUT URL** box, paste the **Logout URL** value copied from the Azure AD application configuration window.
+ b. In the **SAML LOGOUT URL** box, paste the **Logout URL** value copied from the Azure AD application configuration window.
- c. Open the downloaded certificate file in notepad, copy the content, and then paste it in the **PUBLIC CERTIFICATE** box.
+ c. Open the downloaded certificate file in notepad, copy the content, and then paste it in the **PUBLIC CERTIFICATE** box.
- d. In **LOGIN BUTTON LABEL** enter the name for the button to be displayed on login page.
+ d. In **LOGIN BUTTON LABEL** enter the name for the button to be displayed on login page.
- e. Click **SAVE**.
+ e. Click **SAVE**.
- f. Click **UPDATE**.
+ f. Click **UPDATE**.
### Create an Azure AD test user
In this section, you enable Britta Simon to use Azure single sign-on by granting
1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **iQualify LMS**.
- ![Enterprise applications blade](common/enterprise-applications.png)
+ ![Enterprise applications blade](common/enterprise-applications.png)
2. In the applications list, select **iQualify LMS**.
- ![The iQualify LMS link in the Applications list](common/all-applications.png)
+ ![The iQualify LMS link in the Applications list](common/all-applications.png)
3. In the menu on the left, select **Users and groups**.
aks Use Pod Security Policies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/use-pod-security-policies.md
Title: Use pod security policies in Azure Kubernetes Service (AKS)
description: Learn how to control pod admissions by using PodSecurityPolicy in Azure Kubernetes Service (AKS) Previously updated : 02/12/2021 Last updated : 03/25/2021 # Preview - Secure your cluster using pod security policies in Azure Kubernetes Service (AKS) > [!WARNING]
-> **The feature described in this document, pod security policy (preview), is set for deprecation and will no longer be available after June 30th, 2021** in favor of [Azure Policy for AKS](use-azure-policy.md). The deprecation date has been extended from the previous date of October 15th, 2020.
+> **The feature described in this document, pod security policy (preview), will begin deprecation with Kubernetes version 1.21, with its removal in version 1.25.** As upstream Kubernetes approaches that milestone, the Kubernetes community will be working to document viable alternatives. The previous deprecation announcement was made because there was not a viable alternative for customers at the time. Now that the Kubernetes community is working on an alternative, there is no longer a pressing need to deprecate ahead of Kubernetes.
> > After pod security policy (preview) is deprecated, you must disable the feature on any existing clusters using the deprecated feature to perform future cluster upgrades and stay within Azure support.
->
-> It is highly recommended to begin testing scenarios with Azure Policy for AKS, which offers built-in policies to secure pods and built-in initiatives which map to pod security policies. To migrate from pod security policy, you need to take the following actions on a cluster.
->
-> 1. [Disable pod security policy](#clean-up-resources) on the cluster
-> 1. Enable the [Azure Policy Add-on][azure-policy-add-on]
-> 1. Enable the desired Azure policies from [available built-in policies][policy-samples]
-> 1. Review [behavior changes between pod security policy and Azure Policy](#behavior-changes-between-pod-security-policy-and-azure-policy)
To improve the security of your AKS cluster, you can limit what pods can be scheduled. Pods that request resources you don't allow can't run in the AKS cluster. You define this access using pod security policies. This article shows you how to use pod security policies to limit the deployment of pods in AKS.
app-service Configure Language Java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-language-java.md
These instructions apply to all database connections. You will need to fill plac
||--|| | PostgreSQL | `org.postgresql.Driver` | [Download](https://jdbc.postgresql.org/download.html) | | MySQL | `com.mysql.jdbc.Driver` | [Download](https://dev.mysql.com/downloads/connector/j/) (Select "Platform Independent") |
-| SQL Server | `com.microsoft.sqlserver.jdbc.SQLServerDriver` | [Download](/sql/connect/jdbc/download-microsoft-jdbc-driver-for-sql-server?view=sql-server-2017#download) |
+| SQL Server | `com.microsoft.sqlserver.jdbc.SQLServerDriver` | [Download](/sql/connect/jdbc/download-microsoft-jdbc-driver-for-sql-server#download) |
To configure Tomcat to use Java Database Connectivity (JDBC) or the Java Persistence API (JPA), first customize the `CATALINA_OPTS` environment variable that is read in by Tomcat at start-up. Set these values through an app setting in the [App Service Maven plugin](https://github.com/Microsoft/azure-maven-plugins/blob/develop/azure-webapp-maven-plugin/README.md):
These instructions apply to all database connections. You will need to fill plac
||--|| | PostgreSQL | `org.postgresql.Driver` | [Download](https://jdbc.postgresql.org/download.html) | | MySQL | `com.mysql.jdbc.Driver` | [Download](https://dev.mysql.com/downloads/connector/j/) (Select "Platform Independent") |
-| SQL Server | `com.microsoft.sqlserver.jdbc.SQLServerDriver` | [Download](/sql/connect/jdbc/download-microsoft-jdbc-driver-for-sql-server?view=sql-server-2017#download) |
+| SQL Server | `com.microsoft.sqlserver.jdbc.SQLServerDriver` | [Download](/sql/connect/jdbc/download-microsoft-jdbc-driver-for-sql-server#download) |
To configure Tomcat to use Java Database Connectivity (JDBC) or the Java Persistence API (JPA), first customize the `CATALINA_OPTS` environment variable that is read in by Tomcat at start-up. Set these values through an app setting in the [App Service Maven plugin](https://github.com/Microsoft/azure-maven-plugins/blob/develop/azure-webapp-maven-plugin/README.md):
app-service Overview Authentication Authorization https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/overview-authentication-authorization.md
# Authentication and authorization in Azure App Service and Azure Functions
-Azure App Service provides built-in authentication and authorization support (sometimes referred to as "Easy Auth"), so you can sign in users and access data by writing minimal or no code in your web app, RESTful API, and mobile back end, and also [Azure Functions](../azure-functions/functions-overview.md). This article describes how App Service helps simplify authentication and authorization for your app.
+Azure App Service provides built-in authentication and authorization capabilities (sometimes referred to as "Easy Auth"), so you can sign in users and access data by writing minimal or no code in your web app, RESTful API, and mobile back end, and also [Azure Functions](../azure-functions/functions-overview.md). This article describes how App Service helps simplify authentication and authorization for your app.
-Secure authentication and authorization require deep understanding of security, including federation, encryption, [JSON web tokens (JWT)](https://wikipedia.org/wiki/JSON_Web_Token) management, [grant types](https://oauth.net/2/grant-types/), and so on. App Service provides these utilities so that you can spend more time and energy on providing business value to your customer.
+## Why use the built-in authentication?
-> [!IMPORTANT]
-> You're not required to use this feature for authentication and authorization. You can use the bundled security features in your web framework of choice, or you can write your own utilities. However, keep in mind that [Chrome 80 is making breaking changes to its implementation of SameSite for cookies](https://www.chromestatus.com/feature/5088147346030592) (release date around March 2020), and custom remote authentication or other scenarios that rely on cross-site cookie posting may break when client Chrome browsers are updated. The workaround is complex because it needs to support different SameSite behaviors for different browsers.
->
-> The ASP.NET Core 2.1 and above versions hosted by App Service are already patched for this breaking change and handle Chrome 80 and older browsers appropriately. In addition, the same patch for ASP.NET Framework 4.7.2 has been deployed on the App Service instances throughout January 2020. For more information, see [Azure App Service SameSite cookie update](https://azure.microsoft.com/updates/app-service-samesite-cookie-update/).
->
+You're not required to use this feature for authentication and authorization. You can use the bundled security features in your web framework of choice, or you can write your own utilities. However, you will need to ensure that your solution stays up to date with the latest security, protocol, and browser updates.
-> [!NOTE]
-> Enabling this feature will cause **all** non-secure HTTP requests to your application to be automatically redirected to HTTPS, regardless of the App Service configuration setting to [enforce HTTPS](configure-ssl-bindings.md#enforce-https). If needed, you can disable this via the `requireHttps` setting in the [auth settings configuration file](app-service-authentication-how-to.md#configuration-file-reference), but you must then take care to ensure no security tokens ever get transmitted over non-secure HTTP connections.
-
-For information specific to native mobile apps, see [User authentication and authorization for mobile apps with Azure App Service](/previous-versions/azure/app-service-mobile/app-service-mobile-auth).
-
-## How it works
+Implementing a secure solution for authentication (signing-in users) and authorization (providing access to secure data) can take significant effort. You must make sure to follow industry best practices and standards, and keep your implementation up to date. The built-in authentication feature for App Service and Azure Functions can save you time and effort by providing out-of-the-box authentication with federated identity providers, allowing you to focus on the rest of your application.
-### On Windows
+- Azure App Service allows you to integrate a variety of auth capabilities into your web app or API without implementing them yourself.
+- It's built directly into the platform and doesn't require any particular language, SDK, security expertise, or even any code to utilize.
+- You can integrate with multiple login providers, such as Azure AD, Facebook, Google, and Twitter.
-The authentication and authorization module runs in the same sandbox as your application code. When it's enabled, every incoming HTTP request passes through it before being handled by your application code.
-
-![An architecture diagram showing requests being intercepted by a process in the site sandbox which interacts with identity providers before allowing traffic to the deployed site](media/app-service-authentication-overview/architecture.png)
+## Identity providers
-This module handles several things for your app:
+App Service uses [federated identity](https://en.wikipedia.org/wiki/Federated_identity), in which a third-party identity provider manages the user identities and authentication flow for you. The following identity providers are available by default:
-- Authenticates users with the specified provider
-- Validates, stores, and refreshes tokens
-- Manages the authenticated session
-- Injects identity information into request headers
+| Provider | Sign-in endpoint | How-To guidance |
+| - | - | - |
+| [Azure Active Directory](../active-directory/fundamentals/active-directory-whatis.md) | `/.auth/login/aad` | [App Service Azure AD login](configure-authentication-provider-aad.md) |
+| [Microsoft Account](../active-directory/develop/v2-overview.md) | `/.auth/login/microsoftaccount` | [App Service Microsoft Account login](configure-authentication-provider-microsoft.md) |
+| [Facebook](https://developers.facebook.com/docs/facebook-login) | `/.auth/login/facebook` | [App Service Facebook login](configure-authentication-provider-facebook.md) |
+| [Google](https://developers.google.com/identity/choose-auth) | `/.auth/login/google` | [App Service Google login](configure-authentication-provider-google.md) |
+| [Twitter](https://developer.twitter.com/en/docs/basics/authentication) | `/.auth/login/twitter` | [App Service Twitter login](configure-authentication-provider-twitter.md) |
+| Any [OpenID Connect](https://openid.net/connect/) provider (preview) | `/.auth/login/<providerName>` | [App Service OpenID Connect login](configure-authentication-provider-openid-connect.md) |
-The module runs separately from your application code and is configured using app settings. No SDKs, specific languages, or changes to your application code are required.
+When you enable authentication and authorization with one of these providers, its sign-in endpoint is available for user authentication and for validation of authentication tokens from the provider. You can provide your users with any number of these sign-in options.
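As a minimal sketch of kicking off the server-directed sign-in flow from your own code (assuming the Azure AD provider is configured; the servlet name and the `post_login_redirect_uri` query parameter are shown for illustration, so verify them against your own setup):

```java
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Redirects the browser to the built-in Azure AD sign-in endpoint, then back to the home page.
public class LoginRedirectServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response) throws IOException {
        response.sendRedirect("/.auth/login/aad?post_login_redirect_uri=/");
    }
}
```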
-### On Containers
+## Considerations for using built-in authentication
-The authentication and authorization module runs in a separate container, isolated from your application code. Using what's known as the [Ambassador pattern](/azure/architecture/patterns/ambassador), it interacts with the incoming traffic to perform similar functionality as on Windows. Because it does not run in-process, no direct integration with specific language frameworks is possible; however, the relevant information that your app needs is passed through using request headers as explained below.
+Enabling this feature will cause all requests to your application to be automatically redirected to HTTPS, regardless of the App Service configuration setting to enforce HTTPS. You can disable this with the `requireHttps` setting in the V2 configuration. However, we do recommend sticking with HTTPS, and you should ensure no security tokens ever get transmitted over non-secure HTTP connections.
-### User/Application claims
+App Service can be used for authentication with or without restricting access to your site content and APIs. To restrict app access only to authenticated users, set **Action to take when request is not authenticated** to log in with one of the configured identity providers. To authenticate but not restrict access, set **Action to take when request is not authenticated** to "Allow anonymous requests (no action)."
-For all language frameworks, App Service makes the claims in the incoming token (whether that be from an authenticated end user or a client application) available to your code by injecting them into the request headers. For ASP.NET 4.6 apps, App Service populates [ClaimsPrincipal.Current](/dotnet/api/system.security.claims.claimsprincipal.current) with the authenticated user's claims, so you can follow the standard .NET code pattern, including the `[Authorize]` attribute. Similarly, for PHP apps, App Service populates the `_SERVER['REMOTE_USER']` variable. For Java apps, the claims are [accessible from the Tomcat servlet](configure-language-java.md#authenticate-users-easy-auth).
+> [!NOTE]
+> You should give each app registration its own permission and consent. Avoid permission sharing between environments by using separate app registrations for separate deployment slots. When testing new code, this practice can help prevent issues from affecting the production app.
-For [Azure Functions](../azure-functions/functions-overview.md), `ClaimsPrincipal.Current` is not populated for .NET code, but you can still find the user claims in the request headers, or get the `ClaimsPrincipal` object from the request context or even through a binding parameter. See [working with client identities](../azure-functions/functions-bindings-http-webhook-trigger.md#working-with-client-identities) for more information.
+## How it works
-For more information, see [Access user claims](app-service-authentication-how-to.md#access-user-claims).
+[Feature architecture on Windows (non-container deployment)](#feature-architecture-on-windows-non-container-deployment)
-> [!NOTE]
-> At this time, ASP.NET Core does not currently support populating the current user with the Authentication/Authorization feature. However, some [3rd party, open source middleware components](https://github.com/MaximRouiller/MaximeRouiller.Azure.AppService.EasyAuth) do exist to help fill this gap.
->
+[Feature architecture on Linux and containers](#feature-architecture-on-linux-and-containers)
-### Token store
+[Authentication flow](#authentication-flow)
-App Service provides a built-in token store, which is a repository of tokens that are associated with the users of your web apps, APIs, or native mobile apps. When you enable authentication with any provider, this token store is immediately available to your app. If your application code needs to access data from these providers on the user's behalf, such as:
+[Authorization behavior](#authorization-behavior)
-- post to the authenticated user's Facebook timeline
-- read the user's corporate data using the Microsoft Graph API
+[User and Application claims](#user-and-application-claims)
-You typically must write code to collect, store, and refresh these tokens in your application. With the token store, you just [retrieve the tokens](app-service-authentication-how-to.md#retrieve-tokens-in-app-code) when you need them and [tell App Service to refresh them](app-service-authentication-how-to.md#refresh-identity-provider-tokens) when they become invalid.
+[Token store](#token-store)
-The ID tokens, access tokens, and refresh tokens are cached for the authenticated session, and they're accessible only by the associated user.
+[Logging and tracing](#logging-and-tracing)
-If you don't need to work with tokens in your app, you can disable the token store in your app's **Authentication / Authorization** page.
+#### Feature architecture on Windows (non-container deployment)
-### Logging and tracing
+The authentication and authorization module runs in the same sandbox as your application code. When it's enabled, every incoming HTTP request passes through it before being handled by your application code.
-If you [enable application logging](troubleshoot-diagnostic-logs.md), you will see authentication and authorization traces directly in your log files. If you see an authentication error that you didn't expect, you can conveniently find all the details by looking in your existing application logs. If you enable [failed request tracing](troubleshoot-diagnostic-logs.md), you can see exactly what role the authentication and authorization module may have played in a failed request. In the trace logs, look for references to a module named `EasyAuthModule_32/64`.
-## Identity providers
+This module handles several things for your app:
-App Service uses [federated identity](https://en.wikipedia.org/wiki/Federated_identity), in which a third-party identity provider manages the user identities and authentication flow for you. Five identity providers are available by default:
+- Authenticates users with the specified provider
+- Validates, stores, and refreshes tokens
+- Manages the authenticated session
+- Injects identity information into request headers
-| Provider | Sign-in endpoint |
-| - | - |
-| [Azure Active Directory](../active-directory/fundamentals/active-directory-whatis.md) | `/.auth/login/aad` |
-| [Microsoft Account](../active-directory/develop/v2-overview.md) | `/.auth/login/microsoftaccount` |
-| [Facebook](https://developers.facebook.com/docs/facebook-login) | `/.auth/login/facebook` |
-| [Google](https://developers.google.com/identity/choose-auth) | `/.auth/login/google` |
-| [Twitter](https://developer.twitter.com/en/docs/basics/authentication) | `/.auth/login/twitter` |
-| Any [OpenID Connect](https://openid.net/connect/) provider (preview) | `/.auth/login/<providerName>` |
+The module runs separately from your application code and is configured using app settings. No SDKs, specific languages, or changes to your application code are required.
-When you enable authentication and authorization with one of these providers, its sign-in endpoint is available for user authentication and for validation of authentication tokens from the provider. You can provide your users with any number of these sign-in options with ease.
+#### Feature architecture on Linux and containers
-A [legacy extensibility path][custom-auth] exists for integrating with other identity providers or a custom auth solution, but this is not recommended. Instead, consider using the OpenID Connect support.
+The authentication and authorization module runs in a separate container, isolated from your application code. Using what's known as the [Ambassador pattern](/azure/architecture/patterns/ambassador), it interacts with the incoming traffic to perform similar functionality as on Windows. Because it does not run in-process, no direct integration with specific language frameworks is possible; however, the relevant information that your app needs is passed through using request headers as explained below.
-## Authentication flow
+#### Authentication flow
The authentication flow is the same for all providers, but differs depending on whether you want to sign in with the provider's SDK:

-- Without provider SDK: The application delegates federated sign-in to App Service. This is typically the case with browser apps, which can present the provider's login page to the user. The server code manages the sign-in process, so it is also called _server-directed flow_ or _server flow_. This case applies to browser apps. It also applies to native apps that sign users in using the Mobile Apps client SDK because the SDK opens a web view to sign users in with App Service authentication.
+- Without provider SDK: The application delegates federated sign-in to App Service. This is typically the case with browser apps, which can present the provider's login page to the user. The server code manages the sign-in process, so it is also called _server-directed flow_ or _server flow_. This case applies to browser apps. It also applies to native apps that sign users in using the Mobile Apps client SDK because the SDK opens a web view to sign users in with App Service authentication.
- With provider SDK: The application signs users in to the provider manually and then submits the authentication token to App Service for validation. This is typically the case with browser-less apps, which can't present the provider's sign-in page to the user. The application code manages the sign-in process, so it is also called _client-directed flow_ or _client flow_. This case applies to REST APIs, [Azure Functions](../azure-functions/functions-overview.md), and JavaScript browser clients, as well as browser apps that need more flexibility in the sign-in process. It also applies to native mobile apps that sign users in using the provider's SDK.
-> [!NOTE]
-> Calls from a trusted browser app in App Service to another REST API in App Service or [Azure Functions](../azure-functions/functions-overview.md) can be authenticated using the server-directed flow. For more information, see [Customize authentication and authorization in App Service](app-service-authentication-how-to.md).
->
+Calls from a trusted browser app in App Service to another REST API in App Service or [Azure Functions](../azure-functions/functions-overview.md) can be authenticated using the server-directed flow. For more information, see [Customize authentication and authorization in App Service](app-service-authentication-how-to.md).
The table below shows the steps of the authentication flow.
For client browsers, App Service can automatically direct all unauthenticated us
<a name="authorization"></a>
-## Authorization behavior
+#### Authorization behavior
In the [Azure portal](https://portal.azure.com), you can configure App Service authorization with a number of behaviors when incoming request is not authenticated.
In the [Azure portal](https://portal.azure.com), you can configure App Service a
The following headings describe the options.
-### Allow Anonymous requests (no action)
+**Allow Anonymous requests (no action)**
-This option defers authorization of unauthenticated traffic to your application code. For authenticated requests, App Service also passes along authentication information in the HTTP headers.
+This option defers authorization of unauthenticated traffic to your application code. For authenticated requests, App Service also passes along authentication information in the HTTP headers.
-This option provides more flexibility in handling anonymous requests. For example, it lets you [present multiple sign-in providers](app-service-authentication-how-to.md#use-multiple-sign-in-providers) to your users. However, you must write code.
+This option provides more flexibility in handling anonymous requests. For example, it lets you [present multiple sign-in providers](app-service-authentication-how-to.md#use-multiple-sign-in-providers) to your users. However, you must write code.
-### Allow only authenticated requests
+**Allow only authenticated requests**
The option is **Log in with \<provider>**. App Service redirects all anonymous requests to `/.auth/login/<provider>` for the provider you choose. If the anonymous request comes from a native mobile app, the returned response is an `HTTP 401 Unauthorized`.
With this option, you don't need to write any authentication code in your app. F
> [!NOTE]
> By default, any user in your Azure AD tenant can request a token for your application from Azure AD. You can [configure the application in Azure AD](../active-directory/develop/howto-restrict-your-app-to-a-set-of-users.md) if you want to restrict access to your app to a defined set of users.
+
+#### User and Application claims
+
+For all language frameworks, App Service makes the claims in the incoming token (whether that be from an authenticated end user or a client application) available to your code by injecting them into the request headers. For ASP.NET 4.6 apps, App Service populates [ClaimsPrincipal.Current](/dotnet/api/system.security.claims.claimsprincipal.current) with the authenticated user's claims, so you can follow the standard .NET code pattern, including the `[Authorize]` attribute. Similarly, for PHP apps, App Service populates the `_SERVER['REMOTE_USER']` variable. For Java apps, the claims are [accessible from the Tomcat servlet](configure-language-java.md#authenticate-users-easy-auth).
+
+For [Azure Functions](../azure-functions/functions-overview.md), `ClaimsPrincipal.Current` is not populated for .NET code, but you can still find the user claims in the request headers, or get the `ClaimsPrincipal` object from the request context or even through a binding parameter. See [working with client identities](../azure-functions/functions-bindings-http-webhook-trigger.md#working-with-client-identities) for more information.
+
+For more information, see [Access user claims](app-service-authentication-how-to.md#access-user-claims).
+
+At this time, ASP.NET Core does not currently support populating the current user with the Authentication/Authorization feature. However, some [3rd party, open source middleware components](https://github.com/MaximRouiller/MaximeRouiller.Azure.AppService.EasyAuth) do exist to help fill this gap.
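As a hedged illustration of the header-based approach for a Java app (a sketch, not this article's own sample; the header names shown are the ones App Service commonly injects, so confirm them for your configuration):

```java
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Reads the identity information that the authentication module injects into request headers.
public class WhoAmIServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response) throws IOException {
        String user = request.getHeader("X-MS-CLIENT-PRINCIPAL-NAME"); // authenticated user's name or email
        String idp = request.getHeader("X-MS-CLIENT-PRINCIPAL-IDP");   // identity provider, for example "aad"
        response.setContentType("text/plain");
        response.getWriter().printf("Signed in as %s via %s%n", user, idp);
    }
}
```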
+
+#### Token store
+
+App Service provides a built-in token store, which is a repository of tokens that are associated with the users of your web apps, APIs, or native mobile apps. When you enable authentication with any provider, this token store is immediately available to your app. If your application code needs to access data from these providers on the user's behalf, such as:
+
+- post to the authenticated user's Facebook timeline
+- read the user's corporate data using the Microsoft Graph API
+
+You typically must write code to collect, store, and refresh these tokens in your application. With the token store, you just [retrieve the tokens](app-service-authentication-how-to.md#retrieve-tokens-in-app-code) when you need them and [tell App Service to refresh them](app-service-authentication-how-to.md#refresh-identity-provider-tokens) when they become invalid.
+
+The ID tokens, access tokens, and refresh tokens are cached for the authenticated session, and they're accessible only by the associated user.
+
+If you don't need to work with tokens in your app, you can disable the token store in your app's **Authentication / Authorization** page.
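As a hedged sketch of retrieving a cached token from server code (assuming the token store is enabled and Azure AD is the configured provider; `X-MS-TOKEN-AAD-ACCESS-TOKEN` is the commonly documented header name, so verify it for your provider):

```java
import javax.servlet.http.HttpServletRequest;

// Builds an Authorization header value from the access token that App Service
// caches in the token store and injects into incoming requests.
public final class TokenStoreHelper {
    public static String bearerAuthorization(HttpServletRequest request) {
        String accessToken = request.getHeader("X-MS-TOKEN-AAD-ACCESS-TOKEN");
        if (accessToken == null) {
            throw new IllegalStateException("No cached token found; is the token store enabled?");
        }
        return "Bearer " + accessToken;
    }
}
```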
+
+#### Logging and tracing
+
+If you [enable application logging](troubleshoot-diagnostic-logs.md), you will see authentication and authorization traces directly in your log files. If you see an authentication error that you didn't expect, you can conveniently find all the details by looking in your existing application logs. If you enable [failed request tracing](troubleshoot-diagnostic-logs.md), you can see exactly what role the authentication and authorization module may have played in a failed request. In the trace logs, look for references to a module named `EasyAuthModule_32/64`.
+ ## More resources
-* [Tutorial: Authenticate and authorize users in a web app that accesses Azure Storage and Microsoft Graph](scenario-secure-app-authentication-app-service.md)
-* [Tutorial: Authenticate and authorize users end-to-end in Azure App Service (Windows)](tutorial-auth-aad.md)
-* [Tutorial: Authenticate and authorize users end-to-end in Azure App Service for Linux](./tutorial-auth-aad.md?pivots=platform-linux%3fpivots%3dplatform-linux)
-* [Customize authentication and authorization in App Service](app-service-authentication-how-to.md)
-* [.NET Core integration of Azure AppService EasyAuth (3rd party)](https://github.com/MaximRouiller/MaximeRouiller.Azure.AppService.EasyAuth)
-* [Getting Azure App Service authentication working with .NET Core (3rd party)](https://github.com/kirkone/KK.AspNetCore.EasyAuthAuthentication)
-
-Provider-specific how-to guides:
-
-* [How to configure your app to use Azure Active Directory login][AAD]
-* [How to configure your app to use Facebook login][Facebook]
-* [How to configure your app to use Google login][Google]
-* [How to configure your app to use Microsoft Account login][MSA]
-* [How to configure your app to use Twitter login][Twitter]
-* [How to configure your app to use an OpenID Connect provider for login (preview)][OIDC]
-* [How to configure your app to use an Sign in with Apple (preview)][Apple]
-
-[AAD]: configure-authentication-provider-aad.md
-[Facebook]: configure-authentication-provider-facebook.md
-[Google]: configure-authentication-provider-google.md
-[MSA]: configure-authentication-provider-microsoft.md
-[Twitter]: configure-authentication-provider-twitter.md
-[OIDC]: configure-authentication-provider-openid-connect.md
-[Apple]: configure-authentication-provider-apple.md
-
-[custom-auth]: /previous-versions/azure/app-service-mobile/app-service-mobile-dotnet-backend-how-to-use-server-sdk#custom-auth
-
-[ADAL-Android]: /previous-versions/azure/app-service-mobile/app-service-mobile-android-how-to-use-client-library#adal
-[ADAL-iOS]: /previous-versions/azure/app-service-mobile/app-service-mobile-ios-how-to-use-client-library#adal
-[ADAL-dotnet]: /previous-versions/azure/app-service-mobile/app-service-mobile-dotnet-how-to-use-client-library#adal
+- [How-To: Configure your App Service or Azure Functions app to use Azure AD login](configure-authentication-provider-aad.md)
+- [Advanced usage of authentication and authorization in Azure App Service](app-service-authentication-how-to.md)
+
+Samples:
+- [Tutorial: Add authentication to your web app running on Azure App Service](scenario-secure-app-authentication-app-service.md)
+- [Tutorial: Authenticate and authorize users end-to-end in Azure App Service (Windows or Linux)](tutorial-auth-aad.md)
+- [.NET Core integration of Azure AppService EasyAuth (3rd party)](https://github.com/MaximRouiller/MaximeRouiller.Azure.AppService.EasyAuth)
+- [Getting Azure App Service authentication working with .NET Core (3rd party)](https://github.com/kirkone/KK.AspNetCore.EasyAuthAuthentication)
attestation Author Sign Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/attestation/author-sign-policy.md
The policy contains rules that determine the authorization criteria, properties,
version=1.0; authorizationrules {
- c:[type="secureBootEnables", issuer=="AttestationService"]=> permit()
+ c:[type="secureBootEnabled", issuer=="AttestationService"]=> permit()
}; issuancerules {
- c:[type="secureBootEnables", issuer=="AttestationService"]=> issue(claim=c)
+ c:[type="secureBootEnabled", issuer=="AttestationService"]=> issue(claim=c)
c:[type="notSafeMode", issuer=="AttestationService"]=> issue(claim=c) }; ```
After creating a policy file, to upload a policy in JWS format, follow the below
## Next steps
- [Set up Azure Attestation using PowerShell](quickstart-powershell.md)
-- [Attest an SGX enclave using code samples](/samples/browse/?expanded=azure&terms=attestation)
+- [Attest an SGX enclave using code samples](/samples/browse/?expanded=azure&terms=attestation)
attestation Private Endpoint Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/attestation/private-endpoint-powershell.md
Previously updated : 08/31/2020 Last updated : 03/26/2021
Get started with Azure Private Link by using a private endpoint to connect secur
In this quickstart, you'll create a private endpoint for Azure Attestation and deploy a virtual machine to test the private connection. > [!NOTE]
-> The current implementation only includes automatic approval option. The subscription must be white listed to be able to proceed with private endpoint creation. Please reach out to the service team or submit an Azure support request on the [Azure support page](https://azure.microsoft.com/support/options/) before proceeding with the below steps.
+> The current implementation only includes the automatic approval option. The subscription must be added to an allow list to be able to proceed with private endpoint creation. Please reach out to the service team or submit an Azure support request on the [Azure support page](https://azure.microsoft.com/support/options/) before proceeding with the steps below.
## Prerequisites
automation Manage Runbooks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/manage-runbooks.md
Use the [New-AzAutomationRunbook](/powershell/module/az.automation/new-azautomat
The following example shows how to create a new empty runbook. ```azurepowershell-interactive
-New-AzAutomationRunbook -AutomationAccountName MyAccount `
--Name NewRunbook -ResourceGroupName MyResourceGroup -Type PowerShell
+$params = @{
+ AutomationAccountName = 'MyAutomationAccount'
+ Name = 'NewRunbook'
+ ResourceGroupName = 'MyResourceGroup'
+ Type = 'PowerShell'
+}
+New-AzAutomationRunbook @params
``` ## Import a runbook
Use the [Import-AzAutomationRunbook](/powershell/module/az.automation/import-aza
The following example shows how to import a script file into a runbook. ```azurepowershell-interactive
-$automationAccountName = "AutomationAccount"
-$runbookName = "Sample_TestRunbook"
-$scriptPath = "C:\Runbooks\Sample_TestRunbook.ps1"
-$RGName = "ResourceGroup"
-
-Import-AzAutomationRunbook -Name $runbookName -Path $scriptPath `
--ResourceGroupName $RGName -AutomationAccountName $automationAccountName `--Type PowerShellWorkflow
+$params = @{
+ AutomationAccountName = 'MyAutomationAccount'
+ Name = 'Sample_TestRunbook'
+ ResourceGroupName = 'MyResourceGroup'
+ Type = 'PowerShell'
+ Path = 'C:\Runbooks\Sample_TestRunbook.ps1'
+}
+Import-AzAutomationRunbook @params
``` ## Handle resources
Import-AzAutomationRunbook -Name $runbookName -Path $scriptPath `
If your runbook creates a [resource](automation-runbook-execution.md#resources), the script should check to see if the resource already exists before attempting to create it. Here's a basic example. ```powershell
-$vmName = "WindowsVM1"
-$resourceGroupName = "myResourceGroup"
-$myCred = Get-AutomationPSCredential "MyCredential"
-$vmExists = Get-AzResource -Name $vmName -ResourceGroupName $resourceGroupName
+$vmName = 'WindowsVM1'
+$rgName = 'MyResourceGroup'
+$myCred = Get-AutomationPSCredential 'MyCredential'
-if(!$vmExists)
- {
+$vmExists = Get-AzResource -Name $vmName -ResourceGroupName $rgName
+if (-not $vmExists) {
Write-Output "VM $vmName does not exist, creating"
- New-AzVM -Name $vmName -ResourceGroupName $resourceGroupName -Credential $myCred
- }
-else
- {
+ New-AzVM -Name $vmName -ResourceGroupName $rgName -Credential $myCred
+} else {
Write-Output "VM $vmName already exists, skipping"
- }
+}
``` ## Retrieve details from Activity log
else
You can retrieve runbook details, such as the person or account that started a runbook, from the [Activity log](automation-runbook-execution.md#activity-logging) for the Automation account. The following PowerShell example provides the last user to run the specified runbook. ```powershell-interactive
-$SubID = "00000000-0000-0000-0000-000000000000"
-$AutomationResourceGroupName = "MyResourceGroup"
-$AutomationAccountName = "MyAutomationAccount"
-$RunbookName = "MyRunbook"
+$SubID = '00000000-0000-0000-0000-000000000000'
+$AutoRgName = 'MyResourceGroup'
+$aaName = 'MyAutomationAccount'
+$RunbookName = 'MyRunbook'
$StartTime = (Get-Date).AddDays(-1)
-$JobActivityLogs = Get-AzLog -ResourceGroupName $AutomationResourceGroupName -StartTime $StartTime `
- | Where-Object {$_.Authorization.Action -eq "Microsoft.Automation/automationAccounts/jobs/write"}
+
+$params = @{
+ ResourceGroupName = $AutoRgName
+ StartTime = $StartTime
+}
+$JobActivityLogs = (Get-AzLog @params).Where( { $_.Authorization.Action -eq 'Microsoft.Automation/automationAccounts/jobs/write' })
$JobInfo = @{}
-foreach ($log in $JobActivityLogs)
-{
+foreach ($log in $JobActivityLogs) {
# Get job resource $JobResource = Get-AzResource -ResourceId $log.ResourceId
- if ($JobInfo[$log.SubmissionTimestamp] -eq $null -and $JobResource.Properties.runbook.name -eq $RunbookName)
- {
+ if ($null -eq $JobInfo[$log.SubmissionTimestamp] -and $JobResource.Properties.Runbook.Name -eq $RunbookName) {
# Get runbook
- $Runbook = Get-AzAutomationJob -ResourceGroupName $AutomationResourceGroupName -AutomationAccountName $AutomationAccountName `
- -Id $JobResource.Properties.jobId | ? {$_.RunbookName -eq $RunbookName}
+ $jobParams = @{
+ ResourceGroupName = $AutoRgName
+ AutomationAccountName = $aaName
+ Id = $JobResource.Properties.JobId
+ }
+ $Runbook = Get-AzAutomationJob @jobParams | Where-Object RunbookName -EQ $RunbookName
# Add job information to hashtable $JobInfo.Add($log.SubmissionTimestamp, @($Runbook.RunbookName,$Log.Caller, $JobResource.Properties.jobId)) } }
-$JobInfo.GetEnumerator() | sort key -Descending | Select-Object -First 1
+$JobInfo.GetEnumerator() | Sort-Object Key -Descending | Select-Object -First 1
``` ## Track progress
Some runbooks behave strangely if they run across multiple jobs at the same time
```powershell # Authenticate to Azure $connection = Get-AutomationConnection -Name AzureRunAsConnection
-Connect-AzAccount -ServicePrincipal -Tenant $connection.TenantID `
--ApplicationId $connection.ApplicationID -CertificateThumbprint $connection.CertificateThumbprint-
+$cnParams = @{
+ ServicePrincipal = $true
+ Tenant = $connection.TenantId
+ ApplicationId = $connection.ApplicationId
+ CertificateThumbprint = $connection.CertificateThumbprint
+}
+Connect-AzAccount @cnParams
$AzureContext = Get-AzSubscription -SubscriptionId $connection.SubscriptionID # Check for already running or new runbooks
$aaName = "<AutomationAccountName>"
$jobs = Get-AzAutomationJob -ResourceGroupName $rgName -AutomationAccountName $aaName -RunbookName $runbookName -AzContext $AzureContext # Check to see if it is already running
-$runningCount = ($jobs | ? {$_.Status -eq "Running"}).count
+$runningCount = ($jobs.Where( { $_.Status -eq 'Running' })).count
-If (($jobs.status -contains "Running" -And $runningCount -gt 1 ) -Or ($jobs.Status -eq "New")) {
+if (($jobs.Status -contains 'Running' -and $runningCount -gt 1 ) -or ($jobs.Status -eq 'New')) {
# Exit code
- Write-Output "Runbook is already running"
- Exit 1
+ Write-Output "Runbook [$runbookName] is already running"
+ exit 1
} else { # Insert Your code here }
Your runbook must be able to work with [subscriptions](automation-runbook-execut
```powershell Disable-AzContextAutosave -Scope Process
-$Conn = Get-AutomationConnection -Name AzureRunAsConnection
-$AzureContext = Connect-AzAccount -ServicePrincipal `
--Tenant $Conn.TenantID `--ApplicationId $Conn.ApplicationID `--CertificateThumbprint $Conn.CertificateThumbprint `--Subscription $Conn.SubscriptionId
+$connection = Get-AutomationConnection -Name AzureRunAsConnection
+$cnParams = @{
+ ServicePrincipal = $true
+ Tenant = $connection.TenantId
+ ApplicationId = $connection.ApplicationId
+ CertificateThumbprint = $connection.CertificateThumbprint
+}
+Connect-AzAccount @cnParams
$ChildRunbookName = 'ChildRunbookDemo'
-$AutomationAccountName = 'myAutomationAccount'
-$ResourceGroupName = 'myResourceGroup'
-
-Start-AzAutomationRunbook `
--ResourceGroupName $ResourceGroupName `--AutomationAccountName $AutomationAccountName `--Name $ChildRunbookName `--DefaultProfile $AzureContext
+$aaName = 'MyAutomationAccount'
+$rgName = 'MyResourceGroup'
+
+$startParams = @{
+ ResourceGroupName = $rgName
+ AutomationAccountName = $aaName
+ Name = $ChildRunbookName
+ DefaultProfile = $AzureContext
+}
+Start-AzAutomationRunbook @startParams
``` ## Work with a custom script
When you create or import a new runbook, you must publish it before you can run
Use the [Publish-AzAutomationRunbook](/powershell/module/Az.Automation/Publish-AzAutomationRunbook) cmdlet to publish your runbook. ```azurepowershell-interactive
-$automationAccountName = "AutomationAccount"
-$runbookName = "Sample_TestRunbook"
-$RGName = "ResourceGroup"
-
-Publish-AzAutomationRunbook -AutomationAccountName $automationAccountName `
--Name $runbookName -ResourceGroupName $RGName
+$aaName = "MyAutomationAccount"
+$RunbookName = "Sample_TestRunbook"
+$rgName = "MyResourceGroup"
+
+$publishParams = @{
+ AutomationAccountName = $aaName
+ ResourceGroupName = $rgName
+ Name = $RunbookName
+}
+Publish-AzAutomationRunbook @publishParams
``` ## Schedule a runbook in the Azure portal
Use the [Get-AzAutomationJob](/powershell/module/Az.Automation/Get-AzAutomationJ
The following example gets the last job for a sample runbook and displays its status, the values provided for the runbook parameters, and the job output. ```azurepowershell-interactive
-$job = (Get-AzAutomationJob –AutomationAccountName "MyAutomationAccount" `
-–RunbookName "Test-Runbook" -ResourceGroupName "ResourceGroup01" | sort LastModifiedDate –desc)[0]
-$job.Status
-$job.JobParameters
-Get-AzAutomationJobOutput -ResourceGroupName "ResourceGroup01" `
-–AutomationAccountName "MyAutomationAcct" -Id $job.JobId –Stream Output
+$getJobParams = @{
+ AutomationAccountName = 'MyAutomationAccount'
+ ResourceGroupName = 'MyResourceGroup'
+ Runbookname = 'Test-Runbook'
+}
+$job = (Get-AzAutomationJob @getJobParams | Sort-Object LastModifiedDate -Descending)[0]
+$job | Select-Object JobId, Status, JobParameters
+
+$getOutputParams = @{
+ AutomationAccountName = 'MyAutomationAccount'
+ ResourceGroupName = 'MyResourceGroup'
+ Id = $job.JobId
+ Stream = 'Output'
+}
+Get-AzAutomationJobOutput @getOutputParams
``` The following example retrieves the output for a specific job and returns each record. If there's an [exception](automation-runbook-execution.md#exceptions) for one of the records, the script writes the exception instead of the value. This behavior is useful since exceptions can provide additional information that might not be logged normally during output. ```azurepowershell-interactive
-$output = Get-AzAutomationJobOutput -AutomationAccountName <AutomationAccountName> -Id <jobID> -ResourceGroupName <ResourceGroupName> -Stream "Any"
-foreach($item in $output)
-{
- $fullRecord = Get-AzAutomationJobOutputRecord -AutomationAccountName <AutomationAccountName> -ResourceGroupName <ResourceGroupName> -JobId <jobID> -Id $item.StreamRecordId
- if ($fullRecord.Type -eq "Error")
- {
- $fullRecord.Value.Exception
+$params = @{
+ AutomationAccountName = 'MyAutomationAccount'
+ ResourceGroupName = 'MyResourceGroup'
+ Stream = 'Any'
+}
+$output = Get-AzAutomationJobOutput @params
+
+foreach ($item in $output) {
+ $jobOutParams = @{
+ AutomationAccountName = 'MyAutomationAccount'
+ ResourceGroupName = 'MyResourceGroup'
+ Id = $item.StreamRecordId
}
- else
- {
- $fullRecord.Value
+ $fullRecord = Get-AzAutomationJobOutputRecord @jobOutParams
+
+ if ($fullRecord.Type -eq 'Error') {
+ $fullRecord.Value.Exception
+ } else {
+ $fullRecord.Value
} } ```
foreach($item in $output)
* To learn details of runbook management, see [Runbook execution in Azure Automation](automation-runbook-execution.md). * To prepare a PowerShell runbook, see [Edit textual runbooks in Azure Automation](automation-edit-textual-runbook.md).
-* To troubleshoot issues with runbook execution, see [Troubleshoot runbook issues](troubleshoot/runbooks.md).
+* To troubleshoot issues with runbook execution, see [Troubleshoot runbook issues](troubleshoot/runbooks.md).
azure-app-configuration Enable Dynamic Configuration Dotnet Core Push Refresh https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/enable-dynamic-configuration-dotnet-core-push-refresh.md
description: In this tutorial, you learn how to dynamically update the configuration data for .NET Core apps using push refresh documentationcenter: ''-+ editor: ''
ms.devlang: csharp Last updated 07/25/2020-+ #Customer intent: I want to use push refresh to dynamically update my app to use the latest configuration data in App Configuration.
azure-app-configuration Enable Dynamic Configuration Dotnet Core https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/enable-dynamic-configuration-dotnet-core.md
description: In this tutorial, you learn how to dynamically update the configuration data for .NET Core apps documentationcenter: ''-+ editor: ''
ms.devlang: csharp
Last updated 07/01/2019-+ #Customer intent: I want to dynamically update my app to use the latest configuration data in App Configuration.
azure-app-configuration Enable Dynamic Configuration Java Spring App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/enable-dynamic-configuration-java-spring-app.md
Then, open the *pom.xml* file in a text editor, and add a `<dependency>` for `sp
``` 1. To test dynamic configuration, open the Azure App Configuration portal associated with your application. Select **Configuration Explorer**, and update the value of your displayed key, for example:+ | Key | Value | ||| | application/config.message | Hello - Updated |
azure-arc Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/agent-overview.md
Title: Overview of the Connected Machine agent description: This article provides a detailed overview of the Azure Arc enabled servers agent available, which supports monitoring virtual machines hosted in hybrid environments. Previously updated : 03/15/2021 Last updated : 03/25/2021
The Azure Arc enabled servers Connected Machine agent enables you to manage your
## Agent component details + The Azure Connected Machine agent package contains several logical components, which are bundled together. * The Hybrid Instance Metadata service (HIMDS) manages the connection to Azure and the connected machine's Azure identity.
azure-functions Durable Functions Bindings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/durable/durable-functions-bindings.md
const df = require("durable-functions");
module.exports = async function (context) { const client = df.getClient(context); const entityId = new df.EntityId("Counter", "myCounter");
- await context.df.signalEntity(entityId, "add", 1);
+ await client.signalEntity(entityId, "add", 1);
}; ```
azure-functions Durable Functions Perf And Scale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/durable/durable-functions-perf-and-scale.md
There is one work-item queue per task hub in Durable Functions. It is a basic qu
### Control queue(s)
-There are multiple *control queues* per task hub in Durable Functions. A *control queue* is more sophisticated than the simpler work-item queue. Control queues are used to trigger the stateful orchestrator and entity functions. Because the orchestrator and entity function instances are stateful singletons, it's not possible to use a competing consumer model to distribute load across VMs. Instead, orchestrator and entity messages are load-balanced across the control queues. More details on this behavior can be found in subsequent sections.
+There are multiple *control queues* per task hub in Durable Functions. A *control queue* is more sophisticated than the simpler work-item queue. Control queues are used to trigger the stateful orchestrator and entity functions. Because the orchestrator and entity function instances are stateful singletons, it's important that each orchestration or entity is only processed by one worker at a time. To achieve this, each orchestration instance or entity is assigned to a single control queue. These control queues are load balanced across workers to ensure that each queue is only processed by one worker at a time. More details on this behavior can be found in subsequent sections.
Control queues contain a variety of orchestration lifecycle message types. Examples include [orchestrator control messages](durable-functions-instance-management.md), activity function *response* messages, and timer messages. As many as 32 messages will be dequeued from a control queue in a single poll. These messages contain payload data as well as metadata including which orchestration instance it is intended for. If multiple dequeued messages are intended for the same orchestration instance, they will be processed as a batch.
The maximum polling delay is configurable via the `maxQueuePollingInterval` prop
### Orchestration start delays Orchestrations instances are started by putting an `ExecutionStarted` message in one of the task hub's control queues. Under certain conditions, you may observe multi-second delays between when an orchestration is scheduled to run and when it actually starts running. During this time interval, the orchestration instance remains in the `Pending` state. There are two potential causes of this delay:
-1. **Backlogged control queues**: If the control queue for this instance contains a large number of messages, it may take time before the `ExecutionStarted` message is received and processed by the runtime. Message backlogs can happen when orchestrations are processing lots of events concurrently. Events that go into the control queue include orchestration start events, activity completions, durable timers, termination, and external events. If this delay happens under normal circumstances, consider creating a new task hub with a larger number of partitions. Configuring more partitions will cause the runtime to create more control queues for load distribution.
+1. **Backlogged control queues**: If the control queue for this instance contains a large number of messages, it may take time before the `ExecutionStarted` message is received and processed by the runtime. Message backlogs can happen when orchestrations are processing lots of events concurrently. Events that go into the control queue include orchestration start events, activity completions, durable timers, termination, and external events. If this delay happens under normal circumstances, consider creating a new task hub with a larger number of partitions. Configuring more partitions will cause the runtime to create more control queues for load distribution. Each partition corresponds 1:1 with a control queue, and a task hub supports a maximum of 16 partitions.
2. **Back off polling delays**: Another common cause of orchestration delays is the [previously described back-off polling behavior for control queues](#queue-polling). However, this delay is only expected when an app is scaled out to two or more instances. If there is only one app instance or if the app instance that starts the orchestration is also the same instance that is polling the target control queue, then there will not be a queue polling delay. Back off polling delays can be reduced by updating the **host.json** settings, as described previously.
If not specified, the default `AzureWebJobsStorage` storage account is used. For
## Orchestrator scale-out
-Activity functions are stateless and scaled out automatically by adding VMs. Orchestrator functions and entities, on the other hand, are *partitioned* across one or more control queues. The number of control queues is defined in the **host.json** file. The following example host.json snippet sets the `durableTask/storageProvider/partitionCount` property (or `durableTask/partitionCount` in Durable Functions 1.x) to `3`.
+While activity functions can be scaled out infinitely by adding more VMs elastically, individual orchestrator instances and entities are each constrained to a single partition, and the maximum number of partitions is bounded by the `partitionCount` setting in your `host.json`.
+
+> [!NOTE]
+> Generally speaking, orchestrator functions are intended to be lightweight and should not require large amounts of computing power. It is therefore not necessary to create a large number of control queue partitions to get great throughput for orchestrations. Most of the heavy work should be done in stateless activity functions, which can be scaled out infinitely.
+
+The number of control queues is defined in the **host.json** file. The following example host.json snippet sets the `durableTask/storageProvider/partitionCount` property (or `durableTask/partitionCount` in Durable Functions 1.x) to `3`. Note that there are as many control queues as there are partitions.
### Durable Functions 2.x
Activity functions are stateless and scaled out automatically by adding VMs. Orc
A task hub can be configured with between 1 and 16 partitions. If not specified, the default partition count is **4**.
-When scaling out to multiple function host instances (typically on different VMs), each instance acquires a lock on one of the control queues. These locks are internally implemented as blob storage leases and ensure that an orchestration instance or entity only runs on a single host instance at a time. If a task hub is configured with three control queues, orchestration instances and entities can be load-balanced across as many as three VMs. Additional VMs can be added to increase capacity for activity function execution.
+In low-traffic scenarios, your application will be scaled in, so partitions will be managed by a small number of workers. As an example, consider the diagram below.
+
+![Scale-in orchestrations diagram](./media/durable-functions-perf-and-scale/scale-progression-1.png)
+
+In the previous diagram, orchestrators 1 through 6 are load balanced across partitions. Similarly, partitions, like activities, are load balanced across workers, regardless of the number of orchestrators that get started.
+
+If you're running on the Azure Functions Consumption or Elastic Premium plans, or if you have load-based auto-scaling configured, more workers will get allocated as traffic increases and partitions will eventually load balance across all workers. If we continue to scale out, each partition will eventually be managed by a single worker. Activities, on the other hand, will continue to be load-balanced across all workers. This is shown in the image below.
+
+![First scaled-out orchestrations diagram](./media/durable-functions-perf-and-scale/scale-progression-2.png)
+
+The upper bound on the number of concurrent _active_ orchestrations at any given time is equal to the number of workers allocated to your application _times_ your value for `maxConcurrentOrchestratorFunctions`. This upper bound becomes more precise when your partitions are fully scaled out across workers: because each worker has only a single Functions host instance, the maximum number of concurrent _active_ orchestrator instances equals your number of partitions _times_ your value for `maxConcurrentOrchestratorFunctions`. The image below illustrates a fully scaled-out scenario where more orchestrators are added but some are inactive, shown in grey.
+
+![Second scaled-out orchestrations diagram](./media/durable-functions-perf-and-scale/scale-progression-3.png)
+
+During scale-out, control queue locks may be redistributed across Functions host instances to ensure that partitions are evenly distributed. These locks are internally implemented as blob storage leases and ensure that any individual orchestration instance or entity only runs on a single host instance at a time. If a task hub is configured with three partitions (and therefore three control queues), orchestration instances and entities can be load-balanced across all three lease-holding host instances. Additional VMs can be added to increase capacity for activity function execution.
The following diagram illustrates how the Azure Functions host interacts with the storage entities in a scaled out environment.
-![Scale diagram](./media/durable-functions-perf-and-scale/scale-diagram.png)
+![Scale diagram](./media/durable-functions-perf-and-scale/scale-interactions-diagram.png)
As shown in the previous diagram, all VMs compete for messages on the work-item queue. However, only three VMs can acquire messages from control queues, and each VM locks a single control queue.
azure-functions Functions Dotnet Dependency Injection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-dotnet-dependency-injection.md
Previously updated : 01/27/2021 Last updated : 03/24/2021
Azure Functions supports the dependency injection (DI) software design pattern,
- Support for dependency injection begins with Azure Functions 2.x.
+- Dependency injection patterns differ depending on whether your C# functions run [in-process](functions-dotnet-class-library.md) or [out-of-process](dotnet-isolated-process-guide.md).
+
+> [!IMPORTANT]
+> The guidance in this article applies only to [C# class library functions](functions-dotnet-class-library.md), which run in-process with the runtime. This custom dependency injection model doesn't apply to [.NET isolated functions](dotnet-isolated-process-guide.md), which lets you run .NET 5.0 functions out-of-process. The .NET isolated process model relies on regular ASP.NET Core dependency injection patterns. To learn more, see [Dependency injection](dotnet-isolated-process-guide.md#dependency-injection) in the .NET isolated process guide.
+ ## Prerequisites Before you can use dependency injection, you must install the following NuGet packages:
azure-maps Android Map Events https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/android-map-events.md
The map manages all events through its `events` property. The following table li
| `OnCameraMove` | `()` | Fired repeatedly during an animated transition from one view to another, as the result of either user interaction or methods. | | `OnCameraMoveCanceled` | `()` | Fired when a movement request to the camera has been canceled. | | `OnCameraMoveStarted` | `(int reason)` | Fired just before the map begins a transition from one view to another, as the result of either user interaction or methods. The `reason` argument of the event listener returns an integer value that provides details of how the camera movement was initiated. The following list outlines the possible reasons:<ul><li>1: Gesture</li><li>2: Developer animation</li><li>3: API Animation</li></ul> |
-| `OnClick` | `(double lat, double lon)` | Fired when the map is pressed and released at the same point on the map. |
-| `OnFeatureClick` | `(List<Feature>)` | Fired when the map is pressed and released at the same point on a feature. |
+| `OnClick` | `(double lat, double lon): boolean` | Fired when the map is pressed and released at the same point on the map. This event handler returns a boolean value indicating if the event should be consumed or passed further to other event listeners. |
+| `OnFeatureClick` | `(List<Feature>): boolean` | Fired when the map is pressed and released at the same point on a feature. This event handler returns a boolean value indicating if the event should be consumed or passed further to other event listeners. |
| `OnLayerAdded` | `(Layer layer)` | Fired when a layer is added to the map. | | `OnLayerRemoved` | `(Layer layer)` | Fired when a layer is removed from the map. | | `OnLoaded` | `()` | Fired immediately after all necessary resources have been downloaded and the first visually complete rendering of the map has occurred. |
-| `OnLongClick` | `(double lat, double lon)` | Fired when the map is pressed, held for a moment, and then released at the same point on the map. |
-| `OnLongFeatureClick ` | `(List<Feature>)` | Fired when the map is pressed, held for a moment, and then released at the same point on a feature. |
+| `OnLongClick` | `(double lat, double lon): boolean` | Fired when the map is pressed, held for a moment, and then released at the same point on the map. This event handler returns a boolean value indicating if the event should be consumed or passed further to other event listeners. |
+| `OnLongFeatureClick ` | `(List<Feature>): boolean` | Fired when the map is pressed, held for a moment, and then released at the same point on a feature. This event handler returns a boolean value indicating if the event should be consumed or passed further to other event listeners. |
| `OnReady` | `(AzureMap map)` | Fired when the map is initially loaded, or when the app orientation changes and the minimum required map resources are loaded and the map is ready to be programmatically interacted with. | | `OnSourceAdded` | `(Source source)` | Fired when a `DataSource` or `VectorTileSource` is added to the map. | | `OnSourceRemoved` | `(Source source)` | Fired when a `DataSource` or `VectorTileSource` is removed from the map. |
The following code shows how to add the `OnClick`, `OnFeatureClick`, and `OnCame
```java map.events.add((OnClick) (lat, lon) -> { //Map clicked.+
+ //Return true indicating if event should be consumed and not passed further to other listeners registered afterwards, false otherwise.
+ return true;
}); map.events.add((OnFeatureClick) (features) -> { //Feature clicked.+
+ //Return true indicating if event should be consumed and not passed further to other listeners registered afterwards, false otherwise.
+ return true;
}); map.events.add((OnCameraMove) () -> {
map.events.add((OnCameraMove) () -> {
```kotlin map.events.add(OnClick { lat: Double, lon: Double -> //Map clicked.+
+ //Return true indicating if event should be consumed and not passed further to other listeners registered afterwards, false otherwise.
+ false
}) map.events.add(OnFeatureClick { features: List<Feature?>? -> //Feature clicked.+
+ //Return true indicating if event should be consumed and not passed further to other listeners registered afterwards, false otherwise.
+ false
}) map.events.add(OnCameraMove {
map.layers.add(layer);
//Add a feature click event to the map and pass the layer ID to limit the event to the specified layer. map.events.add((OnFeatureClick) (features) -> { //One or more features clicked.+
+ //Return true indicating if event should be consumed and not passed further to other listeners registered afterwards, false otherwise.
+ return true;
}, layer); //Add a long feature click event to the map and pass the layer ID to limit the event to the specified layer. map.events.add((OnLongFeatureClick) (features) -> { //One or more features long clicked.+
+ //Return true indicating if event should be consumed and not passed further to other listeners registered afterwards, false otherwise.
+ return true;
}, layer); ```
map.layers.add(layer)
map.events.add( OnFeatureClick { features: List<Feature?>? -> //One or more features clicked.+
+ //Return true indicating if event should be consumed and not passed further to other listeners registered afterwards, false otherwise.
+ false
}, layer )
map.events.add(
map.events.add( OnLongFeatureClick { features: List<Feature?>? -> //One or more features long clicked.+
+ //Return true indicating if event should be consumed and not passed further to other listeners registered afterwards, false otherwise.
+ false
}, layer )
azure-maps Clustering Point Data Web Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/clustering-point-data-web-sdk.md
When visualizing many data points on the map, data points may overlap over each
## Enabling clustering on a data source
-Enable clustering in the `DataSource` class by setting the `cluster` option to true. Set `clusterRadius` to select nearby points and combines them into a cluster. The value of `clusterRadius` is in pixels. Use `clusterMaxZoom` to specify a zoom level at which to disable the clustering logic. Here is an example of how to enable clustering in a data source.
+Enable clustering in the `DataSource` class by setting the `cluster` option to `true`. Set `clusterRadius` to select nearby points and combine them into a cluster. The value of `clusterRadius` is in pixels. Use `clusterMaxZoom` to specify a zoom level at which to disable the clustering logic. Here is an example of how to enable clustering in a data source.
```javascript //Create a data source and enable clustering.
azure-maps Display Feature Information Android https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/display-feature-information-android.md
map.events.add((OnFeatureClick) (features) -> {
String msg = features.get(0).getStringProperty("title"); //Do something with the message.+
+ //Return a boolean indicating if the event should be consumed or continue to bubble up.
+ return false;
}, layer.getId()); //Limit this event to the symbol layer. ```
map.events.add(OnFeatureClick { features: List<Feature> ->
val msg = features[0].getStringProperty("title") //Do something with the message.+
+ //Return a boolean indicating if the event should be consumed or continue to bubble up.
+ return false
}, layer.getId()) //Limit this event to the symbol layer. ```
map.events.add((OnFeatureClick) (features) -> {
//Display a toast message with the title information. Toast.makeText(this, msg, Toast.LENGTH_SHORT).show();+
+ //Return a boolean indicating if the event should be consumed or continue to bubble up.
+ return false;
}, layer.getId()); //Limit this event to the symbol layer. ```
map.events.add(OnFeatureClick { features: List<Feature> ->
//Display a toast message with the title information. Toast.makeText(this, msg, Toast.LENGTH_SHORT).show()+
+ //Return a boolean indicating if the event should be consumed or continue to bubble up.
+ return false
}, layer.getId()) //Limit this event to the symbol layer. ```
In addition to toast messages, there are many other ways to present the metadata
## Display a popup
-The Azure Maps Android SDK provides a `Popup` class that makes it easy to create UI annotation elements that are anchored to a position on the map. For popups you have to pass in a view with a relative layout into the `content` option of the popup. Here is a simple layout example that displays dark text on top of a while background.
+The Azure Maps Android SDK provides a `Popup` class that makes it easy to create UI annotation elements that are anchored to a position on the map. For popups, you have to pass in a view with a relative layout into the `content` option of the popup. Here is a simple layout example that displays dark text on top of a white background.
```xml <?xml version="1.0" encoding="utf-8"?>
map.events.add((OnFeatureClick)(feature) -> {
//Open the popup. popup.open();+
+ //Return a boolean indicating if the event should be consumed or continue to bubble up.
+ return false;
}); ```
map.events.add(OnFeatureClick { feature: List<Feature> ->
//Open the popup. popup.open()+
+ //Return a boolean indicating if the event should be consumed or continue to bubble up.
+ return false
}) ```
azure-maps How To Add Tile Layer Android Map https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/how-to-add-tile-layer-android-map.md
Title: Add a tile layer to Android maps | Microsoft Azure Maps
description: Learn how to add a tile layer to a map. See an example that uses the Azure Maps Android SDK to add a weather radar overlay to a map. Previously updated : 2/26/2021 Last updated : 3/25/2021
A Tile layer loads in tiles from a server. These images can be pre-rendered and
* X, Y, Zoom notation - Based on the zoom level, x is the column and y is the row position of the tile in the tile grid. * Quadkey notation - Combines x, y, and zoom information into a single string value that is a unique identifier for a tile.
-* Bounding Box - Bounding box coordinates can be used to specify an image in the format `{west},{south},{east},{north}`, which is commonly used by [Web Mapping Services (WMS)](https://www.opengeospatial.org/standards/wms).
+* Bounding Box - Bounding box coordinates can be used to specify an image in the format `{west},{south},{east},{north}`, which is commonly used by [web-mapping services (WMS)](https://www.opengeospatial.org/standards/wms).
> [!TIP] > A TileLayer is a great way to visualize large data sets on the map. Not only can a tile layer be generated from an image, but vector data can also be rendered as a tile layer too. By rendering vector data as a tile layer, the map control only needs to load the tiles, which can be much smaller in file size than the vector data they represent. This technique is used by many who need to render millions of rows of data on the map.
-The tile URL passed into a Tile layer must be an http/https URL to a TileJSON resource or a tile URL template that uses the following parameters:
+The tile URL passed into a Tile layer must be an http/https URL to a TileJSON resource or a tile URL template that uses the following parameters:
* `{x}` - X position of the tile. Also needs `{y}` and `{z}`. * `{y}` - Y position of the tile. Also needs `{x}` and `{z}`.
The following screenshot shows the above code displaying a tile layer of nautica
![Android map displaying tile layer](media/how-to-add-tile-layer-android-map/xyz-tile-layer-android.png)
+## Add an OGC web-mapping service (WMS)
+
+A web-mapping service (WMS) is an Open Geospatial Consortium (OGC) standard for serving images of map data. There are many open data sets available in this format that you can use with Azure Maps. This type of service can be used with a tile layer if the service supports the `EPSG:3857` coordinate reference system (CRS). When using a WMS service, set the width and height parameters to the same value supported by the service, and be sure to set that same value in the `tileSize` option. In the formatted URL, set the `BBOX` parameter of the service with the `{bbox-epsg-3857}` placeholder.
++
+``` java
+TileLayer layer = new TileLayer(
+ tileUrl("https://mrdata.usgs.gov/services/gscworld?FORMAT=image/png&HEIGHT=1024&LAYERS=geology&REQUEST=GetMap&STYLES=default&TILED=true&TRANSPARENT=true&WIDTH=1024&VERSION=1.3.0&SERVICE=WMS&CRS=EPSG:3857&BBOX={bbox-epsg-3857}"),
+ tileSize(1024)
+);
+
+map.layers.add(layer, "labels");
+```
+++
+```kotlin
+val layer = TileLayer(
+ tileUrl("https://mrdata.usgs.gov/services/gscworld?FORMAT=image/png&HEIGHT=1024&LAYERS=geology&REQUEST=GetMap&STYLES=default&TILED=true&TRANSPARENT=true&WIDTH=1024&VERSION=1.3.0&SERVICE=WMS&CRS=EPSG:3857&BBOX={bbox-epsg-3857}"),
+ tileSize(1024)
+)
+
+map.layers.add(layer, "labels")
+```
++
+The following screenshot shows the above code overlaying a web-mapping service of geological data from the [U.S. Geological Survey (USGS)](https://mrdata.usgs.gov/) on top of a map, below the labels.
+
+![Android map displaying WMS tile layer](media/how-to-add-tile-layer-android-map/android-tile-layer-wms.jpg)
+
+## Add an OGC web-mapping tile service (WMTS)
+
+A web-mapping tile service (WMTS) is an Open Geospatial Consortium (OGC) standard for serving tile-based overlays for maps. There are many open data sets available in this format that you can use with Azure Maps. This type of service can be used with a tile layer if the service supports the `EPSG:3857` or `GoogleMapsCompatible` coordinate reference system (CRS). When using a WMTS service, set the width and height parameters to the same value supported by the service, and be sure to set that same value in the `tileSize` option. In the formatted URL, replace the following placeholders accordingly:
+
+* `{TileMatrix}` => `{z}`
+* `{TileRow}` => `{y}`
+* `{TileCol}` => `{x}`
++
+``` java
+TileLayer layer = new TileLayer(
+ tileUrl("https://basemap.nationalmap.gov/arcgis/rest/services/USGSImageryOnly/MapServer/WMTS/tile/1.0.0/USGSImageryOnly/default/GoogleMapsCompatible/{z}/{y}/{x}"),
+ tileSize(256),
+ bounds(-173.25000107492872, 0.0005794121990209753, 146.12527718104752, 71.506811402077),
+ maxSourceZoom(18)
+);
+
+map.layers.add(layer, "transit");
+```
+++
+```kotlin
+val layer = TileLayer(
+ tileUrl("https://basemap.nationalmap.gov/arcgis/rest/services/USGSImageryOnly/MapServer/WMTS/tile/1.0.0/USGSImageryOnly/default/GoogleMapsCompatible/{z}/{y}/{x}"),
+ tileSize(256),
+ bounds(-173.25000107492872, 0.0005794121990209753, 146.12527718104752, 71.506811402077),
+ maxSourceZoom(18)
+)
+
+map.layers.add(layer, "transit")
+```
++
+The following screenshot shows the above code overlaying a web-mapping tile service of imagery from the [U.S. Geological Survey (USGS) National Map](https://viewer.nationalmap.gov/services/) on top of a map, below the roads and labels.
+
+![Android map displaying WMTS tile layer](media/how-to-add-tile-layer-android-map/android-tile-layer-wmts.jpg)
+ ## Next steps See the following article to learn more about ways to overlay imagery on a map.
azure-maps How To Show Traffic Android https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/how-to-show-traffic-android.md
map.setTraffic(
::: zone-end
-The following screenshot shows the above code rending real-time traffic information on the map.
+The following screenshot shows the above code rendering real-time traffic information on the map.
![Map showing real-time traffic information](media/how-to-show-traffic-android/android-show-traffic.png)
map.events.add(OnFeatureClick { features: List<Feature>? ->
::: zone-end
-The following screenshot shows the above code rending real-time traffic information on the map with a toast message displaying incident details.
+The following screenshot shows the above code rendering real-time traffic information on the map with a toast message displaying incident details.
![Map showing real-time traffic information with a toast message displaying incident details](media/how-to-show-traffic-android/android-traffic-details.png)
azure-maps How To Use Android Map Control Library https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/how-to-use-android-map-control-library.md
The first option is to pass the language and view regional information into the
```java static {
- //Set your Azure Maps Key.
- AzureMaps.setSubscriptionKey("<Your Azure Maps Key>");
- //Alternatively use Azure Active Directory authenticate.
- //AzureMaps.setAadProperties("<Your aad clientId>", "<Your aad AppId>", "<Your aad Tenant>");
+ AzureMaps.setAadProperties("<Your aad clientId>", "<Your aad AppId>", "<Your aad Tenant>");
+
+ //Set your Azure Maps Key.
+ //AzureMaps.setSubscriptionKey("<Your Azure Maps Key>");
//Set the language to be used by Azure Maps. AzureMaps.setLanguage("fr-FR");
static {
```kotlin companion object { init {
- //Set your Azure Maps Key.
- AzureMaps.setSubscriptionKey("<Your Azure Maps Key>");
- //Alternatively use Azure Active Directory authenticate.
- //AzureMaps.setAadProperties("<Your aad clientId>", "<Your aad AppId>", "<Your aad Tenant>");
+ AzureMaps.setAadProperties("<Your aad clientId>", "<Your aad AppId>", "<Your aad Tenant>");
+
+ //Set your Azure Maps Key.
+ //AzureMaps.setSubscriptionKey("<Your Azure Maps Key>");
//Set the language to be used by Azure Maps. AzureMaps.setLanguage("fr-FR");
azure-maps Map Add Tile Layer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/map-add-tile-layer.md
Title: Add a tile layer to a map | Microsoft Azure Maps
description: Learn how to superimpose images on maps. See an example that uses the Azure Maps Web SDK to add a tile layer containing a weather radar overlay to a map. Previously updated : 07/29/2019 Last updated : 3/25/2021
This article shows you how to overlay a Tile layer on the map. Tile layers allow you to superimpose images on top of Azure Maps base map tiles. For more information on Azure Maps tiling system, see [Zoom levels and tile grid](zoom-levels-and-tile-grid.md).
-A Tile layer loads in tiles from a server. These images can either be pre-rendered or dynamically rendered. Pre-rendered images are stored like any other image on a server using a naming convention that the tile layer understands. Dynamically rendered images use a service to load the images close to real time. There are three different tile service naming conventions supported by Azure Maps [TileLayer](/javascript/api/azure-maps-control/atlas.layer.tilelayer) class:
+A Tile layer loads in tiles from a server. These images can either be pre-rendered or dynamically rendered. Pre-rendered images are stored like any other image on a server using a naming convention that the tile layer understands. Dynamically rendered images use a service to load the images close to real time. There are three different tile service naming conventions supported by the Azure Maps [TileLayer](/javascript/api/azure-maps-control/atlas.layer.tilelayer) class:
* X, Y, Zoom notation - X is the column, Y is the row position of the tile in the tile grid, and the Zoom notation is a value based on the zoom level. * Quadkey notation - Combines x, y, and zoom information into a single string value. This string value becomes a unique identifier for a single tile.
-* Bounding Box - Specify an image in the Bounding box coordinates format: `{west},{south},{east},{north}`. This format is commonly used by [Web Mapping Services (WMS)](https://www.opengeospatial.org/standards/wms).
+* Bounding Box - Specify an image in the Bounding box coordinates format: `{west},{south},{east},{north}`. This format is commonly used by [web-mapping services (WMS)](https://www.opengeospatial.org/standards/wms).
> [!TIP] > A [TileLayer](/javascript/api/azure-maps-control/atlas.layer.tilelayer) is a great way to visualize large data sets on the map. Not only can a tile layer be generated from an image, vector data can also be rendered as a tile layer too. By rendering vector data as a tile layer, map control only needs to load the tiles which are smaller in file size than the vector data they represent. This technique is commonly used to render millions of rows of data on the map.
-The tile URL passed into a Tile layer must be an http or an https URL to a TileJSON resource or a tile URL template that uses the following parameters:
+The tile URL passed into a Tile layer must be an http or an https URL to a TileJSON resource or a tile URL template that uses the following parameters:
* `{x}` - X position of the tile. Also needs `{y}` and `{z}`. * `{y}` - Y position of the tile. Also needs `{x}` and `{z}`.
The tile URL passed into a Tile layer must be an http or an https URL to a TileJ
## Add a tile layer
- This sample shows how to create a tile layer that points to a set of tiles. This sample uses the x, y, zoom tiling system. The source of this tile layer is a weather radar overlay from the [Iowa Environmental Mesonet of Iowa State University](https://mesonet.agron.iastate.edu/ogc/). When viewing radar data, ideally users would clearly see the labels of cities as they navigate the map. This behavior can be implemented by inserting the tile layer below the `labels` layer.
+ This sample shows how to create a tile layer that points to a set of tiles. This sample uses the x, y, zoom tiling system. The source of this tile layer is the [OpenSeaMap project](https://openseamap.org/index.php), which contains crowd-sourced nautical charts. When viewing this data, ideally users would clearly see the labels of cities as they navigate the map. This behavior can be implemented by inserting the tile layer below the `labels` layer.
```javascript //Create a tile layer and add it to the map below the label layer.
-//Weather radar tiles from Iowa Environmental Mesonet of Iowa State University.
map.layers.add(new atlas.layer.TileLayer({
- tileUrl: 'https://mesonet.agron.iastate.edu/cache/tile.py/1.0.0/nexrad-n0q-900913/{z}/{x}/{y}.png',
+ tileUrl: 'https://tiles.openseamap.org/seamark/{z}/{x}/{y}.png',
opacity: 0.8,
- tileSize: 256
+ tileSize: 256,
+ minSourceZoom: 7,
+ maxSourceZoom: 17
}), 'labels'); ```
Below is the complete running code sample of the above functionality.
<iframe height='500' scrolling='no' title='Tile Layer using X, Y, and Z' src='//codepen.io/azuremaps/embed/BGEQjG/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' loading="lazy" allowtransparency='true' allowfullscreen='true' style='width: 100%;'>See the Pen <a href='https://codepen.io/azuremaps/pen/BGEQjG/'>Tile Layer using X, Y, and Z</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>. </iframe>
+## Add an OGC web-mapping service (WMS)
+
+A web-mapping service (WMS) is an Open Geospatial Consortium (OGC) standard for serving images of map data. There are many open data sets available in this format that you can use with Azure Maps. This type of service can be used with a tile layer if the service supports the `EPSG:3857` coordinate reference system (CRS). When using a WMS service, set the width and height parameters to the same value supported by the service, and be sure to set that same value in the `tileSize` option. In the formatted URL, set the `BBOX` parameter of the service with the `{bbox-epsg-3857}` placeholder.
+
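+A minimal sketch of this pattern, assuming a Web SDK map instance named `map` and reusing the USGS WMS endpoint and tile size from the Android example above:
+
+```javascript
+//'map' is assumed to be an existing atlas.Map instance.
+//Create a tile layer that points to a WMS service and insert it below the map labels.
+map.layers.add(new atlas.layer.TileLayer({
+    tileUrl: 'https://mrdata.usgs.gov/services/gscworld?FORMAT=image/png&HEIGHT=1024&LAYERS=geology&REQUEST=GetMap&STYLES=default&TILED=true&TRANSPARENT=true&WIDTH=1024&VERSION=1.3.0&SERVICE=WMS&CRS=EPSG:3857&BBOX={bbox-epsg-3857}',
+
+    //Match the WIDTH and HEIGHT values used in the URL above.
+    tileSize: 1024
+}), 'labels');
+```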
+The following screenshot shows the above code overlaying a web-mapping service of geological data from the [U.S. Geological Survey (USGS)](https://mrdata.usgs.gov/) on top of a map, below the labels.
+
+<br/>
+
+<iframe height="265" style="width: 100%;" scrolling="no" title="WMS Tile Layer" src="https://codepen.io/azuremaps/embed/BapjZqr?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true" frameborder="no" loading="lazy" allowtransparency="true" allowfullscreen="true">
+ See the Pen <a href='https://codepen.io/azuremaps/pen/BapjZqr'>WMS Tile Layer</a> by Azure Maps
+ (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
+</iframe>
+
+## Add an OGC web-mapping tile service (WMTS)
+
+A web-mapping tile service (WMTS) is an Open Geospatial Consortium (OGC) standard for serving tile-based overlays for maps. There are many open data sets available in this format that you can use with Azure Maps. This type of service can be used with a tile layer if the service supports the `EPSG:3857` or `GoogleMapsCompatible` coordinate reference system (CRS). When using a WMTS service, set the width and height parameters to the same value supported by the service, and be sure to set that same value in the `tileSize` option. In the formatted URL, replace the following placeholders accordingly, as shown in the sketch after this list:
+
+* `{TileMatrix}` => `{z}`
+* `{TileRow}` => `{y}`
+* `{TileCol}` => `{x}`
+
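+A minimal sketch of this pattern, assuming a Web SDK map instance named `map`; the endpoint, bounds, and zoom values below mirror the USGS National Map values from the Android example above:
+
+```javascript
+//'map' is assumed to be an existing atlas.Map instance.
+//Create a tile layer that points to a WMTS service and insert it below the transit and label layers.
+map.layers.add(new atlas.layer.TileLayer({
+    tileUrl: 'https://basemap.nationalmap.gov/arcgis/rest/services/USGSImageryOnly/MapServer/WMTS/tile/1.0.0/USGSImageryOnly/default/GoogleMapsCompatible/{z}/{y}/{x}',
+    tileSize: 256,
+
+    //Limit the layer to the area and zoom range covered by the service.
+    bounds: [-173.25000107492872, 0.0005794121990209753, 146.12527718104752, 71.506811402077],
+    maxSourceZoom: 18
+}), 'transit');
+```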
+The following screenshot shows the above code overlaying a web-mapping tile service of imagery from the [U.S. Geological Survey (USGS) National Map](https://viewer.nationalmap.gov/services/) on top of a map, below the roads and labels.
+
+<br/>
+
+<iframe height="500" style="width: 100%;" scrolling="no" title="WMTS tile layer" src="https://codepen.io/azuremaps/embed/BapjZVY?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true" frameborder="no" loading="lazy" allowtransparency="true" allowfullscreen="true">
+ See the Pen <a href='https://codepen.io/azuremaps/pen/BapjZVY'>WMTS tile layer</a> by Azure Maps
+ (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
+</iframe>
+ ## Customize a tile layer The tile layer class has many styling options. Here is a tool to try them out.
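+As an illustrative sketch only (the variable name `tileLayer` and the option values here are assumptions, not part of the original article), styling options can be updated on an existing layer at runtime through its `setOptions` method:
+
+```javascript
+//'tileLayer' is assumed to be a TileLayer instance that was already added to the map.
+tileLayer.setOptions({
+    opacity: 0.5,      //Make the overlay semi-transparent.
+    contrast: 0.2,     //Slightly increase contrast.
+    saturation: -0.5   //Desaturate the tile imagery.
+});
+```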
Learn more about the classes and methods used in this article:
See the following articles for more code samples to add to your maps: > [!div class="nextstepaction"]
-> [Add an image layer](./map-add-image-layer.md)
+> [Add an image layer](./map-add-image-layer.md)
azure-maps Map Extruded Polygon Android https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/map-extruded-polygon-android.md
map.layers.add(layer, "labels")
::: zone-end
-The following screenshot shows the above code rending a polygon stretched vertically using a polygon extrusion layer.
+The following screenshot shows the above code rendering a polygon stretched vertically using a polygon extrusion layer.
![Map with polygon stretched vertically using a polygon extrusion layer](media/map-extruded-polygon-android/polygon-extrusion-layer.jpg)
azure-maps Set Android Map Styles https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/set-android-map-styles.md
Be sure to complete the steps in the [Quickstart: Create an Android app](quick-a
## Set map style in the layout
-You can set a map style in the layout file for your activity class when adding the map control. The following code sets the center location, zoom level and map style.
+You can set a map style in the layout file for your activity class when adding the map control. The following code sets the center location, zoom level, and map style.
```XML <com.microsoft.azure.maps.mapcontrol.MapControl
The following screenshot shows the above code displaying a map with the satellit
## Setting the map camera
-The map camera controls the what part of the map is displayed in the map. The camera can be in the layout our programmatically in code. When setting it in code, there are two main methods for setting the position of the map; using center and zoom, or passing in a bounding box. The following code shows how to set all optional camera options when using `center` and `zoom`.
+The map camera controls which part of the world is displayed in the map viewport. The camera can be set in the layout or programmatically in code. When setting it in code, there are two main methods for setting the position of the map: using center and zoom, or passing in a bounding box. The following code shows how to set all optional camera options when using `center` and `zoom`.
::: zone pivot="programming-language-java-android"
map.setCamera(
//The minimum zoom level the map will zoom-out to when animating from one location to another on the map. minZoom(10),
- //The maximium zoom level the map will zoom-in to when animating from one location to another on the map.
+ //The maximum zoom level the map will zoom-in to when animating from one location to another on the map.
maxZoom(14) ); ```
map.setCamera(
//The minimum zoom level the map will zoom-out to when animating from one location to another on the map. minZoom(10),
- //The maximium zoom level the map will zoom-in to when animating from one location to another on the map.
+ //The maximum zoom level the map will zoom-in to when animating from one location to another on the map.
maxZoom(14) ) ```
map.setCamera(
//Amount of pixel buffer around the bounding box to provide extra space around the bounding box. padding(20),
- //The maximium zoom level the map will zoom-in to when animating from one location to another on the map.
+ //The maximum zoom level the map will zoom-in to when animating from one location to another on the map.
maxZoom(14) ); ```
map.setCamera(
//Amount of pixel buffer around the bounding box to provide extra space around the bounding box. padding(20),
- //The maximium zoom level the map will zoom-in to when animating from one location to another on the map.
+ //The maximum zoom level the map will zoom-in to when animating from one location to another on the map.
maxZoom(14) ) ``` ::: zone-end
-Note that the aspect ratio of a bounding box may not be the same as the aspect ratio of the map, as such the map will often show the full bounding box area, but will often only be tight vertically or horizontally.
+The aspect ratio of a bounding box may not be the same as the aspect ratio of the map. In that case, the map will show the full bounding box area but will typically be tight in only one dimension, either vertically or horizontally.
## Next steps
azure-monitor Container Insights Persistent Volumes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/containers/container-insights-persistent-volumes.md
Starting with agent version *ciprod10052020*, Azure Monitor for containers integ
Container insights automatically starts monitoring PV usage by collecting the following metrics at 60-second intervals and storing them in the **InsightMetrics** table.
-|Metric name |Metric Dimension (tags) | Metric Description |
-| `pvUsedBytes`|podUID, podName, pvcName, pvcNamespace, capacityBytes, clusterId, clusterName|Used space in bytes for a specific persistent volume with a claim used by a specific pod. `capacityBytes` is folded in as a dimension in the Tags field to reduce data ingestion cost and to simplify queries.|
+| Metric name | Metric Dimension (tags) | Metric Description |
+|--|--|-|
+| `pvUsedBytes`| podUID, podName, pvcName, pvcNamespace, capacityBytes, clusterId, clusterName| Used space in bytes for a specific persistent volume with a claim used by a specific pod. `capacityBytes` is folded in as a dimension in the Tags field to reduce data ingestion cost and to simplify queries.|
Learn more about configuring collected PV metrics [here](./container-insights-agent-config.md).
Azure Monitor for containers automatically starts monitoring PVs by collecting t
|Data |Data Source| Data Type| Fields| |--|--|-|-|
-|Inventory of persistent volumes in a Kubernetes cluster |Kube API |`KubePVInventory` | PVName, PVCapacityBytes, PVCName, PVCNamespace, PVStatus, PVAccessModes, PVType, PVTypeInfo, PVStorageClassName, PVCreationTimestamp, TimeGenerated, ClusterId, ClusterName, _ResourceId |
+|Inventory of persistent volumes in a Kubernetes cluster |Kube API |`KubePVInventory` | PVName, PVCapacityBytes, PVCName, PVCNamespace, PVStatus, PVAccessModes, PVType, PVTypeInfo, PVStorageClassName, PVCreationTimestamp, TimeGenerated, ClusterId, ClusterName, _ResourceId |
## Monitor Persistent Volumes
azure-monitor Container Insights Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/containers/container-insights-troubleshoot.md
Title: How to Troubleshoot Container insights | Microsoft Docs description: This article describes how you can troubleshoot and resolve issues with Container insights. Previously updated : 07/21/2020 Last updated : 03/25/2021
Container insights agent Pods uses the cAdvisor endpoint on the node agent to ga
To view the non-Azure Kubernetes cluster in Container insights, Read access is required on the Log Analytics workspace supporting this Insight and on the Container Insights solution resource **ContainerInsights (*workspace*)**.
+## Metrics aren't being collected
+
+1. Verify that the cluster is in a [supported region for custom metrics](../essentials/metrics-custom-overview.md#supported-regions).
+
+2. Verify that the **Monitoring Metrics Publisher** role assignment exists using the following CLI command:
+
+ ``` azurecli
+ az role assignment list --assignee "SP/UserassignedMSI for omsagent" --scope "/subscriptions/<subid>/resourcegroups/<RG>/providers/Microsoft.ContainerService/managedClusters/<clustername>" --role "Monitoring Metrics Publisher"
+ ```
+ For clusters with MSI, the user-assigned client ID for omsagent changes every time monitoring is enabled or disabled, so the role assignment should exist for the current MSI client ID.
+
+3. For clusters with Azure Active Directory pod identity enabled and using MSI:
+
+ - Verify the required label **kubernetes.azure.com/managedby: aks** is present on the omsagent pods using the following command:
+
+ `kubectl get pods --show-labels -n kube-system | grep omsagent`
+
+ - Verify that exceptions are enabled when pod identity is enabled using one of the supported methods at https://github.com/Azure/aad-pod-identity#1-deploy-aad-pod-identity.
+
+ Run the following command to verify:
+
+ `kubectl get AzurePodIdentityException -A -o yaml`
+
+ You should receive output similar to the following:
+
+ ```
+ apiVersion: "aadpodidentity.k8s.io/v1"
+ kind: AzurePodIdentityException
+ metadata:
+ name: mic-exception
+ namespace: default
+ spec:
+ podLabels:
+ app: mic
+ component: mic
+
+ apiVersion: "aadpodidentity.k8s.io/v1"
+ kind: AzurePodIdentityException
+ metadata:
+ name: aks-addon-exception
+ namespace: kube-system
+ spec:
+ podLabels:
+ kubernetes.azure.com/managedby: aks
+ ```
+++ ## Next steps With monitoring enabled to capture health metrics for both the AKS cluster nodes and pods, these health metrics are available in the Azure portal. To learn how to use Container insights, see [View Azure Kubernetes Service health](container-insights-analyze.md).
azure-netapp-files Azure Netapp Files Faqs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azure-netapp-files-faqs.md
No. Azure NetApp Files is not supported by Azure Storage Explorer.
### How do I determine if a directory is approaching the limit size?
-You can use the `stat` command from a client to see whether a directory is approaching the maximum size limit for directory metadata (320 MB).
+You can use the `stat` command from a client to see whether a directory is approaching the maximum size limit for directory metadata (320 MB).
-For a 320 MB directory, the number of blocks is 655360, with each block size being 512 bytes. (That is, 320x1024x1024/512.)
+For a 320-MB directory, the number of blocks is 655360, with each block size being 512 bytes. (That is, 320x1024x1024/512.) This number translates to approximately 4 million files maximum for a 320-MB directory. However, the actual maximum number of files might be lower, depending on factors such as the number of files containing non-ASCII characters in the directory. As such, you should use the `stat` command as follows to determine whether your directory is approaching its limit.
Examples:
azure-percept Audio Button Led Behavior https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/audio-button-led-behavior.md
Title: Azure Percept Audio button and LED behavior description: Learn more about the button and LED states of Azure Percept Audio--++ Previously updated : 02/18/2021- Last updated : 03/25/2021 # Azure Percept Audio button and LED behavior
-See the following guidance for information on the button and LED states of the Azure Percept Audio.
+See the following guidance for information on the button and LED states of the Azure Percept Audio device.
## Button behavior
-You can use the buttons to control the behavior of the device.
+Use the buttons to control the behavior of the device.
-|Button State| Behavior|
+|Button State|Behavior|
||-|
-|Mute| Press to mute/unmute the mic-array. The button event is release-triggered when pressed.|
-|PTT/PTS| Press PTT to bypass the keyword spotting state and activate the command listening state. Press again to stop the agent's active dialogue and revert to keyword spotting state. The button event is release-triggered when pressed. PTS only works when button is pressed while agent is speaking, not when agent is listening or thinking.|
+|Mute|Press to mute/unmute the mic array. The button event is release-triggered when pressed.|
+|PTT/PTS|Press PTT to bypass the keyword spotting state and activate the command listening state. Press again to stop the agent's active dialogue and revert to the keyword spotting state. The button event is release-triggered when pressed. PTS only works when the button is pressed while the agent is speaking, not when the agent is listening or thinking.|
## LED behavior
-You can use LED indicators to understand which state you device is in.
+Use the LED indicators to understand which state your device is in.
-|LED| LED State| Ear SoM Status|
-|||-|
-|L02| 1x white, static on |Power on |
-|L02| 1x white, 0.5 Hz flashing| Authentication in progress |
-|L01 & L02 & L03| 3x blue, static on| Waiting for keyword|
-|L01 & L02 & L03| LED array flashing, 20fps | Listening or speaking|
-|L01 & L02 & L03| LED array racing, 20fps| Thinking|
-|L01 & L02 & L03| 3x red, static on | Mute|
+|LED|LED State|Ear SoM Status|
+|||-|
+|L02|1x white, static on|Power on |
+|L02|1x white, 0.5 Hz flashing|Authentication in progress |
+|L01 & L02 & L03|3x blue, static on|Waiting for keyword|
+|L01 & L02 & L03|LED array flashing, 20fps |Listening or speaking|
+|L01 & L02 & L03|LED array racing, 20fps|Thinking|
+|L01 & L02 & L03|3x red, static on |Mute|
## Next steps
azure-percept Concept Security Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/concept-security-configuration.md
Azure Percept DK offers a great variety of security capabilities out of the box.
- Ensure data-at-rest encryption is enabled - Continuously monitor the device posture and quickly respond to alerts - Limit the number of administrators who have access to the device+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn more about Azure Percept security](./overview-percept-security.md)
azure-percept Dev Tools Installer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/dev-tools-installer.md
Previously updated : 02/18/2021 Last updated : 03/25/2021
-# Dev Tools Pack Installer Overview
+# Dev Tools Pack Installer overview
-The Dev Tools Pack Installer is a one-stop solution that installs and configures all of the tools required to develop an Intelligent Edge solution. If you have already installed any of the software packages listed below, the Dev Tools Pack Installer will reinstall those packages so that your tools are consistent with the Installer software versions.
+The Dev Tools Pack Installer is a one-stop solution that installs and configures all of the tools required to develop an advanced intelligent edge solution.
-## Mandatory Tools Installed
+## Mandatory tools
* [Visual Studio Code](https://code.visualstudio.com/) * [Python 3.6 or later](https://www.python.org/)
The Dev Tools Pack Installer is a one-stop solution that installs and configures
* [TensorFlow 1.13](https://www.tensorflow.org/) * [Azure Machine Learning SDK 1.1](/python/api/overview/azure/ml/)
-## Optional Tools Available for Installation
+## Optional tools
-* [Nvidia DeepStream SDK 5](https://developer.nvidia.com/deepstream-sdk) (Toolkit for developing solutions for Nvidia Accelerators)
-* [Intel OpenVino Toolkit 2020.2](https://docs.openvinotoolkit.org/) (Toolkit for developing solutions for Intel Accelerators)
+* [Nvidia DeepStream SDK 5](https://developer.nvidia.com/deepstream-sdk) (toolkit for developing solutions for Nvidia Accelerators)
+* [Intel OpenVino Toolkit 2020.2](https://docs.openvinotoolkit.org/) (toolkit for developing solutions for Intel Accelerators)
* [Lobe.ai](https://lobe.ai/) * [Streamlit](https://www.streamlit.io/) * [Pytorch 1.4.0 (Windows) or 1.2.0 (Linux)](https://pytorch.org/)
The Dev Tools Pack Installer is a one-stop solution that installs and configures
* [CUDA Toolkit 10.0.130](https://developer.nvidia.com/cuda-toolkit) * [Microsoft Cognitive Toolkit 2.5.1](https://www.microsoft.com/research/product/cognitive-toolkit/?lang=fr_ca)
-## Known Issues
+## Known issues
-- Optional Caffe install may fail if Docker is not running properly on system. If you would like to install Caffe, make sure Docker is installed and running before attempting Caffe installation through the Dev Tools Pack Installer.
+- Optional Caffe install may fail if Docker is not running properly. If you would like to install Caffe, make sure Docker is installed and running before attempting the Caffe installation through the Dev Tools Pack Installer.
- Optional CUDA install fails on incompatible systems. Before attempting to install the [CUDA Toolkit 10.0.130](https://developer.nvidia.com/cuda-toolkit) through the Dev Tools Pack Installer, verify your system compatibility.
-## Minimum Requirements
-
-* Docker minimum Requirements:
-
- * Windows:
- * https://docs.docker.com/docker-for-windows/install/#system-requirements
-
- - Windows 10 64-bit: Pro, Enterprise, or Education (Build 16299 or later).
-
- For Windows 10 Home, see Install Docker Desktop on Windows Home.
- - Hyper-V and Containers Windows features must be enabled.
- - The following hardware prerequisites are required to successfully run Client Hyper-V on Windows 10:
-
- - 64-bit processor with [Second Level Address Translation (SLAT)](https://en.wikipedia.org/wiki/Second_Level_Address_Translation)
- - 4-GB system RAM
- - BIOS-level hardware virtualization support must be enabled in the BIOS settings. For more information, see Virtualization.
-
- > [!NOTE]
- > Docker supports Docker Desktop on Windows based on MicrosoftΓÇÖs support lifecycle for Windows 10 operating system. For more information, see the [Windows lifecycle fact sheet](https://support.microsoft.com/help/13853/windows-lifecycle-fact-sheet).
-
- * Mac:
- * https://docs.docker.com/docker-for-mac/install/#system-requirements
-
- Your Mac must meet the following requirements to successfully install Docker Desktop:
-
- - **Mac hardware must be a 2010 or a newer model with an Intel processor**, with IntelΓÇÖs hardware support for memory management unit (MMU) virtualization, including Extended Page Tables (EPT) and Unrestricted Mode. You can check to see if your machine has this support by running the following command in a terminal: ```sysctl kern.hv_support```
-
- If your Mac supports the Hypervisor framework, the command prints ```kern.hv_support: 1```.
-
- - **macOS must be version 10.14 or newer**. That is, Mojave, Catalina, or Big Sur. We recommend upgrading to the latest version of macOS.
-
- If you experience any issues after upgrading your macOS to version 10.15, you must install the latest version of Docker Desktop to be compatible with this version of macOS.
-
- - At least 4 GB of RAM.
- - VirtualBox prior to version 4.3.30 must not be installed as it is not compatible with Docker Desktop.
-
- > [!NOTE]
- > Docker supports Docker Desktop on the most recent versions of macOS. That is, the current release of macOS and the previous two releases. As new major versions of macOS are made generally available, Docker stops supporting the oldest version and supports the newest version of macOS (in addition to the previous two releases). Docker Desktop currently supports macOS Mojave, macOS Catalina, and macOS Big Sur.
- >
- - The installer is not supported on Apple M1.
-
-## Instructions
-
-1. Download the Dev Tools Pack Installer for [Windows](https://go.microsoft.com/fwlink/?linkid=2132187), [Linux](https://go.microsoft.com/fwlink/?linkid=2132186), and [Mac](https://go.microsoft.com/fwlink/?linkid=2132296).
-
-1. Depending on your Platform, there will be some differences in launching the installer.
-
- 1. For Windows:
-
- 1. Click on the **Dev-Tools-Pack-Installer** to open the installation wizard.
-
- 1. For Mac:
-
- 1. After downloading, move the Dev-Tools-Pack-Installer.app file to the Applications folder.
-
- 1. Click on **Dev-Tools-Pack-Installer.app** to open the installation wizard.
-
- 1. If you get an ΓÇ£unidentified developerΓÇ¥ security dialog:
-
- 1. Go to System Preferences -> Security & Privacy -> General and click the ΓÇ£Open AnywayΓÇ¥ button next to ΓÇ£Dev-Tools-Pack-Installer.appΓÇ¥
-
- 1. Click the Electron icon on the Dock again
-
- 1. Click the ΓÇ£OpenΓÇ¥ button in the security dialog
-
- 1. For Linux:
-
- 1. When prompted by the browser click ΓÇ£SaveΓÇ¥ to complete the installer download
-
- 1. Add execution permissions to the **.appimage** file method 1 (Commandline):
-
- 1. Open the Linux Terminal
-
- 1. Type the following in the Terminal to go to the Downloads folder
-
- 1. cd ~/Downloads/
-
- 1. Type the following in the Terminal to make the AppImage executable
-
- 1. chmod +x **Dev-Tools-Pack-Installer.AppImage**
-
- 1. Type the following in the Terminal to run the installer
-
- 1. ./Dev-Tools-Pack-Installer.AppImage
-
- 1. Add execution permissions to the **.appimage** file method 2 (UI):
-
- 1. Right click on the .appimage file and select properties
-
- 1. Open Permissions tab
-
- 1. Check 'Allow executing file as a program' box
-
- 1. Close properties and open the .appimage file
+## Docker minimum requirements
+
+### Windows
+
+- Windows 10 64-bit: Pro, Enterprise, or Education (build 16299 or later).
+
+- Hyper-V and Containers Windows features must be enabled. The following hardware prerequisites are required to successfully run Hyper-V on Windows 10:
+
+ - 64-bit processor with [Second Level Address Translation (SLAT)](https://en.wikipedia.org/wiki/Second_Level_Address_Translation)
+ - 4 GB system RAM
+ - BIOS-level hardware virtualization support must be enabled in the BIOS settings. For more information, see Virtualization.
+
+> [!NOTE]
+> Docker supports Docker Desktop on Windows based on Microsoft's support lifecycle for Windows 10 operating system. For more information, see the [Windows lifecycle fact sheet](https://support.microsoft.com/help/13853/windows-lifecycle-fact-sheet).
+
+Learn more about [installing Docker Desktop on Windows](https://docs.docker.com/docker-for-windows/install/#install-docker-desktop-on-windows).
+
+### Mac
+
+- Mac must be a 2010 or a newer model with the following attributes:
+ - Intel processor
+ - Intel's hardware support for memory management unit (MMU) virtualization, including Extended Page Tables (EPT) and Unrestricted Mode. You can check to see if your machine has this support by running the following command in a terminal: ```sysctl kern.hv_support```. If your Mac supports the Hypervisor framework, the command prints ```kern.hv_support: 1```.
+
+- macOS version 10.14 or newer (Mojave, Catalina, or Big Sur). We recommend upgrading to the latest version of macOS. If you experience any issues after upgrading your macOS to version 10.15, you must install the latest version of Docker Desktop to be compatible with this version of macOS.
+
+- At least 4 GB of RAM.
+
+- Do not install VirtualBox versions earlier than 4.3.30; they are not compatible with Docker Desktop.
+
+- The installer is not supported on Apple M1.
+
+Learn more about [installing Docker Desktop on Mac](https://docs.docker.com/docker-for-mac/install/#system-requirements).
+
+## Launch the installer
+
+Download the Dev Tools Pack Installer for [Windows](https://go.microsoft.com/fwlink/?linkid=2132187), [Linux](https://go.microsoft.com/fwlink/?linkid=2132186), or [Mac](https://go.microsoft.com/fwlink/?linkid=2132296). Launch the installer according to your platform, as described below.
+
+### Windows
+
+1. Click on **Dev-Tools-Pack-Installer** to open the installation wizard.
+
+### Mac
+
+1. After downloading, move the **Dev-Tools-Pack-Installer.app** file to the **Applications** folder.
+
+1. Click on **Dev-Tools-Pack-Installer.app** to open the installation wizard.
+
+1. If you receive an "unidentified developer" security dialog:
+
+ 1. Go to **System Preferences** -> **Security & Privacy** -> **General** and click **Open Anyway** next to **Dev-Tools-Pack-Installer.app**.
+ 1. Click the Electron icon on the Dock again.
+ 1. Click **Open** in the security dialog.
+
+### Linux
+
+1. When prompted by the browser, click **Save** to complete the installer download.
+
+1. Add execution permissions to the **.appimage** file from the command line:
+
+ 1. Open a Linux terminal.
+
+ 1. Enter the following in the terminal to go to the **Downloads** folder:
+
+ ```bash
+ cd ~/Downloads/
+ ```
+
+ 1. Make the AppImage executable:
+
+ ```bash
+ chmod +x Dev-Tools-Pack-Installer.AppImage
+ ```
+
+ 1. Run the installer:
+
+ ```bash
+ ./Dev-Tools-Pack-Installer.AppImage
+ ```
+
+1. Alternatively, add execution permissions to the **.appimage** file through the UI:
+
+ 1. Right click on the .appimage file and select **Properties**.
+ 1. Open the **Permissions** tab.
+ 1. Check the box next to **Allow executing file as a program**.
+ 1. Close **Properties** and open the **.appimage** file.
+
+## Run the installer
1. On the **Install Dev Tools Pack Installer** page, click **View license** to view the license agreements of each software package included in the installer. If you accept the terms in the license agreements, check the box and click **Next**.
The Dev Tools Pack Installer is a one-stop solution that installs and configures
If the installer notifies you to verify Docker Desktop is in a good running state, see the following steps:
- 1. Windows:
-
- 1. Expand system tray hidden icons:
-
- 1. Expand system tray hidden icons if hidden:
-
- :::image type="content" source="./media/dev-tools-installer/system-tray.png" alt-text="System Tray.":::
-
- 1. Verify the Docker Desktop icon shows 'Docker Desktop is Running':
-
- :::image type="content" source="./media/dev-tools-installer/docker-status-running.png" alt-text="Docker Status.":::
-
- 1. If you do not see the above icon listed in the system tray, launch Docker Desktop from the start menu.
-
- 1. If Docker prompts you to reboot, it's fine to close the installer and relaunch after a reboot has completed and Docker is in a running state. Any successfully installed third-party applications should be detected and will not be automatically reinstalled.
+### Windows
+
+1. Expand system tray hidden icons.
+
+ :::image type="content" source="./media/dev-tools-installer/system-tray.png" alt-text="System Tray.":::
+
+1. Verify the Docker Desktop icon shows **Docker Desktop is Running**.
+
+ :::image type="content" source="./media/dev-tools-installer/docker-status-running.png" alt-text="Docker Status.":::
+
+1. If you do not see the above icon listed in the system tray, launch Docker Desktop from the start menu.
+
+1. If Docker prompts you to reboot, it's fine to close the installer and relaunch after a reboot has completed and Docker is in a running state. Any successfully installed third-party applications should be detected and will not be automatically reinstalled.
## Next steps
azure-percept How To Troubleshoot Setup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/how-to-troubleshoot-setup.md
Title: Troubleshoot issues during the on-boarding experience for Azure Percept DK
-description: Get troubleshooting tips for some of the more common issues found during the on-boarding experience
+ Title: Troubleshoot issues during the Azure Percept DK setup experience
+description: Get troubleshooting tips for some of the more common issues found during the setup experience
Previously updated : 02/18/2021 Last updated : 03/25/2021
-# Azure Percept DK onboarding experience troubleshooting guide
+# Azure Percept DK setup experience troubleshooting guide
-Here are some issues you may encounter during the Azure Percept DK onboarding experience. If after using the steps in this guide, the issue still persists. Contact Azure customer support.
+Refer to the table below for workarounds to common issues found during the [Azure Percept DK setup experience](./quickstart-percept-dk-set-up.md). If your issue still persists, contact Azure customer support.
|Issue|Reason|Workaround| |:--|:|:-|
-|When connecting to the Azure account sign-up pages or to the Azure portal, you may automatically sign in with a cached account. If this is not the account you intended on using, it may result in an experience that is inconsistent with the documentation.|This is usually because of a setting in the browser to "remember" an account you have previously used.|From the Azure page, click on the upper right corner of the page where it shows your account name, and then click "sign out." You will then be able to sign in with the correct account.|
-|The Azure Percept DK access point (scz-xxxx) network does not appear in the list of available Wi-Fi networks.|This is usually a temporary issue that resolves itself with a little time.|Wait for the network to appear. If it does not after more than 15 minutes, reboot the device.|
-|The connection to the Azure Percept DK access point frequently disconnects.|This can be due to a poor connection between the device and the host computer. It can also be caused by interference from other Wi-Fi connections on the host computer.|Make sure that the antennas are properly attached to the dev kit. If the dev kit is far away from the host computer, try moving it closer. Turn off any other internet connections such as LTE/5G if they are running on the host computer.|
-|The host computer shows a security warning about the connection to the Azure Percept DK access point.|This is a known issue that will be fixed in a later update.|It is safe to proceed through the onboarding experience over the devkit Wi-Fi access point.|
-|The Azure Percept DK access point (scz-xxxx) network appears in the network list but fails to connect.|This could be due to a temporary corruption of the devkit Wi-Fi access point.|Reboot the devkit and try again.|
-|Unable to connect to a Wi-Fi network during the setup experience.|The Wi-Fi network must currently have internet connectivity so we can communicate with Azure. EAP[PEAP/MSCHAP], captive portals, and Enterprise EAP-TLS connectivity is currently not supported.|Ensure the type Wi-Fi network you are connecting is supported and has internet connectivity.|
+|When connecting to the Azure account sign-up pages or to the Azure portal, you may automatically sign in with a cached account. If this is not the account you intended to use, it may result in an experience that is inconsistent with the documentation.|This is usually because of a setting in the browser to "remember" an account you have previously used.|From the Azure page, click on your account name in the upper right corner and select **sign out**. You will then be able to sign in with the correct account.|
+|The Azure Percept DK Wi-Fi access point (scz-xxxx or apd-xxxx) does not appear in the list of available Wi-Fi networks.|This is usually a temporary issue that resolves within 15 minutes.|Wait for the network to appear. If it does not appear after more than 15 minutes, reboot the device.|
+|The connection to the Azure Percept DK Wi-Fi access point frequently disconnects.|This can be due to a poor connection between the device and the host computer. It can also be caused by interference from other Wi-Fi connections on the host computer.|Make sure that the antennas are properly attached to the dev kit. If the dev kit is far away from the host computer, try moving it closer. Turn off any other internet connections such as LTE/5G if they are running on the host computer.|
+|The host computer shows a security warning about the connection to the Azure Percept DK access point.|This is a known issue that will be fixed in a later update.|It is safe to proceed through the setup experience.|
+|The Azure Percept DK Wi-Fi access point (scz-xxxx or apd-xxxx) appears in the network list but fails to connect.|This could be due to a temporary corruption of the dev kit's Wi-Fi access point.|Reboot the dev kit and try again.|
+|Unable to connect to a Wi-Fi network during the setup experience.|The Wi-Fi network must currently have internet connectivity to communicate with Azure. EAP[PEAP/MSCHAP], captive portals, and enterprise EAP-TLS connectivity is currently not supported.|Ensure your Wi-Fi network type is supported and has internet connectivity.|
azure-percept Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/known-issues.md
Title: Azure Percept known issues description: Learn more about Azure Percept known issues and their workarounds--++ Previously updated : 03/03/2021 Last updated : 03/25/2021 # Known issues
If you encounter any of these issues, it is not necessary to open a bug. If you
|Area|Description of Issue|Workaround| |-|||
-| On-boarding experience | CanΓÇÖt complete on-boarding experience unless deviceΓÇÖs Wi-Fi is configured (Azure login fails). | 1. SSH to the device access point (10.1.1.1) <br> 2. Identify and copy the device ethernet IP address <br> 3. Connect to on-boarding experience using the copied ethernet IP-based URL |
-| On-boarding experience | Clicking on links in the EULA during on-boarding experience sometimes does not open a new web page. | Copy the link and open it in a separate browser window. |
-| On-boarding experience | Cannot work through on-boarding experience when connected to a mobile Wi-Fi hotspot. | Connect your device directly to the SoftAP, a Wi-Fi network, or to a network over ethernet. |
-| Wi-Fi/SoftAP | The SoftAP can sometimes disconnect or disappear. | We are investigating. Rebooting the device will typically bring it back. |
-| Wi-Fi | The hardware button that toggles the Wi-Fi SoftAP on and off sometimes does not work. | Continue to try pressing the button or reboot the device. |
-| Wi-Fi | Users may see a message after connecting to Wi-Fi saying "This Wi-Fi network uses an older security standard." | The devkit's hotspot/SoftAP uses the WEP encryption algorithm. |
-| Wi-Fi | Unable to connect to SoftAP from Windows 10 PC with the following error message: <br> "Can't connect to this network" | Reboot both the devkit and the computer. |
-| Device update | Containers do not run after an OTA update. | SSH into the device and restart the IoT Edge container with this command `systemctl restart iotedge`. This will restart all containers. |
-| Device update | Users may get a message that the update failed, even if it succeeded. | Confirm the device updated by navigating to the Device Twin for the device in IoT Hub. This is fixed after the first update. |
-| Device update | Users may lose their Wi-Fi connection settings after their first update. | Run through on-boarding experience after updating to set up the Wi-Fi connection. This is fixed after the first update. |
-| Device update | After performing an OTA update, users can no longer log on via SSH using previously created user accounts, and new SSH users cannot be created through the on-boarding experience. This issue affects systems performing OTA updates from the following pre-installed image versions: 2020.110.114.105 and 2020.109.101.105. | To recover your user profiles, perform these steps after the OTA update: <br> [SSH into your devkit](./how-to-ssh-into-percept-dk.md) using ΓÇ£rootΓÇ¥ as the username. If you disabled the SSH ΓÇ£rootΓÇ¥ user login via on-boarding experience, you must re-enable it. Run this command after successfully connecting: <br> ```mkdir -p /var/custom-configs/home; chmod 755 /var/custom-configs/home``` <br> To recover previous user home data, run the following command: <br> ```mkdir -p /tmp/prev-rootfs && mount /dev/mmcblk0p3 /tmp/prev-rootfs && [ ! -L /tmp/prev-rootfs/home ] && cp -a /tmp/prev-rootfs/home/* /var/custom-configs/home/. && echo "User home migrated!"; umount /tmp/prev-rootfs``` |
+| Onboarding experience | Can't complete the onboarding experience unless device's Wi-Fi is configured (Azure login fails). | 1. [SSH into your Azure Percept DK](./how-to-ssh-into-percept-dk.md). <br> 2. Identify and copy the device's ethernet IP address. <br> 3. Connect to the onboarding experience using the ethernet IP-based URL. |
+| Onboarding experience | Clicking on links in the EULA (license agreement) sometimes does not open a new webpage. | Copy the link and open it in a separate browser window. |
+| Onboarding experience | Cannot work through the onboarding experience when connected to a mobile Wi-Fi hotspot. | Connect your device directly to the SoftAP, a Wi-Fi network, or to a network over ethernet. |
+| Wi-Fi | The SoftAP can sometimes disconnect or disappear. | We are investigating. Rebooting the device will typically bring it back. |
+| Wi-Fi | The hardware button that toggles the Wi-Fi SoftAP on and off sometimes does not work. | Continue pressing the button or reboot the device. |
+| Wi-Fi | Users may see a message after connecting to Wi-Fi: <br> "This Wi-Fi network uses an older security standard." | The devkit's SoftAP uses the WEP encryption algorithm. |
+| Wi-Fi | Unable to connect to the SoftAP from Windows 10 PC with the following error message: <br> "Can't connect to this network" | Reboot both the devkit and the computer. |
+| Device update | Containers do not run after an OTA update. | SSH into the device and restart the IoT Edge container with this command: `systemctl restart iotedge`. This will restart all containers. |
+| Device update | Users may get a message that the update failed, even if it succeeded. | Confirm the update by navigating to the devkit's Device Twin in IoT Hub and checking the value of `swVersion`. This is fixed after the first update. |
+| Device update | Users may lose their Wi-Fi connection settings after their first update. | Run through the onboarding experience after updating to set up the Wi-Fi connection. This is fixed after the first update. |
+| Device update | After performing an OTA update, users can no longer log on via SSH using previously created user accounts, and new SSH users cannot be created through the onboarding experience. This issue affects systems performing OTA updates from the following pre-installed image versions: 2020.110.114.105 and 2020.109.101.105. | To recover your user profiles, perform these steps after the OTA update: <br> [SSH into your devkit](./how-to-ssh-into-percept-dk.md) using "root" as the username. If you disabled the SSH "root" user login via the onboarding experience, you must re-enable it. Run this command after successfully connecting: <br> ```mkdir -p /var/custom-configs/home; chmod 755 /var/custom-configs/home``` <br> To recover previous user home data, run the following command: <br> ```mkdir -p /tmp/prev-rootfs && mount /dev/mmcblk0p3 /tmp/prev-rootfs && [ ! -L /tmp/prev-rootfs/home ] && cp -a /tmp/prev-rootfs/home/* /var/custom-configs/home/. && echo "User home migrated!"; umount /tmp/prev-rootfs``` |
| Device update | After taking an OTA update, update groups are lost. | Update the device's tag by following [these instructions](./how-to-update-over-the-air.md#create-a-device-update-group). | | Dev Tools Pack Installer | Optional Caffe install may fail if Docker is not running properly on system. | Make sure Docker is installed and running, then retry Caffe installation. | | Dev Tools Pack Installer | Optional CUDA install fails on incompatible systems. | Verify system compatibility with CUDA prior to running installer. |
-| Docker, Network, IoT Edge | If your internal network uses 172.x.x.x, docker containers will fail to connect to edge. | Add a special bip section to the /etc/docker/daemon.json file like this: `{ "bip": "192.168.168.1/24"}` |
+| Docker, Network, IoT Edge | If your internal network uses 172.x.x.x, Docker containers will fail to connect to IoT Edge. | Add a special bip section to the /etc/docker/daemon.json file like this: `{ "bip": "192.168.168.1/24" }` (see the sketch after this table). |
|Azure Percept Studio | "View stream" links within Azure Percept Studio do not open a new window showing the device's web stream. | 1. Open the [Azure portal](https://portal.azure.com) and select **IoT Hub**. <br> 2. Click on the IoT Hub to which your device is connected. <br> 3. Select **IoT Edge** under **Automatic Device Management** on your IoT Hub page. <br> 4. Select your device from the list. <br> 5. Select **Set modules** at the top of your device page. <br> 6. Click the trashcan icon next to **HostIpModule** to delete the module. <br> 7. To confirm the action, click **Review + create** and then **Create**. <br> 8. Open [Azure Percept Studio](https://go.microsoft.com/fwlink/?linkid=2135819) and click **Devices** on the left menu panel. <br> 9. Select your device from the list. <br> 10. On the **Vision** tab, click **View your device stream**. Your device will download a new version of HostIpModule and open a browser tab with your device's web stream. |
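For the Docker bridge IP workaround in the table above, here is a minimal, hedged sketch of applying the `bip` setting over SSH. It assumes `/etc/docker/daemon.json` contains no other settings you need to preserve and that the Docker service on the devkit is named `docker`; adjust both to your setup.

```bash
# Point Docker's bridge network away from the 172.x.x.x range.
# WARNING: this overwrites daemon.json; merge the "bip" key by hand if the file already has other settings.
sudo sh -c 'cat > /etc/docker/daemon.json <<EOF
{ "bip": "192.168.168.1/24" }
EOF'

# Restart Docker and IoT Edge so containers pick up the new bridge address.
sudo systemctl restart docker
sudo systemctl restart iotedge
```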
azure-percept Quickstart Percept Audio Setup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/quickstart-percept-audio-setup.md
Title: Get started with Azure Percept Audio
+ Title: Set up Azure Percept Audio
description: Learn how to connect your Azure Percept Audio device to your Azure Percept DK--++ Previously updated : 02/18/2021- Last updated : 03/25/2021 # Azure Percept Audio setup
Azure Percept Audio works out of the box with Azure Percept DK. No unique setup
- Azure Percept Audio - [Azure subscription](https://azure.microsoft.com/free/) - [Azure Percept DK setup experience](./quickstart-percept-dk-set-up.md): you connected your devkit to a Wi-Fi network, created an IoT Hub, and connected your devkit to the IoT Hub-- Speaker or headphones that can connect to 3.5mm audio jack (optional)
+- Speaker or headphones that can connect to a 3.5mm audio jack (optional)
## Connecting your devices
-1. Connect the Azure Percept Audio device to the Azure Percept DK carrier board with the included Micro USB to USB Type-A cable. Connect the Micro USB end of the cable to the Interposer (developer) board and the Type-A end to the Percept DK carrier board.
-1. (Optional) connect your speaker or headphones to your Azure Percept Audio via the audio jack, which is labeled "Line Out." This will allow you to hear your voice assistant's audio responses. If you do not connect a speaker or headphones, you will still be able to see the responses as text in the demo window.
+1. Connect the Azure Percept Audio device to the Azure Percept DK carrier board with the included Micro USB to USB Type-A cable. Connect the Micro USB end of the cable to the Audio interposer (developer) board and the Type-A end to the Percept DK carrier board.
-1. Power on the devkit. LED L02 on the Interposer board will change to blinking white to indicate that the device was powered on and that the Audio SoM is authenticating.
+1. (Optional) connect your speaker or headphones to your Azure Percept Audio device via the audio jack, which is labeled "Line Out." This will allow you to hear audio responses.
+
+1. Power on the devkit. LED L02 on the Audio interposer board will change to blinking white to indicate that the device was powered on and that the Audio SoM is authenticating.
1. Wait for the authentication process to complete--this can take up to 3 minutes. 1. You are ready to begin prototyping when you see one of the following:
- - LED L02 will change to solid white. This indicates that authentication is complete, and the devkit has not been configured with a keyword yet.
- - All three LEDs turn blue. This indicates that authentication is complete, and the devkit is configured with a keyword.
+ - LED L02 will change to solid white: this indicates that authentication is complete, and the devkit has not been configured with a keyword yet.
+ - All three LEDs turn blue: this indicates that authentication is complete, and the devkit is configured with a keyword.
## Next steps
azure-percept Troubleshoot Audio Accessory Speech Module https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/troubleshoot-audio-accessory-speech-module.md
Previously updated : 02/18/2021 Last updated : 03/25/2021
Use the guidelines below to troubleshoot voice assistant application issues.
## Collecting speech module logs
-To run these commands, [connect to the Azure Percept DK Wi-Fi access point and connect to the dev kit over SSH](./how-to-ssh-into-percept-dk.md) and enter the commands in the SSH terminal.
+To run these commands, [SSH into the dev kit](./how-to-ssh-into-percept-dk.md) and enter the commands into the SSH client prompt.
+
+Collect speech module logs:
```console sudo iotedge logs azureearspeechclientmodule ```
-To redirect any output to a .txt file for further analysis, use the following syntax:
+To redirect output to a .txt file for further analysis, use the following syntax:
```console sudo [command] > [file name].txt ```
+Change the permissions of the .txt file so it can be copied:
+
+```console
+sudo chmod 666 [file name].txt
+```
+ After redirecting output to a .txt file, copy the file to your host PC via SCP: ```console scp [remote username]@[IP address]:[remote file path]/[file name].txt [local host file path] ```
-[local host file path] refers to the location on your host PC which you would like to copy the .txt file to. [remote username] is the SSH username chosen during the [on-boarding experience](./quickstart-percept-dk-set-up.md). If you did not set up an SSH login during the Azure Percept DK on-boarding experience, your remote username is root.
+[local host file path] refers to the location on your host PC which you would like to copy the .txt file to. [remote username] is the SSH username chosen during the [setup experience](./quickstart-percept-dk-set-up.md).
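As a concrete illustration of the commands above, the following sketch uses placeholder values only (a user named `myuser`, a devkit at `192.168.0.10`, and a log file named `speech-log.txt`); substitute your own user name, IP address, and paths.

```console
# On the devkit: capture the speech module logs and make the file readable for copying.
sudo iotedge logs azureearspeechclientmodule > speech-log.txt
sudo chmod 666 speech-log.txt

# On the host PC: copy the log file into the current directory.
scp myuser@192.168.0.10:/home/myuser/speech-log.txt .
```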
## Checking runtime status of the speech module
-Check if the runtime status of **azureearspeechclientmodule** shows as **running**. To locate the runtime status of your device modules, open the [Azure portal](https://portal.azure.com/) and navigate to **All resources** -> **\<your IoT hub>** -> **IoT Edge** -> **\<your device ID>**. Click the **Modules** tab to see the runtime status of all installed modules.
+Check if the runtime status of **azureearspeechclientmodule** shows as **running**. To locate the runtime status of your device modules, open the [Azure portal](https://portal.azure.com/) and navigate to **All resources** -> **[your IoT hub]** -> **IoT Edge** -> **[your device ID]**. Click the **Modules** tab to see the runtime status of all installed modules.
:::image type="content" source="./media/troubleshoot-audio-accessory-speech-module/over-the-air-iot-edge-device-page.png" alt-text="Edge device page in the Azure portal.":::
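If you prefer the command line to the portal steps above, one alternative is to read the reported module status from the `$edgeAgent` module twin. This is only a sketch: it assumes the Azure CLI with the `azure-iot` extension installed, and uses placeholder hub and device names.

```azurecli
az extension add --name azure-iot

# Query the runtime status that IoT Edge reports for the speech module.
az iot hub module-twin show --hub-name myHub --device-id myPerceptDK --module-id '$edgeAgent' \
    --query "properties.reported.modules.azureearspeechclientmodule.runtimeStatus"
```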
If the runtime status of **azureearspeechclientmodule** is not listed as **runni
## Understanding Ear SoM LED indicators
-You can use LED indicators to understand which state you device is in. Usually it takes around 2 minutes for the module to fully initialize after *power on*. As it goes through initialization steps you will see:
+You can use LED indicators to understand which state your device is in. Usually it takes around 2 minutes for the module to fully initialize after the device powers on. As it goes through initialization steps, you will see:
-1. 1 center white LED - the device is powered on.
-2. 1 center white LED blinking - authentication is in progress.
+1. Center white LED on (static): the device is powered on.
+2. Center white LED on (blinking): authentication is in progress.
3. All three LEDs will change to blue once the device is authenticated and ready to use. |LED|LED State|Ear SoM Status|
azure-percept Troubleshoot Dev Kit https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/troubleshoot-dev-kit.md
Previously updated : 02/18/2021 Last updated : 03/25/2021
-# Azure Percept DK (dev kit) troubleshooting
+# Azure Percept DK troubleshooting
See the guidance below for general troubleshooting tips for the Azure Percept DK. ## General troubleshooting commands
-To run these commands,
-1. Connect to the [dev kit's Wi-Fi AP](./quickstart-percept-dk-set-up.md)
-1. [SSH into the dev kit](./how-to-ssh-into-percept-dk.md)
-1. Enter the commands in the SSH terminal
+To run these commands, [SSH into the dev kit](./how-to-ssh-into-percept-dk.md) and enter the commands into the SSH client prompt.
To redirect any output to a .txt file for further analysis, use the following syntax:
After redirecting output to a .txt file, copy the file to your host PC via SCP:
scp [remote username]@[IP address]:[remote file path]/[file name].txt [local host file path] ```
-```[local host file path]``` refers to the location on your host PC that you would like to copy the .txt file to. ```[remote username]``` is the SSH username chosen during the [setup experience](./quickstart-percept-dk-set-up.md). If you did not set up an SSH login during the OOBE, your remote username is ```root```.
+```[local host file path]``` refers to the location on your host PC that you would like to copy the .txt file to. ```[remote username]``` is the SSH username chosen during the [setup experience](./quickstart-percept-dk-set-up.md).
For additional information on the Azure IoT Edge commands, see the [Azure IoT Edge device troubleshooting documentation](../iot-edge/troubleshoot.md).
For additional information on the Azure IoT Edge commands, see the [Azure IoT Ed
|Azure IoT Edge |```sudo iotedge logs [container name]``` |check container logs, such as speech and vision modules | |Azure IoT Edge |```sudo iotedge support-bundle --since 1h``` |collect module logs, Azure IoT Edge security manager logs, container engine logs, ```iotedge check``` JSON output, and other useful debug information from the past hour | |Azure IoT Edge |```sudo journalctl -u iotedge -f``` |view the logs of the Azure IoT Edge security manager |
-|Azure IoT Edge |```sudo systemctl restart iotedge``` |restart the Azure IoT Edge Security Daemon |
+|Azure IoT Edge |```sudo systemctl restart iotedge``` |restart the Azure IoT Edge security daemon |
|Azure IoT Edge |```sudo iotedge list``` |list the deployed Azure IoT Edge modules | |Other |```df [option] [file]``` |display information on available/total space in specified file system(s) | |Other |`ip route get 1.1.1.1` |display device IP and interface information |
sudo journalctl -u hostapd.service -u wpa_supplicant.service -u ztpd.service -u
|```sudo docker image prune``` |[removes all dangling images](https://docs.docker.com/engine/reference/commandline/image_prune/) | |```sudo watch docker ps``` <br> ```watch ifconfig [interface]``` |check docker container download status |
-## USB Updating
+## USB updates
|Error: |Solution: | ||--|
-|LIBUSB_ERROR_XXX during USB flash via UUU |This error is the result of a USB connection failure during UUU updating. If the USB cable is not properly connected to the USB ports on the PC or the PE-10X, an error of this form will occur. Try unplugging and replugging both ends of the USB cable and jiggling the cable to ensure a secure connection. This almost always solves the issue. |
+|LIBUSB_ERROR_XXX during USB flash via UUU |This error is the result of a USB connection failure during UUU updating. If the USB cable is not properly connected to the USB ports on the PC or the Percept DK carrier board, an error of this form will occur. Try unplugging and reconnecting both ends of the USB cable and jiggling the cable to ensure a secure connection. This almost always solves the issue. |
## Azure Percept DK carrier board LED states
azure-resource-manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/overview.md
Title: Azure Resource Manager overview description: Describes how to use Azure Resource Manager for deployment, management, and access control of resources on Azure. Previously updated : 09/01/2020- Last updated : 03/25/2021+ # What is Azure Resource Manager?
There are some important factors to consider when defining your resource group:
* When you delete a resource group, all resources in the resource group are also deleted. For information about how Azure Resource Manager orchestrates those deletions, see [Azure Resource Manager resource group and resource deletion](delete-resource-group.md).
-* You can deploy up to 800 instances of a resource type in each resource group. Some resource types are [exempt from the 800 instance limit](resources-without-resource-group-limit.md).
+* You can deploy up to 800 instances of a resource type in each resource group. Some resource types are [exempt from the 800 instance limit](resources-without-resource-group-limit.md). For more information, see [resource group limits](azure-subscription-service-limits.md#resource-group-limits).
* Some resources can exist outside of a resource group. These resources are deployed to the [subscription](../templates/deploy-to-subscription.md), [management group](../templates/deploy-to-management-group.md), or [tenant](../templates/deploy-to-tenant.md). Only specific resource types are supported at these scopes.
This resiliency applies to services that receive requests through Resource Manag
## Next steps
+* To learn about limits that are applied across Azure services, see [Azure subscription and service limits, quotas, and constraints](azure-subscription-service-limits.md).
+ * To learn about moving resources, see [Move resources to new resource group or subscription](move-resource-group-and-subscription.md). * To learn about tagging resources, see [Use tags to organize your Azure resources](tag-resources.md).
azure-resource-manager Bicep Install https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/bicep-install.md
Title: Setup Bicep development and deployment environments
+ Title: Set up Bicep development and deployment environments
description: How to configure Bicep development and deployment environments Previously updated : 03/17/2021 Last updated : 03/26/2021
-# Setup Bicep development and deployment environment
+# Install Bicep (Preview)
-Learn how to setup Bicep development and deployment environments.
+Learn how to set up Bicep development and deployment environments.
## Development environment To get the best Bicep authoring experience, you need two components: - **Bicep extension for Visual Studio Code**. To create Bicep files, you need a good Bicep editor. We recommend [Visual Studio Code](https://code.visualstudio.com/) with the [Bicep extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-bicep). These tools provide language support and resource autocompletion. They help create and validate Bicep files. For more information about using Visual Studio Code and the Bicep extension, see [Quickstart: Create Bicep files with Visual Studio Code](./quickstart-create-bicep-use-visual-studio-code.md).-- **Bicep CLI**. Use Bicep CLI to compile Bicep files to ARM JSON templates, and decompile ARM JSON templates to Bicep files. For more information, see [Install Bicep CLI](#install-bicep-cli).
+- **Bicep CLI**. Use Bicep CLI to compile Bicep files to ARM JSON templates, and decompile ARM JSON templates to Bicep files. For the installation instructions, see [Install Bicep CLI](#install-manually).
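For reference, here is a minimal sketch of the two Bicep CLI operations mentioned above, using placeholder file names (`main.bicep`, `azuredeploy.json`):

```bash
# Compile a Bicep file to an ARM JSON template (produces main.json next to the source file).
bicep build main.bicep

# Decompile an existing ARM JSON template to a Bicep file (produces azuredeploy.bicep).
bicep decompile azuredeploy.json
```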
## Deployment environment
-You can deploy Bicep files by using Azure CLI or Azure PowerShell. For Azure CLI, you need version 2.20.0 or later; for Azure PowerShell, you need version 5.6.0 or later. For the installation instructions, see:
+To deploy local Bicep files, you need two components:
-- [Install Azure PowerShell](/powershell/azure/install-az-ps)-- [Install Azure CLI on Windows](/cli/azure/install-azure-cli-windows)-- [Install Azure CLI on Linux](/cli/azure/install-azure-cli-linux)-- [Install Azure CLI on macOS](/cli/azure/install-azure-cli-macos)
+- **Azure CLI version 2.20.0 or later, or Azure PowerShell version 5.6.0 or later**. For the installation instructions, see:
-> [!NOTE]
-> Currently, both Azure CLI and Azure PowerShell can only deploy local Bicep files. For more information about deploying Bicep files by using Azure CLI, see [Deploy - CLI](./deploy-cli.md#deploy-remote-template). For more information about deploying Bicep files by using Azure PowerShell, see [Deploy - PowerShell]( ./deploy-powershell.md#deploy-remote-template).
+ - [Install Azure PowerShell](/powershell/azure/install-az-ps)
+ - [Install Azure CLI on Windows](/cli/azure/install-azure-cli-windows)
+ - [Install Azure CLI on Linux](/cli/azure/install-azure-cli-linux)
+ - [Install Azure CLI on macOS](/cli/azure/install-azure-cli-macos)
-After the supported version of Azure PowerShell or Azure CLI is installed, you can deploy a Bicep file with:
+ > [!NOTE]
+ > Currently, both Azure CLI and Azure PowerShell can only deploy local Bicep files. For more information about deploying Bicep files by using Azure CLI, see [Deploy - CLI](./deploy-cli.md#deploy-remote-template). For more information about deploying Bicep files by using Azure PowerShell, see [Deploy - PowerShell]( ./deploy-powershell.md#deploy-remote-template).
+
+- **Bicep CLI**. Bicep CLI is needed to compile Bicep files to JSON templates before deployment. For the installation instructions, see [Install Bicep CLI](#install-bicep-cli).
+
+After the components are installed, you can deploy a Bicep file with:
# [PowerShell](#tab/azure-powershell)
az deployment group create \
## Install Bicep CLI
-You can install Bicep CLI by using Azure CLI, by using Azure PowerShell or manually.
+- To use Bicep CLI to compile and decompile Bicep files, see [Install manually](#install-manually).
+- To use Azure CLI to deploy Bicep files, see [Use with Azure CLI](#use-with-azure-cli).
+- To use Azure PowerShell to deploy Bicep files, see [Use with Azure PowerShell](#use-with-azure-powershell).
+
+### Use with Azure CLI
-### Use Azure CLI
+With Azure CLI version 2.20.0 or later installed, the Bicep CLI is automatically installed when a command that depends on it is executed. For example:
-With Az CLI version 2.20.0 or later installed, the Bicep CLI is automatically installed when a command that depends on it is executed. For example, `az deployment ... -f *.bicep` or `az bicep ...`.
+```azurecli
+az deployment group create --template-file azuredeploy.bicep --resource-group myResourceGroup
+```
+
+or
+
+```azurecli
+az bicep ...
+```
You can also manually install the CLI using the built-in commands:
az bicep upgrade
To install a specific version: ```bash
-az bicep install --version v0.2.212
+az bicep install --version v0.3.126
```
-> [!NOTE]
-> Az CLI installs a separate version of the Bicep CLI that is not in conflict with any other Bicep installs you may have, and Az CLI does not add Bicep to your PATH.
+> [!IMPORTANT]
+> Azure CLI installs a separate version of the Bicep CLI that is not in conflict with any other Bicep installs you may have, and Azure CLI does not add the Bicep CLI to your PATH. To use Bicep CLI to compile/decompile Bicep files, or to use Azure PowerShell to deploy Bicep files, see [Install manually](#install-manually) or [Use with Azure PowerShell](#use-with-azure-powershell).
+
+To list all available versions of Bicep CLI:
+
+```bash
+az bicep list-versions
+```
To show the installed versions:
To show the installed versions:
az bicep version ```
-To list all available versions of Bicep CLI:
+### Use with Azure PowerShell
-```bash
-az bicep list-versions
+Azure PowerShell does not have the capability to install the Bicep CLI yet. Azure PowerShell (v5.6.0 or later) expects that the Bicep CLI is already installed and available on the PATH. Follow one of the [manual install methods](#install-manually).
+
+To deploy Bicep files, Bicep CLI version 0.3.1 or later is required. To check the Bicep CLI version:
+
+```cmd
+bicep --version
```
-### Use Azure PowerShell
+> [!IMPORTANT]
+> Azure CLI installs its own self-contained version of the Bicep CLI, which is not added to your PATH. Azure PowerShell deployment therefore fails even if you have the required version installed through Azure CLI.
+
+Once the Bicep CLI is installed, it is called automatically whenever a deployment cmdlet requires it. For example:
-Azure PowerShell does not have the capability to install the Bicep CLI yet. Azure PowerShell (v5.6.0 or later) expects that the Bicep CLI is already installed and available on the PATH. Follow one of the [manual install methods](#install-manually). Once the Bicep CLI is installed, Bicep CLI is called whenever it is required for a deployment cmdlet. For example, `New-AzResourceGroupDeployment ... -TemplateFile main.bicep`.
+```azurepowershell
+New-AzResourceGroupDeployment -ResourceGroupName myResourceGroup -TemplateFile azuredeploy.bicep
+```
### Install manually
azure-resource-manager Bicep Modules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/bicep-modules.md
Title: Bicep modules description: Describes how to define and consume a module, and how to use module scopes. Previously updated : 03/17/2021 Last updated : 03/25/2021
-# Use Bicep modules
+# Use Bicep modules (Preview)
Bicep enables you to break down a complex solution into modules. A Bicep module is a set of one or more resources to be deployed together. Modules abstract away complex details of the raw resource declaration, which can increase readability. You can reuse these modules, and share them with other people. Combined with [template specs](./template-specs.md), it creates a way for modularity and code reuse. For a tutorial, see [Tutorial: Add Bicep modules](./bicep-tutorial-add-modules.md).
azure-resource-manager Quickstart Create Bicep Use Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/quickstart-create-bicep-use-visual-studio-code.md
Title: Create Bicep files - Visual Studio Code description: Use Visual Studio Code and the Bicep extension to Bicep files for deploy Azure resources Previously updated : 03/02/2021 Last updated : 03/26/2021
The Bicep extension for Visual Studio Code provides language support and resource autocompletion. These tools help create and validate [Bicep](./bicep-overview.md) files. In this quickstart, you use the extension to create a Bicep file from scratch. While doing so, you experience the extension's capabilities, such as validation and completions. + To complete this quickstart, you need [Visual Studio Code](https://code.visualstudio.com/), with the [Bicep extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-bicep) installed. You also need either the latest [Azure CLI](/cli/azure/) or the latest [Azure PowerShell module](/powershell/azure/new-azureps-module-az) installed and authenticated. If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
azure-resource-manager Quickstart Create Templates Use The Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/quickstart-create-templates-use-the-portal.md
Title: Deploy template - Azure portal description: Learn how to create your first Azure Resource Manager template (ARM template) using the Azure portal, and how to deploy it. Previously updated : 03/09/2021 Last updated : 03/26/2021 ++ #Customer intent: As a developer new to Azure deployment, I want to learn how to use the Azure portal to create and edit Resource Manager templates, so I can use the templates to deploy Azure resources.
Learn how to generate an Azure Resource Manager template (ARM template) using the Azure portal, and the process of editing and deploying the template from the portal. ARM templates are JSON files that define the resources you need to deploy for your solution. To understand the concepts associated with deploying and managing your Azure solutions, see [template deployment overview](overview.md).
-![Resource Manager template quickstart portal diagram](./media/quickstart-create-templates-use-the-portal/azure-resource-manager-export-deploy-template-portal.png)
- After completing the tutorial, you deploy an Azure Storage account. The same process can be used to deploy other Azure resources.
+![Resource Manager template quickstart portal diagram](./media/quickstart-create-templates-use-the-portal/azure-resource-manager-export-deploy-template-portal.png)
+ If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin. ## Generate a template using the portal
azure-resource-manager Template Specs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/template-specs.md
Title: Create & deploy template specs description: Describes how to create template specs and share them with other users in your organization. Previously updated : 03/02/2021 Last updated : 03/26/2021
To deploy the template spec, you use standard Azure tools like PowerShell, Azure
## Why use template specs?
-If you currently have your templates in a GitHub repo or storage account, you run into several challenges when trying to share and use the templates. For a user to deploy it, the template must either be local or the URL for the template must be publicly accessible. To get around this limitation, you might share copies of the template with users who need to deploy it, or open access to the repo or storage account. When users own local copies of a template, these copies can eventually diverge from the original template. When you make a repo or storage account publicly accessible, you may allow unintended users to access the template.
+Template specs provide the following benefits:
-The benefit of using template specs is that you can create canonical templates and share them with teams in your organization. The template specs are secure because they're available to Azure Resource Manager for deployment, but not accessible to users without Azure RBAC permission. Users only need read access to the template spec to deploy its template, so you can share the template without allowing others to modify it.
+* You use standard ARM templates for your template spec.
+* You manage access through Azure RBAC, rather than SAS tokens.
+* Users can deploy the template spec without having write access to the template.
+* You can integrate the template spec into existing deployment processes, such as PowerShell scripts or DevOps pipelines.
+
+Template specs enable you to create canonical templates and share them with teams in your organization. The template specs are secure because they're available to Azure Resource Manager for deployment, but not accessible to users without the correct permission. Users only need read access to the template spec to deploy its template, so you can share the template without allowing others to modify it.
+
+If you currently have your templates in a GitHub repo or storage account, you run into several challenges when trying to share and use the templates. To deploy the template, you need to either make the template publicly accessible or manage access with SAS tokens. To get around this limitation, users might create local copies, which eventually diverge from your original template. Template specs simplify sharing templates.
The templates you include in a template spec should be verified by administrators in your organization to follow the organization's requirements and guidance.
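As a hedged illustration of that workflow, the sketch below publishes a local template as a template spec and then deploys it by resource ID. All names (`storageSpec`, `templateSpecsRg`, `appRg`, `azuredeploy.json`) are placeholders, and Azure CLI 2.14.2 or later is assumed.

```azurecli
# Publish a local ARM template as a versioned template spec.
az ts create \
  --name storageSpec \
  --version "1.0" \
  --resource-group templateSpecsRg \
  --location westus2 \
  --template-file ./azuredeploy.json

# A user with read access to the template spec can deploy it without holding a copy of the template.
id=$(az ts show --name storageSpec --resource-group templateSpecsRg --version "1.0" --query "id" --output tsv)
az deployment group create --resource-group appRg --template-spec $id
```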
azure-resource-manager Template Syntax https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/template-syntax.md
Title: Template structure and syntax description: Describes the structure and properties of Azure Resource Manager templates (ARM templates) using declarative JSON syntax. Previously updated : 03/03/2021 Last updated : 03/26/2021 # Understand the structure and syntax of ARM templates
You define resources with the following structure:
"capacity": <sku-capacity> }, "kind": "<type-of-resource>",
+ "scope": "<target-scope-for-extension-resources>",
"copy": { "name": "<name-of-copy-loop>", "count": <number-of-iterations>,
You define resources with the following structure:
| tags |No |Tags that are associated with the resource. Apply tags to logically organize resources across your subscription. | | sku | No | Some resources allow values that define the SKU to deploy. For example, you can specify the type of redundancy for a storage account. | | kind | No | Some resources allow a value that defines the type of resource you deploy. For example, you can specify the type of Cosmos DB to create. |
+| scope | No | The scope property is only available for [extension resource types](../management/extension-resource-types.md). Use it when specifying a scope that is different than the deployment scope. See [Setting scope for extension resources in ARM templates](scope-extension-resources.md). |
| copy |No |If more than one instance is needed, the number of resources to create. The default mode is parallel. Specify serial mode when you don't want all of the resources to deploy at the same time. For more information, see [Create several instances of resources in Azure Resource Manager](copy-resources.md). | | plan | No | Some resources allow values that define the plan to deploy. For example, you can specify the marketplace image for a virtual machine. | | properties |No |Resource-specific configuration settings. The values for the properties are the same as the values you provide in the request body for the REST API operation (PUT method) to create the resource. You can also specify a copy array to create several instances of a property. To determine available values, see [template reference](/azure/templates/). |
azure-resource-manager Template Tutorial Deploy Vm Extensions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/template-tutorial-deploy-vm-extensions.md
Title: Deploy VM extensions with template description: Learn how to deploy virtual machine extensions with Azure Resource Manager templates (ARM templates). Previously updated : 04/23/2020 Last updated : 03/26/2021
Add a virtual machine extension resource to the existing template with the follo
```json { "type": "Microsoft.Compute/virtualMachines/extensions",
- "apiVersion": "2019-12-01",
+ "apiVersion": "2020-12-01",
"name": "[concat(variables('vmName'),'/', 'InstallWebServer')]", "location": "[parameters('location')]", "dependsOn": [
azure-sql Connectivity Settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/connectivity-settings.md
Last updated 07/06/2020
# Azure SQL connectivity settings [!INCLUDE[appliesto-sqldb-asa](../includes/appliesto-sqldb-asa.md)]
-This article introduces settings that control connectivity to the server for Azure SQL Database and Azure Synapse Analytics. These settings apply to all SQL Database and Azure Synapse Analytics databases associated with the server.
+This article introduces settings that control connectivity to the server for Azure SQL Database and [dedicated SQL pool (formerly SQL DW)](../../synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-what-is.md) in Azure Synapse Analytics. These settings apply to all SQL Database and dedicated SQL pool (formerly SQL DW) databases associated with the server.
> [!IMPORTANT] > This article doesn't apply to Azure SQL Managed Instance.
azure-sql High Availability Sla https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/high-availability-sla.md
Whenever the database engine or the operating system is upgraded, or a failure i
## General Purpose service tier zone redundant availability (Preview)
-Zone redundant configuration for the general purpose service tier utilizes [Azure Availability Zones](../../availability-zones/az-overview.md)  to replicate databases across multiple physical locations within an Azure region. By selecting zone redundancy, you can make your new and existing general purpose single databases and elastic pools resilient to a much larger set of failures, including catastrophic datacenter outages, without any changes of the application logic.
+Zone redundant configuration for the general purpose service tier is offered for both serverless and provisioned compute. This configuration utilizes [Azure Availability Zones](../../availability-zones/az-overview.md) to replicate databases across multiple physical locations within an Azure region. By selecting zone redundancy, you can make your new and existing serverless and provisioned general purpose single databases and elastic pools resilient to a much larger set of failures, including catastrophic datacenter outages, without any changes to the application logic.
Zone redundant configuration for the general purpose tier has two layers: -- A stateful data layer with the database files (.mdf/.ldf) that are stored in ZRS PFS (zone-redundant [storage premium file share](../../storage/files/storage-how-to-create-file-share.md). Using [zone-redundant storage](../../storage/common/storage-redundancy.md) the data and log files are synchronously copied across three physically-isolated Azure availability zones.-- A stateless compute layer that runs the sqlservr.exe process and contains only transient and cached data, such as TempDB, model databases on the attached SSD, and plan cache, buffer pool, and columnstore pool in memory. This stateless node is operated by Azure Service Fabric that initializes sqlservr.exe, controls health of the node, and performs failover to another node if necessary. For zone redundant general purpose databases, nodes with spare capacity are readily available in other Availability Zones for failover.
+- A stateful data layer with the database files (.mdf/.ldf) that are stored in ZRS (zone-redundant storage). Using [ZRS](../../storage/common/storage-redundancy.md), the data and log files are synchronously copied across three physically isolated Azure availability zones.
+- A stateless compute layer that runs the sqlservr.exe process and contains only transient and cached data, such as TempDB, model databases on the attached SSD, and plan cache, buffer pool, and columnstore pool in memory. This stateless node is operated by Azure Service Fabric that initializes sqlservr.exe, controls health of the node, and performs failover to another node if necessary. For zone redundant serverless and provisioned general purpose databases, nodes with spare capacity are readily available in other Availability Zones for failover.
The zone redundant version of the high availability architecture for the general purpose service tier is illustrated by the following diagram: ![Zone redundant configuration for general purpose](./media/high-availability-sla/zone-redundant-for-general-purpose.png) > [!IMPORTANT]
-> Zone redundant configuration is only available when the Gen5 compute hardware is selected. This feature is not available in SQL Managed Instance. Zone redundant configuration for general purpose tier is only available in the following regions: East US, East US 2, West US 2, North Europe, West Europe, Southeast Asia, Australia East, Japan East, UK South, and France Central.
+> Zone redundant configuration is only available when the Gen5 compute hardware is selected. This feature is not available in SQL Managed Instance. Zone redundant configuration for serverless and provisioned general purpose tier is only available in the following regions: East US, East US 2, West US 2, North Europe, West Europe, Southeast Asia, Australia East, Japan East, UK South, and France Central.
> [!NOTE]
-> General Purpose databases with a size of 80 vcore may experience performance degradation with zone redundant configuration. Additionally, operations such as backup, restore, database copy, and setting up Geo-DR relationships may experience slower performance for any single databases larger than 1 TB.
+> General Purpose databases with a size of 80 vcore may experience performance degradation with zone redundant configuration. Additionally, operations such as backup, restore, database copy, setting up Geo-DR relationships, and downgrading a zone redundant database from Business Critical to General Purpose may experience slower performance for any single databases larger than 1 TB. Please see our [latency documentation on scaling a database](single-database-scale.md) for more information.
> > [!NOTE] > The preview is not covered under Reserved Instance
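As one possible way to opt in, the following Azure CLI sketch creates a Gen5 General Purpose database with zone redundancy enabled. The resource group, server, and database names are placeholders, and the regional and hardware restrictions called out above still apply.

```azurecli
az sql db create \
  --resource-group myResourceGroup \
  --server mysqlserver \
  --name mydb \
  --edition GeneralPurpose \
  --family Gen5 \
  --capacity 2 \
  --zone-redundant true
```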
azure-sql Logins Create Manage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/logins-create-manage.md
After creating a user account in a database, either based on a login or as a con
- To add a user to a fixed database role: - In Azure SQL Database, use the [ALTER ROLE](/sql/t-sql/statements/alter-role-transact-sql) statement. For examples, see [ALTER ROLE examples](/sql/t-sql/statements/alter-role-transact-sql#examples)
- - Azure Synapse, use the [sp_addrolemember](/sql/relational-databases/system-stored-procedures/sp-addrolemember-transact-sql) statement. For examples, see [sp_addrolemember examples](/sql/t-sql/statements/alter-role-transact-sql).
+ - In Azure Synapse, use the [sp_addrolemember](/sql/relational-databases/system-stored-procedures/sp-addrolemember-transact-sql) procedure. For examples, see [sp_addrolemember examples](/sql/relational-databases/system-stored-procedures/sp-addrolemember-transact-sql#examples).
- **Custom database role**
You should familiarize yourself with the following features that can be used to
## Next steps
-For an overview of all Azure SQL Database and SQL Managed Instance security features, see [Security overview](security-overview.md).
+For an overview of all Azure SQL Database and SQL Managed Instance security features, see [Security overview](security-overview.md).
azure-sql Resource Limits Dtu Elastic Pools https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/resource-limits-dtu-elastic-pools.md
For the same number of DTUs, resources provided to an elastic pool may exceed th
If all DTUs of an elastic pool are used, then each database in the pool receives an equal amount of resources to process queries. The SQL Database service provides resource sharing fairness between databases by ensuring equal slices of compute time. Elastic pool resource sharing fairness is in addition to any amount of resource otherwise guaranteed to each database when the DTU min per database is set to a non-zero value. > [!NOTE]
-> For `tempdb` limits, see [tempdb limits](/sql/relational-databases/databases/tempdb-database?view=sql-server-2017#tempdb-database-in-sql-database).
+> For `tempdb` limits, see [tempdb limits](/sql/relational-databases/databases/tempdb-database#tempdb-database-in-sql-database).
### Database properties for pooled databases
azure-sql Resource Limits Dtu Single Databases https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/resource-limits-dtu-single-databases.md
The following tables show the resources available for a single database at each
> [!IMPORTANT] > More than 1 TB of storage in the Premium tier is currently available in all regions except: China East, China North, Germany Central, and Germany Northeast. In these regions, the storage max in the Premium tier is limited to 1 TB. For more information, see [P11-P15 current limitations](single-database-scale.md#p11-and-p15-constraints-when-max-size-greater-than-1-tb). > [!NOTE]
-> For `tempdb` limits, see [tempdb limits](/sql/relational-databases/databases/tempdb-database?view=sql-server-2017#tempdb-database-in-sql-database).
+> For `tempdb` limits, see [tempdb limits](/sql/relational-databases/databases/tempdb-database#tempdb-database-in-sql-database).
## Next steps
azure-sql Single Database Scale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/single-database-scale.md
The estimated latency to change the service tier, scale the compute size of a si
> Additionally, for Standard (S2-S12) and General Purpose databases, latency for moving a database in/out of an elastic pool or between elastic pools will be proportional to database size if the database is using Premium File Share ([PFS](../../storage/files/storage-files-introduction.md)) storage. > > To determine if a database is using PFS storage, execute the following query in the context of the database. If the value in the AccountType column is `PremiumFileStorage` or `PremiumFileStorage-ZRS`, the database is using PFS storage.
-
+
+> [!NOTE]
+> The zone redundant property will remain the same by default when scaling from the Business Critical to the General Purpose tier. Latency for this downgrade when zone redundancy is enabled, as well as latency for switching to zone redundancy for the General Purpose tier, will be proportional to database size.
+ ```sql SELECT s.file_id, s.type_desc,
azure-sql Frequently Asked Questions Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/frequently-asked-questions-faq.md
You can provision an instance from [Azure portal](instance-create-quickstart.md)
Yes, you can provision a Managed Instance in an existing subscription if that subscription belongs to the [Supported subscription types](resource-limits.md#supported-subscription-types).
-**Why couldnΓÇÖt I provision a Managed Instance in the subnet which name starts with a digit?**
-**Why couldn't I provision a Managed Instance in the subnet which name starts with a digit?**
This is a current limitation on underlying component that verifies subnet name against the regex ^[a-zA-Z_][^\\\/\:\*\?\"\<\>\|\`\'\^]*(?<![\.\s])$. All names that pass the regex and are valid subnet names are currently supported.
Managed instance offers the same performance levels per compute and storage size
One option is to [export a database to BACPAC](../database/database-export.md) and then [import the BACPAC file](../database/database-import.md). This is the recommended approach if your database is smaller than 100 GB.
-[Transactional replication](replication-two-instances-and-sql-server-configure-tutorial.md?view=sql-server-2017&preserve-view=true) can be used if all tables in the database have *primary* keys and there are no In-memory OLTP objects in the database.
+[Transactional replication](replication-two-instances-and-sql-server-configure-tutorial.md) can be used if all tables in the database have *primary* keys and there are no In-memory OLTP objects in the database.
Native COPY_ONLY backups taken from managed instance cannot be restored to SQL Server because managed instance has a higher database version compared to SQL Server. For more details, see [Copy-only backup](/sql/relational-databases/backup-restore/copy-only-backups-sql-server?preserve-view=true&view=sql-server-ver15).
See [Key causes of performance differences between SQL managed instance and SQL
You can optimize the performance of your managed instance by: - [Automatic tuning](../database/automatic-tuning-overview.md) that provides peak performance and stable workloads through continuous performance tuning based on AI and machine learning.-- [In-memory OLTP](../in-memory-oltp-overview.md) that improves throughput and latency on transactional processing workloads and delivers faster business insights.
+- [In-memory OLTP](../in-memory-oltp-overview.md) that improves throughput and latency on transactional processing workloads and delivers faster business insights.
To tune the performance even further, consider applying some of the *best practices* for [Application and database tuning](../database/performance-guidance.md#tune-your-database). If your workload consists of lots of small transactions, consider [switching the connection type from proxy to redirect mode](connection-types-overview.md#changing-connection-type) for lower latency and higher throughput.
Yes. After a Managed Instance is provisioned you can set NSG that controls inbou
**Can I set the NVA or on-premises firewall to filter the outbound management traffic based on FQDNs?** No. This is not supported for several reasons:-- Routing traffic that represent response to inbound management request would be asymmetric and could not work.-- Routing traffic that goes to storage would be affected by throughput constraints and latency so this way we wonΓÇÖt be able to provide expected service quality and availability.-- Based on experience, these configurations are error prone and not supportable.
+- Routing traffic that represents a response to an inbound management request would be asymmetric and could not work.
+- Routing traffic that goes to storage would be affected by throughput constraints and latency so this way we won't be able to provide expected service quality and availability.
+- Based on experience, these configurations are error prone and not supportable.
**Can I set the NVA or firewall for the outbound non-management traffic?**
Yes, customers can create logins that are members of the sysadmin role. Custome
Yes, Transparent Data Encryption is supported for SQL Managed Instance. For details, see [Transparent Data Encryption for SQL Managed Instance](../database/transparent-data-encryption-tde-overview.md?tabs=azure-portal).
-**Can I leverage the ΓÇ£bring your own keyΓÇ¥ model for TDE?**
-**Can I leverage the "bring your own key" model for TDE?**
Yes, Azure Key Vault for BYOK scenario is available for Azure SQL Managed Instance. For details, see [Transparent Data Encryption with customer-managed key](../database/transparent-data-encryption-tde-overview.md?tabs=azure-portal#customer-managed-transparent-data-encryptionbring-your-own-key).
You can rotate TDE protector for Managed Instance using Azure Cloud Shell. For i
Yes, you don't need to decrypt your database to restore it to SQL Managed Instance. You do need to provide a certificate/key used as the encryption key protector on the source system to SQL Managed Instance to be able to read data from the encrypted backup file. There are two possible ways to do it: - *Upload certificate-protector to SQL Managed Instance*. It can be done using PowerShell only. The [sample script](./tde-certificate-migrate.md) describes the whole process.- *Upload asymmetric key-protector to Azure Key Vault and point SQL Managed Instance to it*. This approach resembles bring-your-own-key (BYOK) TDE use case that also uses Key Vault integration to store the encryption key. If you don't want to use the key as an encryption key protector, and just want to make the key available for SQL Managed Instance to restore encrypted database(s), follow instructions for [setting up BYOK TDE](../database/transparent-data-encryption-tde-overview.md#manage-transparent-data-encryption), and don't check the checkbox **Make the selected key the default TDE protector**.
+- *Upload asymmetric key-protector to Azure Key Vault and point SQL Managed Instance to it*. This approach resembles bring-your-own-key (BYOK) TDE use case that also uses Key Vault integration to store the encryption key. If you don't want to use the key as an encryption key protector, and just want to make the key available for SQL Managed Instance to restore encrypted database(s), follow instructions for [setting up BYOK TDE](../database/transparent-data-encryption-tde-overview.md#manage-transparent-data-encryption), and don't check the checkbox **Make the selected key the default TDE protector**.
Once you make the encryption protector available to SQL Managed Instance, you can proceed with the standard database restore procedure.
SQL Managed Instance offers [vCore-based purchasing model](sql-managed-instance-
**What cost benefits are available for SQL Managed Instance?** You can save costs with the Azure SQL benefits in the following ways:-- Maximize existing investments in on-premises licenses and save up to 55 percent with [Azure Hybrid Benefit](../azure-hybrid-benefit.md?tabs=azure-powershell). -- Commit to a reservation for compute resources and save up to 33 percent with [Reserved Instance Benefit](../database/reserved-capacity-overview.md). Combine this with Azure Hybrid benefit for savings up to 82 percent. -- Save up to 55 percent versus list prices with [Azure Dev/Test Pricing Benefit](https://azure.microsoft.com/pricing/dev-test/) that offers discounted rates for your ongoing development and testing workloads.
+- Maximize existing investments in on-premises licenses and save up to 55 percent with [Azure Hybrid Benefit](../azure-hybrid-benefit.md?tabs=azure-powershell).
+- Commit to a reservation for compute resources and save up to 33 percent with [Reserved Instance Benefit](../database/reserved-capacity-overview.md). Combine this with Azure Hybrid benefit for savings up to 82 percent.
+- Save up to 55 percent versus list prices with [Azure Dev/Test Pricing Benefit](https://azure.microsoft.com/pricing/dev-test/) that offers discounted rates for your ongoing development and testing workloads.
**Who is eligible for Reserved Instance benefit?**
azure-sql Migrate To Instance From Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/migrate-to-instance-from-sql-server.md
Performance baseline is a set of parameters such as average/max CPU usage, avera
Some of the parameters that you would need to measure on your SQL Server instance are: - [Monitor CPU usage on your SQL Server instance](https://techcommunity.microsoft.com/t5/Azure-SQL-Database/Monitor-CPU-usage-on-SQL-Server/ba-p/680777#M131) and record the average and peak CPU usage.-- [Monitor memory usage on your SQL Server instance](/sql/relational-databases/performance-monitor/monitor-memory-usage) and determine the amount of memory used by different components such as buffer pool, plan cache, column-store pool, [In-Memory OLTP](/sql/relational-databases/in-memory-oltp/monitor-and-troubleshoot-memory-usage?view=sql-server-2017), etc. In addition, you should find average and peak values of the Page Life Expectancy memory performance counter.
+- [Monitor memory usage on your SQL Server instance](/sql/relational-databases/performance-monitor/monitor-memory-usage) and determine the amount of memory used by different components such as buffer pool, plan cache, column-store pool, [In-Memory OLTP](/sql/relational-databases/in-memory-oltp/monitor-and-troubleshoot-memory-usage), etc. In addition, you should find average and peak values of the Page Life Expectancy memory performance counter.
- Monitor disk IO usage on the source SQL Server instance using [sys.dm_io_virtual_file_stats](/sql/relational-databases/system-dynamic-management-views/sys-dm-io-virtual-file-stats-transact-sql) view or [performance counters](/sql/relational-databases/performance-monitor/monitor-disk-usage). - Monitor workload and query performance or your SQL Server instance by examining Dynamic Management Views or Query Store if you are migrating from a SQL Server 2016+ version. Identify average duration and CPU usage of the most important queries in your workload to compare them with the queries that are running on the managed instance.
As an outcome of this activity, you should have documented average and peak valu
## Deploy to an optimally sized managed instance
-SQL Managed Instance is tailored for on-premises workloads that are planning to move to the cloud. It introduces a [new purchasing model](../database/service-tiers-vcore.md) that provides greater flexibility in selecting the right level of resources for your workloads. In the on-premises world, you are probably accustomed to sizing these workloads by using physical cores and IO bandwidth. The purchasing model for managed instance is based upon virtual cores, or "vCores," with additional storage and IO available separately. The vCore model is a simpler way to understand your compute requirements in the cloud versus what you use on-premises today. This new model enables you to right-size your destination environment in the cloud. Some general guidelines that might help you to choose the right service tier and characteristics are described here:
+SQL Managed Instance is tailored for on-premises workloads that are planning to move to the cloud. It introduces a [new purchasing model](../database/service-tiers-vcore.md) that provides greater flexibility in selecting the right level of resources for your workloads. In the on-premises world, you are probably accustomed to sizing these workloads by using physical cores and IO bandwidth. The purchasing model for managed instance is based upon virtual cores, or "vCores," with additional storage and IO available separately. The vCore model is a simpler way to understand your compute requirements in the cloud versus what you use on-premises today. This new model enables you to right-size your destination environment in the cloud. Some general guidelines that might help you to choose the right service tier and characteristics are described here:
- Based on the baseline CPU usage, you can provision a managed instance that matches the number of cores that you are using on SQL Server, having in mind that CPU characteristics might need to be scaled to match [VM characteristics where the managed instance is installed](resource-limits.md#hardware-generation-characteristics). - Based on the baseline memory usage, choose [the service tier that has matching memory](resource-limits.md#hardware-generation-characteristics). The amount of memory cannot be directly chosen, so you would need to select the managed instance with the amount of vCores that has matching memory (for example, 5.1 GB/vCore in Gen5).
Once you have prepared the environment that is comparable as much as possible wi
As a result, you should compare performance parameters with the baseline and identify critical differences. > [!NOTE]
-> In many cases, you would not be able to get exactly matching performance on the managed instance and SQL Server. Azure SQL Managed Instance is a SQL Server database engine, but infrastructure and high-availability configuration on a managed instance may introduce some differences. You might expect that some queries would be faster while some others might be slower. The goal of comparison is to verify that workload performance in the managed instance matches the performance on SQL Server (on average), and identify any critical queries with the performance that don't match your original performance.
+> In many cases, you would not be able to get exactly matching performance on the managed instance and SQL Server. Azure SQL Managed Instance is a SQL Server database engine, but infrastructure and high-availability configuration on a managed instance may introduce some differences. You might expect that some queries would be faster while some others might be slower. The goal of comparison is to verify that workload performance in the managed instance matches the performance on SQL Server (on average), and to identify any critical queries whose performance doesn't match your original performance.
The outcome of the performance comparison might be:
Once you are on a fully managed platform and you have verified that workload per
Even if you don't make some changes in managed instance during the migration, there are high chances that you would turn on some of the new features while you are operating your instance to take advantage of the latest database engine improvements. Some changes are only enabled once the [database compatibility level has been changed](/sql/relational-databases/databases/view-or-change-the-compatibility-level-of-a-database).
-For instance, you don't have to create backups on managed instance - the service performs backups for you automatically. You no longer must worry about scheduling, taking, and managing backups. SQL Managed Instance provides you the ability to restore to any point in time within this retention period using [Point in Time Recovery (PITR)](../database/recovery-using-backups.md#point-in-time-restore). Additionally, you do not need to worry about setting up high availability, as [high availability](../database/high-availability-sla.md) is built in.
+For instance, you don't have to create backups on managed instance - the service performs backups for you automatically. You no longer must worry about scheduling, taking, and managing backups. SQL Managed Instance provides you the ability to restore to any point in time within this retention period using [Point in Time Recovery (PITR)](../database/recovery-using-backups.md#point-in-time-restore). Additionally, you do not need to worry about setting up high availability, as [high availability](../database/high-availability-sla.md) is built in.
To strengthen security, consider using [Azure Active Directory Authentication](../database/security-overview.md), [auditing](auditing-configure.md), [threat detection](../database/azure-defender-for-sql.md), [row-level security](/sql/relational-databases/security/row-level-security), and [dynamic data masking](/sql/relational-databases/security/dynamic-data-masking).
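As one small illustration of the security features listed above, dynamic data masking can be added to existing columns with `ALTER TABLE`; the table and column names here are hypothetical, and this is a sketch rather than a complete security configuration.

```sql
-- Mask an email column so non-privileged users see obfuscated values.
-- dbo.Customers and its columns are hypothetical names used for illustration.
ALTER TABLE dbo.Customers
    ALTER COLUMN Email ADD MASKED WITH (FUNCTION = 'email()');

-- Expose only the last four characters of a phone number.
ALTER TABLE dbo.Customers
    ALTER COLUMN Phone ADD MASKED WITH (FUNCTION = 'partial(0,"XXX-XXX-",4)');
```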
azure-sql Restore Sample Database Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/restore-sample-database-quickstart.md
This quickstart:
- [Configure a point-to-site connection to SQL Managed Instance from on-premises](point-to-site-p2s-configure.md).

> [!NOTE]
-> For more information on backing up and restoring a SQL Server database using Azure Blob storage and a [Shared Access Signature (SAS) key](../../storage/common/storage-sas-overview.md), see [SQL Server Backup to URL](/sql/relational-databases/backup-restore/sql-server-backup-to-url?view=sql-server-2017).
+> For more information on backing up and restoring a SQL Server database using Azure Blob storage and a [Shared Access Signature (SAS) key](../../storage/common/storage-sas-overview.md), see [SQL Server Backup to URL](/sql/relational-databases/backup-restore/sql-server-backup-to-url).
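In outline, backing up to a URL with a SAS token involves creating a credential whose name is the container URL and then pointing `BACKUP DATABASE` at that container. The storage account, container, SAS token, and database name below are all placeholders.

```sql
-- Create a credential for the container; the credential name must match the container URL.
-- The SAS token is specified without the leading '?' character.
CREATE CREDENTIAL [https://<storageaccount>.blob.core.windows.net/<container>]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     SECRET = '<SAS-token-without-leading-question-mark>';

-- Back up the database directly to blob storage.
BACKUP DATABASE [MyDatabase]
TO URL = 'https://<storageaccount>.blob.core.windows.net/<container>/MyDatabase.bak'
WITH COMPRESSION, CHECKSUM;
```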
## Restore from a backup file
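The detailed quickstart steps are not reproduced here, but at its core a native restore to SQL Managed Instance is a `RESTORE ... FROM URL` statement that reuses a SAS credential like the one sketched above; the URL and database name are placeholders.

```sql
-- Restore the backup from blob storage into the managed instance.
-- Assumes a credential for the container URL (with a SAS token) already exists.
RESTORE DATABASE [MyDatabase]
FROM URL = 'https://<storageaccount>.blob.core.windows.net/<container>/MyDatabase.bak';
```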
azure-sql Access To Sql Database Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/database/access-to-sql-database-guide.md
Title: "Access to Azure SQL Database: Migration guide"
-description: This guide teaches you to migrate your Microsoft Access databases to Azure SQL Database using SQL Server Migration Assistant for Access (SSMA for Access).
+description: In this guide, you learn how to migrate your Microsoft Access databases to an Azure SQL database by using SQL Server Migration Assistant for Access (SSMA for Access).
Last updated 03/19/2021
# Migration guide: Access to Azure SQL Database
-This migration guide teaches you to migrate your Microsoft Access databases to Azure SQL Database using the SQL Server Migration Assistant for Access.
+In this guide, you learn how to migrate your Microsoft Access database to an Azure SQL database by using SQL Server Migration Assistant for Access (SSMA for Access).
-For other migration guides, see [Database Migration](https://docs.microsoft.com/data-migration).
+For other migration guides, see [Azure Database Migration Guide](https://docs.microsoft.com/data-migration).
## Prerequisites
-To migrate your Access database to Azure SQL Database, you need:
--- To verify your source environment is supported. -- [SQL Server Migration Assistant for Access](https://www.microsoft.com/download/details.aspx?id=54255). -- Connectivity and sufficient permissions to access both source and target.
+Before you begin migrating your Access database to a SQL database, do the following:
+- Verify that your source environment is supported.
+- Download and install [SQL Server Migration Assistant for Access](https://www.microsoft.com/download/details.aspx?id=54255).
+- Ensure that you have connectivity and sufficient permissions to access both source and target.
## Pre-migration
-After you have met the prerequisites, you are ready to discover the topology of your environment and assess the feasibility of your migration.
+After you've met the prerequisites, you're ready to discover the topology of your environment and assess the feasibility of your migration.
### Assess
-Use SQL Server Migration Assistant (SSMA) for Access to review database objects and data, and assess databases for migration.
+Use SSMA for Access to review database objects and data, and assess databases for migration.
-To create an assessment, follow these steps:
+To create an assessment, do the following:
-1. Open [SQL Server Migration Assistant for Access](https://www.microsoft.com/download/details.aspx?id=54255).
-1. Select **File** and then choose **New Project**.
-1. Provide a project name, a location to save your project, and then select Azure SQL Database as the migration target from the drop-down. Select **OK**:
+1. Open [SSMA for Access](https://www.microsoft.com/download/details.aspx?id=54255).
+1. Select **File**, and then select **New Project**.
+1. Provide a project name and a location for your project and then, in the drop-down list, select **Azure SQL Database** as the migration target.
+1. Select **OK**.
- ![Choose New Project](./media/access-to-sql-database-guide/new-project.png)
+ ![Screenshot of the "New Project" pane for entering your migration project name and location.](./media/access-to-sql-database-guide/new-project.png)
-1. Select **Add Databases** and choose databases to be added to your new project:
+1. Select **Add Databases**, and then select the databases to be added to your new project.
- ![Choose Add databases](./media/access-to-sql-database-guide/add-databases.png)
+ ![Screenshot of the "Add Databases" tab in SSMA for Access.](./media/access-to-sql-database-guide/add-databases.png)
-1. In **Access Metadata Explorer**, right-click the database and then choose **Create Report**. Alternatively, you can choose **Create report** from the navigation bar after selecting the schema:
+1. On the **Access Metadata Explorer** pane, right-click a database, and then select **Create Report**. Alternatively, you can select the **Create Report** tab at the upper right.
- ![Right-click the database and choose Create Report](./media/access-to-sql-database-guide/create-report.png)
+ ![Screenshot of the "Create Report" command in Access Metadata Explorer.](./media/access-to-sql-database-guide/create-report.png)
-1. Review the HTML report to understand conversion statistics and any errors or warnings. You can also open the report in Excel to get an inventory of Access objects and the effort required to perform schema conversions. The default location for the report is in the report folder within SSMAProjects
+1. Review the HTML report to understand the conversion statistics and any errors or warnings. You can also open the report in Excel to get an inventory of Access objects and understand the effort required to perform schema conversions. The default location for the report is in the report folder within SSMAProjects. For example:
- For example: `drive:\<username>\Documents\SSMAProjects\MyAccessMigration\report\report_<date>`
+ `drive:\<username>\Documents\SSMAProjects\MyAccessMigration\report\report_<date>`
- ![Review the sample report assessment](./media/access-to-sql-database-guide/sample-assessment.png)
+ ![Screenshot of an example database report assessment in SSMA.](./media/access-to-sql-database-guide/sample-assessment.png)
-### Validate data types
+### Validate the data types
-Validate the default data type mappings and change them based on requirements if necessary. To do so, follow these steps:
+Validate the default data type mappings, and change them based on your requirements, if necessary. To do so:
-1. Select **Tools** from the menu.
-1. Select **Project Settings**.
-1. Select the **Type mappings** tab:
+1. In SSMA for Access, select **Tools**, and then select **Project Settings**.
+1. Select the **Type Mapping** tab.
- ![Type Mappings](./media/access-to-sql-database-guide/type-mappings.png)
+ ![Screenshot of the "Type Mapping" pane in SSMA for Access.](./media/access-to-sql-database-guide/type-mappings.png)
-1. You can change the type mapping for each table by selecting the table in the **Access Metadata Explorer**.
+1. You can change the type mapping for each table by selecting the table name on the **Access Metadata Explorer** pane.
-### Convert schema
+### Convert the schema
-To convert database objects, follow these steps:
+To convert database objects, do the following:
-1. Select **Connect to Azure SQL Database**.
- 1. Enter connection details to connect your database in Azure SQL Database.
- 1. Choose your target SQL Database from the drop-down, or provide a new name, in which case a database will be created on the target server.
- 1. Provide authentication details.
- 1. Select **Connect**:
+1. Select the **Connect to Azure SQL Database** tab, and then do the following:
- ![Connect to Azure SQL Database](./media/access-to-sql-database-guide/connect-to-sqldb.png)
+ a. Enter the details for connecting to your SQL database.
+ b. In the drop-down list, select your target SQL database. Or you can enter a new name, in which case a database will be created on the target server.
+ c. Provide authentication details.
+ d. Select **Connect**.
-1. Right-click the database in **Access Metadata Explorer** and choose **Convert schema**. Alternatively, you can choose **Convert Schema** from the top navigation bar after selecting your database:
+ ![Screenshot of the "Connect to Azure SQL Database" pane for entering connection details.](./media/access-to-sql-database-guide/connect-to-sqldb.png)
- ![Right-click the database and choose convert schema](./media/access-to-sql-database-guide/convert-schema.png)
-
+1. On the **Access Metadata Explorer** pane, right-click the database, and then select **Convert Schema**. Alternatively, you can select your database and then select the **Convert Schema** tab.
-1. After the conversion completes, compare and review the converted objects to the original objects to identify potential problems and address them based on the recommendations:
+ ![Screenshot of the "Convert Schema" command on the "Access Metadata Explorer" pane.](./media/access-to-sql-database-guide/convert-schema.png)
- ![Converted objects can be compared with source](./media/access-to-sql-database-guide/table-comparison.png)
+1. After the conversion is completed, compare the converted objects to the original objects to identify potential problems, and address the problems based on the recommendations.
- Compare the converted Transact-SQL text to the original code and review the recommendations:
+ ![Screenshot showing a comparison of the converted objects to the source objects.](./media/access-to-sql-database-guide/table-comparison.png)
- ![Converted queries can be compared with source code](./media/access-to-sql-database-guide/query-comparison.png)
+ Compare the converted Transact-SQL text to the original code, and review the recommendations.
-1. (Optional) To convert an individual object, right-click the object and choose **Convert schema**. Converted objects appear bold in the **Access Metadata Explorer**:
+ ![Screenshot showing a comparison of converted queries to the source code.](./media/access-to-sql-database-guide/query-comparison.png)
- ![Bold objects in metadata explorer have been converted](./media/access-to-sql-database-guide/converted-items.png)
-
-1. Select **Review results** in the Output pane, and review errors in the **Error list** pane.
-1. Save the project locally for an offline schema remediation exercise. Select **Save Project** from the **File** menu. This gives you an opportunity to evaluate the source and target schemas offline and perform remediation before you can publish the schema to SQL Database.
+1. (Optional) To convert an individual object, right-click the object, and then select **Convert Schema**. Converted objects appear in bold text in **Access Metadata Explorer**:
+ ![Screenshot showing that the objects in Access Metadata Explorer are converted.](./media/access-to-sql-database-guide/converted-items.png)
+
+1. On the **Output** pane, select the **Review results** icon, and review the errors on the **Error list** pane.
+1. Save the project locally for an offline schema remediation exercise. To do so, select **File** > **Save Project**. This gives you an opportunity to evaluate the source and target schemas offline and perform remediation before you publish them to your SQL database.
-## Migrate
+## Migrate the databases
-After you have completed assessing your databases and addressing any discrepancies, the next step is to execute the migration process. Migrating data is a bulk-load operation that moves rows of data into Azure SQL Database in transactions. The number of rows to be loaded into Azure SQL Database in each transaction is configured in the project settings.
+After you've assessed your databases and addressed any discrepancies, you can run the migration process. Migrating data is a bulk-load operation that moves rows of data into an Azure SQL database in transactions. The number of rows to be loaded into your SQL database in each transaction is configured in the project settings.
-To publish your schema and migrate the data by using SSMA for Access, follow these steps:
+To publish your schema and migrate the data by using SSMA for Access, do the following:
-1. If you haven't already, select **Connect to Azure SQL Database** and provide connection details.
-1. Publish the schema: Right-click the database from the **Azure SQL Database Metadata Explorer** and choose **Synchronize with Database**. This action publishes the MySQL schema to Azure SQL Database:
+1. If you haven't already done so, select **Connect to Azure SQL Database**, and provide connection details.
- ![Synchronize with Database](./media/access-to-sql-database-guide/synchronize-with-database.png)
+1. Publish the schema. On the **Azure SQL Database Metadata Explorer** pane, right-click the database you're working with, and then select **Synchronize with Database**. This action publishes the converted Access schema to the SQL database.
- Review the mapping between your source project and your target:
+1. On the **Synchronize with the Database** pane, review the mapping between your source project and your target:
- ![Review the synchronization with the database](./media/access-to-sql-database-guide/synchronize-with-database-review.png)
+ ![Screenshot of the "Synchronize with the Database" pane for reviewing the synchronization with the database.](./media/access-to-sql-database-guide/synchronize-with-database-review.png)
-1. Migrate the data: Right-click the database or object you want to migrate in **Access Metadata Explorer**, and choose **Migrate data**. Alternatively, you can select **Migrate Data** from the top-line navigation bar. To migrate data for an entire database, select the check box next to the database name. To migrate data from individual tables, expand the database, expand Tables, and then select the check box next to the table. To omit data from individual tables, clear the check box:
+1. On the **Access Metadata Explorer** pane, select the check boxes next to the items you want to migrate. To migrate the entire database, select the check box next to the database.
- ![Migrate Data](./media/access-to-sql-database-guide/migrate-data.png)
+1. Migrate the data. Right-click the database or object you want to migrate, and then select **Migrate Data**. Alternatively, you can select the **Migrate Data** tab at the upper right.
-1. After migration completes, view the **Data Migration Report**:
+ To migrate data for an entire database, select the check box next to the database name. To migrate data from individual tables, expand the database, expand **Tables**, and then select the check box next to the table. To omit data from individual tables, clear the check box.
- ![Migrate Data Review](./media/access-to-sql-database-guide/migrate-data-review.png)
+ ![Screenshot of the "Migrate Data" command on the "Access Metadata Explorer" pane.](./media/access-to-sql-database-guide/migrate-data.png)
-1. Connect to your Azure SQL Database by using [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms) and validate the migration by reviewing the data and schema:
+1. After migration is completed, view the **Data Migration Report**.
- ![Validate in SSMA](./media/access-to-sql-database-guide/validate-data.png)
+ ![Screenshot of the "Migrate Data Report" pane showing an example report for review.](./media/access-to-sql-database-guide/migrate-data-review.png)
+1. Connect to your Azure SQL database by using [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms), and validate the migration by reviewing the data and schema.
+ ![Screenshot of SQL Server Management Studio Object Explorer for validating your migration in SSMA.](./media/access-to-sql-database-guide/validate-data.png)
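A simple, hedged spot check at this point is to compare table row counts on the target against the record counts in the source Access database. The query below lists approximate row counts per user table from the catalog views.

```sql
-- Approximate row count per user table, taken from the catalog views.
-- Compare these figures against the record counts in the source database.
SELECT s.name AS schema_name,
       t.name AS table_name,
       SUM(p.rows) AS row_count
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
JOIN sys.partitions AS p ON p.object_id = t.object_id AND p.index_id IN (0, 1)
GROUP BY s.name, t.name
ORDER BY s.name, t.name;
```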
## Post-migration
-After you have successfully completed the **Migration** stage, you need to go through a series of post-migration tasks to ensure that everything is functioning as smoothly and efficiently as possible.
+After you've successfully completed the *migration* stage, you need to complete a series of post-migration tasks to ensure that everything is functioning as smoothly and efficiently as possible.
### Remediate applications
After the data is migrated to the target environment, all the applications that previously consumed the source need to start consuming the target. Accomplishing this might require changes to the applications.
### Perform tests
-The test approach for database migration consists of performing the following activities:
+The test approach to database migration consists of the following activities:
- 1. **Develop validation tests**. To test database migration, you need to use SQL queries. You must create the validation queries to run against both the source and the target databases. Your validation queries should cover the scope you have defined.
+1. **Develop validation tests**: To test the database migration, you need to use SQL queries. You must create the validation queries to run against both the source and target databases. Your validation queries should cover the scope you've defined. One such query is sketched after this list.
- 2. **Set up test environment**. The test environment should contain a copy of the source database and the target database. Be sure to isolate the test environment.
+1. **Set up a test environment**: The test environment should contain a copy of the source database and the target database. Be sure to isolate the test environment.
- 3. **Run validation tests**. Run the validation tests against the source and the target, and then analyze the results.
+1. **Run validation tests**: Run validation tests against the source and the target, and then analyze the results.
+
+1. **Run performance tests**: Run performance tests against the source and the target, and then analyze and compare the results.
- 4. **Run performance tests**. Run performance test against the source and the target, and then analyze and compare the results.
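On the SQL Database side, one hedged pattern for such a validation query is a row count plus an aggregate checksum per table; the table name below is a placeholder, and an equivalent count would be run against the source (the checksum functions are SQL Server-specific and skip some data types).

```sql
-- Row count and aggregate checksum for one table on the target.
-- dbo.Customers is a placeholder; BINARY_CHECKSUM(*) ignores some data types (for example, XML),
-- so treat the checksum as a coarse signal rather than a proof of equality.
SELECT COUNT_BIG(*)                      AS row_count,
       CHECKSUM_AGG(BINARY_CHECKSUM(*))  AS table_checksum
FROM dbo.Customers;
```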
### Optimize
-The post-migration phase is crucial for reconciling any data accuracy issues and verifying completeness, as well as addressing performance issues with the workload.
+The post-migration phase is crucial for reconciling any data accuracy issues, verifying completeness, and addressing performance issues with the workload.
-For additional detail about these issues and specific steps to mitigate them, see the [Post-migration Validation and Optimization Guide](/sql/relational-databases/post-migration-validation-and-optimization-guide).
+For more information about these issues and the steps to mitigate them, see the [Post-migration validation and optimization guide](/sql/relational-databases/post-migration-validation-and-optimization-guide).
## Migration assets
-For additional assistance with completing this migration scenario, please see the following resources, which were developed in support of a real-world migration project engagement.
-
-| **Title/link** | **Description** |
-| - | -- |
-| [Data Workload Assessment Model and Tool](https://github.com/Microsoft/DataMigrationTeam/tree/master/Data%20Workload%20Assessment%20Model%20and%20Tool) | This tool provides suggested ΓÇ£best fitΓÇ¥ target platforms, cloud readiness, and application/database remediation level for a given workload. It offers simple, one-click calculation and report generation that greatly helps to accelerate large estate assessments by providing and automated and uniform target platform decision process. |
+For more assistance with completing this migration scenario, see the following resource. It was developed in support of a real-world migration project engagement.
+| Title | Description |
+| --- | --- |
+| [Data workload assessment model and tool](https://github.com/Microsoft/DataMigrationTeam/tree/master/Data%20Workload%20Assessment%20Model%20and%20Tool) | Provides suggested "best fit" target platforms, cloud readiness, and application/database remediation levels for specified workloads. It offers simple, one-click calculation and report generation that helps to accelerate large estate assessments by providing an automated, uniform target-platform decision process. |
-These resources were developed as part of the Data SQL Ninja Program, which is sponsored by the Azure Data Group engineering team. The core charter of the Data SQL Ninja program is to unblock and accelerate complex modernization and compete data platform migration opportunities to Microsoft's Azure Data platform. If you think your organization would be interested in participating in the Data SQL Ninja program, please contact your account team and ask them to submit a nomination.
+The Data SQL Engineering team developed this resource. The team's core charter is to unblock and accelerate complex modernization for data platform migration projects to Microsoft's Azure data platform.
## Next steps

-- For a matrix of the Microsoft and third-party services and tools that are available to assist you with various database and data migration scenarios as well as specialty tasks, see [Service and tools for data migration](../../../dms/dms-tools-matrix.md).
+- For a matrix of Microsoft and third-party services and tools that are available to assist you with various database and data migration scenarios and specialty tasks, see [Service and tools for data migration](../../../dms/dms-tools-matrix.md).
- To learn more about Azure SQL Database see:
  - [An overview of SQL Database](../../database/sql-database-paas-overview.md)
- - [Azure total Cost of Ownership Calculator](https://azure.microsoft.com/pricing/tco/calculator/)
+ - [Azure total cost of ownership calculator](https://azure.microsoft.com/pricing/tco/calculator/)
-- To learn more about the framework and adoption cycle for Cloud migrations, see
+- To learn more about the framework and adoption cycle for cloud migrations, see:
- [Cloud Adoption Framework for Azure](/azure/cloud-adoption-framework/migrate/azure-best-practices/contoso-migration-scale)
- - [Best practices for costing and sizing workloads migrate to Azure](/azure/cloud-adoption-framework/migrate/azure-best-practices/migrate-best-practices-costs)
--- To assess the Application access layer, see [Data Access Migration Toolkit (Preview)](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit)-- For details on how to perform Data Access Layer A/B testing see [Database Experimentation Assistant](/sql/dea/database-experimentation-assistant-overview).
+ - [Best practices for costing and sizing workloads for migration to Azure](/azure/cloud-adoption-framework/migrate/azure-best-practices/migrate-best-practices-costs)
+- To assess the application access layer, see [Data Access Migration Toolkit (preview)](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit).
+- For information about how to perform Data Access Layer A/B testing, see [Overview of Database Experimentation Assistant](/sql/dea/database-experimentation-assistant-overview).
azure-sql Mysql To Sql Database Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/database/mysql-to-sql-database-guide.md
Title: "MySQL to Azure SQL Database: Migration guide"
-description: This guide teaches you to migrate your MySQL databases to Azure SQL Database using SQL Server Migration Assistant for MySQL (SSMA for MySQL).
+description: In this guide, you learn how to migrate your MySQL databases to an Azure SQL database by using SQL Server Migration Assistant for MySQL (SSMA for MySQL).
Last updated 03/19/2021
-# Migration guide: MySQL to Azure SQL Database
+# Migration guide: MySQL to Azure SQL Database
[!INCLUDE[appliesto-sqldb-sqlmi](../../includes/appliesto-sqldb.md)]
-This guide teaches you to migrate your MySQL database to Azure SQL Database using SQL Server Migration Assistant for MySQL (SSMA for MySQL).
+In this guide, you learn how to migrate your MySQL database to an Azure SQL database by using SQL Server Migration Assistant for MySQL (SSMA for MySQL).
-For other migration guides, see [Database Migration](https://docs.microsoft.com/data-migration).
+For other migration guides, see [Azure Database Migration Guide](https://docs.microsoft.com/data-migration).
## Prerequisites
-To migrate your MySQL database to Azure SQL Database, you need:
--- To verify your source environment is supported. Currently, MySQL 5.6 and 5.7 is supported. -- [SQL Server Migration Assistant for MySQL](https://www.microsoft.com/download/details.aspx?id=54257)-- Connectivity and sufficient permissions to access both source and target.
+Before you begin migrating your MySQL database to a SQL database, do the following:
+- Verify that your source environment is supported. Currently, MySQL 5.6 and 5.7 are supported.
+- Download and install [SQL Server Migration Assistant for MySQL](https://www.microsoft.com/download/details.aspx?id=54257).
+- Ensure that you have connectivity and sufficient permissions to access both the source and the target.
## Pre-migration
-After you have met the prerequisites, you are ready to discover the topology of your environment and assess the feasibility of your migration.
+After you've met the prerequisites, you're ready to discover the topology of your environment and assess the feasibility of your migration.
### Assess

Use SQL Server Migration Assistant (SSMA) for MySQL to review database objects and data, and assess databases for migration.
-To create an assessment, perform the following steps:
+To create an assessment, do the following:
-1. Open [SQL Server Migration Assistant for MySQL](https://www.microsoft.com/download/details.aspx?id=54257).
-1. Select **File** from the menu and then choose **New Project**.
-1. Provide the project name, a location to save your project. Choose **Azure SQL Database** as the migration target. Select **OK**:
+1. Open [SSMA for MySQL](https://www.microsoft.com/download/details.aspx?id=54257).
+1. Select **File**, and then select **New Project**.
+1. In the **New Project** pane, enter a name and location for your project and then, in the **Migrate To** drop-down list, select **Azure SQL Database**.
+1. Select **OK**.
- ![New Project](./media/mysql-to-sql-database-guide/new-project.png)
+ ![Screenshot of the "New Project" pane for entering your migration project name, location, and target.](./media/mysql-to-sql-database-guide/new-project.png)
-1. Choose **Connect to MySQL** and provide connection details to connect your MySQL server:
+1. Select the **Connect to MySQL** tab, and then provide details for connecting your MySQL server.
- ![Connect to MySQL](./media/mysql-to-sql-database-guide/connect-to-mysql.png)
+ ![Screenshot of the "Connect to MySQL" pane for specifying connections to the source.](./media/mysql-to-sql-database-guide/connect-to-mysql.png)
-1. Right-click the MySQL schema in **MySQL Metadata Explorer** and choose **Create report**. Alternatively, you can select **Create report** from the top-line navigation bar:
+1. On the **MySQL Metadata Explorer** pane, right-click the MySQL schema, and then select **Create Report**. Alternatively, you can select the **Create Report** tab at the upper right.
- ![Create Report](./media/mysql-to-sql-database-guide/create-report.png)
+ ![Screenshot of the "Create Report" links in SSMA for MySQL.](./media/mysql-to-sql-database-guide/create-report.png)
-1. Review the HTML report to understand conversion statistics and any errors or warnings. You can also open the report in Excel to get an inventory of MySQL objects and the effort required to perform schema conversions. The default location for the report is in the report folder within SSMAProjects.
-
- For example: `drive:\Users\<username>\Documents\SSMAProjects\MySQLMigration\report\report_2016_11_12T02_47_55\`
+1. Review the HTML report to understand the conversion statistics, errors, and warnings. Analyze it to understand the conversion issues and resolutions.
+ You can also open the report in Excel to get an inventory of MySQL objects and understand the effort that's required to perform schema conversions. The default location for the report is in the report folder within SSMAProjects. For example:
+
+ `drive:\Users\<username>\Documents\SSMAProjects\MySQLMigration\report\report_2016_11_12T02_47_55\`
- ![Conversion Report](./media/mysql-to-sql-database-guide/conversion-report.png)
+ ![Screenshot of an example conversion report in SSMA.](./media/mysql-to-sql-database-guide/conversion-report.png)
-### Validate data types
+### Validate the data types
-Validate the default data type mappings and change them based on requirements if necessary. To do so, follow these steps:
+Validate the default data type mappings and change them based on requirements, if necessary. To do so:
-1. Select **Tools** from the menu.
-1. Select **Project Settings**.
-1. Select the **Type mappings** tab:
+1. Select **Tools**, and then select **Project Settings**.
+1. Select the **Type Mappings** tab.
- ![Type Mappings](./media/mysql-to-sql-database-guide/type-mappings.png)
+ ![Screenshot of the "Type Mapping" pane in SSMA for MySQL.](./media/mysql-to-sql-database-guide/type-mappings.png)
-1. You can change the type mapping for each table by selecting the table in the **MySQL Metadata explorer**.
+1. You can change the type mapping for each table by selecting the table name on the **MySQL Metadata Explorer** pane.
-### Convert schema
+### Convert the schema
-To convert the schema, follow these steps:
+To convert the schema, do the following:
-1. (Optional) To convert dynamic or ad-hoc queries, right-click the node and choose **Add statement**.
-1. Select **Connect to Azure SQL Database**.
- 1. Enter connection details to connect your database in Azure SQL Database.
- 1. Choose your target SQL Database from the drop-down, or provide a new name, in which case a database will be created on the target server.
- 1. Provide authentication details.
- 1. Select **Connect**:
+1. (Optional) To convert dynamic or specialized queries, right-click the node, and then select **Add statement**.
- ![Connect to SQL](./media/mysql-to-sql-database-guide/connect-to-sqldb.png)
-
-1. Right-click the schema and choose **Convert schema**. Alternatively, you can choose **Convert schema** from the top line navigation bar after choosing your database:
+1. Select the **Connect to Azure SQL Database** tab, and then do the following:
- ![Convert Schema](./media/mysql-to-sql-database-guide/convert-schema.png)
+ a. Enter the details for connecting to your SQL database.
+ b. In the drop-down list, select your target SQL database. Or you can provide a new name, in which case a database will be created on the target server.
+ c. Provide authentication details.
+ d. Select **Connect**.
-1. After the conversion completes, compare and review the converted objects to the original objects to identify potential problems and address them based on the recommendations:
-
- ![Converted objects can be compared with source](./media/mysql-to-sql-database-guide/table-comparison.png)
+ ![Screenshot of the "Connect to Azure SQL Database" pane in SSMA for MySQL.](./media/mysql-to-sql-database-guide/connect-to-sqldb.png)
+
+1. Right-click the schema you're working with, and then select **Convert Schema**. Alternatively, you can select the **Convert schema** tab at the upper right.
- Compare the converted Transact-SQL text to the original code and review the recommendations:
+ ![Screenshot of the "Convert Schema" command on the "MySQL Metadata Explorer" pane.](./media/mysql-to-sql-database-guide/convert-schema.png)
- ![Converted queries can be compared with source code](./media/mysql-to-sql-database-guide/procedure-comparison.png)
+1. After the conversion is completed, review and compare the converted objects to the original objects to identify potential problems and address them based on the recommendations.
-1. Select **Review results** in the Output pane, and review errors in the **Error list** pane.
-1. Save the project locally for an offline schema remediation exercise. Select **Save Project** from the **File** menu. This gives you an opportunity to evaluate the source and target schemas offline and perform remediation before you can publish the schema to SQL Database.
+ ![Screenshot showing a comparison of the converted objects to the original objects.](./media/mysql-to-sql-database-guide/table-comparison.png)
+ Compare the converted Transact-SQL text to the original code, and review the recommendations.
+ ![Screenshot showing a comparison of converted queries to the source code.](./media/mysql-to-sql-database-guide/procedure-comparison.png)
-## Migrate
+1. On the **Output** pane, select **Review results**, and then review any errors on the **Error list** pane.
+1. Save the project locally for an offline schema remediation exercise. To do so, select **File** > **Save Project**. This gives you an opportunity to evaluate the source and target schemas offline and perform remediation before you publish the schema to your SQL database.
-After you have completed assessing your databases and addressing any discrepancies, the next step is to execute the migration process. Migration involves two steps ΓÇô publishing the schema and migrating the data.
+ Compare the converted procedures to the original procedures, as shown here:
-To publish your schema and migrate the data, follow these steps:
+ ![Screenshot showing a comparison of the converted procedures to the original procedures.](./media/mysql-to-sql-database-guide/procedure-comparison.png)
-1. Publish the schema: Right-click the database from the **Azure SQL Database Metadata Explorer** and choose **Synchronize with Database**. This action publishes the MySQL schema to Azure SQL Database:
- ![Synchronize with Database](./media/mysql-to-sql-database-guide/synchronize-database.png)
+## Migrate the databases
- Review the mapping between your source project and your target:
+After you've assessed your databases and addressed any discrepancies, you can run the migration process. Migration involves two steps: publishing the schema and migrating the data.
- ![Synchronize with Database Review](./media/mysql-to-sql-database-guide/synchronize-database-review.png)
+To publish the schema and migrate the data, do the following:
-1. Migrate the data: Right-click the database or object you want to migrate in **MySQL Metadata Explorer**, and choose **Migrate data**. Alternatively, you can select **Migrate Data** from the top-line navigation bar. To migrate data for an entire database, select the check box next to the database name. To migrate data from individual tables, expand the database, expand Tables, and then select the check box next to the table. To omit data from individual tables, clear the check box:
+1. Publish the schema. On the **Azure SQL Database Metadata Explorer** pane, right-click the database, and then select **Synchronize with Database**. This action publishes the MySQL schema to your SQL database.
- ![Migrate data](./media/mysql-to-sql-database-guide/migrate-data.png)
+ ![Screenshot of the "Synchronize with the Database" pane for reviewing database mapping.](./media/mysql-to-sql-database-guide/synchronize-database-review.png)
-1. After migration completes, view the **Data Migration** report:
+1. Migrate the data. On the **MySQL Metadata Explorer** pane, right-click the MySQL schema you want to migrate, and then select **Migrate Data**. Alternatively, you can select the **Migrate Data** tab at the upper right.
- ![Data Migration Report](./media/mysql-to-sql-database-guide/data-migration-report.png)
+ To migrate data for an entire database, select the check box next to the database name. To migrate data from individual tables, expand the database, expand **Tables**, and then select the check box next to the table. To omit data from individual tables, clear the check box.
-1. Connect to your Azure SQL Database by using [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms) and validate the migration by reviewing the data and schema:
+ ![Screenshot of the "Migrate Data" command on the "MySQL Metadata Explorer" pane.](./media/mysql-to-sql-database-guide/migrate-data.png)
- ![Validate in SSMA](./media/mysql-to-sql-database-guide/validate-in-ssms.png)
+1. After the migration is completed, view the **Data Migration Report**.
+
+ ![Screenshot of the Data Migration Report.](./media/mysql-to-sql-database-guide/data-migration-report.png)
+1. Connect to your SQL database by using [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms) and validate the migration by reviewing the data and schema.
+ ![Screenshot of SQL Server Management Studio.](./media/mysql-to-sql-database-guide/validate-in-ssms.png)
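As a rough schema check, you can also compare the number of converted objects by type against the SSMA conversion report; a minimal sketch:

```sql
-- Count user-created objects by type on the target SQL database
-- and compare the totals with the objects listed in the SSMA conversion report.
SELECT type_desc, COUNT(*) AS object_count
FROM sys.objects
WHERE is_ms_shipped = 0
GROUP BY type_desc
ORDER BY type_desc;
```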
## Post-migration
-After you have successfully completed the **Migration** stage, you need to go through a series of post-migration tasks to ensure that everything is functioning as smoothly and efficiently as possible.
+After you've successfully completed the *migration* stage, you need to complete a series of post-migration tasks to ensure that everything is functioning as smoothly and efficiently as possible.
### Remediate applications
After the data is migrated to the target environment, all the applications that previously consumed the source need to start consuming the target. Accomplishing this might require changes to the applications.
### Perform tests
-The test approach for database migration consists of performing the following activities:
+The test approach to database migration consists of the following activities:
-1. **Develop validation tests**. To test database migration, you need to use SQL queries. You must create the validation queries to run against both the source and the target databases. Your validation queries should cover the scope you have defined.
+1. **Develop validation tests**: To test the database migration, you need to use SQL queries. You must create the validation queries to run against both the source and target databases. Your validation queries should cover the scope you've defined.
-2. **Set up test environment**. The test environment should contain a copy of the source database and the target database. Be sure to isolate the test environment.
+1. **Set up a test environment**: The test environment should contain a copy of the source database and the target database. Be sure to isolate the test environment.
-3. **Run validation tests**. Run the validation tests against the source and the target, and then analyze the results.
+1. **Run validation tests**: Run validation tests against the source and the target, and then analyze the results.
-4. **Run performance tests**. Run performance test against the source and the target, and then analyze and compare the results.
+1. **Run performance tests**: Run performance tests against the source and the target, and then analyze and compare the results.
### Optimize
-The post-migration phase is crucial for reconciling any data accuracy issues and verifying completeness, as well as addressing performance issues with the workload.
+The post-migration phase is crucial for reconciling any data accuracy issues, verifying completeness, and addressing performance issues with the workload.
-For additional detail about these issues and specific steps to mitigate them, see the [Post-migration Validation and Optimization Guide](/sql/relational-databases/post-migration-validation-and-optimization-guide).
+For more information about these issues and the steps to mitigate them, see the [Post-migration validation and optimization guide](/sql/relational-databases/post-migration-validation-and-optimization-guide).
## Migration assets
-For additional assistance with completing this migration scenario, please see the following resources, which were developed in support of a real-world migration project engagement.
+For more assistance with completing this migration scenario, see the following resource. It was developed in support of a real-world migration project engagement.
-| Title/link | Description |
-| - | - |
-| [Data Workload Assessment Model and Tool](https://github.com/Microsoft/DataMigrationTeam/tree/master/Data%20Workload%20Assessment%20Model%20and%20Tool) | This tool provides suggested "best fit" target platforms, cloud readiness, and application/database remediation level for a given workload. It offers simple, one-click calculation and report generation that greatly helps to accelerate large estate assessments by providing and automated and uniform target platform decision process. |
+| Title | Description |
+| --- | --- |
+| [Data workload assessment model and tool](https://github.com/Microsoft/DataMigrationTeam/tree/master/Data%20Workload%20Assessment%20Model%20and%20Tool) | Provides suggested "best fit" target platforms, cloud readiness, and application/database remediation levels for specified workloads. It offers simple, one-click calculation and report generation that helps to accelerate large estate assessments by providing an automated, uniform target-platform decision process. |
-These resources were developed as part of the Data SQL Ninja Program, which is sponsored by the Azure Data Group engineering team. The core charter of the Data SQL Ninja program is to unblock and accelerate complex modernization and compete data platform migration opportunities to Microsoft's Azure Data platform. If you think your organization would be interested in participating in the Data SQL Ninja program, please contact your account team and ask them to submit a nomination.
+The Data SQL Engineering team developed this resource. The team's core charter is to unblock and accelerate complex modernization for data platform migration projects to Microsoft's Azure data platform.
## Next steps

-- Be sure to check out the [Azure Total Cost of Ownership (TCO) Calculator](https://aka.ms/azure-tco) to help estimate the cost savings you can realize by migrating your workloads to Azure.
+- To help estimate the cost savings you can realize by migrating your workloads to Azure, see the [Azure total cost of ownership calculator](https://aka.ms/azure-tco).
-- For a matrix of the Microsoft and third-party services and tools that are available to assist you with various database and data migration scenarios as well as specialty tasks, see the article [Service and tools for data migration](../../../dms/dms-tools-matrix.md).
+- For a matrix of Microsoft and third-party services and tools that are available to assist you with various database and data migration scenarios and specialty tasks, see [Service and tools for data migration](../../../dms/dms-tools-matrix.md).
-- For other migration guides, see [Database Migration](https://datamigration.microsoft.com/).
+- For other migration guides, see [Azure Database Migration Guide](https://datamigration.microsoft.com/).
-For videos, see:
-- [Overview of the migration journey and the tools/services recommended for performing assessment and migration](https://azure.microsoft.com/resources/videos/overview-of-migration-and-recommended-tools-services/)
+- For migration videos, see [Overview of the migration journey and recommended migration and assessment tools and services](https://azure.microsoft.com/resources/videos/overview-of-migration-and-recommended-tools-services/).
azure-sql Sap Ase To Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/database/sap-ase-to-sql-database.md
Title: "SAP ASE to Azure SQL Database: Migration guide"
-description: This guide teaches you to migrate your SAP ASE databases to Azure SQL Database using SQL Server Migration Assistant for SAP Adapter Server Enterprise.
+description: In this guide, you learn how to migrate your SAP ASE databases to an Azure SQL database by using SQL Server Migration Assistant for SAP Adaptive Server Enterprise.
Last updated 03/19/2021
# Migration guide: SAP ASE to Azure SQL Database
[!INCLUDE[appliesto-sqldb-sqlmi](../../includes/appliesto-sqldb.md)]
-This guide teaches you to migrate your SAP ASE databases to Azure SQL Database using SQL Server Migration Assistant for SAP Adapter Server Enterprise.
+In this guide, you learn how to migrate your SAP Adaptive Server Enterprise (ASE) databases to an Azure SQL database by using SQL Server Migration Assistant for SAP Adaptive Server Enterprise.
-For other migration guides, see [Database Migration](https://docs.microsoft.com/data-migration).
+For other migration guides, see [Azure Database Migration Guide](https://docs.microsoft.com/data-migration).
## Prerequisites
-To migrate your SAP SE database to Azure SQL Database, you need:
--- to verify your source environment is supported. -- [SQL Server Migration Assistant for SAP Adaptive Server Enterprise (formerly SAP Sybase ASE)](https://www.microsoft.com/en-us/download/details.aspx?id=54256). -- Connectivity and sufficient permissions to access both source and target.
+Before you begin migrating your SAP ASE database to your SQL database, do the following:
+- Verify that your source environment is supported.
+- Download and install [SQL Server Migration Assistant for SAP Adaptive Server Enterprise (formerly SAP Sybase ASE)](https://www.microsoft.com/en-us/download/details.aspx?id=54256).
+- Ensure that you have connectivity and sufficient permissions to access both source and target.
## Pre-migration
-After you have met the prerequisites, you are ready to discover the topology of your environment and assess the feasibility of your migration.
+After you've met the prerequisites, you're ready to discover the topology of your environment and assess the feasibility of your migration.
### Assess
-Use [SQL Server Migration Assistant (SSMA) for SAP Adaptive Server Enterprise (formally SAP Sybase ASE)](https://www.microsoft.com/en-us/download/details.aspx?id=54256) to review database objects and data, assess databases for migration, migrate Sybase database objects to Azure SQL Database, and then migrate data to Azure SQL Database. To learn more, see [SQL Server Migration Assistant for Sybase (SybaseToSQL)](/sql/ssma/sybase/sql-server-migration-assistant-for-sybase-sybasetosql).
-
-To create an assessment, follow these steps:
+By using [SQL Server Migration Assistant (SSMA) for SAP Adaptive Server Enterprise (formally SAP Sybase ASE)](https://www.microsoft.com/en-us/download/details.aspx?id=54256), you can review database objects and data, assess databases for migration, migrate Sybase database objects to your SQL database, and then migrate data to the SQL database. To learn more, see [SQL Server Migration Assistant for Sybase (SybaseToSQL)](/sql/ssma/sybase/sql-server-migration-assistant-for-sybase-sybasetosql).
-1. Open **SSMA for Sybase**.
-1. Select **File** and then choose **New Project**.
-1. Provide a project name, a location to save your project, and then select Azure SQL Database as the migration target from the drop-down. Select **OK**.
-1. Enter in values for SAP connection details on the **Connect to Sybase** dialog box.
-1. Right-click the SAP database you want to migrate, and then choose **Create report**. This generates an HTML report. Alternatively, you can choose **Create report** from the navigation bar after selecting the database:
-1. Review the HTML report to understand conversion statistics and any errors or warnings. You can also open the report in Excel to get an inventory of SAP ASE objects and the effort required to perform schema conversions. The default location for the report is in the report folder within SSMAProjects.
+To create an assessment, do the following:
- For example: `drive:\<username>\Documents\SSMAProjects\MySAPMigration\report\report_<date>`.
+1. Open SSMA for Sybase.
+1. Select **File**, and then select **New Project**.
+1. In the **New Project** pane, enter a name and location for your project and then, in the **Migrate To** drop-down list, select **Azure SQL Database**.
+1. Select **OK**.
+1. On the **Connect to Sybase** pane, enter the SAP connection details.
+1. Right-click the SAP database you want to migrate, and then select **Create report**. This generates an HTML report. Alternatively, you can select the **Create report** tab at the upper right.
+1. Review the HTML report to understand the conversion statistics and any errors or warnings. You can also open the report in Excel to get an inventory of SAP ASE objects and the effort that's required to perform schema conversions. The default location for the report is in the report folder within SSMAProjects. For example:
+ `drive:\<username>\Documents\SSMAProjects\MySAPMigration\report\report_<date>`
-### Validate type mappings
+### Validate the type mappings
-Before you perform schema conversion validate the default datatype mappings or change them based on requirements. You could do so either by navigating to the **Tools** menu and choosing **Project Settings** or you can change type mapping for each table by selecting the table in the **SAP ASE Metadata Explorer**.
+Before you perform schema conversion, validate the default data-type mappings or change them based on requirements. You can do so by selecting **Tools** > **Project Settings**, or you can change the type mapping for each table by selecting the table in the **SAP ASE Metadata Explorer**.
+### Convert the schema
-### Convert schema
+To convert the schema, do the following:
-To convert the schema, follow these steps:
+1. (Optional) To convert dynamic or specialized queries, right-click the node, and then select **Add statement**.
+1. Select the **Connect to Azure SQL Database** tab, and then enter the details for your SQL database. You can choose to connect to an existing database or provide a new name, in which case a database will be created on the target server.
+1. On the **Sybase Metadata Explorer** pane, right-click the SAP ASE schema you're working with, and then select **Convert Schema**.
+1. After the schema has been converted, compare and review the converted structure against the original structure to identify potential problems.
-1. (Optional) To convert dynamic or ad-hoc queries, right-click the node, and choose **Add Statement**.
-1. Select **Connect to Azure SQL Database** in the top-line navigation bar and provide Azure SQL Database details. You can choose to connect to an existing database or provide a new name, in which case a database will be created on the target server.
-1. Right-click the SAP ASE schema in **Sybase Metadata Explorer** and choose **Convert schema**. Alternatively, you can select **Convert schema** from the top-line navigation bar.
-1. Compare and review the structure of the schema to identify potential problems.
+ After the schema conversion, you can save this project locally for an offline schema remediation exercise. To do so, select **File** > **Save Project**. This gives you an opportunity to evaluate the source and target schemas offline and perform remediation before you publish the schema to your SQL database.
- After schema conversion you can save this project locally for an offline schema remediation exercise. Select **Save Project** from the **File** menu. This gives you an opportunity to evaluate the source and target schemas offline and perform remediation before you can publish the schema to Azure SQL Database.
+1. On the **Output** pane, select **Review results**, and review any errors in the **Error list** pane.
+1. Save the project locally for an offline schema remediation exercise. To do so, select **File** > **Save Project**. This gives you an opportunity to evaluate the source and target schemas offline and perform remediation before you publish the schema to your SQL database.
-1. Select **Review results** in the Output pane, and review errors in the **Error list** pane.
-1. Save the project locally for an offline schema remediation exercise. Select **Save Project** from the **File** menu. This gives you an opportunity to evaluate the source and target schemas offline and perform remediation before you can publish the schema to SQL Database.
+## Migrate the databases
-## Migrate
+After you have the necessary prerequisites in place and have completed the tasks associated with the *pre-migration* stage, you're ready to run the schema and data migration.
-After you have the necessary prerequisites in place and have completed the tasks associated with the **Pre-migration** stage, you are ready to perform the schema and data migration.
+To publish the schema and migrate the data, do the following:
-To publish your schema and migrate the data, follow these steps:
+1. Publish the schema. On the **Azure SQL Database Metadata Explorer** pane, right-click the database, and then select **Synchronize with Database**. This action publishes the SAP ASE schema to your SQL database.
-1. Publish the schema: Right-click the database in **Azure SQL Database Metadata Explorer** and choose **Synchronize with Database**. This action publishes the SAP ASE schema to the Azure SQL Database instance.
-1. Migrate the data: Right-click the database or object you want to migrate in **SAP ASE Metadata Explorer**, and choose **Migrate data**. Alternatively, you can select **Migrate Data** from the top-line navigation bar. To migrate data for an entire database, select the check box next to the database name. To migrate data from individual tables, expand the database, expand Tables, and then select the check box next to the table. To omit data from individual tables, clear the check box:
-1. After migration completes, view the **Data Migration Report**:
-1. Connect to your Azure SQL Database by using [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms) and validate the migration by reviewing the data and schema.
+1. Migrate the data. On the **SAP ASE Metadata Explorer** pane, right-click the SAP ASE database or object you want to migrate, and then select **Migrate Data**. Alternatively, you can select the **Migrate Data** tab at the upper right.
+ To migrate data for an entire database, select the check box next to the database name. To migrate data from individual tables, expand the database, expand **Tables**, and then select the check box next to the table. To omit data from individual tables, clear the check box.
+1. After the migration is completed, view the **Data Migration Report**.
+1. Validate the migration by reviewing the data and schema. To do so, connect to your SQL database by using [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms).
## Post-migration
-After you have successfully completed the **Migration** stage, you need to go through a series of post-migration tasks to ensure that everything is functioning as smoothly and efficiently as possible.
+After you've successfully completed the *migration* stage, you need to complete a series of post-migration tasks to ensure that everything is functioning as smoothly and efficiently as possible.
### Remediate applications
After the data is migrated to the target environment, all the applications that previously consumed the source need to start consuming the target. Accomplishing this might require changes to the applications.
### Perform tests
-The test approach for database migration consists of performing the following activities:
+The test approach to database migration consists of the following activities:
-1. **Develop validation tests**. To test database migration, you need to use SQL queries. You must create the validation queries to run against both the source and the target databases. Your validation queries should cover the scope you have defined.
+1. **Develop validation tests**: To test the database migration, you need to use SQL queries. You must create the validation queries to run against both the source and target databases. Your validation queries should cover the scope you've defined.
-2. **Set up test environment**. The test environment should contain a copy of the source database and the target database. Be sure to isolate the test environment.
+1. **Set up a test environment**: The test environment should contain a copy of the source database and the target database. Be sure to isolate the test environment.
-3. **Run validation tests**. Run the validation tests against the source and the target, and then analyze the results.
+1. **Run validation tests**: Run validation tests against the source and the target, and then analyze the results.
+
+1. **Run performance tests**: Run performance tests against the source and the target, and then analyze and compare the results.
-4. **Run performance tests**. Run performance test against the source and the target, and then analyze and compare the results.
### Optimize
-The post-migration phase is crucial for reconciling any data accuracy issues and verifying completeness, as well as addressing performance issues with the workload.
+The post-migration phase is crucial for reconciling any data accuracy issues, verifying completeness, and addressing performance issues with the workload.
-> [!NOTE]
-> For additional detail about these issues and specific steps to mitigate them, see the [Post-migration Validation and Optimization Guide](/sql/relational-databases/post-migration-validation-and-optimization-guide).
+For more information about these issues and the steps to mitigate them, see the [Post-migration validation and optimization guide](/sql/relational-databases/post-migration-validation-and-optimization-guide).
## Next steps -- For a matrix of the Microsoft and third-party services and tools that are available to assist you with various database and data migration scenarios as well as specialty tasks, see [Service and tools for data migration](../../../dms/dms-tools-matrix.md).
+- For a matrix of Microsoft and third-party services and tools that are available to assist you with various database and data migration scenarios and specialty tasks, see [Service and tools for data migration](../../../dms/dms-tools-matrix.md).
-- To learn more about Azure SQL Database see:
+- To learn more about Azure SQL Database, see:
- [An overview of SQL Database](../../database/sql-database-paas-overview.md)
- - [Azure total Cost of Ownership Calculator](https://azure.microsoft.com/pricing/tco/calculator/)
-
+ - [Azure total cost of ownership calculator](https://azure.microsoft.com/pricing/tco/calculator/)
-- To learn more about the framework and adoption cycle for Cloud migrations, see
+- To learn more about the framework and adoption cycle for cloud migrations, see:
- [Cloud Adoption Framework for Azure](/azure/cloud-adoption-framework/migrate/azure-best-practices/contoso-migration-scale)
- - [Best practices for costing and sizing workloads migrate to Azure](/azure/cloud-adoption-framework/migrate/azure-best-practices/migrate-best-practices-costs)
+ - [Best practices for costing and sizing workloads for migration to Azure](/azure/cloud-adoption-framework/migrate/azure-best-practices/migrate-best-practices-costs)
-- To assess the Application access layer, see [Data Access Migration Toolkit (Preview)](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit)
+- To assess the application access layer, see [Data Access Migration Toolkit (preview)](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit).
- For details on how to perform Data Access Layer A/B testing, see [Database Experimentation Assistant](/sql/dea/database-experimentation-assistant-overview).
backup Backup Managed Disks Ps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-managed-disks-ps.md
+
+ Title: Back up Azure Managed Disks using Azure PowerShell
+description: Learn how to back up Azure Managed Disks using Azure PowerShell.
+ Last updated : 03/26/2021++
+# Back up Azure Managed Disks using Azure PowerShell
+
+This article explains how to back up [Azure Managed Disk](../virtual-machines/managed-disks-overview.md) using Azure PowerShell.
+
+In this article, you'll learn how to:
+
+- Create a Backup vault
+
+- Create a backup policy
+
+- Configure a backup of an Azure Disk
+
+- Run an on-demand backup job
+
+For information on the Azure Disk backup region availability, supported scenarios and limitations, see the [support matrix](disk-backup-support-matrix.md).
+
+## Create a Backup vault
+
+A Backup vault is a storage entity in Azure that holds backup data for various newer workloads that Azure Backup supports, such as Azure Database for PostgreSQL servers and Azure Disks. Backup vaults make it easy to organize your backup data, while minimizing management overhead. Backup vaults are based on the Azure Resource Manager model of Azure, which provides enhanced capabilities to help secure backup data.
+
+Before creating a backup vault, choose the storage redundancy of the data within the vault. Then proceed to create the backup vault with that storage redundancy and location. In this article, we'll create a backup vault "TestBkpVault" in the "westus" region under the resource group "testBkpVaultRG". Use the [New-AzDataProtectionBackupVault](/powershell/module/az.dataprotection/new-azdataprotectionbackupvault?view=azps-5.7.0&preserve-view=true) command to create a backup vault. Learn more about [creating a Backup vault](./backup-vault-overview.md#create-a-backup-vault).
+
+```azurepowershell-interactive
+$storageSetting = New-AzDataProtectionBackupVaultStorageSettingObject -Type LocallyRedundant/GeoRedundant -DataStoreType VaultStore
+New-AzDataProtectionBackupVault -ResourceGroupName testBkpVaultRG -VaultName TestBkpVault -Location westus -StorageSetting $storageSetting
+$TestBkpVault = Get-AzDataProtectionBackupVault -VaultName TestBkpVault
+$TestBKPVault | fl
+ETag :
+Id : /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx/resourceGroups/testBkpVaultRG/providers/Microsoft.DataProtection/backupVaults/TestBkpVault
+Identity : Microsoft.Azure.PowerShell.Cmdlets.DataProtection.Models.Api20210201Preview.DppIdentityDetails
+IdentityPrincipalId :
+IdentityTenantId :
+IdentityType :
+Location : westus
+Name : TestBkpVault
+ProvisioningState : Succeeded
+StorageSetting : {Microsoft.Azure.PowerShell.Cmdlets.DataProtection.Models.Api20210201Preview.StorageSetting}
+SystemData : Microsoft.Azure.PowerShell.Cmdlets.DataProtection.Models.Api20210201Preview.SystemData
+Tag : Microsoft.Azure.PowerShell.Cmdlets.DataProtection.Models.Api20210201Preview.DppTrackedResourceTags
+Type : Microsoft.DataProtection/backupVaults
+```
+
+After creating the vault, let's create a backup policy to protect Azure disks.
+
+## Create a Backup policy
+
+To understand the inner components of a backup policy for Azure disk backup, retrieve the policy template using the command [Get-AzDataProtectionPolicyTemplate](/powershell/module/az.dataprotection/get-azdataprotectionpolicytemplate?view=azps-5.7.0&preserve-view=true). This command returns a default policy template for a given datasource type. Use this policy template to create a new policy.
+
+```azurepowershell-interactive
+$policyDefn = Get-AzDataProtectionPolicyTemplate -DatasourceType AzureDisk
+$policyDefn | fl
++
+DatasourceType : {Microsoft.Compute/disks}
+ObjectType : BackupPolicy
+PolicyRule : {BackupHourly, Default}
+
+$policyDefn.PolicyRule | fl
++
+BackupParameter : Microsoft.Azure.PowerShell.Cmdlets.DataProtection.Models.Api20210201Preview.AzureBackupParams
+BackupParameterObjectType : AzureBackupParams
+DataStoreObjectType : DataStoreInfoBase
+DataStoreType : OperationalStore
+Name : BackupHourly
+ObjectType : AzureBackupRule
+Trigger : Microsoft.Azure.PowerShell.Cmdlets.DataProtection.Models.Api20210201Preview.ScheduleBasedTriggerContext
+TriggerObjectType : ScheduleBasedTriggerContext
+
+IsDefault : True
+Lifecycle : {Microsoft.Azure.PowerShell.Cmdlets.DataProtection.Models.Api20210201Preview.SourceLifeCycle}
+Name : Default
+ObjectType : AzureRetentionRule
+```
+
+The policy template consists of a trigger (which decides what triggers the backup) and a lifecycle (which decides when to delete, copy, or move the backup). In Azure Disk Backup, the default values are a scheduled trigger every 4 hours (PT4H) and retention of each backup for 7 days.
+
+```azurepowershell-interactive
+ $policyDefn.PolicyRule[0].Trigger | fl
++
+ObjectType : ScheduleBasedTriggerContext
+ScheduleRepeatingTimeInterval : {R/2020-04-05T13:00:00+00:00/PT4H}
+TaggingCriterion : {Default}
+```
+
+```azurepowershell-interactive
+$policyDefn.PolicyRule[1].Lifecycle | fl
++
+DeleteAfterDuration : P7D
+DeleteAfterObjectType : AbsoluteDeleteOption
+SourceDataStoreObjectType : DataStoreInfoBase
+SourceDataStoreType : OperationalStore
+TargetDataStoreCopySetting :
+```
+
+Azure Disk Backup offers multiple backups per day. If you require more frequent backups, choose the **Hourly** backup frequency, which can take backups at intervals of every 4, 6, 8, or 12 hours. The backups are scheduled based on the **Time** interval selected. For example, if you select **Every 4 hours**, the backups are taken at intervals of approximately 4 hours, so they're distributed equally across the day. If a once-a-day backup is sufficient, choose the **Daily** backup frequency. With the daily backup frequency, you can specify the time of day when your backups are taken. It's important to note that the time of day indicates the backup start time, not the time when the backup completes. The time required to complete the backup operation depends on various factors, including the size of the disk and the churn rate between consecutive backups. However, Azure Disk Backup is an agentless backup that uses [incremental snapshots](../virtual-machines/disks-incremental-snapshots.md), so it doesn't impact production application performance.
+
+ >[!NOTE]
+ > Although the selected vault may have the global-redundancy setting, currently Azure Disk Backup supports snapshot datastore only. All backups are stored in a resource group in your subscription and aren't copied to backup vault storage.
+
+For more details about policy creation, see the [Azure Disk Backup policy](backup-managed-disks.md#create-backup-policy) document.
+
+If you want to edit the hourly frequency or the retention period, use the [Edit-AzDataProtectionPolicyTriggerClientObject](/powershell/module/az.dataprotection/edit-azdataprotectionpolicytriggerclientobject?view=azps-5.7.0&preserve-view=true) and/or [Edit-AzDataProtectionPolicyRetentionRuleClientObject](/powershell/module/az.dataprotection/edit-azdataprotectionpolicyretentionruleclientobject?view=azps-5.7.0&preserve-view=true) commands. Once the policy object has all the desired values, proceed to create a new policy from the policy object using the [New-AzDataProtectionBackupPolicy](/powershell/module/az.dataprotection/new-azdataprotectionbackuppolicy?view=azps-5.7.0&preserve-view=true) command.
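For example, the following is a minimal sketch of switching the policy schedule from the default hourly trigger to a daily backup before creating the policy. The `New-AzDataProtectionPolicyTriggerScheduleClientObject` helper and the exact parameter names shown are assumptions to verify against your Az.DataProtection module version.

```azurepowershell-interactive
# Sketch: build a daily schedule (starting at 10:00) and apply it to the policy template.
# The helper cmdlet and its parameters are assumptions; verify them for your module version.
$schDates = @(
    (Get-Date -Hour 10 -Minute 0 -Second 0)
)
$trigger = New-AzDataProtectionPolicyTriggerScheduleClientObject -ScheduleDays $schDates -IntervalType Daily -IntervalCount 1
Edit-AzDataProtectionPolicyTriggerClientObject -Schedule $trigger -Policy $policyDefn
```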
+
+```azurepowershell-interactive
+New-AzDataProtectionBackupPolicy -ResourceGroupName "testBkpVaultRG" -VaultName $TestBkpVault.Name -Name diskBkpPolicy -Policy $policyDefn
+
+Name Type
+- -
+diskBkpPolicy Microsoft.DataProtection/backupVaults/backupPolicies
+
+$diskBkpPol = Get-AzDataProtectionBackupPolicy -ResourceGroupName "testBkpVaultRG" -VaultName $TestBkpVault.Name -Name "diskBkpPolicy"
+```
+
+## Configure backup
+
+Once the vault and policy are created, there are three critical points to consider in order to protect an Azure disk.
+
+### Key entities involved
+
+#### Disk to be protected
+
+Fetch the ARM ID of the disk to be protected. This will serve as the identifier of the disk. In this example, we'll use a disk named "PSTestDisk" under the resource group "diskrg", in a different subscription.
+
+```azurepowershell-interactive
+$DiskId = "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx/resourcegroups/diskrg/providers/Microsoft.Compute/disks/PSTestDisk"
+```
+
+#### Snapshot resource group
+
+The disk snapshots are stored in a resource group within your subscription. As a guideline, it's recommended to create a dedicated resource group as a snapshot datastore to be used by the Azure Backup service. Having a dedicated resource group lets you restrict access permissions on the resource group, providing safety and ease of management of the backup data. Note the ARM ID of the resource group where you want to place the disk snapshots.
+
+```azurepowershell-interactive
+$snapshotrg = "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx/resourceGroups/snapshotrg"
+```
+
+#### Backup vault
+
+The Backup vault requires permissions on the disk and the snapshot resource group to be able to trigger snapshots and manage their lifecycle. The system-assigned managed identity of the vault is used for assigning these permissions. Use the [Update-AzRecoveryServicesVault](/powershell/module/az.recoveryservices/update-azrecoveryservicesvault?view=azps-5.7.0&preserve-view=true) command to enable the system-assigned managed identity for the vault.
+
+### Assign permissions
+
+The user needs to assign a few permissions via Azure role-based access control (RBAC) to the vault (represented by the vault's managed identity) and to the relevant disk and/or the disk resource group. These assignments can be made via the Azure portal or PowerShell. All related permissions are detailed in points 1, 2, and 3 in [this section](backup-managed-disks.md#configure-backup).
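For example, here's a minimal PowerShell sketch of those role assignments. It assumes the built-in **Disk Backup Reader** role on the source disk and the **Disk Snapshot Contributor** role on the snapshot resource group; verify the exact roles against the permissions listed in the linked section before using it.

```azurepowershell-interactive
# Sketch: grant the vault's system-assigned managed identity access to the disk and the
# snapshot resource group. The role names are assumptions to verify against the linked
# section. $DiskId and $snapshotrg were defined in the previous steps.
New-AzRoleAssignment -ObjectId $TestBkpVault.IdentityPrincipalId -RoleDefinitionName "Disk Backup Reader" -Scope $DiskId
New-AzRoleAssignment -ObjectId $TestBkpVault.IdentityPrincipalId -RoleDefinitionName "Disk Snapshot Contributor" -Scope $snapshotrg
```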
+
+### Prepare the request
+
+Once all the relevant permissions are set, the configuration of the backup is performed in two steps. First, we prepare the relevant request by using the relevant vault, policy, disk, and snapshot resource group using the [Initialize-AzDataProtectionBackupInstance](/powershell/module/az.dataprotection/initialize-azdataprotectionbackupinstance?view=azps-5.7.0&preserve-view=true) command. Then, we submit the request to protect the disk using the [New-AzDataProtectionBackupInstance](/powershell/module/az.dataprotection/new-azdataprotectionbackupinstance?view=azps-5.7.0&preserve-view=true) command.
+
+```azurepowershell-interactive
+$instance = Initialize-AzDataProtectionBackupInstance -DatasourceType AzureDisk -DatasourceLocation $TestBkpvault.Location -PolicyId $diskBkpPol[0].Id -DatasourceId $DiskId
+$instance.Property.PolicyInfo.PolicyParameter.DataStoreParametersList[0].ResourceGroupId = $snapshotrg
+New-AzDataProtectionBackupInstance -ResourceGroupName "testBkpVaultRG" -VaultName $TestBkpVault.Name -BackupInstance $instance
+
+Name Type BackupInstanceName
+- -
+diskrg-PSTestDisk-3df6ac08-9496-4839-8fb5-8b78e594f166 Microsoft.DataProtection/backupVaults/backupInstances diskrg-PSTestDisk-3df6ac08-9496-4839-8fb5-8b78e594f166
+```
+
+## Run an on-demand backup
+
+Fetch the relevant backup instance on which you want to trigger a backup using the [Get-AzDataProtectionBackupInstance](/powershell/module/az.dataprotection/get-azdataprotectionbackupinstance?view=azps-5.7.0&preserve-view=true) command.
+
+```azurepowershell-interactive
+$instance = Get-AzDataProtectionBackupInstance -SubscriptionId "xxxx-xxx-xxx" -ResourceGroupName "testBkpVaultRG" -VaultName $TestBkpVault.Name -Name "BackupInstanceName"
+```
+
+You can specify a retention rule while triggering a backup. To view the retention rules in the policy, navigate through the policy object for retention rules. In the following example, the rule named 'Default' is displayed, and we'll use that rule for the on-demand backup.
+
+```azurepowershell-interactive
+$policyDefn.PolicyRule | fl
++
+BackupParameter : Microsoft.Azure.PowerShell.Cmdlets.DataProtection.Models.Api20210201Preview.AzureBackupParams
+BackupParameterObjectType : AzureBackupParams
+DataStoreObjectType : DataStoreInfoBase
+DataStoreType : OperationalStore
+Name : BackupHourly
+ObjectType : AzureBackupRule
+Trigger : Microsoft.Azure.PowerShell.Cmdlets.DataProtection.Models.Api20210201Preview.ScheduleBasedTriggerContext
+TriggerObjectType : ScheduleBasedTriggerContext
+
+IsDefault : True
+Lifecycle : {Microsoft.Azure.PowerShell.Cmdlets.DataProtection.Models.Api20210201Preview.SourceLifeCycle}
+Name : Default
+ObjectType : AzureRetentionRule
+```
+
+Trigger an on-demand backup using the [Backup-AzDataProtectionBackupInstanceAdhoc](/powershell/module/az.dataprotection/backup-azdataprotectionbackupinstanceadhoc?view=azps-5.7.0&preserve-view=true) command.
+
+```azurepowershell-interactive
+$AllInstances = Get-AzDataProtectionBackupInstance -ResourceGroupName "testBkpVaultRG" -VaultName $TestBkpVault.Name
+Backup-AzDataProtectionBackupInstanceAdhoc -BackupInstanceName $AllInstances[0].Name -ResourceGroupName "testBkpVaultRG" -VaultName $TestBkpVault.Name -BackupRuleOptionRuleName "Default"
+```
+
+## Tracking jobs
+
+Track all the jobs using the [Get-AzDataProtectionJob](/powershell/module/az.dataprotection/get-azdataprotectionjob?view=azps-5.7.0&preserve-view=true) command. You can list all jobs and fetch a particular job detail.
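For example, here's a minimal sketch that lists the jobs in the vault and then inspects one of them; treat the exact shape of the output as an assumption that may vary by module version.

```azurepowershell-interactive
# Sketch: list all jobs in the vault and look at the details of the first one.
$jobs = Get-AzDataProtectionJob -ResourceGroupName "testBkpVaultRG" -VaultName $TestBkpVault.Name
$jobs[0] | fl
```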
+
+You can also use Az.ResourceGraph to track all jobs across all backup vaults. Use the [Search-AzDataProtectionJobInAzGraph](/powershell/module/az.dataprotection/search-azdataprotectionjobinazgraph?view=azps-5.7.0&preserve-view=true) command to get the relevant job, which can be across any backup vault.
+
+```azurepowershell-interactive
+ $job = Search-AzDataProtectionJobInAzGraph -Subscription $sub -ResourceGroupName "testBkpVaultRG" -Vault $TestBkpVault.Name -DatasourceType AzureDisk -Operation OnDemandBackup
+```
+
+## Next steps
+
+- [Restore Azure Managed Disks using Azure PowerShell](restore-managed-disks-ps.md)
backup Backup Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-overview.md
The Azure Backup service provides simple, secure, and cost-effective solutions t
- **On-premises** - Back up files, folders, system state using the [Microsoft Azure Recovery Services (MARS) agent](backup-support-matrix-mars-agent.md). Or use the DPM or Azure Backup Server (MABS) agent to protect on-premises VMs ([Hyper-V](back-up-hyper-v-virtual-machines-mabs.md) and [VMware](backup-azure-backup-server-vmware.md)) and other [on-premises workloads](backup-mabs-protection-matrix.md) - **Azure VMs** - [Back up entire Windows/Linux VMs](backup-azure-vms-introduction.md) (using backup extensions) or back up files, folders, and system state using the [MARS agent](backup-azure-manage-mars.md).-- **Azure Managed Disks** - [Back up Azure Managed Disks (in preview)](backup-managed-disks.md)
+- **Azure Managed Disks** - [Back up Azure Managed Disks](backup-managed-disks.md)
- **Azure Files shares** - [Back up Azure File shares to a storage account](backup-afs.md) - **SQL Server in Azure VMs** - [Back up SQL Server databases running on Azure VMs](backup-azure-sql-database.md) - **SAP HANA databases in Azure VMs** - [Backup SAP HANA databases running on Azure VMs](backup-azure-sap-hana-database.md)
backup Disk Backup Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/disk-backup-support-matrix.md
More regions will be announced when they become available.
- Currently, the Azure portal experience to configure the backup of disks is limited to a maximum of 20 disks from the same subscription. -- Currently (during the preview), the use of PowerShell and Azure CLI to configure the backup and restore of disks isn't supported.
+- Azure Disk Backup supports PowerShell. Currently, Azure CLI isn't supported.
- When configuring backup, the disk selected to be backed up and the snapshot resource group where the snapshots are to be stored must be part of the same subscription. You can't create an incremental snapshot for a particular disk outside of that disk's subscription. Learn more about [incremental snapshots](../virtual-machines/disks-incremental-snapshots.md#restrictions) for managed disk. For more information on how to choose a snapshot resource group, see [Configure backup](backup-managed-disks.md#configure-backup).
More regions will be announced when they become available.
- [Private Links](../virtual-machines/disks-enable-private-links-for-import-export-portal.md) support for managed disks allows you to restrict the export and import of managed disks so that it only occurs within your Azure virtual network. Azure Disk Backup supports backup of disks that have private endpoints enabled. This doesn't include the backup data or snapshots to be accessible through the private endpoint. -- During the preview, you can't disable the backup, so the option **stop backup and retain backup data** is not supported. You can delete a backup instance, which will not only stop the backup but also delete all the backup data.
+- You can delete a backup instance, which stops the backup and also deletes all the backup data. Currently, you can't disable a backup, as the option **stop backup and retain backup data** isn't supported.
## Next steps
backup Restore Managed Disks Ps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/restore-managed-disks-ps.md
+
+ Title: Restore Azure Managed Disks via Azure PowerShell
+description: Learn how to restore Azure Managed Disks using Azure PowerShell.
+ Last updated : 03/26/2021++
+# Restore Azure Managed Disks using Azure PowerShell
+
+This article explains how to restore [Azure Managed Disks](../virtual-machines/managed-disks-overview.md) from a restore point created by Azure Backup.
+
+Currently, the Original-Location Recovery (OLR) option of restoring by replacing the existing source disk from which the backups were taken isn't supported. You can restore from a recovery point to create a new disk, either in the same resource group as the source disk from which the backups were taken or in any other resource group. This is known as Alternate-Location Recovery (ALR), and it helps to keep both the source disk and the restored (new) disk.
+
+In this article, you'll learn how to:
+
+- Restore to create a new disk
+
+- Track the restore operation status
+
+In the following examples, we'll refer to an existing Backup vault "TestBkpVault" under the resource group "testBkpVaultRG".
+
+```azurepowershell-interactive
+$TestBkpVault = Get-AzDataProtectionBackupVault -VaultName TestBkpVault -ResourceGroupName "testBkpVaultRG"
+```
+
+## Restore to create a new disk
+
+### Setting up permissions
+
+The Backup vault uses a managed identity to access other Azure resources. To restore from a backup, the Backup vault's managed identity requires a set of permissions on the resource group where the disk is to be restored.
+
+The Backup vault uses a system-assigned managed identity, which is restricted to one per resource and is tied to the lifecycle of the resource. You can grant permissions to the managed identity by using Azure role-based access control (Azure RBAC). Managed identity is a service principal of a special type that can only be used with Azure resources. Learn more about [Managed Identities](../active-directory/managed-identities-azure-resources/overview.md).
+
+Assign the relevant permissions to the vault's system-assigned managed identity on the target resource group where the disks will be restored or created, as described [here](restore-managed-disks.md#restore-to-create-a-new-disk).
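For example, here's a minimal sketch of that role assignment, assuming the built-in **Disk Restore Operator** role is the one required on the target resource group; verify the role against the linked guidance.

```azurepowershell-interactive
# Sketch: grant the vault's system-assigned managed identity restore permissions on the
# target resource group. The role name is an assumption to verify against the linked guidance.
$targetRgId = "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx/resourceGroups/targetrg"
New-AzRoleAssignment -ObjectId $TestBkpVault.IdentityPrincipalId -RoleDefinitionName "Disk Restore Operator" -Scope $targetRgId
```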
+
+### Fetching the relevant recovery point
+
+Fetch all backup instances using the [Get-AzDataProtectionBackupInstance](/powershell/module/az.dataprotection/get-azdataprotectionbackupinstance?view=azps-5.7.0&preserve-view=true) command and identify the relevant instance.
+
+```azurepowershell-interactive
+$AllInstances = Get-AzDataProtectionBackupInstance -ResourceGroupName "testBkpVaultRG" -VaultName $TestBkpVault.Name
+```
+
+You can also use **Az.Resourcegraph** and the [Search-AzDataProtectionBackupInstanceInAzGraph](/powershell/module/az.dataprotection/search-azdataprotectionbackupinstanceinazgraph?view=azps-5.7.0&preserve-view=true) command to search across instances in many vaults and subscriptions.
+
+```azurepowershell-interactive
+$AllInstances = Search-AzDataProtectionBackupInstanceInAzGraph -ResourceGroupName "testBkpVaultRG" -VaultName $TestBkpVault.Name -DatasourceType AzureDisk -ProtectionStatus ProtectionConfigured
+```
+
+Once the instance is identified, fetch the relevant recovery point.
+
+```azurepowershell-interactive
+$rp = Get-AzDataProtectionRecoveryPoint -ResourceGroupName "testBkpVaultRG" -VaultName $TestBkpVault.Name -BackupInstanceName $AllInstances[2].BackupInstanceName
+```
+
+### Preparing the restore request
+
+Construct the ARM ID of the new disk to be created with the target resource group, to which permissions were assigned as detailed [above](#setting-up-permissions), and the required disk name. For example, a disk can be named **PSTestDisk2** under a resource group **targetrg** with a different subscription.
+
+```azurepowershell-interactive
+$targetDiskId = "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx/resourceGroups/targetrg/providers/Microsoft.Compute/disks/PSTestDisk2"
+```
+
+Use the [Initialize-AzDataProtectionRestoreRequest](/powershell/module/az.dataprotection/initialize-azdataprotectionrestorerequest?view=azps-5.7.0&preserve-view=true) command to prepare the restore request with all relevant details.
+
+```azurepowershell-interactive
+$restorerequest = Initialize-AzDataProtectionRestoreRequest -DatasourceType AzureDisk -SourceDataStore OperationalStore -RestoreLocation $TestBkpVault.Location -RestoreType AlternateLocation -TargetResourceId $targetDiskId -RecoveryPoint $rp[0].Name
+```
+
+### Trigger the restore
+
+Use the [Start-AzDataProtectionBackupInstanceRestore](/powershell/module/az.dataprotection/start-azdataprotectionbackupinstancerestore?view=azps-5.7.0&preserve-view=true) command to trigger the restore with the request prepared above.
+
+```azurepowershell-interactive
+Start-AzDataProtectionBackupInstanceRestore -BackupInstanceName $AllInstances[2].BackupInstanceName -ResourceGroupName "testBkpVaultRG" -VaultName $TestBkpVault.Name -Parameter $restorerequest
+```
+
+## Tracking job
+
+Track all the jobs using the [Get-AzDataProtectionJob](/powershell/module/az.dataprotection/get-azdataprotectionjob?view=azps-5.7.0&preserve-view=true) command. You can list all jobs and fetch a particular job detail.
+
+You can also use **Az.ResourceGraph** to track all jobs across all backup vaults. Use the [Search-AzDataProtectionJobInAzGraph](/powershell/module/az.dataprotection/search-azdataprotectionjobinazgraph?view=azps-5.7.0&preserve-view=true) command to get the relevant job, which can be across any backup vault.
+
+```azurepowershell-interactive
+$job = Search-AzDataProtectionJobInAzGraph -Subscription $sub -ResourceGroupName "testBkpVaultRG" -Vault $TestBkpVault.Name -DatasourceType AzureDisk -Operation OnDemandBackup
+```
+
+## Next steps
+
+- [Azure Disk Backup FAQ](disk-backup-faq.md)
backup Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/whats-new.md
You can learn more about the new releases by bookmarking this page or by [subscr
## Updates summary - March 2021
+ - [Azure Disk Backup is now generally available](#azure-disk-backup-is-now-generally-available)
- [Backup center is now generally available](#backup-center-is-now-generally-available) - [Archive Tier support for Azure Backup (in preview)](#archive-tier-support-for-azure-backup-in-preview) - February 2021
You can learn more about the new releases by bookmarking this page or by [subscr
- [Zone redundant storage (ZRS) for backup data (in preview)](#zone-redundant-storage-zrs-for-backup-data-in-preview) - [Soft delete for SQL Server and SAP HANA workloads in Azure VMs](#soft-delete-for-sql-server-and-sap-hana-workloads)
+## Azure Disk Backup is now generally available
+
+Azure Backup offers snapshot lifecycle management for Azure Managed Disks by automating the periodic creation of snapshots and retaining them for configured durations using a backup policy.
+
+For more information, see [Overview of Azure Disk Backup](disk-backup-overview.md).
+ ## Backup center is now generally available Backup center simplifies data protection management at-scale by enabling you to discover, govern, monitor, operate, and optimize backup management from one single central console.
batch Batch Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-diagnostics.md
Title: Metrics, alerts, and diagnostic logs description: Record and analyze diagnostic log events for Azure Batch account resources like pools and tasks. Previously updated : 10/08/2020 Last updated : 03/25/2021 # Batch metrics, alerts, and logs for diagnostic evaluation and monitoring
-This article explains how to monitor a Batch account using features of [Azure Monitor](../azure-monitor/overview.md). Azure Monitor collects [metrics](../azure-monitor/essentials/data-platform-metrics.md) and [diagnostic logs](../azure-monitor/essentials/platform-logs-overview.md) for resources in your Batch account. Collect and consume this data in a variety of ways to monitor your Batch account and diagnose issues. You can also configure [metric alerts](../azure-monitor/alerts/alerts-overview.md) so you receive notifications when a metric reaches a specified value.
+Azure Monitor collects [metrics](../azure-monitor/essentials/data-platform-metrics.md) and [diagnostic logs](../azure-monitor/essentials/platform-logs-overview.md) for resources in your Azure Batch account.
+
+You can collect and consume this data in a variety of ways to monitor your Batch account and diagnose issues. You can also configure [metric alerts](../azure-monitor/alerts/alerts-overview.md) so you receive notifications when a metric reaches a specified value.
## Batch metrics
-Metrics are Azure telemetry data (also called performance counters) that are emitted by your Azure resources and consumed by the Azure Monitor service. Examples of metrics in a Batch account are Pool Create Events, Low-Priority Node Count, and Task Complete Events.
+[Metrics](../azure-monitor/essentials/data-platform-metrics.md) are Azure telemetry data (also called performance counters) that are emitted by your Azure resources and consumed by the Azure Monitor service. Examples of metrics in a Batch account are Pool Create Events, Low-Priority Node Count, and Task Complete Events. These metrics can help identify trends and can be used for data analysis.
See the [list of supported Batch metrics](../azure-monitor/essentials/metrics-supported.md#microsoftbatchbatchaccounts).
Metrics are:
## View Batch metrics
-In the Azure portal, the **Overview** page for the account will show key node, core, and task metrics by default.
+In the Azure portal, the **Overview** page for the Batch account will show key node, core, and task metrics by default.
-To view all Batch account metrics in the Azure portal:
+To view additional metrics for a Batch account:
1. In the Azure portal, select **All services** > **Batch accounts**, and then select the name of your Batch account.
-2. Under **Monitoring**, select **Metrics**.
-3. Select **Add metric** and then choose a metric from the dropdown list.
-4. Select an **Aggregation** option for the metric. For count-based metrics (like "Dedicated Core Count" or "Low-Priority Node Count"), use the **Average** aggregation. For event-based metrics (like "Pool Resize Complete Events"), use the **Count**" aggregation.
-
- > [!WARNING]
- > Do not use the "Sum" aggregation, which adds up the values of all data points received over the period of the chart.
-
-5. To add additional metrics, repeat steps 3 and 4.
+1. Under **Monitoring**, select **Metrics**.
+1. Select **Add metric** and then choose a metric from the dropdown list.
1. Select an **Aggregation** option for the metric. For count-based metrics (like "Dedicated Core Count" or "Low-Priority Node Count"), use the **Avg** aggregation. For event-based metrics (like "Pool Resize Complete Events"), use the **Count** aggregation. Avoid using the **Sum** aggregation, which adds up the values of all data points received over the period of the chart.
+1. To add additional metrics, repeat steps 3 and 4.
-You can also retrieve metrics programmatically with the Azure Monitor APIs. For an example, see [Retrieve Azure Monitor metrics with .NET](https://azure.microsoft.com/resources/samples/monitor-dotnet-metrics-api/).
+You can also retrieve metrics programmatically with the Azure Monitor APIs. For an example, see [Retrieve Azure Monitor metrics with .NET](/samples/azure-samples/monitor-dotnet-metrics-api/monitor-dotnet-metrics-api/).
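If you prefer PowerShell over .NET, the Az.Monitor module can retrieve the same metrics. The following is a minimal sketch; the metric name `CoreCount` and the placeholder resource ID are assumptions to replace with a name from the supported-metrics list and your own account ID.

```azurepowershell-interactive
# Sketch: retrieve the average dedicated core count for the last day, in one-hour grains.
# Replace the placeholder resource ID; "CoreCount" is assumed to be the metric name you want.
$batchAccountId = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Batch/batchAccounts/<account-name>"
Get-AzMetric -ResourceId $batchAccountId -MetricName "CoreCount" `
    -StartTime (Get-Date).AddDays(-1) -EndTime (Get-Date) `
    -TimeGrain "01:00:00" -AggregationType Average
```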
-### Batch metric reliability
-
-Metrics can help identify trends and can be used for data analysis. It's important to note that metric delivery is not guaranteed, and may be subject to out-of-order delivery, data loss, and/or duplication. Because of this, using single events to alert or trigger functions is not recommended. See the next section for more details on how to set thresholds for alerting.
-
-Metrics emitted in the last 3 minutes may still be aggregating, so metric values may be underreported during this timeframe.
+> [!NOTE]
+> Metrics emitted in the last 3 minutes may still be aggregating, so values may be under-reported during this timeframe. Metric delivery is not guaranteed, and may be affected by out-of-order delivery, data loss, or duplication.
## Batch metric alerts
-You can configure near real-time *metric alerts* that trigger when the value of a specified metric crosses a threshold that you assign. The alert generates a notification when the alert is "Activated" (when the threshold is crossed and the alert condition is met) as well as when it is "Resolved" (when the threshold is crossed again and the condition is no longer met).
+You can configure near real-time metric alerts that trigger when the value of a specified metric crosses a threshold that you assign. The alert generates a notification when the alert is "Activated" (when the threshold is crossed and the alert condition is met) as well as when it is "Resolved" (when the threshold is crossed again and the condition is no longer met).
-Alerts that trigger on a single data point is not recommended, as metrics are subject to out-of-order delivery, data loss, and/or duplication. When creating your alerts, you can use thresholds to account for these inconsistencies.
+Because metric delivery can be subject to inconsistencies such as out-of-order delivery, data loss, or duplication, we recommend avoiding alerts that trigger on a single data point. Instead, use thresholds that account for these inconsistencies over a period of time.
-For example, you might want to configure a metric alert when your low priority core count falls to a certain level, so you can adjust the composition of your pools. For best results, set a period of 10 or more minutes, where alerts trigger if the average low priority core count falls below the threshold value for the entire period. This allows for more time for metrics to aggregate so that you get more accurate results.
+For example, you might want to configure a metric alert when your low priority core count falls to a certain level, so you can adjust the composition of your pools. For best results, set a period of 10 or more minutes, where the alert will be triggered if the average low priority core count falls below the threshold value for the entire period. This allows time for metrics to aggregate so that you get more accurate results.
To configure a metric alert in the Azure portal: 1. Select **All services** > **Batch accounts**, and then select the name of your Batch account.
-2. Under **Monitoring**, select **Alerts**, then select **New alert rule**.
-3. Click **Select condition**, then choose a metric. Confirm the values for **Chart period**, **Threshold type**, **Operator**, and **Aggregation type**, and enter a **Threshold value**. Then select **Done**.
-4. Add an action group to the alert either by selecting an existing action group or creating a new action group.
-5. In the **Alert rule details** section, enter an **Alert rule name** and **Description** and select the **Severity**
-6. Select **Create alert rule**.
+1. Under **Monitoring**, select **Alerts**, then select **New alert rule**.
+1. Select **Add condition**, then choose a metric.
+1. Select the desired values for **Chart period**, **Threshold**, **Operator**, and **Aggregation type**.
+1. Enter a **Threshold value** and select the **Unit** for the threshold. Then select **Done**.
+1. Add an [action group](../azure-monitor/alerts/action-groups.md) to the alert either by selecting an existing action group or creating a new action group.
+1. In the **Alert rule details** section, enter an **Alert rule name** and **Description**. If you want the alert to be enabled immediately, ensure that the **Enable alert rule upon creation** box is checked.
+1. Select **Create alert rule**.
For more information about creating metric alerts, see [Understand how metric alerts work in Azure Monitor](../azure-monitor/alerts/alerts-metric-overview.md) and [Create, view, and manage metric alerts using Azure Monitor](../azure-monitor/alerts/alerts-metric.md).
-You can also configure a near real-time alert using the Azure Monitor [REST API](/rest/api/monitor/). For more information, see [Overview of Alerts in Microsoft Azure](../azure-monitor/alerts/alerts-overview.md). To include job, task, or pool-specific information in your alerts, see the information on search queries in [Respond to events with Azure Monitor Alerts](../azure-monitor/alerts/tutorial-response.md).
+You can also configure a near real-time alert using the [Azure Monitor REST API](/rest/api/monitor/). For more information, see [Overview of alerts in Microsoft Azure](../azure-monitor/alerts/alerts-overview.md). To include job, task, or pool-specific information in your alerts, see [Respond to events with Azure Monitor Alerts](../azure-monitor/alerts/tutorial-response.md).
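If you'd rather script the alert than use the portal or REST API directly, the Az.Monitor cmdlets can create an equivalent rule. The following is a minimal sketch; the metric name, threshold, and resource and action group IDs are placeholders and assumptions to adapt to your environment.

```azurepowershell-interactive
# Sketch: alert when the average low-priority core count stays below a threshold over a
# 10-minute window. Metric name, threshold, and the resource/action group IDs are placeholders.
$batchAccountId = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Batch/batchAccounts/<account-name>"
$actionGroupId  = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/microsoft.insights/actionGroups/<action-group>"
$criteria = New-AzMetricAlertRuleV2Criteria -MetricName "LowPriorityCoreCount" -TimeAggregation Average -Operator LessThan -Threshold 8
Add-AzMetricAlertRuleV2 -Name "LowPriorityCoreAlert" -ResourceGroupName "<resource-group>" `
    -TargetResourceId $batchAccountId -Condition $criteria -ActionGroupId $actionGroupId `
    -WindowSize (New-TimeSpan -Minutes 10) -Frequency (New-TimeSpan -Minutes 5) -Severity 3
```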
## Batch diagnostics
-Diagnostic logs contain information emitted by Azure resources that describe the operation of each resource. For Batch, you can collect the following logs:
+[Diagnostic logs](../azure-monitor/essentials/platform-logs-overview.md) contain information emitted by Azure resources that describe the operation of each resource. For Batch, you can collect the following logs:
-- **Service Logs** events emitted by the Azure Batch service during the lifetime of an individual Batch resource like a pool or task.-- **Metrics** logs at the account level.
+- **ServiceLog**: [events emitted by the Batch service](#service-log-events) during the lifetime of an individual resource such as a pool or task.
+- **AllMetrics**: Metrics at the Batch account level.
-Settings to enable collection of diagnostic logs are not enabled by default. Explicitly enable diagnostic settings for each Batch account you want to monitor.
+You must explicitly enable diagnostic settings for each Batch account you want to monitor.
-### Log destinations
+### Log destination options
A common scenario is to select an Azure Storage account as the log destination. To store logs in Azure Storage, create the account before enabling collection of logs. If you associated a storage account with your Batch account, you can choose that account as the log destination.
To create a new diagnostic setting in the Azure portal, follow the steps below.
2. Under **Monitoring**, select **Diagnostic settings**. 3. In **Diagnostic settings**, select **Add diagnostic setting**. 4. Enter a name for the setting.
-5. Select a destination: **Send to Log Analytics**, **Archive to a storage account**, or **Stream to an Event Hub**. If you select a storage account, you can optionally set a retention policy. If you don't specify a number of days for retention, data is retained during the life of the storage account.
+5. Select a destination: **Send to Log Analytics**, **Archive to a storage account**, or **Stream to an event hub**. If you select a storage account, you can optionally select the number of days to retain data for each log. If you don't specify a number of days for retention, data is retained during the life of the storage account.
6. Select **ServiceLog**, **AllMetrics**, or both. 7. Select **Save** to create the diagnostic setting.
-You can also [enable collection through Azure Monitor in the Azure portal](../azure-monitor/essentials/diagnostic-settings.md) to configure diagnostic settings, by using a [Resource Manager template](../azure-monitor/essentials/resource-manager-diagnostic-settings.md), or with Azure PowerShell or the Azure CLI. For more information, see [Overview of Azure platform logs](../azure-monitor/essentials/platform-logs-overview.md).
+You can also enable log collection by [creating diagnostic settings in the Azure portal](../azure-monitor/essentials/diagnostic-settings.md), using a [Resource Manager template](../azure-monitor/essentials/resource-manager-diagnostic-settings.md), or using Azure PowerShell or the Azure CLI. For more information, see [Overview of Azure platform logs](../azure-monitor/essentials/platform-logs-overview.md).
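For example, here's a minimal PowerShell sketch that archives both the **ServiceLog** and **AllMetrics** categories to a storage account. The `Set-AzDiagnosticSetting` parameters shown (in particular `-MetricCategory`) are assumptions to verify against your Az.Monitor module version.

```azurepowershell-interactive
# Sketch: send the ServiceLog category and all metrics for a Batch account to a storage
# account. Resource IDs are placeholders; verify the parameter set for your module version.
$batchAccountId   = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Batch/batchAccounts/<account-name>"
$storageAccountId = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>"
Set-AzDiagnosticSetting -ResourceId $batchAccountId -StorageAccountId $storageAccountId `
    -Enabled $true -Category ServiceLog -MetricCategory AllMetrics
```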
### Access diagnostics logs in storage
-If you archive Batch diagnostic logs in a storage account, a storage container is created in the storage account as soon as a related event occurs. Blobs are created according to the following naming pattern:
+If you [archive Batch diagnostic logs in a storage account](../azure-monitor/essentials/resource-logs.md#send-to-azure-storage), a storage container is created in the storage account as soon as a related event occurs. Blobs are created according to the following naming pattern:
```json insights-{log category name}/resourceId=/SUBSCRIPTIONS/{subscription ID}/
Below is an example of a `PoolResizeCompleteEvent` entry in a `PT1H.json` log fi
{ "Tenant": "65298bc2729a4c93b11c00ad7e660501", "time": "2019-08-22T20:59:13.5698778Z", "resourceId": "/SUBSCRIPTIONS/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX/RESOURCEGROUPS/MYRESOURCEGROUP/PROVIDERS/MICROSOFT.BATCH/BATCHACCOUNTS/MYBATCHACCOUNT/", "category": "ServiceLog", "operationName": "PoolResizeCompleteEvent", "operationVersion": "2017-06-01", "properties": {"id":"MYPOOLID","nodeDeallocationOption":"Requeue","currentDedicatedNodes":10,"targetDedicatedNodes":100,"currentLowPriorityNodes":0,"targetLowPriorityNodes":0,"enableAutoScale":false,"isAutoPool":false,"startTime":"2019-08-22 20:50:59.522","endTime":"2019-08-22 20:59:12.489","resultCode":"Success","resultMessage":"The operation succeeded"}} ```
-For more information about the schema of diagnostic logs in the storage account, see [Archive Azure resource logs to storage account](../azure-monitor/essentials/resource-logs.md#send-to-azure-storage). To access the logs in your storage account programmatically, use the Storage APIs.
+To access the logs in your storage account programmatically, use the Storage APIs.
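For example, with PowerShell you can list the archived blobs directly. The container name below is an assumption based on the `insights-{log category name}` pattern shown earlier; adjust it to match what you see in your storage account.

```azurepowershell-interactive
# Sketch: list the most recent archived service log blobs. The container name is an
# assumption derived from the "insights-{log category name}" pattern shown above.
$ctx = New-AzStorageContext -StorageAccountName "<storage-account>" -UseConnectedAccount
Get-AzStorageBlob -Container "insights-logs-servicelog" -Context $ctx |
    Sort-Object LastModified -Descending |
    Select-Object -First 10 Name, LastModified
```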
### Service log events
-Azure Batch service logs, if collected, contain events emitted by the Azure Batch service during the lifetime of an individual Batch resource, such as a pool or task. Each event emitted by Batch is logged in JSON format. For example, this is the body of a sample **pool create event**:
+Azure Batch service logs contain events emitted by the Batch service during the lifetime of an individual Batch resource, such as a pool or task. Each event emitted by Batch is logged in JSON format. For example, this is the body of a sample **pool create event**:
```json {
Service log events emitted by the Batch service include the following:
## Next steps - Learn about the [Batch APIs and tools](batch-apis-tools.md) available for building Batch solutions.-- Learn more about [monitoring Batch solutions](monitoring-overview.md).
+- Learn more about [monitoring Batch solutions](monitoring-overview.md).
batch Batch Mpi https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-mpi.md
Title: Use multi-instance tasks to run MPI applications description: Learn how to execute Message Passing Interface (MPI) applications using the multi-instance task type in Azure Batch. Previously updated : 10/08/2020- Last updated : 03/25/2021 # Use multi-instance tasks to run Message Passing Interface (MPI) applications in Batch
-Multi-instance tasks allow you to run an Azure Batch task on multiple compute nodes simultaneously. These tasks enable high performance computing scenarios like Message Passing Interface (MPI) applications in Batch. In this article, you learn how to execute multi-instance tasks using the [Batch .NET][api_net] library.
+Multi-instance tasks allow you to run an Azure Batch task on multiple compute nodes simultaneously. These tasks enable high performance computing scenarios like Message Passing Interface (MPI) applications in Batch. In this article, you learn how to execute multi-instance tasks using the [Batch .NET](/dotnet/api/microsoft.azure.batch) library.
> [!NOTE] > While the examples in this article focus on Batch .NET, MS-MPI, and Windows compute nodes, the multi-instance task concepts discussed here are applicable to other platforms and technologies (Python and Intel MPI on Linux nodes, for example).
->
->
## Multi-instance task overview+ In Batch, each task is normally executed on a single compute node--you submit multiple tasks to a job, and the Batch service schedules each task for execution on a node. However, by configuring a task's **multi-instance settings**, you tell Batch to instead create one primary task and several subtasks that are then executed on multiple nodes.
-![Multi-instance task overview][1]
When you submit a task with multi-instance settings to a job, Batch performs several steps unique to multi-instance tasks: 1. The Batch service creates one **primary** and several **subtasks** based on the multi-instance settings. The total number of tasks (primary plus all subtasks) matches the number of **instances** (compute nodes) you specify in the multi-instance settings. 2. Batch designates one of the compute nodes as the **master**, and schedules the primary task to execute on the master. It schedules the subtasks to execute on the remainder of the compute nodes allocated to the multi-instance task, one subtask per node. 3. The primary and all subtasks download any **common resource files** you specify in the multi-instance settings.
-4. After the common resource files have been downloaded, the primary and subtasks execute the **coordination command** you specify in the multi-instance settings. The coordination command is typically used to prepare nodes for executing the task. This can include starting background services (such as [Microsoft MPI][msmpi_msdn]'s `smpd.exe`) and verifying that the nodes are ready to process inter-node messages.
-5. The primary task executes the **application command** on the master node *after* the coordination command has been completed successfully by the primary and all subtasks. The application command is the command line of the multi-instance task itself, and is executed only by the primary task. In an [MS-MPI][msmpi_msdn]-based solution, this is where you execute your MPI-enabled application using `mpiexec.exe`.
+4. After the common resource files have been downloaded, the primary and subtasks execute the **coordination command** you specify in the multi-instance settings. The coordination command is typically used to prepare nodes for executing the task. This can include starting background services (such as [Microsoft MPI's](/message-passing-interface/microsoft-mpi) `smpd.exe`) and verifying that the nodes are ready to process inter-node messages.
+5. The primary task executes the **application command** on the master node *after* the coordination command has been completed successfully by the primary and all subtasks. The application command is the command line of the multi-instance task itself, and is executed only by the primary task. In an [MS-MPI](/message-passing-interface/microsoft-mpi) -based solution, this is where you execute your MPI-enabled application using `mpiexec.exe`.
> [!NOTE]
-> Though it is functionally distinct, the "multi-instance task" is not a unique task type like the [StartTask][net_starttask] or [JobPreparationTask][net_jobprep]. The multi-instance task is simply a standard Batch task ([CloudTask][net_task] in Batch .NET) whose multi-instance settings have been configured. In this article, we refer to this as the **multi-instance task**.
->
->
+> Though it is functionally distinct, the "multi-instance task" is not a unique task type like the [StartTask](/dotnet/api/microsoft.azure.batch.starttask) or [JobPreparationTask](/dotnet/api/microsoft.azure.batch.jobpreparationtask). The multi-instance task is simply a standard Batch task ([CloudTask](/dotnet/api/microsoft.azure.batch.cloudtask) in Batch .NET) whose multi-instance settings have been configured. In this article, we refer to this as the **multi-instance task**.
## Requirements for multi-instance tasks+ Multi-instance tasks require a pool with **inter-node communication enabled**, and with **concurrent task execution disabled**. To disable concurrent task execution, set the [CloudPool.TaskSlotsPerNode](/dotnet/api/microsoft.azure.batch.cloudpool) property to 1. > [!NOTE] > Batch [limits](batch-quota-limit.md#pool-size-limits) the size of a pool that has inter-node communication enabled. - This code snippet shows how to create a pool for multi-instance tasks using the Batch .NET library. ```csharp
myCloudPool.TaskSlotsPerNode = 1;
> [!NOTE] > If you try to run a multi-instance task in a pool with internode communication disabled, or with a *taskSlotsPerNode* value greater than 1, the task is never scheduled--it remains indefinitely in the "active" state. - ### Use a StartTask to install MPI
-To run MPI applications with a multi-instance task, you first need to install an MPI implementation (MS-MPI or Intel MPI, for example) on the compute nodes in the pool. This is a good time to use a [StartTask][net_starttask], which executes whenever a node joins a pool, or is restarted. This code snippet creates a StartTask that specifies the MS-MPI setup package as a [resource file][net_resourcefile]. The start task's command line is executed after the resource file is downloaded to the node. In this case, the command line performs an unattended install of MS-MPI.
+
+To run MPI applications with a multi-instance task, you first need to install an MPI implementation (MS-MPI or Intel MPI, for example) on the compute nodes in the pool. This is a good time to use a [StartTask](/dotnet/api/microsoft.azure.batch.starttask), which executes whenever a node joins a pool, or is restarted. This code snippet creates a StartTask that specifies the MS-MPI setup package as a [resource file](/dotnet/api/microsoft.azure.batch.resourcefile). The start task's command line is executed after the resource file is downloaded to the node. In this case, the command line performs an unattended install of MS-MPI.
```csharp // Create a StartTask for the pool which we use for installing MS-MPI on
await myCloudPool.CommitAsync();
``` ### Remote direct memory access (RDMA)
-When you choose an [RDMA-capable size](../virtual-machines/sizes-hpc.md?toc=/azure/virtual-machines/windows/toc.json) such as A9 for the compute nodes in your Batch pool, your MPI application can take advantage of Azure's high-performance, low-latency remote direct memory access (RDMA) network.
-
-Look for the sizes specified as "RDMA capable" in the following articles:
-* **CloudServiceConfiguration** pools
-
- * [Sizes for Cloud Services](../cloud-services/cloud-services-sizes-specs.md) (Windows only)
-* **VirtualMachineConfiguration** pools
+When you choose an [RDMA-capable size](../virtual-machines/sizes-hpc.md?toc=/azure/virtual-machines/windows/toc.json) such as A9 for the compute nodes in your Batch pool, your MPI application can take advantage of Azure's high-performance, low-latency remote direct memory access (RDMA) network.
- * [Sizes for virtual machines in Azure](../virtual-machines/sizes.md?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json) (Linux)
- * [Sizes for virtual machines in Azure](../virtual-machines/sizes.md?toc=%2fazure%2fvirtual-machines%2fwindows%2ftoc.json) (Windows)
+Look for the sizes specified as "RDMA capable" in [Sizes for virtual machines in Azure](../virtual-machines/sizes.md) (for VirtualMachineConfiguration pools) or [Sizes for Cloud Services](../cloud-services/cloud-services-sizes-specs.md) (for CloudServicesConfiguration pools).
> [!NOTE] > To take advantage of RDMA on [Linux compute nodes](batch-linux-nodes.md), you must use **Intel MPI** on the nodes.
->
## Create a multi-instance task with Batch .NET
-Now that we've covered the pool requirements and MPI package installation, let's create the multi-instance task. In this snippet, we create a standard [CloudTask][net_task], then configure its [MultiInstanceSettings][net_multiinstance_prop] property. As mentioned earlier, the multi-instance task is not a distinct task type, but a standard Batch task configured with multi-instance settings.
+
+Now that we've covered the pool requirements and MPI package installation, let's create the multi-instance task. In this snippet, we create a standard [CloudTask](/dotnet/api/microsoft.azure.batch.cloudtask), then configure its [MultiInstanceSettings](/dotnet/api/microsoft.azure.batch.cloudtask) property. As mentioned earlier, the multi-instance task is not a distinct task type, but a standard Batch task configured with multi-instance settings.
```csharp // Create the multi-instance task. Its command line is the "application command"
await myBatchClient.JobOperations.AddTaskAsync("mybatchjob", myMultiInstanceTask
``` ## Primary task and subtasks+ When you create the multi-instance settings for a task, you specify the number of compute nodes that are to execute the task. When you submit the task to a job, the Batch service creates one **primary** task and enough **subtasks** that together match the number of nodes you specified.
-These tasks are assigned an integer id in the range of 0 to *numberOfInstances* - 1. The task with id 0 is the primary task, and all other ids are subtasks. For example, if you create the following multi-instance settings for a task, the primary task would have an id of 0, and the subtasks would have ids 1 through 9.
+These tasks are assigned an integer ID in the range of 0 to *numberOfInstances* - 1. The task with ID 0 is the primary task, and all other IDs are subtasks. For example, if you create the following multi-instance settings for a task, the primary task would have an ID of 0, and the subtasks would have IDs 1 through 9.
```csharp int numberOfNodes = 10;
myMultiInstanceTask.MultiInstanceSettings = new MultiInstanceSettings(numberOfNo
``` ### Master node+ When you submit a multi-instance task, the Batch service designates one of the compute nodes as the "master" node, and schedules the primary task to execute on the master node. The subtasks are scheduled to execute on the remainder of the nodes allocated to the multi-instance task. ## Coordination command+ The **coordination command** is executed by both the primary and subtasks. The invocation of the coordination command is blocking--Batch does not execute the application command until the coordination command has returned successfully for all subtasks. The coordination command should therefore start any required background services, verify that they are ready for use, and then exit. For example, this coordination command for a solution using MS-MPI version 7 starts the SMPD service on the node, then exits:
-```
-cmd /c start cmd /c ""%MSMPI_BIN%\smpd.exe"" -d
-```
+`cmd /c start cmd /c ""%MSMPI_BIN%\smpd.exe"" -d`
-Note the use of `start` in this coordination command. This is required because the `smpd.exe` application does not return immediately after execution. Without the use of the [start][cmd_start] command, this coordination command would not return, and would therefore block the application command from running.
+Note the use of `start` in this coordination command. This is required because the `smpd.exe` application does not return immediately after execution. Without the use of the start command, this coordination command would not return, and would therefore block the application command from running.
## Application command+ Once the primary task and all subtasks have finished executing the coordination command, the multi-instance task's command line is executed by the primary task *only*. We call this the **application command** to distinguish it from the coordination command. For MS-MPI applications, use the application command to execute your MPI-enabled application with `mpiexec.exe`. For example, here is an application command for a solution using MS-MPI version 7:
-```
-cmd /c ""%MSMPI_BIN%\mpiexec.exe"" -c 1 -wdir %AZ_BATCH_TASK_SHARED_DIR% MyMPIApplication.exe
-```
+`cmd /c ""%MSMPI_BIN%\mpiexec.exe"" -c 1 -wdir %AZ_BATCH_TASK_SHARED_DIR% MyMPIApplication.exe`
> [!NOTE]
-> Because MS-MPI's `mpiexec.exe` uses the `CCP_NODES` variable by default (see [Environment variables](#environment-variables)) the example application command line above excludes it.
->
->
+> Because MS-MPI's `mpiexec.exe` uses the `CCP_NODES` variable by default (see [Environment variables](#environment-variables)), the example application command line above excludes it.
## Environment variables
-Batch creates several [environment variables][msdn_env_var] specific to multi-instance tasks on the compute nodes allocated to a multi-instance task. Your coordination and application command lines can reference these environment variables, as can the scripts and programs they execute.
+
+Batch creates several [environment variables](batch-compute-node-environment-variables.md) specific to multi-instance tasks on the compute nodes allocated to a multi-instance task. Your coordination and application command lines can reference these environment variables, as can the scripts and programs they execute.
The following environment variables are created by the Batch service for use by multi-instance tasks:
-* `CCP_NODES`
-* `AZ_BATCH_NODE_LIST`
-* `AZ_BATCH_HOST_LIST`
-* `AZ_BATCH_MASTER_NODE`
-* `AZ_BATCH_TASK_SHARED_DIR`
-* `AZ_BATCH_IS_CURRENT_NODE_MASTER`
+- `CCP_NODES`
+- `AZ_BATCH_NODE_LIST`
+- `AZ_BATCH_HOST_LIST`
+- `AZ_BATCH_MASTER_NODE`
+- `AZ_BATCH_TASK_SHARED_DIR`
+- `AZ_BATCH_IS_CURRENT_NODE_MASTER`
-For full details on these and the other Batch compute node environment variables, including their contents and visibility, see [Compute node environment variables][msdn_env_var].
+For full details on these and the other Batch compute node environment variables, including their contents and visibility, see [Compute node environment variables](batch-compute-node-environment-variables.md).
> [!TIP]
-> The Batch Linux MPI code sample contains an example of how several of these environment variables can be used.
+> The [Batch Linux MPI code sample](https://github.com/Azure-Samples/azure-batch-samples/tree/master/Python/Batch/article_samples/mpi) contains an example of how several of these environment variables can be used.
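To tie these variables back to the Batch .NET snippets above, here is a minimal, hedged sketch of a Windows MS-MPI task whose application and coordination command lines reference `AZ_BATCH_TASK_SHARED_DIR`. The task ID, `MyMPIApplication.exe`, and the `MultiInstanceSettings` constructor usage mirror the earlier snippets and are illustrative only; check the constructor overloads available in your Batch .NET SDK version.

```csharp
// Illustrative sketch only; names and paths are placeholders.
int numberOfNodes = 10;

// The task's command line is the application command. It runs only on the master node,
// after the coordination command has succeeded everywhere. %AZ_BATCH_TASK_SHARED_DIR%
// is expanded on the compute node at run time.
CloudTask myMultiInstanceTask = new CloudTask(
    "mymultiinstancetask",
    @"cmd /c ""%MSMPI_BIN%\mpiexec.exe"" -c 1 -wdir %AZ_BATCH_TASK_SHARED_DIR% MyMPIApplication.exe");

// The coordination command runs first, on the primary and on every subtask node.
myMultiInstanceTask.MultiInstanceSettings = new MultiInstanceSettings(numberOfNodes)
{
    CoordinationCommandLine = @"cmd /c start cmd /c ""%MSMPI_BIN%\smpd.exe"" -d"
};
```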
## Resource files+ There are two sets of resource files to consider for multi-instance tasks: **common resource files** that *all* tasks download (both primary and subtasks), and the **resource files** specified for the multi-instance task itself, which *only the primary* task downloads. You can specify one or more **common resource files** in the multi-instance settings for a task. These common resource files are downloaded from [Azure Storage](../storage/common/storage-introduction.md) into each node's **task shared directory** by the primary and all subtasks. You can access the task shared directory from application and coordination command lines by using the `AZ_BATCH_TASK_SHARED_DIR` environment variable. The `AZ_BATCH_TASK_SHARED_DIR` path is identical on every node allocated to the multi-instance task, thus you can share a single coordination command between the primary and all subtasks. Batch does not "share" the directory in a remote access sense, but you can use it as a mount or share point as mentioned earlier in the tip on environment variables.
Resource files that you specify for the multi-instance task itself are downloade
> [!IMPORTANT] > Always use the environment variables `AZ_BATCH_TASK_SHARED_DIR` and `AZ_BATCH_TASK_WORKING_DIR` to refer to these directories in your command lines. Do not attempt to construct the paths manually.
->
->
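Continuing the sketch above, the following snippet shows one hedged way to attach common resource files so that the primary and all subtasks download them into the task shared directory, while task-level resource files go only to the primary. The storage URLs and file names are placeholders, and `ResourceFile.FromUrl` is assumed to be available in your Batch .NET SDK version.

```csharp
// Requires System.Collections.Generic. Placeholder SAS URLs and file names.

// Common resource files: downloaded by the primary and all subtasks into
// AZ_BATCH_TASK_SHARED_DIR on every allocated node.
myMultiInstanceTask.MultiInstanceSettings.CommonResourceFiles = new List<ResourceFile>
{
    ResourceFile.FromUrl(
        "https://mystorageaccount.blob.core.windows.net/mycontainer/common-data.bin?<sas>",
        "common-data.bin")
};

// Task resource files: downloaded only by the primary task, into its working
// directory (AZ_BATCH_TASK_WORKING_DIR).
myMultiInstanceTask.ResourceFiles = new List<ResourceFile>
{
    ResourceFile.FromUrl(
        "https://mystorageaccount.blob.core.windows.net/mycontainer/primary-only.txt?<sas>",
        "primary-only.txt")
};
```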
## Task lifetime+ The lifetime of the primary task controls the lifetime of the entire multi-instance task. When the primary exits, all of the subtasks are terminated. The exit code of the primary is the exit code of the task, and is therefore used to determine the success or failure of the task for retry purposes. If any of the subtasks fail, exiting with a non-zero return code, for example, the entire multi-instance task fails. The multi-instance task is then terminated and retried, up to its retry limit. When you delete a multi-instance task, the primary and all subtasks are also deleted by the Batch service. All subtask directories and their files are deleted from the compute nodes, just as for a standard task.
-[TaskConstraints][net_taskconstraints] for a multi-instance task, such as the [MaxTaskRetryCount][net_taskconstraint_maxretry], [MaxWallClockTime][net_taskconstraint_maxwallclock], and [RetentionTime][net_taskconstraint_retention] properties, are honored as they are for a standard task, and apply to the primary and all subtasks. However, if you change the [RetentionTime][net_taskconstraint_retention] property after adding the multi-instance task to the job, this change is applied only to the primary task. All of the subtasks continue to use the original [RetentionTime][net_taskconstraint_retention].
+[TaskConstraints](/dotnet/api/microsoft.azure.batch.taskconstraints) for a multi-instance task, such as the [MaxTaskRetryCount](/dotnet/api/microsoft.azure.batch.taskconstraints.maxtaskretrycount), [MaxWallClockTime](/dotnet/api/microsoft.azure.batch.taskconstraints.maxwallclocktime), and [RetentionTime](/dotnet/api/microsoft.azure.batch.taskconstraints.retentiontime) properties, are honored as they are for a standard task, and apply to the primary and all subtasks. However, if you change the RetentionTime property after adding the multi-instance task to the job, this change is applied only to the primary task, and all of the subtasks continue to use the original RetentionTime.
-A compute node's recent task list reflects the id of a subtask if the recent task was part of a multi-instance task.
+A compute node's recent task list reflects the ID of a subtask if the recent task was part of a multi-instance task.
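To make the constraints behavior above concrete, here is a small hedged sketch of setting constraints on the multi-instance task before adding it to the job. The values are illustrative, and the parameter names follow the Batch .NET `TaskConstraints` constructor, which may differ slightly between SDK versions.

```csharp
// Constraints apply to the primary task and to all subtasks; values are illustrative.
myMultiInstanceTask.Constraints = new TaskConstraints(
    maxWallClockTime: TimeSpan.FromHours(2),   // fail the task if it runs longer than this
    retentionTime: TimeSpan.FromDays(1),       // keep task directories on the node for a day
    maxTaskRetryCount: 2);                     // retry the whole multi-instance task up to twice
```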
## Obtain information about subtasks
-To obtain information on subtasks by using the Batch .NET library, call the [CloudTask.ListSubtasks][net_task_listsubtasks] method. This method returns information on all subtasks, and information about the compute node that executed the tasks. From this information, you can determine each subtask's root directory, the pool id, its current state, exit code, and more. You can use this information in combination with the [PoolOperations.GetNodeFile][poolops_getnodefile] method to obtain the subtask's files. Note that this method does not return information for the primary task (id 0).
+
+To obtain information on subtasks by using the Batch .NET library, call the [CloudTask.ListSubtasks](/dotnet/api/microsoft.azure.batch.cloudtask.listsubtasks) method. This method returns information on all subtasks, and information about the compute node that executed the tasks. From this information, you can determine each subtask's root directory, the pool ID, its current state, exit code, and more. You can use this information in combination with the [PoolOperations.GetNodeFile](/dotnet/api/microsoft.azure.batch.pooloperations.getnodefile) method to obtain the subtask's files. Note that this method does not return information for the primary task (ID 0).
> [!NOTE]
-> Unless otherwise stated, Batch .NET methods that operate on the multi-instance [CloudTask][net_task] itself apply *only* to the primary task. For example, when you call the [CloudTask.ListNodeFiles][net_task_listnodefiles] method on a multi-instance task, only the primary task's files are returned.
->
->
+> Unless otherwise stated, Batch .NET methods that operate on the multi-instance [CloudTask](/dotnet/api/microsoft.azure.batch.cloudtask) itself apply *only* to the primary task. For example, when you call the [CloudTask.ListNodeFiles](/dotnet/api/microsoft.azure.batch.cloudtask.listnodefiles) method on a multi-instance task, only the primary task's files are returned.
The following code snippet shows how to obtain subtask information, as well as request file contents from the nodes on which they executed.
await subtasks.ForEachAsync(async (subtask) =>
``` ## Code sample
-The [MultiInstanceTasks][github_mpi] code sample on GitHub demonstrates how to use a multi-instance task to run an [MS-MPI][msmpi_msdn] application on Batch compute nodes. Follow the steps in [Preparation](#preparation) and [Execution](#execution) to run the sample.
+
+The [MultiInstanceTasks](https://github.com/Azure/azure-batch-samples/tree/master/CSharp/ArticleProjects/MultiInstanceTasks) code sample on GitHub demonstrates how to use a multi-instance task to run an [MS-MPI](/message-passing-interface/microsoft-mpi) application on Batch compute nodes. Follow the steps below to run the sample.
### Preparation
-1. Follow the first two steps in [How to compile and run a simple MS-MPI program][msmpi_howto]. This satisfies the prerequisites for the following step.
-2. Build a *Release* version of the [MPIHelloWorld][helloworld_proj] sample MPI program. This is the program that will be run on compute nodes by the multi-instance task.
-3. Create a zip file containing `MPIHelloWorld.exe` (which you built step 2) and `MSMpiSetup.exe` (which you downloaded step 1). You'll upload this zip file as an application package in the next step.
-4. Use the [Azure portal][portal] to create a Batch [application](batch-application-packages.md) called "MPIHelloWorld", and specify the zip file you created in the previous step as version "1.0" of the application package. See [Upload and manage applications](batch-application-packages.md#upload-and-manage-applications) for more information.
+
+1. Download the [MS-MPI SDK and Redist installers](/message-passing-interface/microsoft-mpi) and install them. After installation, you can verify that the MS-MPI environment variables have been set.
+1. Build a *Release* version of the [MPIHelloWorld](https://github.com/Azure-Samples/azure-batch-samples/tree/master/CSharp/ArticleProjects/MultiInstanceTasks/MPIHelloWorld) sample MPI program. This is the program that will be run on compute nodes by the multi-instance task.
+1. Create a zip file containing `MPIHelloWorld.exe` (which you built in step 2) and `MSMpiSetup.exe` (which you downloaded in step 1). You'll upload this zip file as an application package in the next step.
+1. Use the [Azure portal](https://portal.azure.com) to create a Batch [application](batch-application-packages.md) called "MPIHelloWorld", and specify the zip file you created in the previous step as version "1.0" of the application package. See [Upload and manage applications](batch-application-packages.md#upload-and-manage-applications) for more information.
> [!TIP]
-> Build a *Release* version of `MPIHelloWorld.exe` so that you don't have to include any additional dependencies (for example, `msvcp140d.dll` or `vcruntime140d.dll`) in your application package.
->
->
+> Building a *Release* version of `MPIHelloWorld.exe` ensures that you don't have to include any additional dependencies (for example, `msvcp140d.dll` or `vcruntime140d.dll`) in your application package.
### Execution
-1. Download the [azure-batch-samples][github_samples_zip] from GitHub.
-2. Open the MultiInstanceTasks **solution** in Visual Studio 2019. The `MultiInstanceTasks.sln` solution file is located in:
+
+1. Download the [azure-batch-samples .zip file](https://github.com/Azure/azure-batch-samples/archive/master.zip) from GitHub.
+1. Open the MultiInstanceTasks **solution** in Visual Studio 2019. The `MultiInstanceTasks.sln` solution file is located in:
`azure-batch-samples\CSharp\ArticleProjects\MultiInstanceTasks\`
-3. Enter your Batch and Storage account credentials in `AccountSettings.settings` in the **Microsoft.Azure.Batch.Samples.Common** project.
-4. **Build and run** the MultiInstanceTasks solution to execute the MPI sample application on compute nodes in a Batch pool.
-5. *Optional*: Use the [Azure portal][portal] or [Batch Explorer][batch_labs] to examine the sample pool, job, and task ("MultiInstanceSamplePool", "MultiInstanceSampleJob", "MultiInstanceSampleTask") before you delete the resources.
+1. Enter your Batch and Storage account credentials in `AccountSettings.settings` in the **Microsoft.Azure.Batch.Samples.Common** project.
+1. **Build and run** the MultiInstanceTasks solution to execute the MPI sample application on compute nodes in a Batch pool.
+1. *Optional*: Use the [Azure portal](https://portal.azure.com) or [Batch Explorer](https://azure.github.io/BatchExplorer/) to examine the sample pool, job, and task ("MultiInstanceSamplePool", "MultiInstanceSampleJob", "MultiInstanceSampleTask") before you delete the resources.
> [!TIP]
-> You can download [Visual Studio Community][visual_studio] for free if you do not have Visual Studio.
->
->
+> You can download [Visual Studio Community](https://visualstudio.microsoft.com/vs/community/) for free if you don't already have Visual Studio.
Output from `MultiInstanceTasks.exe` is similar to the following:
Sample complete, hit ENTER to exit...
``` ## Next steps
-* The Microsoft HPC & Azure Batch Team blog discusses [MPI support for Linux on Azure Batch][blog_mpi_linux], and includes information on using [OpenFOAM][openfoam] with Batch. You can find Python code samples for the [OpenFOAM example on GitHub][github_mpi].
-* Learn how to [create pools of Linux compute nodes](batch-linux-nodes.md) for use in your Azure Batch MPI solutions.
-
-[helloworld_proj]: https://github.com/Azure/azure-batch-samples/tree/master/CSharp/ArticleProjects/MultiInstanceTasks/MPIHelloWorld
-
-[api_net]: /dotnet/api/microsoft.azure.batch
-[api_rest]: /rest/api/batchservice/
-[batch_labs]: https://azure.github.io/BatchExplorer/
-[blog_mpi_linux]: /archive/blogs/windowshpc/introducing-mpi-support-for-linux-on-azure-batch
-[cmd_start]: /previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/cc770297(v=ws.11)
-[coord_cmd_example]: https://github.com/Azure/azure-batch-samples/blob/master/Python/Batch/article_samples/mpi/dat
-[github_mpi]: https://github.com/Azure/azure-batch-samples/tree/master/CSharp/ArticleProjects/MultiInstanceTasks
-[github_samples]: https://github.com/Azure/azure-batch-samples
-[github_samples_zip]: https://github.com/Azure/azure-batch-samples/archive/master.zip
-[msdn_env_var]: ./batch-compute-node-environment-variables.md
-[msmpi_msdn]: /message-passing-interface/microsoft-mpi
-[msmpi_sdk]: https://go.microsoft.com/FWLink/p/?LinkID=389556
-[msmpi_howto]: /archive/blogs/windowshpc/how-to-compile-and-run-a-simple-ms-mpi-program
-[openfoam]: http://www.openfoam.com/
-[visual_studio]: https://www.visualstudio.com/vs/community/
-
-[net_jobprep]: /dotnet/api/microsoft.azure.batch.jobpreparationtask
-[net_multiinstance_class]: /dotnet/api/microsoft.azure.batch.multiinstancesettings
-[net_multiinstance_prop]: /dotnet/api/microsoft.azure.batch.cloudtask
-[net_multiinsance_commonresfiles]: /dotnet/api/microsoft.azure.batch.multiinstancesettings
-[net_multiinstance_coordcmdline]: /dotnet/api/microsoft.azure.batch.multiinstancesettings
-[net_multiinstance_numinstances]: /dotnet/api/microsoft.azure.batch.multiinstancesettings
-[net_pool]: /dotnet/api/microsoft.azure.batch.cloudpool
-[net_pool_create]: /dotnet/api/microsoft.azure.batch.pooloperations
-[net_pool_starttask]: /dotnet/api/microsoft.azure.batch.cloudpool
-[net_resourcefile]: /dotnet/api/microsoft.azure.batch.resourcefile
-[net_starttask]: /dotnet/api/microsoft.azure.batch.starttask
-[net_starttask_cmdline]: /dotnet/api/microsoft.azure.batch.starttask
-[net_task]: /dotnet/api/microsoft.azure.batch.cloudtask
-[net_taskconstraints]: /dotnet/api/microsoft.azure.batch.taskconstraints
-[net_taskconstraint_maxretry]: /dotnet/api/microsoft.azure.batch.taskconstraints
-[net_taskconstraint_maxwallclock]: /dotnet/api/microsoft.azure.batch.taskconstraints
-[net_taskconstraint_retention]: /dotnet/api/microsoft.azure.batch.taskconstraints
-[net_task_listsubtasks]: /dotnet/api/microsoft.azure.batch.cloudtask
-[net_task_listnodefiles]: /dotnet/api/microsoft.azure.batch.cloudtask
-[poolops_getnodefile]: /dotnet/api/microsoft.azure.batch.pooloperations
-
-[portal]: https://portal.azure.com
-[rest_multiinstance]: /previous-versions/azure/mt637905(v=azure.100)
-
-[1]: ./media/batch-mpi/batch_mpi_01.png "Multi-instance overview"
+
+- Read more about [MPI support for Linux on Azure Batch](https://docs.microsoft.com/archive/blogs/windowshpc/introducing-mpi-support-for-linux-on-azure-batch).
+- Learn how to [create pools of Linux compute nodes](batch-linux-nodes.md) for use in your Azure Batch MPI solutions.
batch Batch Parallel Node Tasks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-parallel-node-tasks.md
Title: Run tasks concurrently to maximize usage of Batch compute nodes description: Increase efficiency and lower costs by using fewer compute nodes and running tasks in parallel on each node in an Azure Batch pool Previously updated : 10/08/2020 Last updated : 03/25/2021 # Run tasks concurrently to maximize usage of Batch compute nodes You can maximize resource usage on a smaller number of compute nodes in your pool by running more than one task simultaneously on each node.
-While some scenarios work best with all of a node's resources dedicated to a single task, certain workloads may see shorter job times and lower costs when multiple tasks share those resources:
+While some scenarios work best with all of a node's resources dedicated to a single task, certain workloads may see shorter job times and lower costs when multiple tasks share those resources. Consider the following scenarios:
- **Minimize data transfer** for tasks that are able to share data. You can dramatically reduce data transfer charges by copying shared data to a smaller number of nodes, then executing tasks in parallel on each node. This especially applies if the data to be copied to each node must be transferred between geographic regions.-- **Maximize memory usage** for tasks which require a large amount of memory, but only during short periods of time, and at variable times during execution. You can employ fewer, but larger, compute nodes with more memory to efficiently handle such spikes. These nodes would have multiple tasks running in parallel on each node, but each task would take advantage of the nodes' plentiful memory at different times.
+- **Maximize memory usage** for tasks which require a large amount of memory, but only during short periods of time, and at variable times during execution. You can employ fewer, but larger, compute nodes with more memory to efficiently handle such spikes. These nodes will have multiple tasks running in parallel on each node, but each task can take advantage of the nodes' plentiful memory at different times.
- **Mitigate node number limits** when inter-node communication is required within a pool. Currently, pools configured for inter-node communication are limited to 50 compute nodes. If each node in such a pool is able to execute tasks in parallel, a greater number of tasks can be executed simultaneously. - **Replicate an on-premises compute cluster**, such as when you first move a compute environment to Azure. If your current on-premises solution executes multiple tasks per compute node, you can increase the maximum number of node tasks to more closely mirror that configuration. ## Example scenario
-As an example, imagine a task application with CPU and memory requirements such that [Standard\_D1](../cloud-services/cloud-services-sizes-specs.md) nodes are sufficient. However, in order to finish the job in the required time, 1,000 of these nodes are needed.
+As an example, imagine a task application with CPU and memory requirements such that [Standard\_D1](../cloud-services/cloud-services-sizes-specs.md#d-series) nodes are sufficient. However, in order to finish the job in the required time, 1,000 of these nodes are needed.
-Instead of using Standard\_D1 nodes that have 1 CPU core, you could use [Standard\_D14](../cloud-services/cloud-services-sizes-specs.md) nodes that have 16 cores each, and enable parallel task execution. This means that *16 times fewer nodes* could be used--instead of 1,000 nodes, only 63 would be required. If large application files or reference data are required for each node, job duration and efficiency are again improved, since the data is copied to only 63 nodes.
+Instead of using Standard\_D1 nodes that have 1 CPU core, you could use [Standard\_D14](../cloud-services/cloud-services-sizes-specs.md#d-series) nodes that have 16 cores each, and enable parallel task execution. This means that 16 times fewer nodes could be used--instead of 1,000 nodes, only 63 would be required. If large application files or reference data are required for each node, job duration and efficiency are improved, since the data is copied to only 63 nodes.
## Enable parallel task execution
-You configure compute nodes for parallel task execution at the pool level. With the Batch .NET library, set the [CloudPool.TaskSlotsPerNode](/dotnet/api/microsoft.azure.batch.cloudpool) property when you create a pool. If you're using the Batch REST API, set the [taskSlotsPerNode](/rest/api/batchservice/pool/add) element in the request body during pool creation.
+You configure compute nodes for parallel task execution at the pool level. With the Batch .NET library, set the [CloudPool.TaskSlotsPerNode](/dotnet/api/microsoft.azure.batch.cloudpool.taskslotspernode) property when you create a pool. If you're using the Batch REST API, set the [taskSlotsPerNode](/rest/api/batchservice/pool/add) element in the request body during pool creation.
> [!NOTE] > You can set the `taskSlotsPerNode` element and [TaskSlotsPerNode](/dotnet/api/microsoft.azure.batch.cloudpool) property only at pool creation time. They can't be modified after a pool has already been created.
Azure Batch allows you to set task slots per node up to (4x) the number of node
When enabling concurrent tasks, it's important to specify how you want the tasks to be distributed across the nodes in the pool.
-By using the [CloudPool.TaskSchedulingPolicy](/dotnet/api/microsoft.azure.batch.cloudpool) property, you can specify that tasks should be assigned evenly across all nodes in the pool ("spreading"). Or you can specify that as many tasks as possible should be assigned to each node before tasks are assigned to another node in the pool ("packing").
+By using the [CloudPool.TaskSchedulingPolicy](/dotnet/api/microsoft.azure.batch.cloudpool.taskschedulingpolicy) property, you can specify that tasks should be assigned evenly across all nodes in the pool ("spreading"). Or you can specify that as many tasks as possible should be assigned to each node before tasks are assigned to another node in the pool ("packing").
-As an example, consider the pool of [Standard\_D14](../cloud-services/cloud-services-sizes-specs.md) nodes (in the example above) that is configured with a [CloudPool.TaskSlotsPerNode](/dotnet/api/microsoft.azure.batch.cloudpool) value of 16. If the [CloudPool.TaskSchedulingPolicy](/dotnet/api/microsoft.azure.batch.cloudpool) is configured with a [ComputeNodeFillType](/dotnet/api/microsoft.azure.batch.common.computenodefilltype) of *Pack*, it would maximize usage of all 16 cores of each node and allow an [autoscaling pool](batch-automatic-scaling.md) to remove unused nodes (nodes without any tasks assigned) from the pool. This minimizes resource usage and saves money.
+As an example, consider the pool of [Standard\_D14](../cloud-services/cloud-services-sizes-specs.md#d-series) nodes (in the example above) that is configured with a [CloudPool.TaskSlotsPerNode](/dotnet/api/microsoft.azure.batch.cloudpool.taskslotspernode) value of 16. If the [CloudPool.TaskSchedulingPolicy](/dotnet/api/microsoft.azure.batch.cloudpool.taskschedulingpolicy) is configured with a [ComputeNodeFillType](/dotnet/api/microsoft.azure.batch.common.computenodefilltype) of *Pack*, it would maximize usage of all 16 cores of each node and allow an [autoscaling pool](batch-automatic-scaling.md) to remove unused nodes (nodes without any tasks assigned) from the pool. This minimizes resource usage and saves money.
## Define variable slots per task
-A task can be defined with [CloudTask.RequiredSlots](/dotnet/api/microsoft.azure.batch.cloudtask.requiredslots) property, specifying how many slots it requires to run on a compute node. The default value as 1. You can set variable task slots if your tasks have different weights regarding to resource usage on the compute node. This lets each compute node have a reasonable number of concurrent running tasks without overwhelming system resources like CPU or memory.
+A task can be defined with the [CloudTask.RequiredSlots](/dotnet/api/microsoft.azure.batch.cloudtask.requiredslots) property, which specifies how many slots it requires to run on a compute node. The default value is 1. You can set variable task slots if your tasks have different weights with regard to resource usage on the compute node. This lets each compute node have a reasonable number of concurrently running tasks without overwhelming system resources like CPU or memory.
For example, for a pool with property `taskSlotsPerNode = 8`, you can submit multi-core required CPU-intensive tasks with `requiredSlots = 8`, while other tasks can be set to `requiredSlots = 1`. When this mixed workload is scheduled, the CPU-intensive tasks will run exclusively on their compute nodes, while other tasks can run concurrently (up to eight tasks at once) on other nodes. This helps you balance your workload across compute nodes and improve resource usage efficiency.
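The following hedged sketch illustrates that mixed workload with the Batch .NET library. The job ID, task IDs, and command lines are placeholders, and `batchClient` is assumed to be an existing `BatchClient` instance.

```csharp
// Sketch of a mixed workload for a pool created with taskSlotsPerNode = 8.
var tasks = new List<CloudTask>();

// A CPU-intensive task that needs a whole node's slots to itself.
CloudTask heavyTask = new CloudTask("heavy-task", "cmd /c heavycompute.exe");
heavyTask.RequiredSlots = 8;
tasks.Add(heavyTask);

// Lighter tasks that can run concurrently, up to eight at once per node.
for (int i = 0; i < 16; i++)
{
    CloudTask lightTask = new CloudTask($"light-task-{i}", "cmd /c lightwork.exe");
    lightTask.RequiredSlots = 1; // the default, shown here for clarity
    tasks.Add(lightTask);
}

// Add all tasks to the job in one call.
await batchClient.JobOperations.AddTaskAsync("mybatchjob", tasks);
```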
+Be sure not to specify a task's `requiredSlots` as greater than the pool's `taskSlotsPerNode`; if you do, the task will never be able to run. The Batch service doesn't currently validate this conflict when you submit tasks, because a job may not have a pool bound at submission time, or it could be moved to a different pool by disabling and then re-enabling it.
+ > [!TIP] > When using variable task slots, it's possible that large tasks with more required slots can temporarily fail to be scheduled because not enough slots are available on any compute node, even when there are still idle slots on some nodes. You can raise the job priority for these tasks to increase their chance to compete for available slots on nodes. > > The Batch service emits the [TaskScheduleFailEvent](batch-task-schedule-fail-event.md) when it fails to schedule a task to run, and keeps retrying the scheduling until required slots become available. You can listen to that event to detect potential task scheduling issues and mitigate accordingly.
-> [!NOTE]
-> Do not specify a task's `requiredSlots` to be greater than the pool's `taskSlotsPerNode`. This will result in the task never being able to run. The Batch Service doesn't currently validate this conflict when you submit tasks because a job may not have a pool bound at submission time, or it could be changed to a different pool by disabling/re-enabling.
- ## Batch .NET example The following [Batch .NET](/dotnet/api/microsoft.azure.batch) API code snippets show how to create a pool with multiple task slots per node and how to submit a task with required slots.
The following [Batch .NET](/dotnet/api/microsoft.azure.batch) API code snippets
This code snippet shows a request to create a pool that contains four nodes, with four task slots allowed per node. It specifies a task scheduling policy that will fill each node with tasks prior to assigning tasks to another node in the pool.
-For more information on adding pools by using the Batch .NET API, see [BatchClient.PoolOperations.CreatePool](/dotnet/api/microsoft.azure.batch.pooloperations).
+For more information on adding pools by using the Batch .NET API, see [BatchClient.PoolOperations.CreatePool](/dotnet/api/microsoft.azure.batch.pooloperations.createpool).
```csharp CloudPool pool =
This snippet shows a request to add a task with non-default `requiredSlots`. Thi
## Code sample on GitHub
-The [ParallelTasks](https://github.com/Azure/azure-batch-samples/tree/master/CSharp/ArticleProjects/ParallelTasks) project on GitHub illustrates the use of the [CloudPool.TaskSlotsPerNode](/dotnet/api/microsoft.azure.batch.cloudpool) property.
+The [ParallelTasks](https://github.com/Azure/azure-batch-samples/tree/master/CSharp/ArticleProjects/ParallelTasks) project on GitHub illustrates the use of the [CloudPool.TaskSlotsPerNode](/dotnet/api/microsoft.azure.batch.cloudpool.taskslotspernode) property.
This C# console application uses the [Batch .NET](/dotnet/api/microsoft.azure.batch) library to create a pool with one or more compute nodes. It executes a configurable number of tasks on those nodes to simulate a variable load. Output from the application shows which nodes executed each task. The application also provides a summary of the job parameters and duration.
batch Batch User Accounts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-user-accounts.md
Title: Run tasks under user accounts description: Learn the types of user accounts and how to configure them. Previously updated : 08/20/2020 Last updated : 03/25/2021 # Run tasks under user accounts in Batch
> [!NOTE] > The user accounts discussed in this article are different from user accounts used for Remote Desktop Protocol (RDP) or Secure Shell (SSH), for security reasons. >
-> To connect to a node running the Linux virtual machine configuration via SSH, see [Use Remote Desktop to a Linux VM in Azure](../virtual-machines/linux/use-remote-desktop.md). To connect to nodes running Windows via RDP, see [Connect to a Windows Server VM](../virtual-machines/windows/connect-logon.md).<br /><br />
+> To connect to a node running the Linux virtual machine configuration via SSH, see [Install and configure xrdp to use Remote Desktop with Ubuntu](../virtual-machines/linux/use-remote-desktop.md). To connect to nodes running Windows via RDP, see [How to connect and sign on to an Azure virtual machine running Windows](../virtual-machines/windows/connect-logon.md).
+>
> To connect to a node running the cloud service configuration via RDP, see [Enable Remote Desktop Connection for a Role in Azure Cloud Services](../cloud-services/cloud-services-role-enable-remote-desktop-new-portal.md). A task in Azure Batch always runs under a user account. By default, tasks run under standard user accounts, without administrator permissions. For certain scenarios, you may want to configure the user account under which you want a task to run. This article discusses the types of user accounts and how to configure them for your scenario.
Azure Batch provides two types of user accounts for running tasks:
- **A named user account.** You can specify one or more named user accounts for a pool when you create the pool. Each user account is created on each node of the pool. In addition to the account name, you specify the user account password, elevation level, and, for Linux pools, the SSH private key. When you add a task, you can specify the named user account under which that task should run. > [!IMPORTANT]
-> The Batch service version 2017-01-01.4.0 introduces a breaking change that requires that you update your code to call that version. If you are migrating code from an older version of Batch, note that the **runElevated** property is no longer supported in the REST API or Batch client libraries. Use the new **userIdentity** property of a task to specify elevation level. See [Update your code to the latest Batch client library](#update-your-code-to-the-latest-batch-client-library) for quick guidelines for updating your Batch code if you are using one of the client libraries.
+> The Batch service version 2017-01-01.4.0 introduced a breaking change that requires that you update your code to call that version or later. See [Update your code to the latest Batch client library](#update-your-code-to-the-latest-batch-client-library) for quick guidelines for updating your Batch code from an older version.
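To illustrate the named user account option described above, the following hedged sketch specifies two named accounts at pool creation time and then runs a task under one of them. The account names and password placeholder are illustrative, and `pool` is assumed to be a `CloudPool` that hasn't been committed yet.

```csharp
// Define named user accounts that will be created on every node in the pool.
pool.UserAccounts = new List<UserAccount>
{
    new UserAccount("adminUser", "<password>", ElevationLevel.Admin),
    new UserAccount("nonAdminUser", "<password>", ElevationLevel.NonAdmin)
};
await pool.CommitAsync();

// Later, run a task under one of the named accounts.
CloudTask task = new CloudTask("mytask", "cmd /c whoami");
task.UserIdentity = new UserIdentity("adminUser");
```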
## User account access to files and directories
The following code snippets show how to configure the auto-user specification. T
```csharp task.UserIdentity = new UserIdentity(new AutoUserSpecification(elevationLevel: ElevationLevel.Admin, scope: AutoUserScope.Task)); ```+ #### Batch Java ```java
task.UserIdentity = new UserIdentity(AdminUserAccountName);
## Update your code to the latest Batch client library
-The Batch service version 2017-01-01.4.0 introduces a breaking change, replacing the **runElevated** property available in earlier versions with the **userIdentity** property. The following tables provide a simple mapping that you can use to update your code from earlier versions of the client libraries.
+The Batch service version 2017-01-01.4.0 introduced a breaking change, replacing the **runElevated** property available in earlier versions with the **userIdentity** property. The following tables provide a simple mapping that you can use to update your code from earlier versions of the client libraries.
### Batch .NET
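As a quick, hedged illustration of that mapping for Batch .NET (the older **RunElevated** boolean shown in the comment is from pre-2017-01-01.4.0 client libraries):

```csharp
// Before (older client libraries):
//     task.RunElevated = true;

// After: express the elevation level through the task's UserIdentity property instead.
task.UserIdentity = new UserIdentity(
    new AutoUserSpecification(elevationLevel: ElevationLevel.Admin));
```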
batch Batch Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-virtual-network.md
Title: Provision a pool in a virtual network description: How to create a Batch pool in an Azure virtual network so that compute nodes can communicate securely with other VMs in the network, such as a file server. Previously updated : 03/15/2021 Last updated : 03/26/2021
To ensure that the nodes in your pool work in a VNet that has forced tunneling e
- Ensure that outbound traffic to Azure Storage (specifically, URLs of the form `<account>.table.core.windows.net`, `<account>.queue.core.windows.net`, and `<account>.blob.core.windows.net`) is not blocked by your on-premises network.
+- If you use virtual file mounts, review the [networking requirements](virtual-file-mount.md#networking-requirements) and ensure that no required traffic is blocked.
+ When you add a UDR, define the route for each related Batch IP address prefix, and set **Next hop type** to **Internet**. ![User-defined route](./media/batch-virtual-network/user-defined-route.png)
batch Monitor Application Insights https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/monitor-application-insights.md
Title: Monitor Batch with Azure Application Insights
description: Learn how to instrument an Azure Batch .NET application using the Azure Application Insights library. Previously updated : 04/05/2018 Last updated : 03/25/2021 # Monitor and debug an Azure Batch .NET application with Application Insights
-[Application Insights](../azure-monitor/app/app-insights-overview.md) provides an elegant and powerful way for developers to monitor and debug
-applications deployed to Azure services. Use Application Insights to
-monitor performance counters and exceptions as well as instrument your code
-with custom metrics and tracing. Integrating Application Insights with your
-Azure Batch application allows you to gain deep insights into behaviors
-and investigate issues in near-real time.
+[Application Insights](../azure-monitor/app/app-insights-overview.md) provides an elegant and powerful way for developers to monitor and debug applications deployed to Azure services. Use Application Insights to monitor performance counters and exceptions as well as instrument your code with custom metrics and tracing. Integrating Application Insights with your Azure Batch application allows you to gain deep insights into behaviors and investigate issues in near-real time.
-This article shows how to add and configure the Application Insights library
-into your Azure Batch .NET solution and instrument your application code. It also shows ways to monitor your application via the Azure portal and build
-custom dashboards. For Application Insights support in other languages, look at the
-[languages, platforms, and integrations documentation](../azure-monitor/app/platforms.md).
+This article shows how to add and configure the Application Insights library into your Azure Batch .NET solution and instrument your application code. It also shows ways to monitor your application via the Azure portal and build custom dashboards. For Application Insights support in other languages, see the [languages, platforms, and integrations documentation](../azure-monitor/app/platforms.md).
-A sample C# solution with code to accompany this article is available on [GitHub](https://github.com/Azure/azure-batch-samples/tree/master/CSharp/ArticleProjects/ApplicationInsights). This example adds Application Insights instrumentation code to the [TopNWords](https://github.com/Azure/azure-batch-samples/tree/master/CSharp/TopNWords) example. If you're not familiar with that example, try building and running TopNWords first. Doing this will help you understand a basic Batch workflow of processing a set of input blobs in parallel on multiple compute nodes.
+A sample C# solution with code to accompany this article is available on [GitHub](https://github.com/Azure/azure-batch-samples/tree/master/CSharp/ArticleProjects/ApplicationInsights). This example adds Application Insights instrumentation code to the [TopNWords](https://github.com/Azure/azure-batch-samples/tree/master/CSharp/TopNWords) example. If you're not familiar with that example, try building and running TopNWords first. Doing this will help you understand a basic Batch workflow of processing a set of input blobs in parallel on multiple compute nodes.
> [!TIP]
-> As an alternative, configure your Batch solution to display Application Insights data such as VM performance counters in Batch Explorer. [Batch Explorer](https://github.com/Azure/BatchExplorer) is a free, rich-featured, standalone client tool to help create, debug, and monitor Azure Batch applications. Download an [installation package](https://azure.github.io/BatchExplorer/) for Mac, Linux, or Windows. See the [batch-insights repo](https://github.com/Azure/batch-insights) for quick steps to enable Application Insights data in Batch Explorer.
->
+> As an alternative, configure your Batch solution to display Application Insights data such as VM performance counters in Batch Explorer. [Batch Explorer](https://github.com/Azure/BatchExplorer) is a free, rich-featured, standalone client tool to help create, debug, and monitor Azure Batch applications. Download an [installation package](https://azure.github.io/BatchExplorer/) for Mac, Linux, or Windows. See the [batch-insights repo](https://github.com/Azure/batch-insights) for quick steps to enable Application Insights data in Batch Explorer.
## Prerequisites
-* [Visual Studio 2017 or later](https://www.visualstudio.com/vs)
-* [Batch account and linked storage account](batch-account-create-portal.md)
-
-* [Application Insights resource](../azure-monitor/app/create-new-resource.md )
-
- * Use the Azure portal to create an Application Insights *resource*. Select the *General* **Application type**.
-
- * Copy the [instrumentation
-key](../azure-monitor/app/create-new-resource.md#copy-the-instrumentation-key) from the portal. It is required later in this article.
+- [Visual Studio 2017 or later](https://www.visualstudio.com/vs)
+- [Batch account and linked storage account](batch-account-create-portal.md)
+- [Application Insights resource](../azure-monitor/app/create-new-resource.md). Use the Azure portal to create an Application Insights *resource*. Select the *General* **Application type**.
+- Copy the [instrumentation key](../azure-monitor/app/create-new-resource.md#copy-the-instrumentation-key) from the Azure portal. You'll need this value later.
> [!NOTE]
- > You may be [charged](https://azure.microsoft.com/pricing/details/application-insights/) for the data stored in Application Insights.
- > This includes the diagnostic and monitoring data discussed in this article.
- >
+ > You may be [charged](https://azure.microsoft.com/pricing/details/application-insights/) for data stored in Application Insights. This includes the diagnostic and monitoring data discussed in this article.
## Add Application Insights to your project
The **Microsoft.ApplicationInsights.WindowsServer** NuGet package and its depend
```powershell Install-Package Microsoft.ApplicationInsights.WindowsServer ```+ Reference Application Insights from your .NET application by using the **Microsoft.ApplicationInsights** namespace. ## Instrument your code
To instrument your code, your solution needs to create an Application Insights [
```xml <InstrumentationKey>YOUR-IKEY-GOES-HERE</InstrumentationKey> ```+ Also add the instrumentation key in the file TopNWords.cs. The example in TopNWords.cs uses the following [instrumentation calls](../azure-monitor/app/api-custom-events-metrics.md) from the Application Insights API:
-* `TrackMetric()` - Tracks how long, on average, a compute node takes to download the required text file.
-* `TrackTrace()` - Adds debugging calls to your code.
-* `TrackEvent()` - Tracks interesting events to capture.
-This example purposely leaves out exception
-handling. Instead, Application Insights automatically reports unhandled
-exceptions, which significantly improves the debugging experience.
+- `TrackMetric()` - Tracks how long, on average, a compute node takes to download the required text file.
+- `TrackTrace()` - Adds debugging calls to your code.
+- `TrackEvent()` - Tracks interesting events to capture.
+
+This example purposely leaves out exception handling. Instead, Application Insights automatically reports unhandled exceptions, which significantly improves the debugging experience.
-The
-following snippet illustrates how to use these methods.
+The following snippet illustrates how to use these methods.
```csharp public void CountWords(string blobName, int numTopN, string storageAccountName, string storageAccountKey)
public void CountWords(string blobName, int numTopN, string storageAccountName,
``` ### Azure Batch telemetry initializer helper
-When reporting telemetry for a given server and instance, Application Insights
-uses the Azure VM Role and VM name for the default values. In the context of Azure Batch, the example shows how to use the pool name and compute
-node name instead. Use a [telemetry initializer](../azure-monitor/app/api-filtering-sampling.md#add-properties) to override the default
-values.
+
+When reporting telemetry for a given server and instance, Application Insights uses the Azure VM Role and VM name for the default values. In the context of Azure Batch, the example shows how to use the pool name and compute node name instead. Use a [telemetry initializer](../azure-monitor/app/api-filtering-sampling.md#add-properties) to override the default values.
```csharp using Microsoft.ApplicationInsights.Channel;
To enable the telemetry initializer, the ApplicationInsights.config file in the
<TelemetryInitializers> <Add Type="Microsoft.Azure.Batch.Samples.TelemetryInitializer.AzureBatchNodeTelemetryInitializer, Microsoft.Azure.Batch.Samples.TelemetryInitializer"/> </TelemetryInitializers>
-```
+```
## Update the job and tasks to include Application Insights binaries
-In order for Application Insights to run correctly on your compute nodes, make sure the binaries are correctly placed. Add the required
-binaries to your task's resource files collection so that they get downloaded
-at the time your task executes. The following snippets are similar to code in Job.cs.
+In order for Application Insights to run correctly on your compute nodes, make sure the binaries are correctly placed. Add the required binaries to your task's resource files collection so that they get downloaded at the time your task executes. The following snippets are similar to code in Job.cs.
First, create a static list of Application Insights files to upload.
private static readonly List<string> AIFilesToUpload = new List<string>()
``` Next, create the staging files that are used by the task.+ ```csharp ... // create file staging objects that represent the executable and its dependent assembly to run as the task.
foreach (string aiFile in AIFilesToUpload)
The `FileToStage` method is a helper function in the code sample that allows you to easily upload a file from local disk to an Azure Storage blob. Each file is later downloaded to a compute node and referenced by a task. Finally, add the tasks to the job and include the necessary Application Insights binaries.+ ```csharp ... // initialize a collection to hold the tasks that will be submitted in their entirety
for (int i = 1; i <= topNWordsConfiguration.NumberOfTasks; i++)
## View data in the Azure portal
-Now that you've configured the job and tasks to use Application Insights, run
-the example job in your pool. Navigate to the Azure portal and open the Application
-Insights resource that you provisioned. After the pool is provisioned, you should start to see
-data flowing and getting logged. The rest of this article touches on only a few Application Insights
-features, but feel free to explore the full feature set.
+Now that you've configured the job and tasks to use Application Insights, run the example job in your pool. Navigate to the Azure portal and open the Application Insights resource that you provisioned. After the pool is provisioned, you should start to see data flowing and getting logged. The rest of this article touches on only a few Application Insights features, but feel free to explore the full feature set.
### View live stream data
-To view trace logs in your Applications Insights resource, click **Live Stream**. The following screenshot shows how to view live data coming from the
-compute nodes in the pool, for example the CPU usage per compute node.
+To view live stream data in your Application Insights resource, click **Live Stream**. The following screenshot shows how to view live data coming from the compute nodes in the pool, such as the CPU usage per compute node.
-![Live stream compute node data](./media/monitor-application-insights/applicationinsightslivestream.png)
+![Screenshot of live stream compute node data.](./media/monitor-application-insights/applicationinsightslivestream.png)
### View trace logs
-To view trace logs in your Applications Insights resource, click **Search**. This view shows a list of diagnostic data
-captured by Application Insights including traces, events, and exceptions.
+To view trace logs in your Application Insights resource, click **Search**. This view shows a list of diagnostic data captured by Application Insights, including traces, events, and exceptions.
The following screenshot shows how a single trace for a task is logged and later queried for debugging purposes.
-![Trace logs image](./media/monitor-application-insights/tracelogsfortask.png)
+![Screenshot showing logs for a single trace.](./media/monitor-application-insights/tracelogsfortask.png)
### View unhandled exceptions
-The following screenshots shows how Application Insights logs exceptions thrown from your application. In this case, within seconds of the application throwing the exception, you can drill into a specific exception and diagnose the issue.
+Application Insights logs exceptions thrown from your application. Within seconds of the application throwing an exception, you can drill into a specific exception and diagnose the issue.
-![Unhandled exceptions](./media/monitor-application-insights/exception.png)
+![Screenshot showing unhandled exceptions.](./media/monitor-application-insights/exception.png)
### Measure blob download time Custom metrics are also a valuable tool in the portal. For example, you can display the average time it took each compute node to download the required text file it was processing. To create a sample chart:
-1. In your Application Insights resource, click **Metrics Explorer** > **Add chart**.
-2. Click **Edit** on the chart that was added.
-2. Update the chart details as follows:
- * Set **Chart type** to **Grid**.
- * Set **Aggregation** to **Average**.
- * Set **Group by** to **NodeId**.
- * In **Metrics**, select **Custom** > **Blob download in seconds**.
- * Adjust display **Color palette** to your choice.
-![Blob download time per node](./media/monitor-application-insights/blobdownloadtime.png)
+1. In your Application Insights resource, click **Metrics Explorer** > **Add chart**.
+1. Click **Edit** on the chart that was added.
+1. Update the chart details as follows:
+ - Set **Chart type** to **Grid**.
+ - Set **Aggregation** to **Average**.
+ - Set **Group by** to **NodeId**.
+ - In **Metrics**, select **Custom** > **Blob download in seconds**.
+ - Adjust display **Color palette** to your choice.
+![Screenshot of a chart showing blob download time per node.](./media/monitor-application-insights/blobdownloadtime.png)
## Monitor compute nodes continuously
-You may have noticed that all metrics, including performance counters, are only
-logged when the tasks are running. This behavior is useful because it limits the amount of
-data that Application Insights logs. However, there are cases
-when you would always like to monitor the compute nodes. For example, they might be
-running background work which is not scheduled via the Batch service. In this case, set up a monitoring process to run for the life of the
-compute node.
+You may have noticed that all metrics, including performance counters, are only logged when the tasks are running. This behavior is useful because it limits the amount of
+data that Application Insights logs. However, there are cases when you would always like to monitor the compute nodes. For example, they might be running background work which is not scheduled via the Batch service. In this case, set up a monitoring process to run for the life of the compute node.
-One way to achieve this behavior is to spawn a process that loads
-the Application Insights library and runs in the background. In the example, the start task loads the
-binaries on the machine and keeps a process running indefinitely. Configure the
-Application Insights configuration file for this process to emit additional data you're interested in, such
-as performance counters.
+One way to achieve this behavior is to spawn a process that loads the Application Insights library and runs in the background. In the example, the start task loads the binaries on the machine and keeps a process running indefinitely. Configure the Application Insights configuration file for this process to emit additional data you're interested in, such as performance counters.
```csharp ...
pool.StartTask = new StartTask()
> [!TIP] > To increase the manageability of your solution, you can bundle the assembly in an [application package](./batch-application-packages.md). Then, to deploy the application package automatically to your pools, add an application package reference to the pool configuration.
->
-## Throttle and sample data
-
-Due to the large-scale nature of Azure Batch applications
-running in production, you might want to limit the amount of data collected by
-Application Insights to manage costs.
-See [Sampling in Application Insights](../azure-monitor/app/sampling.md) for some mechanisms to achieve this.
+## Throttle and sample data
+Due to the large-scale nature of Azure Batch applications running in production, you might want to limit the amount of data collected by Application Insights to manage costs. See [Sampling in Application Insights](../azure-monitor/app/sampling.md) for some mechanisms to achieve this.
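For example, the following hedged sketch enables fixed-rate sampling from code. It assumes the **Microsoft.ApplicationInsights.WindowsServer** package installed earlier (which brings in the server telemetry channel that provides the `UseSampling` extension); the percentage is illustrative and should be tuned for your workload.

```csharp
using Microsoft.ApplicationInsights.Extensibility;
using Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel;

// Keep roughly 10% of telemetry items; adjust the percentage for your cost/fidelity trade-off.
var builder = TelemetryConfiguration.Active.TelemetryProcessorChainBuilder;
builder.UseSampling(10.0);
builder.Build();
```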
## Next steps
-* Learn more about [Application Insights](../azure-monitor/app/app-insights-overview.md).
-
-* For Application Insights support in other languages, look at the
-[languages, platforms, and integrations documentation](../azure-monitor/app/platforms.md).
-
+- Learn more about [Application Insights](../azure-monitor/app/app-insights-overview.md).
+- For Application Insights support in other languages, see the [languages, platforms, and integrations documentation](../azure-monitor/app/platforms.md).
batch Virtual File Mount https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/virtual-file-mount.md
Title: Mount a virtual file system on a pool
description: Learn how to mount a virtual file system on a Batch pool. Previously updated : 08/13/2019 Last updated : 03/26/2021 # Mount a virtual file system on a Batch pool
-Azure Batch now supports mounting cloud storage or an external file system on Windows or Linux compute nodes in your Batch pools. When a compute node joins a pool, the virtual file system is mounted and treated as a local drive on that node. You can mount file systems such as Azure Files, Azure Blob storage, Network File System (NFS) including an [Avere vFXT cache](../avere-vfxt/avere-vfxt-overview.md), or Common Internet File System (CIFS).
+Azure Batch supports mounting cloud storage or an external file system on Windows or Linux compute nodes in your Batch pools. When a compute node joins a pool, the virtual file system is mounted and treated as a local drive on that node. You can mount file systems such as Azure Files, Azure Blob storage, Network File System (NFS) including an [Avere vFXT cache](../avere-vfxt/avere-vfxt-overview.md), or Common Internet File System (CIFS).
In this article, you'll learn how to mount a virtual file system on a pool of compute nodes using the [Batch Management Library for .NET](/dotnet/api/overview/azure/batch). > [!NOTE]
-> Mounting a virtual file system is supported on Batch pools created on or after 2019-08-19. Batch pools created prior to 2019-08-19 do not support this feature.
->
-> The APIs for mounting file systems on a compute node are part of the [Batch .NET](/dotnet/api/microsoft.azure.batch) library.
+> Mounting a virtual file system is only supported on Batch pools created on or after August 19, 2019. Batch pools created before that date will not support this feature.
## Benefits of mounting on a pool Mounting the file system to the pool, instead of letting tasks retrieve their own data from a large data set, makes it easier and more efficient for tasks to access the necessary data.
-Consider a scenario with multiple tasks requiring access to a common set of data, like rendering a movie. Each task renders one or more frames at a time from the scene files. By mounting a drive that contains the scene files, it's easier for compute nodes to access shared data. Additionally, the underlying file system can be chosen and scaled independently based on the performance and scale (throughput and IOPS) required by the number of compute nodes concurrently accessing the data. For example, an [Avere vFXT](../avere-vfxt/avere-vfxt-overview.md) distributed in-memory cache can be used to support large motion picture-scale renders with thousands of concurrent render nodes, accessing source data that resides on-premises. Alternatively, for data that already resides in cloud-based Blob storage, [blobfuse](../storage/blobs/storage-how-to-mount-container-linux.md) can be used to mount this data as a local file system. Blobfuse is only available on Linux nodes, however, [Azure Files](https://azure.microsoft.com/blog/a-new-era-for-azure-files-bigger-faster-better/) provides a similar workflow and is available on both Windows and Linux.
+Consider a scenario with multiple tasks requiring access to a common set of data, like rendering a movie. Each task renders one or more frames at a time from the scene files. By mounting a drive that contains the scene files, it's easier for compute nodes to access shared data.
+
+Additionally, the underlying file system can be chosen and scaled independently based on the performance and scale (throughput and IOPS) required by the number of compute nodes concurrently accessing the data. For example, an [Avere vFXT](../avere-vfxt/avere-vfxt-overview.md) distributed in-memory cache can be used to support large motion picture-scale renders with thousands of concurrent render nodes, accessing source data that resides on-premises. Alternatively, for data that already resides in cloud-based Blob storage, [blobfuse](../storage/blobs/storage-how-to-mount-container-linux.md) can be used to mount this data as a local file system. Blobfuse is only available on Linux nodes, though [Azure Files](../storage/files/storage-files-introduction.md) provides a similar workflow and is available on both Windows and Linux.
## Mount a virtual file system on a pool
new PoolAddParameter
### Azure Blob file system
-Another option is to use Azure Blob storage via [blobfuse](../storage/blobs/storage-how-to-mount-container-linux.md). Mounting a blob file system requires an `AccountKey` or `SasKey` for your storage account. For information on getting these keys, see [Manage storage account access keys](../storage/common/storage-account-keys-manage.md), or [Using shared access signatures (SAS)](../storage/common/storage-sas-overview.md). For more information on using blobfuse, see the blobfuse [Troubleshoot FAQ](https://github.com/Azure/azure-storage-fuse/wiki/3.-Troubleshoot-FAQ). To get default access to the blobfuse mounted directory, run the task as an **Administrator**. Blobfuse mounts the directory at the user space, and at pool creation it is mounted as root. In Linux all **Administrator** tasks are root. All options for the FUSE module is described in the [FUSE reference page](https://manpages.ubuntu.com/manpages/xenial/man8/mount.fuse.8.html).
+Another option is to use Azure Blob storage via [blobfuse](../storage/blobs/storage-how-to-mount-container-linux.md). Mounting a blob file system requires an `AccountKey` or `SasKey` for your storage account. For information on getting these keys, see [Manage storage account access keys](../storage/common/storage-account-keys-manage.md) or [Grant limited access to Azure Storage resources using shared access signatures (SAS)](../storage/common/storage-sas-overview.md).
+
+To get default access to the blobfuse mounted directory, run the task as an **Administrator**. Blobfuse mounts the directory at the user space, and at pool creation it is mounted as root. In Linux all **Administrator** tasks are root. All options for the FUSE module are described in the [FUSE reference page](https://manpages.ubuntu.com/manpages/xenial/man8/mount.fuse.8.html).
-In addition to the troubleshooting guide, GitHub issues in the blobfuse repository are a helpful way to check on current blobfuse issues and resolutions. For more information, see [blobfuse issues](https://github.com/Azure/azure-storage-fuse/issues).
+Review the [Troubleshoot FAQ](https://github.com/Azure/azure-storage-fuse/wiki/3.-Troubleshoot-FAQ) for more information and tips on using blobfuse. You can also review [GitHub issues in the blobfuse repository](https://github.com/Azure/azure-storage-fuse/issues) to check on current blobfuse issues and resolutions.
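For orientation, the truncated `PoolAddParameter` snippet that follows corresponds to a blob mount declared at pool creation. Here is a minimal sketch, assuming the Batch .NET protocol model names (`MountConfiguration`, `AzureBlobFileSystemConfiguration`); the storage account, container, SAS token, and mount options are placeholders, not values from this article.

```csharp
// Sketch only: model names are assumed from the Batch .NET protocol models, and the
// storage account, container, and SAS values are placeholders you must replace.
new PoolAddParameter
{
    Id = "blobfuse-pool",
    MountConfiguration = new[]
    {
        new MountConfiguration
        {
            AzureBlobFileSystemConfiguration = new AzureBlobFileSystemConfiguration
            {
                AccountName = "<storage-account-name>",
                ContainerName = "<container-name>",
                SasKey = "<sas-token>",
                RelativeMountPath = "scenedata",
                // Optional blobfuse/FUSE options, for example to allow access by other users.
                BlobfuseOptions = "-o allow_other"
            }
        }
    }
    // Other required pool properties (VM size, VM configuration, and so on) are omitted here.
}
```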
```csharp new PoolAddParameter
new PoolAddParameter
### Network File System
-Network File Systems (NFS) can also be mounted to pool nodes allowing traditional file systems to be easily accessed by Azure Batch nodes. This could be a single NFS server deployed in the cloud, or an on-premises NFS server accessed over a virtual network. Alternatively, take advantage of the [Avere vFXT](../avere-vfxt/avere-vfxt-overview.md) distributed in-memory cache solution, which provides seamless connectivity to on-premises storage, reading data on-demand into its cache, and delivers high performance and scale to cloud-based compute nodes.
+Network File Systems (NFS) can be mounted to pool nodes, allowing traditional file systems to be accessed by Azure Batch. This could be a single NFS server deployed in the cloud, or an on-premises NFS server accessed over a virtual network. Alternatively, you can use the [Avere vFXT](../avere-vfxt/avere-vfxt-overview.md) distributed in-memory cache solution for data-intensive high-performance computing (HPC) tasks.
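If it helps to see the shape, a rough sketch of an NFS mount follows; the `NFSMountConfiguration` model name is an assumption based on the Batch .NET protocol models, and the source path and mount options are placeholders.

```csharp
// Sketch only: the NFS model and property names are assumed from the Batch .NET
// protocol models; replace the source and options with your own values.
new MountConfiguration
{
    NfsMountConfiguration = new NFSMountConfiguration
    {
        Source = "<nfs-server>:/<export-path>",   // placeholder NFS export
        RelativeMountPath = "nfs",
        MountOptions = "options ver=3.0"          // placeholder mount options
    }
}
```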
```csharp new PoolAddParameter
new PoolAddParameter
### Common Internet File System
-Common Internet File Systems (CIFS) can also be mounted to pool nodes allowing traditional file systems to be easily accessed by Azure Batch nodes. CIFS is a file-sharing protocol that provides an open and cross-platform mechanism for requesting network server files and services. CIFS is based on the enhanced version of Microsoft's Server Message Block (SMB) protocol for internet and intranet file sharing and is used to mount external file systems on Windows nodes. To learn more about SMB, see [File Server and SMB](/windows-server/storage/file-server/file-server-smb-overview).
+Mounting [Common Internet File Systems (CIFS)](/windows/desktop/fileio/microsoft-smb-protocol-and-cifs-protocol-overview) to pool nodes is another way to provide access to traditional file systems. CIFS is a file-sharing protocol that provides an open and cross-platform mechanism for requesting network server files and services. CIFS is based on the enhanced version of the [Server Message Block (SMB)](/windows-server/storage/file-server/file-server-smb-overview) protocol for internet and intranet file sharing, and can be used to mount external file systems on Windows nodes.
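As with the other mount types, a CIFS mount is declared on the pool. The following is a hedged sketch; the `CIFSMountConfiguration` model name is an assumption based on the Batch .NET protocol models, and the share URI, username, password, and options are placeholders.

```csharp
// Sketch only: model and property names are assumed from the Batch .NET protocol
// models; the share URI and credentials below are placeholders you must replace.
new MountConfiguration
{
    CifsMountConfiguration = new CIFSMountConfiguration
    {
        Source = "//<storage-account-name>.file.core.windows.net/<share-name>",
        Username = "<storage-account-name>",
        Password = "<storage-account-key>",
        RelativeMountPath = "cifs",
        MountOptions = "-o vers=3.0,dir_mode=0777,file_mode=0777"   // placeholder options
    }
}
```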
```csharp new PoolAddParameter
new PoolAddParameter
## Diagnose mount errors
-If a mount configuration fails, the compute node in the pool will fail and the node state becomes unusable. To diagnose a mount configuration failure, inspect the [`ComputeNodeError`](/rest/api/batchservice/computenode/get#computenodeerror) property for details on the error.
+If a mount configuration fails, the compute node in the pool will fail and the node state will be set to `unusable`. To diagnose a mount configuration failure, inspect the [`ComputeNodeError`](/rest/api/batchservice/computenode/get#computenodeerror) property for details on the error.
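As a quick way to surface these details, a minimal sketch using the Batch .NET client library might list the nodes in the pool and print any reported errors. The `batchClient` instance and pool ID below are placeholders, and the `Errors` property name is assumed to mirror the REST `ComputeNodeError` model linked above.

```csharp
// Sketch only: assumes an authenticated Microsoft.Azure.Batch BatchClient named batchClient.
foreach (var node in batchClient.PoolOperations.ListComputeNodes("<pool-id>"))
{
    if (node.Errors == null)
    {
        continue;
    }

    foreach (var error in node.Errors)
    {
        // Print the error code and message reported for the unusable node.
        Console.WriteLine($"Node {node.Id}: {error.Code} - {error.Message}");
    }
}
```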
To get the log files for debugging, use [OutputFiles](batch-task-output-files.md) to upload the `*.log` files. The `*.log` files contain information about the file system mount at the `AZ_BATCH_NODE_MOUNTS_DIR` location. Mount log files have the format: `<type>-<mountDirOrDrive>.log` for each mount. For example, a `cifs` mount at a mount directory named `test` will have a mount log file named: `cifs-test.log`.
To get the log files for debugging, use [OutputFiles](batch-task-output-files.md
| Oracle | Oracle-Linux | 7.6 | :x: | :x: | :x: | :x: |
| Windows | WindowsServer | 2012, 2016, 2019 | :heavy_check_mark: | :x: | :x: | :x: |
+## Networking requirements
+
+When using virtual file mounts with [Azure Batch pools in a virtual network](batch-virtual-network.md), keep in mind the following requirements and ensure no required traffic is blocked.
+
+- **Azure Files**:
+ - Requires TCP port 445 to be open for traffic to/from the "storage" service tag. For more information, see [Use an Azure file share with Windows](../storage/files/storage-how-to-use-files-windows.md#prerequisites).
+- **Blobfuse**:
+ - Requires TCP port 443 to be open for traffic to/from the "storage" service tag.
+ - VMs must have access to https://packages.microsoft.com in order to download the blobfuse and gpg packages. Depending on your configuration, you may also need access to other URLs to download additional packages.
+- **Network File System (NFS)**:
+ - Requires access to port 2049 (by default; your configuration may have other requirements).
+ - VMs must have access to the appropriate package manager in order to download the nfs-common (for Debian or Ubuntu) or nfs-utils (for CentOS) package. This URL may vary based on your OS version. Depending on your configuration, you may also need access to other URLs to download additional packages.
+- **Common Internet File System (CIFS)**:
+ - Requires access to TCP port 445.
+ - VMs must have access to the appropriate package manager(s) in order to download the cifs-utils package. This URL may vary based on your OS version.
+
## Next steps
-- Learn more details about mounting an Azure Files share with [Windows](../storage/files/storage-how-to-use-files-windows.md) or [Linux](../storage/files/storage-how-to-use-files-linux.md).
+- Learn more about mounting an Azure Files share with [Windows](../storage/files/storage-how-to-use-files-windows.md) or [Linux](../storage/files/storage-how-to-use-files-linux.md).
- Learn about using and mounting [blobfuse](https://github.com/Azure/azure-storage-fuse) virtual file systems.
- See [Network File System overview](/windows-server/storage/nfs/nfs-overview) to learn about NFS and its applications.
- See [Microsoft SMB protocol and CIFS protocol overview](/windows/desktop/fileio/microsoft-smb-protocol-and-cifs-protocol-overview) to learn more about CIFS.
blockchain Send Transaction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/blockchain/service/send-transaction.md
Azure Blockchain Development Kit uses Truffle to execute the migration script to
![Successfully deployed contract](./media/send-transaction/deploy-contract.png) ## Call a contract function
+The **HelloBlockchain** contract's **SendRequest** function changes the **RequestMessage** state variable. Changing the state of a blockchain network is done via a transaction. You can create a script to execute the **SendRequest** function via a transaction.
-The **HelloBlockchain** contract's **SendRequest** function changes the **RequestMessage** state variable. Changing the state of a blockchain network is done via a transaction. You can use the Azure Blockchain Development Kit smart contract interaction page to call the **SendRequest** function via a transaction.
+1. Create a new file in the root of your Truffle project and name it `sendrequest.js`. Add the following Web3 JavaScript code to the file.
-1. To interact with your smart contract, right-click **HelloBlockchain.sol** and choose **Show Smart Contract Interaction Page** from the menu.
+ ```javascript
+ var HelloBlockchain = artifacts.require("HelloBlockchain");
+
+ module.exports = function(done) {
+ console.log("Getting the deployed version of the HelloBlockchain smart contract")
+ HelloBlockchain.deployed().then(function(instance) {
+ console.log("Calling SendRequest function for contract ", instance.address);
+ return instance.SendRequest("Hello, blockchain!");
+ }).then(function(result) {
+ console.log("Transaction hash: ", result.tx);
+ console.log("Request complete");
+ done();
+ }).catch(function(e) {
+ console.log(e);
+ done();
+ });
+ };
+ ```
- ![Choose Show Smart Contract Interaction Page from menu](./media/send-transaction/contract-interaction.png)
+1. When Azure Blockchain Development Kit creates a project, the Truffle configuration file is generated with your consortium blockchain network endpoint details. Open **truffle-config.js** in your project. The configuration file lists two networks: one named development and one with the same name as the consortium.
+1. In VS Code's terminal pane, use Truffle to execute the script on your consortium blockchain network. In the terminal pane menu bar, select the **Terminal** tab and **PowerShell** in the dropdown.
-1. The interaction page allows you to choose a deployed contract version, call functions, view current state, and view metadata.
+ ```PowerShell
+ truffle exec sendrequest.js --network <blockchain network>
+ ```
- ![Example Smart Contract Interaction Page](./media/send-transaction/interaction-page.png)
+ Replace \<blockchain network\> with the name of the blockchain network defined in the **truffle-config.js**.
-1. To call smart contract function, select the contract action and pass your arguments. Choose **SendRequest** contract action and enter **Hello, Blockchain!** for the **requestMessage** parameter. Select **Execute** to call the **SendRequest** function via a transaction.
+Truffle executes the script on your blockchain network.
- ![Execute SendRequest action](./media/send-transaction/sendrequest-action.png)
+![Output showing transaction has been sent](./media/send-transaction/execute-transaction.png)
-Once the transaction is processed, the interaction section reflects the state changes.
+When you execute a contract's function via a transaction, the transaction isn't processed until a block is created. Functions meant to be executed via a transaction return a transaction ID instead of a return value.
-![Contract state changes](./media/send-transaction/contract-state.png)
+## Query contract state
-The SendRequest function sets the **RequestMessage** and **State** fields. The current state for **RequestMessage** is the argument you passed **Hello, Blockchain**. The **State** field value remains **Request**.
+Smart contract functions can return the current value of state variables. Let's add a function to return the value of a state variable.
+1. In **HelloBlockchain.sol**, add a **getMessage** function to the **HelloBlockchain** smart contract.
+
+ ``` solidity
+ function getMessage() public view returns (string memory)
+ {
+ if (State == StateType.Request)
+ return RequestMessage;
+ else
+ return ResponseMessage;
+ }
+ ```
+
+ The function returns the message stored in a state variable based on the current state of the contract.
+
+1. Right-click **HelloBlockchain.sol** and choose **Build Contracts** from the menu to compile the changes to the smart contract.
+1. To deploy, right-click **HelloBlockchain.sol** and choose **Deploy Contracts** from the menu. When prompted, choose your Azure Blockchain consortium network in the command palette.
+1. Next, create a script to call the **getMessage** function. Create a new file in the root of your Truffle project and name it `getmessage.js`. Add the following Web3 JavaScript code to the file.
+
+ ```javascript
+ var HelloBlockchain = artifacts.require("HelloBlockchain");
+
+ module.exports = function(done) {
+ console.log("Getting the deployed version of the HelloBlockchain smart contract")
+ HelloBlockchain.deployed().then(function(instance) {
+ console.log("Calling getMessage function for contract ", instance.address);
+ return instance.getMessage();
+ }).then(function(result) {
+ console.log("Request message value: ", result);
+ console.log("Request complete");
+ done();
+ }).catch(function(e) {
+ console.log(e);
+ done();
+ });
+ };
+ ```
+
+1. In VS Code's terminal pane, use Truffle to execute the script on your blockchain network. In the terminal pane menu bar, select the **Terminal** tab and **PowerShell** in the dropdown.
+
+ ```bash
+ truffle exec getmessage.js --network <blockchain network>
+ ```
+
+ Replace \<blockchain network\> with the name of the blockchain network defined in the **truffle-config.js**.
+
+The script queries the smart contract by calling the getMessage function. The current value of the **RequestMessage** state variable is returned.
+
+![Output from getmessage query showing the current value of RequestMessage state variable](./media/send-transaction/execute-get.png)
+
+Notice the value is not **Hello, blockchain!**. Instead, the returned value is a placeholder. When you change and deploy the contract, the changed contract is deployed at a new address and the state variables are assigned values in the smart contract constructor. The Truffle sample **2_deploy_contracts.js** migration script deploys the smart contract and passes a placeholder value as an argument. The constructor sets the **RequestMessage** state variable to the placeholder value and that's what is returned.
+
+1. To set the **RequestMessage** state variable and query the value, run the **sendrequest.js** and **getmessage.js** scripts again.
+
+ ![Output from sendrequest and getmessage scripts showing RequestMessage has been set](./media/send-transaction/execute-set-get.png)
+
+ **sendrequest.js** sets the **RequestMessage** state variable to **Hello, blockchain!** and **getmessage.js** queries the contract for value of **RequestMessage** state variable and returns **Hello, blockchain!**.
## Clean up resources When no longer needed, you can delete the resources by deleting the `myResourceGroup` resource group you created in the *Create a blockchain member* prerequisite quickstart.
blockchain Data Sql Management Studio https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/blockchain/workbench/data-sql-management-studio.md
write and test queries against Azure Blockchain Workbench's SQL DB. This section
## Prerequisites
-* Download [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms?view=sql-server-2017).
+* Download [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms).
## Connecting SQL Server Management Studio to data in Azure Blockchain Workbench
blockchain Database Views https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/blockchain/workbench/database-views.md
# Azure Blockchain Workbench database views
-Azure Blockchain Workbench Preview delivers data from distributed ledgers to an *off-chain* SQL DB database. The off-chain database makes it possible to use SQL and existing tools, such as [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms?view=sql-server-2017), to interact with blockchain data.
+Azure Blockchain Workbench Preview delivers data from distributed ledgers to an *off-chain* SQL DB database. The off-chain database makes it possible to use SQL and existing tools, such as [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms), to interact with blockchain data.
Azure Blockchain Workbench provides a set of database views that provide access to data that will be helpful when performing your queries. These views are heavily denormalized to make it easy to quickly get started building reports, analytics, and otherwise consume blockchain data with existing tools and without having to retrain database staff.
This view provides details on the consortium members that are provisioned to use
## vwWorkflow
-This view represents the details core workflow metadata as well as the workflowΓÇÖs functions and parameters. Designed for reporting, it also contains metadata about the application associated with the workflow. This view contains data from multiple underlying tables to facilitate reporting on workflows. For each workflow, this view contains the following data:
+This view represents the details core workflow metadata as well as the workflow's functions and parameters. Designed for reporting, it also contains metadata about the application associated with the workflow. This view contains data from multiple underlying tables to facilitate reporting on workflows. For each workflow, this view contains the following data:
- Associated application definition - Associated workflow definition
This view represents the details core workflow metadata as well as the workflow
## vwWorkflowFunction
-This view represents the details core workflow metadata as well as the workflowΓÇÖs functions and parameters. Designed for reporting, it also contains metadata about the application associated with the workflow. This view contains data from multiple underlying tables to facilitate reporting on workflows. For each workflow function, this view contains the following data:
+This view represents the details core workflow metadata as well as the workflow's functions and parameters. Designed for reporting, it also contains metadata about the application associated with the workflow. This view contains data from multiple underlying tables to facilitate reporting on workflows. For each workflow function, this view contains the following data:
- Associated application definition - Associated workflow definition
cognitive-services Devices Sdk Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/devices-sdk-release-notes.md
The following sections list changes in the most recent releases.
**Breaking changes** - With this release a number of breaking changes are introduced. Please check [this page](https://aka.ms/csspeech/breakingchanges_1_0_0) for details relating to the APIs.-- The KWS model files are not compatible with Speech Devices SDK 1.0.1. The existing keyword files will be deleted after the new keyword files are written to the device.
+- The keyword recognition model files are not compatible with Speech Devices SDK 1.0.1. The existing keyword files will be deleted after the new keyword files are written to the device.
## Speech Devices SDK 0.5.0: 2018-Aug release
cognitive-services Releasenotes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/releasenotes.md
This is a bug fix release and only affecting the native/managed SDK. It is not a
**Bug fixes** - Fix FromSubscription when used with Conversation Transcription.-- Fix bug in keyword spotting for voice assistants.
+- Fix bug in keyword recognition for voice assistants.
## Speech SDK 1.5.0: 2019-May release **New features** -- Keyword spotting (KWS) is now available for Windows and Linux. KWS functionality might work with any microphone type, official KWS support, however, is currently limited to the microphone arrays found in the Azure Kinect DK hardware or the Speech Devices SDK.
+- Keyword recognition is now available for Windows and Linux. This functionality might work with any microphone type, but official support is currently limited to the microphone arrays found in the Azure Kinect DK hardware or the Speech Devices SDK.
- Phrase hint functionality is available through the SDK. For more information, see [here](./get-started-speech-to-text.md). - Conversation transcription functionality is available through the SDK. See [here](./conversation-transcription.md). - Add support for voice assistants using the Direct Line Speech channel.
cognitive-services Speech Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-sdk.md
The Speech SDK exposes many features from the Speech service, but not all of the
- Java/Windows & Linux & macOS & Android (Speech Devices SDK) - Go
-#### Keyword spotting
+#### Keyword recognition
-The concept of [keyword spotting](./custom-keyword-basics.md) is supported in the Speech SDK. Keyword spotting is the act of identifying a keyword in speech, followed by an action upon hearing the keyword. For example, "Hey Cortana" would activate the Cortana assistant.
+The concept of [keyword recognition](./custom-keyword-basics.md) is supported in the Speech SDK. Keyword recognition is the act of identifying a keyword in speech, followed by an action upon hearing the keyword. For example, "Hey Cortana" would activate the Cortana assistant.
-**Keyword Spotting (KWS)** is available on the following platforms:
+**Keyword recognition** is available on the following platforms:
- C++/Windows & Linux - C#/Windows & Linux - Python/Windows & Linux - Java/Windows & Linux & Android (Speech Devices SDK)
- - Keyword spotting (KWS) functionality might work with any microphone type, official KWS support, however, is currently limited to the microphone arrays found in the Azure Kinect DK hardware or the Speech Devices SDK
+ - Keyword recognition functionality might work with any microphone type, but official keyword recognition support is currently limited to the microphone arrays found in the Azure Kinect DK hardware or the Speech Devices SDK
### Meeting scenarios
cognitive-services Speech Services Private Link https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-services-private-link.md
[Azure Private Link](../../private-link/private-link-overview.md) lets you connect to services in Azure by using a [private endpoint](../../private-link/private-endpoint-overview.md). A private endpoint is a private IP address that's accessible only within a specific [virtual network](../../virtual-network/virtual-networks-overview.md) and subnet. This article explains how to set up and use Private Link and private endpoints with Speech Services in Azure Cognitive Services.
+This article then describes how to remove private endpoints later, but still use the Speech resource.
> [!NOTE] > Before you proceed, review [how to use virtual networks with Cognitive Services](../cognitive-services-virtual-networks.md).
-This article also describes [how to remove private endpoints later, but still use the Speech resource](#use-a-speech-resource-with-a-custom-domain-name-and-without-private-endpoints).
+ ## Create a custom domain name Private endpoints require a [custom subdomain name for Cognitive Services](../cognitive-services-custom-subdomains.md). Use the following instructions to create one for your Speech resource. > [!WARNING]
-> A Speech resource with a custom domain name enabled uses a different way to interact with Speech Services. You might have to adjust your application code for both of these scenarios: [private endpoint enabled](#use-a-speech-resource-with-a-custom-domain-name-and-a-private-endpoint-enabled) and [*not* private endpoint enabled](#use-a-speech-resource-with-a-custom-domain-name-and-without-private-endpoints).
+> A Speech resource that uses a custom domain name interacts with Speech Services in a different way.
+> You might have to adjust your application code to use a Speech resource with a private endpoint, and also to use a Speech resource with _no_ private endpoint.
+> Both scenarios may be needed because the switch to a custom domain name is _not_ reversible.
>
-> When you enable a custom domain name, the operation is [not reversible](../cognitive-services-custom-subdomains.md#can-i-change-a-custom-domain-name). The only way to go back to the [regional name](../cognitive-services-custom-subdomains.md#is-there-a-list-of-regional-endpoints) is to create a new Speech resource.
+> When you turn on a custom domain name, the operation is [not reversible](../cognitive-services-custom-subdomains.md#can-i-change-a-custom-domain-name). The only way to go back to the [regional name](../cognitive-services-custom-subdomains.md#is-there-a-list-of-regional-endpoints) is to create a new Speech resource.
> > If your Speech resource has a lot of associated custom models and projects created via [Speech Studio](https://speech.microsoft.com/), we strongly recommend trying the configuration with a test resource before you modify the resource used in production.
subdomainName : my-custom-name
``` ## Create your custom domain name
-To enable a custom domain name for the selected Speech resource, use the [Set-AzCognitiveServicesAccount](/powershell/module/az.cognitiveservices/set-azcognitiveservicesaccount) cmdlet.
+To turn on a custom domain name for the selected Speech resource, use the [Set-AzCognitiveServicesAccount](/powershell/module/az.cognitiveservices/set-azcognitiveservicesaccount) cmdlet.
> [!WARNING] > After the following code runs successfully, you'll create a custom domain name for your Speech resource. Remember that this name *cannot* be changed.
If the name is already taken, then you'll see the following response:
"type": null } ```
-## Enable a custom domain name
+## Turn on a custom domain name
-To enable a custom domain name for the selected Speech resource, use the [az cognitiveservices account update](/cli/azure/cognitiveservices/account#az_cognitiveservices_account_update) command.
+To use a custom domain name with the selected Speech resource, use the [az cognitiveservices account update](/cli/azure/cognitiveservices/account#az_cognitiveservices_account_update) command.
Select the Azure subscription that contains the Speech resource. If your Azure account has only one active subscription, you can skip this step. Replace `xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx` with your Azure subscription ID. ```azurecli-interactive
az cognitiveservices account update --name my-speech-resource-name --resource-gr
***
-## Enable private endpoints
+## Turn on private endpoints
-We recommend using the [private DNS zone](../../dns/private-dns-overview.md) attached to the virtual network with the necessary updates for the private endpoints. You create a private DNS zone by default during the provisioning process. If you're using your own DNS server, you might also need to change your DNS configuration.
+We recommend using the [private DNS zone](../../dns/private-dns-overview.md) attached to the virtual network with the necessary updates for the private endpoints.
+You can create a private DNS zone during the provisioning process.
+If you're using your own DNS server, you might also need to change your DNS configuration.
Decide on a DNS strategy *before* you provision private endpoints for a production Speech resource. And test your DNS changes, especially if you use your own DNS server.
-Use one of the following articles to create private endpoints. These articles use a web app as a sample resource to enable with private endpoints.
+Use one of the following articles to create private endpoints.
+These articles use a web app as a sample resource to make available through private endpoints.
- [Create a private endpoint by using the Azure portal](../../private-link/create-private-endpoint-portal.md) - [Create a private endpoint by using Azure PowerShell](../../private-link/create-private-endpoint-powershell.md)
Follow these steps to test the custom DNS entry from your virtual network:
### Resolve DNS from other networks
-Perform this check only if you've enabled either the **All networks** option or the **Selected Networks and Private Endpoints** access option in the **Networking** section of your resource.
+Perform this check only if you've turned on either the **All networks** option or the **Selected Networks and Private Endpoints** access option in the **Networking** section of your resource.
If you plan to access the resource by using only a private endpoint, you can skip this section.
If you plan to access the resource by using only a private endpoint, you can ski
> [!NOTE] > The resolved IP address points to a virtual network proxy endpoint, which dispatches the network traffic to the private endpoint for the Cognitive Services resource. The behavior will be different for a resource with a custom domain name but *without* private endpoints. See [this section](#dns-configuration) for details.
-## Adjust existing applications and solutions
+## Adjust an application to use a Speech resource with a private endpoint
-A Speech resource with a custom domain enabled uses a different way to interact with Speech Services. This is true for a custom-domain-enabled Speech resource both with and without private endpoints. Information in this section applies to both scenarios.
+A Speech resource with a custom domain interacts with Speech Services in a different way.
+This is true for a custom-domain-enabled Speech resource both with and without private endpoints.
+Information in this section applies to both scenarios.
-### Use a Speech resource with a custom domain name and a private endpoint enabled
+Follow instructions in this section to adjust existing applications and solutions to use a Speech resource with a custom domain name and a private endpoint turned on.
-A Speech resource with a custom domain name and a private endpoint enabled uses a different way to interact with Speech Services. This section explains how to use such a resource with the Speech Services REST APIs and the [Speech SDK](speech-sdk.md).
+A Speech resource with a custom domain name and a private endpoint turned on uses a different way to interact with Speech Services. This section explains how to use such a resource with the Speech Services REST APIs and the [Speech SDK](speech-sdk.md).
> [!NOTE]
-> A Speech resource without private endpoints but with a custom domain name enabled also has a special way of interacting with Speech Services. This way differs from the scenario of a private-endpoint-enabled Speech resource. If you have such resource (for example, you had a resource with private endpoints but then decided to remove them), see the section [Use a Speech resource with a custom domain name and without private endpoints](#use-a-speech-resource-with-a-custom-domain-name-and-without-private-endpoints).
+> A Speech resource without private endpoints that uses a custom domain name also has a special way of interacting with Speech Services.
+> This way differs from the scenario of a Speech resource that uses a private endpoint.
+> This is important to consider because you may decide to remove private endpoints later.
+> See _Adjust an application to use a Speech resource without private endpoints_ later in this article.
-#### Speech resource with a custom domain name and a private endpoint: Usage with the REST APIs
+### Speech resource with a custom domain name and a private endpoint: Usage with the REST APIs
We'll use `my-private-link-speech.cognitiveservices.azure.com` as a sample Speech resource DNS name (custom domain) for this section.
Speech-to-text REST API v3.0 uses a different set of endpoints, so it requires a
The next subsections describe both cases.
-##### Speech-to-text REST API v3.0
+#### Speech-to-text REST API v3.0
Usually, Speech resources use [Cognitive Services regional endpoints](../cognitive-services-custom-subdomains.md#is-there-a-list-of-regional-endpoints) for communicating with the [Speech-to-text REST API v3.0](rest-speech-to-text.md#speech-to-text-rest-api-v30). These resources have the following naming format: <p/>`{region}.api.cognitive.microsoft.com`.
https://westeurope.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions
> [!NOTE] > See [this article](sovereign-clouds.md) for Azure Government and Azure China endpoints.
-After you enable a custom domain for a Speech resource (which is necessary for private endpoints), that resource will use the following DNS name pattern for the basic REST API endpoint: <p/>`{your custom name}.cognitiveservices.azure.com`.
+After you turn on a custom domain for a Speech resource (which is necessary for private endpoints), that resource will use the following DNS name pattern for the basic REST API endpoint: <p/>`{your custom name}.cognitiveservices.azure.com`
-That means that in our example, the REST API endpoint name will be: <p/>`my-private-link-speech.cognitiveservices.azure.com`.
+That means that in our example, the REST API endpoint name will be: <p/>`my-private-link-speech.cognitiveservices.azure.com`
And the sample request URL needs to be converted to: ```http
https://my-private-link-speech.cognitiveservices.azure.com/speechtotext/v3.0/tra
``` This URL should be reachable from the virtual network with the private endpoint attached (provided the [correct DNS resolution](#resolve-dns-from-the-virtual-network)).
-After you enable a custom domain name for a Speech resource, you typically replace the host name in all request URLs with the new custom domain host name. All other parts of the request (like the path `/speechtotext/v3.0/transcriptions` in the earlier example) remain the same.
+After you turn on a custom domain name for a Speech resource, you typically replace the host name in all request URLs with the new custom domain host name. All other parts of the request (like the path `/speechtotext/v3.0/transcriptions` in the earlier example) remain the same.
> [!TIP]
> Some customers develop applications that use the region part of the regional endpoint's DNS name (for example, to send the request to the Speech resource deployed in the particular Azure region).
>
> A custom domain for a Speech resource contains *no* information about the region where the resource is deployed. So the application logic described earlier will *not* work and needs to be altered.
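To illustrate the host-name swap described above, here is a minimal C# sketch that calls the Speech-to-text REST API v3.0 through the custom domain endpoint; the resource name and key below are placeholders.

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

class Program
{
    static async Task Main()
    {
        // Placeholder values; replace with your custom domain endpoint and Speech resource key.
        var endpoint = "https://my-private-link-speech.cognitiveservices.azure.com";
        var key = "<your-speech-resource-key>";

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", key);

        // Same request path as the regional endpoint; only the host name changes.
        var response = await client.GetAsync($"{endpoint}/speechtotext/v3.0/transcriptions");
        Console.WriteLine($"{(int)response.StatusCode} {response.ReasonPhrase}");
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}
```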
-##### Speech-to-text REST API for short audio and Text-to-speech REST API
+#### Speech-to-text REST API for short audio and Text-to-speech REST API
The [Speech-to-text REST API for short audio](rest-speech-to-text.md#speech-to-text-rest-api-for-short-audio) and the [Text-to-speech REST API](rest-text-to-speech.md) use two types of endpoints: - [Cognitive Services regional endpoints](../cognitive-services-custom-subdomains.md#is-there-a-list-of-regional-endpoints) for communicating with the Cognitive Services REST API to obtain an authorization token
Get familiar with the material in the subsection mentioned in the previous parag
> [!NOTE] > When you're using the Speech-to-text REST API for short audio and Text-to-speech REST API in private endpoint scenarios, use a subscription key passed through the `Ocp-Apim-Subscription-Key` header. (See details for [Speech-to-text REST API for short audio](rest-speech-to-text.md#request-headers) and [Text-to-speech REST API](rest-text-to-speech.md#request-headers)) >
-> Using an authorization token and passing it to the special endpoint via the `Authorization` header will work *only* if you've enabled the **All networks** access option in the **Networking** section of your Speech resource. In other cases you will get either `Forbidden` or `BadRequest` error when trying to obtain an authorization token.
+> Using an authorization token and passing it to the special endpoint via the `Authorization` header will work *only* if you've turned on the **All networks** access option in the **Networking** section of your Speech resource. In other cases you will get either `Forbidden` or `BadRequest` error when trying to obtain an authorization token.
**Text-to-speech REST API usage example**
https://my-private-link-speech.cognitiveservices.azure.com/tts/cognitiveservices
``` See a detailed explanation in the [Construct endpoint URL](#construct-endpoint-url) subsection for the Speech SDK.
-#### Speech resource with a custom domain name and a private endpoint: Usage with the Speech SDK
+### Speech resource with a custom domain name and a private endpoint: Usage with the Speech SDK
Using the Speech SDK with a custom domain name and private-endpoint-enabled Speech resources requires you to review and likely change your application code. We'll use `my-private-link-speech.cognitiveservices.azure.com` as a sample Speech resource DNS name (custom domain) for this section.
-##### Construct endpoint URL
+#### Construct endpoint URL
Usually in SDK scenarios (as well as in the Speech-to-text REST API for short audio and Text-to-speech REST API scenarios), Speech resources use the dedicated regional endpoints for different service offerings. The DNS name format for these endpoints is:
Notice the details:
https://westeurope.voice.speech.microsoft.com/cognitiveservices/v1?deploymentId=974481cc-b769-4b29-af70-2fb557b897c4 ```
-The following equivalent URL uses a private endpoint enabled, where the custom domain name of the Speech resource is `my-private-link-speech.cognitiveservices.azure.com`:
+The following equivalent URL uses a private endpoint, where the custom domain name of the Speech resource is `my-private-link-speech.cognitiveservices.azure.com`:
```http https://my-private-link-speech.cognitiveservices.azure.com/voice/cognitiveservices/v1?deploymentId=974481cc-b769-4b29-af70-2fb557b897c4
https://my-private-link-speech.cognitiveservices.azure.com/voice/cognitiveservic
The same principle in Example 1 is applied, but the key element this time is `voice`.
-##### Modifying applications
+#### Modifying applications
Follow these steps to modify your code: 1. Determine the application endpoint URL:
- - [Enable logging for your application](how-to-use-logging.md) and run it to log activity.
+ - [Turn on logging for your application](how-to-use-logging.md) and run it to log activity.
- In the log file, search for `SPEECH-ConnectionUrl`. In matching lines, the `value` parameter contains the full URL that your application used to reach Speech Services. Example:
Follow these steps to modify your code:
After this modification, your application should work with the private-endpoint-enabled Speech resources. We're working on more seamless support of private endpoint scenarios.
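For example, once you've determined the endpoint URL from the log and replaced its host name with your custom domain, a minimal C# sketch might pass that URL to `SpeechConfig.FromEndpoint`. The endpoint URL and key below are placeholders, and the recognizer setup is only one possible usage under those assumptions.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Audio;

class Program
{
    static async Task Main()
    {
        // Placeholders: use the full endpoint URL from your application log, with the
        // host name replaced by your custom domain, and your Speech resource key.
        var endpointUrl = new Uri("<full-endpoint-url-with-custom-domain-host>");
        var config = SpeechConfig.FromEndpoint(endpointUrl, "<your-speech-resource-key>");

        using var audioConfig = AudioConfig.FromDefaultMicrophoneInput();
        using var recognizer = new SpeechRecognizer(config, audioConfig);

        var result = await recognizer.RecognizeOnceAsync();
        Console.WriteLine($"{result.Reason}: {result.Text}");
    }
}
```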
-### Use a Speech resource with a custom domain name and without private endpoints
+## Adjust an application to use a Speech resource without private endpoints
In this article, we've pointed out several times that enabling a custom domain for a Speech resource is *irreversible*. Such a resource will use a different way of communicating with Speech Services, compared to the ones that are using [regional endpoint names](../cognitive-services-custom-subdomains.md#is-there-a-list-of-regional-endpoints).
-This section explains how to use a Speech resource with an enabled custom domain name but *without* any private endpoints with the Speech Services REST APIs and [Speech SDK](speech-sdk.md). This might be a resource that was once used in a private endpoint scenario, but then had its private endpoints deleted.
+This section explains how to use a Speech resource with a custom domain name but *without* any private endpoints with the Speech Services REST APIs and [Speech SDK](speech-sdk.md). This might be a resource that was once used in a private endpoint scenario, but then had its private endpoints deleted.
-#### DNS configuration
+### DNS configuration
Remember how a custom domain DNS name of the private-endpoint-enabled Speech resource is [resolved from public networks](#resolve-dns-from-other-networks). In this case, the IP address resolved points to a proxy endpoint for a virtual network. That endpoint is used for dispatching the network traffic to the private-endpoint-enabled Cognitive Services resource.
Aliases: my-private-link-speech.cognitiveservices.azure.com
``` Compare it with the output from [this section](#resolve-dns-from-other-networks).
-#### Speech resource with a custom domain name and without private endpoints: Usage with the REST APIs
+### Speech resource with a custom domain name and without private endpoints: Usage with the REST APIs
-##### Speech-to-text REST API v3.0
+#### Speech-to-text REST API v3.0
Speech-to-text REST API v3.0 usage is fully equivalent to the case of [private-endpoint-enabled Speech resources](#speech-to-text-rest-api-v30).
-##### Speech-to-text REST API for short audio and Text-to-speech REST API
+#### Speech-to-text REST API for short audio and Text-to-speech REST API
In this case, usage of the Speech-to-text REST API for short audio and usage of the Text-to-speech REST API have no differences from the general case, with one exception. (See the following note.) You should use both APIs as described in the [speech-to-text REST API for short audio](rest-speech-to-text.md#speech-to-text-rest-api-for-short-audio) and [Text-to-speech REST API](rest-text-to-speech.md) documentation. > [!NOTE] > When you're using the Speech-to-text REST API for short audio and Text-to-speech REST API in custom domain scenarios, use a subscription key passed through the `Ocp-Apim-Subscription-Key` header. (See details for [Speech-to-text REST API for short audio](rest-speech-to-text.md#request-headers) and [Text-to-speech REST API](rest-text-to-speech.md#request-headers)) >
-> Using an authorization token and passing it to the special endpoint via the `Authorization` header will work *only* if you've enabled the **All networks** access option in the **Networking** section of your Speech resource. In other cases you will get either `Forbidden` or `BadRequest` error when trying to obtain an authorization token.
+> Using an authorization token and passing it to the special endpoint via the `Authorization` header will work *only* if you've turned on the **All networks** access option in the **Networking** section of your Speech resource. In other cases you will get either `Forbidden` or `BadRequest` error when trying to obtain an authorization token.
-#### Speech resource with a custom domain name and without private endpoints: Usage with the Speech SDK
+### Speech resource with a custom domain name and without private endpoints: Usage with the Speech SDK
Using the Speech SDK with custom-domain-enabled Speech resources *without* private endpoints is equivalent to the general case as described in the [Speech SDK documentation](speech-sdk.md).
cognitive-services Get Started With Document Translation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/document-translation/get-started-with-document-translation.md
To get started, you'll need:
> [!IMPORTANT] >
-> * You won't use the endpoint found on your Azure portal resource _Keys and Endpoint_ page nor the global translator endpointΓÇö`api.cognitive.microsofttranslator.com`ΓÇöto make HTTP requests to Document Translation.
> * **All API requests to the Document Translation service require a custom domain endpoint**.
+> * You won't use the endpoint found on your Azure portal resource _Keys and Endpoint_ page, nor the global translator endpoint (`api.cognitive.microsofttranslator.com`), to make HTTP requests to Document Translation.
### What is the custom domain endpoint?
The `sourceUrl` , `targetUrl` , and optional `glossaryUrl` must include a Share
* Create a new project. * Replace Program.cs with the C# code shown below.
-* Set your endpoint. subscription key, and container URL values in Program.cs.
+* Set your endpoint, subscription key, and container URL values in Program.cs.
* To process JSON data, add [Newtonsoft.Json package using .NET CLI](https://www.nuget.org/packages/Newtonsoft.Json/). * Run the program from the project directory.
The `sourceUrl` , `targetUrl` , and optional `glossaryUrl` must include a Share
* Create a new Node.js project. * Install the Axios library with `npm i axios`.
-* Copy paste the code below into your project.
+* Copy/paste the code below into your project.
* Set your endpoint, subscription key, and container URL values. * Run the program.
gradle run
* Set your endpoint, subscription key, and container URL values. * Save the file with a '.go' extension. * Open a command prompt on a computer with Go installed.
-* Build the file, for example: 'go build example-code.go'.
+* Build the file. For example: 'go build example-code.go'.
* Run the file. For example: 'example-code'.
The following headers are included with each Document Translator API request:
## POST a translation request <!-- markdownlint-disable MD024 -->
-### POST request body without optional glossaryURL
+### POST request body to translate all documents in a container
```json { "inputs": [ { "source": {
- "sourceUrl": "<https://YOUR-SOURCE-URL-WITH-READ-LIST-ACCESS-SAS>",
- "storageSource": "AzureBlob",
- "filter": {
- "prefix": "News",
- "suffix": ".txt"
- },
- "language": "en"
+ "sourceUrl": "https://my.blob.core.windows.net/source-en?sv=2019-12-12&st=2021-03-05T17%3A45%3A25Z&se=2021-03-13T17%3A45%3A00Z&sr=c&sp=rl&sig=SDRPMjE4nfrH3csmKLILkT%2Fv3e0Q6SWpssuuQl1NmfM%3D"
}, "targets": [ {
- "targetUrl": "<https://YOUR-SOURCE-URL-WITH-WRITE-LIST-ACCESS-SAS>",
- "storageSource": "AzureBlob",
- "category": "general",
- "language": "de"
+ "targetUrl": "https://my.blob.core.windows.net/target-fr?sv=2019-12-12&st=2021-03-05T17%3A49%3A02Z&se=2021-03-13T17%3A49%3A00Z&sr=c&sp=wdl&sig=Sq%2BYdNbhgbq4hLT0o1UUOsTnQJFU590sWYo4BOhhQhs%3D",
+ "language": "fr"
} ] }
The following headers are included with each Document Translator API request:
} ```
-### POST request body with optional glossaryURL
+
+### POST request body to translate a specific document in a container
+
+* Ensure you have specified `"storageType": "File"`.
+* Ensure you have created a source URL and SAS token for the specific blob/document (not for the container).
+* Ensure you have specified the target filename as part of the target URL, though the SAS token is still for the container.
+* The sample request below shows a single document translated into two target languages.
```json {
- "inputs":[
- {
- "source":{
- "sourceUrl":"<https://YOUR-SOURCE-URL-WITH-READ-LIST-ACCESS-SAS>",
- "storageSource":"AzureBlob",
- "filter":{
- "prefix":"News",
- "suffix":".txt"
- },
- "language":"en"
- },
- "targets":[
+ "inputs": [
{
- "targetUrl":"<https://YOUR-SOURCE-URL-WITH-WRITE-LIST-ACCESS-SAS>",
- "storageSource":"AzureBlob",
- "category":"general",
- "language":"de",
- "glossaries":[
- {
- "glossaryUrl":"<https://YOUR-GLOSSARY-URL-WITH-READ-LIST-ACCESS-SAS>",
- "format":"xliff",
- "version":"1.2"
- }
- ]
+ "storageType": "File",
+ "source": {
+ "sourceUrl": "https://my.blob.core.windows.net/source-en/source-english.docx?sv=2019-12-12&st=2021-01-26T18%3A30%3A20Z&se=2021-02-05T18%3A30%3A00Z&sr=c&sp=rl&sig=d7PZKyQsIeE6xb%2B1M4Yb56I%2FEEKoNIF65D%2Fs0IFsYcE%3D"
+ },
+ "targets": [
+ {
+ "targetUrl": "https://my.blob.core.windows.net/target/try/Target-Spanish.docx?sv=2019-12-12&st=2021-01-26T18%3A31%3A11Z&se=2021-02-05T18%3A31%3A00Z&sr=c&sp=wl&sig=AgddSzXLXwHKpGHr7wALt2DGQJHCzNFF%2F3L94JHAWZM%3D",
+ "language": "es"
+ },
+ {
+ "targetUrl": "https://my.blob.core.windows.net/target/try/Target-German.docx?sv=2019-12-12&st=2021-01-26T18%3A31%3A11Z&se=2021-02-05T18%3A31%3A00Z&sr=c&sp=wl&sig=AgddSzXLXwHKpGHr7wALt2DGQJHCzNFF%2F3L94JHAWZM%3D",
+ "language": "de"
+ }
+ ]
}
- ]
- }
- ]
+ ]
} ``` + > [!IMPORTANT] >
-> For the code samples below, you'll hard-code your key and endpoint where indicated; remember to remove the key from your code when you're done, and never post it publicly. See [Azure Cognitive Services security](../../cognitive-services-security.md?tabs=command-line%2ccsharp) for ways to securely store and access your credentials.
+> For the code samples below, you'll hard-code your key and endpoint where indicated; remember to remove the key from your code when you're done, and never post it publicly. See [Azure Cognitive Services security](/azure/cognitive-services/cognitive-services-security?tabs=command-line%2Ccsharp) for ways to securely store and access your credentials.
> > You may need to update the following fields, depending upon the operation: >>>
func main() {
## Content limits
-The table below lists the limits for data that you send to Document Translation.
+The table below lists the limits for data that you send to Document Translation (Preview).
|Attribute | Limit| |||
The table below lists the limits for data that you send to Document Translation.
> [!div class="nextstepaction"] > [Create a customized language system using Custom Translator](../custom-translator/overview.md) >
->
+>
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/document-translation/overview.md
The following document file types are supported by Document Translation:
|Microsoft Outlook|.msg|An email message created or saved within Microsoft Outlook.| |Microsoft PowerPoint|.pptx| A presentation file used to display content in a slideshow format.| |Microsoft Word|.docx| A text document file.|
-|Tab Separated Values/TAB|.tsv/.tab| a tab-delimited raw-data file used by spreadsheet programs.|
+|Tab Separated Values/TAB|.tsv/.tab| A tab-delimited raw-data file used by spreadsheet programs.|
|Text|.txt| An unformatted text document.| |Translation Memory Exchange|.tmx|An open XML standard used for exchanging translation memory (TM) data created by Computer Aided Translation (CAT) and localization applications.|
The following glossary file types are supported by Document Translation:
| File type| File extension|Description| |||--| |Localization Interchange File Format|.xlf. , xliff| A parallel document format, export of Translation Memory systems. The languages used are defined inside the file.|
-|Tab Separated Values/TAB|.tsv/.tab| a tab-delimited raw-data file used by spreadsheet programs.|
+|Tab Separated Values/TAB|.tsv/.tab| A tab-delimited raw-data file used by spreadsheet programs.|
## Next steps
cognitive-services Cancel Operation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/document-translation/reference/cancel-operation.md
+
+ Title: Document Translation cancel operation method
+
+description: The cancel operations method cancels a currently processing or queued operation.
+++++++ Last updated : 03/25/2021+++
+# Document Translation: cancel operations
+
+Cancel a currently processing or queued operation. An operation can't be canceled if it has already completed, failed, or is already canceling; in those cases, the request returns a bad request error. Documents that have already completed translation won't be canceled and will be charged. All pending documents will be canceled if possible.
+
+## Request URL
+
+Send a `DELETE` request to:
+```HTTP
+DELETE https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0-preview.1/batches/{id}
+```
+
+Learn how to find your [custom domain name](../get-started-with-document-translation.md#find-your-custom-domain-name).
+
+> [!IMPORTANT]
+>
+> * **All API requests to the Document Translation service require a custom domain endpoint**.
+> * You can't use the endpoint found on your Azure portal resource _Keys and Endpoint_ page, nor the global translator endpoint (`api.cognitive.microsofttranslator.com`), to make HTTP requests to Document Translation.
+
+## Request parameters
+
+Request parameters passed on the query string are:
+
+|Query parameter|Required|Description|
+|--|--|--|
+|id|True|The operation-id.|
+
+## Request headers
+
+Request headers are:
+
+|Headers|Description|
+|--|--|
+|Ocp-Apim-Subscription-Key|Required request header|
+
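For convenience, here is a minimal C# sketch of submitting the cancel request with `HttpClient`; the resource name, key, and operation ID are placeholders you must supply.

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

class Program
{
    static async Task Main()
    {
        // Placeholder values; replace with your custom domain endpoint, key, and operation ID.
        var endpoint = "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0-preview.1";
        var key = "<your-translator-key>";
        var operationId = "<operation-id>";

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", key);

        // Submit the cancel request for the batch translation operation.
        var response = await client.DeleteAsync($"{endpoint}/batches/{operationId}");
        Console.WriteLine($"{(int)response.StatusCode} {response.ReasonPhrase}");
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}
```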
+## Response status codes
+
+The following are the possible HTTP status codes that a request returns.
+
+| Status Code| Description|
+|--|--|
+|200|OK. Cancel request has been submitted|
+|401|Unauthorized. Check your credentials.|
+|404|Not found. Resource is not found.|
+|500|Internal Server Error.|
+|Other Status Codes|<ul><li>Too many requests</li><li>Server temporary unavailable</li></ul>|
+
+## Cancel operations response
+
+### Successful response
+
+The following information is returned in a successful response.
+
+|Name|Type|Description|
+| | | |
+|id|string|ID of the operation.|
+|createdDateTimeUtc|string|Operation created date time.|
+|lastActionDateTimeUtc|string|Date time in which the operation's status has been updated.|
+|status|String|List of possible statuses for job or document: <ul><li>Canceled</li><li>Cancelling</li><li>Failed</li><li>NotStarted</li><li>Running</li><li>Succeeded</li><li>ValidationFailed</li></ul>|
+|summary|StatusSummary|Summary containing the details listed below.|
+|summary.total|integer|Count of total documents.|
+|summary.failed|integer|Count of documents failed.|
+|summary.success|integer|Count of documents successfully translated.|
+|summary.inProgress|integer|Count of documents in progress.|
+|summary.notYetStarted|integer|Count of documents not yet started processing.|
+|summary.cancelled|integer|Count of documents canceled.|
+|summary.totalCharacterCharged|integer|Total characters charged by the API.|
+
+### Error response
+
+|Name|Type|Description|
+| | | |
+|code|string|Enums containing high-level error codes. Possible values:<br/><ul><li>InternalServerError</li><li>InvalidArgument</li><li>InvalidRequest</li><li>RequestRateTooHigh</li><li>ResourceNotFound</li><li>ServiceUnavailable</li><li>Unauthorized</li></ul>|
+|message|string|Gets high-level error message.|
+|target|string|Gets the source of the error. For example, it would be "documents" or "document id" for an invalid document.|
+|innerError|InnerErrorV2|New Inner Error format, which conforms to Cognitive Services API Guidelines. It contains required properties ErrorCode, message, and optional properties target, details(key value pair), inner error (can be nested).|
+|innerError.code|string|Gets code error string.|
+|innerError.message|string|Gets high-level error message.|
+
+## Examples
+
+### Example successful response
+
+The following JSON object is an example of a successful response.
+
+Status code: 200
+
+```JSON
+{
+ "id": "727bf148-f327-47a0-9481-abae6362f11e",
+ "createdDateTimeUtc": "2020-03-26T00:00:00Z",
+ "lastActionDateTimeUtc": "2020-03-26T01:00:00Z",
+ "status": "Succeeded",
+ "summary": {
+ "total": 10,
+ "failed": 1,
+ "success": 9,
+ "inProgress": 0,
+ "notYetStarted": 0,
+ "cancelled": 0,
+ "totalCharacterCharged": 0
+ }
+}
+```
+
+### Example error response
+
+The following JSON object is an example of an error response. The schema for other error codes is the same.
+
+Status code: 500
+
+```JSON
+{
+ "error": {
+ "code": "InternalServerError",
+ "message": "Internal Server Error",
+ "target": "Operation",
+ "innerError": {
+ "code": "InternalServerError",
+ "message": "Unexpected internal server error has occurred"
+ }
+ }
+}
+```
+
+## Next steps
+
+Follow our quickstart to learn more about using Document Translation and the client library.
+
+> [!div class="nextstepaction"]
+> [Get started with Document Translation](../get-started-with-document-translation.md)
cognitive-services Get Document Formats https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/document-translation/reference/get-document-formats.md
+
+ Title: Document Translation get document formats method
+
+description: The get document formats method returns a list of supported document formats.
+++++++ Last updated : 03/25/2021+++
+# Document Translation: get document formats
+
+The Get Document Formats method returns a list of document formats supported by the Document Translation service. The list includes the common file extensions and the content-type to use with the upload API.
+
+## Request URL
+
+Send a `GET` request to:
+```HTTP
+GET https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0-preview.1/documents/formats
+```
+
+Learn how to find your [custom domain name](../get-started-with-document-translation.md#find-your-custom-domain-name).
+
+> [!IMPORTANT]
+>
+> * **All API requests to the Document Translation service require a custom domain endpoint**.
+> * You can't use the endpoint found on your Azure portal resource _Keys and Endpoint_ page, nor the global translator endpoint (`api.cognitive.microsofttranslator.com`), to make HTTP requests to Document Translation.
+
+## Request headers
+
+Request headers are:
+
+|Headers|Description|
+|--|--|
+|Ocp-Apim-Subscription-Key|Required request header|
+
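As an illustration, a minimal C# sketch of this request with `HttpClient` follows; the resource name and key are placeholders you must supply.

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

class Program
{
    static async Task Main()
    {
        // Placeholder values; replace with your custom domain endpoint and key.
        var endpoint = "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0-preview.1";
        var key = "<your-translator-key>";

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", key);

        // Request the list of supported document formats and print the JSON response.
        var response = await client.GetAsync($"{endpoint}/documents/formats");
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}
```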
+## Response status codes
+
+The following are the possible HTTP status codes that a request returns.
+
+|Status Code|Description|
+|--|--|
+|200|OK. Returns the list of supported document file formats.|
+|500|Internal Server Error.|
+|Other Status Codes|<ul><li>Too many requests</li><li>Server temporarily unavailable</li></ul>|
+
+## File format response
+
+### Successful fileFormatListResult response
+
+The following information is returned in a successful response.
+
+|Name|Type|Description|
+| | | |
+|value|FileFormat []|FileFormat[] contains the details listed below.|
+|value.format|string|Name of the format.|
+|value.fileExtensions|string[]|Supported file extensions for this format.|
+|value.contentTypes|string[]|Supported Content-Types for this format.|
+|value.versions|string[]|Supported versions.|
+
+### Error response
+
+|Name|Type|Description|
+| | | |
+ |code|string|Enums containing high-level error codes. Possible values:<ul><li>InternalServerError</li><li>InvalidArgument</li><li>InvalidRequest</li><li>RequestRateTooHigh</li><li>ResourceNotFound</li><li>ServiceUnavailable</li><li>Unauthorized</li></ul>|
+|message|string|Gets high-level error message.|
+|innerError|InnerErrorV2|New Inner Error format, which conforms to Cognitive Services API Guidelines. It contains required properties ErrorCode, message and optional properties target, details(key value pair), inner error (this can be nested).|
+|innerError.code|string|Gets code error string.|
+|innerError.message|string|Gets high-level error message.|
+
+## Examples
+
+### Example successful response
+The following is an example of a successful response.
+
+Status code: 200
+
+```JSON
+{
+ "value": [
+ {
+ "format": "PlainText",
+ "fileExtensions": [
+ ".txt"
+ ],
+ "contentTypes": [
+ "text/plain"
+ ],
+ "versions": []
+ },
+ {
+ "format": "PortableDocumentFormat",
+ "fileExtensions": [
+ ".pdf"
+ ],
+ "contentTypes": [
+ "application/pdf"
+ ],
+ "versions": []
+ },
+ {
+ "format": "OpenXmlPresentation",
+ "fileExtensions": [
+ ".pptx"
+ ],
+ "contentTypes": [
+ "application/vnd.openxmlformats-officedocument.presentationml.presentation"
+ ],
+ "versions": []
+ },
+ {
+ "format": "OpenXmlSpreadsheet",
+ "fileExtensions": [
+ ".xlsx"
+ ],
+ "contentTypes": [
+ "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet"
+ ],
+ "versions": []
+ },
+ {
+ "format": "OutlookMailMessage",
+ "fileExtensions": [
+ ".msg"
+ ],
+ "contentTypes": [
+ "application/vnd.ms-outlook"
+ ],
+ "versions": []
+ },
+ {
+ "format": "HtmlFile",
+ "fileExtensions": [
+ ".html"
+ ],
+ "contentTypes": [
+ "text/html"
+ ],
+ "versions": []
+ },
+ {
+ "format": "OpenXmlWord",
+ "fileExtensions": [
+ ".docx"
+ ],
+ "contentTypes": [
+ "application/vnd.openxmlformats-officedocument.wordprocessingml.document"
+ ],
+ "versions": []
+ }
+ ]
+}
+```
+
+### Example error response
+
+The following is an example of an error response. The schema for other error codes is the same.
+
+Status code: 500
+
+```JSON
+{
+ "error": {
+ "code": "InternalServerError",
+ "message": "Internal Server Error",
+ "innerError": {
+ "code": "InternalServerError",
+ "message": "Unexpected internal server error has occurred"
+ }
+ }
+}
+```
+
+## Next steps
+
+Follow our quickstart to learn more about using Document Translation and the client library.
+
+> [!div class="nextstepaction"]
+> [Get started with Document Translation](../get-started-with-document-translation.md)
cognitive-services Get Document Status https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/document-translation/reference/get-document-status.md
+
+ Title: Document Translation get document status method
+
+description: The get document status method returns the status for a specific document.
+++++++ Last updated : 03/25/2021+++
+# Document Translation: get document status
+
+The Get Document Status method returns the translation status for a specific document, based on the request ID and document ID.
+
+## Request URL
+
+Send a `GET` request to:
+```HTTP
+GET https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0-preview.1/batches/{id}/documents/{documentId}
+```
+
+Learn how to find your [custom domain name](../get-started-with-document-translation.md#find-your-custom-domain-name).
+
+> [!IMPORTANT]
+>
+> * **All API requests to the Document Translation service require a custom domain endpoint**.
+> * You can't use the endpoint found on your Azure portal resource _Keys and Endpoint_ page, nor the global translator endpoint (`api.cognitive.microsofttranslator.com`), to make HTTP requests to Document Translation.
+
+## Request parameters
+
+Request parameters passed in the request URL path are:
+
+|Query parameter|Required|Description|
+| | | |
+|documentId|True|The document ID.|
+|id|True|The batch ID.|
+## Request headers
+
+Request headers are:
+
+|Headers|Description|
+| | |
+|Ocp-Apim-Subscription-Key|Required request header|
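+
+For illustration, here's a minimal Python sketch of this call using the `requests` package; the endpoint, key, batch ID, and document ID are placeholders you'd replace with your own values.
+
+```python
+import requests
+
+endpoint = "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com"  # placeholder
+key = "<YOUR-SUBSCRIPTION-KEY>"    # placeholder
+batch_id = "<BATCH-ID>"            # placeholder operation (batch) ID
+document_id = "<DOCUMENT-ID>"      # placeholder document ID
+
+url = (f"{endpoint}/translator/text/batch/v1.0-preview.1/"
+       f"batches/{batch_id}/documents/{document_id}")
+response = requests.get(url, headers={"Ocp-Apim-Subscription-Key": key})
+response.raise_for_status()
+
+document = response.json()
+# 'status' and 'progress' follow the successful response schema described below.
+print(document["status"], document.get("progress"))
+```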
+
+## Response status codes
+
+The following are the possible HTTP status codes that a request returns.
+
+|Status Code|Description|
+| | |
+|200|OK. Successful request that is accepted by the service. The operation details are returned. Headers: `Retry-After: integer`, `ETag: string`|
+|401|Unauthorized. Check your credentials.|
+|404|Not Found. Resource is not found.|
+|500|Internal Server Error.|
+|Other Status Codes|<ul><li>Too many requests</li><li>Server temporarily unavailable</li></ul>|
+
+## Get document status response
+
+### Successful get document status response
+
+|Name|Type|Description|
+| | | |
+|path|string|Location of the document or folder.|
+|createdDateTimeUtc|string|Operation created date time.|
+|lastActionDateTimeUtc|string|Date time in which the operation's status has been updated.|
+|status|String|List of possible statuses for job or document: <ul><li>Canceled</li><li>Cancelling</li><li>Failed</li><li>NotStarted</li><li>Running</li><li>Succeeded</li><li>ValidationFailed</li></ul>|
+|to|string|Two letter language code of To Language. See the list of languages.|
+|progress|number|Progress of the translation if available|
+|id|string|Document ID.|
+|characterCharged|integer|Characters charged by the API.|
+
+### Error response
+
+|Name|Type|Description|
+| | | |
+|code|string|Enums containing high-level error codes. Possible values:<br/><ul><li>InternalServerError</li><li>InvalidArgument</li><li>InvalidRequest</li><li>RequestRateTooHigh</li><li>ResourceNotFound</li><li>ServiceUnavailable</li><li>Unauthorized</li></ul>|
+|message|string|Gets high-level error message.|
+|innerError|InnerErrorV2|New Inner Error format, which conforms to Cognitive Services API Guidelines. It contains required properties ErrorCode, message, and optional properties target, details(key value pair), inner error (can be nested).|
+|innerError.code|string|Gets code error string.|
+|innerError.message|string|Gets high-level error message.|
+
+## Examples
+
+### Example successful response
+The following JSON object is an example of a successful response.
+
+```JSON
+{
+ "path": "https://myblob.blob.core.windows.net/destinationContainer/fr/mydoc.txt",
+ "createdDateTimeUtc": "2020-03-26T00:00:00Z",
+ "lastActionDateTimeUtc": "2020-03-26T01:00:00Z",
+ "status": "Running",
+ "to": "fr",
+ "progress": 0.1,
+ "id": "273622bd-835c-4946-9798-fd8f19f6bbf2",
+ "characterCharged": 0
+}
+```
+
+### Example error response
+
+The following JSON object is an example of an error response. The schema for other error codes is the same.
+
+Status code: 401
+
+```JSON
+{
+ "error": {
+ "code": "Unauthorized",
+ "message": "User is not authorized",
+ "target": "Document",
+ "innerError": {
+ "code": "Unauthorized",
+ "message": "Operation is not authorized"
+ }
+ }
+}
+```
+
+## Next steps
+
+Follow our quickstart to learn more about using Document Translation and the client library.
+
+> [!div class="nextstepaction"]
+> [Get started with Document Translation](../get-started-with-document-translation.md)
cognitive-services Get Document Storage Source https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/document-translation/reference/get-document-storage-source.md
+
+ Title: Document Translation get document storage source method
+
+description: The get document storage source method returns a list of supported storage sources.
+++++++ Last updated : 03/25/2021+++
+# Document Translation: get document storage source
+
+The Get Document Storage Source method returns a list of storage sources/options supported by the Document Translation service.
+
+## Request URL
+
+Send a `GET` request to:
+```HTTP
+GET https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0-preview.1/storagesources
+```
+
+Learn how to find your [custom domain name](../get-started-with-document-translation.md#find-your-custom-domain-name).
+
+> [!IMPORTANT]
+>
+> * **All API requests to the Document Translation service require a custom domain endpoint**.
+> * You can't use the endpoint found on your Azure portal resource _Keys and Endpoint_ page, nor the global translator endpoint (`api.cognitive.microsofttranslator.com`), to make HTTP requests to Document Translation.
+
+## Request headers
+
+Request headers are:
+
+|Headers|Description|
+| | |
+|Ocp-Apim-Subscription-Key|Required request header|
+
+## Response status codes
+
+The following are the possible HTTP status codes that a request returns.
+
+|Status Code|Description|
+| | |
+|200|OK. Successful request and returns the list of storage sources.|
+|500|Internal Server Error.|
+|Other Status Codes|<ul><li>Too many requests</li><li>Server temporarily unavailable</li></ul>|
+
+## Get document storage source response
+
+### Successful get document storage source response
+Base type for list return in the Get Document Storage Source API.
+
+|Name|Type|Description|
+| | | |
+|value|string []|List of supported storage sources.|
++
+### Error response
+
+|Name|Type|Description|
+| | | |
+|code|string|Enums containing high-level error codes. Possible values:<br/><ul><li>InternalServerError</li><li>InvalidArgument</li><li>InvalidRequest</li><li>RequestRateTooHigh</li><li>ResourceNotFound</li><li>ServiceUnavailable</li><li>Unauthorized</li></ul>|
+|message|string|Gets high-level error message.|
+|innerError|InnerErrorV2|New Inner Error format, which conforms to Cognitive Services API Guidelines. It contains required properties ErrorCode, message and optional properties target, details(key value pair), inner error (this can be nested).|
+|innerError.code|string|Gets code error string.|
+|innerError.message|string|Gets high-level error message.|
+
+## Examples
+
+### Example successful response
+
+The following is an example of a successful response.
+
+```JSON
+{
+ "value": [
+ "AzureBlob"
+ ]
+}
+```
+
+### Example error response
+The following is an example of an error response. The schema for other error codes is the same.
+
+Status code: 500
+
+```JSON
+{
+ "error": {
+ "code": "InternalServerError",
+ "message": "Internal Server Error",
+ "innerError": {
+ "code": "InternalServerError",
+ "message": "Unexpected internal server error has occurred"
+ }
+ }
+}
+```
+
+## Next steps
+
+Follow our quickstart to learn more about using Document Translation and the client library.
+
+> [!div class="nextstepaction"]
+> [Get started with Document Translation](../get-started-with-document-translation.md)
cognitive-services Get Glossary Formats https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/document-translation/reference/get-glossary-formats.md
+
+ Title: Document Translation get glossary formats method
+
+description: The get glossary formats method returns the list of supported glossary formats.
+++++++ Last updated : 03/25/2021+++
+# Document Translation: get glossary formats
+
+The Get Glossary Formats method returns a list of glossary formats supported by the Document Translation service. The list includes the common file extension used.
+
+## Request URL
+
+Send a `GET` request to:
+```HTTP
+GET https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0-preview.1/glossaries/formats
+```
+
+Learn how to find your [custom domain name](../get-started-with-document-translation.md#find-your-custom-domain-name).
+
+> [!IMPORTANT]
+>
+> * **All API requests to the Document Translation service require a custom domain endpoint**.
+> * You can't use the endpoint found on your Azure portal resource _Keys and Endpoint_ page, nor the global translator endpoint (`api.cognitive.microsofttranslator.com`), to make HTTP requests to Document Translation.
+
+## Request headers
+
+Request headers are:
+
+|Headers|Description|
+| | |
+|Ocp-Apim-Subscription-Key|Required request header|
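+
+As an illustrative sketch only (Python with the `requests` package; endpoint and key are placeholders), the snippet below retrieves the supported glossary formats and builds a simple lookup from format name to file extensions.
+
+```python
+import requests
+
+endpoint = "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com"  # placeholder
+key = "<YOUR-SUBSCRIPTION-KEY>"  # placeholder
+
+url = f"{endpoint}/translator/text/batch/v1.0-preview.1/glossaries/formats"
+response = requests.get(url, headers={"Ocp-Apim-Subscription-Key": key})
+response.raise_for_status()
+
+# Map each glossary format name to its supported file extensions.
+extensions_by_format = {f["format"]: f["fileExtensions"] for f in response.json()["value"]}
+print(extensions_by_format)  # for example: {"XLIFF": [".xlf"], "TSV": [".tsv", ".tab"]}
+```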
+
+## Response status codes
+
+The following are the possible HTTP status codes that a request returns.
+
+|Status Code|Description|
+| | |
+|200|OK. Returns the list of supported glossary file formats.|
+|500|Internal Server Error.|
+|Other Status Codes|<ul><li>Too many requests</li><li>Server temporarily unavailable</li></ul>|
++
+## Get glossary formats response
+
+Base type for the list returned in the Get Glossary Formats API.
+
+### Successful get glossary formats response
+
+The following information is returned in a successful response.
+
+|Name|Type|Description|
+| | | |
+|value|FileFormat []|FileFormat[] contains the details listed below.|
+|value.format|string|Name of the format.|
+|value.fileExtensions|string[]|Supported file extensions for this format.|
+|value.contentTypes|string[]|Supported Content-Types for this format.|
+|value.versions|string[]|Supported versions.|
+
+### Error response
+
+|Name|Type|Description|
+| | | |
+|code|string|Enums containing high-level error codes. Possible values:<br/><ul><li>InternalServerError</li><li>InvalidArgument</li><li>InvalidRequest</li><li>RequestRateTooHigh</li><li>ResourceNotFound</li><li>ServiceUnavailable</li><li>Unauthorized</li></ul>|
+|message|string|Gets high-level error message.|
+|innerError|InnerErrorV2|New Inner Error format, which conforms to Cognitive Services API Guidelines. It contains required properties ErrorCode, message and optional properties target, details(key value pair), inner error (this can be nested).|
+|innerError.code|string|Gets code error string.|
+|innerError.message|string|Gets high-level error message.|
+
+## Examples
+
+### Example successful response
+
+The following is an example of a successful response.
+
+```JSON
+{
+ "value": [
+ {
+ "format": "XLIFF",
+ "fileExtensions": [
+ ".xlf"
+ ],
+ "contentTypes": [
+ "application/xliff+xml"
+ ],
+ "versions": [
+ "1.0",
+ "1.1",
+ "1.2"
+ ]
+ },
+ {
+ "format": "TSV",
+ "fileExtensions": [
+ ".tsv",
+ ".tab"
+ ],
+ "contentTypes": [
+ "text/tab-separated-values"
+ ],
+ "versions": []
+ }
+ ]
+}
+```
+
+### Example error response
+The following is an example of an error response. The schema for other error codes is the same.
+
+Status code: 500
+
+```JSON
+{
+ "error": {
+ "code": "InternalServerError",
+ "message": "Internal Server Error",
+ "innerError": {
+ "code": "InternalServerError",
+ "message": "Unexpected internal server error has occurred"
+ }
+ }
+}
+```
+
+## Next steps
+
+Follow our quickstart to learn more about using Document Translation and the client library.
+
+> [!div class="nextstepaction"]
+> [Get started with Document Translation](../get-started-with-document-translation.md)
cognitive-services Get Operation Documents Status https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/document-translation/reference/get-operation-documents-status.md
+
+ Title: Document Translation get operation documents status
+
+description: The get operation documents status method returns the status for all documents in a batch document translation request.
+++++++ Last updated : 03/25/2021+++
+# Document Translation: get operation documents status
+
+The Get Operation Documents Status method returns the status for all documents in a batch document translation request.
+
+The documents included in the response are sorted by document ID in descending order. If the number of documents in the response exceeds our paging limit, server-side paging is used. Paginated responses indicate a partial result and include a continuation token in the response. The absence of a continuation token means that no additional pages are available.
+
+$top and $skip query parameters can be used to specify a number of results to return and an offset for the collection. The server honors the values specified by the client. However, clients must be prepared to handle responses that contain a different page size or contain a continuation token.
+
+When both $top and $skip are included, the server should first apply $skip and then $top on the collection.
+
+> [!NOTE]
+> If the server can't honor $top and/or $skip, the server must return an error to the client informing about it instead of just ignoring the query options. This reduces the risk of the client making assumptions about the data returned.
+
+## Request URL
+
+Send a `GET` request to:
+```HTTP
+GET https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0-preview.1/batches/{id}/documents
+```
+
+Learn how to find your [custom domain name](../get-started-with-document-translation.md#find-your-custom-domain-name).
+
+> [!IMPORTANT]
+>
+> * **All API requests to the Document Translation service require a custom domain endpoint**.
+> * You can't use the endpoint found on your Azure portal resource _Keys and Endpoint_ page, nor the global translator endpoint (`api.cognitive.microsofttranslator.com`), to make HTTP requests to Document Translation.
+
+## Request parameters
+
+Request parameters passed on the query string are:
+
+|Query parameter|Required|Description|
+| | | |
+|id|True|The operation ID.|
+|$skip|False|Skip the $skip entries in the collection. When both $top and $skip are supplied, $skip is applied first.|
+|$top|False|Take the $top entries in the collection. When both $top and $skip are supplied, $skip is applied first.|
+
+## Request headers
+
+Request headers are:
+
+|Headers|Description|
+| | |
+|Ocp-Apim-Subscription-Key|Required request header|
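+
+The snippet below is a minimal, illustrative Python sketch that pages through all documents of an operation by following `@nextLink`; the endpoint, key, and operation ID are placeholders.
+
+```python
+import requests
+
+endpoint = "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com"  # placeholder
+key = "<YOUR-SUBSCRIPTION-KEY>"  # placeholder
+operation_id = "<OPERATION-ID>"  # placeholder
+
+headers = {"Ocp-Apim-Subscription-Key": key}
+url = (f"{endpoint}/translator/text/batch/v1.0-preview.1/"
+       f"batches/{operation_id}/documents?$top=50")
+
+# Follow the continuation URL in '@nextLink' until no more pages are available.
+documents = []
+while url:
+    page = requests.get(url, headers=headers).json()
+    documents.extend(page["value"])
+    url = page.get("@nextLink")
+
+for document in documents:
+    print(document["id"], document["status"], document["to"])
+```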
+
+## Response status codes
+
+The following are the possible HTTP status codes that a request returns.
+
+|Status Code|Description|
+| | |
+|200|OK. Successful request and returns the status of the documents. Headers: `Retry-After: integer`, `ETag: string`|
+|400|Invalid request. Check input parameters.|
+|401|Unauthorized. Check your credentials.|
+|404|Resource is not found.|
+|500|Internal Server Error.|
+|Other Status Codes|<ul><li>Too many requests</li><li>Server temporarily unavailable</li></ul>|
++
+## Get operation documents status response
+
+### Successful get operation documents status response
+
+The following information is returned in a successful response.
+
+|Name|Type|Description|
+| | | |
+|@nextLink|string|URL for the next page. Null if no more pages are available.|
+|value|DocumentStatusDetail []|The detail status of individual documents listed below.|
+|value.path|string|Location of the document or folder.|
+|value.createdDateTimeUtc|string|Operation created date time.|
+|value.lastActionDateTimeUtc|string|Date time when the operation's status was last updated.|
+|value.status|status|List of possible statuses for job or document.<ul><li>Canceled</li><li>Cancelling</li><li>Failed</li><li>NotStarted</li><li>Running</li><li>Succeeded</li><li>ValidationFailed</li></ul>|
+|value.to|string|To language.|
+|value.progress|number|Progress of the translation, if available.|
+|value.id|string|Document ID.|
+|value.characterCharged|integer|Characters charged by the API.|
+
+### Error response
+
+|Name|Type|Description|
+| | | |
+|code|string|Enums containing high-level error codes. Possible values:<br/><ul><li>InternalServerError</li><li>InvalidArgument</li><li>InvalidRequest</li><li>RequestRateTooHigh</li><li>ResourceNotFound</li><li>ServiceUnavailable</li><li>Unauthorized</li></ul>|
+|message|string|Gets high-level error message.|
+|target|string|Gets the source of the error. For example, it would be "documents" or "document id" in the case of an invalid document.|
+|innerError|InnerErrorV2|New Inner Error format, which conforms to Cognitive Services API Guidelines. It contains required properties ErrorCode, message and optional properties target, details(key value pair), inner error (this can be nested).|
+|innerError.code|string|Gets code error string.|
+|innerError.message|string|Gets high-level error message.|
+
+## Examples
+
+### Example successful response
+
+The following is an example of a successful response.
+
+```JSON
+{
+ "value": [
+ {
+ "path": "https://myblob.blob.core.windows.net/destinationContainer/fr/mydoc.txt",
+ "createdDateTimeUtc": "2020-03-26T00:00:00Z",
+ "lastActionDateTimeUtc": "2020-03-26T01:00:00Z",
+ "status": "Running",
+ "to": "fr",
+ "progress": 0.1,
+ "id": "273622bd-835c-4946-9798-fd8f19f6bbf2",
+ "characterCharged": 0
+ }
+ ],
+ "@nextLink": "https://westus.cognitiveservices.azure.com/translator/text/batch/v1.0.preview.1/operation/0FA2822F-4C2A-4317-9C20-658C801E0E55/documents?$top=5&$skip=15"
+}
+```
+
+### Example error response
+
+The following is an example of an error response. The schema for other error codes is the same.
+
+Status code: 500
+
+```JSON
+{
+ "error": {
+ "code": "InternalServerError",
+ "message": "Internal Server Error",
+ "target": "Operation",
+ "innerError": {
+ "code": "InternalServerError",
+ "message": "Unexpected internal server error has occurred"
+ }
+ }
+}
+```
+
+## Next steps
+
+Follow our quickstart to learn more about using Document Translation and the client library.
+
+> [!div class="nextstepaction"]
+> [Get started with Document Translation](../get-started-with-document-translation.md)
cognitive-services Get Operation Status https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/document-translation/reference/get-operation-status.md
+
+ Title: Document Translation get operation status
+
+description: The get operation status method returns the status for a document translation request.
+++++++ Last updated : 03/25/2021+++
+# Document Translation: get operation status
+
+The Get Operation Status method returns the status for a document translation request. The status includes the overall request status and the status for documents that are being translated as part of that request.
+
+## Request URL
+
+Send a `GET` request to:
+```HTTP
+GET https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0-preview.1/batches/{id}
+```
+
+Learn how to find your [custom domain name](../get-started-with-document-translation.md#find-your-custom-domain-name).
+
+> [!IMPORTANT]
+>
+> * **All API requests to the Document Translation service require a custom domain endpoint**.
+> * You can't use the endpoint found on your Azure portal resource _Keys and Endpoint_ page, nor the global translator endpoint (`api.cognitive.microsofttranslator.com`), to make HTTP requests to Document Translation.
++
+## Request parameters
+
+Request parameters passed in the request URL path are:
+
+|Query parameter|Required|Description|
+| | | |
+|id|True|The operation ID.|
+
+## Request headers
+
+Request headers are:
+
+|Headers|Description|
+| | |
+|Ocp-Apim-Subscription-Key|Required request header|
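+
+For illustration, here's a minimal Python polling sketch (endpoint, key, and operation ID are placeholders) that waits until the operation reaches a terminal status, honoring the `Retry-After` header when it's present.
+
+```python
+import time
+
+import requests
+
+endpoint = "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com"  # placeholder
+key = "<YOUR-SUBSCRIPTION-KEY>"  # placeholder
+operation_id = "<OPERATION-ID>"  # placeholder
+
+url = f"{endpoint}/translator/text/batch/v1.0-preview.1/batches/{operation_id}"
+headers = {"Ocp-Apim-Subscription-Key": key}
+
+terminal_statuses = {"Succeeded", "Failed", "Canceled", "ValidationFailed"}
+while True:
+    response = requests.get(url, headers=headers)
+    response.raise_for_status()
+    operation = response.json()
+    if operation["status"] in terminal_statuses:
+        break
+    # Wait for the interval suggested by Retry-After, or a few seconds otherwise.
+    time.sleep(int(response.headers.get("Retry-After", "5")))
+
+print(operation["status"], operation["summary"])
+```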
+
+## Response status codes
+
+The following are the possible HTTP status codes that a request returns.
+
+|Status Code|Description|
+| | |
+|200|OK. Successful request and returns the status of the batch translation operation. Headers: `Retry-After: integer`, `ETag: string`|
+|401|Unauthorized. Check your credentials.|
+|404|Resource is not found.|
+|500|Internal Server Error.|
+|Other Status Codes|<ul><li>Too many requests</li><li>Server temporarily unavailable</li></ul>|
+
+## Get operation status response
+
+### Successful get operation status response
+
+The following information is returned in a successful response.
+
+|Name|Type|Description|
+| | | |
+|id|string|ID of the operation.|
+|createdDateTimeUtc|string|Operation created date time.|
+|lastActionDateTimeUtc|string|Date time in which the operation's status has been updated.|
+|status|String|List of possible statuses for job or document: <ul><li>Canceled</li><li>Cancelling</li><li>Failed</li><li>NotStarted</li><li>Running</li><li>Succeeded</li><li>ValidationFailed</li></ul>|
+|summary|StatusSummary|Summary containing the details listed below.|
+|summary.total|integer|Count of total documents.|
+|summary.failed|integer|Count of documents failed.|
+|summary.success|integer|Count of documents successfully translated.|
+|summary.inProgress|integer|Count of documents in progress.|
+|summary.notYetStarted|integer|Count of documents not yet started processing.|
+|summary.cancelled|integer|Count of documents canceled.|
+|summary.totalCharacterCharged|integer|Total characters charged by the API.|
+
+### Error response
+
+|Name|Type|Description|
+| | | |
+|code|string|Enums containing high-level error codes. Possible values:<br/><ul><li>InternalServerError</li><li>InvalidArgument</li><li>InvalidRequest</li><li>RequestRateTooHigh</li><li>ResourceNotFound</li><li>ServiceUnavailable</li><li>Unauthorized</li></ul>|
+|message|string|Gets high-level error message.|
+|target|string|Gets the source of the error. For example, it would be "documents" or "document id" for an invalid document.|
+|innerError|InnerErrorV2|New Inner Error format, which conforms to Cognitive Services API Guidelines. It contains required properties ErrorCode, message, and optional properties target, details(key value pair), inner error (can be nested).|
+|innerError.code|string|Gets code error string.|
+|innerError.message|string|Gets high-level error message.|
+
+## Examples
+
+### Example successful response
+
+The following JSON object is an example of a successful response.
+
+```JSON
+{
+ "id": "727bf148-f327-47a0-9481-abae6362f11e",
+ "createdDateTimeUtc": "2020-03-26T00:00:00Z",
+ "lastActionDateTimeUtc": "2020-03-26T01:00:00Z",
+ "status": "Succeeded",
+ "summary": {
+ "total": 10,
+ "failed": 1,
+ "success": 9,
+ "inProgress": 0,
+ "notYetStarted": 0,
+ "cancelled": 0,
+ "totalCharacterCharged": 0
+ }
+}
+```
+
+### Example error response
+
+The following JSON object is an example of an error response. The schema for other error codes is the same.
+
+Status code: 401
+
+```JSON
+{
+ "error": {
+ "code": "Unauthorized",
+ "message": "User is not authorized",
+ "target": "Document",
+ "innerError": {
+ "code": "Unauthorized",
+ "message": "Operation is not authorized"
+ }
+ }
+}
+```
+
+## Next steps
+
+Follow our quickstart to learn more about using Document Translation and the client library.
+
+> [!div class="nextstepaction"]
+> [Get started with Document Translation](../get-started-with-document-translation.md)
cognitive-services Get Operations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/document-translation/reference/get-operations.md
+
+ Title: Document Translation get operations
+
+description: The get operations method returns a list of batch requests submitted and the status for each request.
+++++++ Last updated : 03/25/2021+++
+# Document Translation: get operations
+
+The Get Operations method returns a list of batch requests submitted and the status for each request. This list only contains batch requests submitted by the user (based on the subscription). The status for each request is sorted by id.
+
+If the number of requests exceeds our paging limit, server-side paging is used. Paginated responses indicate a partial result and include a continuation token in the response. The absence of a continuation token means that no additional pages are available.
+
+$top and $skip query parameters can be used to specify a number of results to return and an offset for the collection.
+
+The server honors the values specified by the client. However, clients must be prepared to handle responses that contain a different page size or contain a continuation token.
+
+When both $top and $skip are included, the server should first apply $skip and then $top on the collection.
+
+> [!NOTE]
+> If the server can't honor $top and/or $skip, the server must return an error to the client informing about it instead of just ignoring the query options. This reduces the risk of the client making assumptions about the data returned.
+
+## Request URL
+
+Send a `GET` request to:
+```HTTP
+GET https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0-preview.1/batches
+```
+
+Learn how to find your [custom domain name](../get-started-with-document-translation.md#find-your-custom-domain-name).
+
+> [!IMPORTANT]
+>
+> * **All API requests to the Document Translation service require a custom domain endpoint**.
+> * You can't use the endpoint found on your Azure portal resource _Keys and Endpoint_ page, nor the global translator endpoint (`api.cognitive.microsofttranslator.com`), to make HTTP requests to Document Translation.
+
+## Request parameters
+
+Request parameters passed on the query string are:
+
+|Query parameter|Required|Description|
+| | | |
+|$skip|False|Skip the $skip entries in the collection. When both $top and $skip are supplied, $skip is applied first.|
+|$top|False|Take the $top entries in the collection. When both $top and $skip are supplied, $skip is applied first.|
+
+## Request headers
+
+Request headers are:
+
+|Headers|Description|
+| | |
+|Ocp-Apim-Subscription-Key|Required request header|
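+
+As an illustrative sketch only (endpoint and key are placeholders), the Python snippet below lists the first 10 batch requests using `$top` and prints a one-line summary for each.
+
+```python
+import requests
+
+endpoint = "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com"  # placeholder
+key = "<YOUR-SUBSCRIPTION-KEY>"  # placeholder
+
+url = f"{endpoint}/translator/text/batch/v1.0-preview.1/batches?$top=10"
+response = requests.get(url, headers={"Ocp-Apim-Subscription-Key": key})
+response.raise_for_status()
+
+# Print one line per batch request with its overall status and document counts.
+for operation in response.json()["value"]:
+    summary = operation["summary"]
+    print(operation["id"], operation["status"],
+          f'{summary["success"]}/{summary["total"]} documents succeeded')
+```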
+
+## Response status codes
+
+The following are the possible HTTP status codes that a request returns.
+
+|Status Code|Description|
+| | |
+|200|OK. Successful request and returns the status of all the operations. Headers: `Retry-After: integer`, `ETag: string`|
+|400|Bad Request. Invalid request. Check input parameters.|
+|401|Unauthorized. Check your credentials.|
+|500|Internal Server Error.|
+|Other Status Codes|<ul><li>Too many requests</li><li>Server temporarily unavailable</li></ul>|
+
+## Get operations response
+
+### Successful get operations response
+
+The following information is returned in a successful response.
+
+|Name|Type|Description|
+| | | |
+|id|string|ID of the operation.|
+|createdDateTimeUtc|string|Operation created date time.|
+|lastActionDateTimeUtc|string|Date time in which the operation's status has been updated.|
+|status|String|List of possible statuses for job or document: <ul><li>Canceled</li><li>Cancelling</li><li>Failed</li><li>NotStarted</li><li>Running</li><li>Succeeded</li><li>ValidationFailed</li></ul>|
+|summary|StatusSummary[]|Summary containing the details listed below.|
+|summary.total|integer|Count of total documents.|
+|summary.failed|integer|Count of documents failed.|
+|summary.success|integer|Count of documents successfully translated.|
+|summary.inProgress|integer|Count of documents in progress.|
+|summary.notYetStarted|integer|Count of documents not yet started processing.|
+|summary.cancelled|integer|Count of documents canceled.|
+|summary.totalCharacterCharged|integer|Total count of characters charged.|
+
+### Error response
+
+|Name|Type|Description|
+| | | |
+|code|string|Enums containing high-level error codes. Possible values:<br/><ul><li>InternalServerError</li><li>InvalidArgument</li><li>InvalidRequest</li><li>RequestRateTooHigh</li><li>ResourceNotFound</li><li>ServiceUnavailable</li><li>Unauthorized</li></ul>|
+|message|string|Gets high-level error message.|
+|target|string|Gets the source of the error. For example, it would be "documents" or "document id" in the case of an invalid document.|
+|innerError|InnerErrorV2|New Inner Error format, which conforms to Cognitive Services API Guidelines. It contains required properties ErrorCode, message and optional properties target, details(key value pair), inner error (this can be nested).|
+|innerError.code|string|Gets code error string.|
+|innerError.message|string|Gets high-level error message.|
+
+## Examples
+
+### Example successful response
+
+The following is an example of a successful response.
+
+```JSON
+{
+ "value": [
+ {
+ "id": "727bf148-f327-47a0-9481-abae6362f11e",
+ "createdDateTimeUtc": "2020-03-26T00:00:00Z",
+ "lastActionDateTimeUtc": "2020-03-26T01:00:00Z",
+ "status": "Succeeded",
+ "summary": {
+ "total": 10,
+ "failed": 1,
+ "success": 9,
+ "inProgress": 0,
+ "notYetStarted": 0,
+ "cancelled": 0,
+ "totalCharacterCharged": 0
+ }
+ }
+ ]
+}
+```
+
+### Example error response
+
+The following is an example of an error response. The schema for other error codes is the same.
+
+Status code: 500
+
+```JSON
+{
+ "error": {
+ "code": "InternalServerError",
+ "message": "Internal Server Error",
+ "target": "Operation",
+ "innerError": {
+ "code": "InternalServerError",
+ "message": "Unexpected internal server error has occurred"
+ }
+ }
+}
+```
+
+## Next steps
+
+Follow our quickstart to learn more about using Document Translation and the client library.
+
+> [!div class="nextstepaction"]
+> [Get started with Document Translation](../get-started-with-document-translation.md)
cognitive-services Submit Batch Request https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/document-translation/reference/submit-batch-request.md
+
+ Title: Document Translation submit batch request
+
+description: Submit a document translation request to the Document Translation service.
+++++++ Last updated : 03/25/2021+++
+# Document Translation: submit batch request
+
+Use this API to submit a bulk (batch) translation request to the Document Translation service. Each request can contain multiple documents and must contain a source and destination container for each document.
+
+The prefix and suffix filter (if supplied) are used to filter folders. The prefix is applied to the subpath after the container name.
+
+Glossaries / Translation memory can be included in the request and are applied by the service when the document is translated.
+
+If the glossary is invalid or unreachable during translation, an error is indicated in the document status. If a file with the same name already exists at the destination, it will be overwritten. The targetUrl for each target language must be unique.
+
+## Request URL
+
+Send a `POST` request to:
+```HTTP
+POST https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0-preview.1/batches
+```
+
+Learn how to find your [custom domain name](../get-started-with-document-translation.md#find-your-custom-domain-name).
+
+> [!IMPORTANT]
+>
+> * **All API requests to the Document Translation service require a custom domain endpoint**.
+> * You can't use the endpoint found on your Azure portal resource _Keys and Endpoint_ page, nor the global translator endpoint (`api.cognitive.microsofttranslator.com`), to make HTTP requests to Document Translation.
+
+## Request headers
+
+Request headers are:
+
+|Headers|Description|
+| | |
+|Ocp-Apim-Subscription-Key|Required request header|
+
+## Request Body: Batch Submission Request
+
+|Name|Type|Description|
+| | | |
+|inputs|BatchRequest[]|BatchRequest listed below. The input list of documents or folders containing documents. Media Types: "application/json", "text/json", "application/*+json".|
+
+### Inputs
+
+Definition for the input batch translation request.
+
+|Name|Type|Required|Description|
+| | | | |
+|source|SourceInput[]|True|inputs.source listed below. Source of the input documents.|
+|storageType|StorageInputType[]|True|inputs.storageType listed below. Storage type of the input documents source string.|
+|targets|TargetInput[]|True|inputs.target listed below. Location of the destination for the output.|
+
+**inputs.source**
+
+Source of the input documents.
+
+|Name|Type|Required|Description|
+| | | | |
+|filter|DocumentFilter[]|False|DocumentFilter[] listed below.|
+|filter.prefix|string|False|A case-sensitive prefix string to filter documents in the source path for translation. For example, when using an Azure storage blob Uri, use the prefix to restrict sub folders for translation.|
+|filter.suffix|string|False|A case-sensitive suffix string to filter documents in the source path for translation. This is most often used for file extensions.|
+|language|string|False|Language code. If none is specified, language auto-detection is performed on the document.|
+|sourceUrl|string|True|Location of the folder / container or single file with your documents.|
+|storageSource|StorageSource|False|StorageSource listed below.|
+|storageSource.AzureBlob|string|False||
+
+**inputs.storageType**
+
+Storage type of the input documents source string.
+
+|Name|Type|
+| | |
+|file|string|
+|folder|string|
+
+**inputs.target**
+
+Destination for the finished translated documents.
+
+|Name|Type|Required|Description|
+| | | | |
+|category|string|False|Category / custom system for translation request.|
+|glossaries|Glossary[]|False|Glossary listed below. List of Glossary.|
+|glossaries.format|string|False|Format.|
+|glossaries.glossaryUrl|string|True (if using glossaries)|Location of the glossary. We will use the file extension to extract the formatting if the format parameter isn't supplied. If the translation language pair isn't present in the glossary, it won't be applied.|
+|glossaries.storageSource|StorageSource|False|StorageSource listed above.|
+|targetUrl|string|True|Location of the folder / container with your documents.|
+|language|string|True|Two letter Target Language code. See [list of language codes](../../language-support.md).|
+|storageSource|StorageSource []|False|StorageSource [] listed above.|
+|version|string|False|Version.|
+
+## Example request
+
+The following are examples of batch requests.
+
+**Translating all documents in a container**
+
+```json
+{
+ "inputs": [
+ {
+ "source": {
+ "sourceUrl": https://my.blob.core.windows.net/source-en?sv=2019-12-12&st=2021-03-05T17%3A45%3A25Z&se=2021-03-13T17%3A45%3A00Z&sr=c&sp=rl&sig=SDRPMjE4nfrH3csmKLILkT%2Fv3e0Q6SWpssuuQl1NmfM%3D
+ },
+ "targets": [
+ {
+ "targetUrl": https://my.blob.core.windows.net/target-fr?sv=2019-12-12&st=2021-03-05T17%3A49%3A02Z&se=2021-03-13T17%3A49%3A00Z&sr=c&sp=wdl&sig=Sq%2BYdNbhgbq4hLT0o1UUOsTnQJFU590sWYo4BOhhQhs%3D,
+ "language": "fr"
+ }
+ ]
+ }
+ ]
+}
+```
+
+**Translating all documents in a container applying glossaries**
+
+Ensure that you have created a glossary URL and SAS token for the specific blob/document (not for the container).
+
+```json
+{
+ "inputs": [
+ {
+ "source": {
+ "sourceUrl": https://my.blob.core.windows.net/source-en?sv=2019-12-12&st=2021-03-05T17%3A45%3A25Z&se=2021-03-13T17%3A45%3A00Z&sr=c&sp=rl&sig=SDRPMjE4nfrH3csmKLILkT%2Fv3e0Q6SWpssuuQl1NmfM%3D
+ },
+ "targets": [
+ {
+ "targetUrl": https://my.blob.core.windows.net/target-fr?sv=2019-12-12&st=2021-03-05T17%3A49%3A02Z&se=2021-03-13T17%3A49%3A00Z&sr=c&sp=wdl&sig=Sq%2BYdNbhgbq4hLT0o1UUOsTnQJFU590sWYo4BOhhQhs%3D,
+ "language": "fr"
+ "glossaries": [
+ {
+ "glossaryUrl": https://my.blob.core.windows.net/glossaries/en-fr.xlf?sv=2019-12-12&st=2021-03-05T17%3A45%3A25Z&se=2021-03-13T17%3A45%3A00Z&sr=c&sp=rl&sig=BsciG3NWoOoRjOYesTaUmxlXzyjsX4AgVkt2AsxJ9to%3D,
+ "format": "xliff",
+ "version": "1.2"
+ }
+ ]
+
+ }
+ ]
+ }
+ ]
+}
+```
+
+**Translating specific folder in a container**
+
+Ensure that you have specified the folder name (case-sensitive) as the prefix in the filter, though the SAS token is still for the container.
+
+```json
+{
+ "inputs": [
+ {
+ "source": {
+ "sourceUrl": https://my.blob.core.windows.net/source-en?sv=2019-12-12&st=2021-03-05T17%3A45%3A25Z&se=2021-03-13T17%3A45%3A00Z&sr=c&sp=rl&sig=SDRPMjE4nfrH3csmKLILkT%2Fv3e0Q6SWpssuuQl1NmfM%3D,
+ "filter": {
+ "prefix": "MyFolder/"
+ }
+ },
+ "targets": [
+ {
+ "targetUrl": https://my.blob.core.windows.net/target-fr?sv=2019-12-12&st=2021-03-05T17%3A49%3A02Z&se=2021-03-13T17%3A49%3A00Z&sr=c&sp=wdl&sig=Sq%2BYdNbhgbq4hLT0o1UUOsTnQJFU590sWYo4BOhhQhs%3D,
+ "language": "fr"
+ }
+ ]
+ }
+ ]
+}
+```
+
+**Translating specific document in a container**
+
+* Ensure you have specified "storageType": "File"
+* Ensure you have created source URL & SAS token for the specific blob/document (not for the container)
+* Ensure you have specified the target filename as part of the target URL, though the SAS token is still for the container.
+* The sample request below shows a single document being translated into two target languages.
+
+```json
+{
+ "inputs": [
+ {
+ "storageType": "File",
+ "source": {
+ "sourceUrl": https://my.blob.core.windows.net/source-en/source-english.docx?sv=2019-12-12&st=2021-01-26T18%3A30%3A20Z&se=2021-02-05T18%3A30%3A00Z&sr=c&sp=rl&sig=d7PZKyQsIeE6xb%2B1M4Yb56I%2FEEKoNIF65D%2Fs0IFsYcE%3D
+ },
+ "targets": [
+ {
+ "targetUrl": https://my.blob.core.windows.net/target/try/Target-Spanish.docx?sv=2019-12-12&st=2021-01-26T18%3A31%3A11Z&se=2021-02-05T18%3A31%3A00Z&sr=c&sp=wl&sig=AgddSzXLXwHKpGHr7wALt2DGQJHCzNFF%2F3L94JHAWZM%3D,
+ "language": "es"
+ },
+ {
+ "targetUrl": https://my.blob.core.windows.net/target/try/Target-German.docx?sv=2019-12-12&st=2021-01-26T18%3A31%3A11Z&se=2021-02-05T18%3A31%3A00Z&sr=c&sp=wl&sig=AgddSzXLXwHKpGHr7wALt2DGQJHCzNFF%2F3L94JHAWZM%3D,
+ "language": "de"
+ }
+ ]
+ }
+ ]
+}
+```
+
+## Response status codes
+
+The following are the possible HTTP status codes that a request returns.
+
+|Status Code|Description|
+| | |
+|202|Accepted. Successful request, and the batch request is created by the service. The Operation-Location header will indicate a status URL with the operation ID. Headers: `Operation-Location: string`|
+|400|Bad Request. Invalid request. Check input parameters.|
+|401|Unauthorized. Please check your credentials.|
+|429|Request rate is too high.|
+|500|Internal Server Error.|
+|503|Service is currently unavailable. Please try again later.|
+|Other Status Codes|<ul><li>Too many requests</li><li>Server temporarily unavailable</li></ul>|
+
+## Error response
+
+|Name|Type|Description|
+| | | |
+|code|string|Enums containing high-level error codes. Possible values:<br/><ul><li>InternalServerError</li><li>InvalidArgument</li><li>InvalidRequest</li><li>RequestRateTooHigh</li><li>ResourceNotFound</li><li>ServiceUnavailable</li><li>Unauthorized</li></ul>|
+|message|string|Gets high-level error message.|
+|innerError|InnerErrorV2|New Inner Error format, which conforms to Cognitive Services API Guidelines. It contains required properties ErrorCode, message and optional properties target, details(key value pair), inner error (this can be nested).|
+|innerError.code|string|Gets code error string.|
+|innerError.message|string|Gets high-level error message.|
+
+## Examples
+
+### Example successful response
+
+The following information is returned in a successful response.
+
+You can find the job ID in the POST method's `Operation-Location` response header value. The last parameter of the URL is the operation's job ID (the string following "/operation/").
+
+```HTTP
+Operation-Location: https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0.preview.1/operation/0FA2822F-4C2A-4317-9C20-658C801E0E55
+```
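+
+As an illustration, the minimal Python sketch below (endpoint, key, and SAS URLs are placeholders) submits a simple batch request and extracts the job ID from the `Operation-Location` response header.
+
+```python
+import requests
+
+endpoint = "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com"  # placeholder
+key = "<YOUR-SUBSCRIPTION-KEY>"                # placeholder
+source_sas_url = "<SOURCE-CONTAINER-SAS-URL>"  # placeholder SAS URL for the source container
+target_sas_url = "<TARGET-CONTAINER-SAS-URL>"  # placeholder SAS URL for the target container
+
+body = {
+    "inputs": [
+        {
+            "source": {"sourceUrl": source_sas_url},
+            "targets": [{"targetUrl": target_sas_url, "language": "fr"}],
+        }
+    ]
+}
+
+url = f"{endpoint}/translator/text/batch/v1.0-preview.1/batches"
+response = requests.post(url, headers={"Ocp-Apim-Subscription-Key": key}, json=body)
+response.raise_for_status()  # a successful submission returns 202 Accepted
+
+# The job ID is the final path segment of the Operation-Location header.
+operation_location = response.headers["Operation-Location"]
+job_id = operation_location.rstrip("/").split("/")[-1]
+print(job_id)
+```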
+
+### Example error response
+
+```JSON
+{
+ "error": {
+ "code": "ServiceUnavailable",
+ "message": "Service is temporary unavailable",
+ "innerError": {
+ "code": "ServiceTemporaryUnavailable",
+ "message": "Service is currently unavailable. Please try again later"
+ }
+ }
+}
+```
+
+## Next steps
+
+Follow our quickstart to learn more about using Document Translation and the client library.
+
+> [!div class="nextstepaction"]
+> [Get started with Document Translation](../get-started-with-document-translation.md)
cognitive-services Concept Custom https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/concept-custom.md
Previously updated : 03/15/2021 Last updated : 03/25/2021
At any time, you can view a list of all the custom models under your subscriptio
## Next steps
-View **[Form Recognizer API reference](https://westcentralus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1-preview-3/operations/5ed8c9843c2794cbb1a96291)** documentation to learn more.
+Learn more about the Form Recognizer client library by exploring our API reference documentation.
+> [!div class="nextstepaction"]
+> [Form Recognizer API reference](https://westcentralus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1-preview-3/operations/5ed8c9843c2794cbb1a96291)
>
cognitive-services Data Feeds From Different Sources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/metrics-advisor/data-feeds-from-different-sources.md
# Add data feeds from different data sources to Metrics Advisor
-Use this article to find the settings and requirements for connecting different types of data sources to Metrics Advisor. Make sure to read how to [Onboard your data](how-tos/onboard-your-data.md) to learn about the key concepts for using your data with Metrics Advisor.
+Use this article to find the settings and requirements for connecting different types of data sources to Metrics Advisor. Make sure to read how to [Onboard your data](how-tos/onboard-your-data.md) to learn about the key concepts for using your data with Metrics Advisor.
## Supported authentication types | Authentication types | Description | | |-|
-|**Basic** | You will need to be able to provide basic parameters for accessing data sources. For example a connection string or key. Data feed admins are able to view these credentials. |
+|**Basic** | You will need to be able to provide basic parameters for accessing data sources. For example, a connection string or key. Data feed admins are able to view these credentials. |
| **AzureManagedIdentity** | [Managed identities](../../active-directory/managed-identities-azure-resources/overview.md) for Azure resources is a feature of Azure Active Directory. It provides Azure services with an automatically managed identity in Azure AD. You can use the identity to authenticate to any service that supports Azure AD authentication.| | **AzureSQLConnectionString**| Store your AzureSQL connection string as a **credential entity** in Metrics Advisor, and use it directly each time when onboarding metrics data. Only admins of the Credential entity are able to view these credentials, but enables authorized viewers to create data feeds without needing to know details for the credentials. | | **DataLakeGen2SharedKey**| Store your data lake account key as a **credential entity** in Metrics Advisor and use it directly each time when onboarding metrics data. Only admins of the Credential entity are able to view these credentials, but enables authorized viewers to create data feed without needing to know the credential details.|
Use this article to find the settings and requirements for connecting different
|[**MySQL**](#mysql) | Basic | |[**PostgreSQL**](#pgsql)| Basic|
-Create an **Credential entity** and use it for authenticating to your data sources. The following sections specify the parameters required by for *Basic* authentication.
+Create a **credential entity** and use it for authenticating to your data sources. The following sections specify the parameters required for *Basic* authentication.
## <span id="appinsights">Azure Application Insights</span>
This is the file template of the Blob file. For example: *X_%Y-%m-%d-%h-%M.json*
* `%h` is the hour formatted as `HH` * `%M` is the minute formatted as `mm`
-Currently Metrics Advisor supports the data schema in the JSON files as follow. For example:
+Currently Metrics Advisor supports the data schema in the JSON files as follows. For example:
``` JSON [
The timestamp field must match one of these two formats:
## <span id="table">Azure Table Storage</span>
-* **Connection String**: Please refer to [View and copy a connection string](../../storage/common/storage-account-keys-manage.md?tabs=azure-portal&toc=%2fazure%2fstorage%2ftables%2ftoc.json#view-account-access-keys) for information on how to retrieve the connection string from Azure Table Storage.
+* **Connection String**: Create a shared access signature (SAS) URL and fill it in here. The most straightforward way to generate a SAS URL is through the Azure portal: navigate to the storage account you'd like to access, go to **Shared access signature** under the **Settings** section, check at least the "Table" and "Object" checkboxes, then select **Generate SAS and connection string**. Copy the Table service SAS URL into the text box in the Metrics Advisor workspace.
* **Table Name**: Specify a table to query against. This can be found in your Azure Storage Account instance. Click **Tables** in the **Table Service** section. * **Query**
-You can use the `@StartTime` in your query. `@StartTime` is replaced with a yyyy-MM-ddTHH:mm:ss format string in script.
+You can use `@StartTime` in your query. `@StartTime` is replaced with a string in `yyyy-MM-ddTHH:mm:ss` format in the script. Tip: Use Azure Storage Explorer to create a query with a specific time range, make sure it runs correctly, then do the replacement.
``` mssql
- let StartDateTime = datetime(@StartTime); let EndDateTime = StartDateTime + 1d;
- SampleTable | where Timestamp >= StartDateTime and Timestamp < EndDateTime | project Timestamp, Market, RPM
+ date ge datetime'@StartTime' and date lt datetime'@EndTime'
``` ## <span id="es">Elasticsearch</span>
-* **Host**:Specify the master host of Elasticsearch Cluster.
-* **Port**:Specify the master port of Elasticsearch Cluster.
-* **Authorization Header**:Specify the authorization header value of Elasticsearch Cluster.
-* **Query**:Specify the query to get data. Placeholder @StartTime is supported.(e.g. when data of 2020-06-21T00:00:00Z is ingested, @StartTime = 2020-06-21T00:00:00)
+* **Host**: Specify the master host of Elasticsearch Cluster.
+* **Port**: Specify the master port of Elasticsearch Cluster.
+* **Authorization Header**: Specify the authorization header value of Elasticsearch Cluster.
+* **Query**: Specify the query to get data. The placeholder @StartTime is supported (for example, when data of 2020-06-21T00:00:00Z is ingested, @StartTime = 2020-06-21T00:00:00).
## <span id="http">HTTP request</span>
-* **Request URL**: A HTTP url which can return a JSON. The placeholders %Y,%m,%d,%h,%M are supported: %Y=year in format yyyy, %m=month in format MM, %d=day in format dd, %h=hour in format HH, %M=minute in format mm. For example: `http://microsoft.com/ProjectA/%Y/%m/X_%Y-%m-%d-%h-%M`.
+* **Request URL**: An HTTP URL that can return JSON. The placeholders %Y,%m,%d,%h,%M are supported: %Y=year in format yyyy, %m=month in format MM, %d=day in format dd, %h=hour in format HH, %M=minute in format mm. For example: `http://microsoft.com/ProjectA/%Y/%m/X_%Y-%m-%d-%h-%M`.
* **Request HTTP method**: Use GET or POST. * **Request header**: Could add basic authentication. * **Request payload**: Only JSON payload is supported. Placeholder @StartTime is supported in the payload. The response should be in the following JSON format: [{"timestamp": "2018-01-01T00:00:00Z", "market":"en-us", "count":11, "revenue":1.23}, {"timestamp": "2018-01-01T00:00:00Z", "market":"zh-cn", "count":22, "revenue":4.56}].(e.g. when data of 2020-06-21T00:00:00Z is ingested, @StartTime = 2020-06-21T00:00:00.0000000+00:00)
You can use the `@StartTime` in your query. `@StartTime` is replaced with a yyyy
## Next steps * While waiting for your metric data to be ingested into the system, read about [how to manage data feed configurations](how-tos/manage-data-feeds.md).
-* When your metric data is ingested, you can [Configure metrics and fine tune detecting configuration](how-tos/configure-metrics.md).
+* When your metric data is ingested, you can [Configure metrics and fine tune detecting configuration](how-tos/configure-metrics.md).
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/metrics-advisor/overview.md
# What is Metrics Advisor (preview)?
-Metrics Advisor is a part of Azure Cognitive Services that uses AI perform data monitoring and anomaly detection in time series data. The service automates the process of applying models to your data, and provides a set of APIs web-based workspace for data ingestion, anomaly detection, and diagnostics - without needing to know machine learning. Use Metrics Advisor to:
+Metrics Advisor is a part of Azure Cognitive Services that uses AI to perform data monitoring and anomaly detection in time series data. The service automates the process of applying models to your data, and provides a set of APIs and a web-based workspace for data ingestion, anomaly detection, and diagnostics - without needing to know machine learning. Developers can build AIOps, predictive maintenance, and business monitoring applications on top of the service. Use Metrics Advisor to:
-* Analyze multi-dimensional data from multiple data sources
+* Analyze multi-dimensional data from multiple data sources
* Identify and correlate anomalies * Configure and fine-tune the anomaly detection model used on your data
-* Diagnose anomalies and help with root cause analysis.
+* Diagnose anomalies and help with root cause analysis
:::image type="content" source="media/metrics-advisor-overview.png" alt-text="Metrics Advisor overview"::: ## Connect to a variety of data sources
-Metrics Advisor can connect to, and [ingest multi-dimensional metric](how-tos/onboard-your-data.md) data from many data stores, including: SQL Server, Azure Blob Storage, MongoDB and more.
+Metrics Advisor can connect to, and [ingest multi-dimensional metric](how-tos/onboard-your-data.md) data from many data stores, including: SQL Server, Azure Blob Storage, MongoDB and more.
## Easy-to-use and customizable anomaly detection
-* Metrics Advisor automatically selects the best model for your data, without needing to know any machine learning.
+* Metrics Advisor automatically selects the best model for your data, without needing to know any machine learning.
* Automatically monitor every time series within [multi-dimensional metrics](glossary.md#multi-dimensional-metric). * Use [parameter tuning](how-tos/configure-metrics.md) and [interactive feedback](how-tos/anomaly-feedback.md) to customize the model applied on your data, and future anomaly detection results.
+## Real-time alerts through multiple channels
-## Real time alerts through multiple channels
-
-Whenever anomalies are detected, Metrics Advisor is able to [send real time alerts](how-tos/alerts.md) through multiple channels using hooks, such as: email hooks, web hooks and Azure DevOps hooks. Flexible alert rules let you customize which alerts are sent, and where.
+Whenever anomalies are detected, Metrics Advisor is able to [send real time alerts](how-tos/alerts.md) through multiple channels using hooks, such as: email hooks, web hooks, and Azure DevOps hooks. Flexible alert rules let you customize which alerts are sent, and their destination.
## Smart diagnostic insights by analyzing anomalies
-Analyze anomalies detected on multi-dimensional metrics, and generate [smart diagnostic insights](how-tos/diagnose-incident.md) including most the most likely root cause, diagnostic trees, metric drilling, and more. By configuring [Metrics graph](how-tos/metrics-graph.md), cross metrics analysis can enabled to help you visualize incidents.
+Analyze anomalies detected on multi-dimensional metrics, and generate [smart diagnostic insights](how-tos/diagnose-incident.md) including the most likely root cause, diagnostic trees, metric drilling, and more. By configuring [Metrics graph](how-tos/metrics-graph.md), cross metrics analysis can be enabled to help you visualize incidents.
## Typical workflow The workflow is simple: after onboarding your data, you can fine-tune the anomaly detection, and create configurations to fit your scenario.
-1. [Create an Azure resource](../cognitive-services-apis-create-account.md) for Metrics Advisor.
+1. [Create an Azure resource](https://go.microsoft.com/fwlink/?linkid=2142156) for Metrics Advisor.
2. Build your first monitor using the web portal. 1. Onboard your data 2. Fine-tune anomaly detection
The workflow is simple: after onboarding your data, you can fine-tune the anomal
## Next steps * Explore a quickstart: [Monitor your first metric on web](quickstarts/web-portal.md).
-* Explore a quickstart: [Use the REST APIs to customize your solution](./quickstarts/rest-api-and-client-library.md).
+* Explore a quickstart: [Use the REST APIs to customize your solution](./quickstarts/rest-api-and-client-library.md).
communication-services Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/known-issues.md
-# Known issues: Azure Communication Services client libraries
-This article provides information about limitations and known issues related to the Azure Communication Services client libraries.
+# Known issues: Azure Communication Services SDKs
+This article provides information about limitations and known issues related to the Azure Communication Services SDKs.
> [!IMPORTANT]
> There are multiple factors that can affect the quality of your calling experience. Refer to the **[network requirements](https://docs.microsoft.com/azure/communication-services/concepts/voice-video-calling/network-requirements)** documentation to learn more about Communication Services network configuration and testing best practices.
-## JavaScript client library
+## JavaScript SDK
-This section provides information about known issues associated with JavaScript voice and video calling client libraries in Azure Communication Services.
+This section provides information about known issues associated with the Azure Communication Services JavaScript voice and video calling SDKs.
-### After refreshing the page, user is not removed from the call immediately
-If user is in a call and decides to refresh the page, the Communication Services client library may not be able to inform the Communication Services media service that it's about to disconnect. The Communication Services media service will not remove such user immediately from the call but it will wait for a user to rejoin assuming problems with network connectivity. User will be removed from the call after media service will timeout.
+### Refreshing a page doesn't immediately remove the user from their call
-We encourage developers build experiences that don't require end-users to refresh the page of your application while participating in a call. If user will refresh the page, the best way to handle it for the app is to reuse the same Communication Services user ID for the user after he returns back to the application after refreshes.
+If a user is in a call and decides to refresh the page, the Communication Services media service won't remove this user immediately from the call. It will wait for the user to rejoin. The user will be removed from the call after the media service times out.
-For the perspective of other participants in the call, such user will remain in the call for predefined amount of time (1-2 mins).
-If user will rejoin with the same Communication Services user ID, he will be represented as the same, existing object in the `remoteParticipants` collection.
-If previously user was sending video, `videoStreams` collection will keep previous stream information until service will timeout and remove it, in this scenario application may decide to observe any new streams added to the collection and render one with highest `id`.
+It's best to build user experiences that don't require end-users to refresh the page of your application while in a call. If a user refreshes the page, reuse the same Communication Services user ID after they return back to the application.
+
+From the perspective of other participants in the call, the user will remain in the call for a period of time (1-2 minutes).
+If the user rejoins with the same Communication Services user ID, they'll be represented as the same, existing object in the `remoteParticipants` collection.
+
+If the user was sending video before refreshing, the `videoStreams` collection will keep the previous stream information until the service times out and removes it. In this scenario, the application may decide to observe any new streams added to the collection and render one with the highest `id`.
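As a hedged illustration of that guidance (not the documented quickstart code), the sketch below watches a remote participant's `videoStreams` collection and renders the stream with the highest `id` whenever new streams are added. The `renderRemoteStream` helper is hypothetical, and `call` is assumed to be an already established Calling SDK call object.

```JavaScript
// Minimal sketch. Assumptions: `call` is an active Call from the Calling SDK,
// and `renderRemoteStream` is a hypothetical helper that creates and attaches
// a renderer view for a remote video stream.
function watchForNewestStream(remoteParticipant) {
  const renderNewest = () => {
    const streams = remoteParticipant.videoStreams;
    if (streams.length === 0) { return; }
    // A stale stream may linger after a page refresh until the service times out,
    // so pick the stream with the highest id (the most recently added one).
    const newest = streams.reduce((a, b) => (a.id > b.id ? a : b));
    renderRemoteStream(newest);
  };
  remoteParticipant.on('videoStreamsUpdated', renderNewest);
  renderNewest();
}

// Only covers participants already in the call; new joiners would need the same wiring.
call.remoteParticipants.forEach(watchForNewestStream);
```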
### It's not possible to render multiple previews from multiple devices on web
-This is a known limitation. Refer to the [calling client library overview](https://docs.microsoft.com/azure/communication-services/concepts/voice-video-calling/calling-sdk-features) for more information.
+This is a known limitation. For more information, refer to the [calling SDK overview](https://docs.microsoft.com/azure/communication-services/concepts/voice-video-calling/calling-sdk-features).
+
+### Enumerating devices isn't possible in Safari when the application runs on iOS or iPadOS
-### Enumeration of the microphone and speaker devices is not possible in Safari when the application runs on iOS or iPadOS
Applications can't enumerate/select mic/speaker devices (like Bluetooth) on Safari iOS/iPad. This is a known operating system limitation. If you're using Safari on macOS, your app will not be able to enumerate/select speakers through the Communication Services Device Manager. In this scenario, devices must be selected via the OS. If you use Chrome on macOS, the app can enumerate/select devices through the Communication Services Device Manager.

### Audio connectivity is lost when receiving SMS messages or calls during an ongoing VoIP call
-Mobile browsers don't maintain connectivity while in the background state. This can lead to a degraded call experience if the VoIP call was interrupted by text message or incoming PSTN call that pushes your application into the background.
+Mobile browsers don't maintain connectivity while in the background state. This can lead to a degraded call experience if the VoIP call was interrupted by an event that pushes your application into the background.
<br/>Client library: Calling (JavaScript) <br/>Browsers: Safari, Chrome
Mobile browsers don't maintain connectivity while in the background state. This
Switching between video devices may cause your video stream to pause while the stream is acquired from the selected device.

#### Possible causes
-Streaming from and switching between media devices is computationally intensive. Switching frequently can cause performance degradation. Developers are encouraged to stop one device stream before starting another.
+Switching between devices frequently can cause performance degradation. Developers are encouraged to stop one device stream before starting another.
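As a hedged sketch of that advice, the following stops the outgoing stream from the current camera before starting one from the newly selected camera. The `call`, `currentStream`, and `newCamera` values are assumed to come from your existing Calling SDK setup; only the `LocalVideoStream` type and the `startVideo`/`stopVideo` calls come from the SDK itself.

```JavaScript
// Minimal sketch, assuming `call` is an active Call, `currentStream` is the
// LocalVideoStream currently being sent, and `newCamera` is a VideoDeviceInfo
// returned by the device manager. Variable names are illustrative assumptions.
import { LocalVideoStream } from '@azure/communication-calling';

async function switchCamera(call, currentStream, newCamera) {
  // Stop streaming from the current device before acquiring the next one,
  // so the browser isn't streaming from two devices at the same time.
  await call.stopVideo(currentStream);
  const newStream = new LocalVideoStream(newCamera);
  await call.startVideo(newStream);
  return newStream;
}
```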
### Bluetooth headset microphone is not detected therefore is not audible during the call on Safari on iOS

Bluetooth headsets aren't supported by Safari on iOS. Your Bluetooth device will not be listed in available microphone options and other participants will not be able to hear you if you try using Bluetooth over Safari.
Users may experience degraded video quality when devices are rotated.
### Camera switching makes the screen freeze
-When a Communication Services user joins a call using the JavaScript calling client library and then hits the camera switch button, the UI may become completely unresponsive until the application is refreshed or browser is pushed to the background by user.
+When a Communication Services user joins a call using the JavaScript calling SDK and then hits the camera switch button, the UI may become unresponsive until the application is refreshed or browser is pushed to the background by user.
<br/>Devices affected: Google Pixel 4a <br/>Client library: Calling (JavaScript)
Under investigation.
### If the video signal was stopped while the call is in "connecting" state, the video will not be sent after the call started

If users quickly turn video on and off while the call is in the `Connecting` state, this may lead to a problem with the stream acquired for the call. We encourage developers to build their apps in a way that doesn't require video to be turned on or off while the call is in the `Connecting` state. This issue may cause degraded video performance in the following scenarios:
+ - If the user starts with audio and then starts and stops video while the call is in the `Connecting` state.
+ - If the user starts with audio and then starts and stops video while the call is in the `Lobby` state.
#### Possible causes
Under investigation.

### Sometimes it takes a long time to render remote participant videos
-During an ongoing group call, _User A_ sends video and then _User B_ joins the call. Sometimes, User B doesn't see video from User A, or User A's video begins rendering after a long delay. This could be caused by a network environment that requires further configuration. Refer to the [network requirements](https://docs.microsoft.com/azure/communication-services/concepts/voice-video-calling/network-requirements) documentation for network configuration guidance.
+During an ongoing group call, _User A_ sends video and then _User B_ joins the call. Sometimes, User B doesn't see video from User A, or User A's video begins rendering after a long delay. This issue could be caused by a network environment that requires further configuration. Refer to the [network requirements](https://docs.microsoft.com/azure/communication-services/concepts/voice-video-calling/network-requirements) documentation for network configuration guidance.
communication-services Sdk Options https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/sdk-options.md
# SDKs and REST APIs
-Azure Communication Services capabilities are conceptually organized into six areas. Most areas have fully open-sourced client libraries programmed against published REST APIs that you can use directly over the Internet. The Calling client library uses proprietary network interfaces and is currently closed-source. Samples and more technical details for SDKs are published in the [Azure Communication Services GitHub repo](https://github.com/Azure/communication).
+Azure Communication Services capabilities are conceptually organized into six areas. Most areas have fully open-sourced SDKs programmed against published REST APIs that you can use directly over the Internet. The Calling SDK uses proprietary network interfaces and is currently closed-source. Samples and more technical details for SDKs are published in the [Azure Communication Services GitHub repo](https://github.com/Azure/communication).
## REST APIs
Communication Services APIs are documented alongside other Azure REST APIs in [docs.microsoft.com](/rest/api/azure/). This documentation will tell you how to structure your HTTP messages and offers guidance for using Postman. This documentation is also offered in Swagger format on [GitHub](https://github.com/Azure/azure-rest-api-specs).
Communication Services APIs are documented alongside other Azure REST APIs in [d
| Assembly | Namespaces | Protocols | Capabilities |
|---|---|---|---|
| Azure Resource Manager | Azure.ResourceManager.Communication | [REST](https://docs.microsoft.com/rest/api/communication/communicationservice) | Provision and manage Communication Services resources |
-| Common | Azure.Communication.Common| REST | Provides base types for other client libraries |
+| Common | Azure.Communication.Common| REST | Provides base types for other SDKs |
| Identity | Azure.Communication.Identity | [REST](https://docs.microsoft.com/rest/api/communication/communicationidentity) | Manage users, access tokens |
| Phone numbers _(beta)_ | Azure.Communication.PhoneNumbers | [REST](https://docs.microsoft.com/rest/api/communication/phonenumberadministration) | Acquire and manage phone numbers |
| Chat | Azure.Communication.Chat | [REST](https://docs.microsoft.com/rest/api/communication/) with proprietary signaling | Add real-time text based chat to your applications |
| SMS | Azure.Communication.SMS | [REST](https://docs.microsoft.com/rest/api/communication/sms) | Send and receive SMS messages |
| Calling | Azure.Communication.Calling | Proprietary transport | Use voice, video, screen-sharing, and other real-time data communication capabilities |
-The Azure Resource Manager, Identity, and SMS client libraries are focused on service integration, and in many cases security issues arise if you integrate these functions into end-user applications. The Common and Chat client libraries are suitable for service and client applications. The Calling client library is designed for client applications. A client library focused on service scenarios is in development.
+The Azure Resource Manager, Identity, and SMS SDKs are focused on service integration, and in many cases security issues arise if you integrate these functions into end-user applications. The Common and Chat SDKs are suitable for service and client applications. The Calling SDK is designed for client applications. An SDK focused on service scenarios is in development.
### Languages and publishing locations
Certain REST APIs and corresponding SDK methods have throttle limits you should
| API | Throttle |
|---|---|
| [All Search Telephone Number Plan APIs](https://docs.microsoft.com/rest/api/communication/phonenumberadministration) | 4 requests/day |
-| [Purchase Telephone Number Plan](https://docs.microsoft.com/rest/api/communication/phonenumberadministration/purchasesearch) | 1 request/day |
+| [Purchase Telephone Number Plan](https://docs.microsoft.com/rest/api/communication/phonenumberadministration/purchasesearch) | 1 purchase a month |
| [Send SMS](https://docs.microsoft.com/rest/api/communication/sms/send) | 200 requests/minute |
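Given these limits (especially the 200 Send SMS requests per minute), a client may want to back off and retry when a request is throttled. The sketch below is a generic pattern only; the 429 status-code check and the timings are assumptions, not documented Communication Services behavior.

```JavaScript
// Generic retry-with-backoff sketch. `operation` is any async function; the
// statusCode === 429 check is an assumption about how a throttled call surfaces.
async function withBackoff(operation, maxAttempts = 5) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await operation();
    } catch (err) {
      const throttled = err && err.statusCode === 429;
      if (!throttled || attempt === maxAttempts) { throw err; }
      // Exponential backoff: 1s, 2s, 4s, ...
      const delayMs = 1000 * 2 ** (attempt - 1);
      await new Promise(resolve => setTimeout(resolve, delayMs));
    }
  }
}
```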
communication-services Plan Solution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/telephony-sms/plan-solution.md
Azure Communication Services allows you to use phone numbers to make voice calls
## Azure Subscriptions eligibility
-To acquire a phone number, you need to be on a paid Azure subscription. Phone numbers can't be acquired on trial accounts or by Azure free credits.
+To acquire a phone number, you need to be on a paid Azure subscription. Phone numbers can't be acquired on trial accounts or by Azure free credits.
Phone number availability is currently restricted to Azure subscriptions that have a billing address in the United States and Communication Services resources that have a US data location.
The table below summarizes these phone number types:
| Toll-Free | +1 (toll-free area *code*) XXX XX XX | US | Calling (Outbound), SMS (Inbound/Outbound)| Assigning phone numbers to Interactive Voice Response (IVR) systems/Bots, SMS applications |
-### Phone number features in Azure Communication Services
+### Phone number capabilities in Azure Communication Services
[!INCLUDE [Emergency Calling Notice](../../includes/emergency-calling-notice-include.md)]
-For most phone numbers, we allow you to configure an "a la carte" set of features. These features can be selected as you lease your telephone numbers within Azure Communication Services.
+For most phone numbers, we allow you to configure an "a la carte" set of capabilities. These capabilities can be selected as you lease your telephone numbers within Azure Communication Services.
-The features that are available to you depend on the country that you're operating within, your use case, and the phone number type that you've selected. These features vary by country due to regulatory requirements. Azure Communication Services offers the following phone number features:
+The capabilities that are available to you depend on the country that you're operating within, your use case, and the phone number type that you've selected. These capabilities vary by country due to regulatory requirements. Azure Communication Services offers the following phone number capabilities:
- **One-way outbound SMS** This option allows you to send SMS messages to your users. This can be useful in notification and two-factor authentication scenarios.
- **Two-way inbound and outbound SMS** This option allows you to send and receive messages from your users using phone numbers. This can be useful in customer service scenarios.
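To make the one-way outbound SMS capability concrete, here is a hedged sketch of sending a notification with the JavaScript SMS SDK once an SMS-enabled number has been acquired. The connection string variable, phone numbers, and message text are placeholders.

```JavaScript
// Minimal sketch of one-way outbound SMS. All values shown are placeholders.
const { SmsClient } = require('@azure/communication-sms');

async function sendNotification() {
  const smsClient = new SmsClient(process.env.COMMUNICATION_SERVICES_CONNECTION_STRING);
  const results = await smsClient.send({
    from: '+18005551234',   // an SMS-enabled number leased in your resource
    to: ['+14255550123'],   // recipient phone number(s)
    message: 'Your order has shipped.'
  });
  // Each result reports whether the message to that recipient was accepted.
  results.forEach(r => console.log(`${r.to}: ${r.successful ? 'sent' : r.errorMessage}`));
}

sendNotification().catch(console.error);
```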
communication-services About Call Types https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/voice-video-calling/about-call-types.md
Previously updated : 03/10/2021 Last updated : 03/25/2021
A one-to-one call on Azure Communication Services happens when one of your users
A group call on Azure Communication Services happens when three or more participants connect to one another. Any combination of VoIP and PSTN-connected users can be present on a group call. A one-to-one call can be converted into a group call by adding more participants to the call. One of those participants can be a bot.

### Supported video standards
-We support H.264 (MPEG-4)
+We support H.264 (MPEG-4).
### Video quality
We support up to Full HD 1080p on the native (iOS, Android) SDKs. For the Web (JS) SDK, we support Standard HD 720p. The quality depends on the available bandwidth.
-### Rooms concept
-Rooms are a set of APIs and SDKs that allow you to easily add audio, video, screen sharing, PSTN and SMS interactions to your website or native application.
-During the preview you can use the group ID to join the same conversation. You can create as many group IDs as you need and separate the users by the ΓÇ£roomsΓÇ¥. Moving forward will introduce more controls around ΓÇ£roomsΓÇ¥
- ## Next steps > [!div class="nextstepaction"]
communication-services Calling Sdk Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/voice-video-calling/calling-sdk-features.md
The following list presents the set of features which are currently available in
| | Set / update scaling mode | ✔️ | ✔️ | ✔️ |
| | Render remote video stream | ✔️ | ✔️ | ✔️ |
-## Calling client library streaming support
-The Communication Services Calling client library supports the following streaming configurations:
+## Calling SDK streaming support
+The Communication Services Calling SDK supports the following streaming configurations:
| Limit | Web | Android/iOS |
|--|--|--|
-|**# of outgoing streams that can be sent simultaneously** |1 video + 1 screen sharing | 1 video + 1 screen sharing|
-|**# of incoming streams that can be rendered simultaneously** |1 video + 1 screen sharing| 6 video + 1 screen sharing |
+|**# of outgoing streams that can be sent simultaneously** |1 video or 1 screen sharing | 1 video + 1 screen sharing|
+|**# of incoming streams that can be rendered simultaneously** |1 video or 1 screen sharing| 6 video + 1 screen sharing |
-## Calling client library timeouts
+## Calling SDK timeouts
-The following timeouts apply to the Communication Services Calling client libraries:
+The following timeouts apply to the Communication Services Calling SDKs:
| Action | Timeout in seconds |
|--|--|
For example, this iframe allows both camera and microphone access:
For more information, see the following articles: - Familiarize yourself with general [call flows](../call-flows.md) - Learn about [call types](../voice-video-calling/about-call-types.md)-- [Plan your PSTN solution](../telephony-sms/plan-solution.md)
+- [Plan your PSTN solution](../telephony-sms/plan-solution.md)
communication-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/overview.md
The following resources are a great place to get started with Azure Communicatio
| Resource |Description | | | |
-|**[Create a Communication Services resource](./quickstarts/create-communication-resource.md)**|You can begin using Azure Communication Services by using the Azure portal or Communication Services client library to provision your first Communication Services resource. Once you have your Communication Services resource connection string, you can provision your first user access tokens.|
+|**[Create a Communication Services resource](./quickstarts/create-communication-resource.md)**|You can begin using Azure Communication Services by using the Azure portal or Communication Services SDK to provision your first Communication Services resource. Once you have your Communication Services resource connection string, you can provision your first user access tokens.|
|**[Get a phone number](./quickstarts/telephony-sms/get-phone-number.md)**|You can use Azure Communication Services to provision and release telephone numbers. These telephone numbers can be used to initiate outbound calls and build SMS communications solutions.|
-|**[Send an SMS from your app](./quickstarts/telephony-sms/send.md)**|The Azure Communication Services SMS client library allows you to send and receive SMS messages from your .NET and JavaScript applications.|
+|**[Send an SMS from your app](./quickstarts/telephony-sms/send.md)**|The Azure Communication Services SMS SDK allows you to send and receive SMS messages from your .NET and JavaScript applications.|
After creating a Communication Services resource, you can start building client scenarios, such as voice and video calling or text chat. | Resource |Description | | | |
-|**[Create your first user access token](./quickstarts/access-tokens.md)**|User access tokens are used to authenticate your services against your Azure Communication Services resource. These tokens are provisioned and reissued using the Communication Services client library.|
-|**[Get started with voice and video calling](./quickstarts/voice-video-calling/getting-started-with-calling.md)**| Azure Communication Services allows you to add voice and video calling to your apps using the Calling client library. This library is powered by WebRTC and allows you to establish peer-to-peer, multimedia, real-time communications within your applications.|
+|**[Create your first user access token](./quickstarts/access-tokens.md)**|User access tokens are used to authenticate your services against your Azure Communication Services resource. These tokens are provisioned and reissued using the Communication Services SDK.|
+|**[Get started with voice and video calling](./quickstarts/voice-video-calling/getting-started-with-calling.md)**| Azure Communication Services allows you to add voice and video calling to your apps using the Calling SDK. This library is powered by WebRTC and allows you to establish peer-to-peer, multimedia, real-time communications within your applications.|
|**[Join your calling app to a Teams meeting](./quickstarts/voice-video-calling/get-started-teams-interop.md)**|Azure Communication Services can be used to build custom meeting experiences that interact with Microsoft Teams. Users of your Communication Services solution(s) can interact with Teams participants over voice, video, chat, and screen sharing.|
-|**[Get started with chat](./quickstarts/chat/get-started.md)**|The Azure Communication Services Chat client library can be used to integrate real-time chat into your applications.|
+|**[Get started with chat](./quickstarts/chat/get-started.md)**|The Azure Communication Services Chat SDK can be used to integrate real-time chat into your applications.|
## Samples
The following resources will help you learn about the Azure Communication Servic
| Resource | Description | | | |
-|**[Client libraries and REST APIs](./concepts/sdk-options.md)**|Azure Communication Services capabilities are conceptually organized into six areas, each represented by an SDK. You can decide which SDKs to use based on your real-time communication needs.|
+|**[SDKs and REST APIs](./concepts/sdk-options.md)**|Azure Communication Services capabilities are conceptually organized into six areas, each represented by an SDK. You can decide which SDKs to use based on your real-time communication needs.|
|**[Calling SDK overview](./concepts/voice-video-calling/calling-sdk-features.md)**|Review the Communication Services Calling SDK overview.| |**[Chat SDK overview](./concepts/chat/sdk-features.md)**|Review the Communication Services Chat SDK overview.| |**[SMS SDK overview](./concepts/telephony-sms/sdk-features.md)**|Review the Communication Services SMS SDK overview.|
communication-services Get Phone Number https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/telephony-sms/get-phone-number.md
Title: Quickstart - Get a phone number from Azure Communication Services
-description: Learn how to buy a Communication Services phone number using the Azure portal.
+ Title: Quickstart - Manage Phone Numbers using Azure Communication Services
+description: Learn how to manage phone numbers using Azure Communication Services
Last updated 03/10/2021
-
+zone_pivot_groups: acs-azp-java-net-python-csharp-js
-# Quickstart: Get a phone number using the Azure portal
+# Quickstart: Manage Phone Numbers
[!INCLUDE [Regional Availability Notice](../../includes/regional-availability-include.md)]
-Get started with Azure Communication Services by using the Azure portal to purchase a telephone number.
-
-## Prerequisites
--- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- [An active Communication Services resource.](../create-communication-resource.md)-
-## Get a phone number
-
-To begin provisioning numbers, go to your Communication Services resource on the [Azure portal](https://portal.azure.com).
--
-### Getting new phone numbers
-
-Navigate to the **Phone Numbers** blade in the resource menu.
--
-Press the **Get** button to launch the wizard. The wizard on the **Phone numbers** blade will walk you through a series of questions that helps you choose the phone number that best fits your scenario.
-
-You will first need to choose the **Country/region** where you would like to provision the phone number. After selecting the Country/region, you will then need to select the **Use case** which best suites your needs.
--
-### Select your phone number features
-
-Configuring your phone number is broken down into two steps:
-
-1. The selection of the [number type](../../concepts/telephony-sms/plan-solution.md#phone-number-types-in-azure-communication-services)
-2. The selection of the [number features](../../concepts/telephony-sms/plan-solution.md#phone-number-features-in-azure-communication-services)
-
-You can select from two phone number types: **Geographic**, and **Toll-free**. When you've selected a number type, you can then choose the feature.
-
-In our example, we've selected a **Toll-free** number type with the **Outbound calling** and **Inbound and Outbound SMS** features.
--
-From here, click the **Next: Numbers** button at the bottom of the page to customize the phone number(s) you would like to provision.
-
-### Customizing phone numbers
-
-On the **Numbers** page, you will customize the phone number(s) which you'd like to provision.
--
-> [!NOTE]
-> This quickstart is showing the **Toll-free** Number type customization flow. The experience may be slightly different if you have chosen the **Geographic** Number type, but the end-result will be the same.
-
-Choose the **Area code** from the list of available Area codes and enter the quantity which you'd like to provision, then click **Search** to find numbers which meet your selected requirements. The phone numbers which meet your needs will be shown along with their monthly cost.
--
-> [!NOTE]
-> Availability depends on the Number type, location, and the features that you have selected.
-> Numbers are reserved for a short time before the transaction expires. If the transaction expires, you will need to re-select the numbers.
-
-To view the purchase summary and place your order, click the **Next: Summary** button at the bottom of the page.
-
-### Place order
-
-The summary page will review the Number type, Features, Phone Numbers, and Total monthly cost to provision the phone numbers.
-
-> [!NOTE]
-> The prices shown are the **monthly recurring charges** which cover the cost of leasing the selected phone number to you. Not included in this view is the **Pay-as-you-go costs** which are incurred when you make or receive calls. The price lists are [available here](../../concepts/pricing.md). These costs depend on number type and destinations called. For example, price-per-minute for a call from a Seattle regional number to a regional number in New York and a call from the same number to a UK mobile number may be different.
-
-Finally, click **Place order** at the bottom of the page to confirm.
--
-## Find your phone numbers on the Azure portal
-
-Navigate to your Azure Communication Resource on the [Azure portal](https://portal.azure.com):
--
-Select the Phone Numbers blade in the menu to manage your phone numbers.
--
-> [!NOTE]
-> It may take a few minutes for the provisioned numbers to be shown on this page.
+Get started with Azure Communication Services by using the Azure portal or the Communication Services Phone Numbers Client Library to manage telephone numbers.
-### Customizing phone numbers
-On the **Numbers** page, you can select a phone number to configure it.
-Select the features from the available options, then click **Confirm** to apply your selection.
## Troubleshooting
communication-services Get Started With Video Calling https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/voice-video-calling/get-started-with-video-calling.md
function subscribeToParticipantVideoStreams(remoteParticipant) {
You have to subscribe to an `isAvailableChanged` event to render the `remoteVideoStream`. If the `isAvailable` property changes to `true`, a remote participant is sending a stream. Whenever the availability of a remote stream changes, you can choose to destroy the whole `Renderer`, a specific `RendererView`, or keep them, but this will result in displaying a blank video frame.

```JavaScript
function handleVideoStream(remoteVideoStream) {
- remoteVideoStream.on('availabilityChanged', async () => {
+ remoteVideoStream.on('isAvailableChanged', async () => {
if (remoteVideoStream.isAvailable) { remoteVideoView(remoteVideoStream); } else {
cosmos-db Synapse Link Power Bi https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/synapse-link-power-bi.md
Title: Power BI and serverless SQL pool to analyze Azure Cosmos DB data with Synapse Link description: Learn how to build a serverless SQL pool database and views over Synapse Link for Azure Cosmos DB, query the Azure Cosmos DB containers and then build a model with Power BI over those views.-+ Last updated 11/30/2020-+
cost-management-billing Manage Azure Subscription Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/manage-azure-subscription-policy.md
+
+ Title: Manage Azure subscription policies
+description: Learn how to manage Azure subscription policies to control the movement of Azure subscriptions from and into directories.
++++ Last updated : 03/10/2021++++
+# Manage Azure subscription policies
+
+>[!NOTE]
+>This feature is currently in preview and is being gradually rolled out, so not everyone may see this experience on the Azure portal yet.
+
+This article helps you configure Azure subscription policies for subscription operations to control the movement of Azure subscriptions from and into directories.
+
+## Prerequisites
+
+- Only directory [global administrators](../../active-directory/roles/permissions-reference.md#global-administrator) can edit subscription policies. Before editing subscription policies, the global administrator must [Elevate access to manage all Azure subscriptions and management groups](../../role-based-access-control/elevate-access-global-admin.md). Then they can edit subscription policies.
+- All other users can only read the current policy setting.
+
+## Available subscription policy settings
+
+Use the following policy settings to control the movement of Azure subscriptions from and into directories.
+
+### Subscriptions leaving AAD directory
+
+The policy allows or stops users from moving subscriptions out of the current directory. [Subscription owners](../../role-based-access-control/built-in-roles.md#owner) can [change the directory of an Azure subscription](../../active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md) to another one where they're a member. It poses governance challenges, so global administrators can allow or disallow directory users from changing the directory.
+
+### Subscriptions entering AAD directory
+
+The policy allows or stops users from other directories, who have access in the current directory, from moving subscriptions into the current directory. [Subscription owners](../../role-based-access-control/built-in-roles.md#owner) can [change the directory of an Azure subscription](../../active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md) to another one where they're a member. It poses governance challenges, so global administrators can allow or disallow directory users from changing the directory.
+
+### Exempted Users
+
+For governance reasons, global administrators can block all subscription directory moves - into or out of the current directory. However, they might want to allow specific users to do either operation. For either situation, they can configure a list of exempted users that allows those users to bypass the policy setting that applies to everyone else.
+
+## Setting subscription policy
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+1. Navigate to **Subscriptions**. **Manage Policies** is shown on the command bar.
+ :::image type="content" source="./media/manage-azure-subscription-policy/subscription-blade-manage-policies.png" alt-text="Screenshot showing Manage Polices in Subscriptions." lightbox="./media/manage-azure-subscription-policy/subscription-blade-manage-policies.png" :::
+1. Select **Manage Policies** to view details about the current subscription policies set for the directory. A global administrator with [elevated permissions](../../role-based-access-control/elevate-access-global-admin.md) can make edits to the settings including adding or removing exempted users.
+ :::image type="content" source="./media/manage-azure-subscription-policy/subscription-blade-policies.png" alt-text="Screenshot showing specific policy settings and exempted users." lightbox="./media/manage-azure-subscription-policy/subscription-blade-policies.png" :::
+1. Select **Save changes** at the bottom to save changes. The changes are effective immediately.
+
+## Read subscription policy
+
+Non-global administrators can still navigate to the subscription policy area to view the directory's policy settings. They can't make any edits. They can't see the list of exempted users for privacy reasons. They can view who their global administrators are and submit policy change requests to them, as long as the directory settings allow them to.
++
+## Next steps
+
+- Read the [Cost Management + Billing documentation](../index.yml)
data-factory Data Flow Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-troubleshoot-guide.md
Last updated 03/18/2021
# Troubleshoot mapping data flows in Azure Data Factory
This article explores common troubleshooting methods for mapping data flows in Azure Data Factory.
For more help with troubleshooting, see these resources:
* [Data Factory feature requests](https://feedback.azure.com/forums/270578-data-factory) * [Azure videos](https://azure.microsoft.com/resources/videos/index/?sort=newest&services=data-factory) * [Stack Overflow forum for Data Factory](https://stackoverflow.com/questions/tagged/azure-data-factory)
-* [Twitter information about Data Factory](https://twitter.com/hashtag/DataFactory)
+* [Twitter information about Data Factory](https://twitter.com/hashtag/DataFactory)
databox-online Azure Stack Edge Gpu Install Update https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-install-update.md
Previously updated : 03/23/2021 Last updated : 03/25/2021 # Update your Azure Stack Edge Pro GPU
The procedure described in this article was performed using a different version
> > For information on what's new in this update, go to [Release notes](azure-stack-edge-gpu-2103-release-notes.md). > - To apply 2103 update, your device must be running 2010. If you are not running the minimal supported version, you'll see this error: *Update package cannot be installed as its dependencies are not met*.
+> - This update requires you to apply two updates sequentially. First you apply the device software updates and then the Kubernetes updates.
> - Keep in mind that installing an update or hotfix restarts your device. This update contains the device software updates and the Kubernetes updates. Given that the Azure Stack Edge Pro is a single node device, any I/O in progress is disrupted and your device experiences a downtime of up to 1.5 hours for the update. To install updates on your device, you first need to configure the location of the update server. After the update server is configured, you can apply the updates via the Azure portal UI or the local web UI.
Each of these steps is described in the following sections.
We recommend that you install updates through the Azure portal. The device automatically scans for updates once a day. Once the updates are available, you see a notification in the portal. You can then download and install the updates. > [!NOTE]
-> Make sure that the device is healthy and status shows as **Online** before you proceed to install the updates.
+> Make sure that the device is healthy and status shows as **Your device is running fine!** before you proceed to install the updates.
1. When the updates are available for your device, you see a notification. Select the notification, or select **Update device** from the top command bar. This will allow you to apply device software updates.
We recommend that you install updates through the Azure portal. The device autom
4. After the download is complete, the notification banner updates to indicate the completion. If you chose to download and install the updates, the installation will begin automatically.
- ![Software version after update 7](./media/azure-stack-edge-gpu-install-update/portal-update-6.png)
- If you chose to download updates only, then select the notification to open the **Device updates** blade. Select **Install**.
- ![Software version after update 8](./media/azure-stack-edge-gpu-install-update/portal-update-7.png)
-
-5. You see a notification that the install is in progress.
-
- ![Software version after update 9](./media/azure-stack-edge-gpu-install-update/portal-update-8.png)
-
- The portal also displays an informational alert to indicate that the install is in progress. The device goes offline and is in maintenance mode.
+5. You see a notification that the install is in progress. The portal also displays an informational alert to indicate that the install is in progress. The device goes offline and is in maintenance mode.
![Software version after update 10](./media/azure-stack-edge-gpu-install-update/portal-update-9.png)
We recommend that you install updates through the Azure portal. The device autom
![Software version after update 12](./media/azure-stack-edge-gpu-install-update/portal-update-11.png)
-7. After the restart, if you select the **Update device** from the top command bar, you can see the progress of the updates.
+7. After the restart, the device software will finish updating. After the update is complete, you can verify from the local web UI that the device software is updated. The Kubernetes software version has not yet been updated.
+
+ ![Software version after update 13](./media/azure-stack-edge-gpu-install-update/portal-update-12.png)
+
+8. You will see a notification banner indicating that device updates are available. Select this banner to start updating the Kubernetes software on your device.
+
+ ![Software version after update 13a](./media/azure-stack-edge-gpu-install-update/portal-update-13.png)
++
+ ![Software version after update 14](./media/azure-stack-edge-gpu-install-update/portal-update-14-a.png)
+
+ If you select the **Update device** from the top command bar, you can see the progress of the updates.
+
+ ![Software version after update 15](./media/azure-stack-edge-gpu-install-update/portal-update-14-b.png)
+
-8. The device status updates to **Online** after the updates are installed.
+8. The device status updates to **Your device is running fine** after the updates are installed.
- ![Software version after update 13](./media/azure-stack-edge-gpu-install-update/portal-update-14.png)
+ ![Software version after update 16](./media/azure-stack-edge-gpu-install-update/portal-update-15.png)
- From the top command bar, select **Device updates**. Verify that update has successfully installed and the device software version reflects that.
+ Go to the local web UI and then go to the **Software update** page. Verify that the Kubernetes update has successfully installed and that the software version reflects that.
- ![Software version after update 14](./media/azure-stack-edge-gpu-install-update/portal-update-15.png)
+ ![Software version after update 17](./media/azure-stack-edge-gpu-install-update/portal-update-16.png)
Once the device software and Kubernetes updates are successfully installed, the banner notification disappears. Your device now has the latest version of device software and Kubernetes.
databox Data Box Heavy System Requirements https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox/data-box-heavy-system-requirements.md
Previously updated : 07/03/2019 Last updated : 03/25/2021 # Azure Data Box Heavy system requirements
databox Data Box System Requirements https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox/data-box-system-requirements.md
Previously updated : 02/22/2021 Last updated : 03/25/2021 # Azure Data Box system requirements
The software requirements include supported operating systems, file transfer pro
[!INCLUDE [data-box-supported-file-systems-clients](../../includes/data-box-supported-file-systems-clients.md)] > [!IMPORTANT]
-> Connection to Data Box shares is not supported via REST for export orders.
+> Connection to Data Box shares is not supported via REST for export orders.
### Supported storage accounts
dedicated-hsm Deployment Architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dedicated-hsm/deployment-architecture.md
na ms.devlang: na Previously updated : 02/05/2020- Last updated : 03/25/2021+
Azure Dedicated HSM provides cryptographic key storage in Azure. It meets stringent security requirements. Customers will benefit from using Azure Dedicated HSM if they:
-* Must meet FIPS 140-2 Level 3 certification
+* Must meet [FIPS 140-2 Level-3](https://csrc.nist.gov/publications/detail/fips/140/2/final) certification
* Require that they have exclusive access to the HSM
* Should have complete control of their devices

The HSMs are distributed across Microsoft's data centers and can be easily provisioned as a pair of devices as the basis of a highly available solution. They may also be deployed across regions for a disaster resilient solution. The regions with Dedicated HSM available currently can be checked using the [Products by Region page](https://azure.microsoft.com/global-infrastructure/services/?products=azure-dedicated-hsm).
-Each of the regions has HSM racks deployed in either two independent data centers or at least two independent availability zones. For example, South East Asia has three availability zones and East US 2 has two. There is a total of eight regions across Europe, Asia, and the USA that offer the Dedicated HSM service and this changes as we add new HSM racks in new regions. For more information on Azure regions, see the official [Azure regions information](https://azure.microsoft.com/global-infrastructure/regions/).
+* East US
+* East US 2
+* West US
+* West US 2
+* South Central US
+* Southeast Asia
+* East Asia
+* India Central
+* India South
+* Japan East
+* Japan West
+* North Europe
+* West Europe
+* UK South
+* UK West
+* Canada Central
+* Canada East
+* Australia East
+* Australia Southeast
+* Switzerland North
+* Switzerland West
+* US Gov Virginia
+* US Gov Texas
+
+Each of these regions has HSM racks deployed in either two independent data centers or at least two independent availability zones. South East Asia has three availability zones and East US 2 has two. There is a total of twenty three regions across Europe, Asia, and North America that offer the Dedicated HSM service. For more information on Azure regions, see the official [Azure regions information](https://azure.microsoft.com/global-infrastructure/regions/).
Some design factors for any Dedicated HSM-based solution are location/latency, high availability, and support for other distributed applications.

## Device location
Optimal HSM device location is in closest proximity to the applications performi
## High availability
-To achieve high availability, a customer must use two HSM devices in a region that are configured using Thanles software as a high availability pair. This type of deployment ensures the availability of keys if a single device experiences a problem preventing it from processing key operations. It also significantly reduces risk when performing break/fix maintenance such as power supply replacement. It is important for a design to account for any kind of regional level failure. Regional level failures can happen when there are natural disasters such as hurricanes, floods, or earthquakes. These types of events should be mitigated by provisioning HSM devices in another region. Devices deployed in another region may be paired together via Thales software configuration. This means that the minimum deployment for a highly available and disaster resilient solution is four HSM devices across two regions. Local redundancy and redundancy across regions can be used as a baseline to add any further HSM device deployments to support latency, capacity or to meet other application-specific requirements.
+To achieve high availability, a customer must use two HSM devices in a region that are configured using Thales software as a high availability pair. This type of deployment ensures the availability of keys if a single device experiences a problem preventing it from processing key operations. It also significantly reduces risk when performing break/fix maintenance such as power supply replacement. It is important for a design to account for any kind of regional level failure. Regional level failures can happen when there are natural disasters such as hurricanes, floods, or earthquakes. These types of events should be mitigated by provisioning HSM devices in another region. Devices deployed in another region may be paired together via Thales software configuration. This means that the minimum deployment for a highly available and disaster resilient solution is four HSM devices across two regions. Local redundancy and redundancy across regions can be used as a baseline to add any further HSM device deployments to support latency, capacity or to meet other application-specific requirements.
## Distributed application support
Dedicated HSM devices are typically deployed in support of applications that nee
## Next steps
-Once deployment architecture is determined, most configuration activities to implement that architecture will be provided by Thales. This includes device configuration as well as application integration scenarios. For more information, use the [Thales customer support](https://supportportal.gemalto.com/csm/) portal and download administration and configuration guides. The Microsoft partner site has a variety of integration guides.
+Once deployment architecture is determined, most configuration activities to implement that architecture will be provided by Thales. This includes device configuration as well as application integration scenarios. For more information, use the [Thales customer support](https://supportportal.thalesgroup.com/csm) portal and download administration and configuration guides. The Microsoft partner site has a variety of integration guides.
It is recommended that all key concepts of the service, such as high availability and security, are well understood before device provisioning or application design and deployment. Further concept-level topics:
dedicated-hsm Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dedicated-hsm/faq.md
na ms.devlang: na Previously updated : 12/10/2020 Last updated : 03/25/2021 #Customer intent: As an IT Pro, Decision maker I am looking for key storage capability within Azure Cloud that meets FIPS 140-2 Level 3 certification and that gives me exclusive access to the hardware.
A Hardware Security Module (HSM) is a physical computing device used to safeguar
### Q: What is the Azure Dedicated HSM offering?
-Azure Dedicated HSM is a cloud-based service that provides HSMs hosted in Azure datacenters that are directly connected to a customer's virtual network. These HSMs are dedicated network appliances (Thales Network Luna HSM 7). They are deployed directly to a customers' private IP address space and Microsoft does not have any access to the cryptographic functionality of the HSMs. Only the customer has full administrative and cryptographic control over these devices. Customers are responsible for the management of the device and they can get full activity logs directly from their devices. Dedicated HSMs help customers meet compliance/regulatory requirements such as FIPS 140-2 Level 3, HIPAA, PCI-DSS, and eIDAS and many others.
+Azure Dedicated HSM is a cloud-based service that provides HSMs hosted in Azure datacenters that are directly connected to a customer's virtual network. These HSMs are dedicated [Thales Luna 7 HSM](https://cpl.thalesgroup.com/encryption/hardware-security-modules/network-hsms) network appliances. They are deployed directly to a customers' private IP address space and Microsoft does not have any access to the cryptographic functionality of the HSMs. Only the customer has full administrative and cryptographic control over these devices. Customers are responsible for the management of the device and they can get full activity logs directly from their devices. Dedicated HSMs help customers meet compliance/regulatory requirements such as FIPS 140-2 Level 3, HIPAA, PCI-DSS, and eIDAS and many others.
### Q: What hardware is used for Dedicated HSM?
-Microsoft has partnered with Thales to deliver the Azure Dedicated HSM service. The specific device used is the [Thales Network Luna HSM 7](https://cpl.thalesgroup.com/encryption/hardware-security-modules/network-hsms). This device not only provides FIPS 140-2 Level 3 validated firmware, but also offers low-latency, high performance, and high capacity via 10 partitions.
+Microsoft has partnered with Thales to deliver the Azure Dedicated HSM service. The specific device used is the [Thales Luna 7 HSM model A790](https://cpl.thalesgroup.com/encryption/hardware-security-modules/network-hsms). This device not only provides [FIPS 140-2 Level-3](https://csrc.nist.gov/publications/detail/fips/140/2/final) validated firmware, but also offers low-latency, high performance, and high capacity via 10 partitions.
### Q: What is an HSM used for?
Customers can provision HSMs in specific regions using PowerShell or command-lin
### Q: What software is provided with the Dedicated HSM service?
-Thales supplies all software for the HSM device once provisioned by Microsoft. The software is available at the [Thales customer support portal](https://supportportal.gemalto.com/csm/). Customers using the Dedicated HSM service are required to be registered for Thales support and have a Customer ID that enables access and download of relevant software. The supported client software is version 7.2, which is compatible with the FIPS 140-2 Level 3 validated firmware version 7.0.3.
+Thales supplies all software for the HSM device once provisioned by Microsoft. The software is available at the [Thales customer support portal](https://supportportal.thalesgroup.com/csm). Customers using the Dedicated HSM service are required to be registered for Thales support and have a Customer ID that enables access and download of relevant software. The supported client software is version 7.2, which is compatible with the FIPS 140-2 Level 3 validated firmware version 7.0.3.
### Q: What extra costs may be incurred with Dedicated HSM service?
At this time, Azure Dedicated HSM only provides HSMs with password-based authent
### Q: Will Azure Dedicated HSM host my HSMs for me?
-Microsoft only offers the Thales Network Luna HSM 7 via the Dedicated HSM service and cannot host any customer-provided devices.
+Microsoft only offers the Thales Luna 7 HSM model A790 via the Dedicated HSM service and cannot host any customer-provided devices.
### Q: Does Azure Dedicated HSM support payment (PIN/EFT) features?
-The Azure Dedicated HSM service uses Thales Network Luna HSM 7 devices. These devices do not support payment HSM-specific functionality (such as PIN or EFT) or certifications. If you would like Azure Dedicated HSM service to support payment HSMs in future, pass on the feedback to your Microsoft Account Representative.
+The Azure Dedicated HSM service uses Thales Luna 7 HSMs. These devices do not support payment HSM-specific functionality (such as PIN or EFT) or certifications. If you would like the Azure Dedicated HSM service to support payment HSMs in the future, pass on the feedback to your Microsoft Account Representative.
### Q: Which Azure regions is Dedicated HSM available in?
As of late March 2019, Dedicated HSM is available in the 14 regions listed below
### Q: How does my application connect to a Dedicated HSM?
-You use Thales provided HSM client tools/SDK/software to perform cryptographic operations from your applications. The software is available at the [Thales customer support portal](https://supportportal.gemalto.com/csm/). Customers using the Dedicated HSM service are required to be registered for Thales support and have a Customer ID that enables access and download of relevant software.
+You use Thales provided HSM client tools/SDK/software to perform cryptographic operations from your applications. The software is available at the [Thales customer support portal](https://supportportal.thalesgroup.com/csm). Customers using the Dedicated HSM service are required to be registered for Thales support and have a Customer ID that enables access and download of relevant software.
### Q: Can an application connect to Dedicated HSM from a different VNET in or across regions?
No. Azure Dedicated HSMs are only accessible from inside your virtual network.
### Q: Can I import keys from an existing On-premises HSM to Dedicated HSM?
-Yes, if you have on-premises Thales Network Luna HSM 7 HSMs. There are multiple methods. Refer to the [Thales HSM documentation](https://thalesdocs.com/gphsm/luna/7.2/docs/network/Content/Home_network.htm).
+Yes, if you have on-premises Thales Luna 7 HSMs. There are multiple methods. Refer to the [Thales HSM documentation](https://thalesdocs.com/gphsm/luna/7.2/docs/network/Content/Home_network.htm).
### Q: What operating systems are supported by Dedicated HSM client software?
To have high availability, you need to set up your HSM client application config
### Q: What authentication mechanisms are supported by Dedicated HSM?
-Azure Dedicated HSM uses SafeNet Network HSM 7 appliances (model A790) and they support password-based authentication.
+Azure Dedicated HSM uses [Thales Luna 7 HSM model A790](https://cpl.thalesgroup.com/encryption/hardware-security-modules/network-hsms) devices and they support password-based authentication.
### Q: What SDKs, APIs, client software is available to use with Dedicated HSM?
Yes. High availability configuration and setup are performed in the HSM client s
### Q: Can I add HSMs from my on-premises network to a high availability group with Azure Dedicated HSM?
-Yes. They must meet the high availability requirements for SafeNet Luna Network HSM 7.
+Yes. They must meet the high availability requirements for [Thales Luna 7 HSMs](https://cpl.thalesgroup.com/encryption/hardware-security-modules/network-hsms).
### Q: Can I add Luna 5/6 HSMs from on-premises networks to a high availability group with Azure Dedicated HSM?
Azure datacenters have extensive physical and procedural security controls. In a
### Q: What happens if there is a security breach or hardware tampering event?
-Dedicated HSM service uses Thales Network Luna HSM 7 appliances. These appliances support physical and logical tamper detection. If there is ever a tamper event the HSMs are automatically zeroized.
+Dedicated HSM service uses [Thales Luna 7 HSM](https://cpl.thalesgroup.com/encryption/hardware-security-modules/network-hsms) appliances. These devices support physical and logical tamper detection. If there is ever a tamper event the HSMs are automatically zeroized.
### Q: How do I ensure that keys in my Dedicated HSMs are not lost due to error or a malicious insider attack?
It is highly recommended to use an on-premises HSM backup device to perform regu
Support is provided by both Microsoft and Thales. If you have an issue with the hardware or network access, raise a support request with Microsoft and if you have an issue with HSM configuration, software, and application development raise a support request with Thales. If you have an undetermined issue, raise a support request with Microsoft and then Thales can be engaged as required.
-### Q: How do I get the client software, documentation and access to integration guidance for the Thales Network Luna HSM 7?
+### Q: How do I get the client software, documentation and access to integration guidance for the Thales Luna 7 HSM?
After registering for the service, a Thales Customer ID will be provided that allows for registration in the Thales customer support portal. This will enable access to all software and documentation as well as enabling support requests directly with Thales.
The HSM has a command-line reboot option, however, we are experiencing issues wh
### Q: Is it safe to store encryption keys for my most important data in Dedicated HSM?
-Yes, Dedicated HSM provisions Thales Network Luna HSM 7 appliances that use FIPS 140-2 Level 3 validated HSMs.
+Yes, Dedicated HSM provisions Thales Luna 7 HSMs that are [FIPS 140-2 Level-3](https://csrc.nist.gov/publications/detail/fips/140/2/final) validated.
### Q: What cryptographic keys and algorithms are supported by Dedicated HSM?
-Dedicated HSM service provisions Thales Network Luna HSM 7 appliances. They support a wide range of cryptographic key types and algorithms including:
+Dedicated HSM service provisions Thales Luna 7 HSM appliances. They support a wide range of cryptographic key types and algorithms including:
Full Suite B support * Asymmetric:
Full Suite B support
### Q: Is Dedicated HSM FIPS 140-2 Level 3 validated?
-Yes. Dedicated HSM service provisions Thales Network Luna HSM 7 appliances that use FIPS 140-2 Level 3 validated HSMs.
+Yes. Dedicated HSM service provisions [Thales Luna 7 HSM model A790](https://cpl.thalesgroup.com/encryption/hardware-security-modules/network-hsms) appliances that are [FIPS 140-2 Level-3](https://csrc.nist.gov/publications/detail/fips/140/2/final) validated.
### Q: What do I need to do to make sure I operate Dedicated HSM in FIPS 140-2 Level 3 validated mode?
-The Dedicated HSM service provisions Thales Network Luna HSM 7 appliances. These appliances use FIPS 140-2 Level 3 validated HSMs. The default deployed configuration, operating system, and firmware are also FIPS validated. You do not need to take any action for FIPS 140-2 Level 3 compliance.
+The Dedicated HSM service provisions Thales Luna 7 HSM appliances. These devices are FIPS 140-2 Level 3 validated HSMs. The default deployed configuration, operating system, and firmware are also FIPS validated. You do not need to take any action for FIPS 140-2 Level 3 compliance.
### Q: How does a customer ensure that when an HSM is deprovisioned all the key material is wiped out?
Before requesting deprovisioning, a customer must have zeroized the HSM using Th
### Q: How many cryptographic operations are supported per second with Dedicated HSM?
-Dedicated HSM provisions Thales Network Luna HSM 7 HSMs. Here's a summary of maximum performance for some operations:
+Dedicated HSM provisions Thales Luna 7 HSMs. Here's a summary of maximum performance for some operations:
* RSA-2048: 10,000 transactions per second * ECC P256: 20,000 transactions per second
Dedicated HSM provisions Thales Network Luna HSM 7 HSMs. Here's a summary of max
### Q: How many partitions can be created in Dedicated HSM?
-The SafeNet Luna HSM 7 model A790 used includes a license for 10 partitions in the cost of the service. The device has a limit of 100 partitions and adding partitions up to this limit would incur extra licensing costs and require installation of a new license file on the device.
+The [Thales Luna 7 HSM model A790](https://cpl.thalesgroup.com/encryption/hardware-security-modules/network-hsms) used includes a license for 10 partitions in the cost of the service. The device has a limit of 100 partitions and adding partitions up to this limit would incur extra licensing costs and require installation of a new license file on the device.
### Q: How many keys can be supported in Dedicated HSM?
dedicated-hsm High Availability https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dedicated-hsm/high-availability.md
na ms.devlang: na Previously updated : 01/15/2021- Last updated : 03/25/2021+ # Azure Dedicated HSM high availability
Azure Dedicated HSM is underpinned by Microsoft's highly available datacenters
## High availability example
-Information on how to configure HSM devices for high availability at the software level is in the 'Thales Luna 7 HSM Administration Guide'. This document is available at the [Thales HSM Page](https://thalesdocs.com/gphsm/Content/luna/network/luna_network_releases.htm).
+Information on how to configure HSM devices for high availability at the software level is in the 'Thales Luna 7 HSM Administration Guide'. This document is available at the [Thales HSM Page](https://cpl.thalesgroup.com/encryption/hardware-security-modules/network-hsms).
The following diagram shows a highly available architecture. It uses multiple devices in region and multiple devices paired in a separate region. This architecture uses a minimum of four HSM devices and virtual networking components.
Further concept level topics:
* [Supportability](supportability.md) * [Monitoring](monitoring.md)
-For specific details on configuring HSM devices for high availability, please refer to the Thales Customer Support Portal for the Administrator Guides and see section 6.
+For specific details on configuring HSM devices for high availability, please refer to the [Thales customer support portal](https://supportportal.thalesgroup.com/csm) for the Administrator Guides and see section 6.
dedicated-hsm Networking https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dedicated-hsm/networking.md
na ms.devlang: na Previously updated : 12/06/2018- Last updated : 03/25/2021+
For globally distributed applications or for high availability regional failover
![Diagram shows two regions connected by two V P N gateways. Each region contains peered virtual networks.](media/networking/global-vnet.png)
+## Networking Restrictions
+> [!NOTE]
+> The Dedicated HSM service uses subnet delegation, which imposes restrictions that should be considered when designing the target network architecture for an HSM deployment. Use of subnet delegation means NSGs, UDRs, and Global VNet Peering are not supported for Dedicated HSM. The sections below describe alternative techniques to achieve the same or a similar outcome for these capabilities.
+
+The HSM NIC, which resides in the Dedicated HSM VNet, cannot use Network Security Groups or User Defined Routes. This means that it is not possible to set default-deny policies from the standpoint of the Dedicated HSM VNet, and that other network segments must be allowlisted to gain access to the Dedicated HSM service.
+
+Adding a network virtual appliance (NVA) proxy solution allows an NVA firewall in the transit/DMZ hub to be logically placed in front of the HSM NIC, providing the needed alternative to NSGs and UDRs.
+
+### Solution Architecture
+This networking design requires the following elements:
+1. A transit or DMZ hub VNet with an NVA proxy tier. Ideally two or more NVAs are present.
+2. An ExpressRoute circuit with a private peering enabled and a connection to the transit hub VNet.
+3. A VNet peering between the transit hub VNet and the Dedicated HSM VNet.
+4. An NVA firewall or Azure Firewall can be deployed to offer DMZ services in the hub as an option.
+5. Additional workload spoke VNets can be peered to the hub VNet. The Thales client can access the Dedicated HSM service through the hub VNet.
+
+![Diagram shows a DMZ hub VNet with an NVA proxy tier for NSG and UDR workaround](media/networking/network-architecture.png)
+
+Adding the NVA proxy solution also allows an NVA firewall in the transit/DMZ hub to be logically placed in front of the HSM NIC, providing the needed default-deny policies. In our example, we will use the Azure Firewall for this purpose and will need the following elements in place (a CLI sketch of the routing and NSG pieces follows the note below):
+1. An Azure Firewall deployed into the "AzureFirewallSubnet" subnet in the DMZ hub VNet
+2. A Routing Table with a UDR that directs traffic headed to the Azure ILB private endpoint into the Azure Firewall. This Routing Table will be applied to the GatewaySubnet where the customer ExpressRoute Virtual Gateway resides.
+3. Network security rules within the Azure Firewall to allow forwarding between a trusted source range and the Azure ILB private endpoint listening on TCP port 1792. This security logic adds the necessary "default deny" policy against the Dedicated HSM service: only trusted source IP ranges will be allowed into the Dedicated HSM service, and all other ranges will be dropped.
+4. A Routing Table with a UDR that directs traffic headed to on-premises networks into the Azure Firewall. This Routing Table will be applied to the NVA proxy subnet.
+5. An NSG applied to the Proxy NVA subnet to trust only the subnet range of the Azure Firewall as a source, and to only allow forwarding to the HSM NIC IP address over TCP port 1792.
+
+> [!NOTE]
+> Because the NVA proxy tier will SNAT the client IP address as it forwards to the HSM NIC, no UDRs are required between the HSM VNet and the DMZ hub VNet.
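+
+The following Azure CLI commands are a minimal sketch of the routing table and NSG elements described above. Every name, address, and prefix here (resource group, VNet and subnet names, the assumed ILB private endpoint 10.1.0.4, HSM NIC 10.2.0.4, and firewall subnet 10.0.1.0/24 with firewall private IP 10.0.1.4) is an illustrative assumption; substitute the values from your own deployment.
+
+```azurecli
+# Route table for the GatewaySubnet: send traffic bound for the HSM ILB
+# private endpoint (assumed 10.1.0.4) through the Azure Firewall (assumed 10.0.1.4)
+az network route-table create -g MyResourceGroup -n hub-gateway-rt
+az network route-table route create -g MyResourceGroup \
+  --route-table-name hub-gateway-rt -n to-hsm-ilb \
+  --address-prefix 10.1.0.4/32 \
+  --next-hop-type VirtualAppliance \
+  --next-hop-ip-address 10.0.1.4
+az network vnet subnet update -g MyResourceGroup \
+  --vnet-name hub-vnet -n GatewaySubnet \
+  --route-table hub-gateway-rt
+
+# NSG on the NVA proxy subnet: trust only the Azure Firewall subnet (assumed
+# 10.0.1.0/24) as a source and allow forwarding to the HSM NIC (assumed 10.2.0.4)
+# on TCP port 1792
+az network nsg create -g MyResourceGroup -n nva-proxy-nsg
+az network nsg rule create -g MyResourceGroup --nsg-name nva-proxy-nsg \
+  -n allow-firewall-to-hsm --priority 100 --direction Inbound --access Allow \
+  --protocol Tcp --source-address-prefixes 10.0.1.0/24 \
+  --destination-address-prefixes 10.2.0.4 --destination-port-ranges 1792
+az network vnet subnet update -g MyResourceGroup \
+  --vnet-name hub-vnet -n nva-proxy-subnet \
+  --network-security-group nva-proxy-nsg
+```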
+
+### Alternative to UDRs
+The NVA tier solution mentioned above works as an alternative to UDRs. There are some important points to note.
+1. Network Address Translation should be configured on NVA to allow for return traffic to be routed correctly.
+2. Customers should disable the client IP check in the Luna HSM configuration to use the NVA for NAT. The following commands serve as an example.
+```
+Disable:
+[hsm01] lunash:>ntls ipcheck disable
+NTLS client source IP validation disabled
+Command Result : 0 (Success)
+
+Show:
+[hsm01] lunash:>ntls ipcheck show
+NTLS client source IP validation : Disable
+Command Result : 0 (Success)
+```
+3. Deploy UDRs for ingress traffic into the NVA tier.
+4. As per design, HSM subnets will not initiate an outbound connection request to the platform tier.
+
+### Alternative to using Global VNET Peering
+There are a couple of architectures you can use as an alternative to Global VNet peering.
+1. Use a [VNet-to-VNet VPN Gateway connection](https://docs.microsoft.com/azure/vpn-gateway/vpn-gateway-howto-vnet-vnet-resource-manager-portal), as sketched below.
+2. Connect the HSM VNet to another VNet with an ExpressRoute circuit. This works best when a direct on-premises path or a VPN VNet is required.
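+
+A minimal sketch of option 1 using the Azure CLI is shown below. The gateway names (hsm-vnet-gw, workload-vnet-gw), connection name, and shared key are assumptions, and it presumes a VPN gateway already exists in each VNet.
+
+```azurecli
+# Create a VNet-to-VNet connection between the HSM VNet's VPN gateway and a
+# workload VNet's VPN gateway (gateway names and shared key are placeholders)
+az network vpn-connection create -g MyResourceGroup -n hsm-to-workload \
+  --vnet-gateway1 hsm-vnet-gw --vnet-gateway2 workload-vnet-gw \
+  --shared-key "<SHARED KEY>"
+```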
+
+#### HSM with direct Express Route connectivity
+![Diagram shows HSM with direct Express Route connectivity](media/networking/expressroute-connectivity.png)
+ ## Next steps - [Frequently asked questions](faq.md)
For globally distributed applications or for high availability regional failover
- [High availability](high-availability.md) - [Physical Security](physical-security.md) - [Monitoring](monitoring.md)-- [Deployment architecture](deployment-architecture.md)
+- [Deployment architecture](deployment-architecture.md)
dedicated-hsm Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dedicated-hsm/overview.md
na
ms.devlang: na Previously updated : 12/07/2018- Last updated : 03/25/2021+ #Customer intent: As an IT Pro, Decision maker I am looking for key storage capability within Azure Cloud that meets FIPS 140-2 Level 3 certification and that gives me exclusive access to the hardware.
Azure Dedicated HSM is an Azure service that provides cryptographic key storage in Azure. Dedicated HSM meets the most stringent security requirements. It's the ideal solution for customers who require FIPS 140-2 Level 3-validated devices and complete and exclusive control of the HSM appliance.
- HSM devices are deployed globally across several Azure regions. They can be easily provisioned as a pair of devices and configured for high availability. HSM devices can also be provisioned across regions to assure against regional-level failover. Microsoft delivers the Dedicated HSM service by using the [SafeNet Luna Network HSM 7 (Model A790)](https://safenet.gemalto.com/data-encryption/hardware-security-modules-hsms/safenet-network-hsm/) appliance from Gemalto. This device offers the highest levels of performance and cryptographic integration options.
+ HSM devices are deployed globally across several Azure regions. They can be easily provisioned as a pair of devices and configured for high availability. HSM devices can also be provisioned across regions to assure against regional-level failover. Microsoft delivers the Dedicated HSM service by using the [Thales Luna 7 HSM model A790](https://cpl.thalesgroup.com/encryption/hardware-security-modules/network-hsms) appliances. This device offers the highest levels of performance and cryptographic integration options.
-After they're provisioned, HSM devices are connected directly to a customerΓÇÖs virtual network. They can also be accessed by on-premises application and management tools when you configure point-to-site or site-to-site VPN connectivity. Customers get the software and documentation to configure and manage HSM devices from GemaltoΓÇÖs support portal.
+After they're provisioned, HSM devices are connected directly to a customer's virtual network. They can also be accessed by on-premises application and management tools when you configure point-to-site or site-to-site VPN connectivity. Customers get the software and documentation to configure and manage HSM devices from the [Thales customer support portal](https://supportportal.thalesgroup.com/csm).
## Why use Azure Dedicated HSM? ### FIPS 140-2 Level-3 compliance
-Many organizations have stringent industry regulations that dictate that cryptographic key storage meets [FIPS 140-2 Level-3](https://csrc.nist.gov/publications/detail/fips/140/2/final) requirements. MicrosoftΓÇÖs multi-tenant Azure Key Vault service currently only provides FIPS 140-2 Level-2 certification. Azure Dedicated HSM fulfills a real need for the financial services industry, government agencies, and others who must meet FIPS 140-2 Level-3 requirements.
+Many organizations have stringent industry regulations that dictate that cryptographic keys must be stored in [FIPS 140-2 Level-3](https://csrc.nist.gov/publications/detail/fips/140/2/final) validated HSMs. Azure Dedicated HSM and a new single-tenant offering, [Azure Key Vault Managed HSM (preview)](https://docs.microsoft.com/azure/key-vault/managed-hsm), help customers from various industry segments, such as the financial services industry and government agencies, meet FIPS 140-2 Level-3 requirements, whereas Microsoft's multi-tenant [Azure Key Vault](https://docs.microsoft.com/azure/key-vault) service currently uses FIPS 140-2 Level-2 validated HSMs.
### Single-tenant devices
Many customers require full administrative control and sole access to their devi
### High performance
-The Gemalto device was selected for this service for a variety of reasons. It offers a broad range of cryptographic algorithm support, a variety of supported operating systems, and broad API support. The specific model that's deployed offers excellent performance with 10,000 operations per second for RSA-2048. It supports 10 partitions that can be used for unique application instances. This device is a low latency, high capacity, and high throughput device.
+The Thales device was selected for this service for a variety of reasons. It offers a broad range of cryptographic algorithm support, a variety of supported operating systems, and broad API support. The specific model that's deployed offers excellent performance with 10,000 operations per second for RSA-2048. It supports 10 partitions that can be used for unique application instances. This device is a low latency, high capacity, and high throughput device.
### Unique cloud-based offering
Azure Dedicated HSM is not a good fit for the following type of scenario: Micros
### It depends
-Whether Azure Dedicated HSM will work for you depends on a potentially complex mix of requirements and compromises that you can or cannot make. An example is the FIPS 140-2 Level 3 requirement. This requirement is common, and Dedicated HSM is currently the only option for meeting it. If these mandated requirements aren't relevant, then often it's a choice between Azure Key Vault and Dedicated HSM. Assess your requirements before making a decision.
+Whether Azure Dedicated HSM will work for you depends on a potentially complex mix of requirements and compromises that you can or cannot make. An example is the FIPS 140-2 Level 3 requirement. This requirement is common, and Azure Dedicated HSM and a new single-tenant offering, [Azure Key Vault Managed HSM (preview)](https://docs.microsoft.com/azure/key-vault/managed-hsm) are currently the only options for meeting it. If these mandated requirements aren't relevant, then often it's a choice between Azure Key Vault and Azure Dedicated HSM. Assess your requirements before making a decision.
Situations in which you will have to weigh your options include:
Situations in which you will have to weigh your options include:
This is a highly specialized service. Therefore, we recommend that you fully understand the key concepts in this documentation set, including pricing, support, and service-level agreements.
-The [Gemalto integration guides](https://safenet.gemalto.com/partners/microsoft/) help you facilitate the provisioning of HSMs into an existing virtual network environment. There are also are how-to guides for helping you determine how to set up your deployment architecture.
+The [Thales integration guides](https://cpl.thalesgroup.com/partners/overview) help you facilitate the provisioning of HSMs into an existing virtual network environment. There are also how-to guides for helping you determine how to set up your deployment architecture.
* [High availability](high-availability.md) * [Physical security](physical-security.md)
dedicated-hsm Physical Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dedicated-hsm/physical-security.md
na ms.devlang: na Previously updated : 12/07/2018- Last updated : 03/25/2021+ # Azure Dedicated HSM physical security
Azure Dedicated HSM helps you meet advanced security requirements for key storag
## Security through procurement
-Microsoft follows a secure procurement process. We manage the chain of custody and ensure that the specific device ordered and shipped is the device arriving at our data centers. The devices are in tamper-event plastic backs. They are stored in a secure storage area until commissioned in the data gallery of the data center. The racks containing the HSM devices are considered high business impact(HBI). The devices are locked and under video surveillance at all times front and back.
+Microsoft follows a secure procurement process. We manage the chain of custody and ensure that the specific device ordered and shipped is the device arriving at our data centers. The devices are in serialized tamper-event plastic bags and containers. They are stored in a secure storage area until commissioned in the data gallery of the data center. The racks containing the HSM devices are considered high business impact (HBI). The devices are locked and under video surveillance at all times, front and back.
## Security through deployment
If a Microsoft engineer must access the rack used by HSM devices (for example, n
## Logical level security considerations
-HSMs are provisioned to a virtual network created by the customer. This is a customerΓÇÖs private IUP Address space. This configuration provides a valuable logical network level isolation and ensures access only by the customer. This implies that all logical level security controls are the responsibility of the customer.
+HSMs are provisioned to a virtual network created by the customer within the customer's private IP address space. This configuration provides a valuable logical network level isolation and ensures access only by the customer. This implies that all logical level security controls are the responsibility of the customer.
## Next steps
It is recommended that all key concepts of the service, such as high availabilit
* [Networking](networking.md) * [Supportability](supportability.md) * [Monitoring](monitoring.md)
-* [Deployment architecture](deployment-architecture.md)
+* [Deployment architecture](deployment-architecture.md)
dedicated-hsm Supportability https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dedicated-hsm/supportability.md
na
ms.devlang: na Previously updated : 03/27/2019- Last updated : 03/25/2021+ # Azure Dedicated HSM Supportability
-The Azure Dedicated HSM Service provides a physical device for sole customer use with complete administrative control and management responsibility. The device made available is a [Gemalto SafeNet Luna 7 HSM model A790](https://safenet.gemalto.com/data-encryption/hardware-security-modules-hsms/safenet-network-hsm/). Microsoft will have no administrative access once provisioned by a customer, beyond physical serial port attachment as a monitoring role. Without access, Microsoft can have no ongoing software level maintenance or system administration responsibilities. As a result, customers are responsible for typical operational activities.
-Customers are fully responsible for applications that use the HSMs and should work with Gemalto for support or consulting-based assistance. Due to the extent of customer ownership of operational hygiene, it is not possible for Microsoft to offer any kind of high availability guarantee for this service. It is the customerΓÇÖs responsibility to ensure their applications are correctly configured to achieve high-availability. Microsoft will monitor and maintain device health and network connectivity.
+The Azure Dedicated HSM Service provides a physical device for sole customer use with complete administrative control and management responsibility. The device made available is a [Thales Luna 7 HSM model A790](https://cpl.thalesgroup.com/encryption/hardware-security-modules/network-hsms). Microsoft will have no administrative access once provisioned by a customer, beyond physical serial port attachment as a monitoring role. Without access, Microsoft can have no ongoing software level maintenance or system administration responsibilities. As a result, customers are responsible for typical operational activities.
+Customers are fully responsible for applications that use the HSMs and should work with Thales for support or consulting-based assistance. Due to the extent of customer ownership of operational hygiene, it is not possible for Microsoft to offer any kind of high availability guarantee for this service. It is the customer's responsibility to ensure their applications are correctly configured to achieve high availability. Microsoft will monitor and maintain device health and network connectivity.
## Getting support
-Customer support for Dedicated HSM is a joint effort between Microsoft and Gemalto. Any hardware issues or network path issues will be addressed by Microsoft, and anything to do with the actual HSM, such as configuration, software, firmware and application development, will be addressed by Gemalto. This support model ensures the quickest route to the most effective support. If in doubt with a particular issue, raise a support request with Microsoft and we will ensure you are directed appropriately. Microsoft will stay engaged in all support scenarios and strive for the best support experience for our customers.
+Customer support for Dedicated HSM is a joint effort between Microsoft and Thales. Any hardware issues or network path issues will be addressed by Microsoft, and anything to do with the actual HSM, such as configuration, software, firmware and application development, will be addressed by Thales. This support model ensures the quickest route to the most effective support. If in doubt with a particular issue, raise a support request with Microsoft and we will ensure you are directed appropriately. Microsoft will stay engaged in all support scenarios and strive for the best support experience for our customers.
-## Gemalto support
+## Thales support
-Customers using the Dedicated HSM service qualify for support from Gemalto as per their Plus Support Plan. This just requires a registration process using the Gemalto support portal. A Customer ID and instructions will be provided for this as part of the initial engagement with Microsoft to gain access to the Dedicated HSM service. The mechanism to get support from Gemalto is via their [customer support portal](https://supportportal.gemalto.com/csm/).
-A key point of note is that Gemalto will provide all software and documentation required to use the HSM (for example, client access software and SDKs) via download on the customer support portal.
+Customers using the Dedicated HSM service qualify for support from Thales as per their Plus Support Plan. This just requires a registration process using the Thales support portal. A Customer ID and instructions will be provided for this as part of the initial engagement with Microsoft to gain access to the Dedicated HSM service. The mechanism to get support from Thales is via their [customer support portal](https://supportportal.thalesgroup.com/csm).
+A key point of note is that Thales will provide all software and documentation required to use the HSM (for example, client access software and SDKs) via download on the customer support portal.
### Software components
Various software components are used in the configuration of HSM devices:
### Guidance
-Gemalto makes available administration and configuration guidance via the [customer support portal](https://supportportal.gemalto.com/csm/). Once signed in using a valid customer ID, these documents are available for download. Gemalto also provides a series of integration guides to help customers with different scenarios and software integrations. For more information, see the [Gemalto partner site for Microsoft](https://safenet.gemalto.com/partners/microsoft/).
+Thales makes available administration and configuration guidance via the [Thales customer support portal](https://supportportal.thalesgroup.com/csm). Once signed in using a valid customer ID, these documents are available for download. Thales also provides a series of integration guides to help customers with different scenarios and software integrations. For more information, see the [Thales partner site for Microsoft](https://cpl.thalesgroup.com/partners/overview).
### Support
-Any software level issue or question in relation to using the HSMs as part of the Dedicated HSM service, should be addressed to Gemalto support directly. All software components listed above, and any custom HSM configuration that is post-provisioning, will be addressed by Gemalto. For more information, see the [Gemalto customer support portal](https://supportportal.gemalto.com/csm/).
+Any software level issue or question in relation to using the HSMs as part of the Dedicated HSM service, should be addressed to Thales support directly. All software components listed above, and any custom HSM configuration that is post-provisioning, will be addressed by Thales. For more information, see the [Thales customer support portal](https://supportportal.thalesgroup.com/csm).
### Consulting services
-For any assistance in the design, development and deployment of custom applications that use the HSM, contact your Gemalto account representative.
+For any assistance in the design, development and deployment of custom applications that use the HSM, contact your Thales account representative.
## Microsoft support
Issues such as the following should be reported to Microsoft:
* Network access issues * Problems provisioning and deprovisioning.
-Microsoft has physical serial port access to the device via a monitoring role (that is, not administrative role) that enables basic health telemetry. This will allow Microsoft to provide proactive notification of issues to the customer unless the customer chooses to restrict this permission.
+Microsoft has physical serial port access to the device via a monitoring role (that is, a non-administrative role) that enables basic health telemetry. This will allow Microsoft to provide proactive notification of issues to the customer unless the customer chooses to restrict this permission.
### Provisioning and decommissioning
-After a customer has an approved registration for the Dedicated HSM service, they will be able to create HSM resources (currently via PowerShell or command-line interface and not the Azure portal). The resource goes through an allocation process that maps a physical device in a specified region, to a customerΓÇÖs pre-defined virtual network (VNet). Once visible on a VNet, the customer can access the device and configure it further as per requirements. Customers access their dedicated HSMs using Gemalto client software and tools. The resource creation process is supported by Microsoft. Custom configuration process and beyond are supported by Gemalto. (see Gemalto support above). When a customer has finished using an HSM, it must be reset (or zeroized) to ensure no persistence of data. The process of resetting the device removes all custom configuration and data. Microsoft deallocates the device and returns it to the pool in a pristine state. This means that when the device is returned to the pool there is no evidence of previous customer activity.
+After a customer has an approved registration for the Dedicated HSM service, they will be able to create HSM resources (currently via PowerShell or command-line interface and not the Azure portal). The resource goes through an allocation process that maps a physical device in a specified region to a customer's pre-defined virtual network (VNet). Once visible on a VNet, the customer can access the device and configure it further as per requirements. Customers access their dedicated HSMs using Thales client software and tools. The resource creation process is supported by Microsoft. Custom configuration and beyond are supported by Thales (see Thales support above). When a customer has finished using an HSM, it must be reset (or zeroized) to ensure no persistence of data. The process of resetting the device removes all custom configuration and data. Microsoft deallocates the device and returns it to the pool in a pristine state. This means that when the device is returned to the pool there is no evidence of previous customer activity.
### Hardware issues The HSM device has redundant and replaceable power supplies and fan units. However, fan unit removal will still cause a tamper event. When a component failure occurs, Microsoft will use the most appropriate process to address the component level issue in a way that causes minimal interruption and lowest risk to our customers service availability.
-Any more serious failure of the device will result in that device being replaced by a fresh one from the free pool. The customer simply includes the new device in the existing HA pair for it to synchronize and return to full operational state. The failed device will have its data bearing devices removed and shredded on site at the data center. Only the chassis will be returned to Gemalto for recycling.
-
+Any more serious failure of the device will result in that device being replaced by a new device from the free pool. The customer simply includes the new device in the existing HA pair for it to synchronize and return to full operational state. The failed device will have its data bearing devices removed and shredded on site at the data center.
### Networking issues
If customers experience networking access problems to the HSM device, they shoul
## Service level expectations for support For Microsoft support service levels, refer to the [Azure support plan](https://azure.microsoft.com/support/plans/).
-For Gemalto support service levels, refer to the [Gemalto Support Essentials](https://azure.microsoft.com/support/plans/).
+For Thales support service levels, refer to the [Thales Support Essentials](https://azure.microsoft.com/support/plans/).
## Next steps
dedicated-hsm Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dedicated-hsm/troubleshoot.md
na
ms.devlang: na Previously updated : 12/07/2018- Last updated : 03/25/2021+ #Customer intent: As an IT Pro, Decision maker I am looking for key storage capability within Azure Cloud that meets FIPS 140-2 Level 3 certification and that gives me exclusive access to the hardware. # Troubleshooting the Azure Dedicated HSM service
-The Azure Dedicated HSM service has two distinct facets. Firstly, the registration and deployment in Azure of the HSM devices with their underlying network components. Secondly, the configuration of the HSM devices in preparation for use/integration with a given workload or application. Although the Thales Luna Network HSM devices are the same in Azure as you would purchase directly from Thales, the fact they are a resource in Azure creates some unique considerations. These considerations and any resulting troubleshooting insights or best practices, are documented here to ensure high visibility and access to critical information. Once the service is in use, definitive information is available via support requests to either Microsoft or Thales directly.
+The Azure Dedicated HSM service has two distinct facets. Firstly, the registration and deployment in Azure of the HSM devices with their underlying network components. Secondly, the configuration of the HSM devices in preparation for use/integration with a given workload or application. Although the [Thales Luna 7 HSM](https://cpl.thalesgroup.com/encryption/hardware-security-modules/network-hsms) devices are the same in Azure as you would purchase directly from Thales, the fact they are a resource in Azure creates some unique considerations. These considerations and any resulting troubleshooting insights or best practices, are documented here to ensure high visibility and access to critical information. Once the service is in use, definitive information is available via support requests to either Microsoft or Thales directly.
> [!NOTE] > Prior to performing any configuration on a newly deployed HSM device, it should be updated with any relevant patches. A specific required patch is [KB0019789](https://supportportal.gemalto.com/csm?id=kb_article_view&sys_kb_id=19a81c8bdb9a1fc8d298728dae96197d&sysparm_article=KB0019789) in the Thales support portal, which addresses an issue where the system becomes unresponsive during reboot. ## HSM Registration
-Dedicated HSM is not freely available for use as it is delivering hardware resources in the cloud and hence is a precious resource that needs protecting. We therefore use a allowlisting process via email using HSMrequest@microsoft.com.
+Dedicated HSM is not freely available for use as it delivers hardware resources in the cloud and hence is a precious resource that needs protecting. We therefore use an allowlisting process via email using HSMrequest@microsoft.com.
### Getting access to Dedicated HSM
-If you believe Dedicated HSM will fit your key storage requirements, then email HSMrequest@microsoft.com to request access. Outline your application, the regions you would like HSMs and the volume of HSMs you are looking for. If you work with a Microsoft representative, such as an Account Executive or Cloud Solution Architect for example, then include them in any request.
+First ask yourself which of your use cases cannot be addressed by [Azure Key Vault](https://docs.microsoft.com/azure/key-vault/general/overview) or [Azure Managed HSM](https://docs.microsoft.com/azure/key-vault/managed-hsm/overview). If you then believe that only Dedicated HSM will fit your key storage requirements, email HSMrequest@microsoft.com to request access. Outline your application and use cases, the regions where you would like HSMs, and the volume of HSMs you are looking for. If you work with a Microsoft representative, such as an Account Executive or Cloud Solution Architect, include them in any request.
## HSM Provisioning
The standard ARM template provided for deployment has HSM and ExpressRoute gatew
### HSM Deployment Using Terraform
-A few customers have used Terraform as an automation environment instead of ARM templates as supplied when registering for this service. The HSMs cannot be deployed this way but the dependent networking resources can. Terraform has a module to call out to a minimal ARM template that jut has the HSM deployment. In this situation, care should be taken to ensure networking resources such as the required ExpressRoute Gateway are fully deployed before deploying HSMs. The following CLI command can be used to test for completed deployment and integrated as required. Replace the angle bracket place holders for your specific naming. You should look for a result of "provisioningState is Succeeded"
+A few customers have used Terraform as an automation environment instead of ARM templates as supplied when registering for this service. The HSMs cannot be deployed this way but the dependent networking resources can. Terraform has a module to call out to a minimal ARM template that just has the HSM deployment. In this situation, care should be taken to ensure networking resources such as the required ExpressRoute Gateway are fully deployed before deploying HSMs. The following CLI command can be used to test for completed deployment and integrated as required. Replace the angle bracket placeholders with your specific naming. You should look for a result of "provisioningState is Succeeded"
```azurecli az resource show --ids /subscriptions/<subid>/resourceGroups/<myresourcegroup>/providers/Microsoft.Network/virtualNetworkGateways/<myergateway>
az resource show --ids /subscriptions/<subid>/resourceGroups/<myresourcegroup>/p
Deployments can fail if you exceed 2 HSMs per stamp and 4 HSMs per region. To avoid this situation, ensure you have deleted resources from previously failed deployments before deploying again. Refer to the "How do I see HSMs" item below to check resources. If you believe you need to exceed this quota, which is primarily there as a safeguard, then please email HSMrequest@microsoft.com with details. ### Deployment failure based on capacity
-When a particular stamp or region is becoming full, that is, nearly all free HSMs are provisioned, this can lead to deployment failures. Each stamp has 11 HSMs available for customers, which means 22 per region. There are also 3 spares and 1 test device in each stamp. If you believe you may have hit a limit, then email HSMrequest@microsoft.com for information on fill-level of specific stamps.
+When a particular stamp or region is becoming full, that is, nearly all free HSMs are provisioned, this can lead to deployment failures. Each stamp has 12 HSMs available for customers, which means 24 per region. There are also 2 spares and 1 test device in each stamp. If you believe you may have hit a limit, then email HSMrequest@microsoft.com for information on fill-level of specific stamps.
### How do I see HSMs when provisioned?
-Due to Dedicated HSM being an allowlisted service, it is considered a "Hidden Type" in the Azure portal. To see the HSM resources, you must check the "Show hidden types" check box as shown below. The NIC resource always follows the HSM and is a good place to find out the IP address of the HSM prior to using SSH to connect.
+Due to Dedicated HSM being an allowlisted service, it is considered a "Hidden Type" in the Azure portal. To see the HSM resources, you must check the "Show hidden types" check box as shown below. The NIC resource always follows the HSM and is a good place to find out the IP address of the HSM prior to using SSH to connect.
![Screenshot that highlights the Show hidden types check](./media/troubleshoot/hsm-provisioned.png)
Providing incorrect credentials to HSMs can have destructive consequences. The f
The following items are situation where configuration errors are either common or have an impact that is worthy of calling out: ### HSM Documentation and Software
-Software and documentation for the Thales SafeNet Luna 7 HSM devices is not available from Microsoft and must be downloaded from Thales directly. Registration is required using the Thales Customer ID received during the registration process. The devices, as provided by Microsoft, have software version 7.2 and firmware version 7.0.3. Early in 2020 Thales made documentation public and it can be found [here](https://thalesdocs.com/gphsm/luna/7.2/docs/network/Content/Home_network.htm).
+Software and documentation for the [Thales Luna 7 HSM](https://cpl.thalesgroup.com/encryption/hardware-security-modules/network-hsms) devices is not available from Microsoft and must be downloaded from Thales directly. Registration is required using the Thales Customer ID received during the registration process. The devices, as provided by Microsoft, have software version 7.2 and firmware version 7.0.3. Early in 2020 Thales made documentation public and it can be found [here](https://thalesdocs.com/gphsm/luna/7.2/docs/network/Content/Home_network.htm).
### HSM Networking Configuration
Be careful when configuring the networking within the HSM. The HSM has a connec
### HSM Device Reboot
-Some configuration changes require the HSM to be power cycled or rebooted. Microsoft testing of the HSM in Azure determined that on some occasions the reboot could stop responding. The implication is that a support request must be created in the Azure portal requesting hard-reboot and that could take up to 48 hours to complete considering it's a manual process in an Azure datacenter. To avoid this situation, ensure you have deployed the reboot patch available from Thales directly. Refer to [KB0019789](https://supportportal.gemalto.com/csm?sys_kb_id=d66911e2db4ffbc0d298728dae9619b0&id=kb_article_view&sysparm_rank=1&sysparm_tsqueryId=d568c35bdb9a4850d6b31f3b4b96199e&sysparm_article=KB0019789) in the Thales Luna Network HSM 7.2 Downloads for a recommended patch for an issue where the system becomes unresponsive during reboot (Note: you will need to have registered in the Thales support portal to download).
+Some configuration changes require the HSM to be power cycled or rebooted. Microsoft testing of the HSM in Azure determined that on some occasions the reboot could stop responding. The implication is that a support request must be created in the Azure portal requesting hard-reboot and that could take up to 48 hours to complete considering it's a manual process in an Azure datacenter. To avoid this situation, ensure you have deployed the reboot patch available from Thales directly. Refer to [KB0019789](https://supportportal.gemalto.com/csm?sys_kb_id=d66911e2db4ffbc0d298728dae9619b0&id=kb_article_view&sysparm_rank=1&sysparm_tsqueryId=d568c35bdb9a4850d6b31f3b4b96199e&sysparm_article=KB0019789) in the Thales Luna 7 HSM 7.2 Downloads for a recommended patch for an issue where the system becomes unresponsive during reboot (Note: you will need to have registered in the [Thales customer support portal](https://supportportal.thalesgroup.com/csm) to download).
### NTLS Certificates out of sync A client may lose connectivity to an HSM when a certificate expires or has been overwritten through configuration updates. The certificate exchange client configuration should be reapplied with each HSM.
dedicated-hsm Tutorial Deploy Hsm Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dedicated-hsm/tutorial-deploy-hsm-cli.md
na Previously updated : 10/20/2020- Last updated : 03/25/2021+ # Tutorial: Deploying HSMs into an existing virtual network using the Azure CLI
The output should look as shown on the image below:
![Screenshot shows output in PowerShell window.](media/tutorial-deploy-hsm-cli/hsm-show-output.png)
-At this point, you have allocated all resources for a highly available, two HSM deployment and validated access and operational state. Any further configuration or testing involves more work with the HSM device itself. For this, you should follow the instructions in the Thales Luna Network HSM 7 Administration Guide chapter 7 to initialize the HSM and create partitions. All documentation and software are available directly from Thales for download once you are registered in the Thales Customer Support Portal and have a Customer ID. Download Client Software version 7.2 to get all required components.
+At this point, you have allocated all resources for a highly available, two HSM deployment and validated access and operational state. Any further configuration or testing involves more work with the HSM device itself. For this, you should follow the instructions in the Thales Luna 7 HSM Administration Guide chapter 7 to initialize the HSM and create partitions. All documentation and software are available directly from Thales for download once you are registered in the [Thales customer support portal](https://supportportal.thalesgroup.com/csm) and have a Customer ID. Download Client Software version 7.2 to get all required components.
## Delete or clean up resources If you have finished with just the HSM device, then it can be deleted as a resource and returned to the free pool. The obvious concern when doing this is any sensitive customer data that is on the device. The best way to "zeroize" a device is to get the HSM admin password wrong 3 times (note: this is not appliance admin, it's the actual HSM admin). As a safety measure to protect key material, the device cannot be deleted as an Azure resource until it is in the zeroized state. > [!NOTE]
-> if you have issue with any Thales device configuration you should contact [Thales customer support](https://safenet.gemalto.com/technical-support/).
+> If you have an issue with any Thales device configuration, you should contact [Thales customer support](https://supportportal.thalesgroup.com/csm).
If you have finished with all resources in this resource group, then you can remove them all with the following command:
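
A minimal sketch of that cleanup command, assuming the resource group name used earlier in this tutorial (remember that the HSM itself must be zeroized before it can be deleted as a resource):

```azurecli
# Deletes the resource group and every resource it contains; --yes skips the confirmation prompt
az group delete --name <myresourcegroup> --yes
```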
dedicated-hsm Tutorial Deploy Hsm Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dedicated-hsm/tutorial-deploy-hsm-powershell.md
na Previously updated : 07/14/2020- Last updated : 03/25/2021+ # Tutorial – Deploying HSMs into an existing virtual network using PowerShell
The command should return a status of "Registered" (as shown below) before y
### Creating HSM resources
-An HSM device is provisioned into a customersΓÇÖ virtual network. This implies the requirement for a subnet. A dependency for the HSM to enable communication between the virtual network and physical device is an ExpressRoute Gateway, and finally a virtual machine is required to access the HSM device using the Gemalto client software. These resources have been collected into a template file, with corresponding parameter file, for ease of use. The files are available by contacting Microsoft directly at HSMrequest@Microsoft.com.
+An HSM device is provisioned into a customer's virtual network. This implies the requirement for a subnet. A dependency for the HSM to enable communication between the virtual network and physical device is an ExpressRoute Gateway, and finally a virtual machine is required to access the HSM device using the Thales client software. These resources have been collected into a template file, with corresponding parameter file, for ease of use. The files are available by contacting Microsoft directly at HSMrequest@Microsoft.com.
Once you have the files, you must edit the parameter file to insert your preferred names for resources. This means editing lines with "value": "".
The output should look like the image shown below:
![Screenshot that shows the output from the hsm show command.](media/tutorial-deploy-hsm-powershell/output.png)
-At this point, you have allocated all resources for a highly available, two HSM deployment and validated access and operational state. Any further configuration or testing involves more work with the HSM device itself. For this, you should follow the instructions in the Gemalto Luna Network HSM 7 Administration Guide chapter 7 to initialize the HSM and create partitions. All documentation and software are available directly from Gemalto for download once you are registered in the Gemalto Customer Support Portal and have a Customer ID. Download Client Software version 7.2 to get all required components.
+At this point, you have allocated all resources for a highly available, two HSM deployment and validated access and operational state. Any further configuration or testing involves more work with the HSM device itself. For this, you should follow the instructions in the Thales Luna 7 HSM Administration Guide chapter 7 to initialize the HSM and create partitions. All documentation and software are available directly from Thales for download once you are registered in the [Thales customer support portal](https://supportportal.thalesgroup.com/csm) and have a Customer ID. Download Client Software version 7.2 to get all required components.
## Delete or clean up resources If you have finished with just the HSM device, then it can be deleted as a resource and returned to the free pool. The obvious concern when doing this is any sensitive customer data that is on the device. The best way to "zeroize" a device is to get the HSM admin password wrong 3 times (note: this is not appliance admin, it's the actual HSM admin). As a safety measure to protect key material, the device cannot be deleted as an Azure resource until it is in the zeroized state. > [!NOTE]
-> if you have issue with any Gemalto device configuration you should contact [Gemalto customer support](https://safenet.gemalto.com/technical-support/).
+> If you have an issue with any Thales device configuration, you should contact [Thales customer support](https://supportportal.thalesgroup.com/csm).
If you want to remove the HSM resource in Azure you can use the following command replacing the "$" variables with your unique parameters:
event-grid Add Identity Roles https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/add-identity-roles.md
+
+ Title: Add managed identity to a role on Azure Event Grid destination
+description: This article describes how to add managed identity to Azure roles on destinations such as Azure Service Bus and Azure Event Hubs.
+ Last updated : 03/25/2021++
+# Add an identity to Azure roles on Azure Event Grid destinations
+This article describes how to add the identity for your system topic, custom topic, or domain to an Azure role.
+
+## Prerequisites
+Assign a system-assigned managed identity by using instructions from the following articles:
+
+- [Custom topics or domains](enable-identity-custom-topics-domains.md)
+- [System topics](enable-identity-system-topics.md)
+
+## Supported destinations and Azure roles
+After you enable identity for your event grid custom topic or domain, Azure automatically creates an identity in Azure Active Directory. Add this identity to appropriate Azure roles so that the custom topic or domain can forward events to supported destinations. For example, add the identity to the **Azure Event Hubs Data Sender** role for an Azure Event Hubs namespace so that the event grid custom topic can forward events to event hubs in that namespace.
+
+Currently, Azure event grid supports custom topics or domains configured with a system-assigned managed identity to forward events to the following destinations. This table also gives you the roles that the identity should be in so that the custom topic can forward the events.
+
+| Destination | Azure role |
+| -- | -- |
+| Service Bus queues and topics | [Azure Service Bus Data Sender](../service-bus-messaging/authenticate-application.md#azure-built-in-roles-for-azure-service-bus) |
+| Azure Event Hubs | [Azure Event Hubs Data Sender](../event-hubs/authorize-access-azure-active-directory.md#azure-built-in-roles-for-azure-event-hubs) |
+| Azure Blob storage | [Storage Blob Data Contributor](../storage/common/storage-auth-aad-rbac-portal.md#azure-roles-for-blobs-and-queues) |
+| Azure Queue storage |[Storage Queue Data Message Sender](../storage/common/storage-auth-aad-rbac-portal.md#azure-roles-for-blobs-and-queues) |
++
+## Use the Azure portal
+You can use the Azure portal to assign the custom topic or domain identity to an appropriate role so that the custom topic or domain can forward events to the destination.
+
+The following example adds a managed identity for an event grid custom topic named **msitesttopic** to the **Azure Service Bus Data Sender** role for a Service Bus namespace that contains a queue or topic resource. When you add to the role at the namespace level, the event grid custom topic can forward events to all entities within the namespace.
+
+1. Go to your **Service Bus namespace** in the [Azure portal](https://portal.azure.com).
+1. Select **Access control (IAM)** in the left pane.
+1. Select **Add** in the **Add a role assignment** section.
+1. On the **Add a role assignment** page, do the following steps:
+ 1. Select the role. In this case, it's **Azure Service Bus Data Sender**.
+ 1. Select the **identity** for your event grid custom topic or domain.
+ 1. Select **Save** to save the configuration.
+
+The steps are similar for adding an identity to other roles mentioned in the table.
+
+## Use the Azure CLI
+The example in this section shows you how to use the Azure CLI to add an identity to an Azure role. The sample commands are for event grid custom topics. The commands for event grid domains are similar.
+
+### Get the principal ID for the custom topic's system identity
+First, get the principal ID of the custom topic's system-managed identity and assign the identity to appropriate roles.
+
+```azurecli-interactive
+topic_pid=$(az ad sp list --display-name "<TOPIC NAME>" --query [].objectId -o tsv)
+```
+
+### Create a role assignment for event hubs at various scopes
+The following CLI example shows how to add a custom topic's identity to the **Azure Event Hubs Data Sender** role at the namespace level or at the event hub level. If you create the role assignment at the namespace level, the custom topic can forward events to all event hubs in that namespace. If you create a role assignment at the event hub level, the custom topic can forward events only to that specific event hub.
++
+```azurecli-interactive
+role="Azure Event Hubs Data Sender"
+namespaceresourceid=$(az eventhubs namespace show -n <EVENT HUBS NAMESPACE NAME> -g <RESOURCE GROUP of EVENT HUB> --query "{I:id}" -o tsv)
+eventhubresourceid=$(az eventhubs eventhub show -n <EVENT HUB NAME> --namespace-name <EVENT HUBS NAMESPACE NAME> -g <RESOURCE GROUP of EVENT HUB> --query "{I:id}" -o tsv)
+
+# create role assignment for the whole namespace
+az role assignment create --role "$role" --assignee "$topic_pid" --scope "$namespaceresourceid"
+
+# create role assignment scoped to just one event hub inside the namespace
+az role assignment create --role "$role" --assignee "$topic_pid" --scope "$eventhubresourceid"
+```
+
+### Create a role assignment for a Service Bus topic at various scopes
+The following CLI example shows how to add an event grid custom topic's identity to the **Azure Service Bus Data Sender** role at the namespace level or at the Service Bus topic level. If you create the role assignment at the namespace level, the event grid topic can forward events to all entities (Service Bus queues or topics) within that namespace. If you create a role assignment at the Service Bus queue or topic level, the event grid custom topic can forward events only to that specific Service Bus queue or topic.
+
+```azurecli-interactive
+role="Azure Service Bus Data Sender"
+namespaceresourceid=$(az servicebus namespace show -n $RG\SB -g "$RG" --query "{I:id}" -o tsv)
+sbustopicresourceid=$(az servicebus topic show -n topic1 --namespace-name $RG\SB -g "$RG" --query "{I:id}" -o tsv)
+
+# create role assignment for the whole namespace
+az role assignment create --role "$role" --assignee "$topic_pid" --scope "$namespaceresourceid"
+
+# create role assignment scoped to just one Service Bus topic inside the namespace
+az role assignment create --role "$role" --assignee "$topic_pid" --scope "$sbustopicresourceid"
+```
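+
+To verify the assignments, you can list the roles held by the topic's identity. This is a sketch that reuses the $topic_pid variable from the earlier step:
+
+```azurecli-interactive
+# List every role assignment granted to the topic's managed identity
+az role assignment list --assignee "$topic_pid" --all -o table
+```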
+
+## Next steps
+Now that you have assigned a system-assigned identity to your system topic, custom topic, or domain, and added the identity to the appropriate roles on destinations, see [Deliver events using identity](managed-service-identity.md) for details on delivering events to destinations using the identity.
++
event-grid Consume Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/consume-private-endpoints.md
Under this configuration, the traffic goes over the public IP/internet from Even
## Deliver events to Event Hubs using managed identity To deliver events to event hubs in your Event Hubs namespace using managed identity, follow these steps:
-1. [Enable system-assigned identity for a topic or a domain](managed-service-identity.md#create-a-custom-topic-or-domain-with-an-identity).
+1. Enable system-assigned identity: [system topics](enable-identity-system-topics.md), [custom topics, and domains](enable-identity-custom-topics-domains.md).
1. [Add the identity to the **Azure Event Hubs Data Sender** role on the Event Hubs namespace](../event-hubs/authenticate-managed-identity.md#to-assign-azure-roles-using-the-azure-portal). 1. [Enable the **Allow trusted Microsoft services to bypass this firewall** setting on your Event Hubs namespace](../event-hubs/event-hubs-service-endpoints.md#trusted-microsoft-services). 1. [Configure the event subscription](managed-service-identity.md#create-event-subscriptions-that-use-an-identity) that uses an event hub as an endpoint to use the system-assigned identity.
To deliver events to event hubs in your Event Hubs namespace using managed ident
## Deliver events to Service Bus using managed identity To deliver events to Service Bus queues or topics in your Service Bus namespace using managed identity, follow these steps:
-1. [Enable system-assigned identity for a topic or a domain](managed-service-identity.md#create-a-custom-topic-or-domain-with-an-identity).
-1. Add the identity to the [Azure Service Bus Data Sender](/service-bus-messaging/service-bus-managed-service-identity#azure-built-in-roles-for-azure-service-bus) role on the Service Bus namespace
+1. Enable system-assigned identity: [system topics](enable-identity-system-topics.md), [custom topics, and domains](enable-identity-custom-topics-domains.md).
+1. [Add the identity to the **Azure Service Bus Data Sender**](/service-bus-messaging/service-bus-managed-service-identity#azure-built-in-roles-for-azure-service-bus) role on the Service Bus namespace.
1. [Enable the **Allow trusted Microsoft services to bypass this firewall** setting on your Service Bus namespace](../service-bus-messaging/service-bus-service-endpoints.md#trusted-microsoft-services).
-1. [Configure the event subscription](managed-service-identity.md#create-event-subscriptions-that-use-an-identity) that uses a Service Bus queue or topic as an endpoint to use the system-assigned identity.
+1. [Configure the event subscription](managed-service-identity.md) that uses a Service Bus queue or topic as an endpoint to use the system-assigned identity.
## Deliver events to Storage To deliver events to Storage queues using managed identity, follow these steps:
-1. [Enable system-assigned identity for a topic or a domain](managed-service-identity.md#create-a-custom-topic-or-domain-with-an-identity).
-1. Add the identity to the [Storage Queue Data Message Sender](../storage/common/storage-auth-aad-rbac-portal.md) role on Azure Storage queue.
+1. Enable system-assigned identity: [system topics](enable-identity-system-topics.md), [custom topics, and domains](enable-identity-custom-topics-domains.md).
+1. [Add the identity to the **Storage Queue Data Message Sender**](../storage/common/storage-auth-aad-rbac-portal.md) role on Azure Storage queue.
1. [Configure the event subscription](managed-service-identity.md#create-event-subscriptions-that-use-an-identity) that uses a Storage queue as an endpoint to use the system-assigned identity.
event-grid Delivery And Retry https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/delivery-and-retry.md
Event Grid defaults to sending each event individually to subscribers. The subsc
Batched delivery has two settings:
-* **Max events per batch** - Maximum number of events Event Grid will deliver per batch. This number will never be exceeded, however fewer events may be delivered if no other events are available at the time of publish. Event Grid does not delay events in order to create a batch if fewer events are available. Must be between 1 and 5,000.
-* **Preferred batch size in kilobytes** - Target ceiling for batch size in kilobytes. Similar to max events, the batch size may be smaller if more events are not available at the time of publish. It is possible that a batch is larger than the preferred batch size *if* a single event is larger than the preferred size. For example, if the preferred size is 4 KB and a 10-KB event is pushed to Event Grid, the 10-KB event will still be delivered in its own batch rather than being dropped.
+* **Max events per batch** - Maximum number of events Event Grid will deliver per batch. This number will never be exceeded; however, fewer events may be delivered if no other events are available at the time of publish. Event Grid doesn't delay events to create a batch if fewer events are available. Must be between 1 and 5,000.
+* **Preferred batch size in kilobytes** - Target ceiling for batch size in kilobytes. Similar to max events, the batch size may be smaller if more events aren't available at the time of publish. It's possible that a batch is larger than the preferred batch size *if* a single event is larger than the preferred size. For example, if the preferred size is 4 KB and a 10-KB event is pushed to Event Grid, the 10-KB event will still be delivered in its own batch rather than being dropped.
Batched delivery is configured on a per-event subscription basis via the portal, CLI, PowerShell, or SDKs.
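For example, a minimal Azure CLI sketch of an event subscription with batching enabled might look like the following; the subscription name, topic resource ID, and webhook URL are placeholders:

```azurecli-interactive
az eventgrid event-subscription create \
  --name <EVENT SUBSCRIPTION NAME> \
  --source-resource-id <TOPIC RESOURCE ID> \
  --endpoint <WEBHOOK ENDPOINT URL> \
  --max-events-per-batch 100 \
  --preferred-batch-size-in-kilobytes 64
```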
For more information on using Azure CLI with Event Grid, see [Route storage even
When EventGrid receives an error for an event delivery attempt, EventGrid decides whether it should retry the delivery, dead-letter the event, or drop the event, based on the type of the error.
-If the error returned by the subscribed endpoint is configuration related error that can't be fixed with retries (for example, if the endpoint is deleted), EventGrid will either perform dead lettering the event or drop the event if dead letter is not configured.
+If the error returned by the subscribed endpoint is a configuration-related error that can't be fixed with retries (for example, if the endpoint is deleted), EventGrid will either dead-letter the event or drop the event if dead-lettering isn't configured.
Following are the types of endpoints for which retry doesn't happen:
Following are the types of endpoints for which retry doesn't happen:
| Webhook | 400 Bad Request, 413 Request Entity Too Large, 403 Forbidden, 404 Not Found, 401 Unauthorized | > [!NOTE]
-> If Dead-Letter is not configured for endpoint, events will be dropped when above errors happen. Consider configuring Dead-Letter, if you don't want these kinds of events to be dropped.
+> If Dead-Letter isn't configured for the endpoint, events will be dropped when the above errors happen. Consider configuring Dead-Letter if you don't want these kinds of events to be dropped.
-If the error returned by the subscribed endpoint is not among the above list, EventGrid performs the retry using policies described below:
+If the error returned by the subscribed endpoint isn't in the above list, EventGrid retries delivery using the policies described below:
Event Grid waits 30 seconds for a response after delivering a message. After 30 seconds, if the endpoint hasn't responded, the message is queued for retry. Event Grid uses an exponential backoff retry policy for event delivery. Event Grid retries delivery on the following schedule on a best-effort basis:
By default, Event Grid expires all events that aren't delivered within 24 hours.
As an endpoint experiences delivery failures, Event Grid will begin to delay the delivery and retry of events to that endpoint. For example, if the first 10 events published to an endpoint fail, Event Grid will assume that the endpoint is experiencing issues and will delay all subsequent retries *and new* deliveries for some time - in some cases up to several hours.
-The functional purpose of delayed delivery is to protect unhealthy endpoints as well as the Event Grid system. Without back-off and delay of delivery to unhealthy endpoints, Event Grid's retry policy and volume capabilities can easily overwhelm a system.
+The functional purpose of delayed delivery is to protect unhealthy endpoints and the Event Grid system. Without back-off and delay of delivery to unhealthy endpoints, Event Grid's retry policy and volume capabilities can easily overwhelm a system.
## Dead-letter events When Event Grid can't deliver an event within a certain time period or after trying to deliver the event a certain number of times, it can send the undelivered event to a storage account. This process is known as **dead-lettering**. Event Grid dead-letters an event when **one of the following** conditions is met.
If either of the conditions is met, the event is dropped or dead-lettered. By d
Event Grid sends an event to the dead-letter location when it has tried all of its retry attempts. If Event Grid receives a 400 (Bad Request) or 413 (Request Entity Too Large) response code, it immediately schedules the event for dead-lettering. These response codes indicate delivery of the event will never succeed.
-The time-to-live expiration is checked ONLY at the next scheduled delivery attempt. Therefore, even if time-to-live expires before the next scheduled delivery attempt, event expiry is checked only at the time of the next delivery and then subsequently dead-lettered.
+The time-to-live expiration is checked ONLY at the next scheduled delivery attempt. So, even if time-to-live expires before the next scheduled delivery attempt, event expiry is checked only at the time of the next delivery and then subsequently dead-lettered.
There is a five-minute delay between the last attempt to deliver an event and when it is delivered to the dead-letter location. This delay is intended to reduce the number of Blob storage operations. If the dead-letter location is unavailable for four hours, the event is dropped.
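For example, a dead-letter destination is set when you create or update the event subscription by pointing `--deadletter-endpoint` at a blob container. The following sketch assumes an existing storage account and container; all names are placeholders:

```azurecli-interactive
storageid=$(az storage account show --name <STORAGE ACCOUNT NAME> --resource-group <RESOURCE GROUP NAME> --query id -o tsv)

az eventgrid event-subscription create \
  --name <EVENT SUBSCRIPTION NAME> \
  --source-resource-id <TOPIC RESOURCE ID> \
  --endpoint <WEBHOOK ENDPOINT URL> \
  --deadletter-endpoint "$storageid/blobServices/default/containers/<CONTAINER NAME>"
```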
All other codes not in the above set (200-204) are considered failures and will
| 503 Service Unavailable | Retry after 30 seconds or more | | All others | Retry after 10 seconds or more |
+## Delivery with custom headers
+Event subscriptions allow you to set up HTTP headers that are included in delivered events. This capability lets you set custom headers that a destination requires. You can set up to 10 headers when you create an event subscription. Each header value shouldn't be greater than 4,096 (4K) bytes. You can set custom headers on the events that are delivered to the following destinations:
+
+- Webhooks
+- Azure Service Bus topics and queues
+- Azure Event Hubs
+- Relay Hybrid Connections
+
+For more information, see [Delivery with custom headers](delivery-properties.md).
## Next steps
event-grid Delivery Properties https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/delivery-properties.md
+
+ Title: Azure Event Grid - Set custom headers on delivered events
+description: Describes how you can set custom headers (or delivery properties) on delivered events.
+ Last updated : 03/24/2021++
+# Delivery with custom headers
+Event subscriptions allow you to set up HTTP headers that are included in delivered events. This capability lets you set custom headers that a destination requires. You can set up to 10 headers when you create an event subscription. Each header value shouldn't be greater than 4,096 (4K) bytes.
+
+You can set custom headers on the events that are delivered to the following destinations:
+
+- Webhooks
+- Azure Service Bus topics and queues
+- Azure Event Hubs
+- Relay Hybrid Connections
+
+When creating an event subscription in the Azure portal, you can use the **Delivery Properties** tab to set custom HTTP headers. This page lets you set fixed and dynamic header values.
+
+## Setting static header values
+To set headers with a fixed value, provide the name of the header and its value in the corresponding fields:
++
+You may want to check **Is secret?** when providing sensitive data. Sensitive data won't be displayed in the Azure portal.
+
+## Setting dynamic header values
+You can set the value of a header based on a property in an incoming event. Use JsonPath syntax to refer to an incoming event's property value to use as the value for a header in outgoing requests. For example, to set the value of a header named **Channel** using the value of the incoming event property **system** in the event data, configure your event subscription in the following way:
++
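+The static and dynamic values described above can also be set when you create the event subscription from the Azure CLI. The following is a rough sketch only; the `--delivery-attribute-mapping` parameter and its positional syntax are assumptions to verify against your installed CLI version, and all resource names are placeholders:
+
+```azurecli-interactive
+# one static header (the last value indicates whether it's a secret) and one dynamic header sourced from the event payload
+az eventgrid event-subscription create \
+  --name <EVENT SUBSCRIPTION NAME> \
+  --source-resource-id <TOPIC RESOURCE ID> \
+  --endpoint <WEBHOOK ENDPOINT URL> \
+  --delivery-attribute-mapping StaticHeader1 static value1 false \
+  --delivery-attribute-mapping Channel dynamic data.system
+```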
+## Examples
+This section gives you a few examples of using delivery properties.
+
+### Setting the Authorization header with a bearer token (non-normative example)
+
+Set a value to an Authorization header to identify the request with your Webhook handler. An Authorization header can be set if you aren't [protecting your Webhook with Azure Active Directory](secure-webhook-delivery.md).
+
+| Header name | Header type | Header value |
+| :-- | :-- | :-- |
+|`Authorization` | Static | `BEARER SlAV32hkKG...`|
+
+Outgoing requests should now contain the header set on the event subscription:
+
+```console
+GET /home.html HTTP/1.1
+
+Host: acme.com
+
+User-Agent: <user-agent goes here>
+
+Authorization: BEARER SlAV32hkKG...
+```
+
+> [!NOTE]
+> Defining authorization headers is a sensible option when your destination is a Webhook. It should not be used for [functions subscribed with a resource id](/rest/api/eventgrid/eventsubscriptions/createorupdate#azurefunctioneventsubscriptiondestination), Service Bus, Event Hubs, and Hybrid Connections as those destinations support their own authentication schemes when used with Event Grid.
+
+### Service Bus example
+Azure Service Bus supports the use of a [BrokerProperties HTTP header](/rest/api/servicebus/message-headers-and-properties#message-headers) to define message properties when sending single messages. The value of the `BrokerProperties` header should be provided in the JSON format. For example, if you need to set message properties when sending a single message to Service Bus, set the header in the following way:
+
+| Header name | Header type | Header value |
+| :-- | :-- | :-- |
+|`BrokerProperties` | Static | `BrokerProperties: { "MessageId": "{701332E1-B37B-4D29-AA0A-E367906C206E}", "TimeToLive" : 90}` |
++
+### Event Hubs example
+
+If you need to publish events to a specific partition within an event hub, define a [BrokerProperties HTTP header](/rest/api/eventhub/event-hubs-runtime-rest#common-headers) on your event subscription to specify the partition key that identifies the target event hub partition.
+
+| Header name | Header type | Header value |
+| :-- | :-- | :-- |
+|`BrokerProperties` | Static | `BrokerProperties: {"PartitionKey": "0000000000-0000-0000-0000-000000000000000"}` |
++
+### Configure time to live on outgoing events to Azure Storage Queues
+For the Azure Storage Queues destination, you can configure only the time-to-live that the outgoing message has after it's delivered to an Azure Storage queue. If no time is provided, the message's default time to live is 7 days. You can also set the event to never expire.
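+If you create the subscription from the Azure CLI instead of the portal, the time-to-live can be passed as a parameter. This is a sketch only; the `--storage-queue-msg-ttl` parameter name (value in seconds) is an assumption to verify against your installed CLI version, and the resource IDs are placeholders:
+
+```azurecli-interactive
+az eventgrid event-subscription create \
+  --name <EVENT SUBSCRIPTION NAME> \
+  --source-resource-id <TOPIC RESOURCE ID> \
+  --endpoint-type storagequeue \
+  --endpoint "<STORAGE ACCOUNT RESOURCE ID>/queueServices/default/queues/<QUEUE NAME>" \
+  --storage-queue-msg-ttl 3600
+```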
++
+## Next steps
+For more information about event delivery, see the following articles:
+
+- [Delivery and retry](delivery-and-retry.md)
+- [Webhook event delivery](webhook-event-delivery.md)
+- [Event filtering](event-filtering.md)
event-grid Enable Identity Custom Topics Domains https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/enable-identity-custom-topics-domains.md
+
+ Title: Enable managed identity on Azure Event Grid custom topics and domains
+description: This article describes how to enable managed service identity for an Azure Event Grid custom topic or domain.
+ Last updated : 03/25/2021++
+# Assign a system-managed identity to an Event Grid custom topic or domain
+This article shows you how to enable a system-managed identity for an Event Grid custom topic or a domain. To learn about managed identities, see [What are managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md).
+
+## Enable identity at the time of creation
+
+### Using Azure portal
+You can enable system-assigned identity for a custom topic or a domain while creating it in the Azure portal. The following image shows how to enable a system-managed identity for a custom topic. Select the option **Enable system assigned identity** on the **Advanced** page of the topic creation wizard. You'll see this option on the **Advanced** page of the domain creation wizard too.
+
+![Enable identity while creating a custom topic](./media/managed-service-identity/create-topic-identity.png)
+
+### Using Azure CLI
+You can also use the Azure CLI to create a custom topic or domain with a system-assigned identity. Use the `az eventgrid topic create` command with the `--identity` parameter set to `systemassigned`. If you don't specify a value for this parameter, the default value `noidentity` is used.
+
+```azurecli-interactive
+# create a custom topic with a system-assigned identity
+az eventgrid topic create -g <RESOURCE GROUP NAME> --name <TOPIC NAME> -l <LOCATION> --identity systemassigned
+```
+
+Similarly, you can use the `az eventgrid domain create` command to create a domain with a system-managed identity.
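+For example, here's a sketch of the equivalent domain command, with placeholder names to replace:
+
+```azurecli-interactive
+# create a domain with a system-assigned identity
+az eventgrid domain create -g <RESOURCE GROUP NAME> --name <DOMAIN NAME> -l <LOCATION> --identity systemassigned
+```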
+
+## Enable identity for an existing custom topic or domain
+In this section, you learn how to enable a system-managed identity for an existing custom topic or domain.
+
+### Using Azure portal
+The following procedure shows you how to enable system-managed identity for a custom topic. The steps for enabling an identity for a domain are similar.
+
+1. Go to the [Azure portal](https://portal.azure.com).
+2. Search for **event grid topics** in the search bar at the top.
+3. Select the **custom topic** for which you want to enable the managed identity.
+4. Switch to the **Identity** tab.
+5. Turn **on** the switch to enable the identity.
+1. Select **Save** on the toolbar to save the setting.
+
+ :::image type="content" source="./media/managed-service-identity/identity-existing-topic.png" alt-text="Identity page for a custom topic":::
+
+You can use similar steps to enable an identity for an event grid domain.
+
+### Use the Azure CLI
+Use the `az eventgrid topic update` command with `--identity` set to `systemassigned` to enable system-assigned identity for an existing custom topic. If you want to disable the identity, specify `noidentity` as the value.
+
+```azurecli-interactive
+# Update the topic to assign a system-assigned identity.
+az eventgrid topic update -g $rg --name $topicname --identity systemassigned --sku basic
+```
+
+The command for updating an existing domain is similar (`az eventgrid domain update`).
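+For example, here's a sketch with placeholder names, assuming the domain command accepts the same `--identity` values:
+
+```azurecli-interactive
+# Update the domain to assign a system-assigned identity.
+az eventgrid domain update -g <RESOURCE GROUP NAME> --name <DOMAIN NAME> --identity systemassigned
+```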
++
+## Next steps
+Add the identity to an appropriate role (for example, Service Bus Data Sender) on the destination (for example, a Service Bus queue). For detailed steps, see [Add identity to Azure roles on destinations](add-identity-roles.md).
event-grid Enable Identity System Topics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/enable-identity-system-topics.md
+
+ Title: Enable managed identity on Azure Event Grid system topic
+description: This article describes how to enable managed service identity for an Azure Event Grid system topic.
+ Last updated : 03/25/2021++
+# Assign a system-managed identity to an Event Grid system topic
+In this article, you learn how to enable system-managed identity for an existing Event Grid system topic. To learn about managed identities, see [What are managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md).
+
+> [!IMPORTANT]
+> Currently, you can't enable a system-managed identity when creating a new system topic, that is, when creating an event subscription on an Azure resource that supports system topics.
++
+## Use Azure portal
+The following procedure shows you how to enable system-managed identity for a system topic.
+
+1. Go to the [Azure portal](https://portal.azure.com).
+2. Search for **event grid system topics** in the search bar at the top.
+3. Select the **system topic** for which you want to enable the managed identity.
+4. Select **Identity** on the left menu. You don't see this option for a system topic that's in the global location.
+5. Turn **on** the switch to enable the identity.
+1. Select **Save** on the toolbar to save the setting.
+
+ :::image type="content" source="./media/managed-service-identity/identity-existing-system-topic.png" alt-text="Identity page for a system topic":::
+1. Select **Yes** on the confirmation message.
+
+ :::image type="content" source="./media/managed-service-identity/identity-existing-system-topic-confirmation.png" alt-text="Assign identity to a system topic - confirmation":::
+1. Confirm that you see the object ID of the system-assigned managed identity and see a link to assign roles.
+
+ :::image type="content" source="./media/managed-service-identity/identity-existing-system-topic-completed.png" alt-text="Assign identity to a system topic - completed":::
+
+## Global Azure sources
+You can enable a system-managed identity only for regional Azure resources. You can't enable it for system topics associated with global Azure resources such as Azure subscriptions, resource groups, or Azure Maps. The system topics for these global sources are also not associated with a specific region. You don't see the **Identity** page for a system topic whose location is set to **Global**.
++++
+## Next steps
+Add the identity to an appropriate role (for example, Service Bus Data Sender) on the destination (for example, a Service Bus queue). For detailed steps, see [Add identity to Azure roles on destinations](add-identity-roles.md).
event-grid Managed Service Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/managed-service-identity.md
Title: Event delivery, managed service identity, and private link description: This article describes how to enable managed service identity for an Azure event grid topic. Use it to forward events to supported destinations. Previously updated : 01/28/2021 Last updated : 03/25/2021 # Event delivery with a managed identity
-This article describes how to enable a [managed service identity](../active-directory/managed-identities-azure-resources/overview.md) for Azure event grid custom topics or domains. Use it to forward events to supported destinations such as Service Bus queues and topics, event hubs, and storage accounts.
+This article describes how to use a [managed service identity](../active-directory/managed-identities-azure-resources/overview.md) for an Azure event grid system topic, custom topic, or domain. Use it to forward events to supported destinations such as Service Bus queues and topics, event hubs, and storage accounts.
-Here are the steps that are covered in detail in this article:
-1. Create a custom topic or domain with a system-assigned identity, or update an existing custom topic or domain to enable identity.
-1. Add the identity to an appropriate role (for example, Service Bus Data Sender) on the destination (for example, a Service Bus queue).
-1. When you create event subscriptions, enable the usage of the identity to deliver events to the destination.
-> [!NOTE]
-> Currently, it's not possible to deliver events using [private endpoints](../private-link/private-endpoint-overview.md). For more information, see the [Private endpoints](#private-endpoints) section at the end of this article.
-## Create a custom topic or domain with an identity
-First, let's look at how to create a topic or a domain with a system-managed identity.
+## Prerequisites
+1. Assign a system-assigned identity to a system topic, a custom topic, or a domain.
+ - For custom topics and domains, see [Enable managed identity for custom topics and domains](enable-identity-custom-topics-domains.md).
+ - For system topics, see [Enable managed identity for system topics](enable-identity-system-topics.md)
+1. Add the identity to an appropriate role (for example, Service Bus Data Sender) on the destination (for example, a Service Bus queue). For detailed steps, see [Add identity to Azure roles on destinations](add-identity-roles.md)
-### Use the Azure portal
-You can enable system-assigned identity for a custom topic or domain while you create it in the Azure portal. The following image shows how to enable a system-managed identity for a custom topic. Basically, you select the option **Enable system assigned identity** on the **Advanced** page of the topic creation wizard. You'll see this option on the **Advanced** page of the domain creation wizard too.
-
-![Enable identity while creating a custom topic](./media/managed-service-identity/create-topic-identity.png)
-
-### Use the Azure CLI
-You can also use the Azure CLI to create a custom topic or domain with a system-assigned identity. Use the `az eventgrid topic create` command with the `--identity` parameter set to `systemassigned`. If you don't specify a value for this parameter, the default value `noidentity` is used.
-
-```azurecli-interactive
-# create a custom topic with a system-assigned identity
-az eventgrid topic create -g <RESOURCE GROUP NAME> --name <TOPIC NAME> -l <LOCATION> --identity systemassigned
-```
-
-Similarly, you can use the `az eventgrid domain create` command to create a domain with a system-managed identity.
-
-## Enable an identity for an existing custom topic or domain
-In the previous section, you learned how to enable a system-managed identity while you created a custom topic or a domain. In this section, you learn how to enable a system-managed identity for an existing custom topic or domain.
-
-### Use the Azure portal
-The following procedure shows you how to enable system-managed identity for a custom topic. The steps for enabling an identity for a domain are similar.
-
-1. Go to the [Azure portal](https://portal.azure.com).
-2. Search for **event grid topics** in the search bar at the top.
-3. Select the **custom topic** for which you want to enable the managed identity.
-4. Switch to the **Identity** tab.
-5. Turn **on** the switch to enable the identity.
-1. Select **Save** on the toolbar to save the setting.
-
- :::image type="content" source="./media/managed-service-identity/identity-existing-topic.png" alt-text="Identity page for a custom topic":::
-
-You can use similar steps to enable an identity for an event grid domain.
-
-### Use the Azure CLI
-Use the `az eventgrid topic update` command with `--identity` set to `systemassigned` to enable system-assigned identity for an existing custom topic. If you want to disable the identity, specify `noidentity` as the value.
-
-```azurecli-interactive
-# Update the topic to assign a system-assigned identity.
-az eventgrid topic update -g $rg --name $topicname --identity systemassigned --sku basic
-```
-
-The command for updating an existing domain is similar (`az eventgrid domain update`).
-
-## Supported destinations and Azure roles
-After you enable identity for your event grid custom topic or domain, Azure automatically creates an identity in Azure Active Directory. Add this identity to appropriate Azure roles so that the custom topic or domain can forward events to supported destinations. For example, add the identity to the **Azure Event Hubs Data Sender** role for an Azure Event Hubs namespace so that the event grid custom topic can forward events to event hubs in that namespace.
-
-Currently, Azure event grid supports custom topics or domains configured with a system-assigned managed identity to forward events to the following destinations. This table also gives you the roles that the identity should be in so that the custom topic can forward the events.
-
-| Destination | Azure role |
-| -- | |
-| Service Bus queues and topics | [Azure Service Bus Data Sender](../service-bus-messaging/authenticate-application.md#azure-built-in-roles-for-azure-service-bus) |
-| Azure Event Hubs | [Azure Event Hubs Data Sender](../event-hubs/authorize-access-azure-active-directory.md#azure-built-in-roles-for-azure-event-hubs) |
-| Azure Blob storage | [Storage Blob Data Contributor](../storage/common/storage-auth-aad-rbac-portal.md#azure-roles-for-blobs-and-queues) |
-| Azure Queue storage |[Storage Queue Data Message Sender](../storage/common/storage-auth-aad-rbac-portal.md#azure-roles-for-blobs-and-queues) |
-
-## Add an identity to Azure roles on destinations
-This section describes how to add the identity for your custom topic or domain to an Azure role.
-
-### Use the Azure portal
-You can use the Azure portal to assign the custom topic or domain identity to an appropriate role so that the custom topic or domain can forward events to the destination.
-
-The following example adds a managed identity for an event grid custom topic named **msitesttopic** to the **Azure Service Bus Data Sender** role for a Service Bus namespace that contains a queue or topic resource. When you add to the role at the namespace level, the event grid custom topic can forward events to all entities within the namespace.
-
-1. Go to your **Service Bus namespace** in the [Azure portal](https://portal.azure.com).
-1. Select **Access Control** in the left pane.
-1. Select **Add** in the **Add a role assignment** section.
-1. On the **Add a role assignment** page, do the following steps:
- 1. Select the role. In this case, it's **Azure Service Bus Data Sender**.
- 1. Select the **identity** for your event grid custom topic or domain.
- 1. Select **Save** to save the configuration.
-
-The steps are similar for adding an identity to other roles mentioned in the table.
-
-### Use the Azure CLI
-The example in this section shows you how to use the Azure CLI to add an identity to an Azure role. The sample commands are for event grid custom topics. The commands for event grid domains are similar.
-
-#### Get the principal ID for the custom topic's system identity
-First, get the principal ID of the custom topic's system-managed identity and assign the identity to appropriate roles.
-
-```azurecli-interactive
-topic_pid=$(az ad sp list --display-name "$<TOPIC NAME>" --query [].objectId -o tsv)
-```
-
-#### Create a role assignment for event hubs at various scopes
-The following CLI example shows how to add a custom topic's identity to the **Azure Event Hubs Data Sender** role at the namespace level or at the event hub level. If you create the role assignment at the namespace level, the custom topic can forward events to all event hubs in that namespace. If you create a role assignment at the event hub level, the custom topic can forward events only to that specific event hub.
--
-```azurecli-interactive
-role="Azure Event Hubs Data Sender"
-namespaceresourceid=$(az eventhubs namespace show -n $<EVENT HUBS NAMESPACE NAME> -g <RESOURCE GROUP of EVENT HUB> --query "{I:id}" -o tsv)
-eventhubresourceid=$(az eventhubs eventhub show -n <EVENT HUB NAME> --namespace-name <EVENT HUBS NAMESPACE NAME> -g <RESOURCE GROUP of EVENT HUB> --query "{I:id}" -o tsv)
-
-# create role assignment for the whole namespace
-az role assignment create --role "$role" --assignee "$topic_pid" --scope "$namespaceresourceid"
-
-# create role assignment scoped to just one event hub inside the namespace
-az role assignment create --role "$role" --assignee "$topic_pid" --scope "$eventhubresourceid"
-```
-
-#### Create a role assignment for a Service Bus topic at various scopes
-The following CLI example shows how to add an event grid custom topic's identity to the **Azure Service Bus Data Sender** role at the namespace level or at the Service Bus topic level. If you create the role assignment at the namespace level, the event grid topic can forward events to all entities (Service Bus queues or topics) within that namespace. If you create a role assignment at the Service Bus queue or topic level, the event grid custom topic can forward events only to that specific Service Bus queue or topic.
-
-```azurecli-interactive
-role="Azure Service Bus Data Sender"
-namespaceresourceid=$(az servicebus namespace show -n $RG\SB -g "$RG" --query "{I:id}" -o tsv
-sbustopicresourceid=$(az servicebus topic show -n topic1 --namespace-name $RG\SB -g "$RG" --query "{I:id}" -o tsv)
-
-# create role assignment for the whole namespace
-az role assignment create --role "$role" --assignee "$topic_pid" --scope "$namespaceresourceid"
-
-# create role assignment scoped to just one hub inside the namespace
-az role assignment create --role "$role" --assignee "$topic_pid" --scope "$sbustopicresourceid"
-```
+ > [!NOTE]
+ > Currently, it's not possible to deliver events using [private endpoints](../private-link/private-endpoint-overview.md). For more information, see the [Private endpoints](#private-endpoints) section at the end of this article.
## Create event subscriptions that use an identity
-After you have an event grid custom topic or a domain with a system-managed identity and have added the identity to the appropriate role on the destination, you're ready to create subscriptions that use the identity.
+After you have an event grid system topic, custom topic, or domain with a system-managed identity, and have added the identity to the appropriate role on the destination, you're ready to create subscriptions that use the identity.
### Use the Azure portal When you create an event subscription, you see an option to enable the use of a system-assigned identity for an endpoint in the **ENDPOINT DETAILS** section.
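You can also enable the identity from the Azure CLI when you create the event subscription. The following is a sketch only; the `--delivery-identity`, `--delivery-identity-endpoint-type`, and `--delivery-identity-endpoint` parameter names are assumptions to verify with `az eventgrid event-subscription create --help`, and the sketch assumes a Service Bus queue destination with placeholder names:

```azurecli-interactive
queueid=$(az servicebus queue show --name <QUEUE NAME> --namespace-name <NAMESPACE NAME> --resource-group <RESOURCE GROUP NAME> --query id -o tsv)

az eventgrid event-subscription create \
  --name <EVENT SUBSCRIPTION NAME> \
  --source-resource-id <TOPIC RESOURCE ID> \
  --delivery-identity systemassigned \
  --delivery-identity-endpoint-type servicebusqueue \
  --delivery-identity-endpoint "$queueid"
```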
Under this configuration, the traffic goes over the public IP/internet from Even
## Next steps
-For more information about managed service identities, see [What are managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md).
+To learn about managed identities, see [What are managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md).
firewall Tutorial Hybrid Ps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/tutorial-hybrid-ps.md
Previously updated : 08/28/2020 Last updated : 03/26/2021 customer intent: As an administrator, I want to control network access from an on-premises network to an Azure virtual network.
There are three key requirements for this scenario to work correctly:
See the [Create Routes](#create-the-routes) section in this article to see how these routes are created. >[!NOTE]
->Azure Firewall must have direct Internet connectivity. If your AzureFirewallSubnet learns a default route to your on-premises network via BGP, you must override this with a 0.0.0.0/0 UDR with the **NextHopType** value set as **Internet** to maintain direct Internet connectivity.
+>Azure Firewall must have direct Internet connectivity. If your AzureFirewallSubnet learns a default route to your on-premises network via BGP, you must configure Azure Firewall in forced tunneling mode. If you have an existing Azure Firewall that can't be reconfigured in forced tunneling mode, we recommend adding a 0.0.0.0/0 UDR on the AzureFirewallSubnet with the **NextHopType** value set to **Internet** to maintain direct Internet connectivity.
>
->Azure Firewall can be configured to support forced tunneling. For more information, see [Azure Firewall forced tunneling](forced-tunneling.md).
+>For more information, see [Azure Firewall forced tunneling](forced-tunneling.md).
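As a rough sketch of that override route in Azure CLI terms (this tutorial otherwise uses Azure PowerShell; the route table name and route name below are placeholders), the UDR would look like the following:

```azurecli-interactive
# 0.0.0.0/0 route that keeps Azure Firewall's direct Internet connectivity
az network route-table route create \
  --resource-group <RESOURCE GROUP NAME> \
  --route-table-name <ROUTE TABLE NAME associated with AzureFirewallSubnet> \
  --name DefaultToInternet \
  --address-prefix 0.0.0.0/0 \
  --next-hop-type Internet
```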
>[!NOTE] >Traffic between directly peered VNets is routed directly even if a UDR points to Azure Firewall as the default gateway. To send subnet to subnet traffic to the firewall in this scenario, a UDR must contain the target subnet network prefix explicitly on both subnets.
genomics Frequently Asked Questions Genomics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/genomics/frequently-asked-questions-genomics.md
You need two access keys in case you want to update (regenerate) them without in
## Do you save my storage account keys? Your storage account key is used to create short-term access tokens for the Microsoft Genomics service to read your input files and write the output files. The default token duration is 48 hours. The token duration can be changed with the `-sas/--sas-duration` option of the submit command; the value is in hours.
+## Does Microsoft Genomics store customer data?
+
+No. Microsoft Genomics does not store any customer data.
+ ## What genome references can I use? These references are supported:
hdinsight Hdinsight Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hdinsight-release-notes-archive.md
Last updated 02/08/2021
Azure HDInsight is one of the most popular services among enterprise customers for open-source Apache Hadoop and Apache Spark analytics on Azure.
+## Release date: 02/05/2021
+
+This release applies to both HDInsight 3.6 and HDInsight 4.0. HDInsight releases are made available to all regions over several days. The release date here indicates the first region release date. If you don't see the changes below, wait several days for the release to go live in your region.
+
+### New features
+#### Dav4-series support
+HDInsight added Dav4-series support in this release. Learn more about [Dav4-series here](/azure/virtual-machines/dav4-dasv4-series).
+
+#### Kafka REST Proxy GA
+Kafka REST Proxy enables you to interact with your Kafka cluster via a REST API over HTTPS. Kafka REST Proxy is generally available starting from this release. Learn more about [Kafka REST Proxy here](/azure/hdinsight/kafka/rest-proxy).
+
+#### Moving to Azure virtual machine scale sets
+HDInsight now uses Azure virtual machines to provision the cluster. The service is gradually migrating to [Azure virtual machine scale sets](../virtual-machine-scale-sets/overview.md). The entire process may take months. After your regions and subscriptions are migrated, newly created HDInsight clusters will run on virtual machine scale sets without customer actions. No breaking change is expected.
+
+### Deprecation
+#### Disabled VM sizes
+Starting from January 9, 2021, HDInsight will block all customers from creating clusters using standard_A8, standard_A9, standard_A10, and standard_A11 VM sizes. Existing clusters will run as is. Consider moving to HDInsight 4.0 to avoid potential system/support interruption.
+
+### Behavior changes
+#### Default cluster VM size changes to Ev3-series
+Default cluster VM sizes will be changed from D-series to Ev3-series. This change applies to head nodes and worker nodes. To avoid this change impacting your tested workflows, specify the VM sizes that you want to use in the ARM template.
+
+#### Network interface resource not visible for clusters running on Azure virtual machine scale sets
+HDInsight is gradually migrating to Azure virtual machine scale sets. Network interfaces for virtual machines are no longer visible to customers for clusters that use Azure virtual machine scale sets.
+
+#### Breaking change for .NET for Apache Spark 1.0.0
+With the latest release, HDInsight introduces the first official version v1.0.0 of the [".NET for Apache Spark"](https://github.com/dotnet/spark) library. It provides DataFrame API completeness for Spark 2.4.x and Spark 3.0.x along with a host of [other features](https://github.com/dotnet/spark/blob/master/docs/release-notes/1.0.0/release-1.0.0.md). There will be breaking changes for this major version; refer to [the .NET for Apache Spark migration guide](https://github.com/dotnet/spark/blob/master/docs/migration-guide.md#upgrading-from-microsoftspark-0x-to-10) to understand the steps needed to update your code and pipelines. To learn more, refer to this [.NET for Apache Spark v1.0 on Azure HDInsight guide](/azure/hdinsight/spark/spark-dotnet-version-update#using-net-for-apache-spark-v10-in-hdinsight).
+
+### Upcoming changes
+The following changes will happen in upcoming releases.
+
+#### Default cluster version will be changed to 4.0
+Starting February 2021, the default version of HDInsight cluster will be changed from 3.6 to 4.0. For more information about available versions, see [available versions](./hdinsight-component-versioning.md). Learn more about what is new in [HDInsight 4.0](./hdinsight-version-release.md).
+
+#### OS version upgrade
+HDInsight is upgrading OS version from Ubuntu 16.04 to 18.04. The upgrade will complete before April 2021.
+
+#### HDInsight 3.6 end of support on June 30 2021
+HDInsight 3.6 will reach end of support. Starting from June 30, 2021, customers can't create new HDInsight 3.6 clusters. Existing clusters will run as is without support from Microsoft. Consider moving to HDInsight 4.0 to avoid potential system/support interruption.
+
+### Component version change
+No component version change for this release. You can find the current component versions for HDInsight 4.0 and HDInsight 3.6 in [this doc](./hdinsight-component-versioning.md).
+ ## Release date: 11/18/2020 This release applies for both HDInsight 3.6 and HDInsight 4.0. HDInsight release is made available to all regions over several days. The release date here indicates the first region release date. If you don't see below changes, wait for the release being live in your region in several days.
hdinsight Hdinsight Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hdinsight-release-notes.md
description: Latest release notes for Azure HDInsight. Get development tips and
Previously updated : 02/08/2021 Last updated : 03/23/2021 # Azure HDInsight release notes
Azure HDInsight is one of the most popular services among enterprise customers f
If you would like to subscribe on release notes, watch releases on [this GitHub repository](https://github.com/hdinsight/release-notes/releases).
-## Release date: 02/05/2021
+## Release date: 03/24/2021
This release applies to both HDInsight 3.6 and HDInsight 4.0. HDInsight releases are made available to all regions over several days. The release date here indicates the first region release date. If you don't see the changes below, wait several days for the release to go live in your region. ## New features
-### Dav4-series support
-HDInsight added Dav4-series support in this release. Learn more about [Dav4-series here](../virtual-machines/dav4-dasv4-series.md).
+### Spark 3.0 preview
+HDInsight added [Spark 3.0.0](https://spark.apache.org/docs/3.0.0/) support to HDInsight 4.0 as a Preview feature.
-### Kafka REST Proxy GA
-Kafka REST Proxy enables you to interact with your Kafka cluster via a REST API over HTTPS. Kafka Rest Proxy is general available starting from this release. Learn more about [Kafka REST Proxy here](./kafk).
+### Kafka 2.4 preview
+HDInsight added [Kafka 2.4.1](http://kafka.apache.org/24/documentation.html) support to HDInsight 4.0 as a Preview feature.
### Moving to Azure virtual machine scale sets HDInsight now uses Azure virtual machines to provision the cluster. The service is gradually migrating to [Azure virtual machine scale sets](../virtual-machine-scale-sets/overview.md). The entire process may take months. After your regions and subscriptions are migrated, newly created HDInsight clusters will run on virtual machine scale sets without customer actions. No breaking change is expected. ## Deprecation
-### Disabled VM sizes
-Starting form January 9 2021, HDInsight will block all customers creating clusters using standand_A8, standand_A9, standand_A10 and standand_A11 VM sizes. Existing clusters will run as is. Consider moving to HDInsight 4.0 to avoid potential system/support interruption.
+No deprecation in this release.
## Behavior changes
-### Default cluster VM size changes to Ev3-series
-Default cluster VM sizes will be changed from D-series to Ev3-series. This change applies to head nodes and worker nodes. To avoid this change impacting your tested workflows, specify the VM sizes that you want to use in the ARM template.
+### Default cluster version is changed to 4.0
+The default version of HDInsight cluster is changed from 3.6 to 4.0. For more information about available versions, see [available versions](./hdinsight-component-versioning.md). Learn more about what is new in [HDInsight 4.0](./hdinsight-version-release.md).
+
+### Default cluster VM sizes are changed to Ev3-series
+Default cluster VM sizes are changed from D-series to Ev3-series. This change applies to head nodes and worker nodes. To avoid this change impacting your tested workflows, specify the VM sizes that you want to use in the ARM template.
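+If you create clusters from the Azure CLI rather than an ARM template, you can pin the sizes the same way. The following is a sketch only; the cluster, credential, and storage values are placeholders, and the D-series sizes shown are examples of explicitly keeping your previously tested sizes, not recommendations:
+
+```azurecli-interactive
+az hdinsight create \
+  --name <CLUSTER NAME> \
+  --resource-group <RESOURCE GROUP NAME> \
+  --type spark \
+  --http-password <CLUSTER LOGIN PASSWORD> \
+  --storage-account <STORAGE ACCOUNT NAME> \
+  --headnode-size Standard_D12_v2 \
+  --workernode-size Standard_D13_v2
+```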
### Network interface resource not visible for clusters running on Azure virtual machine scale sets HDInsight is gradually migrating to Azure virtual machine scale sets. Network interfaces for virtual machines are no longer visible to customers for clusters that use Azure virtual machine scale sets. -
-### Breaking change for .NET for Apache Spark 1.0.0
-With the latest release, HDInsight introduces the first official version v1.0.0 of the [ΓÇ£.NET for Apache SparkΓÇ¥](https://github.com/dotnet/spark) library. It provides DataFrame API completeness for Spark 2.4.x and Spark 3.0.x along with a host of [other features](https://github.com/dotnet/spark/blob/master/docs/release-notes/1.0.0/release-1.0.0.md). There will be breaking changes for this major version, refer to [the .NET for Apache Spark migration guide](https://github.com/dotnet/spark/blob/master/docs/migration-guide.md#upgrading-from-microsoftspark-0x-to-10) to understand steps needed to update your code and pipelines. To learn more, refer to this [.NET for Apache Spark v1.0 on Azure HDInsight guide](./spark/spark-dotnet-version-update.md#using-net-for-apache-spark-v10-in-hdinsight).
-- ## Upcoming changes The following changes will happen in upcoming releases.
-### Default cluster version will be changed to 4.0
-Starting February 2021, the default version of HDInsight cluster will be changed from 3.6 to 4.0. For more information about available versions, see [available versions](./hdinsight-component-versioning.md). Learn more about what is new in [HDInsight 4.0](./hdinsight-version-release.md).
- ### OS version upgrade
-HDInsight is upgrading OS version from Ubuntu 16.04 to 18.04. The upgrade will complete before April 2021.
+HDInsight will be upgrading OS version from Ubuntu 16.04 to 18.04. The upgrade will complete before April 2021.
### HDInsight 3.6 end of support on June 30 2021 HDInsight 3.6 will reach end of support. Starting from June 30, 2021, customers can't create new HDInsight 3.6 clusters. Existing clusters will run as is without support from Microsoft. Consider moving to HDInsight 4.0 to avoid potential system/support interruption.
HDInsight 3.6 will be end of support. Starting form June 30 2021, customers can'
HDInsight continues to make cluster reliability and performance improvements. ## Component version change
-No component version change for this release. You can find the current component versions for HDInsight 4.0 and HDInsight 3.6 in [this doc](./hdinsight-component-versioning.md).
+Added support for Spark 3.0.0 and Kafka 2.4.1 as Preview.
+You can find the current component versions for HDInsight 4.0 and HDInsight 3.6 in [this doc](./hdinsight-component-versioning.md).
iot-central Howto Connect Powerbi https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-connect-powerbi.md
*This topic applies to administrators and solution developers.*
-[!Note] This solution uses [legacy data export features](./howto-export-data-legacy.md). Stay tuned for updated guidance on how to connect to Power BI using the latest data export.
+> [!Note]
+> This solution uses [legacy data export features](./howto-export-data-legacy.md). Stay tuned for updated guidance on how to connect to Power BI using the latest data export.
:::image type="content" source="media/howto-connect-powerbi/iot-continuous-data-export.png" alt-text="Power BI solution pipeline":::
Use the Power BI Solution for Azure IoT Central V3 to create a powerful Power BI
- Filter down to data sent by specific devices - View the most recent telemetry data in a table
-This solution sets up a pipeline that reads data from your [Continuous Data Export](./howto-export-data-legacy.md) Azure Blob storage account. The pipeline uses Azure Functions, Azure Data Factory, and Azure SQL Database to process and transform the data. you can visualize and analyze the data in a Power BI report that you download as a PBIX file. All of the resources are created in your Azure subscription, so you can customize each component to suit your needs.
+This solution sets up a pipeline that reads data from your [legacy data export](./howto-export-data-legacy.md) Azure Blob storage account. The pipeline uses Azure Functions, Azure Data Factory, and Azure SQL Database to process and transform the data. You can visualize and analyze the data in a Power BI report that you download as a PBIX file. All of the resources are created in your Azure subscription, so you can customize each component to suit your needs.
## Prerequisites
To complete the steps in this how-to guide, you need an active Azure subscriptio
Setting up the solution requires the following resources: - A version 3 IoT Central application. To learn how to check your application version, see [About your application](./howto-get-app-info.md). To learn how to create an IoT Central application, see [Create an Azure IoT Central application](./quick-deploy-iot-central.md).-- Continuous data export configured to export telemetry, devices, and device templates to Azure Blob storage. To learn more, see [How to export IoT data to destinations in Azure](howto-export-data.md).
+- Legacy continuous data export configured to export telemetry, devices, and device templates to Azure Blob storage. To learn more, see the [legacy data export documentation](howto-export-data-legacy.md).
- Make sure that only your IoT Central application is exporting data to the blob container. - Your [devices must send JSON encoded messages](../../iot-hub/iot-hub-devguide-messages-d2c.md). Devices must specify `contentType:application/JSON` and `contentEncoding:utf-8` or `contentEncoding:utf-16` or `contentEncoding:utf-32` in the message system properties. - Power BI Desktop (latest version). See [Power BI downloads](https://powerbi.microsoft.com/downloads/).
iot-develop Quickstart Devkit Mxchip Az3166 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-develop/quickstart-devkit-mxchip-az3166.md
Last updated 03/17/2021
**Applies to**: [Embedded device development](about-iot-develop.md#embedded-device-development)<br> **Total completion time**: 30 minutes
+[![Browse code](media/common/browse-github-code.png)](https://github.com/azure-rtos/getting-started/tree/master/MXChip/AZ3166)
+ In this tutorial you use Azure RTOS to connect an MXCHIP AZ3166 IoT DevKit (hereafter, MXCHIP DevKit) to Azure IoT. The article is part of the series [Get started with Azure IoT embedded device development](quickstart-device-development.md). The series introduces device developers to Azure RTOS, and shows how to connect several device evaluation kits to Azure IoT. You will complete the following tasks:
You will complete the following tasks:
* Build an image and flash it onto the MXCHIP DevKit * Use Azure IoT Central to create cloud components, view properties, view device telemetry, and call direct commands
-> [!NOTE]
-> If you prefer to only view the code and not complete this article, see the sample at [Connect an MXCHIP AZ3166 to Azure IoT](https://github.com/azure-rtos/getting-started/tree/master/MXChip/AZ3166). If you plan to complete this article, you'll clone the GitHub repo in a later step.
- ## Prerequisites * A PC running Microsoft Windows 10
iot-develop Quickstart Send Telemetry Cli Node https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-develop/quickstart-send-telemetry-cli-node.md
ms.devlang: node Previously updated : 01/11/2021 Last updated : 03/25/2021 # Quickstart: Send telemetry from a device to an IoT hub (Node.js)
In this quickstart, you learn a basic IoT device application development workflo
## Use the Node.js SDK to send messages In this section, you will use the Node.js SDK to send messages from your simulated device to your IoT hub.
-1. Open a new terminal window. You will use this terminal to install the Node.js SDK and work with Node.js sample code. You should now have two terminals open: the one you just opened to work with Node.js, and the CLI shell that you used in previous sections to enter Azure CLI commands.
+1. Open a new terminal window. You will use this terminal to install the Node.js SDK and work with Node.js sample code. You should now have two terminals open: the one you just opened to work with Node.js, and the CLI shell that you used in previous sections to enter Azure CLI commands.
1. Copy the [Azure IoT Node.js SDK device samples](https://github.com/Azure/azure-iot-sdk-node/tree/master/device/samples) to your local machine:
In this section, you will use the Node.js SDK to send messages from your simulat
git clone https://github.com/Azure/azure-iot-sdk-node ```
-1. Navigate to the *azure-iot-sdk-node/device/samples* directory:
+1. Navigate to the *azure-iot-sdk-node/device/samples/pnp* directory:
```console
- cd azure-iot-sdk-node/device/samples
+ cd azure-iot-sdk-node/device/samples/pnp
```+ 1. Install the Azure IoT Node.js SDK and necessary dependencies: ```console npm install ```+ This command installs the proper dependencies as specified in the *package.json* file in the device samples directory.
-1. Set the Device Connection String as an environment variable called `DEVICE_CONNECTION_STRING`. The string value to use is the string you obtained in the previous section after creating your simulated Node.js device.
+1. Set both of the following environment variables, to enable your simulated device to connect to Azure IoT.
+ * Set an environment variable called `IOTHUB_DEVICE_CONNECTION_STRING`. For the variable value, use the device connection string that you saved in the previous section.
+ * Set an environment variable called `IOTHUB_DEVICE_SECURITY_TYPE`. For the variable, use the literal string value `connectionString`.
**Windows (cmd)** ```console
- set DEVICE_CONNECTION_STRING=<your connection string here>
+ set IOTHUB_DEVICE_CONNECTION_STRING=<your connection string here>
+ ```
+ ```console
+ set IOTHUB_DEVICE_SECURITY_TYPE=connectionString
``` > [!NOTE]
- > For Windows CMD there are no quotation marks surrounding the connection string.
+ > For Windows CMD there are no quotation marks surrounding the string values for each variable.
- **Linux (bash)**
+ **PowerShell**
- ```bash
- export DEVICE_CONNECTION_STRING="<your connection string here>"
+ ```azurepowershell
+ $env:IOTHUB_DEVICE_CONNECTION_STRING='<your connection string here>'
+ ```
+ ```azurepowershell
+ $env:IOTHUB_DEVICE_SECURITY_TYPE='connectionString'
```
+ **Bash (Linux or Windows)**
+
+ ```bash
+ export IOTHUB_DEVICE_CONNECTION_STRING="<your connection string here>"
+ ```
+ ```bash
+ export IOTHUB_DEVICE_SECURITY_TYPE="connectionString"
+ ```
1. In your open CLI shell, run the [az iot hub monitor-events](/cli/azure/ext/azure-iot/iot/hub#ext-azure-iot-az-iot-hub-monitor-events) command to begin monitoring for events on your simulated IoT device. Event messages will be printed in the terminal as they arrive. ```azurecli az iot hub monitor-events --output table --hub-name {YourIoTHubName} ```
-1. In your Node.js terminal, run the code for the installed sample file *simple_sample_device.js* . This code accesses the simulated IoT device and sends a message to the IoT hub.
+1. In your Node.js terminal, run the code for the installed sample file *simple_thermostat.js*. This code accesses the simulated IoT device and sends a message to the IoT hub.
To run the Node.js sample from the terminal: ```console
- node ./simple_sample_device.js
- ```
-
- Optionally, you can run the Node.js code from the sample in your JavaScript IDE:
- ```javascript
- 'use strict';
-
- const Protocol = require('azure-iot-device-mqtt').Mqtt;
- // Uncomment one of these transports and then change it in fromConnectionString to test other transports
- // const Protocol = require('azure-iot-device-amqp').AmqpWs;
- // const Protocol = require('azure-iot-device-http').Http;
- // const Protocol = require('azure-iot-device-amqp').Amqp;
- // const Protocol = require('azure-iot-device-mqtt').MqttWs;
- const Client = require('azure-iot-device').Client;
- const Message = require('azure-iot-device').Message;
-
- // String containing Hostname, Device Id & Device Key in the following formats:
- // "HostName=<iothub_host_name>;DeviceId=<device_id>;SharedAccessKey=<device_key>"
- const deviceConnectionString = process.env.DEVICE_CONNECTION_STRING;
- let sendInterval;
-
- function disconnectHandler () {
- clearInterval(sendInterval);
- client.open().catch((err) => {
- console.error(err.message);
- });
- }
-
- // The AMQP and HTTP transports have the notion of completing, rejecting or abandoning the message.
- // For example, this is only functional in AMQP and HTTP:
- // client.complete(msg, printResultFor('completed'));
- // If using MQTT calls to complete, reject, or abandon are no-ops.
- // When completing a message, the service that sent the C2D message is notified that the message has been processed.
- // When rejecting a message, the service that sent the C2D message is notified that the message won't be processed by the device. the method to use is client.reject(msg, callback).
- // When abandoning the message, IoT Hub will immediately try to resend it. The method to use is client.abandon(msg, callback).
- // MQTT is simpler: it accepts the message by default, and doesn't support rejecting or abandoning a message.
- function messageHandler (msg) {
- console.log('Id: ' + msg.messageId + ' Body: ' + msg.data);
- client.complete(msg, printResultFor('completed'));
- }
-
- function generateMessage () {
- const windSpeed = 10 + (Math.random() * 4); // range: [10, 14]
- const temperature = 20 + (Math.random() * 10); // range: [20, 30]
- const humidity = 60 + (Math.random() * 20); // range: [60, 80]
- const data = JSON.stringify({ deviceId: 'myFirstDevice', windSpeed: windSpeed, temperature: temperature, humidity: humidity });
- const message = new Message(data);
- message.properties.add('temperatureAlert', (temperature > 28) ? 'true' : 'false');
- return message;
- }
-
- function errorCallback (err) {
- console.error(err.message);
- }
-
- function connectCallback () {
- console.log('Client connected');
- // Create a message and send it to the IoT Hub every two seconds
- sendInterval = setInterval(() => {
- const message = generateMessage();
- console.log('Sending message: ' + message.getData());
- client.sendEvent(message, printResultFor('send'));
- }, 2000);
-
- }
-
- // fromConnectionString must specify a transport constructor, coming from any transport package.
- let client = Client.fromConnectionString(deviceConnectionString, Protocol);
-
- client.on('connect', connectCallback);
- client.on('error', errorCallback);
- client.on('disconnect', disconnectHandler);
- client.on('message', messageHandler);
-
- client.open()
- .catch(err => {
- console.error('Could not connect: ' + err.message);
- });
-
- // Helper function to print results in the console
- function printResultFor(op) {
- return function printResult(err, res) {
- if (err) console.log(op + ' error: ' + err.toString());
- if (res) console.log(op + ' status: ' + res.constructor.name);
- };
- }
+ node ./simple_thermostat.js
```
+ > [!NOTE]
+ > This code sample uses Azure IoT Plug and Play, which lets you integrate smart devices into your solutions without any manual configuration. By default, most samples in this documentation use IoT Plug and Play. To learn more about the advantages of IoT PnP, and cases for using or not using it, see [What is IoT Plug and Play?](../iot-pnp/overview-iot-plug-and-play.md)
    As the Node.js code sends a simulated telemetry message from your device to the IoT hub, the message appears in your CLI shell that is monitoring events:

    ```output
+Starting event monitor, use ctrl-c to stop...
event: component: ''
- interface: ''
+ interface: dtmi:com:example:Thermostat;1
module: ''
- origin: <your device name>
- payload: '{"deviceId":"myFirstDevice","windSpeed":11.853592092144627,"temperature":22.62484121157508,"humidity":66.17960805575937}'
+ origin: <your device ID>
+ payload:
+ temperature: 36.87027777131555
    ```

Your device is now securely connected and sending telemetry to Azure IoT Hub.
iot-edge How To Install Iot Edge On Windows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-install-iot-edge-on-windows.md
If you want to deploy to a remote target device instead of your local device and
![Select your device to verify it is supported](./media/how-to-install-iot-edge-on-windows/evaluate-supported-device.png)
-1. Accept the default settings on the **2.2 Settings** tab.
+1. On the **2.2 Settings** tab, review the configuration settings of your deployment. Once you are satisfied with the settings, select **Next**.
+
+ ![Review the configuration settings of your deployment](./media/how-to-install-iot-edge-on-windows/default-deployment-configuration-settings.png)
+
+ >[!NOTE]
+ >If you are using a Windows virtual machine, it is recommended to use a default switch rather than an external switch to ensure the Linux virtual machine created in the deployment can obtain an IP address.
+ >
+ >Using a default switch assigns the Linux virtual machine an internal IP address. This internal IP address cannot be reached from outside the Windows virtual machine, but it can be connected to locally while logged onto the Windows virtual machine.
+ >
+ >If you are using Windows Server, please note that Azure IoT Edge for Linux on Windows does not automatically support the default switch. For a local Windows Server virtual machine, ensure the Linux virtual machine can obtain an IP address through the external switch. For a Windows Server virtual machine in Azure, set up an internal switch before deploying IoT Edge for Linux on Windows.
1. On the **2.3 Deployment** tab, you can watch the progress of the deployment. The full process includes downloading the Azure IoT Edge for Linux on Windows package, installing the package, configuring the host device, and setting up the Linux virtual machine. This process may take several minutes to complete. A successful deployment is pictured below.
Install IoT Edge for Linux on Windows onto your target device if you have not al
   ```

   > [!NOTE]
- > You can run this command without parameters or optionally customize deployment with parameters. You can refer to [the IoT Edge for Linux on Windows PowerShell script reference](reference-iot-edge-for-linux-on-windows-scripts.md#deploy-eflow) to see their meanings.
+ > You can run this command without parameters or optionally customize deployment with parameters. You can refer to [the IoT Edge for Linux on Windows PowerShell script reference](reference-iot-edge-for-linux-on-windows-scripts.md#deploy-eflow) to see parameter meanings and default values.
1. Enter 'Y' to accept the license terms.
iot-edge Reference Iot Edge For Linux On Windows Scripts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/reference-iot-edge-for-linux-on-windows-scripts.md
The **Deploy-Eflow** command is the main deployment method. The deployment comma
| vmSizeDefintion | No longer than 30 characters | Definition of the number of cores and available RAM for the virtual machine. **Default value**: Standard_K8S_v1. |
| vmDiskSize | Between 8 GB and 256 GB | Maximum disk size of the dynamically expanding virtual hard disk. **Default value**: 16 GB. |
| vmUser | No longer than 30 characters | Username for logging on to the virtual machine. |
-| vnetType | **Transparent** or **ICS** | The type of virtual switch. **Default value**: Transparent. |
+| vnetType | **Transparent** or **ICS** | The type of virtual switch. **Default value**: Transparent. Transparent refers to an external switch, while ICS refers to an internal switch. |
| vnetName | No longer than 64 characters | The name of the virtual switch. **Default value**: External. |
| enableVtpm | None | **Switch parameter**. Create the virtual machine with TPM enabled or disabled. |
| mobyPackageVersion | No longer than 30 characters | Version of Moby package to be verified or installed on the virtual machine. **Default value:** 19.03.11. |
| iotedgePackageVersion | No longer than 30 characters | Version of IoT Edge package to be verified or installed on the virtual machine. **Default value:** 1.1.0. |
| installPackages | None | **Switch parameter**. When toggled, the script will attempt to install the Moby and IoT Edge packages rather than only verifying the packages are present. |
+>[!NOTE]
+>By default, if the process cannot find an external switch with the name `External`, it will search for any existing external switch through which to obtain an IP address. If there is no external switch available, it will search for an internal switch. If there is no internal switch available, it will attempt to create the default switch through which to obtain an IP address.
+
## Verify-EflowVm

The **Verify-EflowVm** command is an exposed function to check that the IoT Edge for Linux on Windows virtual machine was created. It takes only common parameters, and it will return **true** if the virtual machine was created and **false** if not. For additional information, use the command `Get-Help Verify-EflowVm -full`.
iot-hub-device-update Device Update Azure Real Time Operating System https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/device-update-azure-real-time-operating-system.md
Title: Device Update for Azure IoT Hub tutorial for Azure-Real-Time-Operating-System | Microsoft Docs
-description: Get started with Device Update for Azure IoT Hub using Azure-Real-Time-Operating-System
+ Title: Device Update for Azure Real-time-operating-system | Microsoft Docs
+description: Get started with Device Update for Azure Real-time-operating-system
Last updated 3/18/2021
iot-hub-device-update Import Update https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/import-update.md
Learn how to import a new update into Device Update for IoT Hub. If you haven't
## Prerequisites
-* [Access to an IoT Hub with Device Update for IoT Hub enabled](create-device-update-account.md). It is recommended that you use a S1 (Standard) tier or above for your IoT Hub.
+* [Access to an IoT Hub with Device Update for IoT Hub enabled](create-device-update-account.md).
* An IoT device (or simulator) provisioned for Device Update within IoT Hub.
* If using a real device, you'll need an update image file for image update, or [APT Manifest file](device-update-apt-manifest.md) for package update.
-* [PowerShell 5](/powershell/scripting/install/installing-powershell) or later.
+* [PowerShell 5](/powershell/scripting/install/installing-powershell) or later (available for Linux, macOS, and Windows).
* Supported browsers:
  * [Microsoft Edge](https://www.microsoft.com/edge)
  * Google Chrome
iot-hub Iot Hub Ha Dr https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-ha-dr.md
Once the failover operation for the IoT hub completes, all operations from the d
>
> - If you use Azure Functions or Azure Stream Analytics to connect the built-in Events endpoint, you might need to perform a **Restart**. This is because during failover previous offsets are no longer valid.
>
-> - When routing to storage, we recommend listing the blobs or files and then iterating over them, to ensure all blobs or files are read without making any assumptions of partition. The partition range could potentially change during a Microsoft-initiated failover or manual failover. You can use the [List Blobs API](/rest/api/storageservices/list-blobs) to enumerate the list of blobs or [List ADLS Gen2 API](/rest/api/storageservices/datalakestoragegen2/path/list) for the list of files. To learn more, see [Azure Storage as a routing endpoint](iot-hub-devguide-messages-d2c.md#azure-storage-as-a-routing-endpoint).
+> - When routing to storage, we recommend listing the blobs or files and then iterating over them, to ensure all blobs or files are read without making any assumptions of partition. The partition range could potentially change during a Microsoft-initiated failover or manual failover. You can use the [List Blobs API](/rest/api/storageservices/list-blobs) to enumerate the list of blobs or [List ADLS Gen2 API](/rest/api/storageservices/datalakestoragegen2/filesystem/listpaths) for the list of files. To learn more, see [Azure Storage as a routing endpoint](iot-hub-devguide-messages-d2c.md#azure-storage-as-a-routing-endpoint).
## Microsoft-initiated failover
Here's a summary of the HA/DR options presented in this article that can be used
* [What is Azure IoT Hub?](about-iot-hub.md)
* [Get started with IoT Hubs (Quickstart)](quickstart-send-telemetry-dotnet.md)
-* [Tutorial: Perform manual failover for an IoT hub](tutorial-manual-failover.md)
+* [Tutorial: Perform manual failover for an IoT hub](tutorial-manual-failover.md)
iot-hub Iot Hub Security X509 Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-security-x509-get-started.md
- Title: Tutorial for X.509 security in Azure IoT Hub | Microsoft Docs
-description: Get started on the X.509 based security in your Azure IoT hub in a simulated environment.
- Previously updated : 08/20/2019
-# Set up X.509 security in your Azure IoT hub
-
-This tutorial shows the steps you need to secure your Azure IoT hub using the *X.509 Certificate Authentication*. For the purpose of illustration, we use the open-source tool OpenSSL to create certificates locally on your Windows machine. We recommend that you use this tutorial for test purposes only. For a production environment, you should purchase the certificates from a *root certificate authority (CA)*. Also, in production, make sure you have a strategy in place to handle certificate rollover when a device certificate or a CA certificate expires.
--
-## Prerequisites
-
-This tutorial requires that you have the following resources ready:
-
-* You have created an IoT hub with your Azure subscription. See [Create an IoT hub through portal](iot-hub-create-through-portal.md) for detailed steps.
-
-* You have [Visual Studio 2017 or Visual Studio 2019](https://www.visualstudio.com/vs/) installed.
-
-## Get X.509 CA certificates
-
-The X.509 certificate-based security in the IoT Hub requires you to start with an [X.509 certificate chain](https://en.wikipedia.org/wiki/X.509#Certificate_chains_and_cross-certification), which includes the root certificate as well as any intermediate certificates up until the leaf certificate.
-
-You may choose any of the following ways to get your certificates:
-
-* Purchase X.509 certificates from a *root certificate authority (CA)*. This method is recommended for production environments.
-
-* Create your own X.509 certificates using a third-party tool such as [OpenSSL](https://www.openssl.org/). This technique is fine for test and development purposes. See [Managing test CA certificates for samples and tutorials](https://github.com/Azure/azure-iot-sdk-c/blob/master/tools/CACertificates/CACertificateOverview.md) for information about generating test CA certificates using PowerShell or Bash. The rest of this tutorial uses test CA certificates generated by following the instructions in [Managing test CA certificates for samples and tutorials](https://github.com/Azure/azure-iot-sdk-c/blob/master/tools/CACertificates/CACertificateOverview.md).
-
-* Generate an [X.509 intermediate CA certificate](iot-hub-x509ca-overview.md#sign-devices-into-the-certificate-chain-of-trust) signed by an existing root CA certificate and upload it to the hub. Once the intermediate certificate is uploaded and verified, as instructed below, it can be used in the place of a root CA certificate mentioned below. Tools like OpenSSL ([openssl req](https://www.openssl.org/docs/man1.1.0/man1/req.html) and [openssl ca](https://www.openssl.org/docs/man1.1.0/man1/ca.html)) can be used to generate and sign an intermediate CA certificate.
-
-> [!NOTE]
-> Do not upload the 3rd party root if it is not unique to you because that would enable other customers of the 3rd party to connect their devices to your IoT Hub.
-
-## Register X.509 CA certificates to your IoT hub
-
-These steps show you how to add a new Certificate Authority to your IoT hub through the portal. When you use X.509 certificate CA authentication, make sure you register your new certificate before the existing one expires as part of your certificate rollover strategy.
-
-> [!NOTE]
-> The maximum number of X.509 CA certificates that can be registered to an IoT hub is 25. For more information, see [Azure IoT Hub quotas and throttling](iot-hub-devguide-quotas-throttling.md).
-
-1. In the Azure portal, navigate to your IoT hub and select **Settings** > **Certificates** for the hub.
-
-1. Select **Add** to add a new certificate.
-
-1. In **Certificate Name**, enter a friendly display name, and select the certificate file you created in the previous section from your computer.
-
-1. Once you get a notification that your certificate is successfully uploaded, select **Save**.
-
- ![Upload certificate](./media/iot-hub-security-x509-get-started/iot-hub-add-cert.png)
-
- Your certificate appears in the certificates list with status of **Unverified**.
-
-1. Select the certificate that you just added to display **Certificate Details**, and then select **Generate Verification Code**.
-
- ![Verify certificate](./media/iot-hub-security-x509-get-started/copy-verification-code.png)
-
-1. Copy the **Verification Code** to the clipboard. You use it to validate the certificate ownership.
-
-1. Follow Step 3 in [Managing test CA certificates for samples and tutorials](https://github.com/Azure/azure-iot-sdk-c/blob/master/tools/CACertificates/CACertificateOverview.md). This process signs your verification code with the private key associate with your X.509 CA certificate, which generates a signature. There are tools available to perform this signing process, for example, OpenSSL. This process is known as the [Proof of possession](https://tools.ietf.org/html/rfc5280#section-3.1).
-
-1. In **Certificate Details**, under **Verification Certificate .pem or .cer file**, find and open the signature file. Then select **Verify**.
-
- The status of your certificate changes to **Verified**. Select **Refresh** if the certificate does not update automatically.
-
-## Create an X.509 device for your IoT hub
-
-1. In the Azure portal, navigate to your IoT hub, and then select **Explorers** > **IoT devices**.
-
-1. Select **New** to add a new device.
-
-1. In **Device ID**, enter a friendly display name. For **Authentication type**, choose **X.509 CA Signed**, and then select **Save**.
-
- ![Create X.509 device in portal](./media/iot-hub-security-x509-get-started/new-x509-device.png)
-
-## Authenticate your X.509 device with the X.509 certificates
-
-To authenticate your X.509 device, you need to first sign the device with the CA certificate. Signing of leaf devices is normally done at the manufacturing plant, where manufacturing tools have been enabled accordingly. As the device goes from one manufacturer to another, each manufacturer's signing action is captured as an intermediate certificate within the chain. The result is a certificate chain from the CA certificate to the device's leaf certificate. Step 4 in [Managing test CA certificates for samples and tutorials](https://github.com/Azure/azure-iot-sdk-c/blob/master/tools/CACertificates/CACertificateOverview.md) generates a device certificate.
-
-Next, we will show you how to create a C# application to simulate the X.509 device registered for your IoT hub. We will send temperature and humidity values from the simulated device to your hub. In this tutorial, we will create only the device application. It is left as an exercise to the readers to create the IoT Hub service application that will send response to the events sent by this simulated device. The C# application assumes that you have followed the steps in [Managing test CA certificates for samples and tutorials](https://github.com/Azure/azure-iot-sdk-c/blob/master/tools/CACertificates/CACertificateOverview.md).
-
-1. Open Visual Studio, select **Create a new project**, and then choose the **Console App (.NET Framework)** project template. Select **Next**.
-
-1. In **Configure your new project**, name the project *SimulateX509Device*, and then select **Create**.
-
- ![Create X.509 device project in Visual Studio](./media/iot-hub-security-x509-get-started/create-device-project-vs2019.png)
-
-1. In Solution Explorer, right-click the **SimulateX509Device** project, and then select **Manage NuGet Packages**.
-
-1. In the **NuGet Package Manager**, select **Browse** and search for and choose **Microsoft.Azure.Devices.Client**. Select **Install**.
-
- ![Add device SDK NuGet package in Visual Studio](./media/iot-hub-security-x509-get-started/device-sdk-nuget.png)
-
- This step downloads, installs, and adds a reference to the Azure IoT device SDK NuGet package and its dependencies.
-
-1. Add the following `using` statements at the top of the **Program.cs** file:
-
- ```csharp
- using Microsoft.Azure.Devices.Client;
- using Microsoft.Azure.Devices.Shared;
- using System.Security.Cryptography.X509Certificates;
- ```
-
-1. Add the following fields to the **Program** class:
-
- ```csharp
- private static int MESSAGE_COUNT = 5;
- private const int TEMPERATURE_THRESHOLD = 30;
- private static String deviceId = "<your-device-id>";
- private static float temperature;
- private static float humidity;
- private static Random rnd = new Random();
- ```
-
- Use the friendly device name you used in the preceding section in place of _<your_device_id>_.
-
-1. Add the following function to create random numbers for temperature and humidity and send these values to the hub:
-
- ```csharp
- static async Task SendEvent(DeviceClient deviceClient)
- {
- string dataBuffer;
- Console.WriteLine("Device sending {0} messages to IoTHub...\n", MESSAGE_COUNT);
-
- for (int count = 0; count < MESSAGE_COUNT; count++)
- {
- temperature = rnd.Next(20, 35);
- humidity = rnd.Next(60, 80);
- dataBuffer = string.Format("{{\"deviceId\":\"{0}\",\"messageId\":{1},\"temperature\":{2},\"humidity\":{3}}}", deviceId, count, temperature, humidity);
- Message eventMessage = new Message(Encoding.UTF8.GetBytes(dataBuffer));
- eventMessage.Properties.Add("temperatureAlert", (temperature > TEMPERATURE_THRESHOLD) ? "true" : "false");
- Console.WriteLine("\t{0}> Sending message: {1}, Data: [{2}]", DateTime.Now.ToLocalTime(), count, dataBuffer);
-
- await deviceClient.SendEventAsync(eventMessage);
- }
- }
- ```
-
-1. Finally, add the following lines of code to the **Main** function, replacing the placeholders _device-id_, _your-iot-hub-name_, and _absolute-path-to-your-device-pfx-file_ as required by your setup.
-
- ```csharp
- try
- {
- var cert = new X509Certificate2(@"<absolute-path-to-your-device-pfx-file>", "1234");
- var auth = new DeviceAuthenticationWithX509Certificate("<device-id>", cert);
- var deviceClient = DeviceClient.Create("<your-iot-hub-name>.azure-devices.net", auth, TransportType.Amqp_Tcp_Only);
-
- if (deviceClient == null)
- {
- Console.WriteLine("Failed to create DeviceClient!");
- }
- else
- {
- Console.WriteLine("Successfully created DeviceClient!");
- SendEvent(deviceClient).Wait();
- }
-
- Console.WriteLine("Exiting...\n");
- }
- catch (Exception ex)
- {
- Console.WriteLine("Error in sample: {0}", ex.Message);
- }
- ```
-
- This code connects to your IoT hub by creating the connection string for your X.509 device. Once successfully connected, it then sends temperature and humidity events to the hub, and waits for its response.
-
-1. Run the app. Because this application accesses a *.pfx* file, you may need to run this app as an administrator.
-
- 1. Build the Visual Studio solution.
-
- 1. Open a new Command Prompt window by using **Run as administrator**.
-
- 1. Navigate to the folder that contains your solution, then navigate to the *bin/Debug* path within the solution folder.
-
- 1. Run the application **SimulateX509Device.exe** from the command prompt.
-
- You should see your device successfully connecting to the hub and sending the events.
-
- ![Run device app](./media/iot-hub-security-x509-get-started/device-app-success.png)
-
-## Next steps
-
-To learn more about securing your IoT solution, see:
-
-* [IoT Security Best Practices](../iot-fundamentals/iot-security-best-practices.md)
-
-* [IoT Security Architecture](../iot-fundamentals/iot-security-architecture.md)
-
-* [Secure your IoT deployment](../iot-fundamentals/iot-security-deployment.md)
-
-To further explore the capabilities of IoT Hub, see:
-
-* [Deploying AI to edge devices with Azure IoT Edge](../iot-edge/quickstart-linux.md)
iot-hub Tutorial X509 Certificates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/tutorial-x509-certificates.md
+
+ Title: Tutorial - Understand X.509 public key certificates for Azure IoT Hub | Microsoft Docs
+description: Tutorial - Understand X.509 public key certificates for Azure IoT Hub
+ Last updated : 02/26/2021
+#Customer intent: As a developer, I want to be able to use X.509 certificates to authenticate devices to an IoT hub. This step of the tutorial needs to introduce me to X.509 Public Key certificates.
++
+# Tutorial: Understanding X.509 Public Key Certificates
+
+X.509 certificates are digital documents that represent a user, computer, service, or device. They are issued by a certification authority (CA), subordinate CA, or registration authority and contain the public key of the certificate subject. They do not contain the subject's private key, which must be stored securely. Public key certificates are documented by [RFC 5280](https://tools.ietf.org/html/rfc5280). They are digitally signed and, in general, contain the following information:
+
+* Information about the certificate subject
+* The public key that corresponds to the subject's private key
+* Information about the issuing CA
+* The supported encryption and/or digital signing algorithms
+* Information to determine the revocation and validity status of the certificate
+
+## Certificate fields
+
+Over time there have been three certificate versions. Each version adds fields to the one before. Version 3 is current and contains version 1 and version 2 fields in addition to version 3 fields. Version 1 defined the following fields:
+
+* **Version**: A value (1, 2, or 3) that identifies the version number of the certificate
+* **Serial Number**: A unique number for each certificate issued by a CA
+* **CA Signature Algorithm**: Name of the algorithm the CA uses to sign the certificate contents
+* **Issuer Name**: The distinguished name (DN) of the certificate's issuing CA
+* **Validity Period**: The time period for which the certificate is considered valid
+* **Subject Name**: Name of the entity represented by the certificate
+* **Subject Public Key Info**: Public key owned by the certificate subject
+
+Version 2 added the following fields containing information about the certificate issuer. These fields are, however, rarely used.
+
+* **Issuer Unique ID**: A unique identifier for the issuing CA as defined by the CA
+* **Subject Unique ID**: A unique identifier for the certificate subject as defined by the issuing CA
+
+Version 3 certificates added the following extensions:
+
+* **Authority Key Identifier**: This can be one of two values:
+ * The subject of the CA and serial number of the CA certificate that issued this certificate
+ * A hash of the public key of the CA that issued this certificate
+* **Subject Key Identifier**: Hash of the current certificate's public key
+* **Key Usage**: Defines the service for which a certificate can be used. This can be one or more of the following values:
+ * **Digital Signature**
+ * **Non-Repudiation**
+ * **Key Encipherment**
+ * **Data Encipherment**
+ * **Key Agreement**
+ * **Key Cert Sign**
+ * **CRL Sign**
+ * **Encipher Only**
+ * **Decipher Only**
+* **Private Key Usage Period**: Validity period for the private key portion of a key pair
+* **Certificate Policies**: Policies used to validate the certificate subject
+* **Policy Mappings**: Maps a policy in one organization to policy in another
+* **Subject Alternative Name**: List of alternate names for the subject
+* **Issuer Alternative Name**: List of alternate names for the issuing CA
+* **Subject Dir Attribute**: Attributes from an X.500 or LDAP directory
+* **Basic Constraints**: Allows the certificate to designate whether it is issued to a CA, or to a user, computer, device, or service. This extension also includes a path length constraint that limits the number of subordinate CAs that can exist.
+* **Name Constraints**: Designates which namespaces are allowed in a CA-issued certificate
+* **Policy Constraints**: Can be used to prohibit policy mappings between CAs
+* **Extended Key Usage**: Indicates how a certificate's public key can be used beyond the purposes identified in the **Key Usage** extension
+* **CRL Distribution Points**: Contains one or more URLs where the base certificate revocation list (CRL) is published
+* **Inhibit anyPolicy**: Inhibits the use of the **All Issuance Policies** OID (2.5.29.32.0) in subordinate CA certificates
+* **Freshest CRL**: Contains one or more URLs where the issuing CA's delta CRL is published
+* **Authority Information Access**: Contains one or more URLs where the issuing CA certificate is published
+* **Subject Information Access**: Contains information about how to retrieve additional details for a certificate subject
+
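+To see how these fields look on a real certificate, you can dump its contents with OpenSSL. This is only a quick inspection sketch; `device.crt` is a placeholder for whichever certificate file you want to examine.
+
+```bash
+# Print every field and extension of the certificate in human-readable form
+openssl x509 -in device.crt -noout -text
+
+# Print only selected fields: subject, issuer, and validity period
+openssl x509 -in device.crt -noout -subject -issuer -dates
+```
+
+In the full `-text` output, the version 3 extensions listed above appear under the **X509v3 extensions** heading.
+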
+## Certificate formats
+
+Certificates can be saved in a variety of formats. Azure IoT Hub authentication typically uses the PEM and PFX formats.
+
+### Binary certificate
+
+This contains the certificate in raw binary form, using DER ASN.1 encoding.
+
+### ASCII PEM format
+
+A PEM certificate (.pem extension) contains a base64-encoded certificate beginning with `-----BEGIN CERTIFICATE-----` and ending with `-----END CERTIFICATE-----`. The PEM format is very common and is required by IoT Hub when uploading certain certificates.
+
+### ASCII (PEM) key
+
+Contains a base64-encoded DER key with possibly additional metadata about the algorithm used for password protection.
+
+### PKCS#7 certificate
+
+A format designed for the transport of signed or encrypted data. It is defined by [RFC 2315](https://tools.ietf.org/html/rfc2315). It can include the entire certificate chain.
+
+### PKCS#8 key
+
+The format for a private key store defined by [RFC 5208](https://tools.ietf.org/html/rfc5208).
+
+### PKCS#12 key and certificate
+
+A complex format that can store and protect a key and the entire certificate chain. It is commonly used with a .pfx extension. PKCS#12 is synonymous with the PFX format.
+
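+As a rough sketch of how these formats relate, the following OpenSSL commands convert a certificate between DER and PEM and bundle a certificate with its private key into a PFX (PKCS#12) file. The file names and the export password are placeholders.
+
+```bash
+# Convert a binary (DER) certificate to PEM
+openssl x509 -inform der -in device.der -out device.pem
+
+# Convert a PEM certificate back to DER
+openssl x509 -in device.pem -outform der -out device.der
+
+# Bundle a PEM certificate and its private key into a PKCS#12 (.pfx) file
+openssl pkcs12 -export -in device.pem -inkey device.key -out device.pfx -password pass:1234
+```
+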
+## Next steps
+
+If you want to generate test certificates that you can use to authenticate devices to your IoT Hub, see the following topics:
+
+* [Using Microsoft-Supplied Scripts to Create Test Certificates](tutorial-x509-scripts.md)
+* [Using OpenSSL to Create Test Certificates](tutorial-x509-openssl.md)
+* [Using OpenSSL to Create Self-Signed Test Certificates](tutorial-x509-self-sign.md)
+
+If you have a certification authority (CA) certificate or subordinate CA certificate and you want to upload it to your IoT hub and prove that you own it, see [Proving Possession of a CA Certificate](tutorial-x509-prove-possession.md).
iot-hub Tutorial X509 Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/tutorial-x509-introduction.md
+
+ Title: Tutorial - Understand Cryptography and X.509 certificates for Azure IoT Hub | Microsoft Docs
+description: Tutorial - Understand cryptography and X.509 PKI for Azure IoT Hub
+ Last updated : 02/25/2021
+#Customer intent: As a developer, I want to be able to use X.509 certificates to authenticate devices to an IoT hub. This step of the tutorial needs to introduce me to X.509 Public Key Infrastructure and public key encryption.
++
+# Tutorial: Understanding Public Key Cryptography and X.509 Public Key Infrastructure
+
+You can use X.509 certificates to authenticate devices to an Azure IoT Hub. A certificate is a digital document that contains the device's public key and can be used to verify that the device is what it claims to be. X.509 certificates and certificate revocation lists (CRLs) are documented by [RFC 5280](https://tools.ietf.org/html/rfc5280). Certificates are just one part of an X.509 public key infrastructure (PKI). To understand X.509 PKI, you need to understand cryptographic algorithms, cryptographic keys, certificates, and certification authorities (CAs):
+
+* **Algorithms** define how original plaintext data is transformed into ciphertext and back to plaintext.
+* **Keys** are random or pseudorandom data strings used as input to an algorithm.
+* **Certificates** are digital documents that contain an entity's public key and enable you to determine whether the subject of the certificate is who or what it claims to be.
+* **Certification Authorities** attest to the authenticity of certificate subjects.
+
+You can purchase a certificate from a certification authority (CA). You can also, for testing and development or if you are working in a self-contained environment, create a self-signed root CA. If, for example, you own one or more devices and are testing IoT hub authentication, you can self-sign your root CA and use that to issue device certificates. You can also issue self-signed device certificates. This is discussed in subsequent articles.
+
+Before discussing X.509 certificates in more detail and using them to authenticate devices to an IoT Hub, we discuss the cryptography on which certificates are based.
+
+## Cryptography
+
+Cryptography is used to protect information and communications. This is typically done by using cryptographic techniques to scramble plaintext (ordinary text) into ciphertext (encoded text) and back again. This scrambling process is called encryption. The reverse process is called decryption. Cryptography is concerned with the following objectives:
+
+* **Confidentiality**: The information can be understood by only the intended audience.
+* **Integrity**: The information cannot be altered in storage or in transit.
+* **Non-repudiation**: The creator of information cannot later deny that creation.
+* **Authentication**: The sender and receiver can confirm each other's identity.
+
+## Encryption
+
+The encryption process requires an algorithm and a key. The algorithm defines how data is transformed from plaintext into ciphertext and back to plaintext. A key is a random string of data used as input to the algorithm. All of the security of the process is contained in the key. Therefore, the key must be stored securely. The details of the most popular algorithms, however, are publicly available.
+
+There are two types of encryption. Symmetric encryption uses the same key for both encryption and decryption. Asymmetric encryption uses different but mathematically related keys to perform encryption and decryption.
+
+### Symmetric encryption
+
+Symmetric encryption uses the same key to encrypt plaintext into ciphertext and decrypt ciphertext back into plaintext. The necessary length of the key, expressed in number of bits, is determined by the algorithm. After the key is used to encrypt plaintext, the encrypted message is sent to the recipient who then decrypts the ciphertext. The symmetric key must be securely transmitted to the recipient. Sending the key is the greatest security risk when using a symmetric algorithm.
+
+![Symmetric encryption example](media/tutorial-x509-introduction/symmetric-keys.png)
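+
+As a minimal sketch of a symmetric round trip, the following OpenSSL commands encrypt and decrypt a file with AES-256, deriving the key from the same passphrase on both sides. The file names and passphrase are placeholders, and the `-pbkdf2` option assumes OpenSSL 1.1.1 or later.
+
+```bash
+# Encrypt plaintext.txt with AES-256-CBC using a passphrase-derived key
+openssl enc -aes-256-cbc -pbkdf2 -salt -in plaintext.txt -out ciphertext.bin -pass pass:MySecretPassphrase
+
+# Decrypt it again with the same passphrase
+openssl enc -d -aes-256-cbc -pbkdf2 -in ciphertext.bin -out decrypted.txt -pass pass:MySecretPassphrase
+```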
+
+### Asymmetric encryption
+
+If only symmetric encryption is used, the problem is that all parties to the communication must possess the same secret key. However, it is possible that unauthorized third parties can capture the key during transmission to authorized users. To address this issue, use asymmetric or public key cryptography instead.
+
+In asymmetric cryptography, every user has two mathematically related keys called a key pair. One key is public and the other key is private. The key pair ensures that only the recipient has access to the private key needed to decrypt the data. The following illustration summarizes the asymmetric encryption process.
+
+![Asymmetric encryption example](media/tutorial-x509-introduction/asymmetric-keys.png)
+
+1. The recipient creates a public-private key pair and sends the public key to a CA. The CA packages the public key in an X.509 certificate.
+
+1. The sending party obtains the recipient's public key from the CA.
+
+1. The sender encrypts plaintext data using an encryption algorithm. The recipient's public key is used to perform encryption.
+
+1. The sender transmits the ciphertext to the recipient. It isn't necessary to send the key because the recipient already has the private key needed to decrypt the ciphertext.
+
+1. The recipient decrypts the ciphertext by using the specified asymmetric algorithm and the private key.
+
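+The same flow can be sketched with OpenSSL (no CA is involved here, so this only illustrates the key mechanics). The recipient generates the key pair and publishes only the public key; the sender encrypts with that public key and only the private key can decrypt. File names are placeholders, and RSA can encrypt only messages smaller than the key size.
+
+```bash
+# Recipient: generate an RSA key pair and export the public key
+openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out recipient.key
+openssl pkey -in recipient.key -pubout -out recipient.pub
+
+# Sender: encrypt a short message with the recipient's public key
+openssl pkeyutl -encrypt -pubin -inkey recipient.pub -in message.txt -out message.enc
+
+# Recipient: decrypt with the private key
+openssl pkeyutl -decrypt -inkey recipient.key -in message.enc -out message.dec
+```
+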
+### Combining symmetric and asymmetric encryption
+
+Symmetric and asymmetric encryption can be combined to take advantage of their relative strengths. Symmetric encryption is much faster than asymmetric but, because the secret key must be shared with other parties, is not as secure. To combine the two types together, symmetric encryption can be used to convert plaintext to ciphertext. Asymmetric encryption is used to exchange the symmetric key. This is demonstrated by the following diagram.
+
+![Symmetric and asymmetric encryption](media/tutorial-x509-introduction/symmetric-asymmetric-encryption.png)
+
+1. The sender retrieves the recipient's public key.
+
+1. The sender generates a symmetric key and uses it to encrypt the original data.
+
+1. The sender uses the recipient's public key to encrypt the symmetric key.
+
+1. The sender transmits the encrypted symmetric key and the ciphertext to the intended recipient.
+
+1. The recipient uses the private key that matches the recipient's public key to decrypt the sender's symmetric key.
+
+1. The recipient uses the symmetric key to decrypt the ciphertext.
+
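+A minimal OpenSSL sketch of this hybrid approach: a random symmetric key protects the bulk data, and the recipient's public key (`recipient.pub` from the previous sketch) protects the symmetric key in transit. File names are placeholders.
+
+```bash
+# Generate a random 256-bit symmetric key (hex-encoded so it fits on one line)
+openssl rand -hex 32 > session.key
+
+# Encrypt the bulk data with the symmetric key
+openssl enc -aes-256-cbc -pbkdf2 -in plaintext.txt -out data.enc -pass file:session.key
+
+# Encrypt the symmetric key with the recipient's public key; send data.enc and session.key.enc
+openssl pkeyutl -encrypt -pubin -inkey recipient.pub -in session.key -out session.key.enc
+```
+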
+### Asymmetric signing
+
+Asymmetric algorithms can be used to protect data from modification and prove the identity of the data creator. The following illustration shows how asymmetric signing helps prove the sender's identity.
+
+![Asymmetric signing example](media/tutorial-x509-introduction/asymmetric-signing.png)
+
+1. The sender passes plaintext data through an asymmetric encryption algorithm, using the private key for encryption. Notice that this scenario reverses use of the private and public keys outlined in the preceding section that detailed asymmetric encryption.
+
+1. The resulting ciphertext is sent to the recipient.
+
+1. The recipient obtains the originator's public key from a directory.
+
+1. The recipient decrypts the ciphertext by using the originator's public key. The resulting plaintext proves the originator's identity because only the originator has access to the private key that initially encrypted the original text.
+
+## Signing
+
+Digital signing can be used to determine whether the data has been modified in transit or at rest. The data is passed through a hash algorithm, a one-way function that produces a mathematical result from the given message. The result is called a *hash value*, *message digest*, *digest*, *signature*, or *thumbprint*. A hash value cannot be reversed to obtain the original message. Because a small change in the message results in a significant change in the *thumbprint*, the hash value can be used to determine whether a message has been altered. The following illustration shows how asymmetric encryption and hash algorithms can be used to verify that a message has not been modified.
+
+![Signing example](media/tutorial-x509-introduction/signing.png)
+
+1. The sender creates a plaintext message.
+
+1. The sender hashes the plaintext message to create a message digest.
+
+1. The sender encrypts the digest using a private key.
+
+1. The sender transmits the plaintext message and the encrypted digest to the intended recipient.
+
+1. The recipient decrypts the digest by using the sender's public key.
+
+1. The recipient runs the same hash algorithm that the sender used over the message.
+
+1. The recipient compares the resulting digest to the decrypted digest. If the digests are the same, the message was not modified during transmission.
+
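+As a sketch, OpenSSL performs the hashing and signing in a single command, and the recipient verifies the signature against the sender's public key. The key pair and file names below are placeholders.
+
+```bash
+# Sender: hash message.txt with SHA-256 and sign the digest with the private key
+openssl dgst -sha256 -sign sender.key -out message.sig message.txt
+
+# Recipient: recompute the hash and verify the signature with the sender's public key
+openssl dgst -sha256 -verify sender.pub -signature message.sig message.txt
+```
+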
+## Next steps
+
+To learn more about the fields that make up a certificate, see [Understanding X.509 Public Key Certificates](tutorial-x509-certificates.md).
+
+If you already know a lot about X.509 certificates, and you want to generate test versions that you can use to authenticate to your IoT Hub, see the following topics:
+
+* [Using Microsoft-Supplied Scripts to Create Test Certificates](tutorial-x509-scripts.md)
+* [Using OpenSSL to Create Test Certificates](tutorial-x509-openssl.md)
+* [Using OpenSSL to Create Self-Signed Test Certificates](tutorial-x509-self-sign.md)
+
+If you have a certification authority (CA) certificate or subordinate CA certificate and you want to upload it to your IoT hub and prove that you own it, see [Proving Possession of a CA Certificate](tutorial-x509-prove-possession.md).
iot-hub Tutorial X509 Openssl https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/tutorial-x509-openssl.md
+
+ Title: Tutorial - Use OpenSSL to create X.509 test certificates for Azure IoT Hub | Microsoft Docs
+description: Tutorial - Use OpenSSL to create CA and device certificates for Azure IoT hub
+ Last updated : 02/26/2021
+#Customer intent: As a developer, I want to be able to use X.509 certificates to authenticate devices to an IoT hub. This step of the tutorial needs to introduce me to OpenSSL that I can use to generate test certificates.
++
+# Tutorial: Using OpenSSL to create test certificates
+
+Although you can purchase X.509 certificates from a trusted certification authority, creating your own test certificate hierarchy or using self-signed certificates is adequate for testing IoT hub device authentication. The following example uses [OpenSSL](https://www.openssl.org/) and the [OpenSSL Cookbook](https://www.feistyduck.com/library/openssl-cookbook/online/ch-openssl.html) to create a certification authority (CA), a subordinate CA, and a device certificate. The example then signs the subordinate CA and the device certificate into a certificate hierarchy. This is presented for example purposes only.
+
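+The commands below assume a reasonably recent OpenSSL release. You can check which version is installed before you begin:
+
+```bash
+openssl version
+```
+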
+## Step 1 - Create the root CA directory structure
+
+Create a directory structure for the certification authority.
+
+* The **certs** directory stores new certificates.
+* The **db** directory is used for the certificate database.
+* The **private** directory stores the CA private key.
+
+```bash
+ mkdir rootca
+ cd rootca
+ mkdir certs db private
+ touch db/index
+ openssl rand -hex 16 > db/serial
+ echo 1001 > db/crlnumber
+```
+
+## Step 2 - Create a root CA configuration file
+
+Before creating a CA, create a configuration file and save it as `rootca.conf` in the rootca directory.
+
+```xml
+[default]
+name = rootca
+domain_suffix = example.com
+aia_url = http://$name.$domain_suffix/$name.crt
+crl_url = http://$name.$domain_suffix/$name.crl
+default_ca = ca_default
+name_opt = utf8,esc_ctrl,multiline,lname,align
+
+[ca_dn]
+commonName = "Test Root CA"
+
+[ca_default]
+home = ../rootca
+database = $home/db/index
+serial = $home/db/serial
+crlnumber = $home/db/crlnumber
+certificate = $home/$name.crt
+private_key = $home/private/$name.key
+RANDFILE = $home/private/random
+new_certs_dir = $home/certs
+unique_subject = no
+copy_extensions = none
+default_days = 3650
+default_crl_days = 365
+default_md = sha256
+policy = policy_c_o_match
+
+[policy_c_o_match]
+countryName = optional
+stateOrProvinceName = optional
+organizationName = optional
+organizationalUnitName = optional
+commonName = supplied
+emailAddress = optional
+
+[req]
+default_bits = 2048
+encrypt_key = yes
+default_md = sha256
+utf8 = yes
+string_mask = utf8only
+prompt = no
+distinguished_name = ca_dn
+req_extensions = ca_ext
+
+[ca_ext]
+basicConstraints = critical,CA:true
+keyUsage = critical,keyCertSign,cRLSign
+subjectKeyIdentifier = hash
+
+[sub_ca_ext]
+authorityKeyIdentifier = keyid:always
+basicConstraints = critical,CA:true,pathlen:0
+extendedKeyUsage = clientAuth,serverAuth
+keyUsage = critical,keyCertSign,cRLSign
+subjectKeyIdentifier = hash
+
+```
+
+## Step 3 - Create a root CA
+
+First, generate the key and the certificate signing request (CSR) in the rootca directory.
+
+```bash
+ openssl req -new -config rootca.conf -out rootca.csr -keyout private/rootca.key
+```
+
+Next, create a self-signed CA certificate. Self-signing is suitable for testing purposes. Specify the ca_ext configuration file extensions on the command line. These indicate that the certificate is for a root CA and can be used to sign certificates and certificate revocation lists (CRLs). Sign the certificate, and commit it to the database.
+
+```bash
+ openssl ca -selfsign -config rootca.conf -in rootca.csr -out rootca.crt -extensions ca_ext
+```
+
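+Optionally, inspect the new root CA certificate and confirm that it carries the expected CA extensions, such as `CA:TRUE` under **Basic Constraints** and the **Certificate Sign** key usage:
+
+```bash
+openssl x509 -in rootca.crt -noout -text
+```
+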
+## Step 4 - Create the subordinate CA directory structure
+
+Create a directory structure for the subordinate CA.
+
+```bash
+ mkdir subca
+ cd subca
+ mkdir certs db private
+ touch db/index
+ openssl rand -hex 16 > db/serial
+ echo 1001 > db/crlnumber
+```
+
+## Step 5 - Create a subordinate CA configuration file
+
+Create a configuration file and save it as subca.conf in the `subca` directory.
+
+```bash
+[default]
+name = subca
+domain_suffix = example.com
+aia_url = http://$name.$domain_suffix/$name.crt
+crl_url = http://$name.$domain_suffix/$name.crl
+default_ca = ca_default
+name_opt = utf8,esc_ctrl,multiline,lname,align
+
+[ca_dn]
+commonName = "Test Subordinate CA"
+
+[ca_default]
+home = .
+database = $home/db/index
+serial = $home/db/serial
+crlnumber = $home/db/crlnumber
+certificate = $home/$name.crt
+private_key = $home/private/$name.key
+RANDFILE = $home/private/random
+new_certs_dir = $home/certs
+unique_subject = no
+copy_extensions = copy
+default_days = 365
+default_crl_days = 90
+default_md = sha256
+policy = policy_c_o_match
+
+[policy_c_o_match]
+countryName = optional
+stateOrProvinceName = optional
+organizationName = optional
+organizationalUnitName = optional
+commonName = supplied
+emailAddress = optional
+
+[req]
+default_bits = 2048
+encrypt_key = yes
+default_md = sha256
+utf8 = yes
+string_mask = utf8only
+prompt = no
+distinguished_name = ca_dn
+req_extensions = ca_ext
+
+[ca_ext]
+basicConstraints = critical,CA:true
+keyUsage = critical,keyCertSign,cRLSign
+subjectKeyIdentifier = hash
+
+[sub_ca_ext]
+authorityKeyIdentifier = keyid:always
+basicConstraints = critical,CA:true,pathlen:0
+extendedKeyUsage = clientAuth,serverAuth
+keyUsage = critical,keyCertSign,cRLSign
+subjectKeyIdentifier = hash
+
+[client_ext]
+authorityKeyIdentifier = keyid:always
+basicConstraints = critical,CA:false
+extendedKeyUsage = clientAuth
+keyUsage = critical,digitalSignature
+subjectKeyIdentifier = hash
+```
+
+## Step 6 - Create a subordinate CA
+
+Create a new serial number in the `rootca/db/serial` file for the subordinate CA certificate.
+
+```bash
+ openssl rand -hex 16 > db/serial
+```
+
+>[!IMPORTANT]
+>You must create a new serial number for every subordinate CA certificate and every device certificate that you create. Different certificates cannot have the same serial number.
+
+This example shows you how to create a subordinate or registration CA. Because you can use the root CA to sign certificates, creating a subordinate CA isn't strictly necessary. Having a subordinate CA does, however, mimic real world certificate hierarchies in which the root CA is kept offline and subordinate CAs issue client certificates.
+
+Use the configuration file to generate a key and a certificate signing request (CSR).
+
+```bash
+ openssl req -new -config subca.conf -out subca.csr -keyout private/subca.key
+```
+
+Submit the CSR to the root CA and use the root CA to issue and sign the subordinate CA certificate. Specify sub_ca_ext for the extensions switch on the command line. The extensions indicate that the certificate is for a CA that can sign certificates and certificate revocation lists (CRLs). When prompted, sign the certificate, and commit it to the database.
+
+```bash
+ openssl ca -config ../rootca/rootca.conf -in subca.csr -out subca.crt -extensions sub_ca_ext
+```
+
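+Optionally, verify that the subordinate CA certificate chains to the root CA before you use it to sign device certificates (a quick check, assuming you are still in the subca directory):
+
+```bash
+openssl verify -CAfile ../rootca/rootca.crt subca.crt
+```
+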
+## Step 7 - Demonstrate proof of possession
+
+You now have both a root CA certificate and a subordinate CA certificate. You can use either one to sign device certificates. The one you choose must be uploaded to your IoT Hub. The following steps assume that you are using the subordinate CA certificate. To upload and register your subordinate CA certificate to your IoT Hub:
+
+1. In the Azure portal, navigate to your IoT hub and select **Settings > Certificates**.
+
+1. Select **Add** to add your new subordinate CA certificate.
+
+1. Enter a display name in the **Certificate Name** field, and select the PEM certificate file you created previously.
+
+1. Select **Save**. Your certificate is shown in the certificate list with a status of **Unverified**. The verification process will prove that you own the certificate.
+
+
+1. Select the certificate to view the **Certificate Details** dialog.
+
+1. Select **Generate Verification Code**. For more information, see [Prove Possession of a CA certificate](tutorial-x509-prove-possession.md).
+
+1. Copy the verification code to the clipboard. You must set the verification code as the certificate subject. For example, if the verification code is BB0C656E69AF75E3FB3C8D922C1760C58C1DA5B05AAA9D0A, add that as the subject of your certificate as shown in the next step.
+
+1. Generate a private key and use it to create a certificate signing request (CSR). When prompted for the common name, enter the verification code.
+
+ ```bash
+ $ openssl genpkey -out pop.key -algorithm RSA -pkeyopt rsa_keygen_bits:2048
+ $ openssl req -new -key pop.key -out pop.csr
+
+ --
+ Country Name (2 letter code) [XX]:.
+ State or Province Name (full name) []:.
+ Locality Name (eg, city) [Default City]:.
+ Organization Name (eg, company) [Default Company Ltd]:.
+ Organizational Unit Name (eg, section) []:.
+ Common Name (eg, your name or your server hostname) []:BB0C656E69AF75E3FB3C8D922C1760C58C1DA5B05AAA9D0A
+ Email Address []:
+
+ Please enter the following 'extra' attributes
+ to be sent with your certificate request
+ A challenge password []:
+ An optional company name []:
+
+ ```
+
+9. Create a certificate using the root CA configuration file and the CSR.
+
+ ```bash
+ openssl ca -config rootca.conf -in pop.csr -out pop.crt -extensions client_ext
+
+ ```
+
+10. Select the new certificate in the **Certificate Details** view
+
+11. After the certificate uploads, select **Verify**. The CA certificate status should change to **Verified**.
+
+## Step 8 - Create a device in your IoT Hub
+
+Navigate to your IoT Hub in the Azure portal and create a new IoT device identity with the following values:
+
+1. Provide the **Device ID** that matches the subject name of your device certificates.
+
+1. Select the **X.509 CA Signed** authentication type.
+
+1. Select **Save**.
+
+## Step 9 - Create a client device certificate
+
+To generate a client certificate, you must first generate a private key. The following command shows how to use OpenSSL to create a private key. Create the key in the subca directory.
+
+```bash
+openssl genpkey -out device.key -algorithm RSA -pkeyopt rsa_keygen_bits:2048
+```
+
+Create a certificate signing request (CSR) for the key. You do not need to enter a challenge password or an optional company name. You must, however, enter the device ID in the common name field.
+
+```bash
+openssl req -new -key device.key -out device.csr
+
+--
+Country Name (2 letter code) [XX]:.
+State or Province Name (full name) []:.
+Locality Name (eg, city) [Default City]:.
+Organization Name (eg, company) [Default Company Ltd]:.
+Organizational Unit Name (eg, section) []:
+Common Name (eg, your name or your server hostname) []:`<your device ID>`
+Email Address []:
+
+Please enter the following 'extra' attributes
+to be sent with your certificate request
+A challenge password []:
+An optional company name []:
+
+```
+
+Check that the CSR is what you expect.
+
+```bash
+openssl req -text -in device.csr -noout
+```
+
+Send the CSR to the subordinate CA for signing into the certificate hierarchy. Specify `client_ext` in the `-extensions` switch. Notice that the `Basic Constraints` in the issued certificate indicate that this certificate is not for a CA. If you are signing multiple certificates, be sure to update the serial number before generating each certificate by using the `openssl rand -hex 16 > db/serial` command.
+
+```bash
+openssl ca -config subca.conf -in device.csr -out device.crt -extensions client_ext
+```
+
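+Optionally, inspect the issued device certificate and confirm that its subject CN matches your device ID and that it verifies against the chain (a quick check, assuming you are still in the subca directory):
+
+```bash
+# Show the subject, issuer, and validity period of the device certificate
+openssl x509 -in device.crt -noout -subject -issuer -dates
+
+# Verify the chain: root CA -> subordinate CA -> device certificate
+openssl verify -CAfile ../rootca/rootca.crt -untrusted subca.crt device.crt
+```
+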
+## Next Steps
+
+Go to [Testing Certificate Authentication](tutorial-x509-test-certificate.md) to determine if your certificate can authenticate your device to your IoT Hub.
iot-hub Tutorial X509 Prove Possession https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/tutorial-x509-prove-possession.md
+
+ Title: Tutorial - Prove Ownership of CA certificates in Azure IoT Hub | Microsoft Docs
+description: Tutorial - Prove that you own a CA certificate for Azure IoT Hub
+ Last updated : 02/26/2021
+#Customer intent: As a developer, I want to be able to use X.509 certificates to authenticate devices to an IoT hub. This step of the tutorial needs to show me how to prove that I own the certificate I uploaded to IoT Hub
++
+# Tutorial: Proving possession of a CA certificate
+
+After you upload your root certification authority (CA) certificate or subordinate CA certificate to your IoT hub, you must prove that you own the certificate:
+
+1. In the Azure portal, navigate to your IoT hub and select **Settings > Certificates**.
+
+2. Select **Add** to add a new CA certificate.
+
+3. Enter a display name in the **Certificate Name** field, and select the PEM certificate to add.
+
+4. Select **Save**. Your certificate is shown in the certificate list with a status of **Unverified**. This verification process will prove that you have possession of the certificate.
+
+5. Select the certificate to view the **Certificate Details** dialog.
+
+6. Select **Generate Verification Code** in the dialog.
+
+ :::image type="content" source="media/tutorial-x509-prove-possession/certificate-details.png" alt-text="{Certificate details dialog}":::
+
+7. Copy the verification code to the clipboard. You must set the verification code as the certificate subject. For example, if the verification code is 75B86466DA34D2B04C0C4C9557A119687ADAE7D4732BDDB3, add that as the subject of your certificate as shown in the next step.
+
+8. There are three ways to generate a verification certificate:
+
+ * If you are using the PowerShell script supplied by Microsoft, run `New-CACertsVerificationCert "75B86466DA34D2B04C0C4C9557A119687ADAE7D4732BDDB3"` to create a certificate named `VerifyCert4.cer`. For more information, see [Using Microsoft-supplied Scripts](tutorial-x509-scripts.md).
+
+ * If you are using the Bash script supplied by Microsoft, run `./certGen.sh create_verification_certificate "75B86466DA34D2B04C0C4C9557A119687ADAE7D4732BDDB3"` to create a certificate named `verification-code.cert.pem`. For more information, see [Using Microsoft-supplied Scripts](tutorial-x509-scripts.md).
+
+ * If you are using OpenSSL to generate your certificates, you must first generate a private key and a certificate signing request (CSR):
+
+ ```bash
+ $ openssl genpkey -out pop.key -algorithm RSA -pkeyopt rsa_keygen_bits:2048
+ $ openssl req -new -key pop.key -out pop.csr
+
+ --
+ Country Name (2 letter code) [XX]:.
+ State or Province Name (full name) []:.
+ Locality Name (eg, city) [Default City]:.
+ Organization Name (eg, company) [Default Company Ltd]:.
+ Organizational Unit Name (eg, section) []:.
+ Common Name (eg, your name or your server hostname) []:75B86466DA34D2B04C0C4C9557A119687ADAE7D4732BDDB3
+ Email Address []:
+
+ Please enter the following 'extra' attributes
+ to be sent with your certificate request
+ A challenge password []:
+ An optional company name []:
+
+ ```
+
+ Then, create a certificate using the root CA configuration file (shown below) or the subordinate CA configuration file and the CSR.
+
+ ```bash
+ openssl ca -config rootca.conf -in pop.csr -out pop.crt -extensions client_ext
+
+ ```
+
+ For more information, see [Using OpenSSL to Create Test Certificates](tutorial-x509-openssl.md).
+
+10. Select the new certificate in the **Certificate Details** view.
+
+11. After the certificate uploads, select **Verify**. The CA certificate status should change to **Verified**.
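+
+If you generated the verification certificate with OpenSSL, you can double-check that its subject common name matches the verification code you copied from the portal, for example:
+
+```bash
+openssl x509 -in pop.crt -noout -subject
+```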
iot-hub Tutorial X509 Scripts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/tutorial-x509-scripts.md
+
+ Title: Tutorial - Use Microsoft scripts to create x.509 test certificates for Azure IoT Hub | Microsoft Docs
+description: Tutorial - Use custom scripts to create CA and device certificates for Azure IoT Hub
+ Last updated : 02/26/2021
+#Customer intent: As a developer, I want to be able to use X.509 certificates to authenticate devices to an IoT hub. This step of the tutorial needs to introduce me to Microsoft scripts that I can use to generate test certificates.
++
+# Tutorial: Using Microsoft-supplied scripts to create test certificates
+
+Microsoft provides PowerShell and Bash scripts to help you understand how to create your own X.509 certificates and authenticate them to an IoT Hub. The scripts are located in [GitHub](https://github.com/Azure/azure-iot-sdk-c/tree/master/tools/CACertificates). They are provided for demonstration purposes only. Certificates created by them must not be used for production. The certificates contain hard-coded passwords ("1234") and expire after 30 days. For a production environment, you'll need to use your own best practices for certificate creation and lifetime management.
+
+## PowerShell scripts
+
+### Step 1 - Setup
+
+Get OpenSSL for Windows. See <https://www.openssl.org/docs/faq.html#MISC4> for places to download it or <https://www.openssl.org/source/> to build from source. Then run the preliminary scripts:
+
+1. Copy the scripts from [GitHub](https://github.com/Azure/azure-iot-sdk-c/tree/master/tools/CACertificates) into the local directory in which you want to work. All files will be created as children of this directory.
+
+1. Start PowerShell as an administrator.
+
+1. Change to the directory where you loaded the scripts.
+
+1. On the command line, set the environment variable `$ENV:OPENSSL_CONF` to the directory in which the openssl configuration file (openssl.cnf) is located.
+
+1. Run `Set-ExecutionPolicy -ExecutionPolicy Unrestricted` so that PowerShell can run the scripts.
+
+1. Run `. .\ca-certs.ps1`. This brings the functions of the script into the PowerShell global namespace.
+
+1. Run `Test-CACertsPrerequisites`. PowerShell uses the Windows Certificate Store to manage certificates. This command verifies that there won't be name collisions later with existing certificates and that OpenSSL is set up correctly.
+
+### Step 2 - Create certificates
+
+Run `New-CACertsCertChain [ecc|rsa]`. ECC is recommended for CA certificates but not required. This script updates your directory and Windows Certificate store with the following CA and intermediate certificates:
+
+* intermediate1.pem
+* intermediate2.pem
+* intermediate3.pem
+* RootCA.cer
+* RootCA.pem
+
+After running the script, add the new CA certificate (RootCA.pem) to your IoT Hub:
+
+1. Go to your IoT Hub and navigate to Certificates.
+
+1. Select **Add**.
+
+1. Enter a display name for the CA certificate.
+
+1. Upload the CA certificate.
+
+1. Select **Save**.
+
+### Step 3 - Prove possession
+
+Now that you've uploaded your CA certificate to your IoT Hub, you'll need to prove that you actually own it:
+
+1. Select the new CA certificate.
+
+1. Select **Generate Verification Code** in the **Certificate Details** dialog. For more information, see [Prove Possession of a CA certificate](tutorial-x509-prove-possession.md).
+
+1. Create a certificate that contains the verification code. For example, if the verification code is `"106A5SD242AF512B3498BD6098C4941E66R34H268DDB3288"`, run the following to create a new certificate in your working directory containing the subject `CN = 106A5SD242AF512B3498BD6098C4941E66R34H268DDB3288`. The script creates a certificate named `VerifyCert4.cer`.
+
+    `New-CACertsVerificationCert "106A5SD242AF512B3498BD6098C4941E66R34H268DDB3288"`
+
+1. Upload `VerifyCert4.cer` to your IoT Hub in the **Certificate Details** dialog.
+
+1. Select **Verify**.
+
+### Step 4 - Create a new device
+
+Create a device for your IoT Hub:
+
+1. In your IoT Hub, navigate to the **IoT Devices** section.
+
+1. Add a new device with ID `mydevice`.
+
+1. For authentication, choose **X.509 CA Signed**.
+
+1. Run `New-CACertsDevice mydevice` to create a new device certificate. This creates the following files in your working directory:
+
+ * `mydevice.pfx`
+ * `mydevice-all.pem`
+ * `mydevice-private.pem`
+ * `mydevice-public.pem`
+
+### Step 5 - Test your device certificate
+
+Go to [Testing Certificate Authentication](tutorial-x509-test-certificate.md) to determine if your device certificate can authenticate to your IoT Hub. You will need the PFX version of your certificate, `mydevice.pfx`.
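+
+If you want to inspect the PFX before testing, an OpenSSL check such as the following can help. This is a minimal sketch; `1234` is the demonstration password that the scripts hard-code, as noted at the start of this article.
+
+```bash
+# List the certificates inside the PFX without printing the private key.
+openssl pkcs12 -in mydevice.pfx -info -nokeys -passin pass:1234
+```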
+
+### Step 6 - Cleanup
+
+From the Start menu, open **Manage Computer Certificates** and navigate to **Certificates - Local Computer > Personal**. Remove the certificates issued by "Azure IoT CA TestOnly*". Similarly, remove the appropriate certificates from **Trusted Root Certification Authorities > Certificates** and **Intermediate Certification Authorities > Certificates**.
+
+## Bash Scripts
+
+### Step 1 - Setup
+
+1. Start Bash.
+
+1. Change to the directory in which you want to work. All files will be created in this directory.
+
+1. Copy `*.cnf` and `*.sh` to your working directory.
+
+### Step 2 - Create certificates
+
+1. Run `./certGen.sh create_root_and_intermediate`. This creates the following files in the **certs** directory:
+
+ * azure-iot-test-only.chain.ca.cert.pem
+ * azure-iot-test-only.intermediate.cert.pem
+ * azure-iot-test-only.root.ca.cert.pem
+
+1. Go to your IoT Hub and navigate to **Certificates**.
+
+1. Select **Add**.
+
+1. Enter a display name for the CA certificate.
+
+1. Upload only the CA certificate to your IoT Hub. The name of the certificate is `./certs/azure-iot-test-only.root.ca.cert.pem`. (A quick way to inspect this file before uploading is shown in the sketch after these steps.)
+
+1. Select **Save**.
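+
+Before you upload, you can inspect the root CA certificate with OpenSSL to confirm exactly what you are adding to the hub. This is a minimal sketch; the file path is the one produced by `./certGen.sh create_root_and_intermediate` above.
+
+```bash
+# Show the subject, issuer, and validity window of the test root CA certificate.
+openssl x509 -in ./certs/azure-iot-test-only.root.ca.cert.pem -noout -subject -issuer -dates
+```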
+
+### Step 3 - Prove possession
+
+1. Select the new CA certificate created in the preceding step.
+
+1. Select **Generate Verification Code** in the **Certificate Details** dialog. For more information, see [Prove Possession of a CA certificate](tutorial-x509-prove-possession.md).
+
+1. Create a certificate that contains the verification code. For example, if the verification code is `"106A5SD242AF512B3498BD6098C4941E66R34H268DDB3288"`, run the following to create a new certificate in your working directory named `verification-code.cert.pem` which contains the subject `CN = 106A5SD242AF512B3498BD6098C4941E66R34H268DDB3288`.
+
+ `./certGen.sh create_verification_certificate "106A5SD242AF512B3498BD6098C4941E66R34H268DDB3288"`
+
+1. Upload the certificate to your IoT hub in the **Certificate Details** dialog.
+
+1. Select **Verify**.
+
+### Step 4 - Create a new device
+
+Create a device for your IoT hub:
+
+1. In your IoT Hub, navigate to the **IoT Devices** section.
+
+1. Add a new device with ID `mydevice`.
+
+1. For authentication, choose **X.509 CA Signed**.
+
+1. Run `./certGen.sh create_device_certificate mydevice` to create a new device certificate. This creates two files, `new-device.cert.pem` and `new-device.cert.pfx`, in your working directory. (You can inspect the device certificate as shown in the sketch after this list.)
+
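+A quick OpenSSL check can confirm that the device certificate's common name matches the device ID and that it was issued by the test CA chain. This is a minimal sketch using the PEM file name produced by the script above.
+
+```bash
+# The subject CN should be the device ID (mydevice), and the issuer should be
+# the test CA created earlier by certGen.sh.
+openssl x509 -in new-device.cert.pem -noout -subject -issuer
+```
+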
+### Step 5 - Test your device certificate
+
+Go to [Testing Certificate Authentication](tutorial-x509-test-certificate.md) to determine if your device certificate can authenticate to your IoT Hub. You will need the PFX version of your certificate, `new-device.cert.pfx`.
+
+### Step 6 - Cleanup
+
+Because the bash script simply creates certificates in your working directory, just delete them when you are done testing.
+
+## Next Steps
+
+To test your certificate, go to [Testing Certificate Authentication](tutorial-x509-test-certificate.md) to determine if your certificate can authenticate your device to your IoT Hub.
iot-hub Tutorial X509 Self Sign https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/tutorial-x509-self-sign.md
+
+ Title: Tutorial - Use OpenSSL to create self signed certificates for Azure IoT Hub | Microsoft Docs
+description: Tutorial - Use OpenSSL to create self-signed X.509 certificates for Azure IoT Hub
+ Last updated : 02/26/2021
+#Customer intent: As a developer, I want to be able to use X.509 certificates to authenticate devices to an IoT hub. This step of the tutorial needs to show me how to use OpenSSL to self-sign device certificates.
++
+# Tutorial: Using OpenSSL to create self-signed certificates
+
+You can authenticate a device to your IoT hub by using two self-signed device certificates. This is sometimes called thumbprint authentication because, instead of uploading the certificates, you register their thumbprints (hash values computed over each certificate) with the IoT hub. The following steps show you how to create the two self-signed certificates.
+
+## Step 1 - Create a key for the first certificate
+
+```bash
+openssl genpkey -out device1.key -algorithm RSA -pkeyopt rsa_keygen_bits:2048
+```
+
+## Step 2 - Create a CSR for the first certificate
+
+Make sure that you specify the device ID when prompted.
+
+```bash
+openssl req -new -key device1.key -out device1.csr
+
+Country Name (2 letter code) [XX]:.
+State or Province Name (full name) []:.
+Locality Name (eg, city) [Default City]:.
+Organization Name (eg, company) [Default Company Ltd]:.
+Organizational Unit Name (eg, section) []:.
+Common Name (eg, your name or your server hostname) []:{your-device-id}
+Email Address []:
+
+```
+
+## Step 3 - Check the CSR
+
+```bash
+openssl req -text -in device1.csr -noout
+```
+
+## Step 4 - Self-sign certificate 1
+
+```bash
+openssl x509 -req -days 365 -in device1.csr -signkey device1.key -out device1.crt
+```
+
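+To see the value that you will register as this certificate's thumbprint, you can print its SHA-1 fingerprint with OpenSSL. This is a minimal sketch; `device1.crt` is the file produced in the previous step.
+
+```bash
+# Print the SHA-1 fingerprint of the self-signed certificate. Remove the colons
+# to get a plain hex string suitable for use as a thumbprint.
+openssl x509 -in device1.crt -noout -fingerprint -sha1
+```
+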
+## Step 5 - Create a key and CSR for certificate 2
+
+First create a private key for the second certificate, and then create its CSR. When prompted, specify the same device ID that you used for certificate 1.
+
+```bash
+# Create the key for certificate 2
+openssl genpkey -out device2.key -algorithm RSA -pkeyopt rsa_keygen_bits:2048
+
+# Create the CSR for certificate 2
+openssl req -new -key device2.key -out device2.csr
+
+Country Name (2 letter code) [XX]:.
+State or Province Name (full name) []:.
+Locality Name (eg, city) [Default City]:.
+Organization Name (eg, company) [Default Company Ltd]:.
+Organizational Unit Name (eg, section) []:.
+Common Name (eg, your name or your server hostname) []:{your-device-id}
+Email Address []:
+
+```
+
+## Step 6 - Self-sign certificate 2
+
+```bash
+openssl