Updates from: 01/22/2022 02:09:04
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Localization https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/localization.md
Previously updated : 03/08/2021 Last updated : 01/21/2022
The **LocalizedString** element contains the following attributes:
| Attribute | Required | Description |
| - | -- | -- |
-| ElementType | Yes | Possible values: [ClaimsProvider](#claimsprovider), [ClaimType](#claimtype), [ErrorMessage](#errormessage), [GetLocalizedStringsTransformationClaimType](#getlocalizedstringstransformationclaimtype), [FormatLocalizedStringTransformationClaimType](#formatlocalizedstringtransformationclaimtype), [Predicate](#predicate), [InputValidation](#inputvalidation), or [UxElement](#uxelement). |
-| ElementId | Yes | If **ElementType** is set to `ClaimType`, `Predicate`, or `InputValidation`, this element contains a reference to a claim type already defined in the ClaimsSchema section. |
+| ElementType | Yes | Possible values: [ClaimsProvider](#claimsprovider), [ClaimType](#claimtype), [ErrorMessage](#errormessage), [GetLocalizedStringsTransformationClaimType](#getlocalizedstringstransformationclaimtype), [FormatLocalizedStringTransformationClaimType](#formatlocalizedstringtransformationclaimtype), [Predicate](#predicate), [PredicateValidation](#predicatevalidation), or [UxElement](#uxelement). |
+| ElementId | Yes | If **ElementType** is set to `ClaimType`, `Predicate`, or `PredicateValidation`, this element contains a reference to a claim type already defined in the ClaimsSchema section. |
| StringId | Yes | If **ElementType** is set to `ClaimType`, this element contains a reference to an attribute of a claim type. Possible values: `DisplayName`, `AdminHelpText`, or `PatternHelpText`. The `DisplayName` value is used to set the claim display name. The `AdminHelpText` value is used to set the help text for the claim user. The `PatternHelpText` value is used to set the claim pattern help text. If **ElementType** is set to `UxElement`, this element contains a reference to an attribute of a user interface element. If **ElementType** is set to `ErrorMessage`, this element specifies the identifier of an error message. See [Localization string IDs](localization-string-ids.md) for a complete list of the `UxElement` identifiers.|

## ElementType
The **ElementType** references a claim type, a claim transformation, or a user interface element in the policy to be localized.

| Element | ElementType | ElementId | StringId |
| - | -- | -- | -- |
|Error message|`ErrorMessage`||The ID of the error message |
|Copies localized strings into claims|`GetLocalizedStringsTransformationClaimType`||The name of the output claim|
|Predicate user message|`Predicate`|The name of the predicate|The attribute of the predicate to be localized. Possible values: `HelpText`.|
-|Predicate group user message|`InputValidation`|The ID of the PredicateValidation element.|The ID of the PredicateGroup element. The predicate group must be a child of the predicate validation element as defined in the ElementId.|
+|Predicate group user message|`PredicateValidation`|The ID of the PredicateValidation element.|The ID of the PredicateGroup element. The predicate group must be a child of the predicate validation element as defined in the ElementId.|
|User interface elements |`UxElement` | | The ID of the user interface element to be localized.|
|[Display Control](display-controls.md) |`DisplayControl` |The ID of the display control. | The ID of the user interface element to be localized.|
The following example shows how to localize the help text of a predicate.
```xml
<LocalizedString ElementType="Predicate" ElementId="Uppercase" StringId="HelpText">an uppercase letter</LocalizedString>
```
-### InputValidation
+### PredicateValidation
-The InputValidation value is used to localize one of the [PredicateValidation](predicates.md) group error messages.
+The PredicateValidation value is used to localize one of the [PredicateValidation](predicates.md) group error messages.
```xml
<PredicateValidations>
```
The following example shows how to localize a predicate validation group help text.

```xml
-<LocalizedString ElementType="InputValidation" ElementId="CustomPassword" StringId="CharacterClasses">The password must have at least 3 of the following:</LocalizedString>
+<LocalizedString ElementType="PredicateValidation" ElementId="CustomPassword" StringId="CharacterClasses">The password must have at least 3 of the following:</LocalizedString>
```

### UxElement
active-directory Concept Mfa Howitworks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/concept-mfa-howitworks.md
Previously updated : 08/05/2021 Last updated : 01/07/2022
# How it works: Azure AD Multi-Factor Authentication
-Multi-factor authentication is a process where a user is prompted during the sign-in process for an additional form of identification, such as to enter a code on their cellphone or to provide a fingerprint scan.
+Multi-factor authentication is a process in which users are prompted during the sign-in process for an additional form of identification, such as a code on their cellphone or a fingerprint scan.
-If you only use a password to authenticate a user, it leaves an insecure vector for attack. If the password is weak or has been exposed elsewhere, is it really the user signing in with the username and password, or is it an attacker? When you require a second form of authentication, security is increased as this additional factor isn't something that's easy for an attacker to obtain or duplicate.
+If you only use a password to authenticate a user, it leaves an insecure vector for attack. If the password is weak or has been exposed elsewhere, an attacker could be using it to gain access. When you require a second form of authentication, security is increased because this additional factor isn't something that's easy for an attacker to obtain or duplicate.
-![Conceptual image of the different forms of multi-factor authentication](./media/concept-mfa-howitworks/methods.png)
+![Conceptual image of the various forms of multi-factor authentication.](./media/concept-mfa-howitworks/methods.png)
Azure AD Multi-Factor Authentication works by requiring two or more of the following authentication methods:

* Something you know, typically a password.
-* Something you have, such as a trusted device that is not easily duplicated, like a phone or hardware key.
+* Something you have, such as a trusted device that's not easily duplicated, like a phone or hardware key.
* Something you are - biometrics like a fingerprint or face scan.

Azure AD Multi-Factor Authentication can also further secure password reset. When users register themselves for Azure AD Multi-Factor Authentication, they can also register for self-service password reset in one step. Administrators can choose forms of secondary authentication and configure challenges for MFA based on configuration decisions.
-Apps and services don't need changes to use Azure AD Multi-Factor Authentication. The verification prompts are part of the Azure AD sign-in event, which automatically requests and processes the MFA challenge when required.
+You don't need to change apps and services to use Azure AD Multi-Factor Authentication. The verification prompts are part of the Azure AD sign-in, which automatically requests and processes the MFA challenge when needed.
-![Authentication methods in use at the sign-in screen](media/concept-authentication-methods/overview-login.png)
+![MFA sign-in screen.](media/concept-mfa-howitworks/sign-in-screen.png)
## Available verification methods
-When a user signs in to an application or service and receives an MFA prompt, they can choose from one of their registered forms of additional verification. Users can access [My Profile](https://myprofile.microsoft.com) to edit or add verification methods.
+When users sign in to an application or service and receive an MFA prompt, they can choose from one of their registered forms of additional verification. Users can access [My Profile](https://myprofile.microsoft.com) to edit or add verification methods.
The following additional forms of verification can be used with Azure AD Multi-Factor Authentication:

* Microsoft Authenticator app
-* OATH Hardware token (preview)
-* OATH Software token
+* Windows Hello for Business
+* FIDO2 security key
+* OATH hardware token (preview)
+* OATH software token
* SMS
* Voice call

## How to enable and use Azure AD Multi-Factor Authentication
-All Azure AD tenants can use [security defaults](../fundamentals/concept-fundamentals-security-defaults.md) to quickly enable Microsoft Authenticator for all users. Users and groups can be enabled for Azure AD Multi-Factor Authentication to prompt for additional verification during the sign-in event.
+You can use [security defaults](../fundamentals/concept-fundamentals-security-defaults.md) in Azure AD tenants to quickly enable Microsoft Authenticator for all users. You can enable Azure AD Multi-Factor Authentication to prompt users and groups for additional verification during sign-in.
-For more granular controls, [Conditional Access](../conditional-access/overview.md) policies can be used to define events or applications that require MFA. These policies can allow regular sign-in events when the user is on the corporate network or a registered device, but prompt for additional verification factors when remote or on a personal device.
+For more granular controls, you can use [Conditional Access](../conditional-access/overview.md) policies to define events or applications that require MFA. These policies can allow regular sign-in when the user is on the corporate network or a registered device but prompt for additional verification factors when the user is remote or on a personal device.
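For a concrete sense of what such a policy looks like, here's a hedged sketch of a Microsoft Graph `conditionalAccessPolicy` payload, expressed as a Python dict. The display name is made up, and the field names should be verified against the Graph documentation before use:

```python
# Illustrative sketch only (not an official sample): a Conditional Access
# policy that requires MFA for all users on all cloud apps, except when
# signing in from a trusted location.
policy = {
    "displayName": "Require MFA outside trusted locations",
    "state": "enabledForReportingButNotEnforced",  # report-only while testing
    "conditions": {
        "users": {"includeUsers": ["All"]},
        "applications": {"includeApplications": ["All"]},
        "locations": {
            "includeLocations": ["All"],
            "excludeLocations": ["AllTrusted"],  # skip MFA on trusted networks
        },
    },
    "grantControls": {"operator": "OR", "builtInControls": ["mfa"]},
}
```

A payload like this would be sent to the Graph `conditionalAccess/policies` endpoint; starting in report-only mode lets you observe the policy's effect before enforcing it.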
-![Overview diagram of how Conditional Access works to secure the sign-in process](media/tutorial-enable-azure-mfa/conditional-access-overview.png)
+![Diagram that shows how Conditional Access works to secure the sign-in process.](media/tutorial-enable-azure-mfa/conditional-access-overview.png)
## Next steps
active-directory Howto Mfa Mfasettings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-mfa-mfasettings.md
Previously updated : 08/12/2021 Last updated : 01/11/2022
# Configure Azure AD Multi-Factor Authentication settings
-To customize the end-user experience for Azure AD Multi-Factor Authentication, you can configure options for settings like the account lockout thresholds or fraud alerts and notifications. Some settings are directly in the Azure portal for Azure Active Directory (Azure AD), and some in a separate Azure AD Multi-Factor Authentication portal.
+To customize the end-user experience for Azure AD Multi-Factor Authentication, you can configure options for settings like account lockout thresholds or fraud alerts and notifications. Some settings are available directly in the Azure portal for Azure Active Directory (Azure AD), and some are in a separate Azure AD Multi-Factor Authentication portal.
The following Azure AD Multi-Factor Authentication settings are available in the Azure portal:

| Feature | Description |
| - | -- |
-| [Account lockout](#account-lockout) | Temporarily lock accounts from using Azure AD Multi-Factor Authentication if there are too many denied authentication attempts in a row. This feature only applies to users who enter a PIN to authenticate. (MFA Server) |
-| [Block/unblock users](#block-and-unblock-users) | Block specific users from being able to receive Azure AD Multi-Factor Authentication requests. Any authentication attempts for blocked users are automatically denied. Users remain blocked for 90 days from the time that they are blocked or they're manually unblocked. |
+| [Account lockout](#account-lockout) | Temporarily lock accounts from using Azure AD Multi-Factor Authentication if there are too many denied authentication attempts in a row. This feature applies only to users who enter a PIN to authenticate. (MFA Server) |
+| [Block/unblock users](#block-and-unblock-users) | Block specific users from being able to receive Azure AD Multi-Factor Authentication requests. Any authentication attempts for blocked users are automatically denied. Users remain blocked for 90 days from the time that they're blocked or until they're manually unblocked. |
| [Fraud alert](#fraud-alert) | Configure settings that allow users to report fraudulent verification requests. |
| [Notifications](#notifications) | Enable notifications of events from MFA Server. |
-| [OATH tokens](concept-authentication-oath-tokens.md) | Used in cloud-based Azure AD MFA environments to manage OATH tokens for users. |
+| [OATH tokens](concept-authentication-oath-tokens.md) | Used in cloud-based Azure AD Multi-Factor Authentication environments to manage OATH tokens for users. |
| [Phone call settings](#phone-call-settings) | Configure settings related to phone calls and greetings for cloud and on-premises environments. |
-| Providers | This will show any existing authentication providers that you may have associated with your account. New authentication providers may not be created as of September 1, 2018 |
+| Providers | This will show any existing authentication providers that you have associated with your account. Adding new providers is disabled as of September 1, 2018. |
![Azure portal - Azure AD Multi-Factor Authentication settings](./media/howto-mfa-mfasettings/multi-factor-authentication-settings-portal.png)

## Account lockout
-To prevent repeated MFA attempts as part of an attack, the account lockout settings let you specify how many failed attempts to allow before the account becomes locked out for a period of time. The account lockout settings are only applied when a pin code is entered for the MFA prompt.
+To prevent repeated MFA attempts as part of an attack, the account lockout settings let you specify how many failed attempts to allow before the account becomes locked out for a period of time. The account lockout settings are applied only when a PIN code is entered for the MFA prompt.
The following settings are available:
-* Number of MFA denials to trigger account lockout
+* Number of MFA denials that trigger account lockout
* Minutes until account lockout counter is reset
* Minutes until account is automatically unblocked
-To configure account lockout settings, complete the following settings:
+To configure account lockout settings, complete these steps:
1. Sign in to the [Azure portal](https://portal.azure.com) as an administrator.
-1. Browse to **Azure Active Directory** > **Security** > **MFA** > **Account lockout**.
-1. Enter the require values for your environment, then select **Save**.
+1. Go to **Azure Active Directory** > **Security** > **MFA** > **Account lockout**.
+1. Enter the values for your environment, and then select **Save**.
- ![Screenshot of the account lockout settings in the Azure portal](./media/howto-mfa-mfasettings/account-lockout-settings.png)
+ ![Screenshot that shows the account lockout settings in the Azure portal.](./media/howto-mfa-mfasettings/account-lockout-settings.png)
## Block and unblock users
-If a user's device has been lost or stolen, you can block Azure AD Multi-Factor Authentication attempts for the associated account. Any Azure AD Multi-Factor Authentication attempts for blocked users are automatically denied. Users remain blocked for 90 days from the time that they are blocked. We have published a video on [how to block and unblock users in your tenant](https://www.youtube.com/watch?v=WdeE1On4S1o) to show you how to do this.
+If a user's device is lost or stolen, you can block Azure AD Multi-Factor Authentication attempts for the associated account. Any Azure AD Multi-Factor Authentication attempts for blocked users are automatically denied. Users remain blocked for 90 days from the time that they're blocked. For a video that explains how to do this, see [how to block and unblock users in your tenant](https://www.youtube.com/watch?v=WdeE1On4S1o).
### Block a user
-To block a user, complete the following steps, or watch [this short video](https://www.youtube.com/watch?v=WdeE1On4S1o&feature=youtu.be)
+To block a user, complete the following steps.
+
+[Watch a short video that describes this process.](https://www.youtube.com/watch?v=WdeE1On4S1o&feature=youtu.be)
1. Browse to **Azure Active Directory** > **Security** > **MFA** > **Block/unblock users**.
1. Select **Add** to block a user.
-1. Enter the username for the blocked user as `username@domain.com`, then provide a comment in the *Reason* field.
-1. When ready, select **OK** to block the user.
+1. Enter the user name for the blocked user in the format `username@domain.com`, and then provide a comment in the **Reason** box.
+1. Select **OK** to block the user.
### Unblock a user

To unblock a user, complete the following steps:
-1. Browse to **Azure Active Directory** > **Security** > **MFA** > **Block/unblock users**.
-1. In the *Action* column next to the desired user, select **Unblock**.
-1. Enter a comment in the *Reason for unblocking* field.
-1. When ready, select **OK** to unblock the user.
+1. Go to **Azure Active Directory** > **Security** > **MFA** > **Block/unblock users**.
+1. In the **Action** column next to the user, select **Unblock**.
+1. Enter a comment in the **Reason for unblocking** box.
+1. Select **OK** to unblock the user.
## Fraud alert
-The fraud alert feature lets users report fraudulent attempts to access their resources. When an unknown and suspicious MFA prompt is received, users can report the fraud attempt using the Microsoft Authenticator app or through their phone.
+The fraud alert feature lets users report fraudulent attempts to access their resources. When an unknown and suspicious MFA prompt is received, users can report the fraud attempt by using the Microsoft Authenticator app or through their phone.
The following fraud alert configuration options are available:
-* **Automatically block users who report fraud**: If a user reports fraud, the Azure AD MFA authentication attempts for the user account are blocked for 90 days or until an administrator unblocks their account. An administrator can review sign-ins by using the sign-in report, and take appropriate action to prevent future fraud. An administrator can then [unblock](#unblock-a-user) the user's account.
-* **Code to report fraud during initial greeting**: When users receive a phone call to perform multi-factor authentication, they normally press **#** to confirm their sign-in. To report fraud, the user enters a code before pressing **#**. This code is **0** by default, but you can customize it.
+* **Automatically block users who report fraud**. If a user reports fraud, the Azure AD Multi-Factor Authentication attempts for the user account are blocked for 90 days or until an administrator unblocks the account. An administrator can review sign-ins by using the sign-in report, and take appropriate action to prevent future fraud. An administrator can then [unblock](#unblock-a-user) the user's account.
+* **Code to report fraud during initial greeting**. When users receive a phone call to perform multi-factor authentication, they normally press **#** to confirm their sign-in. To report fraud, the user enters a code before pressing **#**. This code is **0** by default, but you can customize it.
> [!NOTE]
> The default voice greetings from Microsoft instruct users to press **0#** to submit a fraud alert. If you want to use a code other than **0**, record and upload your own custom voice greetings with appropriate instructions for your users.

To enable and configure fraud alerts, complete the following steps:
-1. Browse to **Azure Active Directory** > **Security** > **MFA** > **Fraud alert**.
-1. Set the *Allow users to submit fraud alerts* setting to **On**.
-1. Configure the *Automatically block users who report fraud* or *Code to report fraud during initial greeting* setting as desired.
-1. When ready, select **Save**.
+1. Go to **Azure Active Directory** > **Security** > **MFA** > **Fraud alert**.
+1. Set **Allow users to submit fraud alerts** to **On**.
+1. Configure the **Automatically block users who report fraud** or **Code to report fraud during initial greeting** setting as needed.
+1. Select **Save**.
### View fraud reports

When a user reports fraud, the event shows up in the Sign-ins report (as a sign-in that was rejected by the user) and in the Audit logs.

-- To view fraud reports in the Sign-ins report, click **Azure Active Directory** > **Sign-ins** > **Authentication Details**. The fraud report is part of the standard Azure AD Sign-ins report and appears in the **Result Detail** as **MFA denied, Fraud Code Entered**.
+- To view fraud reports in the Sign-ins report, select **Azure Active Directory** > **Sign-in logs** > **Authentication Details**. The fraud report is part of the standard Azure AD Sign-ins report and appears in the **Result Detail** as **MFA denied, Fraud Code Entered**.
-- To view fraud reports in the Audit logs, click **Azure Active Directory** > **Audit Logs**. The fraud report appears under Activity type **Fraud reported - user is blocked for MFA** or **Fraud reported - no action taken** based on the tenant-level settings for fraud report.
+- To view fraud reports in the Audit logs, select **Azure Active Directory** > **Audit logs**. The fraud report appears under Activity type **Fraud reported - user is blocked for MFA** or **Fraud reported - no action taken** based on the tenant-level settings for fraud report.
## Notifications
-Email notifications can be configured when users report fraud alerts. These notifications are typically sent to identity administrators, as the user's account credentials are likely compromised. The following example shows what a fraud alert notification email looks like:
+You can configure Azure AD to send email notifications when users report fraud alerts. These notifications are typically sent to identity administrators, because the user's account credentials are likely compromised. The following example shows what a fraud alert notification email looks like:
-![Example fraud alert notification email](./media/howto-mfa-mfasettings/multi-factor-authentication-fraud-alert-email.png)
+![Screenshot that shows a fraud alert notification email.](./media/howto-mfa-mfasettings/multi-factor-authentication-fraud-alert-email.png)
-To configure fraud alert notifications, complete the following settings:
+To configure fraud alert notifications:
-1. Browse to **Azure Active Directory** > **Security** > **Multi-Factor Authentication** > **Notifications**.
-1. Enter the email address to add into the next box.
-1. To remove an existing email address, select the **...** option next to the desired email address, then select **Delete**.
-1. When ready, select **Save**.
+1. Go to **Azure Active Directory** > **Security** > **Multi-Factor Authentication** > **Notifications**.
+1. Enter the email address to send the notification to.
+1. To remove an existing email address, select **...** next to the email address, and then select **Delete**.
+1. Select **Save**.
## OATH tokens
-Azure AD supports the use of OATH-TOTP SHA-1 tokens that refresh codes every 30 or 60 seconds. Customers can purchase these tokens from the vendor of their choice.
+Azure AD supports the use of OATH TOTP SHA-1 tokens that refresh codes every 30 or 60 seconds. You can purchase these tokens from the vendor of your choice.
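These 30- or 60-second codes follow the TOTP algorithm (RFC 6238). As a rough illustration of what a token computes from its Base32 seed, here's a minimal sketch using only the Python standard library:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, at: float = None, digits: int = 6) -> str:
    """RFC 6238 TOTP with HMAC-SHA-1, the flavor described above."""
    # Pad the Base32 seed to a multiple of 8 characters before decoding.
    secret_b32 = secret_b32.upper()
    key = base64.b32decode(secret_b32 + "=" * (-len(secret_b32) % 8))
    counter = int((time.time() if at is None else at) // interval)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 10**digits
    return str(code).zfill(digits)

# RFC 6238 test vector: ASCII seed "12345678901234567890", T = 59 s, 8 digits.
seed = base64.b32encode(b"12345678901234567890").decode()
print(totp(seed, interval=30, at=59, digits=8))  # 94287082
```

Azure AD performs this computation on its side and compares the result with what the user reads off the token, which is why the seed and time interval in the uploaded CSV must match the hardware exactly.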
-OATH TOTP hardware tokens typically come with a secret key, or seed, pre-programmed in the token. These keys must be input into Azure AD as described in the following steps. Secret keys are limited to 128 characters, which may not be compatible with all tokens. The secret key can only contain the characters *a-z* or *A-Z* and digits *1-7*, and must be encoded in *Base32*.
+OATH TOTP hardware tokens typically come with a secret key, or seed, pre-programmed in the token. You need to input these keys into Azure AD as described in the following steps. Secret keys are limited to 128 characters, which might not be compatible with all tokens. The secret key can contain only the characters *a-z* or *A-Z* and digits *1-7*. It must be encoded in Base32.
Programmable OATH TOTP hardware tokens that can be reseeded can also be set up with Azure AD in the software token setup flow.
-OATH hardware tokens are supported as part of a public preview. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/)
+OATH hardware tokens are supported as part of a public preview. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms).
-![Uploading OATH tokens to the MFA OATH tokens blade](media/concept-authentication-methods/mfa-server-oath-tokens-azure-ad.png)
+![Screenshot that shows the OATH tokens section.](media/concept-authentication-methods/mfa-server-oath-tokens-azure-ad.png)
-Once tokens are acquired they must be uploaded in a comma-separated values (CSV) file format including the UPN, serial number, secret key, time interval, manufacturer, and model as shown in the following example:
+After you acquire tokens, you need to upload them in a comma-separated values (CSV) file format. Include the UPN, serial number, secret key, time interval, manufacturer, and model, as shown in this example:
```csv
upn,serial number,secret key,time interval,manufacturer,model
Helga@contoso.com,1234567,1234567abcdef1234567abcdef,60,Contoso,HardwareKey
```

> [!NOTE]
-> Make sure you include the header row in your CSV file.
+> Be sure to include the header row in your CSV file.
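Before uploading, it can help to sanity-check the file offline. The following is a hypothetical pre-flight check, not part of any Microsoft tooling; the character-set and length rules are taken from the limits described above:

```python
import csv
import io

EXPECTED_HEADER = ["upn", "serial number", "secret key", "time interval",
                   "manufacturer", "model"]
# Allowed secret-key characters per the limits described above
# (an assumption for this sketch, not an official validation rule).
ALLOWED = set("abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ1234567")

def validate_token_csv(text: str) -> list:
    """Return a list of problems found in a token CSV; empty means it looks OK."""
    rows = list(csv.reader(io.StringIO(text)))
    if not rows or [h.strip().lower() for h in rows[0]] != EXPECTED_HEADER:
        return ["missing or malformed header row"]
    errors = []
    for lineno, row in enumerate(rows[1:], start=2):
        if len(row) != 6:
            errors.append(f"line {lineno}: expected 6 fields, got {len(row)}")
            continue
        secret, interval = row[2], row[3]
        if not (0 < len(secret) <= 128) or not set(secret) <= ALLOWED:
            errors.append(f"line {lineno}: invalid secret key")
        if interval not in ("30", "60"):
            errors.append(f"line {lineno}: time interval must be 30 or 60")
    return errors
```

Running this against the sample row shown earlier returns an empty list; a file with a missing header row is flagged immediately.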
-Once properly formatted as a CSV file, an administrator can then sign in to the Azure portal, navigate to **Azure Active Directory > Security > MFA > OATH tokens**, and upload the resulting CSV file.
+An administrator can sign in to the Azure portal, go to **Azure Active Directory > Security > MFA > OATH tokens**, and upload the CSV file.
-Depending on the size of the CSV file, it may take a few minutes to process. Select the **Refresh** button to get the current status. If there are any errors in the file, you can download a CSV file that lists any errors for you to resolve. The field names in the downloaded CSV file are different than the uploaded version.
+Depending on the size of the CSV file, it might take a few minutes to process. Select **Refresh** to get the status. If there are any errors in the file, you can download a CSV file that lists them. The field names in the downloaded CSV file are different from those in the uploaded version.
-Once any errors have been addressed, the administrator then can activate each key by selecting **Activate** for the token and entering the OTP displayed on the token.
+After any errors are addressed, the administrator can activate each key by selecting **Activate** for the token and entering the OTP displayed in the token.
-Users may have a combination of up to five OATH hardware tokens or authenticator applications, such as the Microsoft Authenticator app, configured for use at any time.
+Users can have a combination of up to five OATH hardware tokens or authenticator applications, such as the Microsoft Authenticator app, configured for use at any time.
## Phone call settings
-If users receive phone calls for MFA prompts, you can configure their experience, such as caller ID or voice greeting they hear.
+If users receive phone calls for MFA prompts, you can configure their experience, such as caller ID or the voice greeting they hear.
-In the United States, if you haven't configured MFA Caller ID, voice calls from Microsoft come from the following number. If using spam filters, make sure to exclude this number:
+In the United States, if you haven't configured MFA caller ID, voice calls from Microsoft come from the following number. Users with spam filters should exclude this number.
-* *+1 (855) 330 8653*
+* *+1 (855) 330-8653*
> [!NOTE]
-> When Azure AD Multi-Factor Authentication calls are placed through the public telephone network, sometimes the calls are routed through a carrier that doesn't support caller ID. Because of this, caller ID isn't guaranteed, even though Azure AD Multi-Factor Authentication always sends it. This applies both to phone calls and to text messages provided by Azure AD Multi-Factor Authentication. If you need to validate that a text message is from Azure AD Multi-Factor Authentication, see [What SMS short codes are used for sending messages?](multi-factor-authentication-faq.yml#what-sms-short-codes-are-used-for-sending-sms-messages-to-my-users-)
+> When Azure AD Multi-Factor Authentication calls are placed through the public telephone network, sometimes the calls are routed through a carrier that doesn't support caller ID. Because of this, caller ID isn't guaranteed, even though Azure AD Multi-Factor Authentication always sends it. This applies both to phone calls and text messages provided by Azure AD Multi-Factor Authentication. If you need to validate that a text message is from Azure AD Multi-Factor Authentication, see [What SMS short codes are used for sending messages?](multi-factor-authentication-faq.yml#what-sms-short-codes-are-used-for-sending-sms-messages-to-my-users-).
To configure your own caller ID number, complete the following steps:
-1. Browse to **Azure Active Directory** > **Security** > **MFA** > **Phone call settings**.
-1. Set the **MFA caller ID number** to the number you wish users to see on their phone. Only US-based numbers are allowed.
-1. When ready, select **Save**.
+1. Go to **Azure Active Directory** > **Security** > **MFA** > **Phone call settings**.
+1. Set the **MFA caller ID number** to the number you want users to see on their phones. Only US-based numbers are allowed.
+1. Select **Save**.
### Custom voice messages
-You can use your own recordings or greetings for Azure AD Multi-Factor Authentication with the custom voice messages feature. These messages can be used in addition to or to replace the default Microsoft recordings.
+You can use your own recordings or greetings for Azure AD Multi-Factor Authentication. These messages can be used in addition to the default Microsoft recordings or to replace them.
Before you begin, be aware of the following restrictions:
-* The supported file formats are *.wav* and *.mp3*.
+* The supported file formats are .wav and .mp3.
* The file size limit is 1 MB.
-* Authentication messages should be shorter than 20 seconds. Messages that are longer than 20 seconds can cause the verification to fail. The user might not respond before the message finishes and the verification times out.
+* Authentication messages should be shorter than 20 seconds. Messages that are longer than 20 seconds can cause the verification to fail. If the user doesn't respond before the message finishes, the verification times out.
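The format and size restrictions above can be checked before upload. This is a small illustrative helper, not Microsoft tooling; it assumes the 1 MB limit is decimal, and it can't verify the 20-second rule, since that would require decoding the audio:

```python
from pathlib import Path

ALLOWED_SUFFIXES = {".wav", ".mp3"}
MAX_BYTES = 1_000_000  # assuming the 1 MB limit is decimal, not 1 MiB

def check_greeting(path: str) -> list:
    """Flag obvious problems with a custom greeting file before upload.

    Duration isn't checked: enforcing the under-20-second rule would
    require decoding the audio, which this sketch doesn't do.
    """
    p = Path(path)
    problems = []
    if p.suffix.lower() not in ALLOWED_SUFFIXES:
        problems.append(f"unsupported format: {p.suffix or '(none)'}")
    if p.stat().st_size > MAX_BYTES:
        problems.append("file is larger than the 1 MB limit")
    return problems
```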
### Custom message language behavior

When a custom voice message is played to the user, the language of the message depends on the following factors:
-* The language of the current user.
+* The language of the user.
* The language detected by the user's browser.
- * Other authentication scenarios may behave differently.
+ * Other authentication scenarios might behave differently.
* The language of any available custom messages.
- * This language is chosen by the administrator, when a custom message is added.
+ * This language is chosen by the administrator when a custom message is added.
-For example, if there is only one custom message, with a language of German:
+For example, if there's only one custom message, and it's in German:
* A user who authenticates in the German language will hear the custom German message.
* A user who authenticates in English will hear the standard English message.

### Custom voice message defaults
-The following sample scripts can be used to create your own custom messages. These phrases are the defaults if you don't configure your own custom messages:
+You can use the following sample scripts to create your own custom messages. These phrases are the defaults if you don't configure your own custom messages.
| Message name | Script |
| --- | --- |
| Authentication successful | Your sign-in was successfully verified. Goodbye. |
-| Extension prompt | Thank you for using Microsoft's sign-in verification system. Please press pound key to continue. |
-| Fraud Confirmation | A fraud alert has been submitted. To unblock your account, please contact your company's IT help desk. |
-| Fraud greeting (Standard) | Thank you for using Microsoft's sign-in verification system. Please press the pound key to finish your verification. If you did not initiate this verification, someone may be trying to access your account. Please press zero pound to submit a fraud alert. This will notify your company's IT team and block further verification attempts. |
-| Fraud reported A fraud alert has been submitted. | To unblock your account, please contact your company's IT help desk. |
-| Activation | Thank you for using the Microsoft's sign-in verification system. Please press the pound key to finish your verification. |
+| Extension prompt | Thank you for using Microsoft's sign-in verification system. Please press the pound key to continue. |
+| Fraud confirmation | A fraud alert has been submitted. To unblock your account, please contact your company's IT help desk. |
+| Fraud greeting (standard) | Thank you for using Microsoft's sign-in verification system. Please press the pound key to finish your verification. If you did not initiate this verification, someone may be trying to access your account. Please press zero pound to submit a fraud alert. This will notify your company's IT team and block further verification attempts. |
+| Fraud reported | A fraud alert has been submitted. To unblock your account, please contact your company's IT help desk. |
+| Activation | Thank you for using the Microsoft sign-in verification system. Please press the pound key to finish your verification. |
| Authentication denied retry | Verification denied. |
-| Retry (Standard) | Thank you for using the Microsoft's sign-in verification system. Please press the pound key to finish your verification. |
-| Greeting (Standard) | Thank you for using the Microsoft's sign-in verification system. Please press the pound key to finish your verification. |
+| Retry (standard) | Thank you for using the Microsoft sign-in verification system. Please press the pound key to finish your verification. |
+| Greeting (standard) | Thank you for using the Microsoft sign-in verification system. Please press the pound key to finish your verification. |
| Greeting (PIN) | Thank you for using Microsoft's sign-in verification system. Please enter your PIN followed by the pound key to finish your verification. |
| Fraud greeting (PIN) | Thank you for using Microsoft's sign-in verification system. Please enter your PIN followed by the pound key to finish your verification. If you did not initiate this verification, someone may be trying to access your account. Please press zero pound to submit a fraud alert. This will notify your company's IT team and block further verification attempts. |
-| Retry(PIN) | Thank you for using Microsoft's sign-in verification system. Please enter your PIN followed by the pound key to finish your verification. |
+| Retry (PIN) | Thank you for using Microsoft's sign-in verification system. Please enter your PIN followed by the pound key to finish your verification. |
| Extension prompt after digits | If already at this extension, press the pound key to continue. |
| Authentication denied | I'm sorry, we cannot sign you in at this time. Please try again later. |
-| Activation greeting (Standard) | Thank you for using the Microsoft's sign-in verification system. Please press the pound key to finish your verification. |
-| Activation retry (Standard) | Thank you for using the Microsoft's sign-in verification system. Please press the pound key to finish your verification. |
+| Activation greeting (standard) | Thank you for using the Microsoft sign-in verification system. Please press the pound key to finish your verification. |
+| Activation retry (standard) | Thank you for using the Microsoft sign-in verification system. Please press the pound key to finish your verification. |
| Activation greeting (PIN) | Thank you for using Microsoft's sign-in verification system. Please enter your PIN followed by the pound key to finish your verification. |
-| Extension prompt before digits | Thank you for using Microsoft's sign-in verification system. Please transfer this call to extension … |
+| Extension prompt before digits | Thank you for using Microsoft's sign-in verification system. Please transfer this call to extension \<extension>. |
### Set up a custom message

To use your own custom messages, complete the following steps:
-1. Browse to **Azure Active Directory** > **Security** > **MFA** > **Phone call settings**.
+1. Go to **Azure Active Directory** > **Security** > **MFA** > **Phone call settings**.
1. Select **Add greeting**.
-1. Choose the **Type** of greeting, such as *Greeting (standard)* or *Authentication successful*.
-1. Select the **Language**, based on the previous section on [custom message language behavior](#custom-message-language-behavior).
-1. Browse for and select an *.mp3* or *.wav* sound file to upload.
-1. When ready, select **Add**, then **Save**.
+1. Choose the **Type** of greeting, such as **Greeting (standard)** or **Authentication successful**.
+1. Select the **Language**. See the previous section on [custom message language behavior](#custom-message-language-behavior).
+1. Browse for and select an .mp3 or .wav sound file to upload.
+1. Select **Add** and then **Save**.
## MFA service settings
-Settings for app passwords, trusted IPs, verification options, and remember multi-factor authentication for Azure AD Multi-Factor Authentication can be found in service settings. This is more of a legacy portal, and isn't part of the regular Azure AD portal.
+Settings for app passwords, trusted IPs, verification options, and remembering multi-factor authentication on trusted devices are available in the service settings. This is a legacy portal. It isn't part of the regular Azure AD portal.
-Service settings can be accessed from the Azure portal by browsing to **Azure Active Directory** > **Security** > **MFA** > **Getting started** > **Configure** > **Additional cloud-based MFA settings**. A new window or tab opens with additional *service settings* options.
+You can access service settings from the Azure portal by going to **Azure Active Directory** > **Security** > **MFA** > **Getting started** > **Configure** > **Additional cloud-based MFA settings**. A window or tab opens with additional service settings options.
-## Trusted IPs
+### Trusted IPs
-The _Trusted IPs_ feature of Azure AD Multi-Factor Authentication bypasses multi-factor authentication prompts for users who sign in from a defined IP address range. You can set trusted IP ranges for your on-premises environments so when users are in one of those locations, there's no Azure AD Multi-Factor Authentication prompt. The _Trusted IPs_ feature of Azure AD Multi-Factor Authentication requires Azure AD Premium P1 edition.
+The trusted IPs feature of Azure AD Multi-Factor Authentication bypasses multi-factor authentication prompts for users who sign in from a defined IP address range. You can set trusted IP ranges for your on-premises environments. When users are in one of these locations, there's no Azure AD Multi-Factor Authentication prompt. The trusted IPs feature requires Azure AD Premium P1 edition.
> [!NOTE]
-> The trusted IPs can include private IP ranges only when you use MFA Server. For cloud-based Azure AD Multi-Factor Authentication, you can only use public IP address ranges.
+> The trusted IPs can include private IP ranges only when you use MFA Server. For cloud-based Azure AD Multi-Factor Authentication, you can use only public IP address ranges.
>
-> IPv6 ranges are only supported in the [Named location (preview)](../conditional-access/location-condition.md) interface.
+> IPv6 ranges are supported only in the [Named locations (preview)](../conditional-access/location-condition.md) interface.
-If your organization deploys the NPS extension to provide MFA to on-premises applications note the source IP address will always appear to be the NPS server the authentication attempt flows through.
+If your organization uses the NPS extension to provide MFA to on-premises applications, the source IP address will always appear to be the NPS server that the authentication attempt flows through.
| Azure AD tenant type | Trusted IP feature options |
|: |: |
| Managed |**Specific range of IP addresses**: Administrators specify a range of IP addresses that can bypass multi-factor authentication for users who sign in from the company intranet. A maximum of 50 trusted IP ranges can be configured.|
-| Federated |**All Federated Users**: All federated users who sign in from inside of the organization can bypass multi-factor authentication. The users bypass verification by using a claim that is issued by Active Directory Federation Services (AD FS).<br/>**Specific range of IP addresses**: Administrators specify a range of IP addresses that can bypass multi-factor authentication for users who sign in from the company intranet. |
+| Federated |**All Federated Users**: All federated users who sign in from inside the organization can bypass multi-factor authentication. Users bypass verification by using a claim that's issued by Active Directory Federation Services (AD FS).<br/>**Specific range of IP addresses**: Administrators specify a range of IP addresses that can bypass multi-factor authentication for users who sign in from the company intranet. |
-Trusted IP bypass works only from inside of the company intranet. If you select the **All Federated Users** option and a user signs in from outside the company intranet, the user has to authenticate by using multi-factor authentication. The process is the same even if the user presents an AD FS claim.
+Trusted IP bypass works only from inside the company intranet. If you select the **All Federated Users** option and a user signs in from outside the company intranet, the user has to authenticate by using multi-factor authentication. The process is the same even if the user presents an AD FS claim.
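The bypass decision described above amounts to a membership test of the client IP against the configured trusted ranges. A minimal illustrative sketch using Python's standard `ipaddress` module (the range values are placeholders, not a real configuration):

```python
import ipaddress

# Hypothetical trusted ranges an administrator configured (up to 50 are allowed).
TRUSTED_RANGES = [
    ipaddress.ip_network(cidr)
    for cidr in ("203.0.113.0/24", "198.51.100.7/32")
]

def mfa_prompt_required(client_ip):
    """Return False when the sign-in comes from a trusted range, True otherwise."""
    ip = ipaddress.ip_address(client_ip)
    return not any(ip in net for net in TRUSTED_RANGES)
```

A sign-in from `203.0.113.42` would skip the prompt in this sketch, while one from an address outside both ranges would not.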
-### End-user experience inside of corpnet
+#### User experience inside the corporate network
-When the trusted IPs feature is disabled, multi-factor authentication is required for browser flows. App passwords are required for older rich client applications.
+When the trusted IPs feature is disabled, multi-factor authentication is required for browser flows. App passwords are required for older rich-client applications.
-When trusted IPs are used, multi-factor authentication isn't required for browser flows. App passwords aren't required for older rich client applications, provided that the user hasn't created an app password. After an app password is in use, the password remains required.
+When trusted IPs are used, multi-factor authentication isn't required for browser flows. App passwords aren't required for older rich-client applications if the user hasn't created an app password. After an app password is in use, the password is required.
-### End-user experience outside corpnet
+#### User experience outside the corporate network
-Regardless of whether trusted IP are defined, multi-factor authentication is required for browser flows. App passwords are required for older rich client applications.
+Regardless of whether trusted IPs are defined, multi-factor authentication is required for browser flows. App passwords are required for older rich-client applications.
-### Enable named locations by using Conditional Access
+#### Enable named locations by using Conditional Access
-You can use Conditional Access rules to define named locations using the following steps:
+You can use Conditional Access rules to define named locations by using the following steps:
-1. In the Azure portal, search for and select **Azure Active Directory**, then browse to **Security** > **Conditional Access** > **Named locations**.
+1. In the Azure portal, search for and select **Azure Active Directory**, and then go to **Security** > **Conditional Access** > **Named locations**.
1. Select **New location**.
1. Enter a name for the location.
1. Select **Mark as trusted location**.
-1. Enter the IP Range in CIDR notation for your environment, such as *40.77.182.32/27*.
+1. Enter the IP range for your environment in CIDR notation. For example, *40.77.182.32/27*.
1. Select **Create**.
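These portal steps can also be scripted through the Microsoft Graph named-locations API (`POST /identity/conditionalAccess/namedLocations` with an `ipNamedLocation` body; verify the resource shape against the current Graph reference before relying on it). The sketch below only builds the request body — actually sending it would additionally require an access token:

```python
import json

def trusted_named_location(display_name, cidrs):
    """Build a request body for POST /identity/conditionalAccess/namedLocations."""
    return {
        "@odata.type": "#microsoft.graph.ipNamedLocation",
        "displayName": display_name,
        "isTrusted": True,  # corresponds to "Mark as trusted location"
        "ipRanges": [
            {"@odata.type": "#microsoft.graph.iPv4CidrRange", "cidrAddress": c}
            for c in cidrs
        ],
    }

body = trusted_named_location("Head office", ["40.77.182.32/27"])
print(json.dumps(body, indent=2))
```

The display name and CIDR value here are placeholders; substitute your own environment's values.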
-### Enable the Trusted IPs feature by using Conditional Access
+#### Enable the trusted IPs feature by using Conditional Access
-To enable trusted IPs using Conditional Access policies, complete the following steps:
+To enable trusted IPs by using Conditional Access policies, complete the following steps:
-1. In the Azure portal, search for and select **Azure Active Directory**, then browse to **Security** > **Conditional Access** > **Named locations**.
+1. In the Azure portal, search for and select **Azure Active Directory**, and then go to **Security** > **Conditional Access** > **Named locations**.
1. Select **Configure MFA trusted IPs**.
-1. On the **Service Settings** page, under **Trusted IPs**, choose from any of the following two options:
+1. On the **Service Settings** page, under **Trusted IPs**, choose one of these options:
- * **For requests from federated users originating from my intranet**: To choose this option, select the check box. All federated users who sign in from the corporate network bypass multi-factor authentication by using a claim that is issued by AD FS. Ensure that AD FS has a rule to add the intranet claim to the appropriate traffic. If the rule does not exist, create the following rule in AD FS:
+ * **For requests from federated users originating from my intranet**: To choose this option, select the checkbox. All federated users who sign in from the corporate network bypass multi-factor authentication by using a claim that's issued by AD FS. Ensure that AD FS has a rule to add the intranet claim to the appropriate traffic. If the rule doesn't exist, create the following rule in AD FS:
`c:[Type== "http://schemas.microsoft.com/ws/2012/01/insidecorporatenetwork"] => issue(claim = c);`
- * **For requests from a specific range of public IPs**: To choose this option, enter the IP addresses in the text box by using CIDR notation.
- * For IP addresses that are in the range xxx.xxx.xxx.1 through xxx.xxx.xxx.254, use notation like **xxx.xxx.xxx.0/24**.
- * For a single IP address, use notation like **xxx.xxx.xxx.xxx/32**.
+ * **For requests from a specific range of public IPs**: To choose this option, enter the IP addresses in the text box, in CIDR notation.
+ * For IP addresses that are in the range *xxx.xxx.xxx*.1 through *xxx.xxx.xxx*.254, use notation like ***xxx.xxx.xxx*.0/24**.
+ * For a single IP address, use notation like ***xxx.xxx.xxx.xxx*/32**.
    * Enter up to 50 IP address ranges. Users who sign in from these IP addresses bypass multi-factor authentication.
1. Select **Save**.
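The CIDR guidance above can be checked with Python's standard `ipaddress` module: a `/24` covers the whole last octet (including `.1` through `.254`), while a `/32` matches exactly one address:

```python
import ipaddress

# A /24 spans all 256 addresses of the last octet, which includes .1 through .254.
block = ipaddress.ip_network("192.0.2.0/24")
assert ipaddress.ip_address("192.0.2.1") in block
assert ipaddress.ip_address("192.0.2.254") in block
assert block.num_addresses == 256

# A /32 matches exactly one address.
single = ipaddress.ip_network("192.0.2.7/32")
assert single.num_addresses == 1
assert ipaddress.ip_address("192.0.2.7") in single
```

The `192.0.2.0/24` range used here is a documentation-only example block, not an address you should enter in your own configuration.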
-### Enable the Trusted IPs feature by using service settings
+#### Enable the trusted IPs feature by using service settings
-If you don't want to use Conditional Access policies to enable trusted IPs, you can configure the *service settings* for Azure AD Multi-Factor Authentication using the following steps:
+If you don't want to use Conditional Access policies to enable trusted IPs, you can configure the service settings for Azure AD Multi-Factor Authentication by using the following steps:
-1. In the Azure portal, search for and select **Azure Active Directory**, then choose **Users**.
-1. Select **Multi-Factor Authentication**.
-1. Under Multi-Factor Authentication, select **service settings**.
-1. On the **Service Settings** page, under **Trusted IPs**, choose one (or both) of the following two options:
+1. In the Azure portal, search for and select **Azure Active Directory**, and then select **Users**.
+1. Select **Per-user MFA**.
+1. Under **multi-factor authentication** at the top of the page, select **service settings**.
+1. On the **service settings** page, under **Trusted IPs**, choose one or both of the following options:
- * **For requests from federated users on my intranet**: To choose this option, select the check box. All federated users who sign in from the corporate network bypass multi-factor authentication by using a claim that is issued by AD FS. Ensure that AD FS has a rule to add the intranet claim to the appropriate traffic. If the rule does not exist, create the following rule in AD FS:
+ * **For requests from federated users on my intranet**: To choose this option, select the checkbox. All federated users who sign in from the corporate network bypass multi-factor authentication by using a claim that's issued by AD FS. Ensure that AD FS has a rule to add the intranet claim to the appropriate traffic. If the rule doesn't exist, create the following rule in AD FS:
`c:[Type== "http://schemas.microsoft.com/ws/2012/01/insidecorporatenetwork"] => issue(claim = c);`
- * **For requests from a specified range of IP address subnets**: To choose this option, enter the IP addresses in the text box by using CIDR notation.
- * For IP addresses that are in the range xxx.xxx.xxx.1 through xxx.xxx.xxx.254, use notation like **xxx.xxx.xxx.0/24**.
- * For a single IP address, use notation like **xxx.xxx.xxx.xxx/32**.
+ * **For requests from a specified range of IP address subnets**: To choose this option, enter the IP addresses in the text box, in CIDR notation.
+ * For IP addresses that are in the range *xxx.xxx.xxx*.1 through *xxx.xxx.xxx*.254, use notation like ***xxx.xxx.xxx*.0/24**.
+ * For a single IP address, use notation like ***xxx.xxx.xxx.xxx*/32**.
    * Enter up to 50 IP address ranges. Users who sign in from these IP addresses bypass multi-factor authentication.
1. Select **Save**.
-## Verification methods
+### Verification methods
-You can choose the verification methods that are available for your users in the service settings portal. When your users enroll their accounts for Azure AD Multi-Factor Authentication, they choose their preferred verification method from the options that you have enabled. Guidance for the user enrollment process is provided in [Set up my account for multi-factor authentication](https://support.microsoft.com/account-billing/how-to-use-the-microsoft-authenticator-app-9783c865-0308-42fb-a519-8cf666fe0acc).
+You can choose the verification methods that are available for your users in the service settings portal. When your users enroll their accounts for Azure AD Multi-Factor Authentication, they choose their preferred verification method from the options that you've enabled. Guidance for the user enrollment process is provided in [Set up my account for multi-factor authentication](https://support.microsoft.com/account-billing/how-to-use-the-microsoft-authenticator-app-9783c865-0308-42fb-a519-8cf666fe0acc).
The following verification methods are available:

| Method | Description |
|: |: |
-| Call to phone |Places an automated voice call. The user answers the call and presses # in the phone keypad to authenticate. The phone number is not synchronized to on-premises Active Directory. |
+| Call to phone |Places an automated voice call. The user answers the call and presses # on the phone to authenticate. The phone number isn't synchronized to on-premises Active Directory. |
| Text message to phone |Sends a text message that contains a verification code. The user is prompted to enter the verification code into the sign-in interface. This process is called one-way SMS. Two-way SMS means that the user must text back a particular code. Two-way SMS is deprecated and not supported after November 14, 2018. Administrators should enable another method for users who previously used two-way SMS.|
-| Notification through mobile app |Sends a push notification to your phone or registered device. The user views the notification and selects **Verify** to complete verification. The Microsoft Authenticator app is available for [Windows Phone](https://www.microsoft.com/p/microsoft-authenticator/9nblgggzmcj6), [Android](https://go.microsoft.com/fwlink/?Linkid=825072), and [iOS](https://go.microsoft.com/fwlink/?Linkid=825073). |
+| Notification through mobile app |Sends a push notification to the user's phone or registered device. The user views the notification and selects **Verify** to complete verification. The Microsoft Authenticator app is available for [Windows Phone](https://www.microsoft.com/p/microsoft-authenticator/9nblgggzmcj6), [Android](https://go.microsoft.com/fwlink/?Linkid=825072), and [iOS](https://go.microsoft.com/fwlink/?Linkid=825073). |
| Verification code from mobile app or hardware token |The Microsoft Authenticator app generates a new OATH verification code every 30 seconds. The user enters the verification code into the sign-in interface. The Microsoft Authenticator app is available for [Windows Phone](https://www.microsoft.com/p/microsoft-authenticator/9nblgggzmcj6), [Android](https://go.microsoft.com/fwlink/?Linkid=825072), and [iOS](https://go.microsoft.com/fwlink/?Linkid=825073). |
-For more information, see [What authentication and verification methods are available in Azure AD?](concept-authentication-methods.md)
+For more information, see [What authentication and verification methods are available in Azure AD?](concept-authentication-methods.md).
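The 30-second verification codes mentioned above follow the TOTP algorithm (RFC 6238): an HMAC-SHA1 over the 30-second time counter, dynamically truncated to a short decimal code. This is a minimal illustration of that algorithm, not the Authenticator app's actual implementation:

```python
import hmac
import struct

def totp(secret, unix_time, step=30, digits=6):
    """RFC 6238 TOTP: HMAC-SHA1 over the time-step counter, dynamically truncated."""
    counter = struct.pack(">Q", unix_time // step)
    digest = hmac.new(secret, counter, "sha1").digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector (SHA1): secret "12345678901234567890", T=59
print(totp(b"12345678901234567890", 59, digits=8))  # prints "94287082"
```

Because the code depends only on the shared secret and the current time step, the server can compute the same value and compare, with no network round trip to the device.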
-### Enable and disable verification methods
+#### Enable and disable verification methods
To enable or disable verification methods, complete the following steps:
-1. In the Azure portal, search for and select **Azure Active Directory**, then choose **Users**.
-1. Select **Multi-Factor Authentication**.
-1. Under Multi-Factor Authentication, select **service settings**.
-1. On the **Service Settings** page, under **verification options**, select/unselect the methods to provide to your users.
-1. Click **Save**.
+1. In the Azure portal, search for and select **Azure Active Directory**, and then select **Users**.
+1. Select **Per-user MFA**.
+1. Under **multi-factor authentication** at the top of the page, select **service settings**.
+1. On the **service settings** page, under **verification options**, select or clear the appropriate checkboxes.
+1. Select **Save**.
-## Remember Multi-Factor Authentication
+### Remember multi-factor authentication
-The _remember Multi-Factor Authentication_ feature lets users bypass subsequent verifications for a specified number of days, after they've successfully signed-in to a device by using Multi-Factor Authentication. To enhance usability and minimize the number of times a user has to perform MFA on the same device, select a duration of 90 days or more.
+The **remember multi-factor authentication** feature lets users bypass subsequent verifications for a specified number of days after they've successfully signed in to a device by using MFA. To enhance usability and minimize the number of times a user has to perform MFA on a given device, select a duration of 90 days or more.
> [!IMPORTANT]
-> If an account or device is compromised, remembering Multi-Factor Authentication for trusted devices can affect security. If a corporate account becomes compromised or a trusted device is lost or stolen, you should [Revoke MFA Sessions](howto-mfa-userdevicesettings.md).
+> If an account or device is compromised, remembering MFA for trusted devices can affect security. If a corporate account becomes compromised or a trusted device is lost or stolen, you should [Revoke MFA Sessions](howto-mfa-userdevicesettings.md).
>
-> The restore action revokes the trusted status from all devices, and the user is required to perform multi-factor authentication again. You can also instruct your users to restore Multi-Factor Authentication on their own devices as noted in [Manage your settings for multi-factor authentication](https://support.microsoft.com/account-billing/change-your-two-step-verification-method-and-settings-c801d5ad-e0fc-4711-94d5-33ad5d4630f7#turn-on-two-factor-verification-prompts-on-a-trusted-device).
+> The revoke action revokes the trusted status from all devices, and the user is required to perform multi-factor authentication again. You can also instruct your users to restore the original MFA status on their own devices as noted in [Manage your settings for multi-factor authentication](https://support.microsoft.com/account-billing/change-your-two-step-verification-method-and-settings-c801d5ad-e0fc-4711-94d5-33ad5d4630f7#turn-on-two-factor-verification-prompts-on-a-trusted-device).
-### How the feature works
+#### How the feature works
-The remember Multi-Factor Authentication feature sets a persistent cookie on the browser when a user selects the **Don't ask again for X days** option at sign-in. The user isn't prompted again for Multi-Factor Authentication from that same browser until the cookie expires. If the user opens a different browser on the same device or clears their cookies, they're prompted again to verify.
+The **remember multi-factor authentication** feature sets a persistent cookie on the browser when a user selects the **Don't ask again for *X* days** option at sign-in. The user isn't prompted again for MFA from that browser until the cookie expires. If the user opens a different browser on the same device or clears the cookies, they're prompted again to verify.
-The **Don't ask again for X days** option isn't shown on non-browser applications, regardless of whether the app supports modern authentication. These apps use _refresh tokens_ that provide new access tokens every hour. When a refresh token is validated, Azure AD checks that the last multi-factor authentication occurred within the specified number of days.
+The **Don't ask again for *X* days** option isn't shown on non-browser applications, regardless of whether the app supports modern authentication. These apps use _refresh tokens_ that provide new access tokens every hour. When a refresh token is validated, Azure AD checks that the last multi-factor authentication occurred within the specified number of days.
-The feature reduces the number of authentications on web apps, which normally prompt every time. The feature can increase the number of authentications for modern authentication clients that normally prompt every 180 days, if a lower duration is configured. May also increase the number of authentications when combined with Conditional Access policies.
+The feature reduces the number of authentications on web apps, which normally prompt every time. The feature can increase the number of authentications for modern authentication clients that normally prompt every 180 days, if a lower duration is configured. It might also increase the number of authentications when combined with Conditional Access policies.
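The refresh-token check described above reduces to comparing the time of the last multi-factor authentication against the configured window. An illustrative sketch (the function name and values are ours, for illustration only):

```python
from datetime import datetime, timedelta, timezone

def mfa_within_window(last_mfa_utc, remember_days, now_utc):
    """True if the last MFA happened within the configured remember-MFA window."""
    return now_utc - last_mfa_utc <= timedelta(days=remember_days)

now = datetime(2022, 1, 21, tzinfo=timezone.utc)
assert mfa_within_window(now - timedelta(days=89), 90, now)        # still valid
assert not mfa_within_window(now - timedelta(days=91), 90, now)    # re-prompt
```

With a 90-day window, a user whose last verification was 89 days ago is not prompted again, while one at 91 days is.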
> [!IMPORTANT]
-> The **remember Multi-Factor Authentication** feature isn't compatible with the **keep me signed in** feature of AD FS, when users perform multi-factor authentication for AD FS through Azure Multi-Factor Authentication Server or a third-party multi-factor authentication solution.
+> The **remember multi-factor authentication** feature isn't compatible with the **keep me signed in** feature of AD FS, when users perform multi-factor authentication for AD FS through MFA Server or a third-party multi-factor authentication solution.
>
-> If your users select **keep me signed in** on AD FS and also mark their device as trusted for Multi-Factor Authentication, the user isn't automatically verified after the **remember multi-factor authentication** number of days expires. Azure AD requests a fresh multi-factor authentication, but AD FS returns a token with the original Multi-Factor Authentication claim and date, rather than performing multi-factor authentication again. **This reaction sets off a verification loop between Azure AD and AD FS.**
+> If your users select **keep me signed in** on AD FS and also mark their device as trusted for MFA, the user isn't automatically verified after the **remember multi-factor authentication** number of days expires. Azure AD requests a fresh multi-factor authentication, but AD FS returns a token with the original MFA claim and date, rather than performing multi-factor authentication again. *This reaction sets off a verification loop between Azure AD and AD FS.*
>
-> The **remember Multi-Factor Authentication** feature is not compatible with B2B users and will not be visible for B2B users when signing into the invited tenants.
+> The **remember multi-factor authentication** feature isn't compatible with B2B users and won't be visible for B2B users when they sign in to the invited tenants.
>
-### Enable remember Multi-Factor Authentication
+#### Enable remember multi-factor authentication
-To enable and configure the option for users to remember their MFA status and bypass prompts, complete the following steps:
+To enable and configure the option to allow users to remember their MFA status and bypass prompts, complete the following steps:
-1. In the Azure portal, search for and select **Azure Active Directory**, then choose **Users**.
-1. Select **Multi-Factor Authentication**.
-1. Under Multi-Factor Authentication, select **service settings**.
-1. On the **Service Settings** page, under **remember multi-factor authentication**, select the **Allow users to remember multi-factor authentication on devices they trust** option.
-1. Set the number of days to allow trusted devices to bypass multi-factor authentication. For the optimal user experience, extend the duration to *90* or more days.
+1. In the Azure portal, search for and select **Azure Active Directory**, and then select **Users**.
+1. Select **Per-user MFA**.
+1. Under **multi-factor authentication** at the top of the page, select **service settings**.
+1. On the **service settings** page, under **remember multi-factor authentication**, select **Allow users to remember multi-factor authentication on devices they trust**.
+1. Set the number of days to allow trusted devices to bypass multi-factor authentication. For the optimal user experience, extend the duration to 90 or more days.
1. Select **Save**.
-### Mark a device as trusted
+#### Mark a device as trusted
-After you enable the remember Multi-Factor Authentication feature, users can mark a device as trusted when they sign in by selecting the option for **Don't ask again**.
+After you enable the **remember multi-factor authentication** feature, users can mark a device as trusted when they sign in by selecting **Don't ask again**.
## Next steps
-To learn more about the available methods for use in Azure AD Multi-Factor Authentication, see [What authentication and verification methods are available in Azure Active Directory?](concept-authentication-methods.md)
+To learn more, see [What authentication and verification methods are available in Azure Active Directory?](concept-authentication-methods.md)
active-directory Concept Continuous Access Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/concept-continuous-access-evaluation.md
Your identity provider and resource providers may see different IP addresses. Th
Examples:

-- Your identity provider sees one IP address from the client.
-- Your resource provider sees a different IP address from the client after passing through a proxy.
+- Your identity provider sees one IP address from the client while your resource provider sees a different IP address from the client after passing through a proxy.
- The IP address your identity provider sees is part of an allowed IP range in policy, but the IP address from the resource provider isn't.

To avoid infinite loops in these scenarios, Azure AD issues a one-hour CAE token and won't enforce client location change. In this case, security is still improved compared to traditional one-hour tokens, since we're still evaluating the [other events](#critical-event-evaluation) besides client location change events.
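The fallback above can be modeled as a small sketch. This is illustrative only, not the actual CAE implementation: the function and parameter names are made up, and `default_cae_hours` is an assumed placeholder for the normal long-lived token lifetime. The point is the decision rule: when the identity provider and the resource provider disagree about whether the client IP is allowed, fall back to a one-hour token.

```python
from ipaddress import ip_address, ip_network

def token_lifetime_hours(idp_ip, resource_ip, allowed_ranges, default_cae_hours=24):
    """Illustrative model: issue a one-hour token when the IdP and the
    resource provider disagree about whether the client IP is in an
    allowed range; otherwise keep the long-lived CAE token."""
    allowed = [ip_network(r) for r in allowed_ranges]

    def permitted(ip):
        return any(ip_address(ip) in net for net in allowed)

    if permitted(idp_ip) != permitted(resource_ip):
        # One-hour CAE token; client location change isn't enforced.
        return 1
    return default_cae_hours
```

For example, if policy allows `10.0.0.0/8` and the identity provider sees `10.0.0.5` while the resource provider sees a proxied public address, the sketch returns 1.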
active-directory How To Connect Fed Saml Idp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-fed-saml-idp.md
na Previously updated : 07/13/2017 Last updated : 01/21/2022
active-directory How To Connect Fed Sha256 Guidance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-fed-sha256-guidance.md
na Previously updated : 10/26/2018 Last updated : 01/21/2022
active-directory How To Connect Fed Single Adfs Multitenant Federation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-fed-single-adfs-multitenant-federation.md
Title: Federating multiple Azure AD with single AD FS - Azure
-description: In this document you will learn how to federate multiple Azure AD with a single AD FS.
+description: In this document, you will learn how to federate multiple Azure AD with a single AD FS.
keywords: federate, ADFS, AD FS, multiple tenants, single AD FS, one ADFS, multi-tenant federation, multi-forest adfs, aad connect, federation, cross-tenant federation documentationcenter: ''
na Previously updated : 07/17/2017 Last updated : 01/21/2022
# Federate multiple instances of Azure AD with single instance of AD FS
-A single high available AD FS farm can federate multiple forests if they have 2-way trust between them. These multiple forests may or may not correspond to the same Azure Active Directory. This article provides instructions on how to configure federation between a single AD FS deployment and more than one forests that sync to different Azure AD.
+A single highly available AD FS farm can federate multiple forests if they have a two-way trust between them. These forests may or may not correspond to the same Azure Active Directory. This article provides instructions on how to configure federation between a single AD FS deployment and multiple instances of Azure AD.
![Multi-tenant federation with single AD FS](./media/how-to-connect-fed-single-adfs-multitenant-federation/concept.png)
For AD FS in contoso.com to be able to authenticate users in fabrikam.com, a two
## Step 2: Modify contoso.com federation settings
-The default issuer set for a single domain federated to AD FS is "http\://ADFSServiceFQDN/adfs/services/trust", for example, `http://fs.contoso.com/adfs/services/trust`. Azure Active Directory requires unique issuer for each federated domain. Since the same AD FS is going to federate two domains, the issuer value needs to be modified so that it is unique for each domain AD FS federates with Azure Active Directory.
+The default issuer set for a single domain federated to AD FS is "http\://ADFSServiceFQDN/adfs/services/trust", for example, `http://fs.contoso.com/adfs/services/trust`. Azure Active Directory requires a unique issuer for each federated domain. Because AD FS is going to federate two domains, the issuer value must be modified so that it's unique for each domain.
-On the AD FS server, open Azure AD PowerShell (ensure that the MSOnline module is installed) and perform the following steps:
+On the AD FS server, open Azure AD PowerShell (ensure that the MSOnline module is installed) and do the following steps:
Connect to the Azure Active Directory that contains the domain contoso.com:

    Connect-MsolService

    Convert-MsolDomainToFederated -DomainName fabrikam.com -Verbose -SupportMultiple
The above operation will federate the domain fabrikam.com with the same AD FS. You can verify the domain settings by using Get-MsolDomainFederationSettings for both domains.

## Next steps
-[Connect Active Directory with Azure Active Directory](whatis-hybrid-identity.md)
+[Connect Active Directory with Azure Active Directory](whatis-hybrid-identity.md)
active-directory How To Connect Fed Ssl Update https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-fed-ssl-update.md
na Previously updated : 07/09/2018 Last updated : 01/21/2022
After you complete the configuration, Azure AD Connect displays the message that
## Next steps

- [Azure AD Connect and federation](how-to-connect-fed-whatis.md)
-- [Active Directory Federation Services management and customization with Azure AD Connect](how-to-connect-fed-management.md)
+- [Active Directory Federation Services management and customization with Azure AD Connect](how-to-connect-fed-management.md)
active-directory How To Connect Fed Whatis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-fed-whatis.md
na Previously updated : 10/09/2018 Last updated : 01/21/2022
This topic is the home for information on federation-related functionalities for
## Additional resources

* [Federating two Azure AD with single AD FS](how-to-connect-fed-single-adfs-multitenant-federation.md)
* [AD FS deployment in Azure](/windows-server/identity/ad-fs/deployment/how-to-connect-fed-azure-adfs)
-* [High-availability cross-geographic AD FS deployment in Azure with Azure Traffic Manager](/windows-server/identity/ad-fs/deployment/active-directory-adfs-in-azure-with-azure-traffic-manager)
+* [High-availability cross-geographic AD FS deployment in Azure with Azure Traffic Manager](/windows-server/identity/ad-fs/deployment/active-directory-adfs-in-azure-with-azure-traffic-manager)
active-directory How To Connect Fix Default Rules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-fix-default-rules.md
Previously updated : 03/21/2019 Last updated : 01/21/2022
To fix your rules to change them back to default settings, delete the modified r
## Next steps

- [Hardware and prerequisites](how-to-connect-install-prerequisites.md)
- [Express settings](how-to-connect-install-express.md)
-- [Customized settings](how-to-connect-install-custom.md)
+- [Customized settings](how-to-connect-install-custom.md)
active-directory How To Connect Group Writeback https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-group-writeback.md
Previously updated : 06/11/2020 Last updated : 01/21/2022
active-directory How To Connect Health Ad Fs Sign In https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-health-ad-fs-sign-in.md
na Previously updated : 03/16/2021 Last updated : 01/21/2022
active-directory How To Connect Health Adds https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-health-adds.md
na Previously updated : 07/18/2017 Last updated : 01/21/2022
By default, we have preselected four performance counters; however, you can incl
* [Using Azure AD Connect Health with AD FS](how-to-connect-health-adfs.md)
* [Using Azure AD Connect Health for sync](how-to-connect-health-sync.md)
* [Azure AD Connect Health FAQ](reference-connect-health-faq.yml)
-* [Azure AD Connect Health Version History](reference-connect-health-version-history.md)
+* [Azure AD Connect Health Version History](reference-connect-health-version-history.md)
active-directory How To Connect Health Adfs Risky Ip Workbook https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-health-adfs-risky-ip-workbook.md
na Previously updated : 10/14/2021 Last updated : 01/21/2022
active-directory How To Connect Health Adfs Risky Ip https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-health-adfs-risky-ip.md
na Previously updated : 02/26/2019 Last updated : 01/21/2022
active-directory How To Connect Health Adfs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-health-adfs.md
na Previously updated : 02/26/2019 Last updated : 01/21/2022
The report provides the following information:
## Related links

* [Azure AD Connect Health](./whatis-azure-ad-connect.md)
* [Azure AD Connect Health Agent Installation](how-to-connect-health-agent-install.md)
-* [Risky IP report](how-to-connect-health-adfs-risky-ip.md)
+* [Risky IP report](how-to-connect-health-adfs-risky-ip.md)
active-directory How To Connect Health Agent Install https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-health-agent-install.md
na Previously updated : 10/20/2020 Last updated : 01/21/2022
active-directory How To Connect Health Alert Catalog https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-health-alert-catalog.md
na Previously updated : 03/15/2018 Last updated : 01/21/2022
Azure AD Connect Health alerts get resolved on a success condition. Azure AD Con
## Next steps
-* [Azure AD Connect Health FAQ](reference-connect-health-faq.yml)
+* [Azure AD Connect Health FAQ](reference-connect-health-faq.yml)
active-directory How To Connect Health Data Freshness https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-health-data-freshness.md
na Previously updated : 02/26/2018 Last updated : 01/21/2022
active-directory How To Connect Health Diagnose Sync Errors https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-health-diagnose-sync-errors.md
na Previously updated : 05/11/2018 Last updated : 01/21/2022
active-directory How To Connect Health Operations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-health-operations.md
na Previously updated : 07/18/2017 Last updated : 01/21/2022
active-directory How To Connect Health Sync https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-health-sync.md
na Previously updated : 07/18/2017 Last updated : 01/21/2022
Read more about [Diagnose and remediate duplicated attribute sync errors](how-to
* [Using Azure AD Connect Health with AD FS](how-to-connect-health-adfs.md)
* [Using Azure AD Connect Health with AD DS](how-to-connect-health-adds.md)
* [Azure AD Connect Health FAQ](reference-connect-health-faq.yml)
-* [Azure AD Connect Health Version History](reference-connect-health-version-history.md)
+* [Azure AD Connect Health Version History](reference-connect-health-version-history.md)
active-directory How To Connect Import Export Config https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-import-export-config.md
Previously updated : 07/13/2020 Last updated : 01/21/2022
active-directory How To Connect Install Automatic Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-install-automatic-upgrade.md
na Previously updated : 08/11/2021 Last updated : 01/21/2022
active-directory How To Connect Install Custom https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-install-custom.md
ms.assetid: 6d42fb79-d9cf-48da-8445-f482c4c536af
Previously updated : 09/10/2020 Last updated : 01/21/2022
active-directory How To Connect Install Existing Database https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-install-existing-database.md
na Previously updated : 08/30/2017 Last updated : 01/21/2022
active-directory How To Connect Install Existing Tenant https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-install-existing-tenant.md
Previously updated : 04/25/2019 Last updated : 01/21/2022
active-directory How To Connect Install Express https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-install-express.md
na Previously updated : 11/29/2021 Last updated : 01/21/2022
Learn more about [Integrating your on-premises identities with Azure Active Dire
| Azure AD Connect overview | [Integrate your on-premises directories with Azure Active Directory](whatis-hybrid-identity.md) |
| Install using customized settings | [Custom installation of Azure AD Connect](how-to-connect-install-custom.md) |
| Upgrade from DirSync | [Upgrade from Azure AD sync tool (DirSync)](how-to-dirsync-upgrade-get-started.md)|
-| Accounts used for installation | [More about Azure AD Connect credentials and permissions](reference-connect-accounts-permissions.md) |
+| Accounts used for installation | [More about Azure AD Connect credentials and permissions](reference-connect-accounts-permissions.md) |
active-directory How To Connect Install Move Db https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-install-move-db.md
Previously updated : 04/29/2019 Last updated : 01/21/2022
active-directory How To Connect Install Multiple Domains https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-install-multiple-domains.md
na Previously updated : 05/31/2017 Last updated : 01/21/2022
active-directory How To Connect Install Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-install-prerequisites.md
na Previously updated : 06/21/2021 Last updated : 01/21/2022
active-directory How To Connect Install Roadmap https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-install-roadmap.md
na Previously updated : 09/18/2018 Last updated : 01/21/2022
The Azure AD Connect Health portal shows views of alerts, performance monitoring
- [Pass-through authentication](how-to-connect-pta.md)
- [Azure AD Connect and federation](how-to-connect-fed-whatis.md)
- [Install Azure AD Connect Health agents](how-to-connect-health-agent-install.md)
-- [Azure AD Connect sync](how-to-connect-sync-whatis.md)
+- [Azure AD Connect sync](how-to-connect-sync-whatis.md)
active-directory How To Connect Install Select Installation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-install-select-installation.md
na Previously updated : 07/12/2017 Last updated : 01/21/2022
active-directory How To Connect Install Sql Delegation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-install-sql-delegation.md
na Previously updated : 02/26/2018 Last updated : 01/21/2022
active-directory How To Connect Installation Wizard https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-installation-wizard.md
na Previously updated : 07/17/2019 Last updated : 01/21/2022
active-directory How To Connect Migrate Groups https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-migrate-groups.md
Previously updated : 04/02/2020 Last updated : 01/21/2022
active-directory How To Connect Monitor Federation Changes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-monitor-federation-changes.md
Previously updated : 06/21/2021 Last updated : 01/21/2022
After the environment is configured, the data flows as follows:
- [Integrate Azure AD logs with Azure Monitor logs](../../active-directory/reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md)
- [Create, view, and manage log alerts using Azure Monitor](../../azure-monitor/alerts/alerts-log.md)
- [Manage AD FS trust with Azure AD using Azure AD Connect](how-to-connect-azure-ad-trust.md)
-- [Best practices for securing Active Directory Federation Services](/windows-server/identity/ad-fs/deployment/best-practices-securing-ad-fs)
+- [Best practices for securing Active Directory Federation Services](/windows-server/identity/ad-fs/deployment/best-practices-securing-ad-fs)
active-directory How To Connect Password Hash Synchronization https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-password-hash-synchronization.md
ms.assetid: 05f16c3e-9d23-45dc-afca-3d0fa9dbf501
Previously updated : 07/01/2021 Last updated : 01/21/2022 search.appverid:
active-directory How To Connect Post Installation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-post-installation.md
na Previously updated : 04/26/2019 Last updated : 01/21/2022
active-directory How To Connect Preview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-preview.md
na Previously updated : 05/15/2020 Last updated : 01/21/2022
active-directory How To Connect Pta Current Limitations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-pta-current-limitations.md
na Previously updated : 09/04/2018 Last updated : 01/21/2022
active-directory How To Connect Pta Disable Do Not Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-pta-disable-do-not-configure.md
Previously updated : 04/20/2020 Last updated : 01/21/2022
active-directory How To Connect Pta How It Works https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-pta-how-it-works.md
na Previously updated : 07/19/2018 Last updated : 01/21/2022
active-directory How To Connect Pta Quick Start https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-pta-quick-start.md
na Previously updated : 04/13/2020 Last updated : 01/21/2022
active-directory How To Connect Pta Security Deep Dive https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-pta-security-deep-dive.md
na Previously updated : 05/27/2020 Last updated : 01/21/2022
To auto-update an Authentication Agent:
- [How it works](how-to-connect-pta-how-it-works.md): Learn the basics of how Azure AD Pass-through Authentication works.
- [Frequently asked questions](how-to-connect-pta-faq.yml): Find answers to frequently asked questions.
- [Troubleshoot](tshoot-connect-pass-through-authentication.md): Learn how to resolve common problems with the Pass-through Authentication feature.
-- [Azure AD Seamless SSO](how-to-connect-sso.md): Learn more about this complementary feature.
+- [Azure AD Seamless SSO](how-to-connect-sso.md): Learn more about this complementary feature.
active-directory How To Connect Pta Upgrade Preview Authentication Agents https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-pta-upgrade-preview-authentication-agents.md
na Previously updated : 07/27/2018 Last updated : 01/21/2022
Follow these steps to upgrade Authentication Agents on other servers (where Azur
>If you check the Pass-through Authentication blade on the [Azure Active Directory admin center](https://aad.portal.azure.com) after completing the preceding steps, you'll see two Authentication Agent entries per server - one entry showing the Authentication Agent as **Active** and the other as **Inactive**. This is _expected_. The **Inactive** entry is automatically dropped after a few days.

## Next steps
-- [**Troubleshoot**](tshoot-connect-pass-through-authentication.md) - Learn how to resolve common issues with the feature.
+- [**Troubleshoot**](tshoot-connect-pass-through-authentication.md) - Learn how to resolve common issues with the feature.
active-directory How To Connect Pta https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-pta.md
na Previously updated : 10/21/2018 Last updated : 01/21/2022
You can combine Pass-through Authentication with the [Seamless Single Sign-On](h
- [Troubleshoot](tshoot-connect-pass-through-authentication.md) - Learn how to resolve common issues with the feature.
- [Security Deep Dive](how-to-connect-pta-security-deep-dive.md) - Additional deep technical information on the feature.
- [Azure AD Seamless SSO](how-to-connect-sso.md) - Learn more about this complementary feature.
-- [UserVoice](https://feedback.azure.com/d365community/forum/22920db1-ad25-ec11-b6e6-000d3a4f0789) - For filing new feature requests.
+- [UserVoice](https://feedback.azure.com/d365community/forum/22920db1-ad25-ec11-b6e6-000d3a4f0789) - For filing new feature requests.
active-directory How To Connect Selective Password Hash Synchronization https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-selective-password-hash-synchronization.md
Previously updated : 03/16/2021 Last updated : 01/21/2022
active-directory How To Connect Single Object Sync https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-single-object-sync.md
Previously updated : 06/24/2021 Last updated : 01/21/2022
The Single Object Sync tool **is** intended for investigating and troubleshootin
## Next steps

- [Troubleshooting object synchronization](tshoot-connect-objectsync.md)
- [Troubleshoot object not synchronizing](tshoot-connect-object-not-syncing.md)
-- [End-to-end troubleshooting of Azure AD Connect objects and attributes](/troubleshoot/azure/active-directory/troubleshoot-aad-connect-objects-attributes)
+- [End-to-end troubleshooting of Azure AD Connect objects and attributes](/troubleshoot/azure/active-directory/troubleshoot-aad-connect-objects-attributes)
active-directory How To Connect Sso How It Works https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-sso-how-it-works.md
na Previously updated : 04/16/2019 Last updated : 01/21/2022
active-directory How To Connect Sso Quick Start https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-sso-quick-start.md
na Previously updated : 04/16/2019 Last updated : 01/21/2022
active-directory How To Connect Sso https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-sso.md
na Previously updated : 08/13/2019 Last updated : 01/21/2022
active-directory How To Connect Staged Rollout https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-staged-rollout.md
Previously updated : 06/03/2020 Last updated : 01/21/2022
active-directory How To Connect Sync Best Practices Changing Default Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-sync-best-practices-changing-default-configuration.md
na Previously updated : 08/29/2017 Last updated : 01/21/2022
active-directory How To Connect Sync Change Addsacct Pass https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-sync-change-addsacct-pass.md
na Previously updated : 07/12/2017 Last updated : 01/21/2022
active-directory How To Connect Sync Change Serviceacct Pass https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-sync-change-serviceacct-pass.md
na Previously updated : 03/17/2021 Last updated : 01/21/2022
active-directory How To Connect Sync Change The Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-sync-change-the-configuration.md
ms.assetid: 7b9df836-e8a5-4228-97da-2faec9238b31
Previously updated : 08/30/2018 Last updated : 01/21/2022
active-directory How To Connect Sync Configure Filtering https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-sync-configure-filtering.md
na Previously updated : 03/26/2019 Last updated : 01/21/2022
active-directory How To Connect Sync Endpoint Api V2 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-sync-endpoint-api-v2.md
editor: ''
Previously updated : 12/04/2020 Last updated : 01/21/2022
active-directory How To Connect Sync Feature Directory Extensions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-sync-feature-directory-extensions.md
na Previously updated : 08/09/2021 Last updated : 01/21/2022
The installation shows the following attributes, which are valid candidates:
> [!NOTE]
> Not all features in Azure Active Directory support multi-valued extension attributes. Refer to the documentation of the feature in which you plan to use these attributes to confirm they're supported.

The list of attributes is read from the schema cache that's created during installation of Azure AD Connect. If you have extended the Active Directory schema with additional attributes, you must [refresh the schema](how-to-connect-installation-wizard.md#refresh-directory-schema) before these new attributes are visible. An object in Azure AD can have up to 100 attributes for directory extensions. The maximum length of each attribute value is 250 characters. If an attribute value is longer, the sync engine truncates it.
+> [!NOTE]
+> Syncing constructed attributes, such as msDS-UserPasswordExpiryTimeComputed, is not supported. If you upgrade from an older version of Azure AD Connect, you may still see these attributes in the installation wizard, but you should not enable them; their values will not sync to Azure AD if you do.
+> You can read more about constructed attributes in [this article](https://docs.microsoft.com/openspecs/windows_protocols/ms-adts/a3aff238-5f0e-4eec-8598-0a59c30ecd56).
+> You should also not attempt to sync [non-replicated attributes](https://docs.microsoft.com/windows/win32/ad/attributes), such as badPwdCount, Last-Logon, and Last-Logoff, because their values will not be synced to Azure AD.
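Both categories called out in the note map to bits in an attribute's `systemFlags` value on its `attributeSchema` object, per MS-ADTS. The helper below is an illustrative sketch (the function name is made up, not part of Azure AD Connect) showing how those bits identify attributes that shouldn't be selected for sync:

```python
# systemFlags bits on attributeSchema objects (bit values per MS-ADTS):
FLAG_ATTR_NOT_REPLICATED = 0x00000001  # non-replicated, e.g. badPwdCount, lastLogon
FLAG_ATTR_IS_CONSTRUCTED = 0x00000004  # constructed, e.g. msDS-UserPasswordExpiryTimeComputed

def is_sync_candidate(system_flags: int) -> bool:
    """Illustrative check: constructed and non-replicated attributes
    should not be chosen as directory extension attributes."""
    return not (system_flags & (FLAG_ATTR_NOT_REPLICATED | FLAG_ATTR_IS_CONSTRUCTED))
```

For example, an ordinary replicated attribute (`systemFlags` 0) passes, while a constructed attribute (bit 0x4 set) does not.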
+
## Configuration changes in Azure AD made by the wizard

During installation of Azure AD Connect, an application is registered where these attributes are available. You can see this application in the Azure portal. Its name is always **Tenant Schema Extension App**.
active-directory How To Connect Sync Feature Preferreddatalocation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-sync-feature-preferreddatalocation.md
Previously updated : 06/09/2021 Last updated : 01/21/2022
Learn more about the configuration model in the sync engine:
Overview topics:

* [Azure AD Connect sync: Understand and customize synchronization](how-to-connect-sync-whatis.md)
-* [Integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md)
+* [Integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md)
active-directory How To Connect Sync Feature Prevent Accidental Deletes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-sync-feature-prevent-accidental-deletes.md
Title: 'Azure AD Connect sync: Prevent accidental deletes | Microsoft Docs'
-description: This topic describes the prevent accidental deletes (preventing accidental deletions) feature in Azure AD Connect.
+description: This topic describes how to prevent accidental deletes in Azure AD Connect.
documentationcenter: ''
na Previously updated : 07/12/2017 Last updated : 01/21/2022
This topic describes the prevent accidental deletes (preventing accidental delet
When installing Azure AD Connect, prevent accidental deletes is enabled by default and configured to not allow an export with more than 500 deletes. This feature is designed to protect you from accidental configuration changes and changes to your on-premises directory that would affect many users and other objects.

## What is prevent accidental deletes
-Common scenarios when you see many deletes include:
+Common scenarios involving many deletes include:
* Changes to [filtering](how-to-connect-sync-configure-filtering.md) where an entire [OU](how-to-connect-sync-configure-filtering.md#organizational-unitbased-filtering) or [domain](how-to-connect-sync-configure-filtering.md#domain-based-filtering) is unselected.
* All objects in an OU are deleted.
If this was unexpected, then investigate and take corrective actions. To see whi
![Search Connector Space](./media/how-to-connect-sync-feature-prevent-accidental-deletes/searchcs.png)
-[!NOTE] If you aren't sure all deletes are desired, and wish to go down a safer route. You can use the PowerShell cmdlet : `Enable-ADSyncExportDeletionThreshold` to set a new threshold rather than disabling the threshold which could allow undesired deletions.
+> [!NOTE]
+> If you aren't sure that all deletes are desired and want a safer route, you can use the PowerShell cmdlet `Enable-ADSyncExportDeletionThreshold` to set a new threshold, rather than disabling the threshold, which could allow undesired deletions.
## If all deletes are desired

If all the deletes are desired, then do the following:
active-directory How To Connect Sync Feature Scheduler https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-sync-feature-scheduler.md
na Previously updated : 05/01/2019 Last updated : 01/21/2022
active-directory How To Connect Sync Recycle Bin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-sync-recycle-bin.md
na Previously updated : 12/17/2018 Last updated : 01/21/2022
This feature helps with restoring Azure AD user objects by doing the following:
* [Azure AD Connect sync: Understand and customize synchronization](how-to-connect-sync-whatis.md)
-* [Integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md)
+* [Integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md)
active-directory How To Connect Sync Service Manager Ui https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-sync-service-manager-ui.md
na Previously updated : 07/13/2017 Last updated : 01/21/2022
active-directory How To Connect Syncservice Duplicate Attribute Resiliency https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-syncservice-duplicate-attribute-resiliency.md
na Previously updated : 01/15/2018 Last updated : 01/21/2022
active-directory How To Connect Syncservice Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-syncservice-features.md
na Previously updated : 9/14/2021 Last updated : 01/21/2022
active-directory Tshoot Connect Sync Errors https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/tshoot-connect-sync-errors.md
na Previously updated : 10/29/2018 Last updated : 01/21/2022 +
-# Troubleshoot errors during synchronization
+# Understanding errors during Azure AD synchronization
Errors can occur when identity data is synced from Windows Server Active Directory to Azure Active Directory (Azure AD). This article provides an overview of different types of sync errors, some of the possible scenarios that cause those errors, and potential ways to fix the errors. This article includes common error types and might not cover all possible errors. This article assumes you're familiar with the underlying [design concepts of Azure AD and Azure AD Connect](plan-connect-design-concepts.md).
+>[!IMPORTANT]
+>This article addresses the most common synchronization errors; covering every scenario in one document isn't possible. For more information, including in-depth troubleshooting steps, see [End-to-end troubleshooting of Azure AD Connect objects and attributes](https://docs.microsoft.com/troubleshoot/azure/active-directory/troubleshoot-aad-connect-objects-attributes) and the [User Provisioning and Synchronization](https://docs.microsoft.com/troubleshoot/azure/active-directory/welcome-azure-ad) section of the Azure AD troubleshooting documentation.
+ With the latest version of Azure AD Connect \(August 2016 or higher\), a Synchronization Errors Report is available in the [Azure portal](https://aka.ms/aadconnecthealth) as part of Azure AD Connect Health for sync. Starting September 1, 2016, [Azure AD duplicate attribute resiliency](how-to-connect-syncservice-duplicate-attribute-resiliency.md) is enabled by default for all the *new* Azure AD tenants. This feature is automatically enabled for existing tenants.
To resolve this issue:
* [Locate Active Directory objects in Active Directory Administrative Center](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/dd560661(v=ws.10)) * [Query Azure AD for an object by using Azure AD PowerShell](/previous-versions/azure/jj151815(v=azure.100))
+* [End-to-end troubleshooting of Azure AD Connect objects and attributes](https://docs.microsoft.com/troubleshoot/azure/active-directory/troubleshoot-aad-connect-objects-attributes)
+* [Azure AD Troubleshooting](https://docs.microsoft.com/troubleshoot/azure/active-directory/welcome-azure-ad)
active-directory Tenant Restrictions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/tenant-restrictions.md
Last updated 12/6/2021
+ # Restrict access to a tenant
-Large organizations that emphasize security want to move to cloud services like Microsoft 365, but need to know that their users only can access approved resources. Traditionally, companies restrict domain names or IP addresses when they want to manage access. This approach fails in a world where software as a service (or SaaS) apps are hosted in a public cloud, running on shared domain names like [outlook.office.com](https://outlook.office.com/) and [login.microsoftonline.com](https://login.microsoftonline.com/). Blocking these addresses would keep users from accessing Outlook on the web entirely, instead of merely restricting them to approved identities and resources.
+Large organizations that emphasize security want to move to cloud services like Microsoft 365, but need to know that their users only can access approved resources. Traditionally, companies restrict domain names or IP addresses when they want to manage access. This approach fails in a world where software as a service (or SaaS) apps are hosted in a public cloud, running on shared domain names like outlook.office.com and login.microsoftonline.com. Blocking these addresses would keep users from accessing Outlook on the web entirely, instead of merely restricting them to approved identities and resources.
-The Azure Active Directory (Azure AD) solution to this challenge is a feature called tenant restrictions. With tenant restrictions, organizations can control access to SaaS cloud applications, based on the Azure AD tenant the applications use for single sign-on. For example, you may want to allow access to your organization's Microsoft 365 applications, while preventing access to other organizations' instances of these same applications.
+The Azure Active Directory (Azure AD) solution to this challenge is a feature called tenant restrictions. With tenant restrictions, organizations can control access to SaaS cloud applications, based on the Azure AD tenant the applications use for [single sign-on](what-is-single-sign-on.md). For example, you may want to allow access to your organization's Microsoft 365 applications, while preventing access to other organizations' instances of these same applications.
-With tenant restrictions, organizations can specify the list of tenants that users on their network are permitted to access. Azure AD then only grants access to these permitted tenants - all other tenants are blocked, even ones that your users may be a guest in.
+With tenant restrictions, organizations can specify the list of tenants that users on their network are permitted to access. Azure AD then only grants access to these permitted tenants - all other tenants are blocked, even ones that your users may be guests in.
This article focuses on tenant restrictions for Microsoft 365, but the feature protects all apps that send the user to Azure AD for single sign-on. If you use SaaS apps with a different Azure AD tenant from the tenant used by your Microsoft 365, make sure that all required tenants are permitted (for example, in B2B collaboration scenarios). For more information about SaaS cloud apps, see the [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps).
There are two steps to get started with tenant restrictions. First, make sure th
### URLs and IP addresses
-To use tenant restrictions, your clients must be able to connect to the following Azure AD URLs to authenticate: [login.microsoftonline.com](https://login.microsoftonline.com/), [login.microsoft.com](https://login.microsoft.com/), and [login.windows.net](https://login.windows.net/). Additionally, to access Office 365, your clients must also be able to connect to the fully qualified domain names (FQDNs), URLs, and IP addresses defined in [Office 365 URLs and IP address ranges](https://support.office.com/article/Office-365-URLs-and-IP-address-ranges-8548a211-3fe7-47cb-abb1-355ea5aa88a2).
+To use tenant restrictions, your clients must be able to connect to the following Azure AD URLs to authenticate:
+
+- login.microsoftonline.com
+- login.microsoft.com
+- login.windows.net
+
+Additionally, to access Office 365, your clients must also be able to connect to the fully qualified domain names (FQDNs), URLs, and IP addresses defined in [Office 365 URLs and IP address ranges](https://support.office.com/article/Office-365-URLs-and-IP-address-ranges-8548a211-3fe7-47cb-abb1-355ea5aa88a2).
### Proxy configuration and requirements
The following configuration is required to enable tenant restrictions through yo
- The proxy must be able to perform TLS interception, HTTP header insertion, and filter destinations using FQDNs/URLs. -- Clients must trust the certificate chain presented by the proxy for TLS communications. For example, if certificates from an internal [public key infrastructure (PKI)](/windows/desktop/seccertenroll/public-key-infrastructure) are used, the internal issuing root certificate authority certificate must be trusted.
+- Clients must trust the certificate chain presented by the proxy for TLS communications. For example, if certificates from an internal public key infrastructure (PKI) are used, the internal issuing root certificate authority certificate must be trusted.
- Azure AD Premium P1 licenses are required for use of Tenant Restrictions.
For specific details, refer to your proxy server documentation.
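The proxy's role can be sketched in a few lines. The following Python sketch is illustrative only, not proxy configuration: the header names are the documented tenant-restrictions headers, while the tenant list and directory ID are placeholder values you would replace with your own.

```python
# Sketch: the headers a TLS-intercepting proxy injects on requests to the
# Azure AD login endpoints to enforce tenant restrictions. The tenant names
# and directory ID below are placeholders, not real values.

LOGIN_HOSTS = {"login.microsoftonline.com", "login.microsoft.com", "login.windows.net"}

def tenant_restriction_headers(permitted_tenants, directory_id):
    """Build the headers the proxy inserts for a permitted-tenant policy."""
    return {
        "Restrict-Access-To-Tenants": ",".join(permitted_tenants),
        "Restrict-Access-Context": directory_id,
    }

def headers_for(host, permitted_tenants, directory_id):
    # Only traffic to the Azure AD login endpoints needs the inserted headers;
    # all other destinations pass through unchanged.
    if host in LOGIN_HOSTS:
        return tenant_restriction_headers(permitted_tenants, directory_id)
    return {}
```

Note that the policy is enforced by Azure AD when it sees these headers, not by the proxy itself; the proxy only decides which destinations get them.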
## Blocking consumer applications
-Applications from Microsoft that support both consumer accounts and organizational accounts, like [OneDrive](https://onedrive.live.com/) or [Microsoft Learn](/learn/), can sometimes be hosted on the same URL. This means that users that must access that URL for work purposes also have access to it for personal use, which may not be permitted under your operating guidelines.
+Applications from Microsoft that support both consumer accounts and organizational accounts, like OneDrive or Microsoft Learn, can sometimes be hosted on the same URL. This means that users who must access that URL for work purposes also have access to it for personal use, which may not be permitted under your operating guidelines.
Some organizations attempt to fix this by blocking `login.live.com` in order to block personal accounts from authenticating. This has several downsides:
active-directory Delegate By Task https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/delegate-by-task.md
You can further restrict permissions by assigning roles at smaller scopes or by
> | Manage user settings | [Global Administrator](../roles/permissions-reference.md#global-administrator) | | > | Read access review of a group or of an app | [Security Reader](../roles/permissions-reference.md#security-reader) | [Security Administrator](../roles/permissions-reference.md#security-administrator)<br/>[User Administrator](../roles/permissions-reference.md#user-administrator) | > | Read all configuration | [Default user role](../fundamentals/users-default-permissions.md) | |
-> | Update enterprise application assignments | [Enterprise application owner](../fundamentals/users-default-permissions.md#object-ownership) | [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator)<br/>[Application Administrator](../roles/permissions-reference.md#application-administrator) |
+> | Update enterprise application assignments | [Enterprise application owner](../fundamentals/users-default-permissions.md#object-ownership) | [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator)<br/>[Application Administrator](../roles/permissions-reference.md#application-administrator)<br/>[User Administrator](../roles/permissions-reference.md#user-administrator) |
> | Update enterprise application owners | [Enterprise application owner](../fundamentals/users-default-permissions.md#object-ownership) | [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator)<br/>[Application Administrator](../roles/permissions-reference.md#application-administrator) | > | Update enterprise application properties | [Enterprise application owner](../fundamentals/users-default-permissions.md#object-ownership) | [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator)<br/>[Application Administrator](../roles/permissions-reference.md#application-administrator) | > | Update enterprise application provisioning | [Enterprise application owner](../fundamentals/users-default-permissions.md#object-ownership) | [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator)<br/>[Application Administrator](../roles/permissions-reference.md#application-administrator) |
active-directory Dropboxforbusiness Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/dropboxforbusiness-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with Dropbox Business | Microsoft Docs'
+ Title: 'Tutorial: Azure Active Directory integration with Dropbox Business'
description: Learn how to configure single sign-on between Azure Active Directory and Dropbox Business.
Previously updated : 11/17/2021 Last updated : 01/17/2022 # Tutorial: Integrate Dropbox Business with Azure Active Directory
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
4. Click on the **User Icon** and select **Settings** tab.
- ![Screenshot that shows the "USER ICON" action and "Settings" selected.](./media/dropboxforbusiness-tutorial/configure-1.png "Configure single sign-on")
+ ![Screenshot that shows the "USER ICON" action and "Settings" selected.](./media/dropboxforbusiness-tutorial/user-icon.png "Configure single sign-on")
5. In the navigation pane on the left side, click **Admin console**.
- ![Screenshot that shows "Admin console" selected.](./media/dropboxforbusiness-tutorial/configure-2.png "Configure single sign-on")
+ ![Screenshot that shows "Admin console" selected.](./media/dropboxforbusiness-tutorial/admin-console.png "Configure single sign-on")
6. On the **Admin console**, click **Settings** in the left navigation pane.
- ![Screenshot that shows "Settings" selected.](./media/dropboxforbusiness-tutorial/configure-3.png "Configure single sign-on")
+ ![Screenshot that shows "Settings" selected.](./media/dropboxforbusiness-tutorial/settings.png "Configure single sign-on")
7. Select **Single sign-on** option under the **Authentication** section.
- ![Screenshot that shows the "Authentication" section with "Single sign-on" selected.](./media/dropboxforbusiness-tutorial/configure-4.png "Configure single sign-on")
+ ![Screenshot that shows the "Authentication" section with "Single sign-on" selected.](./media/dropboxforbusiness-tutorial/authentication.png "Configure single sign-on")
8. In the **Single sign-on** section, perform the following steps:
- ![Screenshot that shows the "Single sign-on" configuration settings.](./media/dropboxforbusiness-tutorial/configure-5.png "Configure single sign-on")
+ ![Screenshot that shows the "Single sign-on" configuration settings.](./media/dropboxforbusiness-tutorial/configure-sso.png "Configure single sign-on")
a. In the **Single sign-on** dropdown, select **Required**.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
### Create Dropbox Business test user
-In this section, a user called B.Simon is created in Dropbox Business. Dropbox Business supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Dropbox Business, a new one is created after authentication.
+1. Log in to the Dropbox Business website as an administrator.
-This application also supports automatic user provisioning. See how to enable auto provisioning for [Dropbox Business](dropboxforbusiness-provisioning-tutorial.md).
+1. Go to the **Admin Console** and click **Members** in the left menu.
+
+ ![Screenshot for Invite member](./media/dropboxforbusiness-tutorial/invite-member.png)
+
+1. Enter a valid user email address to add the user, and then click **Invite**.
->[!Note]
->If you need to create a user manually, Contact [Dropbox Business Client support team](https://www.dropbox.com/business/contact)
+ ![Screenshot for Invite](./media/dropboxforbusiness-tutorial/invite-button.png)
+
+This application also supports automatic user provisioning. See how to enable auto provisioning for [Dropbox Business](dropboxforbusiness-provisioning-tutorial.md).
## Test SSO
active-directory Evercate Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/evercate-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure Evercate for automatic user provisioning with Azure Active Directory | Microsoft Docs'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to Evercate.
++
+writer: twimmers
++
+ms.assetid: df77d462-071a-4889-b6e1-0554adaa2445
++++ Last updated : 01/10/2022+++
+# Tutorial: Configure Evercate for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both Evercate and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Evercate](https://evercate.com) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
++
+## Capabilities Supported
+> [!div class="checklist"]
+> * Create users in Evercate.
+> * Remove users in Evercate when they do not require access anymore.
+> * Keep user attributes synchronized between Azure AD and Evercate.
+> * Provision groups and group memberships in Evercate.
+> * [Single sign-on](../manage-apps/add-application-portal-setup-oidc-sso.md) to Evercate (recommended).
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md).
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* A user account in Evercate with Admin permissions.
+
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and Evercate](../app-provisioning/customize-application-attributes.md).
+
+## Step 2. Configure Evercate to support provisioning with Azure AD
+
+1. Log in to Evercate as an administrator and click on **Settings** in the top menu.
+1. Under Settings, navigate to **Advanced -> Connect Azure AD**.
+1. Click the button "**I understand, connect Azure AD**" to start the process.
+ [![connect Azure AD](media/evercate-provisioning-tutorial/connect-azure-ad-page.png)](media/evercate-provisioning-tutorial/connect-azure-ad-page.png#lightbox)
+1. Now you're taken to Microsoft's sign-in page, where you need to sign in as an administrator for your AD.
+
+ The Microsoft user you sign in with must:
+
+ * Be an administrator with permissions to "Enterprise Applications".
+ * Be an AD user and not a personal account.
+
+ [![Sign in](media/evercate-provisioning-tutorial/sign-in-page.png)](media/evercate-provisioning-tutorial/sign-in-page.png#lightbox)
+
+1. Tick the **Consent on behalf of your organization** checkbox before clicking **Accept**.
+ [![Provide consent](media/evercate-provisioning-tutorial/consent-page.png)](media/evercate-provisioning-tutorial/consent-page.png#lightbox)
+ > [!NOTE]
+ > If you missed ticking the consent checkbox, every user will get a similar dialog upon their first sign-in. See the section "Configuring the application in Azure" below for how to give consent for your organization after the connection is made.
+
+1. Once you have successfully set up the connection to Azure AD, you can configure which AD features you want to enable in Evercate.
+1. Navigate to **Settings -> Advanced -> Connect Azure AD**. There you'll see the token you need to enable provisioning (enabled from Azure AD), and you can tick the box to allow single sign-on for your Evercate account.
+1. Copy and save the token. This value will be entered in the **Secret Token** field in the Provisioning tab of your Evercate application in the Azure portal.
+
+## Step 3. Add Evercate from the Azure AD application gallery
+
+Add Evercate from the Azure AD application gallery to start managing provisioning to Evercate. If you have previously set up Evercate for SSO, you can use the same application. However, it's recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application, based on attributes of the user or group, or both. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* When assigning users and groups to Evercate, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add more roles.
+
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
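When a role beyond Default Access is needed, it's defined in the application manifest's `appRoles` collection. The sketch below builds a minimal, hypothetical entry; the field names follow the manifest schema, but the display name, description, `value`, and generated GUID are placeholders you choose yourself.

```python
import json
import uuid

# Hypothetical appRoles entry for an Azure AD application manifest.
# displayName, description, value, and the generated id are placeholders.
app_role = {
    "allowedMemberTypes": ["User"],
    "description": "Users provisioned to the application",
    "displayName": "Provisioned User",
    "id": str(uuid.uuid4()),  # any unique GUID
    "isEnabled": True,
    "value": "ProvisionedUser",
}
manifest_fragment = {"appRoles": [app_role]}
print(json.dumps(manifest_fragment, indent=2))
```

Users or groups assigned this role (rather than Default Access) are then picked up by the provisioning service.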
++
+## Step 5. Configure automatic user provisioning to Evercate
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and groups in Evercate based on user and/or group assignments in Azure AD.
+
+### To configure automatic user provisioning for Evercate in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Enterprise applications blade](common/enterprise-applications.png)
+
+1. In the applications list, select **Evercate**.
+
+ ![The Evercate link in the Applications list](common/all-applications.png)
+
+1. Select the **Provisioning** tab.
+
+ ![Provisioning tab](common/provisioning.png)
+
+1. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Provisioning tab automatic](common/provisioning-automatic.png)
+
+1. Under the **Admin Credentials** section, input your Evercate Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Evercate. If the connection fails, ensure your Evercate account has Admin permissions and try again.
+
+ ![Token](common/provisioning-testconnection-tenanturltoken.png)
+
+1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Notification Email](common/provisioning-notification-email.png)
+
+1. Select **Save**.
+
+1. In the **Mappings** section, select **Synchronize Azure Active Directory Users to Evercate**.
+
+1. Review the user attributes that are synchronized from Azure AD to Evercate in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Evercate for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you'll need to ensure that the Evercate API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by Evercate|
+ |||||
+ |userName|String|&check;|&check;|
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:manager|String|||
+ |active|Boolean|||
+ |displayName|String||&check;|
+ |emails[type eq "work"].value|String|||
+ |name.givenName|String|||
+ |name.familyName|String|||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:department|String|||
++
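As a concrete illustration of the user mapping above, a SCIM 2.0 payload carrying these attributes might look like the following. All values are placeholders; the enterprise-extension `manager` is shown in its SCIM complex form, although the mapping lists the mapped value as a string.

```python
import json

# Illustrative SCIM 2.0 user matching the attribute mapping above.
# Every value here is a placeholder, not real data.
ENTERPRISE = "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User"

user = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User", ENTERPRISE],
    "userName": "b.simon@contoso.com",   # matching attribute
    "active": True,
    "displayName": "B. Simon",           # required by Evercate
    "name": {"givenName": "B.", "familyName": "Simon"},
    "emails": [{"type": "work", "value": "b.simon@contoso.com"}],
    ENTERPRISE: {
        "department": "Sales",
        "manager": {"value": "a.manager@contoso.com"},
    },
}
print(json.dumps(user, indent=2))
```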
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Evercate**.
+
+1. Review the group attributes that are synchronized from Azure AD to Evercate in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Evercate for update operations. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by Evercate|
+ |||||
+ |displayName|String|&check;|&check;|
+ |members|Reference|||
+1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+1. To enable the Azure AD provisioning service for Evercate, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
+
+1. Define the users and groups that you would like to provision to Evercate by choosing the appropriate values in **Scope** in the **Settings** section.
+
+ ![Provisioning Scope](common/provisioning-scope.png)
+
+1. When you're ready to provision, click **Save**.
+
+ ![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to execute than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully.
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion.
+* If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+
+## More resources
+
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory New Relic Limited Release Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/new-relic-limited-release-tutorial.md
In this tutorial, you'll learn how to integrate New Relic with Azure Active Dire
To get started, you need: * An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-* A New Relic organization on the [New Relic One account/user model](https://docs.newrelic.com/docs/accounts/original-accounts-billing/original-product-based-pricing/overview-changes-pricing-user-model/#user-models) and on either Pro or Enterprise edition. For more information, see [New Relic requirements](https://docs.newrelic.com/docs/accounts/accounts-billing/new-relic-one-user-management/authentication-domains-saml-sso-scim-more).
+* A New Relic organization on the [New Relic One account/user model](https://docs.newrelic.com/docs/accounts/accounts-billing/new-relic-one-user-management/introduction-managing-users/#user-models) and on either Pro or Enterprise edition. For more information, see [New Relic requirements](https://docs.newrelic.com/docs/accounts/accounts-billing/new-relic-one-user-management/authentication-domains-saml-sso-scim-more).
## Scenario description
active-directory Swit Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/swit-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure Swit for automatic user provisioning with Azure Active Directory | Microsoft Docs'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to Swit.
+
+writer: twimmers
+
+ms.assetid: ce8e918b-3a0c-43af-8cb2-3c810143e484
++++ Last updated : 12/16/2021+++
+# Tutorial: Configure Swit for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both Swit and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Swit](https://swit.io) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
++
+## Capabilities Supported
+> [!div class="checklist"]
+> * Create users in Swit.
+> * Remove users in Swit when they do not require access anymore.
+> * Keep user attributes synchronized between Azure AD and Swit.
+> * Provision groups and group memberships in Swit.
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md).
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* A user account in Swit with Admin permissions.
++
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and Swit](../app-provisioning/customize-application-attributes.md).
+
+## Step 2. Configure Swit to support provisioning with Azure AD
+
+To configure Swit to support provisioning with Azure AD, send an email to `help@swit.io`.
+
+## Step 3. Add Swit from the Azure AD application gallery
++
+Add Swit from the Azure AD application gallery to start managing provisioning to Swit. If you have previously set up Swit for SSO, you can use the same application. However, it is recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application, based on attributes of the user or group, or both. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* When assigning users and groups to Swit, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add more roles.
+
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control it by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
++
+## Step 5. Configure automatic user provisioning to Swit
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in Swit based on user and/or group assignments in Azure AD.
+
+### To configure automatic user provisioning for Swit in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Enterprise applications blade](common/enterprise-applications.png)
+
+1. In the applications list, select **Swit**.
+
+ ![The Swit link in the Applications list](common/all-applications.png)
+
+1. Select the **Provisioning** tab.
+
+ ![Provisioning tab](common/provisioning.png)
+
+1. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Provisioning tab automatic](common/provisioning-automatic.png)
+
+1. In the **Admin Credentials** section, click **Authorize** and enter your Swit account's Admin credentials. Click **Test Connection** to ensure Azure AD can connect to Swit. If the connection fails, ensure your Swit account has Admin permissions and try again.
+
+ ![Token](media/swit-provisioning-tutorial/swit-authorize.png)
+1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Notification Email](common/provisioning-notification-email.png)
+
+1. Select **Save**.
+
+1. In the **Mappings** section, select **Synchronize Azure Active Directory Users to Swit**.
+
+1. Review the user attributes that are synchronized from Azure AD to Swit in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Swit for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the Swit API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by Swit|
+ |---|---|---|---|
+ |userName|String|&check;|&check;|
+ |active|Boolean||&check;|
+ |phoneNumbers[type eq "mobile"].value|String|||
+ |phoneNumbers[type eq "work"].value|String|||
+ |displayName|String||&check;|
+ |externalId|String||&check;|
+ |preferredLanguage|String|||
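+For illustration only, the mapped attributes correspond to fields in the SCIM user payload that the provisioning service sends to the app. A hypothetical provisioned user (all values made up), following the standard SCIM core user schema, might look like:
+
+```json
+{
+  "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
+  "userName": "alice@contoso.com",
+  "active": true,
+  "displayName": "Alice Example",
+  "externalId": "alice@contoso.com",
+  "preferredLanguage": "en-US",
+  "phoneNumbers": [
+    { "type": "work", "value": "+1 555 0100" },
+    { "type": "mobile", "value": "+1 555 0101" }
+  ]
+}
+```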
+
+
+1. In the **Mappings** section, select **Synchronize Azure Active Directory Groups to Swit**.
+
+1. Review the group attributes that are synchronized from Azure AD to Swit in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Swit for update operations. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by Swit|
+ |---|---|---|---|
+ |displayName|String|&check;|&check;|
+ |externalId|String||&check;|
+ |members|Reference|||
+
+1. To configure scoping filters, see the instructions in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+1. To enable the Azure AD provisioning service for Swit, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
+
+1. Define the users and/or groups that you would like to provision to Swit by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Provisioning Scope](common/provisioning-scope.png)
+
+1. When you are ready to provision, click **Save**.
+
+ ![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
+* If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
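+If you prefer scripting your checks, the provisioning logs are also exposed through Microsoft Graph. The following is a sketch only: it assumes your account has the `AuditLog.Read.All` permission and that the endpoint matches the current Graph reference, so verify both before relying on it.
+
+```azurecli
+# Lists the 10 most recent provisioning events for the signed-in tenant.
+az rest --method get --url "https://graph.microsoft.com/v1.0/auditLogs/provisioning?\$top=10"
+```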
+
+## More resources
+
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
aks Use Group Managed Service Accounts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/use-group-managed-service-accounts.md
Title: Enable Group Managed Service Accounts (GMSA) for you Windows Server nodes on your Azure Kubernetes Service (AKS) cluster (Preview)
-description: Learn how to enable Group Managed Service Accounts (GMSA) for you Windows Server nodes on your Azure Kubernetes Service (AKS) cluster for securing your pods.
+ Title: Enable Group Managed Service Accounts (GMSA) for your Windows Server nodes on your Azure Kubernetes Service (AKS) cluster (Preview)
+description: Learn how to enable Group Managed Service Accounts (GMSA) for your Windows Server nodes on your Azure Kubernetes Service (AKS) cluster for securing your pods.
Last updated 11/01/2021
-# Enable Group Managed Service Accounts (GMSA) for you Windows Server nodes on your Azure Kubernetes Service (AKS) cluster (Preview)
+# Enable Group Managed Service Accounts (GMSA) for your Windows Server nodes on your Azure Kubernetes Service (AKS) cluster (Preview)
[Group Managed Service Accounts (GMSA)][gmsa-overview] is a managed domain account for multiple servers that provides automatic password management, simplified service principal name (SPN) management and the ability to delegate the management to other administrators. AKS provides the ability to enable GMSA on your Windows Server nodes, which allows containers running on Windows Server nodes to integrate with and be managed by GMSA.
After running `kubectl get pods --watch` and waiting several minutes, if your po
[az-provider-register]: /cli/azure/provider#az_provider_register [gmsa-getting-started]: /windows-server/security/group-managed-service-accounts/getting-started-with-group-managed-service-accounts [gmsa-overview]: /windows-server/security/group-managed-service-accounts/group-managed-service-accounts-overview
-[rdp]: rdp.md
+[rdp]: rdp.md
api-management Api Management Howto App Insights https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-howto-app-insights.md
You need an Azure API Management instance. [Create one](get-started-create-servi
To use Application Insights, [create an instance of the Application Insights service](../azure-monitor/app/create-new-resource.md). To create an instance using the Azure portal, see [Workspace-based Application Insights resources](../azure-monitor/app/create-workspace-resource.md).
+> [!NOTE]
+> The Application Insights resource **can be** in a different subscription or even a different tenant than the API Management resource.
+ ## Create a connection between Application Insights and API Management
+> [!NOTE]
+> If your Application Insights resource is in a different tenant, then you will have to create the logger using the [REST API](/rest/api/apimanagement/current-ga/logger/create-or-update).
+ 1. Navigate to your **Azure API Management service instance** in the **Azure portal**. 1. Select **Application Insights** from the menu on the left. 1. Select **+ Add**.
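+When you need to create the logger through the REST API (for example, for a cross-tenant Application Insights resource), a hedged sketch using `az rest` might look like the following. The resource names, instrumentation key, and `api-version` are placeholders and assumptions; confirm the exact contract in the [Logger](/rest/api/apimanagement/current-ga/logger) reference.
+
+```azurecli
+# Creates (or updates) an Application Insights logger on an APIM instance.
+az rest --method put \
+  --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ApiManagement/service/<apim-service>/loggers/<logger-id>?api-version=2021-08-01" \
+  --body '{
+    "properties": {
+      "loggerType": "applicationInsights",
+      "description": "Application Insights logger",
+      "credentials": { "instrumentationKey": "<instrumentation-key>" }
+    }
+  }'
+```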
To improve performance issues, skip:
## Next steps + Learn more about [Azure Application Insights](/azure/application-insights/).
-+ Consider [logging with Azure Event Hubs](api-management-howto-log-event-hubs.md).
++ Consider [logging with Azure Event Hubs](api-management-howto-log-event-hubs.md).
api-management Api Management Howto Log Event Hubs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-howto-log-event-hubs.md
This article describes how to log API Management events using Azure Event Hubs.
For detailed steps on how to create an event hub and get connection strings that you need to send and receive events to and from the Event Hub, see [Create an Event Hubs namespace and an event hub using the Azure portal](../event-hubs/event-hubs-create.md).
+> [!NOTE]
+> The Event Hub resource **can be** in a different subscription or even a different tenant than the API Management resource.
+ ## Create an API Management logger Now that you have an Event Hub, the next step is to configure a [Logger](/rest/api/apimanagement/current-ga/logger) in your API Management service so that it can log events to the Event Hub.
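+As a sketch, the logger entity you create for an event hub generally takes the following shape; the event hub name and connection string are placeholders, and you should confirm the exact contract in the [Logger](/rest/api/apimanagement/current-ga/logger) reference.
+
+```json
+{
+  "properties": {
+    "loggerType": "azureEventHub",
+    "description": "Event Hub logger",
+    "credentials": {
+      "name": "<event-hub-name>",
+      "connectionString": "Endpoint=sb://<namespace>.servicebus.windows.net/;SharedAccessKeyName=<key-name>;SharedAccessKey=<key>"
+    }
+  }
+}
+```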
app-service Configure Language Java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-language-java.md
To confirm that the datasource was added to the JBoss server, SSH into your weba
## Choosing a Java runtime version
-App Service allows users to choose the major version of the JVM, such as Java 8 or Java 11, and the minor version, such as 1.8.0_232 or 11.0.5. You can also choose to have the minor version automatically updated as new minor versions become available. In most cases, production sites should use pinned minor JVM versions. This will prevent unnanticipated outages during a minor version auto-update. All Java web apps use 64-bit JVMs, this is not configurable.
+App Service allows users to choose the major version of the JVM, such as Java 8 or Java 11, and the patch version, such as 1.8.0_232 or 11.0.5. You can also choose to have the patch version automatically updated as new patch versions become available. In most cases, production sites should use pinned patch JVM versions. This will prevent unanticipated outages during a patch version auto-update. All Java web apps use 64-bit JVMs; this is not configurable.
+
+If you are using Tomcat, you can choose to pin the patch version of Tomcat. On Windows, you can pin the patch versions of the JVM and Tomcat independently. On Linux, you can pin the patch version of Tomcat; the patch version of the JVM will also be pinned but is not separately configurable.
If you choose to pin the patch version, you will need to periodically update the JVM patch version on the site. To ensure that your application runs on the newer patch version, create a staging slot and increment the patch version on the staging site. Once you have confirmed the application runs correctly on the new patch version, you can swap the staging and production slots.
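As an illustrative sketch, pinning a specific Java and Tomcat patch version on a Windows app might look like the following; the resource names are placeholders, and the exact version strings available vary by platform and over time.

```azurecli
# Placeholders: <resource-group>, <app-name>. On Windows, the JVM patch
# version and the Tomcat patch version can be pinned independently.
az webapp config set --resource-group <resource-group> --name <app-name> \
    --java-version 11.0.5 --java-container Tomcat --java-container-version 9.0.41
```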
Product support for the [Microsoft Build of OpenJDK](/java/openjdk/download) is
Visit the [Azure for Java Developers](/java/azure/) center to find Azure quickstarts, tutorials, and Java reference documentation. - [App Service Linux FAQ](faq-app-service-linux.yml)-- [Environment variables and app settings reference](reference-app-settings.md)
+- [Environment variables and app settings reference](reference-app-settings.md)
app-service Overview Vnet Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/overview-vnet-integration.md
Because subnet size can't be changed after assignment, use a subnet that's large
When you want your apps in your plan to reach a virtual network that's already connected to by apps in another plan, select a different subnet than the one being used by the preexisting virtual network integration.
+You must have at least the following RBAC permissions on the subnet or at a higher level to configure regional virtual network integration through Azure portal, CLI or when setting the `virtualNetworkSubnetId` site property directly:
+
+| Action | Description |
+|-|-|
+| Microsoft.Network/virtualNetworks/read | Read the virtual network definition |
+| Microsoft.Network/virtualNetworks/subnets/read | Read a virtual network subnet definition |
+| Microsoft.Network/virtualNetworks/subnets/join/action | Joins a virtual network |
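+If you want to grant only these actions, a custom role definition could be sketched as follows; the role name and assignable scope are placeholders you would adapt before creating the role.
+
+```json
+{
+  "Name": "App Service VNet Integration Operator",
+  "Description": "Minimal permissions to join an app to a subnet.",
+  "Actions": [
+    "Microsoft.Network/virtualNetworks/read",
+    "Microsoft.Network/virtualNetworks/subnets/read",
+    "Microsoft.Network/virtualNetworks/subnets/join/action"
+  ],
+  "NotActions": [],
+  "AssignableScopes": [ "/subscriptions/<subscription-id>" ]
+}
+```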
+ ### Routes There are two types of routing to consider when you configure regional virtual network integration. Application routing defines what traffic is routed from your application and into the virtual network. Network routing is the ability to control how traffic is routed from your virtual network and out.
azure-arc What Is Azure Arc Enabled Postgres Hyperscale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/what-is-azure-arc-enabled-postgres-hyperscale.md
Microsoft offers Postgres database services in Azure in two ways:
- As a semi-managed service with Azure Arc as it is operated by customers or their partners/vendors ### In Azure PaaS
-**In [Azure PaaS](https://ms.portal.azure.com/#create/Microsoft.PostgreSQLServer)**, Microsoft offers several deployment options for Postgres as a managed service:
+**In [Azure PaaS](https://ms.portal.azure.com/#create/Microsoft.PostgreSQLServer)**, Microsoft offers several deployment options for PostgreSQL as a managed service:
:::row::: :::column:::
- Azure Database for Postgres Single server and Azure Database for Postgres Flexible server. These services are Microsoft managed single-node/single instance Postgres form factor. Azure Database for Postgres Flexible server is the most recent evolution of this service.
+ Azure Database for PostgreSQL Single server and Azure Database for PostgreSQL Flexible server. These services are Microsoft-managed, single-node/single-instance PostgreSQL form factors. Azure Database for PostgreSQL Flexible server is the most recent evolution of this service.
:::column-end::: :::column::: :::image type="content" source="media/postgres-hyperscale/azure-database-for-postgresql-bigger.png" alt-text="Azure Database for PostgreSQL":::
Microsoft offers Postgres database services in Azure in two ways:
:::row::: :::column:::
- **With Azure Arc**, Microsoft offers **a single** Postgres product/service: **Azure Arc-enabled PostgreSQL Hyperscale**. With Azure Arc, we simplified the product definition and the customer experience for Postgres compared to Azure PaaS by providing **one Postgres product** that is capable of:
- - deploying single-node/single-instance Postgres like Azure Database for Postgres Single/Flexible server,
+ **With Azure Arc**, Microsoft offers **a single** Postgres product/service: **Azure Arc-enabled PostgreSQL Hyperscale**. With Azure Arc, we simplified the product definition and the customer experience for PostgreSQL compared to Azure PaaS by providing **one Postgres product** that is capable of:
+ - deploying single-node/single-instance Postgres like Azure Database for PostgreSQL Single/Flexible server,
 - deploying multi-node/multi-instance Postgres like Azure Database for PostgreSQL Hyperscale (Citus), - great flexibility by allowing customers to morph their Postgres deployments from one node to multiple nodes and vice versa if they so desire. They are able to do so with no data migration and with a simple experience. :::column-end:::
azure-arc Onboard Configuration Manager Custom Task https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/onboard-configuration-manager-custom-task.md
+
+ Title: Connect machines at scale with a Configuration Manager custom task sequence
+description: You can use a custom task sequence that can deploy the Connected Machine Agent to onboard a collection of devices to Azure Arc-enabled servers.
Last updated : 01/20/2022+++
+# Connect machines at scale with a Configuration Manager custom task sequence
+
+Microsoft Endpoint Configuration Manager facilitates comprehensive management of servers supporting the secure and scalable deployment of applications, software updates, and operating systems. Configuration Manager offers the custom task sequence as a flexible paradigm for application deployment.
+
+You can use a custom task sequence that deploys the Connected Machine Agent to onboard a collection of devices to Azure Arc-enabled servers.
+
+Before you get started, be sure to review the [prerequisites](agent-overview.md#prerequisites) and verify that your subscription and resources meet the requirements. For information about supported regions and other related considerations, see [supported Azure regions](overview.md#supported-regions). Also review our [at-scale planning guide](plan-at-scale-deployment.md) to understand the design and deployment criteria, as well as our management and monitoring recommendations.
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+## Generate a service principal
+
+Follow the steps to [create a service principal for onboarding at scale](onboard-service-principal.md#create-a-service-principal-for-onboarding-at-scale). Assign the **Azure Connected Machine Onboarding** role to your service principal, and limit the scope of the role to the target Azure landing zone. Make a note of the Service Principal ID and Service Principal Secret, as you'll need these values later.
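+The linked steps can be sketched with the Azure CLI as follows; the service principal name and the scope are placeholders to adapt. In the command's output, `appId` corresponds to the Service Principal ID and `password` to the Service Principal Secret noted above.
+
+```azurecli
+# Creates a service principal limited to the Arc onboarding role,
+# scoped to a single resource group (placeholder values).
+az ad sp create-for-rbac --name "Arc-Onboarding" \
+    --role "Azure Connected Machine Onboarding" \
+    --scopes "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
+```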
+
+## Download the agent and create the application
+
+First, download the Azure Connected Machine agent package (AzureConnectedMachineAgent.msi) for Windows from the [Microsoft Download Center](https://aka.ms/AzureConnectedMachineAgent). The Azure Connected Machine agent for Windows can be [upgraded to the latest release manually or automatically](manage-agent.md), depending on your requirements. The .msi file must be saved to a server share for the custom task sequence.
+
+Next, [create an application in Configuration Manager](/mem/configmgr/apps/get-started/create-and-deploy-an-application) using the downloaded Azure Connected Machine agent package:
+
+1. In the **Configuration Manager** console, select **Software Library > Application Management > Applications**.
+1. On the **Home** tab, in the **Create** group, select **Create Application**.
+1. On the **General** page of the Create Application Wizard, select **Automatically detect information about this application from installation files**. This action pre-populates some fields in the wizard with details extracted from the installation .msi file. Then, specify the following information:
+ 1. **Type**: Select **Windows Installer (*.msi file)**
+ 1. **Location**: Select **Browse** to choose the location where you saved the installation file **AzureConnectedMachineAgent.msi**.
+ :::image type="content" source="media/onboard-configuration-manager-custom-task/configuration-manager-create-application.png" alt-text="Screenshot of the Create Application Wizard in Configuration Manager.":::
+1. Select **Next**, and on the **Import Information** page, select **Next** again.
+1. On the **General Information** page, you can supply further information about the application to help you sort and locate it in the Configuration Manager console. Once complete, select **Next**.
+1. On the **Installation program** page, select **Next**.
+1. On the **Summary** page, confirm your application settings and then complete the wizard.
+
+You have finished creating the application. To find it, in the **Software Library** workspace, expand **Application Management**, and then choose **Applications**.
+
+## Create a task sequence
+
+The next step is to define a custom task sequence that installs the Azure Connected Machine Agent on a machine, then connects it to Azure Arc.
+
+1. In the Configuration Manager console, go to the **Software Library** workspace, expand **Operating Systems**, and then select the **Task Sequences** node.
+1. On the **Home** tab of the ribbon, in the **Create** group, select **Create Task Sequence**. This will launch the Create Task Sequence Wizard.
+1. On the **Create a New Task Sequence** page, select **Create a new custom task sequence**.
+1. On the **Task Sequence Information** page, specify a name for the task sequence and optionally a description of the task sequence.
+
+ :::image type="content" source="media/onboard-configuration-manager-custom-task/configuration-manager-create-task-sequence.png" alt-text="Screenshot of the Create Task Sequence Wizard in Configuration Manager.":::
+
+After you complete the Create Task Sequence Wizard, Configuration Manager adds the custom task sequence to the **Task Sequences** node. You can now edit this task sequence to add steps to it.
+
+1. In the Configuration Manager console, go to the **Software Library** workspace, expand **Operating Systems**, and then select the **Task Sequences** node.
+1. In the **Task Sequence** list, select the task sequence that you want to edit.
+1. Define **Install Application** as the first task in the task sequence.
+    1. On the **Home** tab of the ribbon, in the **Task Sequence** group, select **Edit**. Then, select **Add**, select **Software**, and select **Install Application**.
+ 1. Set the name to `Install Connected Machine Agent`.
+ 1. Select the Azure Connected Machine Agent.
+ :::image type="content" source="media/onboard-configuration-manager-custom-task/configuration-manager-edit-task-sequence.png" alt-text="Screenshot showing a task sequence being edited in Configuration Manager.":::
+1. Define **Run PowerShell Script** as the second task in the task sequence.
+ 1. Select **Add**, select **General**, and select **Run PowerShell Script**.
+ 1. Set the name to `Connect to Azure Arc`.
+ 1. Select **Enter a PowerShell script**.
+ 1. Select **Add Script**, and then edit the script to connect to Arc as shown below. Note that this template script has placeholder values for the service principal, tenant, subscription, resource group, and location, which you should update to the appropriate values.
+
+ ```azurepowershell
+ & "$env:ProgramW6432\AzureConnectedMachineAgent\azcmagent.exe" connect --service-principal-id <serviceprincipalAppID> --service-principal-secret <serviceprincipalPassword> --tenant-id <tenantID> --subscription-id <subscriptionID> --resource-group <ResourceGroupName> --location <resourceLocation>
+ ```
+
+ :::image type="content" source="media/onboard-configuration-manager-custom-task/configuration-manager-connect-to-azure-arc.png" alt-text="Screenshot showing a task sequence being edited to run a PowerShell script.":::
+
+1. Select **OK** to save the changes to your custom task sequence.
+
+## Deploy the custom task sequence and verify connection to Azure Arc
+
+Follow the steps outlined in *Deploy a task sequence* to deploy the task sequence to the target collection of devices. Choose the following parameter settings.
+
+- Under **Deployment Settings**, set **Purpose** as **Required** so that Configuration Manager automatically runs the task sequence according to the configured schedule. If **Purpose** is set to **Available** instead, the task sequence will need to be installed on demand from Software Center.
+- Under **Scheduling**, set **Rerun Behavior** to **Rerun if failed previous attempt**.
+
+## Verify successful connection to Azure Arc
+
+To verify that the machines have been successfully connected to Azure Arc, verify that they are visible in the [Azure portal](https://aka.ms/hybridmachineportal).
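+You can also spot-check an individual machine locally; the agent's `show` command prints its connection status and resource details. This follows the same invocation pattern as the task sequence script above.
+
+```azurepowershell
+# Displays the local Connected Machine agent's status and Azure resource info.
+& "$env:ProgramW6432\AzureConnectedMachineAgent\azcmagent.exe" show
+```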
+## Next steps
+
+- Review the [Planning and deployment guide](plan-at-scale-deployment.md) to plan for deploying Azure Arc-enabled servers at any scale and implement centralized management and monitoring.
+- Review connection troubleshooting information in the [Troubleshoot Connected Machine agent guide](troubleshoot-agent-onboard.md).
+- Learn how to manage your machine using [Azure Policy](/azure/governance/policy/overview) for such things as VM [guest configuration](/azure/governance/policy/concepts/guest-configuration), verifying that the machine is reporting to the expected Log Analytics workspace, enabling monitoring with [VM insights](/azure/azure-monitor/vm/vminsights-enable-policy), and much more.
azure-arc Onboard Configuration Manager Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/onboard-configuration-manager-powershell.md
+
+ Title: Connect machines at scale by running PowerShell scripts with Configuration Manager
+description: You can use Configuration Manager to run a PowerShell script that automates at-scale onboarding to Azure Arc-enabled servers.
Last updated : 01/20/2022+++
+# Connect machines at scale by running PowerShell scripts with Configuration Manager
+
+Microsoft Endpoint Configuration Manager facilitates comprehensive management of servers supporting the secure and scalable deployment of applications, software updates, and operating systems. Configuration Manager has an integrated ability to run PowerShell scripts.
+
+You can use Configuration Manager to run a PowerShell script that automates at-scale onboarding to Azure Arc-enabled servers.
+
+Before you get started, be sure to review the [prerequisites](agent-overview.md#prerequisites) and verify that your subscription and resources meet the requirements. For information about supported regions and other related considerations, see [supported Azure regions](overview.md#supported-regions). Also review our [at-scale planning guide](plan-at-scale-deployment.md) to understand the design and deployment criteria, as well as our management and monitoring recommendations.
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+## Prerequisites for Configuration Manager to run PowerShell scripts
+
+The following prerequisites must be met to use PowerShell scripts in Configuration Manager:
+
+- The Configuration Manager version must be 1706 or higher.
+- To import and author scripts, your Configuration Manager account must have **Create** permissions for **SMS Scripts**.
+- To approve or deny scripts, your Configuration Manager account must have **Approve** permissions for **SMS Scripts**.
+- To run scripts, your Configuration Manager account must have **Run Script** permissions for **Collections**.
+
+## Generate a service principal and prepare the installation script
+
+Before you can run the script to connect your machines, you'll need to do the following:
+
+1. Follow the steps to [create a service principal for onboarding at scale](onboard-service-principal.md#create-a-service-principal-for-onboarding-at-scale). Assign the **Azure Connected Machine Onboarding** role to your service principal, and limit the scope of the role to the target Azure landing zone. Make a note of the Service Principal Secret, as you'll need this value later.
+
+2. Follow the steps to [generate the installation script from the Azure portal](onboard-service-principal.md#generate-the-installation-script-from-the-azure-portal). While you will use this installation script later, do not run the script in PowerShell.
+
+## Create the script in Configuration Manager
+
+Before you begin, check in **Configuration Manager Default Settings** that the PowerShell execution policy under **Computer Agent** is set to **Bypass**.
+
+1. In the Configuration Manager console, select **Software Library**.
+1. In the **Software Library** workspace, select **Scripts**.
+1. On the **Home** tab, in the **Create** group, select **Create Script**.
+1. On the **Script** page of the **Create Script** wizard, configure the following settings:
+    1. **Script Name** – Onboard Azure Arc
+ 1. **Script language** - PowerShell
+    1. **Import** – Import the installation script that you generated in the Azure portal.
+ :::image type="content" source="media/onboard-configuration-manager-powershell/configuration-manager-create-script.png" alt-text="Screenshot of the Create Script screen in Configuration Manager.":::
+1. In the Script Wizard, paste the script generated from Azure portal. Edit this pasted script with the Service Principal Secret for the service principal you generated.
+1. Complete the wizard. The new script is displayed in the **Script** list with a status of **Waiting for approval**.
+
+## Approve the script in Configuration Manager
+
+With an account that has **Approve** permissions for **SMS Scripts**, do the following:
+
+1. In the Configuration Manager console, select **Software Library**.
+1. In the **Software Library** workspace, select **Scripts**.
+1. In the **Script** list, choose the script you want to approve or deny. Then, on the **Home** tab, in the **Script** group, select **Approve/Deny**.
+1. In the **Approve or deny script** dialog box, select **Approve** for the script.
+ :::image type="content" source="media/onboard-configuration-manager-powershell/configuration-manager-approve-script.png" alt-text="Screenshot of the Approve or deny script screen in Configuration Manager.":::
+1. Complete the wizard, then confirm that the new script is shown as **Approved** in the **Script** list.
+
+## Run the script in Configuration Manager
+
+Select a collection of targets for your script by doing the following:
+
+1. In the Configuration Manager console, select **Assets and Compliance**.
+1. In the **Assets and Compliance** workspace, select **Device Collections**.
+1. In the **Device Collections** list, select the collection of devices on which you want to run the script.
+1. Select a collection of your choice, and then select **Run Script**.
+1. On the **Script** page of the **Run Script** wizard, choose the script you authored and approved.
+1. Click **Next**, and then complete the wizard.
+
+## Verify successful connection to Azure Arc
+
+The script status monitoring will indicate whether the script has successfully installed the Connected Machine Agent to the collection of devices. Successfully onboarded Azure Arc-enabled servers will also be visible in the [Azure portal](https://aka.ms/hybridmachineportal).
+## Next steps
+
+- Review the [Planning and deployment guide](plan-at-scale-deployment.md) to plan for deploying Azure Arc-enabled servers at any scale and implement centralized management and monitoring.
+- Review connection troubleshooting information in the [Troubleshoot Connected Machine agent guide](troubleshoot-agent-onboard.md).
+- Learn how to manage your machine using [Azure Policy](/azure/governance/policy/overview) for such things as VM [guest configuration](/azure/governance/policy/concepts/guest-configuration), verifying that the machine is reporting to the expected Log Analytics workspace, enabling monitoring with [VM insights](/azure/azure-monitor/vm/vminsights-enable-policy), and much more.
azure-cache-for-redis Cache Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-managed-identity.md
+
+ Title: Managed Identity
+
+description: Learn how to use managed identity with Azure Cache for Redis
+++ Last updated : 01/21/2022+++
+# Managed identity with Azure Cache for Redis (Preview)
+
+[Managed identities](/azure/active-directory/managed-identities-azure-resources/overview) are a common tool used in Azure to help developers minimize the burden of managing secrets and login information. Managed identities are useful when Azure services connect to each other. Instead of managing authorization between each service, [Azure Active Directory](/azure/active-directory/fundamentals/active-directory-whatis) (Azure AD) can be used to provide a managed identity that makes the authentication process more streamlined and secure.
+
+## Managed identity with storage accounts
+
+Azure Cache for Redis can use a managed identity to connect with a storage account, useful in two scenarios:
+
+- [Data Persistence](cache-how-to-premium-persistence.md)--scheduled backups of data in your cache through an RDB or AOF file.
+
+- [Import or Export](cache-how-to-import-export-data.md)--saving snapshots of cache data or importing data from a saved file.
+
+Managed identity lets you simplify the process of securely connecting to your chosen storage account for these tasks.
+
+ > [!NOTE]
+ > This functionality does not yet support authentication for connecting to a cache instance.
+ >
+
+Azure Cache for Redis supports [both types of managed identity](/azure/active-directory/managed-identities-azure-resources/overview):
+
+- **System-assigned identity** is specific to the resource. In this case, the cache is the resource. When the cache is deleted, the identity is deleted.
+
+- **User-assigned identity** is a standalone Azure resource that isn't tied to the cache. It can be assigned to any resource that supports managed identity and remains even when you delete the cache.
+
+Each type of managed identity has advantages, but in Azure Cache for Redis, the functionality is the same.
+
+### Enable managed identity
+
+Managed identity can be enabled either when you create a cache instance or after the cache has been created. During the creation of a cache, only a system-assigned identity can be assigned. Either identity type can be added to an existing cache.
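+
+For example, a Premium cache can be created with a system-assigned identity in one step by using the Azure CLI. This is a sketch with placeholder names (`MyCacheName`, `MyResourceGroup`); the `--mi-system-assigned` parameter assumes a recent Azure CLI version:
+
+```azurecli-interactive
+az redis create \
+    --name MyCacheName \
+    --resource-group MyResourceGroup \
+    --location eastus \
+    --sku Premium \
+    --vm-size p1 \
+    --mi-system-assigned
+```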
+
+### Prerequisites and limitations
+
+To use managed identity, you must have a premium-tier cache.
+
+## Create a new cache with managed identity using the portal
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. Create a new Azure Cache for Redis resource with a **Cache type** from any of the Premium tiers. Complete the **Basics** tab with all the required information.
+ > [!NOTE]
+ > Managed identity functionality is only available in the Premium tier.
+ >
+ :::image type="content" source="media/cache-managed-identity/basics.png" alt-text="create a premium azure cache":::
+
+1. Select the **Advanced** tab. Then, scroll down to **(PREVIEW) System assigned managed identity** and select **On**.
+
+ :::image type="content" source="media/cache-managed-identity/system-assigned.png" alt-text="Advanced page of the form":::
+
+1. Complete the creation process. Once the cache has been created and deployed, open it, and select the **(PREVIEW) Identity** tab under the **Settings** section on the left.
+
+ :::image type="content" source="media/cache-managed-identity/identity-resource.png" alt-text="(Preview) Identity in the Resource menu":::
+
+1. You see that a system-assigned **object ID** has been assigned to the cache **Identity**.
+
+ :::image type="content" source="media/cache-managed-identity/user-assigned.png" alt-text="System assigned resource settings for identity":::
+
+## Add system assigned identity to an existing cache
+
+1. Navigate to your Azure Cache for Redis resource from the Azure portal. Select **(PREVIEW) Identity** from the Resource menu on the left.
+ > [!NOTE]
+ > Managed identity functionality is only available in the Premium tier.
+ >
+
+1. To enable a system-assigned identity, select the **System assigned (preview)** tab, and select **On** under **Status**. Select **Save** to confirm.
+
+ :::image type="content" source="media/cache-managed-identity/identity-save.png" alt-text="System assigned identity status is on":::
+
+1. A dialog appears, notifying you that your cache will be registered with Azure Active Directory and that it can be granted permissions to access resources protected by Azure AD. Select **Yes**.
+
+1. You see an **Object (principal) ID**, indicating that the identity has been assigned.
+
+ :::image type="content" source="media/cache-managed-identity/user-assigned.png" alt-text="new Object principal ID shown for system assigned identity":::
+
+## Add a user assigned identity to an existing cache
+
+1. Navigate to your Azure Cache for Redis resource from the Azure portal. Select **(PREVIEW) Identity** from the Resource menu on the left.
+ > [!NOTE]
+ > Managed identity functionality is only available in the Premium tier.
+ >
+
+1. To enable a user-assigned identity, select the **User assigned (preview)** tab, and then select **Add**.
+
+ :::image type="content" source="media/cache-managed-identity/identity-add.png" alt-text="User assigned identity status is on":::
+
+1. A sidebar appears where you can choose any user-assigned identity available in your subscription. Choose an identity and select **Add**. For more information on user-assigned managed identities, see [manage user-assigned identities](/azure/active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities).
+ >[!Note]
+ >You need to [create a user assigned identity](/azure/active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities?pivots=identity-mi-methods-azp) in advance of this step.
+ >
+ :::image type="content" source="media/cache-managed-identity/choose-identity.png" alt-text="new Object principal ID shown for user assigned identity":::
+
+1. You see the user-assigned identity listed in the **User assigned (preview)** pane.
+
+ :::image type="content" source="media/cache-managed-identity/identity-list.png" alt-text="list of identity names":::
+
+## Enable managed identity using the Azure CLI
+
+Use the Azure CLI to create a new cache with managed identity or to update an existing cache to use managed identity. For more information, see [az redis create](/cli/azure/redis?view=azure-cli-latest) or [az redis identity](/cli/azure/redis/identity?view=azure-cli-latest).
+
+For example, to update a cache to use a system-assigned managed identity, use the following CLI command:
+
+```azurecli-interactive
+az redis identity assign --mi-system-assigned --name MyCacheName --resource-group MyResourceGroup
+```
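+
+To attach a user-assigned identity instead, pass the identity's resource ID. This is a sketch with placeholder names; it assumes the identity was created in advance:
+
+```azurecli-interactive
+az redis identity assign \
+    --mi-user-assigned "/subscriptions/<subscription-id>/resourceGroups/MyResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/MyIdentity" \
+    --name MyCacheName \
+    --resource-group MyResourceGroup
+
+# Verify which identities are currently assigned to the cache
+az redis identity show --name MyCacheName --resource-group MyResourceGroup
+```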
+
+## Enable managed identity using Azure PowerShell
+
+Use Azure PowerShell to create a new cache with managed identity or to update an existing cache to use managed identity. For more information, see [New-AzRedisCache](/powershell/module/az.rediscache/new-azrediscache?view=azps-7.1.0) or [Set-AzRedisCache](/powershell/module/az.rediscache/set-azrediscache?view=azps-7.1.0).
+
+For example, to update a cache to use a system-assigned managed identity, use the following PowerShell command:
+
+```powershell-interactive
+Set-AzRedisCache -ResourceGroupName "MyGroup" -Name "MyCache" -IdentityType "SystemAssigned"
+```
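+
+To use a user-assigned identity from PowerShell, pass the identity's resource ID as well. This is a sketch with placeholder names; the `-UserAssignedIdentity` parameter assumes a recent Az.RedisCache module:
+
+```powershell-interactive
+Set-AzRedisCache -ResourceGroupName "MyGroup" -Name "MyCache" -IdentityType "UserAssigned" -UserAssignedIdentity @("/subscriptions/<subscription-id>/resourceGroups/MyGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/MyIdentity")
+```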
+
+## Configure storage account to use managed identity
+
+> [!IMPORTANT]
+> Managed identity must be configured in the storage account before Azure Cache for Redis can access the account for persistence or import/export functionality. If this step is not done correctly, you'll see errors, or no data will be written.
+
+1. Create a new storage account or open an existing storage account that you would like to connect to your cache instance.
+
+2. Open **Access control (IAM)** from the Resource menu. Then, select **Add**, and then select **Add role assignment**.
+
+ :::image type="content" source="media/cache-managed-identity/demo-storage.png" alt-text="access control (iam) settings":::
+
+3. Search for **Storage Blob Data Contributor** on the Role pane. Select it, and then select **Next**.
+
+ :::image type="content" source="media/cache-managed-identity/role-assignment.png" alt-text="add role assignment form with list of roles":::
+
+4. Select the **Members** tab. Under **Assign access to**, select **Managed Identity**, and then select **Select members**. A sidebar appears on the right.
+
+ :::image type="content" source="media/cache-managed-identity/select-members.png" alt-text="add role assignment form with members pane":::
+
+5. Use the drop-down under **Managed Identity** to choose either a **User-assigned managed identity** or a **System-assigned managed identity**. If you have many managed identities, you can search by name. Choose the managed identities you want, and then select **Select**. Then, select **Review + assign** to confirm.
+
+ :::image type="content" source="media/cache-managed-identity/review-assign.png" alt-text="select managed identities form pop up":::
+
+6. You can confirm that the identity has been assigned successfully by checking your storage account's role assignments under **Storage Blob Data Contributor**.
+
+    :::image type="content" source="media/cache-managed-identity/blob-data.png" alt-text="Storage Blob Data Contributor list":::
+
+> [!NOTE]
+> Adding an Azure Cache for Redis instance as a storage blob data contributor through system-assigned identity will conveniently add the cache instance to the [trusted services list](/azure/storage/common/storage-network-security?tabs=azure-portal), making firewall exceptions easier to implement.
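+
+The role assignment above can also be scripted with the Azure CLI. This is a sketch with placeholder names; it looks up the cache's system-assigned principal ID and grants it the **Storage Blob Data Contributor** role on the storage account:
+
+```azurecli-interactive
+# Get the cache's system-assigned principal ID
+PRINCIPAL_ID=$(az redis show --name MyCacheName --resource-group MyResourceGroup \
+    --query identity.principalId --output tsv)
+
+# Get the storage account's resource ID
+STORAGE_ID=$(az storage account show --name mystorageaccount --resource-group MyResourceGroup \
+    --query id --output tsv)
+
+# Grant the cache's identity access to blob data in the storage account
+az role assignment create \
+    --assignee-object-id "$PRINCIPAL_ID" \
+    --assignee-principal-type ServicePrincipal \
+    --role "Storage Blob Data Contributor" \
+    --scope "$STORAGE_ID"
+```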
+
+## Use managed identity to access a storage account
+
+### Use managed identity with data persistence
+
+1. Open the Azure Cache for Redis instance that has been assigned the Storage Blob Data Contributor role and go to **Data persistence** on the Resource menu.
+
+2. Change the **Authentication Method** to **(PREVIEW) Managed Identity** and select the storage account you configured above. Select **Save**.
+
+ :::image type="content" source="media/cache-managed-identity/data-persistence.png" alt-text="data persistence pane with authentication method selected":::
+
+ > [!IMPORTANT]
+ > The identity defaults to the system-assigned identity if it is enabled. Otherwise, the first listed user-assigned identity is used.
+ >
+
+3. Data persistence backups can now be saved to the storage account using managed identity authentication.
+
+ :::image type="content" source="media/cache-managed-identity/redis-persistence.png" alt-text="export data in resource menu":::
+
+### Use managed identity to import and export cache data
+
+1. Open your Azure Cache for Redis instance that has been assigned the Storage Blob Data Contributor role and go to the **Import** or **Export** tab under **Administration**.
+
+2. If importing data, choose the blob storage location that holds your chosen RDB file. If exporting data, type your desired blob name prefix and storage container. In both situations, you must use the storage account you've configured for managed identity access.
+
+ :::image type="content" source="media/cache-managed-identity/export-data.png" alt-text="export data from the resource menu":::
+
+3. Under **Authentication Method**, choose **(PREVIEW) Managed Identity** and select **Import** or **Export**, respectively.
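+
+An export can also be triggered from the Azure CLI. This is a sketch with placeholder names; the `--preferred-data-archive-auth-method` parameter is an assumption that may vary by CLI version:
+
+```azurecli-interactive
+az redis export \
+    --name MyCacheName \
+    --resource-group MyResourceGroup \
+    --prefix mybackup \
+    --container "https://mystorageaccount.blob.core.windows.net/mycontainer" \
+    --preferred-data-archive-auth-method ManagedIdentity
+```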
+
+> [!NOTE]
+> It will take a few minutes to import or export the data.
+>
+
+> [!IMPORTANT]
+> If you see an export or import failure, double-check that your storage account has been configured with your cache's system-assigned or user-assigned identity. The identity used defaults to the system-assigned identity if it is enabled. Otherwise, the first listed user-assigned identity is used.
+
+## Next steps
+
+- [Learn more](cache-overview.md#service-tiers) about Azure Cache for Redis features
+- [What are managed identities?](/azure/active-directory/managed-identities-azure-resources/overview)
azure-cache-for-redis Cache Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-whats-new.md
Previously updated : 10/11/2021 Last updated : 01/21/2022 # What's New in Azure Cache for Redis
+## January 2022
+
+### Support for managed identity in Azure Cache for Redis
+
+Azure Cache for Redis now supports authenticating storage account connections using managed identity. Identity is established through Azure Active Directory, and both system-assigned and user-assigned identities are supported. This allows the service to establish trusted access to storage for uses including data persistence and importing or exporting cache data.
+
+For more information, see [Managed identity with Azure Cache for Redis (Preview)](cache-managed-identity.md).
+ ## October 2021 ### Azure Cache for Redis 6.0 GA
azure-functions Functions Create Function Linux Custom Image https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-create-function-linux-custom-image.md
Title: Create Azure Functions on Linux using a custom image description: Learn how to create Azure Functions running on a custom Linux image. Previously updated : 12/2/2020 Last updated : 01/20/2021 zone_pivot_groups: programming-languages-set-functions-full
COPY --from=mcr.microsoft.com/dotnet/core/sdk:3.1 /usr/share/dotnet /usr/share/d
``` ::: zone-end Add a function to your project by using the following command, where the `--name` argument is the unique name of your function and the `--template` argument specifies the function's trigger. `func new` creates a C# code file in your project. ```console
azure-functions Functions Versions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-versions.md
Title: Azure Functions runtime versions overview
description: Azure Functions supports multiple versions of the runtime. Learn the differences between them and how to choose the one that's right for you. Previously updated : 11/18/2021 Last updated : 01/22/2022 zone_pivot_groups: programming-languages-set-functions
To update your app to Azure Functions 4.x, update your local installation of [Az
#### Azure
+A pre-upgrade validator is available to help identify potential issues when migrating a function app to 4.x. Before you migrate an existing app, follow these steps to run the validator:
+
+1. In the Azure portal, navigate to your function app.
+
+1. Open the *Diagnose and solve problems* blade.
+
+1. In *Search for common problems or tools*, enter and select **Functions 4.x Pre-Upgrade Validator**.
+ To migrate an app from 3.x to 4.x, set the `FUNCTIONS_EXTENSION_VERSION` application setting to `~4` with the following Azure CLI command: ```bash
azure-maps How To Manage Creator https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/how-to-manage-creator.md
Title: Manage Microsoft Azure Maps Creator
description: In this article, you'll learn how to manage Microsoft Azure Maps Creator. Previously updated : 11/11/2021 Last updated : 01/20/2022
# Manage Azure Maps Creator
-You can use Azure Maps Creator to create private indoor map data. Using the Azure Maps API and the Indoor Maps module, you can develop interactive and dynamic indoor map web applications. For pricing information see the *Creator* section in [Azure Maps pricing](https://aka.ms/CreatorPricing).
+You can use Azure Maps Creator to create private indoor map data. Using the Azure Maps API and the Indoor Maps module, you can develop interactive and dynamic indoor map web applications. For pricing information, see the *Creator* section in [Azure Maps pricing](https://aka.ms/CreatorPricing).
This article takes you through the steps to create and delete a Creator resource in an Azure Maps account.
This article takes you through the steps to create and delete a Creator resource
2. Navigate to the Azure portal menu. Select **All resources**, and then select your Azure Maps account.
- :::image type="content" border="true" source="./media/how-to-manage-creator/select-all-resources.png" alt-text="Select Azure Maps account":::
+ :::image type="content" border="true" source="./media/how-to-manage-creator/select-all-resources.png" alt-text="A screenshot of the Azure portal showing the All resources selected in the Azure Services section of the page.":::
-3. In the navigation pane, select **Creator overview**, and then select **Create**.
+3. In the navigation pane, select **Creator**, then select the **Create** button.
- :::image type="content" border="true" source="./media/how-to-manage-creator/creator-blade-settings.png" alt-text="Create Azure Maps Creator page":::
+ :::image type="content" border="true" source="./media/how-to-manage-creator/creator-blade-settings.png" alt-text="A screenshot of the Azure Maps Account page showing the Creator page with the Create button highlighted.":::
4. Enter the name, location, and map provisioning storage units for your Creator resource, then select **Review + create**.
- :::image type="content" source="./media/how-to-manage-creator/creator-creation-dialog.png" alt-text="Enter Creator account information page":::
+ :::image type="content" source="./media/how-to-manage-creator/creator-creation-dialog.png" alt-text="A screenshot of the Azure Maps Create a Creator resource page showing the Creator name, storage units and location fields with suggested values and the Review + create button highlighted.":::
-5. Review your settings, and then select **Create**.
-
- :::image type="content" source="./media/how-to-manage-creator/creator-create-dialog.png" alt-text="Confirm Creator account settings page":::
-
- After the deployment completes, you'll see a page with a success or a failure message.
-
- :::image type="content" source="./media/how-to-manage-creator/creator-resource-created.png" alt-text="Resource deployment status page":::
+5. Review your settings, and then select **Create**. After the deployment completes, you'll see a page with a success or a failure message.
6. Select **Go to resource**. Your Creator resource view page shows the status of your Creator resource and the chosen demographic region.
- :::image type="content" source="./media/how-to-manage-creator/creator-resource-view.png" alt-text="Creator status page":::
>[!NOTE] >To return to the Azure Maps account, select **Azure Maps Account** in the navigation pane.
This article takes you through the steps to create and delete a Creator resource
To delete the Creator resource:
-1. In your Azure Maps account, select **Overview** under **Creator**.
+1. In your Azure Maps account, select **Creator**.
2. Select **Delete**. >[!WARNING] >When you delete the Creator resource of your Azure Maps account, you also delete the conversions, datasets, tilesets, and feature statesets that were created using Creator services. Once a Creator resource is deleted, it cannot be undone.
- :::image type="content" source="./media/how-to-manage-creator/creator-delete.png" alt-text="Creator page with delete button":::
+ :::image type="content" source="./media/how-to-manage-creator/creator-delete.png" alt-text="A screenshot of the Azure Maps Creator Resource page with the delete button highlighted.":::
3. You'll be asked to confirm deletion by typing in the name of your Creator resource. After the resource is deleted, you see a confirmation page that looks like the following:
- :::image type="content" source="./media/how-to-manage-creator/creator-confirm-delete.png" alt-text="Creator page with delete confirmation":::
+ :::image type="content" source="./media/how-to-manage-creator/creator-confirm-delete.png" alt-text="A screenshot of the Azure Maps Creator Resource deletion confirmation page.":::
## Authentication
azure-monitor Diagnostics Extension To Application Insights https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/agents/diagnostics-extension-to-application-insights.md
description: Update the Azure Diagnostics public configuration to send data to A
Previously updated : 03/19/2016 Last updated : 01/20/2022
azure-monitor Diagnostic Settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/essentials/diagnostic-settings.md
Any destinations for the diagnostic setting must be created before creating the
| Event Hubs | The shared access policy for the namespace defines the permissions that the streaming mechanism has. Streaming to Event Hubs requires Manage, Send, and Listen permissions. To update the diagnostic setting to include streaming, you must have the ListKey permission on that Event Hubs authorization rule.<br><br>The event hub namespace needs to be in the same region as the resource being monitored if the resource is regional. <br><br> Diagnostic settings can't access Event Hubs resources when virtual networks are enabled. You have to enable the *Allow trusted Microsoft services* to bypass this firewall setting in Event Hub, so that Azure Monitor (Diagnostic Settings) service is granted access to your Event Hubs resources.| | Partner integrations | Varies by partner. Check the [Azure Monitor partner integrations documentation](../../partner-solutions/overview.md) for details.
-### Azure Data Lake Storage Gen2 as a destination
-
-> [!NOTE]
-> Azure Data Lake Storage Gen2 accounts are not currently supported as a destination for diagnostic settings even though they may be listed as a valid option in the Azure portal.
- ## Create in Azure portal You can configure diagnostic settings in the Azure portal either from the Azure Monitor menu or from the menu for the resource.
azure-netapp-files Azure Netapp Files Register https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azure-netapp-files-register.md
na Previously updated : 10/04/2021 Last updated : 01/21/2022 # Register for NetApp Resource Provider
To use the Azure NetApp Files service, you need to register the NetApp Resource
![Azure Cloud Shell icon](../media/azure-netapp-files/azure-netapp-files-azure-cloud-shell.png)
-2. If you have multiple subscriptions on your Azure account, select the one that has been approved for Azure NetApp Files:
+2. If you have multiple subscriptions on your Azure account, select the one that you want to configure for Azure NetApp Files:
```azurecli az account set --subscription <subscriptionId> ```
-3. In the Azure Cloud Shell console, enter the following command to verify that your subscription has been approved:
-
- ```azurecli
- az feature list | grep NetApp
- ```
-
- The command output appears as follows:
-
- ```output
- "id": "/subscriptions/<SubID>/providers/Microsoft.Features/providers/Microsoft.NetApp/features/ANFGA",
- "name": "Microsoft.NetApp/ANFGA"
- ```
-
- `<SubID>` is your subscription ID.
---
-4. In the Azure Cloud Shell console, enter the following command to register the Azure Resource Provider:
+3. In the Azure Cloud Shell console, enter the following command to register the Azure Resource Provider:
```azurecli az provider register --namespace Microsoft.NetApp --wait
To use the Azure NetApp Files service, you need to register the NetApp Resource
The `--wait` parameter instructs the console to wait for the registration to complete. The registration process can take some time to complete.
-5. In the Azure Cloud Shell console, enter the following command to verify that the Azure Resource Provider has been registered:
+4. In the Azure Cloud Shell console, enter the following command to verify that the Azure Resource Provider has been registered:
```azurecli az provider show --namespace Microsoft.NetApp
To use the Azure NetApp Files service, you need to register the NetApp Resource
`<SubID>` is your subscription ID. The `state` parameter value indicates `Registered`.
-6. From the Azure portal, click the **Subscriptions** blade.
-7. In the Subscriptions blade, click your subscription ID.
-8. In the settings of the subscription, click **Resource providers** to verify that Microsoft.NetApp Provider indicates the Registered status:
+5. From the Azure portal, click the **Subscriptions** blade.
+6. In the Subscriptions blade, click your subscription ID.
+7. In the settings of the subscription, click **Resource providers** to verify that Microsoft.NetApp Provider indicates the Registered status:
![Registered Microsoft.NetApp](../media/azure-netapp-files/azure-netapp-files-registered-resource-providers.png)
azure-netapp-files Create Active Directory Connections https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/create-active-directory-connections.md
na Previously updated : 01/14/2022 Last updated : 01/21/2022 # Create and manage Active Directory connections for Azure NetApp Files
Several features of Azure NetApp Files require that you have an Active Directory
* The admin account you use must have the capability to create machine accounts in the organizational unit (OU) path that you will specify.
+* In some cases, the admin account also requires `msDS-SupportedEncryptionTypes` write permission to set account attributes within AD.
+ * If you change the password of the Active Directory user account that is used in Azure NetApp Files, be sure to update the password configured in the [Active Directory Connections](#create-an-active-directory-connection). Otherwise, you will not be able to create new volumes, and your access to existing volumes might also be affected depending on the setup. * Proper ports must be open on the applicable Windows Active Directory (AD) server.
Several features of Azure NetApp Files require that you have an Active Directory
If you have domain controllers that are unreachable by the Azure NetApp Files delegated subnet, you can specify an Active Directory site during creation of the Active Directory connection. Azure NetApp Files needs to communicate only with domain controllers in the site where the Azure NetApp Files delegated subnet address space is. See [Designing the site topology](/windows-server/identity/ad-ds/plan/designing-the-site-topology) about AD sites and services. +
+* Avoid configuring overlapping subnets in the AD machine. Even if the site name is defined in the Active Directory connections, overlapping subnets might result in the wrong site being discovered, thus affecting the service. It might also affect new volume creation or AD modification.
* You can enable AES encryption for AD Authentication by checking the **AES Encryption** box in the [Join Active Directory](#create-an-active-directory-connection) window. Azure NetApp Files supports DES, Kerberos AES 128, and Kerberos AES 256 encryption types (from the least secure to the most secure). If you enable AES encryption, the user credentials used to join Active Directory must have the highest corresponding account option enabled that matches the capabilities enabled for your Active Directory.
Several features of Azure NetApp Files require that you have an Active Directory
[LDAP channel binding](https://support.microsoft.com/help/4034879/how-to-add-the-ldapenforcechannelbinding-registry-entry) configuration alone has no effect on the Azure NetApp Files service. However, if you use both LDAP channel binding and secure LDAP (for example, LDAPS or `start_tls`), then the SMB volume creation will fail.
-* For non-AD integrated DNS, you should add a DNS A/PTR record to enable Azure NetApp Files to function by using a ΓÇ£friendly name".
+* Azure NetApp Files will attempt to add an A/PTR record in DNS for AD-integrated DNS servers. Add a reverse lookup zone if one is missing under Reverse Lookup Zones on the AD server. For non-AD integrated DNS, you should add a DNS A/PTR record to enable Azure NetApp Files to function by using a "friendly name".
* The following table describes the Time to Live (TTL) settings for the LDAP cache. You need to wait until the cache is refreshed before trying to access a file or directory through a client. Otherwise, an access or permission denied message appears on the client.
azure-netapp-files Cross Region Replication Requirements Considerations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/cross-region-replication-requirements-considerations.md
na Previously updated : 10/14/2021 Last updated : 01/21/2022
This article describes requirements and considerations about [using the volume c
* Cascading and fan in/out topologies are not supported. * Configuring volume replication for source volumes created from snapshot is not supported at this time. * After you set up cross-region replication, the replication process creates *snapmirror snapshots* to provide references between the source volume and the destination volume. Snapmirror snapshots are cycled automatically when a new one is created for every incremental transfer. You cannot delete snapmirror snapshots until replication relationship and volume is deleted.
+* You cannot mount a dual-protocol volume until you [authorize replication from the source volume](cross-region-replication-create-peering.md#authorize-replication-from-the-source-volume) and the initial [transfer](cross-region-replication-display-health-status.md#display-replication-status) happens.
* You can delete manual snapshots on the source volume of a replication relationship when the replication relationship is active or broken, and also after the replication relationship is deleted. You cannot delete manual snapshots for the destination volume until the replication relationship is broken. * You cannot revert a source or destination volume of cross-region replication to a snapshot. The snapshot revert functionality is greyed out for volumes in a replication relationship.
azure-netapp-files Faq Smb https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/faq-smb.md
Previously updated : 10/11/2021 Last updated : 01/21/2022 # SMB FAQs for Azure NetApp Files
Azure NetApp Files supports Windows Server 2008r2SP1-2019 versions of Active Dir
As a best practice, set the maximum tolerance for computer clock synchronization to five minutes. For more information, see [Maximum tolerance for computer clock synchronization](/previous-versions/windows/it-pro/windows-server-2012-r2-and-2012/jj852172(v=ws.11)).
-## Can I manage `SMB Shares`, `Sessions`, and `Open Files` through Computer Management Console (MMC)?
+## Can I manage `SMB Shares`, `Sessions`, and `Open Files` through Microsoft Management Console (MMC)?
-Management of `SMB Shares`, `Sessions`, and `Open Files` through Computer Management Console (MMC) is currently not supported.
+Azure NetApp Files supports modifying `SMB Shares` by using MMC. However, modifying share properties has significant risk. If the users or groups assigned to the share properties are removed from the Active Directory, or if the permissions for the share become unusable, then the entire share will become inaccessible.
+
+Azure NetApp Files does not support using MMC to manage `Sessions` and `Open Files`.
## How can I obtain the IP address of an SMB volume via the portal?
To use an Azure NetApp Files SMB share as a DFS-N folder target, provide the Uni
## Can the SMB share permissions be changed?
-No, the share permissions cannot be changed. However, the NTFS permissions of the `root` volume can be changed using the [NTFS file and folder permissions](azure-netapp-files-create-volumes-smb.md#ntfs-file-and-folder-permissions) procedure.
+Azure NetApp Files supports modifying `SMB Shares` by using Microsoft Management Console (MMC). However, modifying share properties has significant risk. If the users or groups assigned to the share properties are removed from the Active Directory, or if the permissions for the share become unusable, then the entire share will become inaccessible.
+You can change the NTFS permissions of the root volume by using the [NTFS file and folder permissions](azure-netapp-files-create-volumes-smb.md#ntfs-file-and-folder-permissions) procedure.
## Next steps
azure-netapp-files Troubleshoot Volumes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/troubleshoot-volumes.md
na Previously updated : 01/06/2022 Last updated : 01/21/2022 # Troubleshoot volume errors for Azure NetApp Files
This article describes error messages and resolutions that can help you troubles
| The SMB or dual-protocol volume creation fails with the following error: <br> `{"code":"DeploymentFailed","message":"At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details.","details":[{"code":"InternalServerError","message":"Error when creating - Failed to create the Active Directory machine account \"SMBTESTAD-D9A2\". Reason: SecD Error: ou not found Details: Error: Machine account creation procedure failed\n [ 561] Loaded the preliminary configuration.\n [ 665] Successfully connected to ip 10.x.x.x, port 88 using TCP\n [ 1039] Successfully connected to ip 10.x.x.x, port 389 using TCP\n**[ 1147] FAILURE: Specifed OU 'OU=AADDC Com' does not exist in\n** contoso.com\n. "}]}` | Make sure that the OU path specified for joining the AD connection is correct. If you use Azure ADDS, make sure that the organizational unit path is `OU=AADDC Computers`. | | The SMB or dual-protocol volume creation fails with the following error: <br> `Failed to create the Active Directory machine account \"SMB-ANF-VOL. Reason: LDAP Error: Local error occurred Details: Error: Machine account creation procedure failed. [nnn] Loaded the preliminary configuration. [nnn] Successfully connected to ip 10.x.x.x, port 88 using TCP [nnn] Successfully connected to ip 10.x.x.x, port 389 using [nnn] Entry for host-address: 10.x.x.x not found in the current source: FILES. Ignoring and trying next available source [nnn] Source: DNS unavailable. Entry for host-address:10.x.x.x found in any of the available sources\n*[nnn] FAILURE: Unable to SASL bind to LDAP server using GSSAPI: local error [nnn] Additional info: SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure. 
Minor code may provide more information (Cannot determine realm for numeric host address) [nnn] Unable to connect to LDAP (Active Directory) service on contoso.com (Error: Local error) [nnn] Unable to make a connection (LDAP (Active Directory):contosa.com, result: 7643. ` | The pointer (PTR) record of the AD host machine might be missing on the DNS server. You need to create a reverse lookup zone on the DNS server, and then add a PTR record of the AD host machine in that reverse lookup zone. <br> For example, assume that the IP address of the AD machine is `10.x.x.x`, the hostname of the AD machine (as found by using the `hostname` command) is `AD1`, and the domain name is `contoso.com`. The PTR record added to the reverse lookup zone should be `10.x.x.x` -> `AD1.contoso.com`. | | The SMB or dual-protocol volume creation fails with the following error: <br> `Failed to create the Active Directory machine account \"SMB-ANF-VOL\". Reason: Kerberos Error: KDC has no support for encryption type Details: Error: Machine account creation procedure failed [nnn]Loaded the preliminary configuration. [nnn]Successfully connected to ip 10.x.x.x, port 88 using TCP [nnn]FAILURE: Could not authenticate as 'contosa.com': KDC has no support for encryption type (KRB5KDC_ERR_ETYPE_NOSUPP) ` | Make sure that [AES Encryption](./create-active-directory-connections.md#create-an-active-directory-connection) is enabled both in the Active Directory connection and for the service account. |
-| The SMB or dual-protocol volume creation fails with the following error: <br> `Failed to create the Active Directory machine account \"SMB-NTAP-VOL\". Reason: LDAP Error: Strong authentication is required Details: Error: Machine account creation procedure failed\n [ 338] Loaded the preliminary configuration.\n [ nnn] Successfully connected to ip 10.x.x.x, port 88 using TCP\n [ nnn ] Successfully connected to ip 10.x.x.x, port 389 using TCP\n [ 765] Unable to connect to LDAP (Active Directory) service on\n dc51.area51.com (Error: Strong(er) authentication\n required)\n*[ nnn] FAILURE: Unable to make a connection (LDAP (Active\n* Directory):contoso.com), result: 7609\n. "` | The LDAP Signing option is not selected, but the AD client has LDAP signing. [Enable LDAP Signing](create-active-directory-connections.md#create-an-active-directory-connection) and retry. |
+| The SMB or dual-protocol volume creation fails with the following error: <br> `Failed to create the Active Directory machine account \"SMB-NTAP-VOL\". Reason: LDAP Error: Strong authentication is required Details: Error: Machine account creation procedure failed\n [ 338] Loaded the preliminary configuration.\n [ nnn] Successfully connected to ip 10.x.x.x, port 88 using TCP\n [ nnn ] Successfully connected to ip 10.x.x.x, port 389 using TCP\n [ 765] Unable to connect to LDAP (Active Directory) service on\n dc51.area51.com (Error: Strong(er) authentication\n required)\n*[ nnn] FAILURE: Unable to make a connection (LDAP (Active\n* Directory):contoso.com), result: 7609\n. "` | The LDAP Signing option is not selected, but the AD client has LDAP signing. [Enable LDAP Signing](create-active-directory-connections.md#create-an-active-directory-connection) and retry. |
+| SMB volume creation fails with the following error: <br> `Failed to create the Active Directory machine account. Reason: LDAP Error: Intialization of LDAP library failed Details: Error: Machine account creation procedure failed` | This error occurs because the service or user account used in the Azure NetApp Files Active Directory connections does not have sufficient privilege to create computer objects or make modifications to the newly created computer object. <br> To solve the issue, you should grant the account being used greater privilege. You can apply a default role with sufficient privilege. You can also delegate additional privilege to the user or service account or to a group it is part of. |
## Errors for dual-protocol volumes
azure-percept Azure Percept Devkit Software Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/azure-percept-devkit-software-release-notes.md
This page provides information of changes and fixes for each Azure Percept DK OS
To download the update images, refer to [Azure Percept DK software releases for USB cable update](./software-releases-usb-cable-updates.md) or [Azure Percept DK software releases for OTA update](./software-releases-over-the-air-updates.md).
+## January (2201) Release
+
+- Setup Experience
+ - Fixed a compatibility issue with Windows 11 PCs during OOBE setup.
+- Operating System
+ - Latest security updates on vim package.
+
+## November (2111) Release
+
+- Operating System
azure-percept Software Releases Over The Air Updates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/software-releases-over-the-air-updates.md
Microsoft would service each dev kit release with OTA packages. However, as ther
|Release|Applicable Version(s)|Download Links|Note| |||||
-|November Service Release (2111)|2021.106.111.115,<br>2021.107.129.116,<br>2021.109.129.108 |[2021.111.124.109 OTA update package](<https://download.microsoft.com/download/2/5/3/253f56fe-1a26-4fe7-b1b6-c03f070acc35/2021.111.124.109 OTA update package.zip>)||
+|January Service Release (2201)|2021.106.111.115,<br>2021.107.129.116,<br>2021.109.129.108, <br>2021.111.124.109 |[2022.101.112.106 OTA update package](<https://download.microsoft.com/download/e/b/3/eb3a3c51-a60a-4d45-9406-9a4805127c62/2022.101.112.106 OTA update package.zip>)||
**Hard-stop releases:**
azure-percept Software Releases Usb Cable Updates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/software-releases-usb-cable-updates.md
This page provides information and download links for all the dev kit OS/firmwar
## Latest releases - **Latest service release**
-November Service Release (2111): [Azure-Percept-DK-1.0.20211124.1851-public_preview_1.0.zip](<https://download.microsoft.com/download/9/5/4/95464a73-109e-46c7-8624-251ceed0c5ea/Azure-Percept-DK-1.0.20211124.1851-public_preview_1.0.zip>)
+January Service Release (2201): [Azure-Percept-DK-1.0.20220112.1519-public_preview_1.0.zip](<https://download.microsoft.com/download/1/6/4/164cfcf2-ce52-4e75-9dee-63bb4a128e71/Azure-Percept-DK-1.0.20220112.1519-public_preview_1.0.zip>)
- **Latest major update or known stable version** Feature Update (2104): [Azure-Percept-DK-1.0.20210409.2055.zip](https://download.microsoft.com/download/6/4/d/64d53e60-f702-432d-a446-007920a4612c/Azure-Percept-DK-1.0.20210409.2055.zip)
Feature Update (2104): [Azure-Percept-DK-1.0.20210409.2055.zip](https://download
|Release|Download Links|Note| |||::|
+|January Service Release (2201)|[Azure-Percept-DK-1.0.20220112.1519-public_preview_1.0.zip](<https://download.microsoft.com/download/1/6/4/164cfcf2-ce52-4e75-9dee-63bb4a128e71/Azure-Percept-DK-1.0.20220112.1519-public_preview_1.0.zip>)||
|November Service Release (2111)|[Azure-Percept-DK-1.0.20211124.1851-public_preview_1.0.zip](<https://download.microsoft.com/download/9/5/4/95464a73-109e-46c7-8624-251ceed0c5ea/Azure-Percept-DK-1.0.20211124.1851-public_preview_1.0.zip>)|| |September Service Release (2109)|[Azure-Percept-DK-1.0.20210929.1747-public_preview_1.0.zip](https://go.microsoft.com/fwlink/?linkid=2174462)|| |July Service Release (2107)|[Azure-Percept-DK-1.0.20210729.0957-public_preview_1.0.zip](https://download.microsoft.com/download/f/a/9/fa95d9d9-a739-493c-8fad-bccf839072c9/Azure-Percept-DK-1.0.20210729.0957-public_preview_1.0.zip)||
azure-percept Voice Control Your Inventory Then Visualize With Power Bi Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/voice-control-your-inventory-then-visualize-with-power-bi-dashboard.md
+
+ Title: Voice control your inventory and visualize with Power BI dashboard
+description: This article gives detailed instructions for building the main components of the solution and deploying the edge speech AI.
+ Last updated: 12/14/2021
+# Tutorial: Voice control your inventory and visualize with Power BI Dashboard
+This article provides detailed instructions for building the main components of the solution and deploying the edge speech AI. The solution uses the Azure Percept DK device with the Audio SoM, Azure Speech Service Custom Commands, an Azure function app, Azure SQL Database, and Power BI. You'll learn how to manage inventory by voice using Azure Percept Audio and visualize the results with Power BI. The goal of this article is to empower you to create a basic inventory management solution.
+
+Users who want to take their solution further can add an additional edge module for visual inventory inspection or expand on the inventory visualizations within Power BI.
+
+In this tutorial, you learn how to:
+
+- Create an Azure SQL Server and SQL Database
+- Create an Azure function project and publish to Azure
+- Import an available template to Custom Commands
+- Create a Custom Commands application by using an available template
+- Deploy modules to your Devkit
+- Import dataset from Azure SQL to Power BI
++
+## Prerequisites
+- Percept DK ([Purchase](https://www.microsoft.com/store/build/azure-percept/8v2qxmzbz9vc))
+- Azure Subscription : [Free trial account](https://azure.microsoft.com/free/)
+- [Azure Percept DK setup experience](./quickstart-percept-dk-set-up.md)
+- [Azure Percept Audio setup](./quickstart-percept-audio-setup.md)
+- Speaker or headphones that can connect to 3.5mm audio jack (optional)
+- Install [Power BI Desktop](https://powerbi.microsoft.com/downloads/)
+- Install [VS code](https://code.visualstudio.com/download)
+- Install the [IoT Hub](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit) and [IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools) Extension in VS Code
+- The [Azure Functions Core Tools](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/azure-functions/functions-run-local.md) version 3.x.
+- The [Python extension](https://marketplace.visualstudio.com/items?itemName=ms-python.python) for Visual Studio Code.
+- The [Azure Functions extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) for Visual Studio Code.
+- Create an [Azure SQL server](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/azure-sql/database/single-database-create-quickstart.md)
++
+## Software Architecture
+![Solution Architecture](./media/voice-control-your-inventory-images/voice-control-solution-architect.png)
++
+## Section 1: Create an Azure SQL Server and SQL Database
+In this section, you will learn how to create the table for this lab. This table will be the main source of truth for your current inventory and the basis of data visualized in Power BI.
+
+1. Set SQL server firewall
+ 1. Click <strong>Set server firewall</strong>
+ ![Set server firewall](./media/voice-control-your-inventory-images/set-server-firewall.png)
+ 2. Add a rule named workshop with Start IP 0.0.0.0 and End IP 255.255.255.255 to the IP allowlist (for lab purposes only)
+ ![Rule name workshop](./media/voice-control-your-inventory-images/save-workshop.png)
+ 3. Click <strong>Query editor</strong> to sign in to your SQL database <br />
+ ![Query editor to login your sql database](./media/voice-control-your-inventory-images/query-editor.png) <br />
+ 4. Sign in to your SQL database by using SQL Server authentication <br />
+ ![SQL Server Authentication](./media/voice-control-your-inventory-images/sql-authentication.png) <br />
+2. Run the T-SQL query below in the query editor to create the table <br />
+
+
+ ```sql
+ -- Create the Stock table
+ CREATE TABLE Stock
+ (
+     color varchar(255),
+     num_box int
+ )
+ ```
+
+ ![create the table](./media/voice-control-your-inventory-images/create-sql-table.png)
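Later in this tutorial, an Azure Function updates this Stock table whenever a voice command adds or removes boxes. As a rough sketch of that update logic (the actual `CRUD.py` in the sample repository may differ; `build_stock_statement` is a hypothetical helper name), the function can translate an action into a parameterized T-SQL statement:

```python
def build_stock_statement(action: str, color: str, num_box: int):
    """Build a parameterized T-SQL statement against the Stock table.
    'add' increments num_box for the given color; 'remove' decrements it.
    Returns (sql, params) suitable for a driver such as pyodbc's cursor.execute.
    This is an illustrative sketch, not the sample repository's exact code."""
    if action == "add":
        sql = "UPDATE Stock SET num_box = num_box + ? WHERE color = ?"
    elif action == "remove":
        sql = "UPDATE Stock SET num_box = num_box - ? WHERE color = ?"
    else:
        raise ValueError(f"unsupported action: {action}")
    return sql, (num_box, color)

sql, params = build_stock_statement("remove", "red", 2)
print(sql)     # UPDATE Stock SET num_box = num_box - ? WHERE color = ?
print(params)  # (2, 'red')
```

Using `?` placeholders keeps the values out of the SQL text, which avoids injection issues when the color name comes from recognized speech.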
+
+## Section 2: Create an Azure function project and publish to Azure
+In this section, you will use Visual Studio Code to create a local Azure Functions project in Python. Later in this article, you'll publish your function code to Azure.
+
+1. Go to the [GitHub link](https://github.com/microsoft/Azure-Percept-Reference-Solutions/tree/main/voice-control-inventory-management) and clone the repository
+ 1. Click <strong>Code</strong> and select the <strong>HTTPS</strong> tab
+ ![Code and HTTPS tab](./media/voice-control-your-inventory-images/clone-git.png)
+ 2. Run the command below in your terminal to clone the repository
+ ![clone the repository](./media/voice-control-your-inventory-images/clone-git-command.png)
+
+ ```
+ git clone https://github.com/microsoft/Azure-Percept-Reference-Solutions.git
+ ```
+
+2. Enable Azure Function
+ 1. Click Azure Logo in the task bar
+
+ ![Azure Logo in the task bar](./media/voice-control-your-inventory-images/select-azure-icon.png)
+ 2. Click "..." and check that "Functions" is selected
+ ![check "Functions"](./media/voice-control-your-inventory-images/select-function.png)
+
+3. Create your local project
+ 1. Create a folder (ex: airlift_az_func) for your project workspace
+ ![Create a folder](./media/voice-control-your-inventory-images/create-new-folder.png)
+ 2. Choose the Azure icon in the Activity bar, then in the Azure: Functions area, select the <strong>Create new project...</strong> icon
+ ![select Azure icon](./media/voice-control-your-inventory-images/select-function-visio-studio.png)
+ 3. Choose the directory location you just created for your project workspace and choose **Select**.
+ ![the directory location](./media/voice-control-your-inventory-images/select-airlift-folder.png)
+ 4. <strong>Provide the following information at the prompts</strong>: Select a language for your function project: Choose <strong>Python</strong>.
+ ![following information at the prompts](./media/voice-control-your-inventory-images/language-python.png)
+ 5. <strong>Select a Python alias to create a virtual environment</strong>: Choose the location of your Python interpreter. If the location isn't shown, type in the full path to your Python binary. Select Skip virtual environment if you don't have Python installed.
+ ![create a virtual environment](./media/voice-control-your-inventory-images/skip-virtual-env.png)
+ 6. <strong>Select a template for your project's first function</strong>: Choose <strong>HTTP trigger</strong>.
+ ![Select a template](./media/voice-control-your-inventory-images/http-trigger.png)
+ 7. <strong>Provide a function name</strong>: Type <strong>HttpExample</strong>.
+ ![Provide a function name](./media/voice-control-your-inventory-images/http-example.png)
+ 8. <strong>Authorization level</strong>: Choose <strong>Anonymous</strong>, which enables anyone to call your function endpoint. To learn about authorization level, see [Authorization keys](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/azure-functions/functions-bindings-http-webhook-trigger.md).
+ ![power pi dashboard](./media/voice-control-your-inventory-images/create-http-trigger.png)
+ 9. <strong>Select how you would like to open your project</strong>: Choose Add to workspace, then trust the folder and enable all features
+ ![Authorization keys](./media/voice-control-your-inventory-images/trust-authorize.png)
+ 10. You will see the HTTPExample function has been initiated
+ ![ HTTPExample function](./media/voice-control-your-inventory-images/modify-init-py.png)
+
+4. Develop CRUD.py to update Azure SQL on Azure Function
+ 1. Replace the content of <strong>__init__.py</strong> with the raw content of <strong>__init__.py</strong> from [here](https://github.com/microsoft/Azure-Percept-Reference-Solutions/blob/main/voice-control-inventory-management/azure-functions/__init__.py)
+ [ ![copying the raw content](./media/voice-control-your-inventory-images/copy-raw-content-mini.png) ](./media/voice-control-your-inventory-images/copy-raw-content.png#lightbox)
+ 2. Drag and drop <strong>CRUD.py</strong> into the same folder as <strong>__init__.py</strong>
+ ![Drag and drop-1](./media/voice-control-your-inventory-images/crud-file.png)
+ ![Drag and drop-2](./media/voice-control-your-inventory-images/show-crud-file.png)
+ 3. Update the values of the <strong>SQL server full address</strong>, <strong>database</strong>, <strong>username</strong>, and <strong>password</strong> that you created in section 1 in <strong>CRUD.py</strong>
+ [ ![Update the value-1](./media/voice-control-your-inventory-images/server-name-mini.png) ](./media/voice-control-your-inventory-images/server-name.png#lightbox)
+ ![Update the value-2](./media/voice-control-your-inventory-images/server-parameter.png)
+ 4. Replace the content of <strong>requirements.txt</strong> with the raw content of requirements.txt from the repository
+ ![Replace the content-1](./media/voice-control-your-inventory-images/select-requirements-u.png)
+ [ ![Replace the content-2](./media/voice-control-your-inventory-images/view-requirement-file-mini.png) ](./media/voice-control-your-inventory-images/view-requirement-file.png#lightbox)
+ 5. Press Ctrl+S to save the content
+
+5. Sign in to Azure
+ 1. Before you can publish your app, you must sign in to Azure. If you aren't already signed in, choose the Azure icon in the Activity bar, then in the Azure: Functions area, choose <strong>Sign in to Azure...</strong>. If you're already signed in, go to the next section.
+ ![sign into Azure](./media/voice-control-your-inventory-images/sign-in-to-azure.png)
+
+ 2. When prompted in the browser, choose your Azure account and sign in using your Azure account credentials.
+ 3. After you've successfully signed in, you can close the new browser window. The subscriptions that belong to your Azure account are displayed in the Side bar.
+
+6. Publish the project to Azure
+ 1. Choose the Azure icon in the Activity bar, then in the <strong>Azure: Functions area</strong>, choose the <strong>Deploy to function app...</strong> button.
+ ![icon in the Act bar](./media/voice-control-your-inventory-images/upload-to-cloud.png)
+ 2. Provide the following information at the prompts:
+ 1. <strong>Select folder</strong>: Choose a folder from your workspace or browse to one that contains your function app. You won't see this if you already have a valid function app opened.
+ 2. <strong>Select subscription</strong>: Choose the subscription to use. You won't see this if you only have one subscription.
+ 3. <strong>Select Function App in Azure</strong>: Choose + Create new Function App. (Don't choose the Advanced option, which isn't covered in this article.)
+ 4. <strong>Enter a globally unique name for the function app</strong>: Type a name that is valid in a URL path. The name you type is validated to make sure that it's unique in Azure Functions.
+ 5. <strong>Select a runtime</strong>: Choose <strong>Python 3.9</strong>
+ ![Choose the version](./media/voice-control-your-inventory-images/latest-python-version.png)
+ 1. <strong>Select a location for new resources</strong>: Choose the region.
+ 2. Select <strong>View Output</strong> in this notification to view the creation and deployment results, including the Azure resources that you created. If you miss the notification, select the bell icon in the lower right corner to see it again.
+
+ ![including the Azure resources](./media/voice-control-your-inventory-images/select-view-output.png)
+ 3. <strong>Note down the HTTP Trigger Url</strong> for further use in the section 4
+ ![Note down the HTTP Trigger](./media/voice-control-your-inventory-images/example-http.png)
+
+7. Test your Azure Function App
+ 1. Choose the Azure icon in the Activity bar, expand your subscription, your new function app, and Functions.
+ 2. Right-click the HttpExample function and choose <strong>Execute Function Now</strong>....
+ ![Right-click the HttpExample ](./media/voice-control-your-inventory-images/function.png)
+ 3. In <strong>Enter request body</strong>, enter the following request message body:
+ ```
+ { "color": "yellow", "num_box" :"2", "action":"remove" }
+ ```
+ ![request message body](./media/voice-control-your-inventory-images/type-new-command.png)
+ Press Enter to send this request message to your function.
+
+ 4. When the function executes in Azure and returns a response, a notification is raised in Visual Studio Code.
+ ![a notification](./media/voice-control-your-inventory-images/example-output.png)
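You can also exercise the same HTTP trigger outside VS Code. A minimal sketch using only the Python standard library, assuming the hypothetical HTTP Trigger Url noted in step 6 (`https://xxx.azurewebsites.net/api/httpexample`):

```python
import json
import urllib.request

def build_request_body(color: str, num_box: int, action: str) -> bytes:
    """Encode the same JSON body used in the Execute Function Now test;
    num_box is sent as a string, matching the example above."""
    return json.dumps({"color": color, "num_box": str(num_box), "action": action}).encode()

def call_inventory_function(url: str, color: str, num_box: int, action: str) -> str:
    """POST the body to the function's HTTP trigger and return its response text."""
    req = urllib.request.Request(
        url,
        data=build_request_body(color, num_box, action),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()

# Example (replace the URL with your own HTTP Trigger Url from step 6):
# print(call_inventory_function("https://xxx.azurewebsites.net/api/httpexample", "yellow", 2, "remove"))
```

Because the function was created with the Anonymous authorization level, no function key is needed in the URL.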
+
+## Section 3: Import an Inventory Speech template to Custom Commands
+In this section, you will import an existing application config json file to Custom Commands.
+
+1. Create an Azure Speech resource in a region that supports Custom Commands.
+ 1. Click [Create Speech Services portal](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices) to create an Azure Speech resource
+ 1. Select your subscription
+ 2. Use the resource group you created in section 1
+ 3. Select the region (make sure the region supports Custom Commands)
+ 4. Enter a name for your Speech service
+ 5. Set the pricing tier to Free F0
+ 2. Go to the Speech Studio for Custom Commands
+ 1. In a web browser, go to [Speech Studio](https://speech.microsoft.com/portal).
+ 2. Select <strong>Custom Commands</strong>.
+ The default view is a list of the Custom Commands applications you have under your selected subscription.
+ ![Custom Commands applications](./media/voice-control-your-inventory-images/cognitive-service.png)
+ 3. Select your Speech <strong>subscription</strong> and <strong>resource group</strong>, and then select <strong>Use resource</strong>.
+ ![Select your Speech](./media/voice-control-your-inventory-images/speech-studio.png)
+ 3. Import an existing application config as a new Custom Commands project
+ 1. Select <strong>New project</strong> to create a project.
+ ![ a new Custom Commands](./media/voice-control-your-inventory-images/create-new-project.png)
+ 2. In the <strong>Name</strong> box, enter project name as Stock (or something else of your choice).
+ 3. In the <strong>Language</strong> list, select <strong>English (United States)</strong>.
+ 4. Select <strong>Browse files</strong> and in the browse window, select the <strong>smart-stock.json</strong> file in the <strong>custom-commands folder</strong>
+ ![the browse window-1](./media/voice-control-your-inventory-images/smart-stock.png)
+ ![the browse window-2](./media/voice-control-your-inventory-images/chose-smart-stock.png)
+
+ 5. In the <strong>LUIS authoring resource</strong> list, select an authoring resource. If there are no valid authoring resources, create one by selecting <strong>Create new LUIS authoring resource</strong>.
+ ![Create new LUIS](./media/voice-control-your-inventory-images/luis-resource.png)
+
+ 6. In the <strong>Resource Name</strong> box, enter the name of the resource.
+ 7. In the <strong>Resource Group</strong> list, select a resource group.
+ 8. In the <strong>Location list</strong>, select a region.
+ 9. In the <strong>Pricing Tier</strong> list, select a tier.
+ 10. Next, select <strong>Create</strong> to create your project. After the project is created, select your project. You should now see overview of your new Custom Commands application.
++
+## Section 4: Train, test, and publish the Custom Command
+In this section, you will train, test, and publish your Custom Commands
+
+1. Replace the web endpoints URL
+ 1. Click Web endpoints and replace the URL
+ 2. Replace the value in the URL to the <strong>HTTP Trigger Url</strong> you noted down in section 2 (ex: https://xxx.azurewebsites.net/api/httpexample)
+ ![Replace the value in the URL](./media/voice-control-your-inventory-images/web-point-url.png)
+2. Create LUIS prediction resource
+ 1. Click <strong>settings</strong> and create a <strong>S0</strong> prediction resource under LUIS <strong>prediction resource</strong>.
+ ![prediction resource-1](./media/voice-control-your-inventory-images/predict-source.png)
+ ![prediction resource-2](./media/voice-control-your-inventory-images/tier-s0.png)
+3. Train and Test with your custom command
+ 1. Click <strong>Save</strong> to save the Custom Commands Project
+ 2. Click <strong>Train</strong> to Train your custom commands service
+ ![custom commands service-1](./media/voice-control-your-inventory-images/train-model.png)
+ 3. Click <strong>Test</strong> to test your custom commands service
+ ![custom commands service-2](./media/voice-control-your-inventory-images/test-model.png)
+ 4. Type "Add 2 green boxes" in the pop-up window to see whether it responds correctly
+ ![pop-up window](./media/voice-control-your-inventory-images/outcome.png)
+4. Publish your custom command
+ 1. Click Publish to publish the custom commands
+ ![publish the custom commands](./media/voice-control-your-inventory-images/publish.png)
+5. Note down your application ID, speech key in the settings for further use
+ ![application id](./media/voice-control-your-inventory-images/application-id.png)
+
+## Section 5: Deploy modules to your Devkit
+In this section, you will learn how to use deployment manifest to deploy modules to your device.
+1. Set IoT Hub Connection String
+ 1. Go to your IoT Hub service in Azure portal. Click <strong>Shared access policies</strong> -> <strong>Iothubowner</strong>
+ 2. Click <strong>Copy</strong> to get the <strong>primary connection string</strong>
+ ![primary connection string](./media/voice-control-your-inventory-images/iot-hub-owner.png)
+ 3. In Explorer of VS Code, click "Azure IoT Hub".
+ ![click on hub](./media/voice-control-your-inventory-images/azure-iot-hub-studio.png)
+ 4. Click "Set IoT Hub Connection String" in context menu
+ ![choose hub string](./media/voice-control-your-inventory-images/connection-string.png)
+ 5. An input box will pop up, then enter your IoT Hub Connection String<br />
+2. In VS Code, open the folder you cloned earlier <br />
+ ![Open VSCode](./media/voice-control-your-inventory-images/open-folder.png)
+3. Modify the envtemplate<br />
+ 1. Right click the <strong>envtemplate</strong> and rename to <strong>.env</strong>. Provide values for all variables such as below.<br />
+ ![click on env template](./media/voice-control-your-inventory-images/env-template.png)
+ ![select the end env template](./media/voice-control-your-inventory-images/env-file.png)
+ 2. Replace your application ID and Speech resource key by checking your Speech Studio<br />
+ ![check the speech studio-1](./media/voice-control-your-inventory-images/general-app-id.png)
+ ![check the speech studio-2](./media/voice-control-your-inventory-images/region-westus.png)
+ 3. Find the region of your Azure Speech service, and map its <strong>display name</strong> (e.g., West US) to its <strong>name</strong> (e.g., westus) using the table [here](https://azuretracks.com/2021/04/current-azure-region-names-reference/).
+ ![confirm region](./media/voice-control-your-inventory-images/portal-westus.png)
+ 4. Replace the Speech Region with the name (e.g., westus) you got from the mapping table. (Make sure all characters are lowercase.)
+ ![change region](./media/voice-control-your-inventory-images/region-westus-2.png)
+
+4. Deploy modules to device
+ 1. Right click on deployment.template.json and <strong>select Generate IoT Edge Deployment Manifest</strong>
+ ![generate Manifest](./media/voice-control-your-inventory-images/deployment-manifest.png)
+ 2. After you generate the manifest, you'll see <strong>deployment.amd64.json</strong> under the config folder. Right-click deployment.amd64.json and choose Create Deployment for <strong>Single Device</strong>
+ ![create deployment](./media/voice-control-your-inventory-images/config-deployment-manifest.png)
+ 3. Choose the IoT Hub device you are going to deploy
+ ![choose device](./media/voice-control-your-inventory-images/iot-hub-device.png)
+ 4. Check your log of the azurespeechclient module
+ 1. Go to Azure portal to click your Azure IoT Hub
+ ![select hub](./media/voice-control-your-inventory-images/voice-iothub.png)
+ 2. Click IoT Edge
+ ![go to edge](./media/voice-control-your-inventory-images/portal-iotedge.png)
+ 3. Click your Edge device to see if the modules run well
+ ![confirm module](./media/voice-control-your-inventory-images/device-id.png)
+ 4. Click <strong>azureearspeechclientmodule</strong> module
+ ![select ear mod](./media/voice-control-your-inventory-images/azure-ear-module.png)
+ 5. Click <strong>Troubleshooting</strong> tab of the azurespeechclientmodule
+ ![selct client mod](./media/voice-control-your-inventory-images/troubleshoot.png)
+
+ 5. Check your log of the azurespeechclient module
+ 1. Change the Time range to 3 minutes to check the latest log
+ ![confirm log](./media/voice-control-your-inventory-images/time-range.png)
+ 2. Say <strong>"Computer, remove 2 red boxes"</strong> to your Azure Percept Audio
+ ("Computer" is the wake word that wakes the Azure Percept DK, and "remove 2 red boxes" is the command)
+ Check the speech log to confirm that it shows <strong>"Sure, remove 2 red boxes. 2 red boxes have been removed."</strong>
+ ![verify log](./media/voice-control-your-inventory-images/speech-regconizing.png)
+ >[!NOTE]
+ >If you have set up a custom wake word, use that wake word to wake your DK.
+
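The display-name-to-short-name mapping used in step 3 of the `.env` setup usually amounts to lowercasing the region's display name and removing its spaces. A small sketch of that heuristic (`to_speech_region` is an illustrative helper, not an official API; verify the result against the region reference table linked above):

```python
def to_speech_region(display_name: str) -> str:
    """Heuristically convert an Azure region display name (e.g. "West US")
    to the short name the .env file expects (e.g. "westus"):
    drop spaces and lowercase everything."""
    return display_name.replace(" ", "").lower()

print(to_speech_region("West US"))  # westus
```

This covers the common regions (West US, East US 2, and so on); double-check any region whose short name is not a simple contraction of its display name.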
+
+## Section 6: Import dataset from Azure SQL to Power BI
+In this section, you will create a Power BI report and check if the report has been updated after you speak commands to your Azure Percept Audio.
+1. Open the Power BI Desktop Application and import data from Azure SQL Server
+ 1. Close the pop-up window
+ ![close import data from SQL Server](./media/voice-control-your-inventory-images/power-bi-get-started.png)
+ 2. Import data from SQL Server
+ ![Import data from SQL Server](./media/voice-control-your-inventory-images/import-sql-server.png)
+ 3. Enter your SQL server name \<sql server name\>.database.windows.net, and choose DirectQuery
+ ![enter name for importing data from SQL Server](./media/voice-control-your-inventory-images/direct-query.png)
+ 4. Select the database, and enter the username and password
+ ![select database for importing data from SQL Server](./media/voice-control-your-inventory-images/database-pw.png)
+ 5. <strong>Select</strong> the Stock table, and click <strong>Load</strong> to load the dataset into Power BI Desktop<br />
+
+ ![choose strong option for import data from SQL Server](./media/voice-control-your-inventory-images/stock-table.png)
+2. Create your Power BI report
+ 1. Select the color and num_box columns in Fields, and choose the Clustered column chart visualization to present your data.<br />
+ ![Power BI report column box](./media/voice-control-your-inventory-images/color.png)
+ ![Power BI report cluster column](./media/voice-control-your-inventory-images/graph.png)
+ 2. Drag and drop the <strong>color</strong> column to the <strong>Legend</strong> and you will get a chart like the one below.
+ ![Power BI report-1](./media/voice-control-your-inventory-images/pull-out-color.png)
+ ![Power BI report-2](./media/voice-control-your-inventory-images/number-box-by-color.png)
+ 3. Click <strong>Format</strong>, then click Data colors to change the colors accordingly. You will have charts that look like the ones below.
+ ![Power BI report-3](./media/voice-control-your-inventory-images/finish-color-graph.png)
+ 4. Select card visualization
+ ![Power BI report-4](./media/voice-control-your-inventory-images/choose-card.png)
+ 5. Check the num_box
+ ![Power BI report-5](./media/voice-control-your-inventory-images/check-number-box.png)
+ 6. Drag and drop the <strong>color</strong> column to <strong>Filters on this visual</strong>
+ ![Power BI report-6](./media/voice-control-your-inventory-images/pull-color-to-data-fields.png)
+ 7. Select green in the Filters on this visual
+
+ ![Power BI report-7](./media/voice-control-your-inventory-images/visual-filter.png)
+ 8. Double-click the column name in Fields and rename the column (for example, "Count of the green box")
+ ![Power BI report-8](./media/voice-control-your-inventory-images/show-number-box.png)
+3. Speak command to your Devkit and refresh Power BI
+   1. Speak "Add three green boxes" to Azure Percept Audio
+   2. Click "Refresh". You will see that the number of green boxes has been updated.
+ ![Power BI report-9](./media/voice-control-your-inventory-images/refresh-power-bi.png)
+
+Congratulations! You now know how to develop your own voice assistant. Completing so many configuration steps and setting up custom commands for the first time isn't easy, but you did it! You can start trying more complex scenarios after this tutorial. We look forward to seeing you design more interesting scenarios and let your voice assistant help in the future.
+
+<!-- 6. Clean up resources
+Required. If resources were created during the tutorial. If no resources were created,
+state that there are no resources to clean up in this section.
+-->
+
+## Clean up resources
+
+If you're not going to continue to use this application, delete
+resources with the following steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com), and go to the `Resource Group` you have been using for this tutorial. Delete the SQL DB, Azure Function, and Speech Service resources.
+
+2. Go to [Azure Percept Studio](https://ms.portal.azure.com/#blade/AzureEdgeDevices/Main/overview), select your device from the `Device` blade, click the `Speech` tab within your device, and under `Configuration` remove the reference to your custom command.
+
+3. Go to [Speech Studio](https://speech.microsoft.com/portal) and delete the project created for this tutorial.
+
+4. Sign in to [Power BI](https://msit.powerbi.com/home), select your workspace (the same group workspace you used while creating the Stream Analytics job output), and delete the workspace.
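+
+Alternatively, if everything you created for this tutorial lives in a single resource group, you can remove the Azure resources in one step. This is a sketch; `myTutorialRG` is a hypothetical resource group name, so substitute your own:
+
+```azurecli-interactive
+# Deletes the resource group and every resource inside it
+# (SQL DB, Azure Function, and Speech Service).
+az group delete --name myTutorialRG --yes --no-wait
+```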
+++
+<!-- 7. Next steps
+Required: A single link in the blue box format. Point to the next logical tutorial
+in a series, or, if there are no other tutorials, to some other cool thing the
+customer can do.
+-->
+
+## Next steps
+
+Check out the other tutorials under the Advanced prototyping with Azure Percept section for your Azure Percept DK.
++
+<!--
+Remove all the comments in this template before you sign-off or merge to the
+main branch.
+-->
azure-resource-manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/overview.md
Title: Bicep language for deploying Azure resources description: Describes the Bicep language for deploying infrastructure to Azure. It provides an improved authoring experience over using JSON to develop templates. Previously updated : 01/19/2022 Last updated : 01/20/2022 # What is Bicep?
Bicep automatically manages dependencies between resources. You can avoid settin
The structure of the Bicep file is more flexible than the JSON template. You can declare parameters, variables, and outputs anywhere in the file. In JSON, you have to declare all parameters, variables, and outputs within the corresponding sections of the template.
-## FAQ
-
-**Why create a new language instead of using an existing one?**
-
-You can think of Bicep as a revision to the existing ARM template language rather than a new language. The syntax has changed, but the core functionality and runtime remain the same.
-
-Before developing Bicep, we considered using an existing programming language. We decided our target audience would find it easier to learn Bicep rather than getting started with another language.
-
-**Why not focus your energy on Terraform or other third-party Infrastructure as Code offerings?**
-
-Different users prefer different configuration languages and tools. We want to make sure all of these tools provide a great experience on Azure. Bicep is part of that effort.
-
-If you're happy using Terraform, there's no reason to switch. Microsoft is committed to making sure Terraform on Azure is the best it can be.
-
-For customers who have selected ARM templates, we believe Bicep improves the authoring experience. Bicep also helps with the transition for customers who haven't adopted infrastructure as code.
-
-**Is this ready for production use?**
-
-Yes. Starting with version 0.3, Bicep is supported by Microsoft support plans. Bicep has parity with what can be accomplished with ARM Templates. There are no breaking changes that are currently planned, but it's possible we'll need to create breaking changes in the future.
-
-**Is Bicep only for Azure?**
-
-Currently, we aren't planning for Bicep to extend beyond Azure. We want to fully support Azure and optimize the deployment experience.
-
-Meeting that goal requires working with some APIs that are outside of Azure. We expect to provide extensibility points for those scenarios.
-
-**What happens to my existing ARM templates?**
-
-They continue to function exactly as they always have. You don't need to make any changes. We'll continue to support the underlying ARM template JSON language. Bicep files compile to JSON, and that JSON is sent to Azure for deployment.
-
-When you're ready, you can [decompile the JSON files to Bicep](./decompile.md).
-
-**Can I use Bicep to deploy to Azure Stack Hub?**
-
-Yes, you can use Bicep for your Azure Stack Hub deployments, but note that Bicep may show types that are not yet available in Azure Stack Hub. You can view a set of examples in the [Azure Stack Hub QuickStart Template GitHub repo](https://github.com/Azure/AzureStack-QuickStart-Templates/tree/master/Bicep).
- ## Next steps Get started with the [Quickstart](./quickstart-create-bicep-use-visual-studio-code.md).+
+For answers to common questions, see [Frequently asked questions for Bicep](frequently-asked-questions.yml).
azure-resource-manager Virtual Machines Move Limitations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/move-limitations/virtual-machines-move-limitations.md
Title: Move Azure VMs to new subscription or resource group description: Use Azure Resource Manager to move virtual machines to a new resource group or subscription. Previously updated : 12/13/2021 Last updated : 01/21/2022
The following scenarios aren't yet supported:
* Virtual Machine Scale Sets with Standard SKU Load Balancer or Standard SKU Public IP can't be moved. * Virtual machines in an existing virtual network can't be moved to a new subscription when you aren't moving all resources in the virtual network. * Virtual machines created from Marketplace resources with plans attached can't be moved across subscriptions. For a potential workaround, see [Virtual machines with Marketplace plans](#virtual-machines-with-marketplace-plans).
-* Low priority virtual machines and low priority virtual machine scale sets can't be moved across resource groups or subscriptions.
+* Low-priority virtual machines and low-priority virtual machine scale sets can't be moved across resource groups or subscriptions.
* Virtual machines in an availability set can't be moved individually. ## Azure disk encryption You can't move a virtual machine that is integrated with a key vault to implement [Azure Disk Encryption for Linux VMs](../../../virtual-machines/linux/disk-encryption-overview.md) or [Azure Disk Encryption for Windows VMs](../../../virtual-machines/windows/disk-encryption-overview.md). To move the VM, you must disable encryption.
+# [Azure CLI](#tab/azure-cli)
+ ```azurecli-interactive az vm encryption disable --resource-group demoRG --name myVm1 --volume-type all ```
+# [PowerShell](#tab/azure-powershell)
+ ```azurepowershell-interactive Disable-AzVMDiskEncryption -ResourceGroupName demoRG -VMName myVm1 -VolumeType all ``` ++ ## Virtual machines with Marketplace plans Virtual machines created from Marketplace resources with plans attached can't be moved across subscriptions. To work around this limitation, you can de-provision the virtual machine in the current subscription, and deploy it again in the new subscription. The following steps help you recreate the virtual machine in the new subscription. However, they might not work for all scenarios. If the plan is no longer available in the Marketplace, these steps won't work. 1. Get information about the plan.
- ```azurepowershell
- $vm = get-AzVM -ResourceGroupName demoRG -Name myVm1
- $vm.Plan
- ```
-
- ```azurecli
- az vm show --resource-group demoRG --name myVm1 --query plan
- ```
+ # [Azure CLI](#tab/azure-cli)
+
+ ```azurecli
+ az vm show --resource-group demoRG --name myVm1 --query plan
+ ```
+
+ # [PowerShell](#tab/azure-powershell)
+
+ ```azurepowershell
+ $vm = get-AzVM -ResourceGroupName demoRG -Name myVm1
+ $vm.Plan
+ ```
+
+
1. Check that the offering still exists in the Marketplace.
- ```azurepowershell
- Get-AzVMImageSku -Location "Central US" -PublisherName "Fabrikam" -Offer "LinuxServer"
- ```
+ # [Azure CLI](#tab/azure-cli)
+
+ ```azurecli
+ az vm image list-skus --publisher Fabrikam --offer LinuxServer --location centralus
+ ```
- ```azurecli
- az vm image list-skus --publisher Fabrikam --offer LinuxServer --location centralus
- ```
+ # [PowerShell](#tab/azure-powershell)
+
+ ```azurepowershell
+ Get-AzVMImageSku -Location "Central US" -PublisherName "Fabrikam" -Offer "LinuxServer"
+ ```
+
+
1. Either clone the OS disk to the destination subscription, or move the original disk after deleting the virtual machine from source subscription. 1. In the destination subscription, accept the Marketplace terms for your plan. You can accept the terms by running the following PowerShell command:
- ```azurepowershell
- Get-AzMarketplaceTerms -Publisher {publisher} -Product {product/offer} -Name {name/SKU} | Set-AzMarketplaceTerms -Accept
- ```
+ # [Azure CLI](#tab/azure-cli)
+
+ ```azurecli
+ az vm image terms accept --publisher {publisher} --offer {product/offer} --plan {name/SKU}
+ ```
- ```azurecli
- az vm image terms accept --publisher {publisher} --offer {product/offer} --plan {name/SKU}
- ```
+ # [PowerShell](#tab/azure-powershell)
- Or, you can create a new instance of a virtual machine with the plan through the portal. You can delete the virtual machine after accepting the terms in the new subscription.
+ ```azurepowershell
+ Get-AzMarketplaceTerms -Publisher {publisher} -Product {product/offer} -Name {name/SKU} | Set-AzMarketplaceTerms -Accept
+ ```
+
+
+
+ Or, you can create a new instance of a virtual machine with the plan through the portal. You can delete the virtual machine after accepting the terms in the new subscription.
1. In the destination subscription, recreate the virtual machine from the cloned OS disk using PowerShell, CLI, or an Azure Resource Manager template. Include the marketplace plan that's attached to the disk. The information about the plan should match the plan you purchased in the new subscription. For more information, see [Create the VM](../../../virtual-machines/marketplace-images.md#create-the-vm).
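+
+   As a hedged sketch of that last step with Azure CLI, assuming a hypothetical cloned disk named `myClonedOsDisk` in the destination resource group and the plan values you retrieved earlier (`--attach-os-disk` and the `--plan-*` flags are standard `az vm create` parameters):
+
+   ```azurecli
+   az vm create \
+     --resource-group newRG \
+     --name myVm1 \
+     --attach-os-disk myClonedOsDisk \
+     --os-type linux \
+     --plan-name {name/SKU} \
+     --plan-product {product/offer} \
+     --plan-publisher {publisher}
+   ```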
+For more information, see [Move a Marketplace Azure Virtual Machine to another subscription](../../../virtual-machines/azure-cli-change-subscription-marketplace.md).
+ ## Virtual machines with Azure Backup To move virtual machines configured with Azure Backup, you must delete the restore points collections (snapshots) from the vault. Restore points already copied to the vault can be retained and moved.
If [soft delete](../../../backup/soft-delete-virtual-machines.md) is enabled for
3. Move the VM to the target resource group. 4. Reconfigure the backup.
-### PowerShell
+### Script
1. Find the location of your virtual machine.
If [soft delete](../../../backup/soft-delete-virtual-machines.md) is enabled for
1. If you're moving only one virtual machine, get the restore point collection for that virtual machine.
- ```azurepowershell-interactive
- $restorePointCollection = Get-AzResource -ResourceGroupName AzureBackupRG_<VM location>_1 -name AzureBackup_<VM name>* -ResourceType Microsoft.Compute/restorePointCollections
- ```
+ # [Azure CLI](#tab/azure-cli)
- Delete this resource. This operation deletes only the instant recovery points, not the backed-up data in the vault.
+ ```azurecli-interactive
+ RESTOREPOINTCOL=$(az resource list -g AzureBackupRG_<VM location>_1 --resource-type Microsoft.Compute/restorePointCollections --query "[?starts_with(name, 'AzureBackup_<VM name>')].id" --output tsv)
+ ```
- ```azurepowershell-interactive
- Remove-AzResource -ResourceId $restorePointCollection.ResourceId -Force
- ```
+ # [PowerShell](#tab/azure-powershell)
-1. If you're moving all the virtual machines with back ups in this location, get the restore point collections for those virtual machines.
+ ```azurepowershell-interactive
+ $restorePointCollection = Get-AzResource -ResourceGroupName AzureBackupRG_<VM location>_1 -name AzureBackup_<VM name>* -ResourceType Microsoft.Compute/restorePointCollections
+ ```
- ```azurepowershell-interactive
- $restorePointCollection = Get-AzResource -ResourceGroupName AzureBackupRG_<VM location>_1 -ResourceType Microsoft.Compute/restorePointCollections
- ```
+
- Delete each resource. This operation deletes only the instant recovery points, not the backed-up data in the vault.
+ Delete this resource. This operation deletes only the instant recovery points, not the backed-up data in the vault.
- ```azurepowershell-interactive
- foreach ($restorePoint in $restorePointCollection)
- {
- Remove-AzResource -ResourceId $restorePoint.ResourceId -Force
- }
- ```
+ # [Azure CLI](#tab/azure-cli)
-### Azure CLI
+ ```azurecli-interactive
+ az resource delete --ids $RESTOREPOINTCOL
+ ```
-1. Find the location of your virtual machine.
+ # [PowerShell](#tab/azure-powershell)
-1. Find a resource group with the naming pattern - `AzureBackupRG_<VM location>_1`. For example, the name might be `AzureBackupRG_westus2_1`.
+ ```azurepowershell-interactive
+ Remove-AzResource -ResourceId $restorePointCollection.ResourceId -Force
+ ```
-1. If you're moving only one virtual machine, get the restore point collection for that virtual machine.
+
- ```azurecli-interactive
- RESTOREPOINTCOL=$(az resource list -g AzureBackupRG_<VM location>_1 --resource-type Microsoft.Compute/restorePointCollections --query "[?starts_with(name, 'AzureBackup_<VM name>')].id" --output tsv)
- ```
+1. If you're moving all the virtual machines with backups in this location, get the restore point collections for those virtual machines.
- Delete this resource. This operation deletes only the instant recovery points, not the backed-up data in the vault.
+ # [Azure CLI](#tab/azure-cli)
- ```azurecli-interactive
- az resource delete --ids $RESTOREPOINTCOL
- ```
+ ```azurecli-interactive
+ RESTOREPOINTCOL=$(az resource list -g AzureBackupRG_<VM location>_1 --resource-type Microsoft.Compute/restorePointCollections)
+ ```
-1. If you're moving all the virtual machines with back ups in this location, get the restore point collections for those virtual machines.
+ # [PowerShell](#tab/azure-powershell)
+
+ ```azurepowershell-interactive
+ $restorePointCollection = Get-AzResource -ResourceGroupName AzureBackupRG_<VM location>_1 -ResourceType Microsoft.Compute/restorePointCollections
+ ```
+
+
+
+ Delete each resource. This operation deletes only the instant recovery points, not the backed-up data in the vault.
+
+ # [Azure CLI](#tab/azure-cli)
+
+ ```azurecli-interactive
+ az resource delete --ids $RESTOREPOINTCOL
+ ```
- ```azurecli-interactive
- RESTOREPOINTCOL=$(az resource list -g AzureBackupRG_<VM location>_1 --resource-type Microsoft.Compute/restorePointCollections)
- ```
+ # [PowerShell](#tab/azure-powershell)
- Delete each resource. This operation deletes only the instant recovery points, not the backed-up data in the vault.
+ ```azurepowershell-interactive
+ foreach ($restorePoint in $restorePointCollection)
+ {
+ Remove-AzResource -ResourceId $restorePoint.ResourceId -Force
+ }
+ ```
- ```azurecli-interactive
- az resource delete --ids $RESTOREPOINTCOL
- ```
+
## Next steps
azure-resource-manager Deployment Modes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deployment-modes.md
Title: Deployment modes description: Describes how to specify whether to use a complete or incremental deployment mode with Azure Resource Manager. Previously updated : 07/22/2020 Last updated : 01/21/2022 # Azure Resource Manager deployment modes
If the resource group is [locked](../management/lock-resources.md), complete mod
In incremental mode, Resource Manager **leaves unchanged** resources that exist in the resource group but aren't specified in the template. Resources in the template **are added** to the resource group.
-> [!NOTE]
+> [!IMPORTANT]
> When redeploying an existing resource in incremental mode, all properties are reapplied. The **properties aren't incrementally added**. A common misunderstanding is to think properties that aren't specified in the template are left unchanged. If you don't specify certain properties, Resource Manager interprets the deployment as overwriting those values. Properties that aren't included in the template are reset to the default values. Specify all non-default values for the resource, not just the ones you're updating. The resource definition in the template always contains the final state of the resource. It can't represent a partial update to an existing resource.+
+> [!WARNING]
+> In rare cases, you can specify properties either on a resource or on one of its child resources. Two common examples are **subnets on virtual networks** and **site configuration values for web apps**. In these cases, you must handle incremental updates carefully.
+>
+> For subnets, specify the values through the `subnets` property on the [Microsoft.Network/virtualNetworks](/azure/templates/microsoft.network/virtualnetworks) resource. Don't define the values through the child resource [Microsoft.Network/virtualNetworks/subnets](/azure/templates/microsoft.network/virtualnetworks/subnets). As long as the subnets are defined on the virtual network, you can redeploy the virtual network and not lose the subnets.
>
-> In rare cases, properties that you specify for a resource are actually implemented as a child resource. For example, when you provide site configuration values for a web app, those values are implemented in the child resource type `Microsoft.Web/sites/config`. If you redeploy the web app and specify an empty object for the site configuration values, the child resource isn't updated. However, if you provide new site configuration values, the child resource type is updated.
+> For site configuration values, the values are implemented in the child resource type `Microsoft.Web/sites/config`. If you redeploy the web app and specify an empty object for the site configuration values, the child resource isn't updated. However, if you provide new site configuration values, the child resource type is updated.
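+
+A minimal sketch of the subnet guidance in ARM template JSON, with the subnets declared inline on the parent virtual network (the resource name and address ranges are illustrative):
+
+```json
+{
+  "type": "Microsoft.Network/virtualNetworks",
+  "apiVersion": "2021-02-01",
+  "name": "demoVNet",
+  "location": "[resourceGroup().location]",
+  "properties": {
+    "addressSpace": { "addressPrefixes": [ "10.0.0.0/16" ] },
+    "subnets": [
+      {
+        "name": "subnet1",
+        "properties": { "addressPrefix": "10.0.0.0/24" }
+      }
+    ]
+  }
+}
+```
+
+Because the subnets are part of the virtual network definition, redeploying this resource in incremental mode keeps them intact.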
## Example result
azure-resource-manager Template Functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/template-functions.md
Title: Template functions description: Describes the functions to use in an Azure Resource Manager template (ARM template) to retrieve values, work with strings and numerics, and retrieve deployment information. Previously updated : 11/23/2021 Last updated : 01/20/2022 + # ARM template functions This article describes all the functions you can use in an Azure Resource Manager template (ARM template). For information about using functions in your template, see [template syntax](template-expressions.md).
Resource Manager provides several functions for working with arrays.
* [take](template-functions-array.md#take) * [union](template-functions-array.md#union)
+For Bicep files, use the Bicep [array](../bicep/bicep-functions-array.md) functions.
+ <a id="coalesce" aria-hidden="true"></a> <a id="equals" aria-hidden="true"></a> <a id="less" aria-hidden="true"></a>
Resource Manager provides several functions for making comparisons in your templ
* [greater](template-functions-comparison.md#greater) * [greaterOrEquals](template-functions-comparison.md#greaterorequals)
+For Bicep files, use the Bicep [coalesce](../bicep/operators-logical.md) logical operator. For comparisons, use the Bicep [comparison](../bicep/operators-comparison.md) operators.
+ <a id="deployment" aria-hidden="true"></a> <a id="parameters" aria-hidden="true"></a> <a id="variables" aria-hidden="true"></a>
Resource Manager provides the following functions for working with dates.
* [dateTimeAdd](template-functions-date.md#datetimeadd) * [utcNow](template-functions-date.md#utcnow)
+For Bicep files, use the Bicep [date](../bicep/bicep-functions-date.md) functions.
+ ## Deployment value functions Resource Manager provides the following functions for getting values from sections of the template and values related to the deployment:
Resource Manager provides the following functions for getting values from sectio
* [parameters](template-functions-deployment.md#parameters) * [variables](template-functions-deployment.md#variables)
+For Bicep files, use the Bicep [deployment](../bicep/bicep-functions-deployment.md) functions.
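+
+As a brief illustration of the deployment value functions in a template fragment (the parameter and variable names here are made up for the example):
+
+```json
+{
+  "parameters": {
+    "storagePrefix": { "type": "string" }
+  },
+  "variables": {
+    "storageName": "[concat(parameters('storagePrefix'), uniqueString(resourceGroup().id))]"
+  },
+  "outputs": {
+    "name": {
+      "type": "string",
+      "value": "[variables('storageName')]"
+    }
+  }
+}
+```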
+ <a id="and" aria-hidden="true"></a> <a id="bool" aria-hidden="true"></a> <a id="if" aria-hidden="true"></a>
Resource Manager provides the following functions for working with logical condi
* [or](template-functions-logical.md#or) * [true](template-functions-logical.md#true)
+For Bicep files, use the Bicep [bool](../bicep/bicep-functions-logical.md) logical function. For other logical values, use Bicep [logical](../bicep/operators-logical.md) operators.
+ <a id="add" aria-hidden="true"></a> <a id="copyindex" aria-hidden="true"></a> <a id="div" aria-hidden="true"></a>
Resource Manager provides the following functions for working with integers:
* [mul](template-functions-numeric.md#mul) * [sub](template-functions-numeric.md#sub)
+For Bicep files that use `int`, `min`, and `max`, use Bicep [numeric](../bicep/bicep-functions-numeric.md) functions. For other numeric values, use Bicep [numeric](../bicep/operators-numeric.md) operators.
+ <a id="json" aria-hidden="true"></a> ## Object functions
Resource Manager provides several functions for working with objects.
* [null](template-functions-object.md#null) * [union](template-functions-object.md#union)
+For Bicep files, use the Bicep [object](../bicep/bicep-functions-object.md) functions.
+ <a id="extensionResourceId" aria-hidden="true"></a> <a id="listkeys" aria-hidden="true"></a> <a id="list" aria-hidden="true"></a>
Resource Manager provides the following functions for getting resource values:
* [subscriptionResourceId](template-functions-resource.md#subscriptionresourceid) * [tenantResourceId](template-functions-resource.md#tenantresourceid)
+For Bicep files, use the Bicep [resource](../bicep/bicep-functions-resource.md) functions.
+ <a id="managementgroup" aria-hidden="true"></a> <a id="resourcegroup" aria-hidden="true"></a> <a id="subscription" aria-hidden="true"></a>
Resource Manager provides the following functions for getting deployment scope v
* [subscription](template-functions-scope.md#subscription) - can only be used in deployments to a resource group or subscription. * [tenant](template-functions-scope.md#tenant) - can be used for deployments at any scope.
+For Bicep files, use the Bicep [scope](../bicep/bicep-functions-scope.md) functions.
+ <a id="base64" aria-hidden="true"></a> <a id="base64tojson" aria-hidden="true"></a> <a id="base64tostring" aria-hidden="true"></a>
Resource Manager provides the following functions for working with strings:
* [uriComponent](template-functions-string.md#uricomponent) * [uriComponentToString](template-functions-string.md#uricomponenttostring)
+For Bicep files, use the Bicep [string](../bicep/bicep-functions-string.md) functions.
+ ## Next steps * For a description of the sections in an ARM template, see [Understand the structure and syntax of ARM templates](./syntax.md). * To merge multiple templates, see [Using linked and nested templates when deploying Azure resources](linked-templates.md). * To iterate a specified number of times when creating a type of resource, see [Resource iteration in ARM templates](copy-resources.md).
-* To see how to deploy the template you've created, see [Deploy resources with ARM templates and Azure PowerShell](deploy-powershell.md).
+* To see how to deploy the template you've created, see [Deploy resources with ARM templates and Azure PowerShell](deploy-powershell.md).
azure-sql-edge Deploy Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/deploy-portal.md
The following steps use the Azure SQL Edge command-line tool, **sqlcmd**, inside
> [!NOTE] > SQL Command line tools (sqlcmd) are not available inside the ARM64 version of Azure SQL Edge containers.
-1. Use the `docker exec -it` command to start an interactive bash shell inside your running container. In the following example `azuresqledge` is name specified by the `Name` parameter of your IoT Edge Module.
+1. Use the `docker exec -it` command to start an interactive bash shell inside your running container. In the following example `AzureSQLEdge` is the name specified by the `Name` parameter of your IoT Edge Module.
```bash
- sudo docker exec -it azuresqledge "bash"
+ sudo docker exec -it AzureSQLEdge "bash"
``` 2. Once inside the container, connect locally with sqlcmd. Sqlcmd is not in the path by default, so you have to specify the full path.
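As a sketch, the full path in SQL Server-based container images is typically `/opt/mssql-tools/bin/sqlcmd` (verify the path in your image), and you connect with the SA password you configured for the module:

```bash
/opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P "<YourStrong!Passw0rd>"
```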
In this quickstart, you deployed a SQL Edge Module on an IoT Edge device.
- [Machine Learning and Artificial Intelligence with ONNX in SQL Edge](onnx-overview.md) - [Building an end to end IoT Solution with SQL Edge using IoT Edge](tutorial-deploy-azure-resources.md) - [Data Streaming in Azure SQL Edge](stream-data.md)-- [Troubleshoot deployment errors](troubleshoot.md)
+- [Troubleshoot deployment errors](troubleshoot.md)
azure-sql High Availability Sla https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/high-availability-sla.md
description: Learn about the Azure SQL Database and SQL Managed Instance service
-+ ms.devlang: Previously updated : 09/24/2021 Last updated : 1/20/2022 # High availability for Azure SQL Database and SQL Managed Instance [!INCLUDE[appliesto-sqldb-sqlmi](../includes/appliesto-sqldb-sqlmi.md)]
-The goal of the high availability architecture in Azure SQL Database and SQL Managed Instance is to guarantee that your database is up and running minimum of 99.99% of time without worrying about the impact of maintenance operations and outages. For more information regarding specific SLA for different tiers, please refer to [SLA for Azure SQL Database](https://azure.microsoft.com/support/legal/sla/azure-sql-database) and SLA for [Azure SQL Managed Instance](https://azure.microsoft.com/support/legal/sla/azure-sql-sql-managed-instance/).
+The goal of the high availability architecture in Azure SQL Database and SQL Managed Instance is to guarantee that your database is up and running a minimum of 99.99% of the time, without worrying about the impact of maintenance operations and outages. For more information regarding the specific SLAs for different tiers, refer to [SLA for Azure SQL Database](https://azure.microsoft.com/support/legal/sla/azure-sql-database) and [SLA for Azure SQL Managed Instance](https://azure.microsoft.com/support/legal/sla/azure-sql-sql-managed-instance/).
-Azure automatically handles critical servicing tasks, such as patching, backups, Windows and Azure SQL upgrades, as well as unplanned events such as underlying hardware, software, or network failures. When the underlying database in Azure SQL Database is patched or fails over, the downtime is not noticeable if you [employ retry logic](develop-overview.md#resiliency) in your app. SQL Database and SQL Managed Instance can quickly recover even in the most critical circumstances ensuring that your data is always available.
+Azure automatically handles critical servicing tasks, such as patching, backups, Windows and Azure SQL upgrades, and unplanned events such as underlying hardware, software, or network failures. When the underlying database in Azure SQL Database is patched or fails over, the downtime is not noticeable if you [employ retry logic](develop-overview.md#resiliency) in your app. SQL Database and SQL Managed Instance can quickly recover even in the most critical circumstances ensuring that your data is always available.
The high availability solution is designed to ensure that committed data is never lost due to failures, that maintenance operations do not affect your workload, and that the database will not be a single point of failure in your software architecture. There are no maintenance windows or downtimes that should require you to stop the workload while the database is upgraded or maintained. There are two high availability architectural models: - **Standard availability model** that is based on a separation of compute and storage. It relies on high availability and reliability of the remote storage tier. This architecture targets budget-oriented business applications that can tolerate some performance degradation during maintenance activities.-- **Premium availability model** that is based on a cluster of database engine processes. It relies on the fact that there is always a quorum of available database engine nodes. This architecture targets mission critical applications with high IO performance, high transaction rate and guarantees minimal performance impact to your workload during maintenance activities.
+- **Premium availability model** that is based on a cluster of database engine processes. It relies on the fact that there is always a quorum of available database engine nodes. This architecture targets mission-critical applications with high IO performance, high transaction rate and guarantees minimal performance impact to your workload during maintenance activities.
SQL Database and SQL Managed Instance both run on the latest stable version of the SQL Server database engine and Windows operating system, and most users would not notice that upgrades are performed continuously. ## Basic, Standard, and General Purpose service tier locally redundant availability
-The Basic, Standard, and General Purpose service tiers leverage the standard availability architecture for both serverless and provisioned compute. The following figure shows four different nodes with the separated compute and storage layers.
+The Basic, Standard, and General Purpose service tiers use the standard availability architecture for both serverless and provisioned compute. The following figure shows four different nodes with the separated compute and storage layers.
![Separation of compute and storage](./media/high-availability-sla/general-purpose-service-tier.png)
Whenever the database engine or the operating system is upgraded, or a failure i
## General Purpose service tier zone redundant availability (Preview)
-Zone redundant configuration for the general purpose service tier is offered for both serverless and provisioned compute. This configuration utilizes [Azure Availability Zones](../../availability-zones/az-overview.md)  to replicate databases across multiple physical locations within an Azure region. By selecting zone redundancy, you can make your new and existing serverless and provisioned general purpose single databases and elastic pools resilient to a much larger set of failures, including catastrophic datacenter outages, without any changes of the application logic.
+Zone-redundant configuration for the general purpose service tier is offered for both serverless and provisioned compute. This configuration utilizes [Azure Availability Zones](../../availability-zones/az-overview.md)  to replicate databases across multiple physical locations within an Azure region. By selecting zone-redundancy, you can make your new and existing serverless and provisioned general purpose single databases and elastic pools resilient to a much larger set of failures, including catastrophic datacenter outages, without any changes of the application logic.
-Zone redundant configuration for the general purpose tier has two layers:
+Zone-redundant configuration for the general purpose tier has two layers:
- A stateful data layer with the database files (.mdf/.ldf) that are stored in ZRS (zone-redundant storage). Using [ZRS](../../storage/common/storage-redundancy.md) the data and log files are synchronously copied across three physically-isolated Azure availability zones.
-- A stateless compute layer that runs the sqlservr.exe process and contains only transient and cached data, such as TempDB, model databases on the attached SSD, and plan cache, buffer pool, and columnstore pool in memory. This stateless node is operated by Azure Service Fabric that initializes sqlservr.exe, controls health of the node, and performs failover to another node if necessary. For zone redundant serverless and provisioned general purpose databases, nodes with spare capacity are readily available in other Availability Zones for failover.
+- A stateless compute layer that runs the sqlservr.exe process and contains only transient and cached data, such as TempDB, model databases on the attached SSD, and plan cache, buffer pool, and columnstore pool in memory. This stateless node is operated by Azure Service Fabric that initializes sqlservr.exe, controls health of the node, and performs failover to another node if necessary. For zone-redundant serverless and provisioned general purpose databases, nodes with spare capacity are readily available in other Availability Zones for failover.
-The zone redundant version of the high availability architecture for the general purpose service tier is illustrated by the following diagram:
+The zone-redundant version of the high availability architecture for the general purpose service tier is illustrated by the following diagram:
![Zone redundant configuration for general purpose](./media/high-availability-sla/zone-redundant-for-general-purpose.png)

> [!IMPORTANT]
-> Zone redundant configuration is only available when the Gen5 compute hardware is selected. This feature is not available in SQL Managed Instance. Zone redundant configuration for serverless and provisioned general purpose tier is only available in the following regions: East US, East US 2, West US 2, North Europe, West Europe, Southeast Asia, Australia East, Japan East, UK South, and France Central.
+> Zone-redundant configuration is only available when the Gen5 compute hardware is selected. This feature is not available in SQL Managed Instance. Zone-redundant configuration for serverless and provisioned general purpose tier is only available in the following regions: East US, East US 2, West US 2, North Europe, West Europe, Southeast Asia, Australia East, Japan East, UK South, and France Central.
> [!NOTE]
-> General Purpose databases with a size of 80 vcore may experience performance degradation with zone redundant configuration. Additionally, operations such as backup, restore, database copy, setting up Geo-DR relationships, and downgrading a zone redundant database from Business Critical to General Purpose may experience slower performance for any single databases larger than 1 TB. Please see our [latency documentation on scaling a database](single-database-scale.md) for more information.
+> General Purpose databases with a size of 80 vcore may experience performance degradation with zone-redundant configuration. Additionally, operations such as backup, restore, database copy, setting up Geo-DR relationships, and downgrading a zone-redundant database from Business Critical to General Purpose may experience slower performance for any single databases larger than 1 TB. Please see our [latency documentation on scaling a database](single-database-scale.md) for more information.
>
> [!NOTE]
> The preview is not covered under Reserved Instance.

## Premium and Business Critical service tier locally redundant availability
-Premium and Business Critical service tiers leverage the Premium availability model, which integrates compute resources (`sqlservr.exe` process) and storage (locally attached SSD) on a single node. High availability is achieved by replicating both compute and storage to additional nodes creating a three to four-node cluster.
+Premium and Business Critical service tiers use the Premium availability model, which integrates compute resources (`sqlservr.exe` process) and storage (locally attached SSD) on a single node. High availability is achieved by replicating both compute and storage to additional nodes, creating a three- to four-node cluster.
![Cluster of database engine nodes](./media/high-availability-sla/business-critical-service-tier.png)
-The underlying database files (.mdf/.ldf) are placed on the attached SSD storage to provide very low latency IO to your workload. High availability is implemented using a technology similar to SQL Server [Always On availability groups](/sql/database-engine/availability-groups/windows/overview-of-always-on-availability-groups-sql-server). The cluster includes a single primary replica that is accessible for read-write customer workloads, and up to three secondary replicas (compute and storage) containing copies of data. The primary node constantly pushes changes to the secondary nodes in order and ensures that the data is synchronized to at least one secondary replica before committing each transaction. This process guarantees that if the primary node crashes for any reason, there is always a fully synchronized node to fail over to. The failover is initiated by the Azure Service Fabric. Once the secondary replica becomes the new primary node, another secondary replica is created to ensure the cluster has enough nodes (quorum set). Once failover is complete, Azure SQL connections are automatically redirected to the new primary node.
+The underlying database files (.mdf/.ldf) are placed on the attached SSD storage to provide very low latency IO to your workload. High availability is implemented using a technology similar to SQL Server [Always On availability groups](/sql/database-engine/availability-groups/windows/overview-of-always-on-availability-groups-sql-server). The cluster includes a single primary replica that is accessible for read-write customer workloads, and up to three secondary replicas (compute and storage) containing copies of data. The primary node constantly pushes changes to the secondary nodes in order and ensures that the data is persisted to at least one secondary replica before committing each transaction. This process guarantees that if the primary node crashes for any reason, there is always a fully synchronized node to fail over to. The failover is initiated by the Azure Service Fabric. Once the secondary replica becomes the new primary node, another secondary replica is created to ensure the cluster has enough nodes (quorum set). Once failover is complete, Azure SQL connections are automatically redirected to the new primary node.
As an extra benefit, the premium availability model includes the ability to redirect read-only Azure SQL connections to one of the secondary replicas. This feature is called [Read Scale-Out](read-scale-out.md). It provides 100% additional compute capacity at no extra charge to off-load read-only operations, such as analytical workloads, from the primary replica.

## Premium and Business Critical service tier zone redundant availability
-By default, the cluster of nodes for the premium availability model is created in the same datacenter. With the introduction of [Azure Availability Zones](../../availability-zones/az-overview.md), SQL Database can place different replicas of the Business Critical database to different availability zones in the same region. To eliminate a single point of failure, the control ring is also duplicated across multiple zones as three gateway rings (GW). The routing to a specific gateway ring is controlled by [Azure Traffic Manager](../../traffic-manager/traffic-manager-overview.md) (ATM). Because the zone redundant configuration in the Premium or Business Critical service tiers does not create additional database redundancy, you can enable it at no extra cost. By selecting a zone redundant configuration, you can make your Premium or Business Critical databases resilient to a much larger set of failures, including catastrophic datacenter outages, without any changes to the application logic. You can also convert any existing Premium or Business Critical databases or pools to the zone redundant configuration.
+By default, the cluster of nodes for the premium availability model is created in the same datacenter. With the introduction of [Azure Availability Zones](../../availability-zones/az-overview.md), SQL Database can place different replicas of the Business Critical database to different availability zones in the same region. To eliminate a single point of failure, the control ring is also duplicated across multiple zones as three gateway rings (GW). The routing to a specific gateway ring is controlled by [Azure Traffic Manager](../../traffic-manager/traffic-manager-overview.md) (ATM). Because the zone-redundant configuration in the Premium or Business Critical service tiers does not create additional database redundancy, you can enable it at no extra cost. By selecting a zone-redundant configuration, you can make your Premium or Business Critical databases resilient to a much larger set of failures, including catastrophic datacenter outages, without any changes to the application logic. You can also convert any existing Premium or Business Critical databases or pools to the zone-redundant configuration.
-Because the zone redundant databases have replicas in different datacenters with some distance between them, the increased network latency may increase the commit time and thus impact the performance of some OLTP workloads. You can always return to the single-zone configuration by disabling the zone redundancy setting. This process is an online operation similar to the regular service tier upgrade. At the end of the process, the database or pool is migrated from a zone redundant ring to a single zone ring or vice versa.
+Because the zone-redundant databases have replicas in different datacenters with some distance between them, the increased network latency may increase the commit time and thus impact the performance of some OLTP workloads. You can always return to the single-zone configuration by disabling the zone-redundancy setting. This process is an online operation similar to the regular service tier upgrade. At the end of the process, the database or pool is migrated from a zone-redundant ring to a single zone ring or vice versa.
> [!IMPORTANT]
-> When using the Business Critical tier, zone redundant configuration is only available when the Gen5 compute hardware is selected. For up to date information about the regions that support zone redundant databases, see [Services support by region](../../availability-zones/az-region.md).
+> When using the Business Critical tier, zone-redundant configuration is only available when the Gen5 compute hardware is selected. For up-to-date information about the regions that support zone-redundant databases, see [Services support by region](../../availability-zones/az-region.md).
> [!NOTE]
> This feature is not available in SQL Managed Instance.
-The zone redundant version of the high availability architecture is illustrated by the following diagram:
+The zone-redundant version of the high availability architecture is illustrated by the following diagram:
![high availability architecture zone redundant](./media/high-availability-sla/zone-redundant-business-critical-service-tier.png)
The availability model in Hyperscale includes four layers:
- A stateless compute layer that runs the `sqlservr.exe` processes and contains only transient and cached data, such as non-covering RBPEX cache, TempDB, model database, etc. on the attached SSD, and plan cache, buffer pool, and columnstore pool in memory. This stateless layer includes the primary compute replica and optionally a number of secondary compute replicas that can serve as failover targets.
- A stateless storage layer formed by page servers. This layer is the distributed storage engine for the `sqlservr.exe` processes running on the compute replicas. Each page server contains only transient and cached data, such as covering RBPEX cache on the attached SSD, and data pages cached in memory. Each page server has a paired page server in an active-active configuration to provide load balancing, redundancy, and high availability.
-- A stateful transaction log storage layer formed by the compute node running the Log service process, the transaction log landing zone, and transaction log long term storage. Landing zone and long term storage use Azure Storage, which provides availability and [redundancy](../../storage/common/storage-redundancy.md) for transaction log, ensuring data durability for committed transactions.
+- A stateful transaction log storage layer formed by the compute node running the Log service process, the transaction log landing zone, and transaction log long-term storage. Landing zone and long-term storage use Azure Storage, which provides availability and [redundancy](../../storage/common/storage-redundancy.md) for transaction log, ensuring data durability for committed transactions.
- A stateful data storage layer with the database files (.mdf/.ndf) that are stored in Azure Storage and are updated by page servers. This layer uses data availability and [redundancy](../../storage/common/storage-redundancy.md) features of Azure Storage. It guarantees that every page in a data file will be preserved even if processes in other layers of Hyperscale architecture crash, or if compute nodes fail.

Compute nodes in all Hyperscale layers run on Azure Service Fabric, which controls health of each node and performs failovers to available healthy nodes as necessary.
For more information on high availability in Hyperscale, see [Database High Avai
## Testing application fault resiliency
-High availability is a fundamental part of the SQL Database and SQL Managed Instance platform that works transparently for your database application. However, we recognize that you may want to test how the automatic failover operations initiated during planned or unplanned events would impact an application before you deploy it to production. You can manually trigger a failover by calling a special API to restart a database, an elastic pool, or a managed instance. In the case of a zone redundant serverless or provisioned General Purpose database or elastic pool, the API call would result in redirecting client connections to the new primary in an Availability Zone different from the Availability Zone of the old primary. So in addition to testing how failover impacts existing database sessions, you can also verify if it changes the end-to-end performance due to changes in network latency. Because the restart operation is intrusive and a large number of them could stress the platform, only one failover call is allowed every 15 minutes for each database, elastic pool, or managed instance.
+High availability is a fundamental part of the SQL Database and SQL Managed Instance platform that works transparently for your database application. However, we recognize that you may want to test how the automatic failover operations initiated during planned or unplanned events would impact an application before you deploy it to production. You can manually trigger a failover by calling a special API to restart a database, an elastic pool, or a managed instance. In the case of a zone-redundant serverless or provisioned General Purpose database or elastic pool, the API call would result in redirecting client connections to the new primary in an Availability Zone different from the Availability Zone of the old primary. So in addition to testing how failover impacts existing database sessions, you can also verify if it changes the end-to-end performance due to changes in network latency. Because the restart operation is intrusive and a large number of them could stress the platform, only one failover call is allowed every 15 minutes for each database, elastic pool, or managed instance.
A failover can be initiated using PowerShell, REST API, or Azure CLI:
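Under the hood, the restart-based failover is exposed as an ARM "Databases - Failover" REST operation. A minimal sketch of building that request URL (the API version and all resource names below are placeholder assumptions, and an actual POST requires an Azure AD bearer token, e.g. obtained with the azure-identity package, which is omitted here):

```python
# Sketch of the management endpoint used to trigger a test failover of a
# single database. URL shape follows the ARM "Databases - Failover" operation;
# the api-version and resource names are assumptions for illustration.

API_VERSION = "2021-02-01-preview"  # assumed; check the current REST reference

def failover_url(subscription_id, resource_group, server, database):
    """Build the POST URL that restarts (fails over) one database."""
    return (
        "https://management.azure.com"
        f"/subscriptions/{subscription_id}"
        f"/resourceGroups/{resource_group}"
        f"/providers/Microsoft.Sql/servers/{server}"
        f"/databases/{database}/failover"
        f"?api-version={API_VERSION}"
    )
```

Remember the throttle described above: only one failover call is allowed every 15 minutes for each database, elastic pool, or managed instance.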
## Conclusion
-Azure SQL Database and Azure SQL Managed Instance feature a built-in high availability solution, that is deeply integrated with the Azure platform. It is dependent on Service Fabric for failure detection and recovery, on Azure Blob storage for data protection, and on Availability Zones for higher fault tolerance (as mentioned earlier in document not applicable to Azure SQL Managed Instance yet). In addition, SQL Database and SQL Managed Instance leverage the Always On availability group technology from the SQL Server instance for replication and failover. The combination of these technologies enables applications to fully realize the benefits of a mixed storage model and support the most demanding SLAs.
+Azure SQL Database and Azure SQL Managed Instance feature a built-in high availability solution that is deeply integrated with the Azure platform. It is dependent on Service Fabric for failure detection and recovery, on Azure Blob storage for data protection, and on Availability Zones for higher fault tolerance (as mentioned earlier in this document, not yet applicable to Azure SQL Managed Instance). In addition, SQL Database and SQL Managed Instance use the Always On availability group technology from the SQL Server instance for replication and failover. The combination of these technologies enables applications to fully realize the benefits of a mixed storage model and support the most demanding SLAs.
## Next steps
azure-sql Read Scale Out https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/read-scale-out.md
Previously updated : 11/5/2021 Last updated : 1/20/2022

# Use read-only replicas to offload read-only query workloads [!INCLUDE[appliesto-sqldb-sqlmi](../includes/appliesto-sqldb-sqlmi.md)]
If you wish to ensure that the application connects to the primary replica regar
## Data consistency
-Data changes made on the primary replica propagate to read-only replicas asynchronously. Within a session connected to a read-only replica, reads are always transactionally consistent. However, because data propagation latency is variable, different replicas can return data at slightly different points in time relative to the primary and each other. If a read-only replica becomes unavailable and the session reconnects, it may connect to a replica that is at a different point in time than the original replica. Likewise, if an application changes data using a read-write session and immediately reads it using a read-only session, it is possible that the latest changes are not immediately visible on the read-only replica.
+Data changes made on the primary replica are persisted on read-only replicas synchronously or asynchronously depending on replica type. However, for all replica types, reads from a read-only replica are always asynchronous with respect to the primary. Within a session connected to a read-only replica, reads are always transactionally consistent. Because data propagation latency is variable, different replicas can return data at slightly different points in time relative to the primary and each other. If a read-only replica becomes unavailable and a session reconnects, it may connect to a replica that is at a different point in time than the original replica. Likewise, if an application changes data using a read-write session on the primary and immediately reads it using a read-only session on a read-only replica, it is possible that the latest changes will not be immediately visible.
Typical data propagation latency between the primary replica and read-only replicas varies in the range from tens of milliseconds to single-digit seconds. However, there is no fixed upper bound on data propagation latency. Conditions such as high resource utilization on the replica can increase latency substantially. Applications that require guaranteed data consistency across sessions, or require committed data to be readable immediately should use the primary replica.
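One way to observe the consistency behavior described above is to compare a read-write session against a read-only one. A minimal sketch of building the two connection strings (the server, database, and driver values are placeholder assumptions; `ApplicationIntent=ReadOnly` is the connection string keyword that routes a session to a read-only replica when Read Scale-Out is enabled):

```python
# Minimal sketch contrasting a read-write and a read-only session string.
# Driver/server/database values are illustrative placeholders; a real
# connection would pass this string to an ODBC driver (e.g. via pyodbc).

def connection_string(server, database, read_only=False):
    parts = [
        "Driver={ODBC Driver 18 for SQL Server}",  # assumed driver name
        f"Server=tcp:{server},1433",
        f"Database={database}",
    ]
    if read_only:
        # Routes the session to a read-only replica. Reads here may lag the
        # primary, so read-your-own-writes across sessions is not guaranteed.
        parts.append("ApplicationIntent=ReadOnly")
    return ";".join(parts)
```

Applications that need to read their own just-committed writes should use the default (read-write) string, which connects to the primary replica.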
azure-sql Saas Dbpertenant Get Started Deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/saas-dbpertenant-get-started-deploy.md
Choose your names now, and write them down.
1. To open the Wingtip Tickets SaaS database-per-tenant deployment template in the Azure portal, select **Deploy to Azure**.
- [![Image showing a button labeled "Deploy to Azure".](https://azuredeploy.net/deploybutton.png)](https://aka.ms/deploywingtipdpt)
+ [![Image showing a button labeled "Deploy to Azure".](../../media/template-deployments/deploy-to-azure.svg)](https://aka.ms/deploywingtipdpt)
1. Enter values in the template for the required parameters.
azure-sql Connect Vm Instance Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/connect-vm-instance-configure.md
The easiest way to create a client virtual machine with all necessary tools is t
1. Make sure that you're signed in to the Azure portal in another browser tab. Then, select the following button to create a client virtual machine and install SQL Server Management Studio:
- [![Image showing a button labeled "Deploy to Azure".](https://azuredeploy.net/deploybutton.png)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2Fjovanpop-msft%2Fazure-quickstart-templates%2Fsql-win-vm-w-tools%2F201-vm-win-vnet-sql-tools%2Fazuredeploy.json)
+ [![Image showing a button labeled "Deploy to Azure".](../../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2Fjovanpop-msft%2Fazure-quickstart-templates%2Fsql-win-vm-w-tools%2F201-vm-win-vnet-sql-tools%2Fazuredeploy.json)
2. Fill out the form using the information in the following table:
azure-sql Management Operations Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/management-operations-overview.md
The following tables summarize operations and typical overall durations, based o
|Operation |Long-running segment |Estimated duration |
||||
|First instance in an empty subnet|Virtual cluster creation|90% of operations finish in 4 hours.|
-|First instance of another hardware generation in a non-empty subnet (for example, first Gen5 instance in a subnet with Gen4 instances)|Virtual cluster creation<sup>1</sup>|90% of operations finish in 4 hours.|
+|First instance of another hardware generation or maintenance window in a non-empty subnet (for example, first Premium series instance in a subnet with Standard series instances)|Virtual cluster creation<sup>1</sup>|90% of operations finish in 4 hours.|
|Subsequent instance creation within the non-empty subnet (2nd, 3rd, etc. instance)|Virtual cluster resizing|90% of operations finish in 2.5 hours.|
| | |
The following tables summarize operations and typical overall durations, based o
|Operation |Long-running segment |Estimated duration |
||||
|Instance property change (admin password, Azure AD login, Azure Hybrid Benefit flag)|N/A|Up to 1 minute.|
-|Instance storage scaling up/down (General Purpose service tier)|No long-running segment|99% of operations finish in 5 minutes.|
-|Instance storage scaling up/down (Business Critical service tier)|- Virtual cluster resizing<br>- Always On availability group seeding|90% of operations finish in 2.5 hours + time to seed all databases (220 GB/hour).|
-|Instance compute (vCores) scaling up and down (General Purpose)|- Virtual cluster resizing<br>- Attaching database files|90% of operations finish in 2.5 hours.|
+|Instance storage scaling up/down (General Purpose)|No long-running segment|99% of operations finish in 5 minutes.|
+|Instance storage scaling up/down (Business Critical)|- Virtual cluster resizing<br>- Always On availability group seeding|90% of operations finish in 2.5 hours + time to seed all databases (220 GB/hour).|
+|Instance compute (vCores) scaling up and down (General Purpose)|- Virtual cluster resizing|90% of operations finish in 2.5 hours.|
|Instance compute (vCores) scaling up and down (Business Critical)|- Virtual cluster resizing<br>- Always On availability group seeding|90% of operations finish in 2.5 hours + time to seed all databases (220 GB/hour).|
|Instance service tier change (General Purpose to Business Critical and vice versa)|- Virtual cluster resizing<br>- Always On availability group seeding|90% of operations finish in 2.5 hours + time to seed all databases (220 GB/hour).|
-| | |
+|Instance hardware generation or maintenance window change (General Purpose)|- Virtual cluster creation or resizing<sup>1</sup>|90% of operations finish in 4 hours (creation) or 2.5 hours (resizing).|
+|Instance hardware generation or maintenance window change (Business Critical)|- Virtual cluster creation or resizing<sup>1</sup><br>- Always On availability group seeding|90% of operations finish in 4 hours (creation) or 2.5 hours (resizing) + time to seed all databases (220 GB/hour).|
+| | |
+
+<sup>1</sup> Managed instance must be placed in a virtual cluster with the corresponding hardware generation and maintenance window. If there is no such virtual cluster in the subnet, a new one must be created first to accommodate the instance.
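As a rough back-of-the-envelope aid, the typical durations quoted in the tables can be combined into a simple estimator. This is an illustrative sketch using only the figures above (virtual cluster resizing ~2.5 hours, creation ~4 hours, availability group seeding at roughly 220 GB/hour); actual durations vary and these are not guarantees:

```python
# Illustrative estimator of typical managed instance operation durations,
# built from the "90% of operations" figures quoted in the tables above.

SEEDING_GB_PER_HOUR = 220  # Always On availability group seeding rate

def estimated_hours(total_db_size_gb=0, cluster_creation=False):
    """Typical duration: base virtual cluster operation + seeding time.

    Pass total_db_size_gb=0 for operations with no seeding segment
    (e.g. General Purpose compute scaling).
    """
    base = 4.0 if cluster_creation else 2.5
    return base + total_db_size_gb / SEEDING_GB_PER_HOUR
```

For example, a Business Critical compute scaling operation with 440 GB of databases to seed would typically take about 2.5 + 2 = 4.5 hours.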
**Category: Delete**
azure-sql Virtual Network Subnet Create Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/virtual-network-subnet-create-arm-template.md
Azure SQL Managed Instance must be deployed within an Azure [virtual network](..
>
> If you plan to use an existing virtual network, you need to modify that network configuration to accommodate SQL Managed Instance. For more information, see [Modify an existing virtual network for SQL Managed Instance](vnet-existing-add-subnet.md).
>
-> After a managed instance is created, moving the managed instance or virtual network to another resource group or subscription is not supported. Moving the managed instance to another subnet also is not supported.
->
+> After a managed instance is created, moving the managed instance or virtual network to another resource group or subscription is not supported.
+
+> [!IMPORTANT]
+> You can [move the instance to another subnet inside the VNet](vnet-subnet-move-instance.md).
## Create a virtual network
The easiest way to create and configure a virtual network is to use an Azure Res
2. Select the **Deploy to Azure** button:
- [![Image showing a button labeled "Deploy to Azure".](https://azuredeploy.net/deploybutton.png)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.sql%2Fsql-managed-instance-azure-environment%2Fazuredeploy.json)
+ [![Image showing a button labeled "Deploy to Azure".](../../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.sql%2Fsql-managed-instance-azure-environment%2Fazuredeploy.json)
This button opens a form that you can use to configure the network environment where you can deploy SQL Managed Instance.
azure-sql Vnet Existing Add Subnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/vnet-existing-add-subnet.md
If one of the following cases applies to you, you can validate and modify your n
> [!Note]
> You can create a managed instance only in virtual networks created through the Azure Resource Manager deployment model. Azure virtual networks created through the classic deployment model are not supported. Calculate subnet size by following the guidelines in the [Determine the size of subnet for SQL Managed Instance](vnet-subnet-determine-size.md) article. You can't resize the subnet after you deploy the resources inside.
>
-> After the managed instance is created, moving the instance or VNet to another resource group or subscription is not supported.
+> After the managed instance is created, you can [move the instance to another subnet inside the VNet](vnet-subnet-move-instance.md), but moving the instance or VNet to another resource group or subscription is not supported.
## Validate and modify an existing virtual network
azure-sql Vnet Subnet Determine Size https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/vnet-subnet-determine-size.md
Previously updated : 12/06/2021 Last updated : 01/21/2022

# Determine required subnet size and range for Azure SQL Managed Instance [!INCLUDE[appliesto-sqlmi](../includes/appliesto-sqlmi.md)]

Azure SQL Managed Instance must be deployed within an Azure [virtual network](../../virtual-network/virtual-networks-overview.md). The number of managed instances that can be deployed in the subnet of a virtual network depends on the size of the subnet (subnet range).
-When you create a managed instance, Azure allocates a number of virtual machines that depends on the tier you selected during provisioning. Because these virtual machines are associated with your subnet, they require IP addresses. To ensure high availability during regular operations and service maintenance, Azure might allocate more virtual machines. The number of required IP addresses in a subnet then becomes larger than the number of managed instances in that subnet.
+When you create a managed instance, Azure allocates a number of virtual machines that depend on the tier you selected during provisioning. Because these virtual machines are associated with your subnet, they require IP addresses. To ensure high availability during regular operations and service maintenance, Azure might allocate more virtual machines. The number of required IP addresses in a subnet then becomes larger than the number of managed instances in that subnet.
By design, a managed instance needs a minimum of 32 IP addresses in a subnet. As a result, you can use a minimum subnet mask of /27 when defining your subnet IP ranges. We recommend careful planning of subnet size for your managed instance deployments. Consider the following inputs during planning:

- Number of managed instances, including the following instance parameters:
 - Service tier
- - Hardware generation
- Number of vCores
+ - [Hardware generation](resource-limits.md#hardware-generation-characteristics)
 - [Maintenance window](../database/maintenance-window.md)
-- Plans to scale up/down or change the service tier
+- Plans to scale up/down or change the service tier, hardware generation, or maintenance window
> [!IMPORTANT]
> A subnet size of 16 IP addresses (subnet mask /28) allows the deployment of a single managed instance inside it. It should be used only for evaluation or for dev/test scenarios where scaling operations won't be performed.
Size your subnet according to your future needs for instance deployment and scal
- Azure uses five IP addresses in the subnet for its own needs.
- Each virtual cluster allocates an additional number of addresses.
-- Each managed instance uses a number of addresses that depends on pricing tier and hardware generation.
+- Each managed instance uses a number of addresses that depend on pricing tier and hardware generation.
- Each scaling request temporarily allocates an additional number of addresses. > [!IMPORTANT]
GP = general purpose;
BC = business critical; VC = virtual cluster
-| **Hardware generation** | **Pricing tier** | **Azure usage** | **VC usage** | **Instance usage** | **Total** |
-| | | | | | |
-| Gen4 | GP | 5 | 1 | 5 | 11 |
-| Gen4 | BC | 5 | 1 | 5 | 11 |
-| Gen5 | GP | 5 | 6 | 3 | 14 |
-| Gen5 | BC | 5 | 6 | 5 | 16 |
+| **Pricing tier** | **Azure usage** | **VC usage** | **Instance usage** | **Total** |
+| | | | | |
+| GP | 5 | 6 | 3 | 14 |
+| BC | 5 | 6 | 5 | 16 |
In the preceding table:

-- The **Total** column displays the total number of addresses that are used by a single deployed instance to the subnet.
-- When you add more instances to the subnet, the number of addresses used by the instance increases. The total number of addresses then also increases. For example, adding another Gen4 GP managed instance would increase the **Instance usage** value to 10 and would increase the **Total** value of used addresses to 16.
-- Addresses represented in the **Azure usage** column are shared across multiple virtual clusters.
+- The **Total** column displays the total number of addresses that are used by a single instance deployed to the subnet.
+- When you add more instances to the subnet, the number of addresses used by the instance increases. The total number of addresses then also increases.
+- Addresses represented in the **Azure usage** column are shared across multiple virtual clusters.
- Addresses represented in the **VC usage** column are shared across instances placed in that virtual cluster.

Also consider the [maintenance window feature](../database/maintenance-window.md) when you're determining the subnet size, especially when multiple instances will be deployed inside the same subnet. Specifying a maintenance window for a managed instance during its creation or afterward means that it must be placed in a virtual cluster with the corresponding maintenance window. If there is no such virtual cluster in the subnet, a new one must be created first to accommodate the instance.
+The same scenario as for the maintenance window applies to changing the [hardware generation](resource-limits.md#hardware-generation-characteristics), because a virtual cluster is built per hardware generation. When you create a new instance or change the hardware generation of an existing instance, if there is no virtual cluster for that hardware generation in the subnet, a new one must be created first to accommodate the instance.
+ An update operation typically requires [resizing the virtual cluster](management-operations-overview.md). When a new create or update request comes in, the SQL Managed Instance service communicates with the compute platform to request the new nodes that need to be added. Based on the compute response, the deployment system either expands the existing virtual cluster or creates a new one. Although in most cases the operation is completed within the same virtual cluster, a new one might be created on the compute side.

## Update scenarios
-During a scaling operation, instances temporarily require additional IP capacity that depends on pricing tier and hardware generation:
-
-| **Hardware generation** | **Pricing tier** | **Scenario** | **Additional addresses** |
-| | | | |
-| Gen4<sup>1</sup> | GP or BC | Scaling vCores | 5 |
-| Gen4<sup>1</sup> | GP or BC | Scaling storage | 5 |
-| Gen4 | GP or BC | Switching from GP to BC or BC to GP | 5 |
-| Gen4 | GP | Switching to Gen5 | 9 |
-| Gen4 | BC | Switching to Gen5 | 11 |
-| Gen5 | GP | Scaling vCores | 3 |
-| Gen5 | GP | Scaling storage | 0 |
-| Gen5 | GP | Switching to BC | 5 |
-| Gen5 | BC | Scaling vCores | 5 |
-| Gen5 | BC | Scaling storage | 5 |
-| Gen5 | BC | Switching to GP | 3 |
-
-<sup>1</sup> Gen4 hardware is being phased out and is no longer available for new deployments. Updating the hardware generation from Gen4 to Gen5 will take advantage of capabilities specific to Gen5.
-
+During a scaling operation, instances temporarily require additional IP capacity that depends on pricing tier:
+
+| **Pricing tier** | **Scenario** | **Additional addresses** |
+| | | |
+| GP | Scaling vCores | 3 |
+| GP | Scaling storage | 0 |
+| GP | Switching to BC | 5 |
+| BC | Scaling vCores | 5 |
+| BC | Scaling storage | 5 |
+| BC | Switching to GP | 3 |
+ ## Calculate the number of IP addresses
-We recommend the following formula for calculating the total number of IP addresses. This formula takes into account the potential creation of a new virtual cluster during a later create request or instance update. It also takes into account the maintenance window requirements of virtual clusters.
+We recommend the following formula for calculating the total number of IP addresses. This formula takes into account the potential creation of a new virtual cluster during a later create request or instance update. It also takes into account the maintenance window and hardware generation requirements of virtual clusters.
**Formula: 5 + (a * 12) + (b * 16) + (c * 16)**

- a = number of GP instances
- b = number of BC instances
-- c = number of different maintenance window configurations
+- c = number of different maintenance window configurations and hardware generations
Explanation:

- 5 = number of IP addresses reserved by Azure
-- 12 addresses per GP instance = 6 for virtual cluster, 3 for managed instance, 3 additional for scaling operation
-- 16 addresses per BC instance = 6 for virtual cluster, 5 for managed instance, 5 additional for scaling operation
+- 12 addresses per GP instance = 6 for virtual cluster, 3 for managed instance, 3 more for scaling operation
+- 16 addresses per BC instance = 6 for virtual cluster, 5 for managed instance, 5 more for scaling operation
- 16 addresses as a backup = scenario where new virtual cluster is created

Example:
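As a minimal sketch (not from the original article; the instance counts below are hypothetical), the formula can be applied in code:

```python
def required_ip_addresses(gp_instances: int, bc_instances: int, configurations: int) -> int:
    """Apply the formula 5 + (a * 12) + (b * 16) + (c * 16).

    a = number of GP instances, b = number of BC instances,
    c = number of different maintenance window configurations and hardware generations.
    """
    return 5 + (gp_instances * 12) + (bc_instances * 16) + (configurations * 16)

# Hypothetical deployment: 3 GP instances, 2 BC instances, and 2 distinct
# maintenance window / hardware generation combinations.
print(required_ip_addresses(3, 2, 2))  # 5 + 36 + 32 + 32 = 105
```

A result of 105 addresses would call for at least a /25 subnet (128 addresses).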
azure-sql Availability Group Manually Configure Tutorial Single Subnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/availability-group-manually-configure-tutorial-single-subnet.md
Add the other SQL Server to the cluster.
### Add a cluster quorum file share
-In this example, the Windows cluster uses a file share to create a cluster quorum. This tutorial uses a Node and File Share Majority quorum. For more information, see [Understanding Quorum Configurations in a Failover Cluster](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc731739(v=ws.11)).
+In this example, the Windows cluster uses a file share to create a cluster quorum. This tutorial uses a Node and File Share Majority quorum. For more information, see [Configure and Manage Quorum](/windows-server/failover-clustering/manage-cluster-quorum).
1. Connect to the file share witness member server with a remote desktop session.
In this example, the Windows cluster uses a file share to create a cluster quorum.
Next, set the cluster quorum.
+ > [!NOTE]
+ > Depending on the configuration of your availability group, it might be necessary to change the quorum vote of a node participating in the Windows Server Failover Cluster. For more information, see [Configure Cluster Quorum for SQL Server on Azure VMs](hadr-cluster-quorum-configure-how-to.md).
+ >
+
1. Connect to the first cluster node with remote desktop.
1. In **Failover Cluster Manager**, right-click the cluster, point to **More Actions**, and select **Configure Cluster Quorum Settings...**.
To learn more, see:
- [Windows Server Failover Cluster with SQL Server on Azure VMs](hadr-windows-server-failover-cluster-overview.md)
- [Always On availability groups with SQL Server on Azure VMs](availability-group-overview.md)
- [Always On availability groups overview](/sql/database-engine/availability-groups/windows/overview-of-always-on-availability-groups-sql-server)
-- [HADR settings for SQL Server on Azure VMs](hadr-cluster-best-practices.md)
+- [HADR settings for SQL Server on Azure VMs](hadr-cluster-best-practices.md)
azure-sql Failover Cluster Instance Dnn Interoperability https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/failover-cluster-instance-dnn-interoperability.md
For client access, the **Failover Partner** property can handle database mirroring.
## MSDTC
-The FCI can participate in distributed transactions coordinated by Microsoft Distributed Transaction Coordinator (MSDTC). Though both clustered MSDTC and local MSDTC are supported with FCI DNN, in Azure, a load balancer is still necessary for clustered MSDTC. The DNN defined in the FCI does not replace the Azure Load Balancer requirement for the clustered MSDTC in Azure.
+The FCI can participate in distributed transactions coordinated by Microsoft Distributed Transaction Coordinator (MSDTC). Clustered MSDTC and local MSDTC are supported with FCI DNN. In Azure, an Azure Load Balancer is necessary for a clustered MSDTC deployment.
+
+> [!TIP]
> The DNN defined in the FCI does not replace the Azure Load Balancer requirement for the clustered MSDTC.
## FileStream
azure-web-pubsub Tutorial Build Chat https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-web-pubsub/tutorial-build-chat.md
Implement the `OnMessageReceivedAsync()` method in `SampleChatHub`.
{
    await _serviceClient.SendToAllAsync($"[{request.ConnectionContext.UserId}] {request.Data}");
- return request.CreateResponse($"[SYSTEM] ack."));
+ return request.CreateResponse($"[SYSTEM] ack.");
    }
}
```
cognitive-services Copy Move Projects https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Custom-Vision-Service/copy-move-projects.md
Title: Copy and move Custom Vision projects
+ Title: Copy and back up Custom Vision projects
-description: Learn how to use the ExportProject and ImportProject APIs to copy and move your Custom Vision projects.
+description: Learn how to use the ExportProject and ImportProject APIs to copy and back up your Custom Vision projects.
Previously updated : 09/08/2020 Last updated : 01/20/2022
-# Copy and move your Custom Vision projects
+# Copy and back up your Custom Vision projects
-After you've created and trained a Custom Vision project, you may want to copy your project to another resource. For example, you might want to move a project from a development to production environment, or back up a project to an account in a different Azure region for increased data security.
+After you've created and trained a Custom Vision project, you may want to copy your project to another resource. If your app or business depends on the use of a Custom Vision project, we recommend you copy your model to another Custom Vision account in another region. Then if a regional outage occurs, you can access your project in the region where it was copied.
+
+As a part of Azure, the Custom Vision service has components that are maintained across multiple regions. Service zones and regions are used by all of our services to provide continued service to our customers. For more information on zones and regions, see [Azure regions](/azure/availability-zones/az-overview). If you need additional information or have any issues, [contact support](/answers/topics/azure-custom-vision.html).
The **[ExportProject](https://southcentralus.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc6548b571998fddeb3)** and **[ImportProject](https://southcentralus.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc7548b571998fddee3)** APIs enable this scenario by allowing you to copy projects from one Custom Vision account into others. This guide shows you how to use these REST APIs with cURL. You can also use an HTTP request service like Postman to issue the requests.

> [!TIP]
> For an example of this scenario using the Python client library, see the [Move Custom Vision Project](https://github.com/Azure-Samples/custom-vision-move-project/tree/master/) repository on GitHub.
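As a rough sketch of the request shapes (the endpoint hosts and project ID below are placeholders, and the routes are assumptions based on the v3.3 training API reference linked above), the two calls can be composed like this:

```python
# Sketch only: hosts and project ID are hypothetical placeholders.
SOURCE_ENDPOINT = "https://southcentralus.api.cognitive.microsoft.com"
TARGET_ENDPOINT = "https://westeurope.api.cognitive.microsoft.com"
PROJECT_ID = "00000000-0000-0000-0000-000000000000"

def export_url(endpoint: str, project_id: str) -> str:
    # ExportProject is issued against the source resource and returns a token.
    return f"{endpoint}/customvision/v3.3/Training/projects/{project_id}/export"

def import_url(endpoint: str, token: str) -> str:
    # ImportProject is issued against the target resource, consuming that token.
    return f"{endpoint}/customvision/v3.3/Training/projects/import?token={token}"

print(export_url(SOURCE_ENDPOINT, PROJECT_ID))
```

Each request is authenticated with the training key of the respective resource; check the linked API reference for the exact headers and parameters.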
-## Business scenarios
-
-If your app or business depends on the use of a Custom Vision project, we recommend you copy your model to another Custom Vision account in another region. Then if a regional outage occurs, you can access your project in the region where it was copied.
## Prerequisites
cognitive-services Storage Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Custom-Vision-Service/storage-integration.md
The `"exportStatus"` field may be either `"ExportCompleted"` or `"ExportFailed"`
## Next steps
-In this guide, you learned how to copy and move a project between Custom Vision resources. Next, explore the API reference docs to see what else you can do with Custom Vision.
+In this guide, you learned how to copy and back up a project to another Custom Vision resource. Next, explore the API reference docs to see what else you can do with Custom Vision.
* [REST API reference documentation (training)](https://southcentralus.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc6548b571998fddeb3)
* [REST API reference documentation (prediction)](https://southcentralus.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Prediction_3.1/operations/5eb37d24548b571998fde5f3)
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/language-support.md
# Language and voice support for the Speech service
-Language support varies by Speech service functionality. The following tables summarize language support for [Speech-to-Text](#speech-to-text), [Text-to-Speech](#text-to-speech), [Speech translation](#speech-translation), and [Speaker Recognition](#speaker-recognition) service offerings.
+Language support varies by Speech service functionality. The following tables summarize language support for [speech-to-text](#speech-to-text), [text-to-speech](#text-to-speech), [speech translation](#speech-translation), and [speaker recognition](#speaker-recognition) service offerings.
-## Speech-to-Text
+## Speech-to-text
-Both the Microsoft Speech SDK and the REST API support the following languages (locales).
-
-To improve accuracy, customization is available for some languages and baseline model versions by uploading **Audio + Human-labeled Transcripts**, **Plain Text**, **Structured Text**, and **Pronunciation**. By default, Plain Text customization is supported for all available baseline models. To learn more about customization, see [Get started with Custom Speech](./custom-speech-overview.md).
+Both the Microsoft Speech SDK and the REST API support the languages (locales) in the following table.
+To improve accuracy, customization is available for some languages and baseline model versions by uploading audio + human-labeled transcripts, plain text, structured text, and pronunciation. By default, plain text customization is supported for all available baseline models. To learn more about customization, see [Get started with Custom Speech](./custom-speech-overview.md).
| Language | Locale (BCP-47) | Customizations |
|--|--|--|
-| Arabic (Algeria) | `ar-DZ` | Plain Text |
-| Arabic (Bahrain), modern standard | `ar-BH` | Plain Text |
-| Arabic (Egypt) | `ar-EG` | Plain Text |
-| Arabic (Iraq) | `ar-IQ` | Plain Text |
-| Arabic (Israel) | `ar-IL` | Plain Text |
-| Arabic (Jordan) | `ar-JO` | Plain Text |
-| Arabic (Kuwait) | `ar-KW` | Plain Text |
-| Arabic (Lebanon) | `ar-LB` | Plain Text |
-| Arabic (Libya) | `ar-LY` | Plain Text |
-| Arabic (Morocco) | `ar-MA` | Plain Text |
-| Arabic (Oman) | `ar-OM` | Plain Text |
-| Arabic (Qatar) | `ar-QA` | Plain Text |
-| Arabic (Saudi Arabia) | `ar-SA` | Plain Text |
-| Arabic (Palestinian Authority) | `ar-PS` | Plain Text |
-| Arabic (Syria) | `ar-SY` | Plain Text |
-| Arabic (Tunisia) | `ar-TN` | Plain Text |
-| Arabic (United Arab Emirates) | `ar-AE` | Plain Text |
-| Arabic (Yemen) | `ar-YE` | Plain Text |
-| Bulgarian (Bulgaria) | `bg-BG` | Plain Text |
-| Catalan (Spain) | `ca-ES` | Plain Text<br/>Pronunciation |
-| Chinese (Cantonese, Traditional) | `zh-HK` | Plain Text |
-| Chinese (Mandarin, Simplified) | `zh-CN` | Plain Text |
-| Chinese (Taiwanese Mandarin) | `zh-TW` | Plain Text |
-| Croatian (Croatia) | `hr-HR` | Plain Text<br/>Pronunciation |
-| Czech (Czech) | `cs-CZ` | Plain Text<br/>Pronunciation |
-| Danish (Denmark) | `da-DK` | Plain Text<br/>Pronunciation |
-| Dutch (Netherlands) | `nl-NL` | Plain Text<br/>Pronunciation |
-| English (Australia) | `en-AU` | Plain Text<br/>Pronunciation |
-| English (Canada) | `en-CA` | Plain Text<br/>Pronunciation |
-| English (Ghana) | `en-GH` | Plain Text<br/>Pronunciation |
-| English (Hong Kong) | `en-HK` | Plain Text<br/>Pronunciation |
-| English (India) | `en-IN` | Plain Text<br>Structured Text (20210907)<br>Pronunciation |
-| English (Ireland) | `en-IE` | Plain Text<br/>Pronunciation |
-| English (Kenya) | `en-KE` | Plain Text<br/>Pronunciation |
-| English (New Zealand) | `en-NZ` | Plain Text<br/>Pronunciation |
-| English (Nigeria) | `en-NG` | Plain Text<br/>Pronunciation |
-| English (Philippines) | `en-PH` | Plain Text<br/>Pronunciation |
-| English (Singapore) | `en-SG` | Plain Text<br/>Pronunciation |
-| English (South Africa) | `en-ZA` | Plain Text<br/>Pronunciation |
-| English (Tanzania) | `en-TZ` | Plain Text<br/>Pronunciation |
-| English (United Kingdom) | `en-GB` | Audio (20201019)<br>Plain Text<br>Structured Text (20210906)<br>Pronunciation |
-| English (United States) | `en-US` | Audio (20201019, 20210223)<br>Plain Text<br>Structured Text (20211012)<br>Pronunciation |
-| Estonian(Estonia) | `et-EE` | Plain Text<br/>Pronunciation |
-| Filipino (Philippines) | `fil-PH` | Plain Text<br/>Pronunciation |
-| Finnish (Finland) | `fi-FI` | Plain Text<br/>Pronunciation |
-| French (Canada) | `fr-CA` | Audio (20201015)<br>Plain Text<br>Structured Text (20210908)<br>Pronunciation |
-| French (France) | `fr-FR` | Audio (20201015)<br>Plain Text<br>Structured Text (20210908)<br>Pronunciation |
-| French (Switzerland) | `fr-CH` | Plain Text<br/>Pronunciation |
-| German (Austria) | `de-AT` | Plain Text<br/>Pronunciation |
-| German (Switzerland) | `de-CH` | Plain Text<br/>Pronunciation |
-| German (Germany) | `de-DE` | Audio (20201127)<br>Plain Text<br>Structured Text (20210831)<br>Pronunciation |
-| Greek (Greece) | `el-GR` | Plain Text |
-| Gujarati (Indian) | `gu-IN` | Plain Text |
-| Hebrew (Israel) | `he-IL` | Plain Text |
-| Hindi (India) | `hi-IN` | Plain Text |
-| Hungarian (Hungary) | `hu-HU` | Plain Text<br/>Pronunciation |
-| Indonesian (Indonesia) | `id-ID` | Plain Text<br/>Pronunciation |
-| Irish (Ireland) | `ga-IE` | Plain Text<br/>Pronunciation |
-| Italian (Italy) | `it-IT` | Audio (20201016)<br>Plain Text<br>Pronunciation |
-| Japanese (Japan) | `ja-JP` | Plain Text |
-| Kannada (India) | `kn-IN` | Plain Text |
-| Korean (Korea) | `ko-KR` | Audio (20201015)<br>Plain Text |
-| Latvian (Latvia) | `lv-LV` | Plain Text<br/>Pronunciation |
-| Lithuanian (Lithuania) | `lt-LT` | Plain Text<br/>Pronunciation |
-| Malay (Malaysia) | `ms-MY` | Plain Text |
-| Maltese (Malta) | `mt-MT` | Plain Text |
-| Marathi (India) | `mr-IN` | Plain Text |
-| Norwegian (Bokmål, Norway) | `nb-NO` | Plain Text |
-| Persian (Iran) | `fa-IR` | Plain Text |
-| Polish (Poland) | `pl-PL` | Plain Text<br/>Pronunciation |
-| Portuguese (Brazil) | `pt-BR` | Audio (20201015)<br>Plain Text<br>Pronunciation |
-| Portuguese (Portugal) | `pt-PT` | Plain Text<br/>Pronunciation |
-| Romanian (Romania) | `ro-RO` | Plain Text<br/>Pronunciation |
-| Russian (Russia) | `ru-RU` | Plain Text |
-| Slovak (Slovakia) | `sk-SK` | Plain Text<br/>Pronunciation |
-| Slovenian (Slovenia) | `sl-SI` | Plain Text<br/>Pronunciation |
-| Spanish (Argentina) | `es-AR` | Plain Text<br/>Pronunciation |
-| Spanish (Bolivia) | `es-BO` | Plain Text<br/>Pronunciation |
-| Spanish (Chile) | `es-CL` | Plain Text<br/>Pronunciation |
-| Spanish (Colombia) | `es-CO` | Plain Text<br/>Pronunciation |
-| Spanish (Costa Rica) | `es-CR` | Plain Text<br/>Pronunciation |
-| Spanish (Cuba) | `es-CU` | Plain Text<br/>Pronunciation |
-| Spanish (Dominican Republic) | `es-DO` | Plain Text<br/>Pronunciation |
-| Spanish (Ecuador) | `es-EC` | Plain Text<br/>Pronunciation |
-| Spanish (El Salvador) | `es-SV` | Plain Text<br/>Pronunciation |
-| Spanish (Equatorial Guinea) | `es-GQ` | Plain Text |
-| Spanish (Guatemala) | `es-GT` | Plain Text<br/>Pronunciation |
-| Spanish (Honduras) | `es-HN` | Plain Text<br/>Pronunciation |
-| Spanish (Mexico) | `es-MX` | Plain Text<br>Structured Text (20210908)<br>Pronunciation |
-| Spanish (Nicaragua) | `es-NI` | Plain Text<br/>Pronunciation |
-| Spanish (Panama) | `es-PA` | Plain Text<br/>Pronunciation |
-| Spanish (Paraguay) | `es-PY` | Plain Text<br/>Pronunciation |
-| Spanish (Peru) | `es-PE` | Plain Text<br/>Pronunciation |
-| Spanish (Puerto Rico) | `es-PR` | Plain Text<br/>Pronunciation |
-| Spanish (Spain) | `es-ES` | Audio (20201015)<br>Plain Text<br>Structured Text (20210908)<br>Pronunciation |
-| Spanish (Uruguay) | `es-UY` | Plain Text<br/>Pronunciation |
-| Spanish (USA) | `es-US` | Plain Text<br/>Pronunciation |
-| Spanish (Venezuela) | `es-VE` | Plain Text<br/>Pronunciation |
-| Swahili (Kenya) | `sw-KE` | Plain Text |
-| Swedish (Sweden) | `sv-SE` | Plain Text<br/>Pronunciation |
-| Tamil (India) | `ta-IN` | Plain Text |
-| Telugu (India) | `te-IN` | Plain Text |
-| Thai (Thailand) | `th-TH` | Plain Text |
-| Turkish (Turkey) | `tr-TR` | Plain Text |
-| Vietnamese (Vietnam) | `vi-VN` | Plain Text |
-
-## Text-to-Speech
-
-Both the Microsoft Speech SDK and REST APIs support these neural voices, each of which supports a specific language and dialect, identified by locale. You can also get a full list of languages and voices supported for each specific region/endpoint through the [voices list API](rest-text-to-speech.md#get-a-list-of-voices).
+| Arabic (Algeria) | `ar-DZ` | Plain text |
+| Arabic (Bahrain), modern standard | `ar-BH` | Plain text |
+| Arabic (Egypt) | `ar-EG` | Plain text |
+| Arabic (Iraq) | `ar-IQ` | Plain text |
+| Arabic (Israel) | `ar-IL` | Plain text |
+| Arabic (Jordan) | `ar-JO` | Plain text |
+| Arabic (Kuwait) | `ar-KW` | Plain text |
+| Arabic (Lebanon) | `ar-LB` | Plain text |
+| Arabic (Libya) | `ar-LY` | Plain text |
+| Arabic (Morocco) | `ar-MA` | Plain text |
+| Arabic (Oman) | `ar-OM` | Plain text |
+| Arabic (Palestinian Authority) | `ar-PS` | Plain text |
+| Arabic (Qatar) | `ar-QA` | Plain text |
+| Arabic (Saudi Arabia) | `ar-SA` | Plain text |
+| Arabic (Syria) | `ar-SY` | Plain text |
+| Arabic (Tunisia) | `ar-TN` | Plain text |
+| Arabic (United Arab Emirates) | `ar-AE` | Plain text |
+| Arabic (Yemen) | `ar-YE` | Plain text |
+| Bulgarian (Bulgaria) | `bg-BG` | Plain text |
+| Catalan (Spain) | `ca-ES` | Plain text<br/>Pronunciation |
+| Chinese (Cantonese, Traditional) | `zh-HK` | Plain text |
+| Chinese (Mandarin, Simplified) | `zh-CN` | Plain text |
+| Chinese (Taiwanese Mandarin) | `zh-TW` | Plain text |
+| Croatian (Croatia) | `hr-HR` | Plain text<br/>Pronunciation |
+| Czech (Czech) | `cs-CZ` | Plain text<br/>Pronunciation |
+| Danish (Denmark) | `da-DK` | Plain text<br/>Pronunciation |
+| Dutch (Netherlands) | `nl-NL` | Plain text<br/>Pronunciation |
+| English (Australia) | `en-AU` | Plain text<br/>Pronunciation |
+| English (Canada) | `en-CA` | Plain text<br/>Pronunciation |
+| English (Ghana) | `en-GH` | Plain text<br/>Pronunciation |
+| English (Hong Kong) | `en-HK` | Plain text<br/>Pronunciation |
+| English (India) | `en-IN` | Plain text<br>Structured Text (20210907)<br>Pronunciation |
+| English (Ireland) | `en-IE` | Plain text<br/>Pronunciation |
+| English (Kenya) | `en-KE` | Plain text<br/>Pronunciation |
+| English (New Zealand) | `en-NZ` | Plain text<br/>Pronunciation |
+| English (Nigeria) | `en-NG` | Plain text<br/>Pronunciation |
+| English (Philippines) | `en-PH` | Plain text<br/>Pronunciation |
+| English (Singapore) | `en-SG` | Plain text<br/>Pronunciation |
+| English (South Africa) | `en-ZA` | Plain text<br/>Pronunciation |
+| English (Tanzania) | `en-TZ` | Plain text<br/>Pronunciation |
+| English (United Kingdom) | `en-GB` | Audio (20201019)<br>Plain text<br>Structured Text (20210906)<br>Pronunciation |
+| English (United States) | `en-US` | Audio (20201019, 20210223)<br>Plain text<br>Structured Text (20211012)<br>Pronunciation |
+| Estonian (Estonia) | `et-EE` | Plain text<br/>Pronunciation |
+| Filipino (Philippines) | `fil-PH` | Plain text<br/>Pronunciation |
+| Finnish (Finland) | `fi-FI` | Plain text<br/>Pronunciation |
+| French (Canada) | `fr-CA` | Audio (20201015)<br>Plain text<br>Structured Text (20210908)<br>Pronunciation |
+| French (France) | `fr-FR` | Audio (20201015)<br>Plain text<br>Structured Text (20210908)<br>Pronunciation |
+| French (Switzerland) | `fr-CH` | Plain text<br/>Pronunciation |
+| German (Austria) | `de-AT` | Plain text<br/>Pronunciation |
+| German (Germany) | `de-DE` | Audio (20201127)<br>Plain text<br>Structured Text (20210831)<br>Pronunciation |
+| German (Switzerland) | `de-CH` | Plain text<br/>Pronunciation |
+| Greek (Greece) | `el-GR` | Plain text |
+| Gujarati (India) | `gu-IN` | Plain text |
+| Hebrew (Israel) | `he-IL` | Plain text |
+| Hindi (India) | `hi-IN` | Plain text |
+| Hungarian (Hungary) | `hu-HU` | Plain text<br/>Pronunciation |
+| Indonesian (Indonesia) | `id-ID` | Plain text<br/>Pronunciation |
+| Irish (Ireland) | `ga-IE` | Plain text<br/>Pronunciation |
+| Italian (Italy) | `it-IT` | Audio (20201016)<br>Plain text<br>Pronunciation |
+| Japanese (Japan) | `ja-JP` | Plain text |
+| Kannada (India) | `kn-IN` | Plain text |
+| Korean (Korea) | `ko-KR` | Audio (20201015)<br>Plain text |
+| Latvian (Latvia) | `lv-LV` | Plain text<br/>Pronunciation |
+| Lithuanian (Lithuania) | `lt-LT` | Plain text<br/>Pronunciation |
+| Malay (Malaysia) | `ms-MY` | Plain text |
+| Maltese (Malta) | `mt-MT` | Plain text |
+| Marathi (India) | `mr-IN` | Plain text |
+| Norwegian (Bokmål, Norway) | `nb-NO` | Plain text |
+| Persian (Iran) | `fa-IR` | Plain text |
+| Polish (Poland) | `pl-PL` | Plain text<br/>Pronunciation |
+| Portuguese (Brazil) | `pt-BR` | Audio (20201015)<br>Plain text<br>Pronunciation |
+| Portuguese (Portugal) | `pt-PT` | Plain text<br/>Pronunciation |
+| Romanian (Romania) | `ro-RO` | Plain text<br/>Pronunciation |
+| Russian (Russia) | `ru-RU` | Plain text |
+| Slovak (Slovakia) | `sk-SK` | Plain text<br/>Pronunciation |
+| Slovenian (Slovenia) | `sl-SI` | Plain text<br/>Pronunciation |
+| Spanish (Argentina) | `es-AR` | Plain text<br/>Pronunciation |
+| Spanish (Bolivia) | `es-BO` | Plain text<br/>Pronunciation |
+| Spanish (Chile) | `es-CL` | Plain text<br/>Pronunciation |
+| Spanish (Colombia) | `es-CO` | Plain text<br/>Pronunciation |
+| Spanish (Costa Rica) | `es-CR` | Plain text<br/>Pronunciation |
+| Spanish (Cuba) | `es-CU` | Plain text<br/>Pronunciation |
+| Spanish (Dominican Republic) | `es-DO` | Plain text<br/>Pronunciation |
+| Spanish (Ecuador) | `es-EC` | Plain text<br/>Pronunciation |
+| Spanish (El Salvador) | `es-SV` | Plain text<br/>Pronunciation |
+| Spanish (Equatorial Guinea) | `es-GQ` | Plain text |
+| Spanish (Guatemala) | `es-GT` | Plain text<br/>Pronunciation |
+| Spanish (Honduras) | `es-HN` | Plain text<br/>Pronunciation |
+| Spanish (Mexico) | `es-MX` | Plain text<br>Structured Text (20210908)<br>Pronunciation |
+| Spanish (Nicaragua) | `es-NI` | Plain text<br/>Pronunciation |
+| Spanish (Panama) | `es-PA` | Plain text<br/>Pronunciation |
+| Spanish (Paraguay) | `es-PY` | Plain text<br/>Pronunciation |
+| Spanish (Peru) | `es-PE` | Plain text<br/>Pronunciation |
+| Spanish (Puerto Rico) | `es-PR` | Plain text<br/>Pronunciation |
+| Spanish (Spain) | `es-ES` | Audio (20201015)<br>Plain text<br>Structured Text (20210908)<br>Pronunciation |
+| Spanish (Uruguay) | `es-UY` | Plain text<br/>Pronunciation |
+| Spanish (USA) | `es-US` | Plain text<br/>Pronunciation |
+| Spanish (Venezuela) | `es-VE` | Plain text<br/>Pronunciation |
+| Swahili (Kenya) | `sw-KE` | Plain text |
+| Swedish (Sweden) | `sv-SE` | Plain text<br/>Pronunciation |
+| Tamil (India) | `ta-IN` | Plain text |
+| Telugu (India) | `te-IN` | Plain text |
+| Thai (Thailand) | `th-TH` | Plain text |
+| Turkish (Turkey) | `tr-TR` | Plain text |
+| Vietnamese (Vietnam) | `vi-VN` | Plain text |
+
+## Text-to-speech
+
+Both the Microsoft Speech SDK and REST APIs support these neural voices, each of which supports a specific language and dialect, identified by locale. You can also get a full list of languages and voices supported for each specific region or endpoint through the [voices list API](rest-text-to-speech.md#get-a-list-of-voices).
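As a sketch of calling the voices list API (the region and subscription key below are placeholders):

```python
# Sketch only: region and key are hypothetical placeholders.
from urllib.request import Request

def voices_list_request(region: str, key: str) -> Request:
    """Build a GET request for the text-to-speech voices list endpoint."""
    url = f"https://{region}.tts.speech.microsoft.com/cognitiveservices/voices/list"
    return Request(url, headers={"Ocp-Apim-Subscription-Key": key})

req = voices_list_request("westus", "<your-speech-resource-key>")
print(req.full_url)  # https://westus.tts.speech.microsoft.com/cognitiveservices/voices/list
```

The response is a JSON array describing each voice's locale, gender, and name.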
> [!IMPORTANT]
-> Pricing varies for Prebuilt Neural Voice (referred as *Neural* on the pricing page) and Custom Neural Voice (referred as *Custom Neural* on the pricing page). Please visit the [Pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) page for additional information.
+> Pricing varies for Prebuilt Neural Voice (referred to as *Neural* on the pricing page) and Custom Neural Voice (referred to as *Custom Neural* on the pricing page). For more information, see the [Pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) page.
### Prebuilt neural voices
-Below table lists out the prebuilt neural voices supported in each language. You can [try the demo and hear the voices here](https://azure.microsoft.com/services/cognitive-services/text-to-speech/#features).
+The following table lists the prebuilt neural voices supported in each language. You can try the demo and hear the voices on [this website](https://azure.microsoft.com/services/cognitive-services/text-to-speech/#features).
> [!NOTE]
-> Prebuilt neural voices are created from samples that use a 24 khz sample rate.
+> Prebuilt neural voices are created from samples that use a 24-kHz sample rate.
> All voices can upsample or downsample to other sample rates when synthesizing.

| Language | Locale | Gender | Voice name | Style support |
Below table lists out the prebuilt neural voices supported in each language.
| English (Australia) | `en-AU` | Male | `en-AU-WilliamNeural` | General |
| English (Canada) | `en-CA` | Female | `en-CA-ClaraNeural` | General |
| English (Canada) | `en-CA` | Male | `en-CA-LiamNeural` | General |
-| English (Hongkong) | `en-HK` | Female | `en-HK-YanNeural` | General |
-| English (Hongkong) | `en-HK` | Male | `en-HK-SamNeural` | General |
+| English (Hong Kong) | `en-HK` | Female | `en-HK-YanNeural` | General |
+| English (Hong Kong) | `en-HK` | Male | `en-HK-SamNeural` | General |
| English (India) | `en-IN` | Female | `en-IN-NeerjaNeural` | General |
| English (India) | `en-IN` | Male | `en-IN-PrabhatNeural` | General |
| English (Ireland) | `en-IE` | Female | `en-IE-EmilyNeural` | General |
Below table lists out the prebuilt neural voices supported in each language.
| Zulu (South Africa) | `zu-ZA` | Male | `zu-ZA-ThembaNeural` <sup>New</sup> | General |

> [!IMPORTANT]
-> The English (United Kingdom) voice `en-GB-MiaNeural` retired on **30 October 2021**. All service requests to `en-GB-MiaNeural` now will be re-directed to `en-GB-SoniaNeural` automatically since **30 October 2021**.
-> If you are using container Neural TTS, please [download](speech-container-howto.md#get-the-container-image-with-docker-pull) and deploy the latest version, starting from **30 October 2021**, all requests with previous versions will be rejected.
+> The English (United Kingdom) voice `en-GB-MiaNeural` retired on October 30, 2021. All service requests to `en-GB-MiaNeural` will be redirected to `en-GB-SoniaNeural` automatically as of October 30, 2021.
+> If you're using container Neural TTS, [download](speech-container-howto.md#get-the-container-image-with-docker-pull) and deploy the latest version. Starting from October 30, 2021, all requests with previous versions will be rejected.
### Prebuilt neural voices in preview
-Below neural voices are in public preview.
+The following neural voices are in public preview.
| Language | Locale | Gender | Voice name | Style support |
|--|--|--|--|--|
-| English (United States) | `en-US` | Female | `en-US-JennyMultilingualNeural` <sup>New</sup> | General,multi-lingual capabilities available [using SSML](speech-synthesis-markup.md#create-an-ssml-document) |
+| English (United States) | `en-US` | Female | `en-US-JennyMultilingualNeural` <sup>New</sup> | General, multilingual capabilities available [using SSML](speech-synthesis-markup.md#create-an-ssml-document) |
| Chinese (Mandarin, Simplified) | `zh-CN` | Female | `zh-CN-XiaochenNeural` <sup>New</sup> | Optimized for spontaneous conversation |
| Chinese (Mandarin, Simplified) | `zh-CN` | Female | `zh-CN-XiaoyanNeural` <sup>New</sup> | Optimized for customer service |
| Chinese (Mandarin, Simplified) | `zh-CN` | Female | `zh-CN-XiaoshuangNeural` <sup>New</sup> | Child voice, optimized for child story and chat; multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles)|
| Chinese (Mandarin, Simplified) | `zh-CN` | Female | `zh-CN-XiaoqiuNeural` <sup>New</sup> | Optimized for narrating |

> [!IMPORTANT]
-> Voices in public preview are only available in 3 service regions: East US, West Europe and Southeast Asia.
+> Voices in public preview are only available in three service regions: East US, West Europe, and Southeast Asia.
-> [!TIP]
-> `en-US-JennyNeuralMultilingual` supports multiple languages. Check the [voices list API](rest-text-to-speech.md#get-a-list-of-voices) for supported languages list.
+The `en-US-JennyMultilingualNeural` voice supports multiple languages. Check the [voices list API](rest-text-to-speech.md#get-a-list-of-voices) for the list of supported languages.
For more information about regional availability, see [regions](regions.md#prebuilt-neural-voices). To learn how you can configure and adjust neural voices, such as Speaking Styles, see [Speech Synthesis Markup Language](speech-synthesis-markup.md#adjust-speaking-styles).

> [!IMPORTANT]
-> The `en-US-JessaNeural` voice has changed to `en-US-AriaNeural`. If you were using "Jessa" before, convert over to "Aria".
+> The `en-US-JessaNeural` voice has changed to `en-US-AriaNeural`. If you were using "Jessa" before, convert to "Aria."
-> [!TIP]
-> You can continue to use the full service name mapping like "Microsoft Server Speech Text to Speech Voice (en-US, AriaNeural)" in your speech synthesis requests.
+You can continue to use the full service name mapping like "Microsoft Server Speech Text to Speech Voice (en-US, AriaNeural)" in your speech synthesis requests.
### Voice styles and roles
-In some cases you can adjust the speaking style to express different emotions like cheerfulness, empathy, and calm, or optimize the voice for different scenarios like customer service, newscast, and voice assistant. With roles the same voice can act as a different age and gender.
+In some cases, you can adjust the speaking style to express different emotions like cheerfulness, empathy, and calm. You can optimize the voice for different scenarios like customer service, newscast, and voice assistant. With roles, the same voice can act as a different age and gender.
-To learn how you can configure and adjust neural voice styles and roles see [Speech Synthesis Markup Language](speech-synthesis-markup.md#adjust-speaking-styles).
+To learn how you can configure and adjust neural voice styles and roles, see [Speech Synthesis Markup Language](speech-synthesis-markup.md#adjust-speaking-styles).
-Use this table to determine supported styles and roles for each neural voice.
+Use the following table to determine supported styles and roles for each neural voice.
|Voice|Styles|Style degree|Roles|
|--|--|--|--|
Use this table to determine supported styles and roles for each neural voice.
|zh-CN-YunyangNeural|`customerservice`|Supported|| |zh-CN-YunyeNeural|`angry`, `calm`, `cheerful`, `disgruntled`, `fearful`, `sad`, `serious`|Supported|Supported|
-### Custom neural voice
+### Custom Neural Voice
-Custom neural voice lets you create synthetic voices that are rich in speaking styles. You can create a unique brand voice in multiple languages and styles by using a small set of recording data.
+Custom Neural Voice lets you create synthetic voices that are rich in speaking styles. You can create a unique brand voice in multiple languages and styles by using a small set of recording data.
Select the right locale that matches the training data you have to train a custom neural voice model. For example, if the recording data you have is spoken in English with a British accent, select `en-GB`.
-With the cross-lingual feature (preview), you can transfer you custom neural voice model to speak a second language. For example, with the `zh-CN` data, you can create a voice that speaks `en-AU` or any of the languages marked 'yes' in the 'cross-lingual' column below.
+With the cross-lingual feature (preview), you can transfer your custom neural voice model to speak a second language. For example, with the `zh-CN` data, you can create a voice that speaks `en-AU` or any of the languages marked "Yes" in the Cross-lingual column in the following table.
| Language | Locale | Cross-lingual (preview) |
|--|--|--|
With the cross-lingual feature (preview), you can transfer you custom neural voi
## Language identification
-With language identification, you set and get one of the supported locales below. But we only compare at the language level such as English and German. If you include multiple locales of the same language (for example, `en-IN` and `en-US`), we'll only compare English (`en`) with the other candidate languages.
+With language identification, you set and get one of the supported locales in the following table. We only compare at the language level, such as English and German. If you include multiple locales of the same language, for example, `en-IN` and `en-US`, we'll only compare English (`en`) with the other candidate languages.
|Language|Locale (BCP-47)|
|--|--|
Arabic|`ar-DZ`<br/>`ar-BH`<br/>`ar-EG`<br/>`ar-IQ`<br/>`ar-OM`<br/>`ar-SY`|
|Thai|`th-TH`|
|Turkish|`tr-TR`|
-
## Pronunciation assessment
-The [Pronunciation assessment](how-to-pronunciation-assessment.md) feature currently supports the `en-US` locale, which is available with all speech-to-text regions. Support for `en-GB` and `zh-CN` languages is in preview.
+The [pronunciation assessment](how-to-pronunciation-assessment.md) feature currently supports the `en-US` locale, which is available with all speech-to-text regions. Support for `en-GB` and `zh-CN` languages is in preview.
## Speech translation
-The **Speech Translation** API supports different languages for speech-to-speech and speech-to-text translation. The source language must always be from the Speech-to-text language table. The available target languages depend on whether the translation target is speech or text. You may translate incoming speech into any of the [supported languages](https://www.microsoft.com/translator/business/languages/). A subset of languages is available for [speech synthesis](language-support.md#text-languages).
+The Speech Translation API supports different languages for speech-to-speech and speech-to-text translation. The source language must always be from the speech-to-text language table. The available target languages depend on whether the translation target is speech or text. You may translate incoming speech into any of the [supported languages](https://www.microsoft.com/translator/business/languages/). A subset of languages is available for [speech synthesis](language-support.md#text-languages).
### Text languages
The **Speech Translation** API supports different languages for speech-to-speech
| Welsh | `cy` |
| Yucatec Maya | `yua` |
-## Speaker Recognition
+## Speaker recognition
-Speaker recognition is mostly language agnostic. We built a universal model for text-independent speaker recognition by combining various data sources from multiple languages. We have tuned and evaluated the model on the languages and locales that appear in the following table. See the [overview](speaker-recognition-overview.md) for additional information on Speaker Recognition.
+Speaker recognition is mostly language agnostic. We built a universal model for text-independent speaker recognition by combining various data sources from multiple languages. We've tuned and evaluated the model on the languages and locales that appear in the following table. For more information on speaker recognition, see the [overview](speaker-recognition-overview.md).
| Language | Locale (BCP-47) | Text-dependent verification | Text-independent verification | Text-independent identification |
|-|-|-|-|-|
Speaker recognition is mostly language agnostic. We built a universal model for
|Spanish (Mexico) | `es-MX` | n/a | Yes | Yes|
|Spanish (Spain) | `es-ES` | n/a | Yes | Yes|
-## Custom Keyword and Keyword Verification
+## Custom keyword and keyword verification
-The following table outlines supported languages for Custom Keyword and Keyword Verification.
+The following table outlines supported languages for custom keyword and keyword verification.
-| Language | Locale (BCP-47) | Custom Keyword | Keyword Verification |
+| Language | Locale (BCP-47) | Custom keyword | Keyword verification |
| -- | -- | -- | -- |
| Chinese (Mandarin, Simplified) | zh-CN | Yes | Yes |
| English (United States) | en-US | Yes | Yes |
cognitive-services Rest Speech To Text https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/rest-speech-to-text.md
Title: Speech-to-text API reference (REST) - Speech service
-description: Learn how to use the speech-to-text REST API. In this article, you'll learn about authorization options, query options, how to structure a request and receive a response.
+description: Learn how to use REST APIs to convert speech to text.
ms.devlang: csharp
-# Speech-to-text REST API
+# Speech-to-text REST APIs
-Speech-to-text has two different REST APIs. Each API serves its special purpose and uses different sets of endpoints.
-
-The Speech-to-text REST APIs are:
-- [Speech-to-text REST API v3.0](#speech-to-text-rest-api-v30) is used for [Batch transcription](batch-transcription.md) and [Custom Speech](custom-speech-overview.md). v3.0 is a [successor of v2.0](./migrate-v2-to-v3.md).
-- [Speech-to-text REST API for short audio](#speech-to-text-rest-api-for-short-audio) is used for online transcription as an alternative to the [Speech SDK](speech-sdk.md). Requests using this API can transmit only up to 60 seconds of audio per request.
+Speech-to-text has two REST APIs. Each API serves a special purpose and uses its own set of endpoints. In this article, you learn how to use those APIs, including authorization options, query options, how to structure a request, and how to interpret a response.
## Speech-to-text REST API v3.0
-Speech-to-text REST API v3.0 is used for [Batch transcription](batch-transcription.md) and [Custom Speech](custom-speech-overview.md). If you need to communicate with the online transcription via REST, use [Speech-to-text REST API for short audio](#speech-to-text-rest-api-for-short-audio).
+Speech-to-text REST API v3.0 is used for [Batch transcription](batch-transcription.md) and [Custom Speech](custom-speech-overview.md). v3.0 is a [successor of v2.0](./migrate-v2-to-v3.md). If you need to communicate with the online transcription via REST, use the [speech-to-text REST API for short audio](#speech-to-text-rest-api-for-short-audio).
Use REST API v3.0 to:
-- Copy models to other subscriptions in case you want colleagues to have access to a model you built, or in cases where you want to deploy a model to more than one region
-- Transcribe data from a container (bulk transcription) as well as provide multiple audio file URLs
-- Upload data from Azure Storage accounts through the use of a SAS Uri
-- Get logs per endpoint if logs have been requested for that endpoint
-- Request the manifest of the models you create, for the purpose of setting up on-premises containers
+- Copy models to other subscriptions if you want colleagues to have access to a model that you built, or if you want to deploy a model to more than one region.
+- Transcribe data from a container (bulk transcription) and provide multiple URLs for audio files.
+- Upload data from Azure storage accounts by using a shared access signature (SAS) URI.
+- Get logs for each endpoint if logs have been requested for that endpoint.
+- Request the manifest of the models that you create, to set up on-premises containers.
REST API v3.0 includes such features as:
-- **Notifications-Webhooks**—All running processes of the service now support webhook notifications. REST API v3.0 provides the calls to enable you to register your webhooks where notifications are sent
+- **Webhook notifications**: All running processes of the service now support webhook notifications. REST API v3.0 provides the calls to enable you to register your webhooks where notifications are sent.
- **Updating models behind endpoints**
-- **Model adaptation with multiple data sets**—Adapt a model using multiple data set combinations of acoustic, language, and pronunciation data
-- **Bring your own storage**—Use your own storage accounts for logs, transcription files, and other data
+- **Model adaptation with multiple datasets**: Adapt a model by using multiple dataset combinations of acoustic, language, and pronunciation data.
+- **Bring your own storage**: Use your own storage accounts for logs, transcription files, and other data.
-See examples on using REST API v3.0 with the Batch transcription is [this article](batch-transcription.md).
+For examples of using REST API v3.0 with batch transcription, see [How to use batch transcription](batch-transcription.md).
-If you are using Speech-to-text REST API v2.0, see how you can migrate to v3.0 in [this guide](./migrate-v2-to-v3.md).
+For information about migrating to the latest version of the speech-to-text REST API, see [Migrate code from v2.0 to v3.0 of the REST API](./migrate-v2-to-v3.md).
-See the full Speech-to-text REST API v3.0 Reference [here](https://centralus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0).
+You can find the full speech-to-text REST API v3.0 reference on the [Microsoft developer portal](https://centralus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0).
## Speech-to-text REST API for short audio
-As an alternative to the [Speech SDK](speech-sdk.md), the Speech service allows you to convert Speech-to-text using a REST API.
-The REST API for short audio is very limited, and it should only be used in cases were the [Speech SDK](speech-sdk.md) cannot.
+As an alternative to the [Speech SDK](speech-sdk.md), the Speech service allows you to convert speech to text by using the [REST API for short audio](#speech-to-text-rest-api-for-short-audio).
+This API is very limited. Use it only in cases where you can't use the Speech SDK.
-Before using the Speech-to-text REST API for short audio, consider the following:
+Before you use the speech-to-text REST API for short audio, consider the following limitations:
-* Requests that use the REST API for short audio and transmit audio directly can only contain up to 60 seconds of audio.
-* The Speech-to-text REST API for short audio only returns final results. Partial results are not provided.
+* Requests that use the REST API for short audio and transmit audio directly can contain no more than 60 seconds of audio.
+* The REST API for short audio returns only final results. It doesn't provide partial results.
-If sending longer audio is a requirement for your application, consider using the [Speech SDK](speech-sdk.md) or [Speech-to-text REST API v3.0](#speech-to-text-rest-api-v30).
+If sending longer audio is a requirement for your application, consider using the Speech SDK or [speech-to-text REST API v3.0](#speech-to-text-rest-api-v30).
> [!TIP]
-> See [this article](sovereign-clouds.md) for Azure Government and Azure China endpoints.
+> For Azure Government and Azure China endpoints, see [this article about sovereign clouds](sovereign-clouds.md).
[!INCLUDE [](../../../includes/cognitive-services-speech-service-rest-auth.md)]
The endpoint for the REST API for short audio has this format:
https://<REGION_IDENTIFIER>.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1
```
-Replace `<REGION_IDENTIFIER>` with the identifier matching the region of your subscription from this table:
+Replace `<REGION_IDENTIFIER>` with the identifier that matches the region of your subscription from this table:
[!INCLUDE [](../../../includes/cognitive-services-speech-service-region-identifier.md)]

> [!NOTE]
-> The language parameter must be appended to the URL to avoid receiving an 4xx HTTP error. For example, the language set to US English using the West US endpoint is: `https://westus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?language=en-US`.
+> You must append the language parameter to the URL to avoid receiving a 4xx HTTP error. For example, the language set to US English via the West US endpoint is: `https://westus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?language=en-US`.
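As a sketch of the note above, the short-audio endpoint URL with the required language parameter can be assembled like this. This is a Python illustration, not part of the documented samples; the region and language values are the ones the note itself uses:

```python
# Build the short-audio endpoint URL for a given region and language.
# "westus" and "en-US" are the illustrative values from the note above.
def build_stt_url(region: str, language: str) -> str:
    host = f"https://{region}.stt.speech.microsoft.com"
    path = "/speech/recognition/conversation/cognitiveservices/v1"
    return f"{host}{path}?language={language}"

url = build_stt_url("westus", "en-US")
print(url)
```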
### Query parameters
-These parameters may be included in the query string of the REST request.
+These parameters might be included in the query string of the REST request:
-| Parameter | Description | Required / Optional |
+| Parameter | Description | Required or optional |
|--|--|--|
-| `language` | Identifies the spoken language that is being recognized. See [Supported languages](language-support.md#speech-to-text). | Required |
+| `language` | Identifies the spoken language that's being recognized. See [Supported languages](language-support.md#speech-to-text). | Required |
| `format` | Specifies the result format. Accepted values are `simple` and `detailed`. Simple results include `RecognitionStatus`, `DisplayText`, `Offset`, and `Duration`. Detailed responses include four different representations of display text. The default setting is `simple`. | Optional |
-| `profanity` | Specifies how to handle profanity in recognition results. Accepted values are `masked`, which replaces profanity with asterisks, `removed`, which removes all profanity from the result, or `raw`, which includes the profanity in the result. The default setting is `masked`. | Optional |
-| `cid` | When using the [Custom Speech portal](./custom-speech-overview.md) to create custom models, you can use custom models via their **Endpoint ID** found on the **Deployment** page. Use the **Endpoint ID** as the argument to the `cid` query string parameter. | Optional |
+| `profanity` | Specifies how to handle profanity in recognition results. Accepted values are: <br><br>`masked`, which replaces profanity with asterisks. <br>`removed`, which removes all profanity from the result. <br>`raw`, which includes profanity in the result. <br><br>The default setting is `masked`. | Optional |
+| `cid` | When you're using the [Custom Speech portal](./custom-speech-overview.md) to create custom models, you can take advantage of the **Endpoint ID** value from the **Deployment** page. Use the **Endpoint ID** value as the argument to the `cid` query string parameter. | Optional |
### Request headers
-This table lists required and optional headers for Speech-to-text requests.
+This table lists required and optional headers for speech-to-text requests:
-|Header| Description | Required / Optional |
+|Header| Description | Required or optional |
|--|--|--|
-| `Ocp-Apim-Subscription-Key` | Your Speech service subscription key. | Either this header or `Authorization` is required. |
+| `Ocp-Apim-Subscription-Key` | Your subscription key for the Speech service. | Either this header or `Authorization` is required. |
| `Authorization` | An authorization token preceded by the word `Bearer`. For more information, see [Authentication](#authentication). | Either this header or `Ocp-Apim-Subscription-Key` is required. |
-| `Pronunciation-Assessment` | Specifies the parameters for showing pronunciation scores in recognition results, which assess the pronunciation quality of speech input, with indicators of accuracy, fluency, completeness, etc. This parameter is a base64 encoded json containing multiple detailed parameters. See [Pronunciation assessment parameters](#pronunciation-assessment-parameters) for how to build this header. | Optional |
+| `Pronunciation-Assessment` | Specifies the parameters for showing pronunciation scores in recognition results. These scores assess the pronunciation quality of speech input, with indicators like accuracy, fluency, and completeness. <br><br>This parameter is a Base64-encoded JSON that contains multiple detailed parameters. To learn how to build this header, see [Pronunciation assessment parameters](#pronunciation-assessment-parameters). | Optional |
| `Content-type` | Describes the format and codec of the provided audio data. Accepted values are `audio/wav; codecs=audio/pcm; samplerate=16000` and `audio/ogg; codecs=opus`. | Required |
-| `Transfer-Encoding` | Specifies that chunked audio data is being sent, rather than a single file. Only use this header if chunking audio data. | Optional |
-| `Expect` | If using chunked transfer, send `Expect: 100-continue`. The Speech service acknowledges the initial request and awaits additional data.| Required if sending chunked audio data. |
-| `Accept` | If provided, it must be `application/json`. The Speech service provides results in JSON. Some request frameworks provide an incompatible default value. It is good practice to always include `Accept`. | Optional, but recommended. |
+| `Transfer-Encoding` | Specifies that chunked audio data is being sent, rather than a single file. Use this header only if you're chunking audio data. | Optional |
+| `Expect` | If you're using chunked transfer, send `Expect: 100-continue`. The Speech service acknowledges the initial request and awaits additional data.| Required if you're sending chunked audio data. |
+| `Accept` | If provided, it must be `application/json`. The Speech service provides results in JSON. Some request frameworks provide an incompatible default value. It's good practice to always include `Accept`. | Optional, but recommended. |
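Pulling the required and recommended headers from the table above together, a minimal request header set might look like the following Python sketch. The subscription key is a placeholder, and the WAV content type is one of the two accepted values:

```python
# Headers for a short-audio request, per the table above.
# "YOUR_SUBSCRIPTION_KEY" is a placeholder, not a real key.
headers = {
    "Ocp-Apim-Subscription-Key": "YOUR_SUBSCRIPTION_KEY",
    # One of the two accepted Content-type values (WAV/PCM at 16 kHz).
    "Content-type": "audio/wav; codecs=audio/pcm; samplerate=16000",
    # Optional but recommended; the service returns JSON.
    "Accept": "application/json",
}
```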
### Audio formats

Audio is sent in the body of the HTTP `POST` request. It must be in one of the formats in this table:
-| Format | Codec | Bit rate | Sample Rate |
+| Format | Codec | Bit rate | Sample rate |
|--|-|-|--|
| WAV | PCM | 256 kbps | 16 kHz, mono |
| OGG | OPUS | 256 kbps | 16 kHz, mono |

>[!NOTE]
->The above formats are supported through REST API for short audio and WebSocket in the Speech service. The [Speech SDK](speech-sdk.md) currently supports the WAV format with PCM codec as well as [other formats](how-to-use-codec-compressed-audio-input-streams.md).
+>The preceding formats are supported through the REST API for short audio and WebSocket in the Speech service. The [Speech SDK](speech-sdk.md) currently supports the WAV format with PCM codec as well as [other formats](how-to-use-codec-compressed-audio-input-streams.md).
### Pronunciation assessment parameters
-This table lists required and optional parameters for pronunciation assessment.
+This table lists required and optional parameters for pronunciation assessment:
-| Parameter | Description | Required? |
+| Parameter | Description | Required or optional |
|--|--|--|
-| ReferenceText | The text that the pronunciation will be evaluated against. | Required |
-| GradingSystem | The point system for score calibration. The `FivePoint` system gives a 0-5 floating point score, and `HundredMark` gives a 0-100 floating point score. Default: `FivePoint`. | Optional |
-| Granularity | The evaluation granularity. Accepted values are `Phoneme`, which shows the score on the full text, word and phoneme level, `Word`, which shows the score on the full text and word level, `FullText`, which shows the score on the full text level only. The default setting is `Phoneme`. | Optional |
-| Dimension | Defines the output criteria. Accepted values are `Basic`, which shows the accuracy score only, `Comprehensive` shows scores on more dimensions (e.g. fluency score and completeness score on the full text level, error type on word level). Check [Response parameters](#response-parameters) to see definitions of different score dimensions and word error types. The default setting is `Basic`. | Optional |
-| EnableMiscue | Enables miscue calculation. With this enabled, the pronounced words will be compared to the reference text, and will be marked with omission/insertion based on the comparison. Accepted values are `False` and `True`. The default setting is `False`. | Optional |
-| ScenarioId | A GUID indicating a customized point system. | Optional |
+| `ReferenceText` | The text that the pronunciation will be evaluated against. | Required |
+| `GradingSystem` | The point system for score calibration. The `FivePoint` system gives a 0-5 floating point score, and `HundredMark` gives a 0-100 floating point score. Default: `FivePoint`. | Optional |
+| `Granularity` | The evaluation granularity. Accepted values are:<br><br> `Phoneme`, which shows the score on the full-text, word, and phoneme levels.<br>`Word`, which shows the score on the full-text and word levels. <br>`FullText`, which shows the score on the full-text level only.<br><br> The default setting is `Phoneme`. | Optional |
+| `Dimension` | Defines the output criteria. Accepted values are:<br><br> `Basic`, which shows the accuracy score only. <br>`Comprehensive`, which shows scores on more dimensions (for example, fluency score and completeness score on the full-text level, and error type on the word level).<br><br> To see definitions of different score dimensions and word error types, see [Response parameters](#response-parameters). The default setting is `Basic`. | Optional |
+| `EnableMiscue` | Enables miscue calculation. With this parameter enabled, the pronounced words will be compared to the reference text. They'll be marked with omission or insertion based on the comparison. Accepted values are `False` and `True`. The default setting is `False`. | Optional |
+| `ScenarioId` | A GUID that indicates a customized point system. | Optional |
-Below is an example JSON containing the pronunciation assessment parameters:
+Here's example JSON that contains the pronunciation assessment parameters:
```json
{
var pronAssessmentParamsBytes = Encoding.UTF8.GetBytes(pronAssessmentParamsJson);
var pronAssessmentHeader = Convert.ToBase64String(pronAssessmentParamsBytes);
```
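The same header value can be produced in Python. This is a minimal sketch mirroring the C# snippet above, assuming illustrative parameter values taken from the parameter table:

```python
import base64
import json

# Pronunciation assessment parameters; the values here are illustrative.
pron_assessment_params = {
    "ReferenceText": "Good morning.",
    "GradingSystem": "HundredMark",
    "Granularity": "FullText",
    "Dimension": "Comprehensive",
}

# The Pronunciation-Assessment header value is the parameter JSON, Base64-encoded.
params_json = json.dumps(pron_assessment_params)
pron_assessment_header = base64.b64encode(params_json.encode("utf-8")).decode("ascii")
```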
-We strongly recommend streaming (chunked) uploading while posting the audio data, which can significantly reduce the latency. See [sample code in different programming languages](https://github.com/Azure-Samples/Cognitive-Speech-TTS/tree/master/PronunciationAssessment) for how to enable streaming.
+We strongly recommend streaming (chunked) uploading while you're posting the audio data, which can significantly reduce the latency. To learn how to enable streaming, see the [sample code in various programming languages](https://github.com/Azure-Samples/Cognitive-Speech-TTS/tree/master/PronunciationAssessment).
>[!NOTE]
-> The pronunciation assessment feature currently supports `en-US` language, which is available on all [speech-to-text regions](regions.md#speech-to-text). The support for `en-GB` and `zh-CN` languages is under preview.
+> The pronunciation assessment feature currently supports the `en-US` language, which is available on all [speech-to-text regions](regions.md#speech-to-text). Support for `en-GB` and `zh-CN` languages is under preview.
### Sample request
-The sample below includes the hostname and required headers. It's important to note that the service also expects audio data, which is not included in this sample. As mentioned earlier, chunking is recommended, however, not required.
+The following sample includes the host name and required headers. Note that the service also expects audio data, which isn't included in this sample. As mentioned earlier, chunking is recommended but not required.
```HTTP
POST speech/recognition/conversation/cognitiveservices/v1?language=en-US&format=detailed HTTP/1.1
Transfer-Encoding: chunked
Expect: 100-continue
```
-To enable pronunciation assessment, you can add below header. See [Pronunciation assessment parameters](#pronunciation-assessment-parameters) for how to build this header.
+To enable pronunciation assessment, you can add the following header. To learn how to build this header, see [Pronunciation assessment parameters](#pronunciation-assessment-parameters).
```HTTP
Pronunciation-Assessment: eyJSZWZlcm...
Pronunciation-Assessment: eyJSZWZlcm...
The HTTP status code for each response indicates success or common errors.
-| HTTP status code | Description | Possible reason |
+| HTTP status code | Description | Possible reasons |
|--|--|--|
-| `100` | Continue | The initial request has been accepted. Proceed with sending the rest of the data. (Used with chunked transfer) |
-| `200` | OK | The request was successful; the response body is a JSON object. |
-| `400` | Bad request | Language code not provided, not a supported language, invalid audio file, etc. |
-| `401` | Unauthorized | Subscription key or authorization token is invalid in the specified region, or invalid endpoint. |
-| `403` | Forbidden | Missing subscription key or authorization token. |
+| 100 | Continue | The initial request has been accepted. Proceed with sending the rest of the data. (This code is used with chunked transfer.) |
+| 200 | OK | The request was successful. The response body is a JSON object. |
+| 400 | Bad request | The language code wasn't provided, the language isn't supported, or the audio file is invalid (for example). |
+| 401 | Unauthorized | A subscription key or an authorization token is invalid in the specified region, or an endpoint is invalid. |
+| 403 | Forbidden | A subscription key or authorization token is missing. |
### Chunked transfer
-Chunked transfer (`Transfer-Encoding: chunked`) can help reduce recognition latency. It allows the Speech service to begin processing the audio file while it is transmitted. The REST API for short audio does not provide partial or interim results.
+Chunked transfer (`Transfer-Encoding: chunked`) can help reduce recognition latency. It allows the Speech service to begin processing the audio file while it's transmitted. The REST API for short audio does not provide partial or interim results.
-This code sample shows how to send audio in chunks. Only the first chunk should contain the audio file's header. `request` is an `HttpWebRequest` object connected to the appropriate REST endpoint. `audioFile` is the path to an audio file on disk.
+The following code sample shows how to send audio in chunks. Only the first chunk should contain the audio file's header. `request` is an `HttpWebRequest` object that's connected to the appropriate REST endpoint. `audioFile` is the path to an audio file on disk.
```csharp
var request = (HttpWebRequest)HttpWebRequest.Create(requestUri);
request.AllowWriteStreamBuffering = false;
using (var fs = new FileStream(audioFile, FileMode.Open, FileAccess.Read))
{
- // Open a request stream and write 1024 byte chunks in the stream one at a time.
+ // Open a request stream and write 1,024-byte chunks in the stream one at a time.
byte[] buffer = null;
int bytesRead = 0;
using (var requestStream = request.GetRequestStream())
{
- // Read 1024 raw bytes from the input audio file.
+ // Read 1,024 raw bytes from the input audio file.
buffer = new Byte[checked((uint)Math.Min(1024, (int)fs.Length))];
while ((bytesRead = fs.Read(buffer, 0, buffer.Length)) != 0)
{
using (var fs = new FileStream(audioFile, FileMode.Open, FileAccess.Read))
### Response parameters
-Results are provided as JSON. The `simple` format includes these top-level fields.
+Results are provided as JSON. The `simple` format includes the following top-level fields:
| Parameter | Description |
|--|--|
-|`RecognitionStatus`|Status, such as `Success` for successful recognition. See next table.|
-|`DisplayText`|The recognized text after capitalization, punctuation, inverse text normalization (conversion of spoken text to shorter forms, such as 200 for "two hundred" or "Dr. Smith" for "doctor smith"), and profanity masking. Present only on success.|
+|`RecognitionStatus`|Status, such as `Success` for successful recognition. See the next table.|
+|`DisplayText`|The recognized text after capitalization, punctuation, inverse text normalization, and profanity masking. Present only on success. Inverse text normalization is conversion of spoken text to shorter forms, such as 200 for "two hundred" or "Dr. Smith" for "doctor smith."|
|`Offset`|The time (in 100-nanosecond units) at which the recognized speech begins in the audio stream.|
|`Duration`|The duration (in 100-nanosecond units) of the recognized speech in the audio stream.|
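Because `Offset` and `Duration` are reported in 100-nanosecond units, a caller typically converts them to seconds. Here's a minimal sketch (field names come from the table above; the sample payload values are made up):

```python
import json

TICKS_PER_SECOND = 10_000_000  # the REST API reports time in 100-nanosecond units

def parse_simple_result(body: str) -> dict:
    """Extract the recognized text and timing (in seconds) from a `simple` response."""
    result = json.loads(body)
    return {
        "status": result["RecognitionStatus"],
        "text": result.get("DisplayText", ""),
        "start_seconds": result.get("Offset", 0) / TICKS_PER_SECOND,
        "duration_seconds": result.get("Duration", 0) / TICKS_PER_SECOND,
    }

# Made-up example: speech starting 0.5 seconds in, lasting 2 seconds.
sample = '{"RecognitionStatus":"Success","DisplayText":"Hello world.","Offset":5000000,"Duration":20000000}'
```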
-The `RecognitionStatus` field may contain these values:
+The `RecognitionStatus` field might contain these values:
| Status | Description |
|--|-|
-| `Success` | The recognition was successful and the `DisplayText` field is present. |
-| `NoMatch` | Speech was detected in the audio stream, but no words from the target language were matched. Usually means the recognition language is a different language from the one the user is speaking. |
-| `InitialSilenceTimeout` | The start of the audio stream contained only silence, and the service timed out waiting for speech. |
-| `BabbleTimeout` | The start of the audio stream contained only noise, and the service timed out waiting for speech. |
+| `Success` | The recognition was successful, and the `DisplayText` field is present. |
+| `NoMatch` | Speech was detected in the audio stream, but no words from the target language were matched. This status usually means that the recognition language is different from the language that the user is speaking. |
+| `InitialSilenceTimeout` | The start of the audio stream contained only silence, and the service timed out while waiting for speech. |
+| `BabbleTimeout` | The start of the audio stream contained only noise, and the service timed out while waiting for speech. |
| `Error` | The recognition service encountered an internal error and could not continue. Try again if possible. |

> [!NOTE]
> If the audio consists only of profanity, and the `profanity` query parameter is set to `remove`, the service does not return a speech result.

The `detailed` format includes additional forms of recognized results.
-When using the `detailed` format, `DisplayText` is provided as `Display` for each result in the `NBest` list.
+When you're using the `detailed` format, `DisplayText` is provided as `Display` for each result in the `NBest` list.
The object in the `NBest` list can include:

| Parameter | Description |
|--|-|
-| `Confidence` | The confidence score of the entry from 0.0 (no confidence) to 1.0 (full confidence) |
+| `Confidence` | The confidence score of the entry, from 0.0 (no confidence) to 1.0 (full confidence). |
| `Lexical` | The lexical form of the recognized text: the actual words recognized. |
-| `ITN` | The inverse-text-normalized ("canonical") form of the recognized text, with phone numbers, numbers, abbreviations ("doctor smith" to "dr smith"), and other transformations applied. |
+| `ITN` | The inverse-text-normalized (ITN) or canonical form of the recognized text, with phone numbers, numbers, abbreviations ("doctor smith" to "dr smith"), and other transformations applied. |
| `MaskedITN` | The ITN form with profanity masking applied, if requested. |
-| `Display` | The display form of the recognized text, with punctuation and capitalization added. This parameter is the same as `DisplayText` provided when format is set to `simple`. |
-| `AccuracyScore` | Pronunciation accuracy of the speech. Accuracy indicates how closely the phonemes match a native speaker's pronunciation. Word and full text level accuracy score is aggregated from phoneme level accuracy score. |
-| `FluencyScore` | Fluency of the given speech. Fluency indicates how closely the speech matches a native speaker's use of silent breaks between words. |
+| `Display` | The display form of the recognized text, with punctuation and capitalization added. This parameter is the same as what `DisplayText` provides when the format is set to `simple`. |
+| `AccuracyScore` | Pronunciation accuracy of the speech. Accuracy indicates how closely the phonemes match a native speaker's pronunciation. The accuracy score at the word and full-text levels is aggregated from the accuracy score at the phoneme level. |
+| `FluencyScore` | Fluency of the provided speech. Fluency indicates how closely the speech matches a native speaker's use of silent breaks between words. |
| `CompletenessScore` | Completeness of the speech, determined by calculating the ratio of pronounced words to reference text input. |
-| `PronScore` | Overall score indicating the pronunciation quality of the given speech. This is aggregated from `AccuracyScore`, `FluencyScore` and `CompletenessScore` with weight. |
-| `ErrorType` | This value indicates whether a word is omitted, inserted or badly pronounced, compared to `ReferenceText`. Possible values are `None` (meaning no error on this word), `Omission`, `Insertion` and `Mispronunciation`. |
+| `PronScore` | Overall score that indicates the pronunciation quality of the provided speech. This score is aggregated from `AccuracyScore`, `FluencyScore`, and `CompletenessScore` with weight. |
+| `ErrorType` | Value that indicates whether a word is omitted, inserted, or badly pronounced, compared to `ReferenceText`. Possible values are `None` (meaning no error on this word), `Omission`, `Insertion`, and `Mispronunciation`. |
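When you're consuming the `detailed` format, a common pattern is to take the `NBest` entry with the highest `Confidence` score. A sketch under that assumption, with a made-up sample payload:

```python
import json

def best_hypothesis(detailed_body: str) -> dict:
    """Return the NBest entry with the highest confidence score."""
    result = json.loads(detailed_body)
    return max(result["NBest"], key=lambda entry: entry["Confidence"])

# Made-up detailed response with two competing hypotheses.
sample = json.dumps({
    "RecognitionStatus": "Success",
    "NBest": [
        {"Confidence": 0.87, "Lexical": "hello world", "Display": "Hello world."},
        {"Confidence": 0.52, "Lexical": "hollow world", "Display": "Hollow world."},
    ],
})
```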
### Sample responses
-A typical response for `simple` recognition:
+Here's a typical response for `simple` recognition:
```json {
A typical response for `simple` recognition:
} ```
-A typical response for `detailed` recognition:
+Here's a typical response for `detailed` recognition:
```json {
A typical response for `detailed` recognition:
} ```
-A typical response for recognition with pronunciation assessment:
+Here's a typical response for recognition with pronunciation assessment:
```json {
A typical response for recognition with pronunciation assessment:
- [Create a free Azure account](https://azure.microsoft.com/free/cognitive-services/)
- [Customize acoustic models](./how-to-custom-speech-train-model.md)
- [Customize language models](./how-to-custom-speech-train-model.md)
-- [Get familiar with Batch transcription](batch-transcription.md)
+- [Get familiar with batch transcription](batch-transcription.md)
cognitive-services Rest Text To Speech https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/rest-text-to-speech.md
Title: Text-to-speech API reference (REST) - Speech service
-description: Learn how to use the text-to-speech REST API. In this article, you'll learn about authorization options, query options, how to structure a request and receive a response.
+description: Learn how to use the REST API to convert text into synthesized speech.
-# Text-to-Speech REST API
+# Text-to-speech REST API
-The Speech service allows you to [convert text into synthesized speech](#convert-text-to-speech) and [get a list of supported voices](#get-a-list-of-voices) for a region using a set of REST APIs. Each available endpoint is associated with a region. A subscription key for the endpoint/region you plan to use is required.
+The Speech service allows you to [convert text into synthesized speech](#convert-text-to-speech) and [get a list of supported voices](#get-a-list-of-voices) for a region by using a REST API. In this article, you'll learn about authorization options, query options, how to structure a request, and how to interpret a response.
-The Text-to-Speech REST API supports neural Text-to-Speech voices, which support a specific language and dialect, identified by locale.
+The text-to-speech REST API supports neural text-to-speech voices, which support specific languages and dialects that are identified by locale. Each available endpoint is associated with a region. A subscription key for the endpoint or region that you plan to use is required. Here are links to more information:
-* For a complete list of voices, see [language support](language-support.md#text-to-speech).
-* For information about regional availability, see [regions](regions.md#text-to-speech).
+- For a complete list of voices, see [Language and voice support for the Speech service](language-support.md#text-to-speech).
+- For information about regional availability, see [Speech service supported regions](regions.md#text-to-speech).
+- For Azure Government and Azure China endpoints, see [this article about sovereign clouds](sovereign-clouds.md).
> [!IMPORTANT]
-> Costs vary for prebuilt neural voices (referred as *Neural* on the pricing page) and custom neural voices (referred as *Custom Neural* on the pricing page). For more information, see [Pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
+> Costs vary for prebuilt neural voices (called *Neural* on the pricing page) and custom neural voices (called *Custom Neural* on the pricing page). For more information, see [Speech service pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
-Before using this API, understand:
-
-* The Text-to-Speech REST API requires an Authorization header. This means that you need to complete a token exchange to access the service. For more information, see [Authentication](#authentication).
-
-> [!TIP]
-> See [this article](sovereign-clouds.md) for Azure Government and Azure China endpoints.
+Before you use the text-to-speech REST API, understand that you need to complete a token exchange as part of authentication to access the service.
[!INCLUDE [](../../../includes/cognitive-services-speech-service-rest-auth.md)]

## Get a list of voices
-The `voices/list` endpoint allows you to get a full list of voices for a specific region/endpoint.
-
-### Regions and endpoints
+You can use the `voices/list` endpoint to get a full list of voices for a specific region or endpoint:
| Region | Endpoint |
|--|-|
The `voices/list` endpoint allows you to get a full list of voices for a specifi
| West US 2 | `https://westus2.tts.speech.microsoft.com/cognitiveservices/voices/list` |

> [!TIP]
-> [Voices in preview](language-support.md#prebuilt-neural-voices-in-preview) are only available in these 3 regions: East US, West Europe and Southeast Asia.
+> [Voices in preview](language-support.md#prebuilt-neural-voices-in-preview) are available in only these three regions: East US, West Europe, and Southeast Asia.
### Request headers
-This table lists required and optional headers for text-to-speech requests.
+This table lists required and optional headers for text-to-speech requests:
-| Header | Description | Required / Optional |
+| Header | Description | Required or optional |
|--|-||
-| `Ocp-Apim-Subscription-Key` | Your Speech service subscription key. | Either this header or `Authorization` is required. |
+| `Ocp-Apim-Subscription-Key` | Your subscription key for the Speech service. | Either this header or `Authorization` is required. |
| `Authorization` | An authorization token preceded by the word `Bearer`. For more information, see [Authentication](#authentication). | Either this header or `Ocp-Apim-Subscription-Key` is required. |

### Request body

A body isn't required for `GET` requests to this endpoint.

### Sample request
-This request only requires an authorization header.
+This request requires only an authorization header:
```http
GET /cognitiveservices/voices/list HTTP/1.1
Ocp-Apim-Subscription-Key: YOUR_SUBSCRIPTION_KEY
This response has been truncated to illustrate the structure of a response.

> [!NOTE]
-> Voice availability varies by region/endpoint.
+> Voice availability varies by region or endpoint.
```json [
The HTTP status code for each response indicates success or common errors.
| HTTP status code | Description | Possible reason |
||-|--|
| 200 | OK | The request was successful. |
-| 400 | Bad Request | A required parameter is missing, empty, or null. Or, the value passed to either a required or optional parameter is invalid. A common issue is a header that is too long. |
-| 401 | Unauthorized | The request is not authorized. Check to make sure your subscription key or token is valid and in the correct region. |
-| 429 | Too Many Requests | You have exceeded the quota or rate of requests allowed for your subscription. |
-| 502 | Bad Gateway | Network or server-side issue. May also indicate invalid headers. |
+| 400 | Bad request | A required parameter is missing, empty, or null. Or, the value passed to either a required or optional parameter is invalid. A common reason is a header that's too long. |
+| 401 | Unauthorized | The request is not authorized. Make sure your subscription key or token is valid and in the correct region. |
+| 429 | Too many requests | You have exceeded the quota or rate of requests allowed for your subscription. |
+| 502 | Bad gateway | There's a network or server-side problem. This status might also indicate invalid headers. |
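Putting this section together, here's a minimal sketch (not from the docs) that builds, but does not send, the voices-list request with the required subscription-key header. The region and key values are placeholders:

```python
import urllib.request

def build_voices_request(region: str, subscription_key: str) -> urllib.request.Request:
    """Build (but do not send) the GET request for the voices/list endpoint."""
    url = f"https://{region}.tts.speech.microsoft.com/cognitiveservices/voices/list"
    return urllib.request.Request(
        url, headers={"Ocp-Apim-Subscription-Key": subscription_key}
    )

# Placeholder region and key; sending the request requires a valid subscription.
req = build_voices_request("westus", "YOUR_SUBSCRIPTION_KEY")
```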
-## Convert Text-to-Speech
+## Convert text to speech
-The `v1` endpoint allows you to convert Text-to-Speech using [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md).
+The `v1` endpoint allows you to convert text to speech by using [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md).
### Regions and endpoints
-These regions are supported for Text-to-Speech using the REST API. Make sure that you select the endpoint that matches your subscription region.
+These regions are supported for text-to-speech through the REST API. Be sure to select the endpoint that matches your subscription region.
[!INCLUDE [](includes/cognitive-services-speech-service-endpoints-text-to-speech.md)]

### Request headers
-This table lists required and optional headers for Text-to-Speech requests.
+This table lists required and optional headers for text-to-speech requests:
-| Header | Description | Required / Optional |
+| Header | Description | Required or optional |
|--|-||
| `Authorization` | An authorization token preceded by the word `Bearer`. For more information, see [Authentication](#authentication). | Required |
| `Content-Type` | Specifies the content type for the provided text. Accepted value: `application/ssml+xml`. | Required |
-| `X-Microsoft-OutputFormat` | Specifies the audio output format. For a complete list of accepted values, see [audio outputs](#audio-outputs). | Required |
-| `User-Agent` | The application name. The value provided must be less than 255 characters. | Required |
+| `X-Microsoft-OutputFormat` | Specifies the audio output format. For a complete list of accepted values, see [Audio outputs](#audio-outputs). | Required |
+| `User-Agent` | The application name. The provided value must be fewer than 255 characters. | Required |
### Audio outputs
-This is a list of supported audio formats that are sent in each request as the `X-Microsoft-OutputFormat` header. Each incorporates a bitrate and encoding type. The Speech service supports 24 kHz, 16 kHz, and 8 kHz audio outputs.
+This is a list of supported audio formats that are sent in each request as the `X-Microsoft-OutputFormat` header. Each format incorporates a bit rate and encoding type. The Speech service supports 24-kHz, 16-kHz, and 8-kHz audio outputs.
```output
raw-16khz-16bit-mono-pcm
riff-16khz-16bit-mono-pcm
ogg-48khz-16bit-mono-opus
```

> [!NOTE]
-> If your selected voice and output format have different bit rates, the audio is resampled as necessary.
-> ogg-24khz-16bit-mono-opus can be decoded with [opus codec](https://opus-codec.org/downloads/)
+> If your selected voice and output format have different bit rates, the audio is resampled as necessary. You can decode the `ogg-24khz-16bit-mono-opus` format by using the [Opus codec](https://opus-codec.org/downloads/).
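Most of the format names in the list above follow a `container-samplerate-bitdepth-channels-codec` pattern. As an illustration only (a hypothetical helper; some documented formats, such as the MP3 variants that include a bit-rate segment, don't fit this pattern), a name can be unpacked like this:

```python
def parse_output_format(name: str) -> dict:
    """Split a format name like 'riff-24khz-16bit-mono-pcm' into its parts.

    Assumes the five-part naming pattern used by the PCM and Opus formats
    listed above; not every documented format follows it.
    """
    container, rate, depth, channels, codec = name.split("-")
    return {
        "container": container,
        "sample_rate_hz": int(rate.removesuffix("khz")) * 1000,
        "bit_depth": int(depth.removesuffix("bit")),
        "channels": channels,
        "codec": codec,
    }
```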
### Request body
-The body of each `POST` request is sent as [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md). SSML allows you to choose the voice and language of the synthesized speech returned by the Text-to-Speech service. For a complete list of supported voices, see [language support](language-support.md#text-to-speech).
+The body of each `POST` request is sent as [SSML](speech-synthesis-markup.md). SSML allows you to choose the voice and language of the synthesized speech that the text-to-speech feature returns. For a complete list of supported voices, see [Language and voice support for the Speech service](language-support.md#text-to-speech).
> [!NOTE]
-> If using a custom neural voice, the body of a request can be sent as plain text (ASCII or UTF-8).
+> If you're using a custom neural voice, the body of a request can be sent as plain text (ASCII or UTF-8).
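As a sketch of what an SSML request body can look like, the following builds a minimal `speak` document. The helper and the voice name are examples for illustration, not part of the documented API:

```python
from xml.sax.saxutils import escape

def build_ssml(text: str, voice: str, lang: str = "en-US") -> str:
    """Build a minimal SSML request body for a text-to-speech request."""
    return (
        f"<speak version='1.0' xml:lang='{lang}'>"
        f"<voice xml:lang='{lang}' name='{voice}'>{escape(text)}</voice>"
        "</speak>"
    )

# Example voice name; pick any supported voice from the language-support list.
body = build_ssml("Hello, world!", "en-US-JennyNeural")
```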
### Sample request
-This HTTP request uses SSML to specify the voice and language. If the body length is long, and the resulting audio exceeds 10 minutes - it is truncated to 10 minutes. In other words, the audio length cannot exceed 10 minutes.
+This HTTP request uses SSML to specify the voice and language. If the body length is long, and the resulting audio exceeds 10 minutes, it's truncated to 10 minutes. In other words, the audio length can't exceed 10 minutes.
```http
POST /cognitiveservices/v1 HTTP/1.1
Authorization: Bearer [Base64 access_token]
### HTTP status codes
-The HTTP status code for each response indicates success or common errors.
+The HTTP status code for each response indicates success or common errors:
| HTTP status code | Description | Possible reason |
||-|--|
-| 200 | OK | The request was successful; the response body is an audio file. |
-| 400 | Bad Request | A required parameter is missing, empty, or null. Or, the value passed to either a required or optional parameter is invalid. A common issue is a header that is too long. |
-| 401 | Unauthorized | The request is not authorized. Check to make sure your subscription key or token is valid and in the correct region. |
-| 415 | Unsupported Media Type | It's possible that the wrong `Content-Type` was provided. `Content-Type` should be set to `application/ssml+xml`. |
-| 429 | Too Many Requests | You have exceeded the quota or rate of requests allowed for your subscription. |
-| 502 | Bad Gateway | Network or server-side issue. May also indicate invalid headers. |
+| 200 | OK | The request was successful. The response body is an audio file. |
+| 400 | Bad request | A required parameter is missing, empty, or null. Or, the value passed to either a required or optional parameter is invalid. A common reason is a header that's too long. |
+| 401 | Unauthorized | The request is not authorized. Make sure your subscription key or token is valid and in the correct region. |
+| 415 | Unsupported media type | It's possible that the wrong `Content-Type` value was provided. `Content-Type` should be set to `application/ssml+xml`. |
+| 429 | Too many requests | You have exceeded the quota or rate of requests allowed for your subscription. |
+| 502 | Bad gateway | There's a network or server-side problem. This status might also indicate invalid headers. |
If the HTTP status is `200 OK`, the body of the response contains an audio file in the requested format. This file can be played as it's transferred, saved to a buffer, or saved to a file.
cognitive-services Speech To Text https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-to-text.md
Title: Speech-to-text overview - Speech service
-description: Speech-to-text software enables real-time transcription of audio streams into text. Your applications, tools, or devices can consume, display, and take action on this text input. This article is an overview of the benefits and capabilities of the speech-to-text service.
+description: Get an overview of the benefits and capabilities of the speech-to-text feature of the Speech Service.
keywords: speech to text, speech to text software
# What is speech-to-text?
-In this overview, you learn about the benefits and capabilities of the speech-to-text service.
-Speech-to-text, also known as speech recognition, enables real-time transcription of audio streams into text. Your applications, tools, or devices can consume, display, and take action on this text as command input. This service is powered by the same recognition technology that Microsoft uses for Cortana and Office products. It seamlessly works with the <a href="./speech-translation.md" target="_blank">translation </a> and <a href="./text-to-speech.md" target="_blank">text-to-speech </a> service offerings. For a full list of available speech-to-text languages, see [supported languages](language-support.md#speech-to-text).
+In this overview, you learn about the benefits and capabilities of the speech-to-text feature of the Speech service, which is part of Azure Cognitive Services.
-The speech-to-text service defaults to using the Universal language model. This model was trained using Microsoft-owned data and is deployed in the cloud. It's optimal for conversational and dictation scenarios. When using speech-to-text for recognition and transcription in a unique environment, you can create and train custom acoustic, language, and pronunciation models. Customization is helpful for addressing ambient noise or industry-specific vocabulary.
+Speech-to-text, also known as speech recognition, enables real-time transcription of audio streams into text. Your applications, tools, or devices can consume, display, and take action on this text as command input.
+
+This feature uses the same recognition technology that Microsoft uses for Cortana and Office products. It seamlessly works with the <a href="./speech-translation.md" target="_blank">translation </a> and <a href="./text-to-speech.md" target="_blank">text-to-speech </a> offerings of the Speech service. For a full list of available speech-to-text languages, see [Language and voice support for the Speech service](language-support.md#speech-to-text).
+
+The speech-to-text feature defaults to using the Universal Language Model. This model was trained through Microsoft-owned data and is deployed in the cloud. It's optimal for conversational and dictation scenarios.
+
+When you're using speech-to-text for recognition and transcription in a unique environment, you can create and train custom acoustic, language, and pronunciation models. Customization is helpful for addressing ambient noise or industry-specific vocabulary.
> [!NOTE]
> Bing Speech was decommissioned on October 15, 2019. If your applications, tools, or products are using the Bing Speech APIs, see [Migrate from Bing Speech to the Speech service](how-to-migrate-from-bing-speech.md).

## Get started
-See the [quickstart](get-started-speech-to-text.md) to get started with speech-to-text. The service is available via the [Speech SDK](speech-sdk.md), the [REST API](rest-speech-to-text.md#pronunciation-assessment-parameters), and the [Speech CLI](spx-overview.md).
+To get started with speech-to-text, see the [quickstart](get-started-speech-to-text.md). Speech-to-text is available via the [Speech SDK](speech-sdk.md), the [REST API](rest-speech-to-text.md#pronunciation-assessment-parameters), and the [Speech CLI](spx-overview.md).
## Sample code
-Sample code for the Speech SDK is available on GitHub. These samples cover common scenarios like reading audio from a file or stream, continuous and at-start recognition, and working with custom models.
+Sample code for the Speech SDK is available on GitHub. These samples cover common scenarios like reading audio from a file or stream, continuous and at-start recognition, and working with custom models:
- [Speech-to-text samples (SDK)](https://github.com/Azure-Samples/cognitive-services-speech-sdk)
- [Batch transcription samples (REST)](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/batch)
Sample code for the Speech SDK is available on GitHub. These samples cover commo
## Customization
-In addition to the standard Speech service model, you can create custom models. Customization helps to overcome speech recognition barriers such as speaking style, vocabulary and background noise, see [Custom Speech](./custom-speech-overview.md). Customization options vary by language/locale, see [supported languages](./language-support.md) to verify support.
+In addition to the standard Speech service model, you can create custom models. Customization helps to overcome speech recognition barriers such as speaking style, vocabulary, and background noise. For more information, see [Custom Speech](./custom-speech-overview.md).
+
+Customization options vary by language or locale. To verify support, see [Language and voice support for the Speech service](./language-support.md).
## Batch transcription
-Batch transcription is a set of REST API operations that enable you to transcribe a large amount of audio in storage. You can point to audio files with a shared access signature (SAS) URI and asynchronously receive transcription results. See the [how-to](batch-transcription.md) for more information on how to use the batch transcription API.
+Batch transcription is a set of REST API operations that enable you to transcribe a large amount of audio in storage. You can point to audio files with a shared access signature (SAS) URI and asynchronously receive transcription results. For more information on how to use the batch transcription API, see [How to use batch transcription](batch-transcription.md).
## Reference docs
-The [Speech SDK](speech-sdk.md) provides most of the functionalities needed to interact with the Speech service. For scenarios such as model development and batch transcription you can use the REST API.
+The [Speech SDK](speech-sdk.md) provides most of the functionalities that you need to interact with the Speech service. For scenarios such as model development and batch transcription, you can use the REST API.
### Speech SDK reference docs
Use the following list to find the appropriate Speech SDK reference docs:
- <a href="https://aka.ms/csspeech/objectivecref" target="_blank" rel="noopener">Objective-C SDK </a>

> [!TIP]
-> The Speech service SDK is actively maintained and updated. To track changes, updates and feature additions refer to the [Speech SDK release notes](releasenotes.md).
+> The Speech service SDK is actively maintained and updated. To track changes, updates, and feature additions, see the [Speech SDK release notes](releasenotes.md).
### REST API references
-For Speech-to-text REST APIs, refer to the listing below:
+For speech-to-text REST APIs, see the following resources:
- [REST API: Speech-to-text](rest-speech-to-text.md)
- [REST API: Pronunciation assessment](rest-speech-to-text.md#pronunciation-assessment-parameters)
cognitive-services Text To Speech https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/text-to-speech.md
Title: Text-to-speech overview - Speech service
-description: The text-to-speech feature in the Speech service enables your applications, tools, or devices to convert text into natural human-like synthesized speech. This article is an overview of the benefits and capabilities of the text-to-speech service.
+description: Get an overview of the benefits and capabilities of the text-to-speech feature of the Speech service.
keywords: text to speech
-# What is Text-to-Speech?
+# What is text-to-speech?
-In this overview, you learn about the benefits and capabilities of the Text-to-Speech service, which enables your applications, tools, or devices to convert text into human-like synthesized speech. Use human-like prebuilt neural voices out-of-the-box, or create a custom neural voice unique to your product or brand. For a full list of supported voices, languages, and locales, see [supported languages](language-support.md#text-to-speech).
+In this overview, you learn about the benefits and capabilities of the text-to-speech feature of the Speech service, which is part of Azure Cognitive Services.
+
+Text-to-speech enables your applications, tools, or devices to convert text into humanlike synthesized speech. The text-to-speech capability is also known as speech synthesis. Use humanlike prebuilt neural voices out of the box, or create a custom neural voice that's unique to your product or brand. For a full list of supported voices, languages, and locales, see [Language and voice support for the Speech service](language-support.md#text-to-speech).
> [!NOTE] >
In this overview, you learn about the benefits and capabilities of the Text-to-S
## Core features
-The Text-to-Speech service includes the following features.
+Text-to-speech includes the following features:
| Feature | Summary | Demo |
|--|-||
-| Prebuilt Neural Voice (referred as *Neural* on [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/)) | Highly natural out-of-the-box voices powered by deep neural networks. Create an Azure account and Speech service subscription, then use the [Speech SDK](./get-started-text-to-speech.md) or visit the [Speech Studio portal](https://speech.microsoft.com/portal), and select prebuilt neural voices to get started. Go to the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) and check the pricing details. | Check the voice samples [here](https://azure.microsoft.com/services/cognitive-services/text-to-speech/#overview) and determine the right voice for your business needs. |
-| Custom Neural Voice (referred as *Custom Neural* on [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/)) | Easy-to-use self-service for creating a natural brand voice, with limited access for responsible use. Create an Azure account and Speech service subscription (with S0 tier), and [apply](https://aka.ms/customneural) to use custom neural feature. After you've been granted the access, visit the [Speech Studio portal](https://speech.microsoft.com/portal) and then select Custom Voice to get started. Go to the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) and check the pricing details. | Check the voice samples [here](https://aka.ms/customvoice). |
+| Prebuilt neural voice (called *Neural* on the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/)) | Highly natural out-of-the-box voices. Create an Azure account and Speech service subscription, and then use the [Speech SDK](./get-started-text-to-speech.md) or visit the [Speech Studio portal](https://speech.microsoft.com/portal) and select prebuilt neural voices to get started. Check the [pricing details](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/). | Check the [voice samples](https://azure.microsoft.com/services/cognitive-services/text-to-speech/#overview) and determine the right voice for your business needs. |
+| Custom neural voice (called *Custom Neural* on the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/)) | Easy-to-use self-service for creating a natural brand voice, with limited access for responsible use. Create an Azure account and Speech service subscription (with the S0 tier), and [apply](https://aka.ms/customneural) to use the custom neural feature. After you've been granted access, visit the [Speech Studio portal](https://speech.microsoft.com/portal) and select **Custom Voice** to get started. Check the [pricing details](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/). | Check the [voice samples](https://aka.ms/customvoice). |
+
+### More about neural text-to-speech features
+The text-to-speech feature of the Speech service on Azure has been fully upgraded to the neural text-to-speech engine. This engine uses deep neural networks to make the voices of computers nearly indistinguishable from the recordings of people. With the clear articulation of words, neural text-to-speech significantly reduces listening fatigue when users interact with AI systems.
+
+The patterns of stress and intonation in spoken language are called _prosody_. Traditional text-to-speech systems break down prosody into separate linguistic analysis and acoustic prediction steps that are governed by independent models. That can result in muffled, buzzy voice synthesis.
-### Learn more about neural Text-to-Speech features
-Text-to-Speech (TTS), also known as speech synthesis, enables your applications to speak. The Text-to-Speech feature of Speech service on Azure has been fully upgraded to the neural TTS engine, which uses deep neural networks to make the voices of computers nearly indistinguishable from the recordings of people. With the human-like natural prosody and clear articulation of words, neural Text-to-Speech has significantly reduced listening fatigue when you interact with AI systems.
+Here's more information about neural text-to-speech features in the Speech service, and how they overcome the limits of traditional text-to-speech systems:
-The patterns of stress and intonation in spoken language are called _prosody_. Traditional Text-to-Speech systems break down prosody into separate linguistic analysis and acoustic prediction steps that are governed by independent models. That can result in muffled, buzzy voice synthesis. Microsoft neural Text-to-Speech capability does prosody prediction and voice synthesis simultaneously, uses deep neural networks to overcome the limits of traditional Text-to-Speech systems in matching the patterns of stress and intonation in spoken language, and synthesizes the units of speech into a computer voice. The result is a more fluid and natural-sounding voice.
+* **Real-time speech synthesis**: Use the [Speech SDK](./get-started-text-to-speech.md) or [REST API](rest-text-to-speech.md) to convert text-to-speech by using [prebuilt neural voices](language-support.md#text-to-speech) or [custom neural voices](custom-neural-voice.md).
-* Real-time speech synthesis - Use the [Speech SDK](./get-started-text-to-speech.md) or [REST API](rest-text-to-speech.md) to convert Text-to-Speech using [prebuilt neural voices](language-support.md#text-to-speech) or [custom neural voices](custom-neural-voice.md).
+* **Asynchronous synthesis of long audio**: Use the [Long Audio API](long-audio-api.md) to asynchronously synthesize text-to-speech files longer than 10 minutes (for example, audio books or lectures). Unlike synthesis performed via the Speech SDK or text-to-speech REST API, responses aren't returned in real time. The expectation is that requests are sent asynchronously, responses are polled for, and synthesized audio is downloaded when the service makes it available.
-* Asynchronous synthesis of long audio - Use the [Long Audio API](long-audio-api.md) to asynchronously synthesize Text-to-Speech files longer than 10 minutes (for example audio books or lectures). Unlike synthesis performed using the Speech SDK or speech-to-text REST API, responses aren't returned in real time. The expectation is that requests are sent asynchronously, responses are polled for, and that the synthesized audio is downloaded when made available from the service.
+* **Prebuilt neural voices**: Microsoft neural text-to-speech capability uses deep neural networks to overcome the limits of traditional speech synthesis with regard to stress and intonation in spoken language. Prosody prediction and voice synthesis happen simultaneously, which results in more fluid and natural-sounding outputs. You can use neural voices to:
-* Prebuilt neural voices - Deep neural networks are used to overcome the limits of traditional speech synthesis with regard to stress and intonation in spoken language. Prosody prediction and voice synthesis are performed simultaneously, which results in more fluid and natural-sounding outputs. Neural voices can be used to make interactions with chatbots and voice assistants more natural and engaging, convert digital texts such as e-books into audiobooks, and enhance in-car navigation systems. With the human-like natural prosody and clear articulation of words, neural voices significantly reduce listening fatigue when you interact with AI systems. For a full list of platform neural voices, see [supported languages](language-support.md#text-to-speech).
+ - Make interactions with chatbots and voice assistants more natural and engaging.
+ - Convert digital texts such as e-books into audiobooks.
+ - Enhance in-car navigation systems.
+
+ For a full list of platform neural voices, see [Language and voice support for the Speech service](language-support.md#text-to-speech).
-* Fine-tune Text-to-Speech output with SSML - Speech Synthesis Markup Language (SSML) is an XML-based markup language used to customize Text-to-Speech outputs. With SSML, you can not only adjust pitch, add pauses, improve pronunciation, change speaking rate, adjust volume, and attribute multiple voices to a single document, but also define your own lexicons or switch to different speaking styles. With the [multi-lingual voices](https://techcommunity.microsoft.com/t5/azure-ai/azure-text-to-speech-updates-at-build-2021/ba-p/2382981), you can also adjust the speaking languages via SSML. See [how to use SSML](speech-synthesis-markup.md) to fine-tune the voice output for your scenario.
+* **Fine-tuning text-to-speech output with SSML**: Speech Synthesis Markup Language (SSML) is an XML-based markup language that's used to customize text-to-speech outputs. With SSML, you can adjust pitch, add pauses, improve pronunciation, change speaking rate, adjust volume, and attribute multiple voices to a single document.
-* Visemes - [Visemes](how-to-speech-synthesis-viseme.md) are the key poses in observed speech, including the position of the lips, jaw and tongue when producing a particular phoneme. Visemes have a strong correlation with voices and phonemes. Using viseme events in Speech SDK, you can generate facial animation data, which can be used to animate faces in lip-reading communication, education, entertainment, and customer service. Viseme is currently only supported for the `en-US` English (United States) [neural voices](language-support.md#text-to-speech).
+ You can use SSML to define your own lexicons or switch to different speaking styles. With the [multilingual voices](https://techcommunity.microsoft.com/t5/azure-ai/azure-text-to-speech-updates-at-build-2021/ba-p/2382981), you can also adjust the speaking languages via SSML. To fine-tune the voice output for your scenario, see [Improve synthesis with Speech Synthesis Markup Language](speech-synthesis-markup.md).
+
+* **Visemes**: [Visemes](how-to-speech-synthesis-viseme.md) are the key poses in observed speech, including the position of the lips, jaw, and tongue in producing a particular phoneme. Visemes have a strong correlation with voices and phonemes.
+
+ By using viseme events in the Speech SDK, you can generate facial animation data. This data can be used to animate faces in lip-reading communication, education, entertainment, and customer service. Viseme is currently supported only for the `en-US` (US English) [neural voices](language-support.md#text-to-speech).
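As a concrete illustration of the real-time synthesis path, here's a short Python sketch that builds an SSML request body for the text-to-speech REST endpoint. The endpoint shape, header names, and voice name are assumptions for illustration; verify them against the [REST API](rest-text-to-speech.md) reference before use.

```python
# Sketch: build an SSML body for a text-to-speech REST request.
# The voice name and endpoint below are assumptions; check the REST docs.
def build_ssml(text, voice="en-US-JennyNeural", lang="en-US"):
    return (
        f"<speak version='1.0' xml:lang='{lang}'>"
        f"<voice name='{voice}'>{text}</voice>"
        "</speak>"
    )

ssml = build_ssml("Hello from neural text-to-speech.")
# A POST to https://<region>.tts.speech.microsoft.com/cognitiveservices/v1
# would send this body with Content-Type: application/ssml+xml and an
# X-Microsoft-OutputFormat header selecting the audio format.
print(ssml)
```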
> [!NOTE]
+> We plan to retire the traditional/standard voices and non-neural custom voice in 2024. After that, we'll no longer support them.
>
-> The traditional/standard voices and non-neural custom voice will be retired and no longer be supported in 2024. If your applications, tools, or products are using any of the standard voices and custom voices, we've created guides to help you migrate to the neural version.
->
-> * [Migrate to neural voices](migration-overview-neural-voice.md)
+> If your applications, tools, or products are using any of the standard voices and custom voices, we've created guides to help you migrate to the neural version. For more information, see [Migrate to neural voices](migration-overview-neural-voice.md).
## Get started
-See the [quickstart](get-started-text-to-speech.md) to get started with Text-to-Speech. The Text-to-Speech service is available via the [Speech SDK](speech-sdk.md), the [REST API](rest-text-to-speech.md), and the [Speech CLI](spx-overview.md)
+To get started with text-to-speech, see the [quickstart](get-started-text-to-speech.md). Text-to-speech is available via the [Speech SDK](speech-sdk.md), the [REST API](rest-text-to-speech.md), and the [Speech CLI](spx-overview.md).
## Sample code
-Sample code for Text-to-Speech is available on GitHub. These samples cover Text-to-Speech conversion in most popular programming languages.
+Sample code for text-to-speech is available on GitHub. These samples cover text-to-speech conversion in most popular programming languages:
-* [Text-to-Speech samples (SDK)](https://github.com/Azure-Samples/cognitive-services-speech-sdk)
-* [Text-to-Speech samples (REST)](https://github.com/Azure-Samples/Cognitive-Speech-TTS)
+* [Text-to-speech samples (SDK)](https://github.com/Azure-Samples/cognitive-services-speech-sdk)
+* [Text-to-speech samples (REST)](https://github.com/Azure-Samples/Cognitive-Speech-TTS)
## Custom neural voice
-In addition to prebuilt neural voices, you can create and fine-tune custom neural voices unique to your product or brand. All it takes to get started are a handful of audio files and the associated transcriptions. For more information, see [Get started with custom neural voice](how-to-custom-voice.md)
+In addition to prebuilt neural voices, you can create and fine-tune custom neural voices that are unique to your product or brand. All it takes to get started is a handful of audio files and the associated transcriptions. For more information, see [Get started with custom neural voice](how-to-custom-voice.md).
## Pricing note
-When using the Text-to-Speech service, you are billed for each character that is converted to speech, including punctuation. While the SSML document itself is not billable, optional elements that are used to adjust how the text is converted to speech, like phonemes and pitch, are counted as billable characters. Here's a list of what's billable:
+When you use the text-to-speech feature, you're billed for each character that's converted to speech, including punctuation. Although the SSML document itself is not billable, optional elements that are used to adjust how the text is converted to speech, like phonemes and pitch, are counted as billable characters. Here's a list of what's billable:
-* Text passed to the Text-to-Speech service in the SSML body of the request
+* Text passed to the text-to-speech feature in the SSML body of the request
* All markup within the text field of the request body in the SSML format, except for `<speak>` and `<voice>` tags
* Letters, punctuation, spaces, tabs, markup, and all white-space characters
* Every code point defined in Unicode
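As a rough planning aid, the following sketch estimates billable characters for an SSML body according to the rules above: everything counts except the `<speak>` and `<voice>` tags themselves, and each Chinese character is billed as two. This is an approximation for estimation only, not the service's exact metering logic.

```python
import re

def billable_characters(ssml: str) -> int:
    """Approximate billable character count for an SSML request body.

    Everything is billable except the <speak> and <voice> tags themselves;
    Chinese characters are billed as two. Approximation only.
    """
    # Remove only the <speak ...> and <voice ...> tags; their content stays.
    text = re.sub(r"</?(?:speak|voice)\b[^>]*>", "", ssml)
    total = 0
    for ch in text:
        # CJK Unified Ideographs are billed as two characters.
        total += 2 if "\u4e00" <= ch <= "\u9fff" else 1
    return total

print(billable_characters("<speak><voice name='x'>Hi!</voice></speak>"))  # → 3
```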
-For detailed information, see [Pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
+For detailed information, see [Speech service pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
> [!IMPORTANT]
-> Each Chinese characters are counted as two characters for billing, including Kanji used in Japanese, Hanja used in Korean, or Hanzi used in other languages.
+> Each Chinese character is counted as two characters for billing, including kanji used in Japanese, hanja used in Korean, or hanzi used in other languages.
## Reference docs

* [Speech SDK](speech-sdk.md)
-* [REST API: Text-to-Speech](rest-text-to-speech.md)
+* [REST API: Text-to-speech](rest-text-to-speech.md)
## Next steps
cognitive-services Translate With Custom Model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/custom-translator/v2-preview/how-to/translate-with-custom-model.md
> [!IMPORTANT]
> Custom Translator v2.0 is currently in public preview. Some features may not be supported or have constrained capabilities.
-After you publish your custom model you can access it with the Translator API by using the `Category ID` parameter. To retrieve, choose the copy icon:
-
- :::image type="content" source="../media/how-to/publish-model.png" alt-text="{alt-text}":::
+After you publish your custom model, you can access it with the Translator API by using the `Category ID` parameter.
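To make the `Category ID` usage concrete, here's a minimal sketch of the request shape for the Translator v3 translate endpoint. The endpoint and header names follow the public Translator v3 reference; the key and category values are placeholders you must supply.

```python
# Sketch of a Translator v3 request targeting a custom model via the
# category parameter. The key and category ID are placeholders.
import json

endpoint = "https://api.cognitive.microsofttranslator.com/translate"
params = {
    "api-version": "3.0",
    "from": "en",
    "to": "de",
    "category": "<your-category-id>",  # the Category ID copied after publishing
}
headers = {
    "Ocp-Apim-Subscription-Key": "<your-translator-key>",
    "Content-Type": "application/json",
}
body = [{"Text": "Hello, world."}]

# With the requests library you would send:
#   requests.post(endpoint, params=params, headers=headers, json=body)
print(json.dumps({"endpoint": endpoint, "params": params, "body": body}, indent=2))
```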
## How to translate
cognitive-services View Model Test Translation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/custom-translator/v2-preview/how-to/view-model-test-translation.md
Once your model has successfully trained, you can use translations to evaluate t
BLEU (Bilingual Evaluation Understudy) is an algorithm for evaluating the precision or accuracy of text that has been machine translated from one language to another. Custom Translator uses the BLEU metric as one way of conveying translation accuracy.
-A BLEU score is a number between zero and 100. A score of zero indicates a very low quality translation where nothing in the translation matched the reference. A score of 100 indicates a perfect translation that is identical to the reference. It's not necessary to attain a score of 100ΓÇöa BLEU score between 40 and 60 indicates a high quality translation.
+A BLEU score is a number between zero and 100. A score of zero indicates a low-quality translation where nothing in the translation matched the reference. A score of 100 indicates a perfect translation that is identical to the reference. It's not necessary to attain a score of 100ΓÇöa BLEU score between 40 and 60 indicates a high-quality translation.
[Read more](/azure/cognitive-services/translator/custom-translator/what-is-bleu-score?WT.mc_id=aiml-43548-heboelma)
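To make the metric concrete, here's a much-simplified sketch of the idea behind BLEU: clipped n-gram precision of the candidate against a reference, scaled to 0 through 100. Real BLEU combines 1-4 gram precisions with a brevity penalty, so treat this as an illustration only, not the score Custom Translator reports.

```python
from collections import Counter

def unigram_precision_score(candidate: str, reference: str) -> float:
    """Clipped unigram precision scaled to 0-100.

    Illustrates only the core idea behind BLEU; the real metric combines
    several n-gram orders with a brevity penalty.
    """
    cand = candidate.lower().split()
    ref = Counter(reference.lower().split())
    # Clip each candidate word's count by its count in the reference.
    matched = sum(min(count, ref[word]) for word, count in Counter(cand).items())
    return 100.0 * matched / len(cand) if cand else 0.0

print(unigram_precision_score("the cat sat on the mat", "the cat is on the mat"))
```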
cognitive-services Project Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/custom-translator/v2-preview/project-overview.md
- Title: What is a project? - Custom Translator-
-description: This article will explain the project categories and labels for the Custom Translator service.
----- Previously updated : 01/20/2022--
-#Customer intent: As a Custom Translator user, I want to concept of a project, so that I can use it efficiently.
-
-# What is a Custom Translator project?
-
-A project contains translation models for one language pair. Each project
-initially includes all documents that are uploaded to a workspace with the correct language pair. For example, if you have both an English-to-Spanish project and a Spanish-to-English project, the same documents will be included in both projects. Each project has an associated `CategoryID` that is used when querying the [V3 API](../../reference/v3-0-translate.md?tabs=curl) for translations. The `CategoryID` is parameter used to get translations from a customized system built with Custom Translator.
-
-## Project category
-
-The project `Category ID` identifies the domainΓÇöthe area of terminology and style you want to use for your project. Choose the category most relevant to the contents of your documents.
-
-In the same workspace, you may create projects for the same language pair in
-different categories. Custom Translator prevents creation of a duplicate project
-with the same language pair and category. Applying a label to your project
-allows you to avoid this restriction. Don't use labels unless you're building translation systems for multiple clients, because adding a unique label to your project will be reflected in your projects `Category ID`.
-
-## Project label
-
-Custom Translator allows you to assign a project label to your project. The
-project label distinguishes between multiple projects with the same language
-pair and category. As a best practice, avoid using project labels unless
-necessary.
-
-The project label is used as part of the `Category ID`. If the project label is
-left unset or is set identically across projects, then, projects with the same
-category and *different* language pairs will share the same `Category ID`. This approach is advantageous because it allows you or your customer to switch between
-languages when using the Text Translator API without worrying about which `Category ID` to use.
-
-For example, if you want to enable translations in the technology domain from
-English-to-French and French-to-English, create two projects: one for English → French, and one for French → English. Specify the same category (technology) for both and leave the project label blank. The `Category ID` for both projects will be the same. When you call the Text API to translate from both models, only change the _from_ and _to_ languages without modifying the CategoryID.
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Learn how to manage projects](how-to/create-manage-project.md)
cognitive-services Markdown Format https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/language-service/question-answering/reference/markdown-format.md
+
+ Title: Markdown format - question answering
+description: Following is the list of markdown formats that you can use in your answer text.
+++++ Last updated : 01/21/2022++
+# Markdown format supported in answer text
+
+Question answering stores answer text as markdown. There are many flavors of markdown. To make sure the answer text is returned and displayed correctly, use this reference.
+
+Use the **[CommonMark](https://commonmark.org/help/tutorial/)** tutorial to validate your markdown. The tutorial has a **Try it** feature for quick copy/paste validation.
+
+## When to use rich-text editing versus markdown
+
+Rich-text editing of answers allows you, as the author, to use a formatting toolbar to quickly select and format text.
+
+Markdown is a better tool when you need to autogenerate content to create knowledge bases to be imported as part of a CI/CD pipeline or for batch testing.
+
+## Supported markdown format
+
+Following is the list of markdown formats that you can use in your answer text.
+
+|Purpose|Format|Example markdown|
+|--|--|--|
+|A new line between two sentences.|`\n\n`|`How can I create a bot with \n\n question answering?`|
+|Headers from h1 to h6; the number of `#` denotes the header level. One `#` is h1.|`\n# text \n## text \n### text \n#### text \n##### text` |`## Creating a bot \n ...text.... \n### Important news\n ...text... \n### Related Information\n ....text...`<br><br>`\n# my h1 \n## my h2\n### my h3 \n#### my h4 \n##### my h5`|
+|Italics |`*text*`|`How do I create a bot with *question answering*?`|
+|Strong (bold)|`**text**`|`How do I create a bot with **question answering**?`|
+|URL for link|`[text](https://www.my.com)`|`How do I create a bot with [question answering](https://language.cognitive.azure.com/)?`|
+|*URL for public image|`![text](https://www.my.com/image.png)`|`How can I create a bot with ![question answering](path-to-your-image.png)`|
+|Strikethrough|`~~text~~`|`some ~~questions~~ questions need to be asked`|
+|Bold and italics|`***text***`|`How can I create a ***question answering*** bot?`|
+|Bold URL for link|`[**text**](https://www.my.com)`|`How do I create a bot with [**question answering**](https://language.cognitive.azure.com/)?`|
+|Italics URL for link|`[*text*](https://www.my.com)`|`How do I create a bot with [*question answering*](https://language.cognitive.azure.com/)?`|
+|Escape markdown symbols|`\*text\*`|`How do I create a bot with \*question answering\*?`|
+|Ordered list|`\n 1. item1 \n 1. item2`|`This is an ordered list: \n 1. List item 1 \n 1. List item 2`<br>The preceding example uses automatic numbering built into markdown.<br>`This is an ordered list: \n 1. List item 1 \n 2. List item 2`<br>The preceding example uses explicit numbering.|
+|Unordered list|`\n * item1 \n * item2`<br>or<br>`\n - item1 \n - item2`|`This is an unordered list: \n * List item 1 \n * List item 2`|
+|Nested lists|`\n * Parent1 \n\t * Child1 \n\t * Child2 \n * Parent2`<br><br>`\n * Parent1 \n\t 1. Child1 \n\t * Child2 \n 1. Parent2`<br><br>You can nest ordered and unordered lists together. The tab, `\t`, indicates the indentation level of the child element.|`This is an unordered list: \n * List item 1 \n\t * Child1 \n\t * Child2 \n * List item 2`<br><br>`This is an ordered nested list: \n 1. Parent1 \n\t 1. Child1 \n\t 1. Child2 \n 1. Parent2`|
+
+* Question answering doesn't process the image in any way. It is the client application's role to render the image.
+
+If you want to add content by using the update/replace knowledge base APIs and the content/file contains HTML tags, you can preserve the HTML in your file by ensuring that the opening and closing tags are converted to the encoded format.
+
+| Preserve HTML | Representation in the API request | Representation in KB |
+|--||-|
+| Yes | \&lt;br\&gt; | &lt;br&gt; |
+| Yes | \&lt;h3\&gt;header\&lt;/h3\&gt; | &lt;h3&gt;header&lt;/h3&gt; |
+
+Additionally, `CR LF(\r\n)` is converted to `\n` in the KB. `LF(\n)` is kept as is. If you want to escape an escape sequence like `\t` or `\n`, use a backslash. For example: '\\\\r\\\\n' and '\\\\t'.
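A small sketch of the two conversions described above: encoding HTML tags you want preserved, and the CR LF normalization applied in the KB. This just mirrors the rules stated here; the exact service behavior may differ.

```python
import html

# Encode HTML tags so they survive the update/replace knowledge base APIs.
encoded = html.escape("<h3>header</h3>")
print(encoded)  # → &lt;h3&gt;header&lt;/h3&gt;

# The KB normalizes CR LF line endings to LF, as noted above.
normalized = "line one\r\nline two".replace("\r\n", "\n")
print(normalized.split("\n"))  # → ['line one', 'line two']
```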
+
+## Next steps
+
+* [Import a knowledge base](../how-to/migrate-knowledge-base.md)
container-registry Container Registry Auto Purge https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-auto-purge.md
At a minimum, specify the following when you run `acr purge`:
`acr purge` supports several optional parameters. The following two are used in examples in this article:
-* `--untagged` - Specifies that manifests that don't have associated tags (*untagged manifests*) are deleted.
+* `--untagged` - Specifies that all manifests that don't have associated tags (*untagged manifests*) are deleted.
* `--dry-run` - Specifies that no data is deleted, but the output is the same as if the command is run without this flag. This parameter is useful for testing a purge command to make sure it does not inadvertently delete data you intend to preserve.
* `--keep` - Specifies that the latest x number of to-be-deleted tags are retained.
* `--concurrency` - Specifies a number of purge tasks to process concurrently. A default value is used if this parameter is not provided.
For additional parameters, run `acr purge --help`.
### Run in an on-demand task
-The following example uses the [az acr run][az-acr-run] command to run the `acr purge` command on-demand. This example deletes all image tags and manifests in the `hello-world` repository in *myregistry* that were modified more than 1 day ago. The container command is passed using an environment variable. The task runs without a source context.
+The following example uses the [az acr run][az-acr-run] command to run the `acr purge` command on-demand. This example deletes all image tags and manifests in the `hello-world` repository in *myregistry* that were modified more than 1 day ago and all untagged manifests. The container command is passed using an environment variable. The task runs without a source context.
```azurecli # Environment variable for container command line
cost-management-billing Prepay Jboss Eap Integrated Support App Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/reservations/prepay-jboss-eap-integrated-support-app-service.md
+
+ Title: Save on JBoss EAP Integrated Support on Azure App Service with reservations
+description: Learn how you can save on your JBoss EAP Integrated Support fee on Azure App Service.
+++++ Last updated : 10/01/2021++++
+# Reduce costs with JBoss EAP Integrated Support reservations
+
+You can save on your JBoss EAP Integrated Support costs on Azure App Service when you purchase reservations for one year. You can purchase a reservation for JBoss EAP Integrated Support at any time.
+
+## Save with JBoss EAP Integrated Support reservations
+
+When you purchase a JBoss EAP Integrated Support reservation, the discount is automatically applied to the JBoss apps that match the reservation scope. You don't need to assign a reservation to an instance to receive the discount.
+
+## Buy a JBoss EAP Integrated Support reservation
+
+You can buy a reservation for JBoss EAP Integrated Support in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Reservations/CreateBlade/referrer/documentation/filters/%7B%22reservedResourceType%22%3A%22VirtualMachines%22%7D). Pay for the reservation [up front or with monthly payments](prepare-buy-reservation.md).
+
+- You must be in an Owner role for at least one EA subscription or a subscription with a pay-as-you-go rate.
+- For EA subscriptions, the **Add Reserved Instances** option must be enabled in the [EA portal](https://ea.azure.com/). Or, if that setting is disabled, you must be an EA Admin for the subscription.
+- For the Cloud Solution Provider (CSP) program, only the admin agents or sales agents can buy reservations.
+
+To buy an instance:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+2. Select **All services** > **Reservations**.
+3. Select **Add** to purchase a new reservation, and then select **Instance**.
+4. Enter required fields.
+
+If you have an EA agreement, you can use the **Add more** option to quickly add additional instances. This option isn't available for other subscription types.
++
+## Cancel, exchange, or refund reservations
+
+You can cancel, exchange, or refund reservations with certain limitations. For more information, see [Self-service exchanges and refunds for Azure Reservations](exchange-and-refund-azure-reservations.md).
+
+## Discount application shown in usage data
+
+Your usage data has an effective price of zero for the usage that gets a reservation discount. The usage data shows the reservation discount for each stamp instance in each reservation.
+
+For more information about how reservation discount shows in usage data, see [Get Enterprise Agreement reservation costs and usage](understand-reserved-instance-usage-ea.md) if you're an Enterprise Agreement (EA) customer. Otherwise see, [Understand Azure reservation usage for your individual subscription with pay-as-you-go rates](understand-reserved-instance-usage.md).
+
+## Next steps
+
+- To learn more about Azure Reservations, see the following articles:
+ - [What are Azure Reservations?](save-compute-costs-reservations.md)
+ - [Understand how an Azure App Service Isolated Stamp reservation discount is applied](reservation-discount-app-service.md)
+ - [Understand reservation usage for your Enterprise enrollment](understand-reserved-instance-usage-ea.md)
data-factory Azure Integration Runtime Ip Addresses https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/azure-integration-runtime-ip-addresses.md
Previously updated : 01/06/2020 Last updated : 01/21/2022 # Azure Integration Runtime IP addresses
Allow traffic from the IP addresses listed for the Azure Integration runtime in
## Next steps
-* [Security considerations for data movement in Azure Data Factory](data-movement-security-considerations.md)
+* [Security considerations for data movement in Azure Data Factory](data-movement-security-considerations.md)
data-factory Concepts Pipelines Activities https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/concepts-pipelines-activities.md
Control activity | Description
[Web Activity](control-flow-web-activity.md) | Web Activity can be used to call a custom REST endpoint from a pipeline. You can pass datasets and linked services to be consumed and accessed by the activity. [Webhook Activity](control-flow-webhook-activity.md) | Using the webhook activity, call an endpoint, and pass a callback URL. The pipeline run waits for the callback to be invoked before proceeding to the next activity.
+## Creating a pipeline with UI
+
+# [Azure Data Factory](#tab/data-factory)
+To create a new pipeline, go to the **Author** tab in Data Factory Studio (represented by the pencil icon), select the plus sign, and then choose **Pipeline** from the menu and **Pipeline** again from the submenu.
++
+Data factory will display the pipeline editor where you can find:
+
+1. All activities that can be used within the pipeline.
+1. The pipeline editor canvas, where activities will appear when added to the pipeline.
+1. The pipeline configurations pane, including parameters, variables, general settings, and output.
+1. The pipeline properties pane, where the pipeline name, optional description, and annotations can be configured. This pane will also show any items related to the pipeline within the data factory.
++
+# [Synapse Analytics](#tab/synapse-analytics)
+To create a new pipeline, go to the **Integrate** tab in Synapse Studio (represented by the pipeline icon), select the plus sign, and then choose **Pipeline** from the menu.
++
+Synapse will display the pipeline editor where you can find:
+
+1. All activities that can be used within the pipeline.
+1. The pipeline editor canvas, where activities will appear when added to the pipeline.
+1. The pipeline configurations pane, including parameters, variables, general settings, and output.
+1. The pipeline properties pane, where the pipeline name, optional description, and annotations can be configured. This pane will also show any items related to the pipeline in the Synapse workspace.
++++ ## Pipeline JSON Here is how a pipeline is defined in JSON format:
policy | Policies that affect the run-time behavior of the activity. This proper
dependsOn | This property is used to define activity dependencies, and how subsequent activities depend on previous activities. For more information, see [Activity dependency](#activity-dependency) | No ### Activity policy
-Policies affect the run-time behavior of an activity, giving configurability options. Activity Policies are only available for execution activities.
+Policies affect the run-time behavior of an activity, giving configuration options. Activity Policies are only available for execution activities.
### Activity policy JSON definition
data-factory Concepts Roles Permissions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/concepts-roles-permissions.md
Title: Roles and permissions for Azure Data Factory description: Describes the roles and permissions required to create Data Factories and to work with child resources. Previously updated : 11/5/2018 Last updated : 01/21/2022
data-factory Connector Azure Sql Data Warehouse https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-sql-data-warehouse.md
Previously updated : 12/29/2021 Last updated : 01/14/2022 # Copy and transform data in Azure Synapse Analytics by using Azure Data Factory or Synapse pipelines
To copy data to Azure Synapse Analytics, set the sink type in Copy Activity to *
| tableOption | Specifies whether to [automatically create the sink table](copy-activity-overview.md#auto-create-sink-tables) if not exists based on the source schema. Allowed values are: `none` (default), `autoCreate`. |No | | disableMetricsCollection | The service collects metrics such as Azure Synapse Analytics DWUs for copy performance optimization and recommendations, which introduce additional master DB access. If you are concerned with this behavior, specify `true` to turn it off. | No (default is `false`) | | maxConcurrentConnections |The upper limit of concurrent connections established to the data store during the activity run. Specify a value only when you want to limit concurrent connections.| No |
+| WriteBehavior | Specify the write behavior for copy activity to load data into Azure Synapse Analytics. <br/> Allowed values are **Insert** and **Upsert**. By default, the service uses insert to load data. | No |
+| upsertSettings | Specify the group of settings for write behavior. <br/> Applies when the WriteBehavior option is `Upsert`. | No |
+| ***Under `upsertSettings`:*** | | |
+| keys | Specify the column names for unique row identification. Either a single key or a series of keys can be used. If not specified, the primary key is used. | No |
+| interimSchemaName | Specify the interim schema for creating interim table. Note: user need to have the permission for creating and deleting table. By default, interim table will share the same schema as sink table. | No |
-#### Azure Synapse Analytics sink example
+#### Example 1: Azure Synapse Analytics sink
```json "sink": {
To copy data to Azure Synapse Analytics, set the sink type in Copy Activity to *
} ```
+#### Example 2: Upsert data
+
+```json
+"sink": {
+ "type": "SqlDWSink",
+ "writeBehavior": "Upsert",
+ "upsertSettings": {
+ "keys": [
+ "<column name>"
+ ],
+ "interimSchemaName": "<interim schema name>"
+ }
+}
+```
+ ## Parallel copy from Azure Synapse Analytics The Azure Synapse Analytics connector in copy activity provides built-in data partitioning to copy data in parallel. You can find data partitioning options on the **Source** tab of the copy activity.
data-factory Connector Azure Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-sql-database.md
Previously updated : 12/24/2021 Last updated : 01/14/2022 # Copy and transform data in Azure SQL Database by using Azure Data Factory or Azure Synapse Analytics
To copy data to Azure SQL Database, the following properties are supported in th
| writeBatchTimeout | The wait time for the batch insert operation to finish before it times out.<br/> The allowed value is **timespan**. An example is "00:30:00" (30 minutes). | No | | disableMetricsCollection | The service collects metrics such as Azure SQL Database DTUs for copy performance optimization and recommendations, which introduces additional master DB access. If you are concerned with this behavior, specify `true` to turn it off. | No (default is `false`) | | maxConcurrentConnections |The upper limit of concurrent connections established to the data store during the activity run. Specify a value only when you want to limit concurrent connections.| No |
+| WriteBehavior | Specify the write behavior for the copy activity to load data into Azure SQL Database. <br/> Allowed values are **Insert** and **Upsert**. By default, the service uses insert to load data. | No |
+| upsertSettings | Specify the group of settings for write behavior. <br/> Applies when the WriteBehavior option is `Upsert`. | No |
+| ***Under `upsertSettings`:*** | | |
+| useTempDB | Specify whether to use a global temporary table or a physical table as the interim table for upsert. <br>By default (`true`), the service uses a global temporary table as the interim table. | No |
+| interimSchemaName | Specify the interim schema for creating the interim table if a physical table is used. Note: the user needs permission to create and delete tables. By default, the interim table shares the same schema as the sink table. <br/> Applies when the useTempDB option is `false`. | No |
+| keys | Specify the column names for unique row identification. Either a single key or a series of keys can be used. If not specified, the primary key is used. | No |
**Example 1: Append data**
Learn more details from [Invoke a stored procedure from a SQL sink](#invoke-a-st
] ```
+**Example 3: Upsert data**
+
+```json
+"activities":[
+ {
+ "name": "CopyToAzureSQLDatabase",
+ "type": "Copy",
+ "inputs": [
+ {
+ "referenceName": "<input dataset name>",
+ "type": "DatasetReference"
+ }
+ ],
+ "outputs": [
+ {
+ "referenceName": "<Azure SQL Database output dataset name>",
+ "type": "DatasetReference"
+ }
+ ],
+ "typeProperties": {
+ "source": {
+ "type": "<source type>"
+ },
+ "sink": {
+ "type": "AzureSqlSink",
+ "tableOption": "autoCreate",
+ "writeBehavior": "upsert",
+ "upsertSettings": {
+ "useTempDB": true,
+ "keys": [
+ "<column name>"
+ ]
+ }
+ }
+ }
+ }
+]
+```
+ ## Parallel copy from SQL database The Azure SQL Database connector in copy activity provides built-in data partitioning to copy data in parallel. You can find data partitioning options on the **Source** tab of the copy activity.
Appending data is the default behavior of this Azure SQL Database sink connector
### Upsert data
-**Option 1:** When you have a large amount of data to copy, you can bulk load all records into a staging table by using the copy activity, then run a stored procedure activity to apply a [MERGE](/sql/t-sql/statements/merge-transact-sql) or INSERT/UPDATE statement in one shot.
-
-Copy activity currently doesn't natively support loading data into a database temporary table. There is an advanced way to set it up with a combination of multiple activities, refer to [Optimize Azure SQL Database Bulk Upsert scenarios](https://github.com/scoriani/azuresqlbulkupsert). Below shows a sample of using a permanent table as staging.
-
-As an example, you can create a pipeline with a **Copy activity** chained with a **Stored Procedure activity**. The former copies data from your source store into an Azure SQL Database staging table, for example, **UpsertStagingTable**, as the table name in the dataset. Then the latter invokes a stored procedure to merge source data from the staging table into the target table and clean up the staging table.
--
-In your database, define a stored procedure with MERGE logic, like the following example, which is pointed to from the previous stored procedure activity. Assume that the target is the **Marketing** table with three columns: **ProfileID**, **State**, and **Category**. Do the upsert based on the **ProfileID** column.
-
-```sql
-CREATE PROCEDURE [dbo].[spMergeData]
-AS
-BEGIN
- MERGE TargetTable AS target
- USING UpsertStagingTable AS source
- ON (target.[ProfileID] = source.[ProfileID])
- WHEN MATCHED THEN
- UPDATE SET State = source.State
- WHEN NOT matched THEN
- INSERT ([ProfileID], [State], [Category])
- VALUES (source.ProfileID, source.State, source.Category);
- TRUNCATE TABLE UpsertStagingTable
-END
-```
-
-**Option 2:** You can choose to [invoke a stored procedure within the copy activity](#invoke-a-stored-procedure-from-a-sql-sink). This approach runs each batch (as governed by the `writeBatchSize` property) in the source table instead of using bulk insert as the default approach in the copy activity.
-
-**Option 3:** You can use [Mapping Data Flow](#sink-transformation) which offers built-in insert/upsert/update methods.
+Copy activity now natively supports loading data into a database temporary table and then updating the data in the sink table if the key exists, or otherwise inserting new data.
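+As a rough illustration of these upsert semantics (a hedged sketch only, not the service's implementation — the service stages data in an interim table inside the database), incoming rows replace sink rows that match on the key columns and are appended otherwise:
+
+```python
+# Illustrative upsert-by-key semantics: rows from the source update sink
+# rows with a matching key and are inserted when no match exists.
+# The key columns play the role of the `keys` setting in upsertSettings.
+
+def upsert(sink_rows, incoming_rows, keys):
+    """Merge incoming_rows into sink_rows, using `keys` as the row identity."""
+    # Index existing sink rows by their key tuple for O(1) match lookups.
+    index = {tuple(row[k] for k in keys): i for i, row in enumerate(sink_rows)}
+    for row in incoming_rows:
+        key = tuple(row[k] for k in keys)
+        if key in index:
+            sink_rows[index[key]] = row   # key exists: update in place
+        else:
+            sink_rows.append(row)         # new key: insert
+    return sink_rows
+
+sink = [{"ProfileID": 1, "State": "WA"}]
+incoming = [{"ProfileID": 1, "State": "CA"}, {"ProfileID": 2, "State": "NY"}]
+upsert(sink, incoming, keys=["ProfileID"])
+# sink now holds the updated row for ProfileID 1 plus the new ProfileID 2 row
+```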
### Overwrite the entire table
data-factory Connector Azure Sql Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-sql-managed-instance.md
Previously updated : 12/28/2021 Last updated : 01/14/2022 # Copy and transform data in Azure SQL Managed Instance using Azure Data Factory or Synapse Analytics
To copy data to SQL Managed Instance, the following properties are supported in
| writeBatchSize |Number of rows to insert into the SQL table *per batch*.<br/>Allowed values are integers for the number of rows. By default, the service dynamically determines the appropriate batch size based on the row size. |No | | writeBatchTimeout |This property specifies the wait time for the batch insert operation to complete before it times out.<br/>Allowed values are for the timespan. An example is "00:30:00," which is 30 minutes. |No | | maxConcurrentConnections |The upper limit of concurrent connections established to the data store during the activity run. Specify a value only when you want to limit concurrent connections.| No |
+| WriteBehavior | Specify the write behavior for the copy activity to load data into Azure SQL Managed Instance. <br/> Allowed values are **Insert** and **Upsert**. By default, the service uses insert to load data. | No |
+| upsertSettings | Specify the group of settings for write behavior. <br/> Applies when the WriteBehavior option is `Upsert`. | No |
+| ***Under `upsertSettings`:*** | | |
+| useTempDB | Specify whether to use a global temporary table or a physical table as the interim table for upsert. <br>By default (`true`), the service uses a global temporary table as the interim table. | No |
+| interimSchemaName | Specify the interim schema for creating the interim table if a physical table is used. Note: the user needs permission to create and delete tables. By default, the interim table shares the same schema as the sink table. <br/> Applies when the useTempDB option is `false`. | No |
+| keys | Specify the column names for unique row identification. Either a single key or a series of keys can be used. If not specified, the primary key is used. | No |
**Example 1: Append data**
Learn more details from [Invoke a stored procedure from a SQL MI sink](#invoke-a
] ```
+**Example 3: Upsert data**
+
+```json
+"activities":[
+ {
+ "name": "CopyToAzureSqlMI",
+ "type": "Copy",
+ "inputs": [
+ {
+ "referenceName": "<input dataset name>",
+ "type": "DatasetReference"
+ }
+ ],
+ "outputs": [
+ {
+ "referenceName": "<SQL Managed Instance output dataset name>",
+ "type": "DatasetReference"
+ }
+ ],
+ "typeProperties": {
+ "source": {
+ "type": "<source type>"
+ },
+ "sink": {
+ "type": "SqlMISink",
+ "tableOption": "autoCreate",
+ "writeBehavior": "upsert",
+ "upsertSettings": {
+ "useTempDB": true,
+ "keys": [
+ "<column name>"
+ ]
+ }
+ }
+ }
+ }
+]
+```
+ ## Parallel copy from SQL MI The Azure SQL Managed Instance connector in copy activity provides built-in data partitioning to copy data in parallel. You can find data partitioning options on the **Source** tab of the copy activity.
Appending data is the default behavior of the SQL Managed Instance sink connecto
### Upsert data
-**Option 1:** When you have a large amount of data to copy, you can bulk load all records into a staging table by using the copy activity, then run a stored procedure activity to apply a [MERGE](/sql/t-sql/statements/merge-transact-sql) or INSERT/UPDATE statement in one shot.
-
-Copy activity currently doesn't natively support loading data into a database temporary table. There is an advanced way to set it up with a combination of multiple activities, refer to [Optimize SQL Database Bulk Upsert scenarios](https://github.com/scoriani/azuresqlbulkupsert). Below shows a sample of using a permanent table as staging.
-
-As an example, you can create a pipeline with a **Copy activity** chained with a **Stored Procedure activity**. The former copies data from your source store into an Azure SQL Managed Instance staging table, for example, **UpsertStagingTable**, as the table name in the dataset. Then the latter invokes a stored procedure to merge source data from the staging table into the target table and clean up the staging table.
--
-In your database, define a stored procedure with MERGE logic, like the following example, which is pointed to from the previous stored procedure activity. Assume that the target is the **Marketing** table with three columns: **ProfileID**, **State**, and **Category**. Do the upsert based on the **ProfileID** column.
-
-```sql
-CREATE PROCEDURE [dbo].[spMergeData]
-AS
-BEGIN
- MERGE TargetTable AS target
- USING UpsertStagingTable AS source
- ON (target.[ProfileID] = source.[ProfileID])
- WHEN MATCHED THEN
- UPDATE SET State = source.State
- WHEN NOT matched THEN
- INSERT ([ProfileID], [State], [Category])
- VALUES (source.ProfileID, source.State, source.Category);
-
- TRUNCATE TABLE UpsertStagingTable
-END
-```
-
-**Option 2:** You can choose to [invoke a stored procedure within the copy activity](#invoke-a-stored-procedure-from-a-sql-sink). This approach runs each batch (as governed by the `writeBatchSize` property) in the source table instead of using bulk insert as the default approach in the copy activity.
+Copy activity now natively supports loading data into a database temporary table and then updating the data in the sink table if the key exists, or otherwise inserting new data.
### Overwrite the entire table
data-factory Connector Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-sql-server.md
Previously updated : 12/20/2021 Last updated : 01/14/2022 # Copy and transform data to and from SQL Server by using Azure Data Factory or Azure Synapse Analytics
To copy data to SQL Server, set the sink type in the copy activity to **SqlSink*
| writeBatchSize |Number of rows to insert into the SQL table *per batch*.<br/>Allowed values are integers for the number of rows. By default, the service dynamically determines the appropriate batch size based on the row size. |No | | writeBatchTimeout |This property specifies the wait time for the batch insert operation to complete before it times out.<br/>Allowed values are for the timespan. An example is "00:30:00" for 30 minutes. If no value is specified, the timeout defaults to "02:00:00". |No | | maxConcurrentConnections |The upper limit of concurrent connections established to the data store during the activity run. Specify a value only when you want to limit concurrent connections.| No |
+| WriteBehavior | Specify the write behavior for the copy activity to load data into a SQL Server database. <br/> Allowed values are **Insert** and **Upsert**. By default, the service uses insert to load data. | No |
+| upsertSettings | Specify the group of settings for write behavior. <br/> Applies when the WriteBehavior option is `Upsert`. | No |
+| ***Under `upsertSettings`:*** | | |
+| useTempDB | Specify whether to use a global temporary table or a physical table as the interim table for upsert. <br>By default (`true`), the service uses a global temporary table as the interim table. | No |
+| interimSchemaName | Specify the interim schema for creating the interim table if a physical table is used. Note: the user needs permission to create and delete tables. By default, the interim table shares the same schema as the sink table. <br/> Applies when the useTempDB option is `false`. | No |
+| keys | Specify the column names for unique row identification. Either a single key or a series of keys can be used. If not specified, the primary key is used. | No |
**Example 1: Append data**
Learn more details from [Invoke a stored procedure from a SQL sink](#invoke-a-st
] ```
+**Example 3: Upsert data**
+
+```json
+"activities":[
+ {
+ "name": "CopyToSQLServer",
+ "type": "Copy",
+ "inputs": [
+ {
+ "referenceName": "<input dataset name>",
+ "type": "DatasetReference"
+ }
+ ],
+ "outputs": [
+ {
+ "referenceName": "<SQL Server output dataset name>",
+ "type": "DatasetReference"
+ }
+ ],
+ "typeProperties": {
+ "source": {
+ "type": "<source type>"
+ },
+ "sink": {
+ "type": "SqlSink",
+ "tableOption": "autoCreate",
+ "writeBehavior": "upsert",
+ "upsertSettings": {
+ "useTempDB": true,
+ "keys": [
+ "<column name>"
+ ]
+ }
+ }
+ }
+ }
+]
+```
++ ## Parallel copy from SQL database The SQL Server connector in copy activity provides built-in data partitioning to copy data in parallel. You can find data partitioning options on the **Source** tab of the copy activity.
Appending data is the default behavior of this SQL Server sink connector. the se
### Upsert data
-**Option 1:** When you have a large amount of data to copy, you can bulk load all records into a staging table by using the copy activity, then run a stored procedure activity to apply a [MERGE](/sql/t-sql/statements/merge-transact-sql) or INSERT/UPDATE statement in one shot.
-
-Copy activity currently doesn't natively support loading data into a database temporary table. There is an advanced way to set it up with a combination of multiple activities, refer to [Optimize SQL Database Bulk Upsert scenarios](https://github.com/scoriani/azuresqlbulkupsert). Below shows a sample of using a permanent table as staging.
-
-As an example, you can create a pipeline with a **Copy activity** chained with a **Stored Procedure activity**. The former copies data from your source store into a SQL Server staging table, for example, **UpsertStagingTable**, as the table name in the dataset. Then the latter invokes a stored procedure to merge source data from the staging table into the target table and clean up the staging table.
--
-In your database, define a stored procedure with MERGE logic, like the following example, which is pointed to from the previous stored procedure activity. Assume that the target is the **Marketing** table with three columns: **ProfileID**, **State**, and **Category**. Do the upsert based on the **ProfileID** column.
-
-```sql
-CREATE PROCEDURE [dbo].[spMergeData]
-AS
-BEGIN
- MERGE TargetTable AS target
- USING UpsertStagingTable AS source
- ON (target.[ProfileID] = source.[ProfileID])
- WHEN MATCHED THEN
- UPDATE SET State = source.State
- WHEN NOT matched THEN
- INSERT ([ProfileID], [State], [Category])
- VALUES (source.ProfileID, source.State, source.Category);
-
- TRUNCATE TABLE UpsertStagingTable
-END
-```
-
-**Option 2:** You can choose to [invoke a stored procedure within the copy activity](#invoke-a-stored-procedure-from-a-sql-sink). This approach runs each batch (as governed by the `writeBatchSize` property) in the source table instead of using bulk insert as the default approach in the copy activity.
+Copy activity now natively supports loading data into a database temporary table and then updating the data in the sink table if the key exists, or otherwise inserting new data.
### Overwrite the entire table
data-factory How To Expression Language Functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/how-to-expression-language-functions.md
Previously updated : 03/08/2020 Last updated : 01/21/2022 # How to use parameters, expressions and functions in Azure Data Factory
data-factory Store Credentials In Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/store-credentials-in-key-vault.md
Previously updated : 04/13/2020 Last updated : 01/21/2022
databox-online Azure Stack Edge Gpu Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-overview.md
Previously updated : 10/05/2021 Last updated : 01/21/2022
-#Customer intent: As an IT admin, I need to understand what Azure Stack Edge Pro GPU is and how it works so I can use it to process and transform data before sending to Azure.
+#Customer intent: As an IT admin, I need to understand what Azure Stack Edge Pro GPU is and how it works so I can use it to process and transform data before sending it to Azure.
# What is Azure Stack Edge Pro with GPU?
-Azure Stack Edge Pro with GPU is an AI-enabled edge computing device with network data transfer capabilities. This article provides you an overview of the Azure Stack Edge Pro solution, benefits, key capabilities, and the scenarios where you can deploy this device.
+Azure Stack Edge Pro with GPU is an AI-enabled edge computing device with network data transfer capabilities. This article provides an overview of the Azure Stack Edge Pro solution, its benefits, key capabilities, and the scenarios where you can deploy this device. The article also explains the pricing model for your device.
-Azure Stack Edge Pro with GPU is a Hardware-as-a-service solution. Microsoft ships you a cloud-managed device that acts as network storage gateway and has a built-in Graphical Processing Unit (GPU) that enables accelerated AI-inferencing.
+Azure Stack Edge Pro with GPU is a Hardware-as-a-Service solution. Microsoft ships you a cloud-managed device that acts as a network storage gateway. A built-in Graphical Processing Unit (GPU) enables accelerated AI-inferencing.
## Use cases
Azure Stack Edge Pro GPU has the following capabilities:
<!--|ExpressRoute | Added security through ExpressRoute. Use peering configuration where traffic from local devices to the cloud storage endpoints travels over the ExpressRoute. For more information, see [ExpressRoute overview](../expressroute/expressroute-introduction.md).|--> - ## Components
-The Azure Stack Edge Pro GPU solution comprises of Azure Stack Edge resource, Azure Stack Edge Pro GPU physical device, and a local web UI.
+The Azure Stack Edge Pro GPU solution includes the Azure Stack Edge resource, Azure Stack Edge Pro GPU physical device, and a local web UI.
* **Azure Stack Edge Pro GPU physical device** - A 1U rack-mounted server supplied by Microsoft that can be configured to send data to Azure.
- [!INCLUDE [azure-stack-edge-gateway-edge-hardware-center-overview](../../includes/azure-stack-edge-gateway-edge-hardware-center-overview.md)]
+ [!INCLUDE [azure-stack-edge-gateway-edge-hardware-center-overview](../../includes/azure-stack-edge-gateway-edge-hardware-center-overview.md)]
For more information, go to [Create an order for your Azure Stack Edge Pro GPU device](azure-stack-edge-gpu-deploy-prep.md#create-a-new-resource).
Azure Stack Edge Pro GPU physical device, Azure resource, and target storage acc
- **Device availability** - For a list of all the countries/regions where the Azure Stack Edge Pro GPU device is available, go to **Availability** section in the **Azure Stack Edge Pro** tab for [Azure Stack Edge Pro GPU pricing](https://azure.microsoft.com/pricing/details/azure-stack/edge/#azureStackEdgePro). -- **Destination Storage accounts** - The storage accounts that store the data are available in all Azure regions. The regions where the storage accounts store Azure Stack Edge Pro GPU data should be located close to where the device is located for optimum performance. A storage account located far from the device results in long latencies and slower performance.
+- **Destination Storage accounts** - The storage accounts that store the data are available in all Azure regions. For best performance, the regions where the storage accounts store Azure Stack Edge Pro GPU data should be close to the device location. A storage account located far from the device results in long latencies and slower performance.
Azure Stack Edge service is a non-regional service. For more information, see [Regions and Availability Zones in Azure](../availability-zones/az-overview.md). Azure Stack Edge service does not have dependency on a specific Azure region, making it resilient to zone-wide outages and region-wide outages. For a discussion of considerations for choosing a region for the Azure Stack Edge service, device, and data storage, see [Choosing a region for Azure Stack Edge](azure-stack-edge-gpu-regions.md).
+## Billing model
+
+Microsoft Azure charges a monthly, recurring subscription fee for an Azure Stack Edge device. In addition, there is a one-time fee for shipping. There is no on-premises software license for the device, although guest virtual machines (VMs) may require their own licenses under Bring Your Own License (BYOL).
+
+Currency conversion and discounts are handled centrally by the Azure Commerce billing platform, and you get one unified, itemized bill at the end of each month.
+
+Billing starts 14 days after a device is marked as **Shipped** and ends when you initiate the return of your device.
+
+Billing happens against the order resource. If you activate the device against a different resource, the order and billing details move to the new resource.
+
+For more information, see [FAQ: Billing for Azure Stack Edge Pro GPU](./azure-stack-edge-gpu-faq-billing-model.yml).
+ ## Next steps - Review the [Azure Stack Edge Pro GPU system requirements](azure-stack-edge-gpu-system-requirements.md).
ddos-protection Ddos Protection Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/ddos-protection/ddos-protection-overview.md
Distributed denial of service (DDoS) attacks are some of the largest availabilit
Azure DDoS Protection Standard, combined with application design best practices, provides enhanced DDoS mitigation features to defend against DDoS attacks. It is automatically tuned to help protect your specific Azure resources in a virtual network. Protection is simple to enable on any new or existing virtual network, and it requires no application or resource changes.
-Azure DDoS protection does not store customer data.
- ## Features - **Native platform integration:** Natively integrated into Azure. Includes configuration through the Azure portal. DDoS Protection Standard understands your resources and resource configuration.
ddos-protection Ddos Protection Reference Architectures https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/ddos-protection/ddos-protection-reference-architectures.md
Title: Azure DDoS Protection reference architectures description: Learn Azure DDoS protection reference architectures. Previously updated : 09/08/2020 Last updated : 01/19/2022 + # DDoS Protection reference architectures
In this architecture, traffic destined to the HDInsight cluster from the interne
For more information on this reference architecture, see the [Extend Azure HDInsight using an Azure Virtual Network](../hdinsight/hdinsight-plan-virtual-network-deployment.md?toc=%2fazure%2fvirtual-network%2ftoc.json) documentation. +
+> [!NOTE]
+> Neither Azure App Service Environment for PowerApps nor API Management in a virtual network with a public IP is natively supported.
+
+## Hub-and-spoke network topology with Azure Firewall and Azure Bastion
+
+This reference architecture details a hub-and-spoke topology with Azure Firewall inside the hub as a DMZ for scenarios that require central control over security aspects. Azure Firewall is a managed firewall as a service and is placed in its own subnet. Azure Bastion is deployed and placed in its own subnet.
+
+There are two spokes that are connected to the hub using VNet peering and there is no spoke-to-spoke connectivity. If you require spoke-to-spoke connectivity, then you need to create routes to forward traffic from one spoke to the firewall, which can then route it to the other spoke.
++
+Azure DDoS Protection Standard is enabled on the hub virtual network. Therefore, all the Public IPs that are inside the hub are protected by the DDoS Standard plan. In this scenario, the firewall in the hub helps control the ingress traffic from the internet, while the firewall's public IP is being protected. Azure DDoS Protection Standard also protects the public IP of the bastion.
+
+DDoS Protection Standard is designed for services that are deployed in a virtual network. For more information, see [Deploy dedicated Azure service into virtual networks](../virtual-network/virtual-network-for-azure-services.md#services-that-can-be-deployed-into-a-virtual-network).
+
+> [!NOTE]
+> DDoS Protection Standard protects the public IPs of Azure resources. DDoS Protection Basic, which requires no configuration and is enabled by default, only protects the underlying Azure platform infrastructure (for example, Azure DNS). For more information, see [Azure DDoS Protection Standard overview](ddos-protection-overview.md).
+For more information about hub-and-spoke topology, see [Hub-spoke network topology](/azure/architecture/reference-architectures/hybrid-networking/hub-spoke?tabs=cli).
+ ## Next steps - Learn how to [create a DDoS protection plan](manage-ddos-protection.md).
digital-twins How To Query Graph https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-query-graph.md
Get digital twins by **properties** (including ID and metadata):
As shown in the query above, the ID of a digital twin is queried using the metadata field `$dtId`. >[!TIP]
-> If you are using Cloud Shell to run a query with metadata fields that begin with `$`, you should escape the `$` with a backtick to let Cloud Shell know it's not a variable and should be consumed as a literal in the query text.
+> If you are using Cloud Shell to run a query with metadata fields that begin with `$`, you should escape the `$` with a backslash to let Cloud Shell know it's not a variable and should be consumed as a literal in the query text.
You can also get twins based on **whether a certain property is defined**. Here's a query that gets twins that have a defined *Location* property:
digital-twins How To Set Up Instance Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-set-up-instance-cli.md
This article covers the steps to **set up a new Azure Digital Twins instance**,
[!INCLUDE [digital-twins-setup-steps.md](../../includes/digital-twins-setup-steps.md)]
-### Set up Cloud Shell session
## Create the Azure Digital Twins instance
-In this section, you'll **create a new instance of Azure Digital Twins** using the Cloud Shell command. You'll need to provide:
+In this section, you'll **create a new instance of Azure Digital Twins** using the CLI command. You'll need to provide:
* A resource group where the instance will be deployed. If you don't already have an existing resource group in mind, you can create one now with this command: ```azurecli-interactive az group create --location <region> --name <name-for-your-resource-group>
az dt create --dt-name <name-for-your-Azure-Digital-Twins-instance> --resource-g
### Verify success and collect important values
-If the instance was created successfully, the result in Cloud Shell looks something like this, outputting information about the resource you've created:
+If the instance was created successfully, the result in the CLI looks something like this, outputting information about the resource you've created:
Note the Azure Digital Twins instance's **hostName**, **name**, and **resourceGroup** from the output. These values are all important and you may need to use them as you continue working with your Azure Digital Twins instance, to set up authentication and related Azure resources. If other users will be programming against the instance, you should share these values with them.
digital-twins Tutorial Command Line Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/tutorial-command-line-cli.md
To get the files on your machine, use the navigation links above and copy the fi
[!INCLUDE [azure-cli-prepare-your-environment.md](../../includes/azure-cli-prepare-your-environment-h3.md)]
-### Set up Cloud Shell session
### Prepare an Azure Digital Twins instance
Navigate on your machine to the *Room.json* file that you created in the [Prereq
After designing models, you need to upload them to your Azure Digital Twins instance. Doing so configures your Azure Digital Twins service instance with your own custom domain vocabulary. Once you've uploaded the models, you can create twin instances that use them.
-1. To add models using Cloud Shell, you'll need to upload your model files to Cloud Shell's storage so the files will be available when you run the Cloud Shell command that uses them. To do so, select the "Upload/Download files" icon and choose "Upload".
+1. If you're using a local installation of the Azure CLI, you can skip this step. If you're using Cloud Shell, you'll need to upload your model files to Cloud Shell's storage so the files will be available when you run the Cloud Shell command that uses them. To do so, select the "Upload/Download files" icon and choose "Upload".
:::image type="content" source="media/how-to-set-up-instance/cloud-shell/cloud-shell-upload.png" alt-text="Screenshot of Cloud Shell browser window showing selection of the Upload icon."::: Navigate to the *Room.json* file on your machine and select "Open." Then, repeat this step for *Floor.json*.
-1. Next, use the [az dt model create](/cli/azure/dt/model#az_dt_model_create) command as shown below to upload your updated Room model to your Azure Digital Twins instance. The second command uploads another model, Floor, which you'll also use in the next section to create different types of twins.
+1. Next, use the [az dt model create](/cli/azure/dt/model#az_dt_model_create) command as shown below to upload your updated Room model to your Azure Digital Twins instance. The second command uploads another model, Floor, which you'll also use in the next section to create different types of twins. If you're using Cloud Shell, *Room.json* and *Floor.json* are in the main storage directory, so you can just use the file names directly in the command below where a path is required.
```azurecli-interactive
- az dt model create --dt-name <Azure-Digital-Twins-instance-name> --models Room.json
- az dt model create --dt-name <Azure-Digital-Twins-instance-name> --models Floor.json
+ az dt model create --dt-name <Azure-Digital-Twins-instance-name> --models <path-to-Room.json>
+ az dt model create --dt-name <Azure-Digital-Twins-instance-name> --models <path-to-Floor.json>
``` The output from each command will show information about the successfully uploaded model.
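If you want to confirm that both models were uploaded before moving on, one way is to list the models in the instance with the `az dt model list` command (a sketch using the same placeholder instance name as above; the exact output shape may vary):

```azurecli-interactive
az dt model list --dt-name <Azure-Digital-Twins-instance-name>
```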
Now that some models have been uploaded to your Azure Digital Twins instance, yo
To create a digital twin, you use the [az dt twin create](/cli/azure/dt/twin#az_dt_twin_create) command. You must reference the model that the twin is based on, and can optionally define initial values for any properties in the model. You don't have to pass any relationship information at this stage.
-1. Run this code in the Cloud Shell to create several twins, based on the Room model you updated earlier and another model, Floor. Recall that Room has three properties, so you can provide arguments with the initial values for these properties. (Initializing property values is optional in general, but they're needed for this tutorial.)
+1. Run this code in the CLI to create several twins, based on the Room model you updated earlier and another model, Floor. Recall that Room has three properties, so you can provide arguments with the initial values for these properties. (Initializing property values is optional in general, but they're needed for this tutorial.)
```azurecli-interactive az dt twin create --dt-name <Azure-Digital-Twins-instance-name> --dtmi "dtmi:example:Room;2" --twin-id room0 --properties '{"RoomName":"Room0", "Temperature":70, "HumidityLevel":30}'
To create a digital twin, you use the [az dt twin create](/cli/azure/dt/twin#az_
``` >[!NOTE]
- > If you're using Cloud Shell in the PowerShell environment, you may need to escape the quotation mark characters in order for the `--properties` JSON value to be parsed correctly. With this edit, the commands to create the room twins look like this:
- >
- > ```azurecli-interactive
- > az dt twin create --dt-name <Azure-Digital-Twins-instance-name> --dtmi "dtmi:example:Room;2" --twin-id room0 --properties '{\"RoomName\":\"Room0\", \"Temperature\":70, \"HumidityLevel\":30}'
- > az dt twin create --dt-name <Azure-Digital-Twins-instance-name> --dtmi "dtmi:example:Room;2" --twin-id room1 --properties '{\"RoomName\":\"Room1\", \"Temperature\":80, \"HumidityLevel\":60}'
- > ```
- > This is reflected in the screenshot below.
+ > We recommend using the CLI in the Bash environment for this tutorial. If you're using the PowerShell environment, you may need to escape the quotation mark characters for the `--properties` JSON value to be parsed correctly.
The output from each command will show information about the successfully created twin (including properties for the room twins that were initialized with them).
You can also modify the properties of a twin you've created.
``` >[!NOTE]
- > If you're using Cloud Shell in the PowerShell environment, you may need to escape the quotation mark characters in order for the `--json-patch` JSON value to be parsed correctly. With this edit, the command to update the twin looks like this:
- >
- > ```azurecli-interactive
- > az dt twin update --dt-name <Azure-Digital-Twins-instance-name> --twin-id room0 --json-patch '{\"op\":\"add\", \"path\":\"/RoomName\", \"value\": \"PresidentialSuite\"}'
- > ```
- > This is reflected in the screenshot below.
+ > We recommend using the CLI in the Bash environment for this tutorial. If you're using the PowerShell environment, you may need to escape the quotation mark characters for the `--json-patch` JSON value to be parsed correctly.
The output from this command will show the twin's current information, and you should see the new value for the `RoomName` in the result.
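If you'd like to double-check the patched twin at any point, you can retrieve its full current state with the `az dt twin show` command (a sketch using the same placeholders as above):

```azurecli-interactive
az dt twin show --dt-name <Azure-Digital-Twins-instance-name> --twin-id room0
```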
To add a relationship, use the [az dt twin relationship create](/cli/azure/dt/tw
> ```azurecli-interactive > ... --properties '{"ownershipUser":"MyUser", "ownershipDepartment":"MyDepartment"}' > ```
- >
- > If you're using Cloud Shell in the PowerShell environment, you may need to escape the quotation mark characters in order for the `--properties` JSON value to be parsed correctly.
The output from each command will show information about the successfully created relationship.
A main feature of Azure Digital Twins is the ability to [query](concepts-query-l
[!INCLUDE [digital-twins-query-latency-note.md](../../includes/digital-twins-query-latency-note.md)]
-Run the following queries in the Cloud Shell to answer some questions about the sample environment.
+Run the following queries in the CLI to answer some questions about the sample environment.
1. **What are all the entities from my environment represented in Azure Digital Twins?** (query all)
Run the following queries in the Cloud Shell to answer some questions about the
1. **What are all the rooms on floor0?** (query by relationship) ```azurecli-interactive
- az dt twin query --dt-name <Azure-Digital-Twins-instance-name> --query-command "SELECT room FROM DIGITALTWINS floor JOIN room RELATED floor.contains where floor.`$dtId = 'floor0'"
+ az dt twin query --dt-name <Azure-Digital-Twins-instance-name> --query-command "SELECT room FROM DIGITALTWINS floor JOIN room RELATED floor.contains where floor.\$dtId = 'floor0'"
```
- You can query based on relationships in your graph, to get information about how twins are connected or to restrict your query to a certain area. Only room0 is on floor0, so it's the only room in the result.
+ You can query based on relationships in your graph, to get information about how twins are connected or to restrict your query to a certain area. This query also illustrates that a twin's ID (like floor0 in the query above) is queried using the metadata field `$dtId`. Only room0 is on floor0, so it's the only room in the result for this query.
:::image type="content" source="media/tutorial-command-line/cli/output-query-relationship.png" alt-text="Screenshot of Cloud Shell showing result of relationship query, which includes room0." lightbox="media/tutorial-command-line/cli/output-query-relationship.png"::: > [!NOTE]
- > A twin's ID (like floor0 in the query above) is queried using the metadata field `$dtId`.
- >
- >When using Cloud Shell to run a query with metadata fields like this one that begin with `$`, you should escape the `$` with a backtick to let Cloud Shell know it's not a variable and should be consumed as a literal in the query text. This is reflected in the screenshot above.
+ >When using Cloud Shell to run a query with metadata fields like this one that begin with `$`, you should escape the `$` with a backslash to let Cloud Shell know it's not a variable and should be consumed as a literal in the query text. This is reflected in the screenshot above.
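To see why the escape is needed, here's a plain Bash illustration (a hypothetical query string shown outside of any Azure command; it assumes no shell variable named `dtId` happens to be set):

```bash
# In double quotes, Bash expands $dtId as a shell variable, so the metadata
# field name silently disappears from the query text:
unescaped="SELECT room FROM DIGITALTWINS floor WHERE floor.$dtId = 'floor0'"
# Escaping with a backslash keeps the literal $dtId in the query:
escaped="SELECT room FROM DIGITALTWINS floor WHERE floor.\$dtId = 'floor0'"
echo "$unescaped"   # field name is gone:  ... WHERE floor. = 'floor0'
echo "$escaped"     # field name survives: ... WHERE floor.$dtId = 'floor0'
```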
1. **What are all the twins in my environment with a temperature above 75?** (query by property)
Run the following queries in the Cloud Shell to answer some questions about the
1. **What are all the rooms on *floor0* with a temperature above 75?** (compound query) ```azurecli-interactive
- az dt twin query --dt-name <Azure-Digital-Twins-instance-name> --query-command "SELECT room FROM DIGITALTWINS floor JOIN room RELATED floor.contains where floor.`$dtId = 'floor0' AND IS_OF_MODEL(room, 'dtmi:example:Room;2') AND room.Temperature > 75"
+ az dt twin query --dt-name <Azure-Digital-Twins-instance-name> --query-command "SELECT room FROM DIGITALTWINS floor JOIN room RELATED floor.contains where floor.\$dtId = 'floor0' AND IS_OF_MODEL(room, 'dtmi:example:Room;2') AND room.Temperature > 75"
``` You can also combine the earlier queries like you would in SQL, using combination operators such as `AND`, `OR`, and `NOT`. This query uses `AND` to make the previous query about twin temperatures more specific. The result now only includes rooms with temperatures above 75 that are on floor0, which in this case is none of them. The result set is empty.
digital-twins Tutorial End To End https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/tutorial-end-to-end.md
In this tutorial, you will...
[!INCLUDE [azure-cli-prepare-your-environment.md](../../includes/azure-cli-prepare-your-environment-h3.md)]
-### Set up Cloud Shell session
[!INCLUDE [Azure Digital Twins tutorial: configure the sample project](../../includes/digital-twins-tutorial-sample-configure.md)]
Here are the actions you'll complete to set up this device connection:
Azure Digital Twins is designed to work alongside [IoT Hub](../iot-hub/about-iot-hub.md), an Azure service for managing devices and their data. In this step, you'll set up an IoT hub that will manage the sample device in this tutorial.
-In Azure Cloud Shell, use this command to create a new IoT hub:
+In the Azure CLI, use this command to create a new IoT hub:
```azurecli-interactive az iot hub create --name <name-for-your-IoT-hub> --resource-group <your-resource-group> --sku S1
Back on the *Create Event Subscription* page, select **Create**.
This section creates a device representation in IoT Hub with the ID thermostat67. The simulated device will connect into this representation, which is how telemetry events will go from the device into IoT Hub. The IoT hub is where the subscribed Azure function from the previous step is listening, ready to pick up the events and continue processing.
-In Azure Cloud Shell, create a device in IoT Hub with the following command:
+In the Azure CLI, create a device in IoT Hub with the following command:
```azurecli-interactive az iot hub device-identity create --device-id thermostat67 --hub-name <your-IoT-hub-name> --resource-group <your-resource-group>
After completing this tutorial, you can choose which resources you want to remov
[!INCLUDE [digital-twins-cleanup-basic.md](../../includes/digital-twins-cleanup-basic.md)]
-* **If you want to continue using the Azure Digital Twins instance you set up in this article, but clear out some or all of its models, twins, and relationships**, you can use the [az dt](/cli/azure/dt) CLI commands in an [Azure Cloud Shell](https://shell.azure.com) window to delete the elements you want to remove.
+* **If you want to continue using the Azure Digital Twins instance you set up in this article, but clear out some or all of its models, twins, and relationships**, you can use the [az dt](/cli/azure/dt) CLI commands to delete the elements you want to remove.
This option won't remove any of the other Azure resources created in this tutorial (IoT Hub, Azure Functions app, and so on). You can delete these individually using the [Azure CLI commands](/cli/azure/reference-index) appropriate for each resource type.
event-grid Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/whats-new.md
Last updated 01/13/2022
Azure Event Grid receives improvements on an ongoing basis. To stay up to date with the most recent developments, this article provides you with information about the features that are added or updated in a release.
-## .NET 6.2.0-preview (2021-06)
-This release corresponds to api-version 2021-06-01-preview which includes the following new features:
+## .NET 6.2.0-preview (REST API version 2021-06)
+This release corresponds to REST API version 2021-06-01-preview, which includes the following new features:
- [Azure Active Directory authentication for topics and domains, and partner namespaces](authenticate-with-active-directory.md)-- Private link support for partner namespaces. Azure portal doesn't support it yet. -- IP Filtering for partner namespaces. Azure portal doesn't support it yet. -- System Identity for partner topics. Azure portal doesn't support it yet.
+- [Private link support for partner namespaces](/rest/api/eventgrid/controlplane-version2021-06-01-preview/partner-namespaces/create-or-update#privateendpoint). Azure portal doesn't support it yet.
+- [IP Filtering for partner namespaces](/rest/api/eventgrid/controlplane-version2021-06-01-preview/partner-namespaces/create-or-update#inboundiprule). Azure portal doesn't support it yet.
+- [System Identity for partner topics](/rest/api/eventgrid/controlplane-version2021-06-01-preview/partner-topics/update#request-body). Azure portal doesn't support it yet.
- [User Identity for system topics, custom topics and domains](enable-identity-custom-topics-domains.md) ## 6.1.0-preview (2020-10)
event-hubs Monitor Event Hubs Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/monitor-event-hubs-reference.md
Title: Monitoring Azure Event Hubs data reference
description: Important reference material needed when you monitor Azure Event Hubs. Previously updated : 06/11/2021 Last updated : 01/20/2022
Counts the number of data and management operations requests.
| Metric Name | Exportable via diagnostic settings | Unit | Aggregation type | Description | Dimensions |
| - | - | -- | - | - | - |
-| Incoming Requests| Yes | Count | Total | The number of requests made to the Event Hubs service over a specified period. | Entity name|
+| Incoming Requests| Yes | Count | Total | The number of requests made to the Event Hubs service over a specified period. This metric includes all the data and management plane operations. | Entity name|
| Successful Requests| No | Count | Total | The number of successful requests made to the Event Hubs service over a specified period. | Entity name<br/><br/>Operation Result | | Throttled Requests| No | Count | Total | The number of requests that were throttled because the usage was exceeded. | Entity name<br/><br/>Operation Result |
The following two types of errors are classified as **user errors**:
> [!NOTE]
-> These values are point-in-time values. Incoming messages that were consumed immediately after that point-in-time may not be reflected in these metrics.
+> - These values are point-in-time values. Incoming messages that were consumed immediately after that point-in-time may not be reflected in these metrics.
+> - The **Incoming requests** metric includes all the data and management plane operations. The **Incoming messages** metric gives you the total number of events that are sent to the event hub. For example, if you send a batch of 100 events to an event hub, it'll count as 1 incoming request and 100 incoming messages.
### Capture metrics

| Metric Name | Exportable via diagnostic settings | Unit | Aggregation type | Description | Dimensions |
| - | -- | - | - | - | - |
| Captured Messages| No | Count| Total | The number of captured messages. | Entity name |
-| Captured Bytes | No | Bytes | Total | Captured bytes for an event hubs | Entity name |
-| Capture Backlog | No | Count| Total | Capture backlog for an event hubs | Entity name |
+| Captured Bytes | No | Bytes | Total | Captured bytes for an event hub | Entity name |
+| Capture Backlog | No | Count| Total | Capture backlog for an event hub | Entity name |
### Connection metrics
Azure Event Hubs supports the following dimensions for metrics in Azure Monitor.
[!INCLUDE [event-hubs-diagnostic-log-schema](./includes/event-hubs-diagnostic-log-schema.md)]
-## Runtime Audit Logs
-Runtime Audit Logs captures aggregated diagnostic logs for all data plane access operations (such as send or receive events) in Dedicated SKU.
+## Runtime audit logs
+Runtime audit logs capture aggregated diagnostic information for all data plane access operations (such as send or receive events) in the Event Hubs dedicated cluster.
> [!NOTE]
-> Runtime audit logs are currently available in *Dedicated* tier only.
+> Runtime audit logs are currently available only in the **dedicated** tier.
-Runtime Audit Logs include the elements listed in the following table:
+Runtime audit logs include the elements listed in the following table:
Name | Description
- | -
Name | Description
`Timestamp` | Aggregation time.
`Status` | Status of the activity (success or failure).
`Protocol` | Type of the protocol associated with the operation.
-`AuthType` | Type of authentication (AAD or SAS Policy).
-`AuthKey` | AAD application Id or SAS policy name which is used to authenticate to a resource.
-`NetworkType` | Type of the network: PublicNetworkAccess, PrivateNetworkAccess.
-`ClientIP` | IP address of client application.
+`AuthType` | Type of authentication (Azure Active Directory or SAS Policy).
+`AuthKey` | Azure Active Directory application ID or SAS policy name that's used to authenticate to a resource.
+`NetworkType` | Type of the network access: `PublicNetworkAccess`, `PrivateNetworkAccess`.
+`ClientIP` | IP address of the client application.
`Count` | Total number of operations performed during the aggregated period of 1 minute.
`Properties` | Metadata that's specific to the data plane operation.
`Category` | Log category.
-The following code is an example of a runtime audit log JSON string:
-
-Example:
+Here's an example of a runtime audit log entry:
```json {
Example:
"Time": "1/1/2021 8:40:06 PM +00:00",
"Status": "Success | Failure",
"Protocol": "AMQP | KAFKA | HTTP | Web Sockets",
- "AuthType": "SAS | AAD",
+ "AuthType": "SAS | Azure Active Directory",
"AuthId": "<app name | SAS policy name>",
"NetworkType": "PublicNetworkAccess | PrivateNetworkAccess",
"ClientIp": "x.x.x.x",
Example:
```
-## Application Metrics Logs
-Application Metrics Logs captures the aggregated information on certain metrics related data plane operations. This includes following runtime metrics.
+## Application metrics logs
+Application metrics logs capture the aggregated information on certain metrics related to data plane operations. The captured information includes the following runtime metrics.
Name | Description
- | -
-ConsumerLag | Indicate the lag between the consumers and producers.
-NamespaceActiveConnections | Details of the active connections established from a client to Event Hub.
-GetRuntimeInfo | Obtain run time information from Event Hubs.
-GetPartitionRuntimeInfo | Obtain the approximate runtime information for a logical partition of an Event Hub.
+`ConsumerLag` | Indicates the lag between consumers and producers.
+`NamespaceActiveConnections` | Details of active connections established from a client to the event hub.
+`GetRuntimeInfo` | Obtains runtime information from Event Hubs.
+`GetPartitionRuntimeInfo` | Obtains the approximate runtime information for a logical partition of an event hub.
## Azure Monitor Logs tables
event-hubs Monitor Event Hubs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/monitor-event-hubs.md
Title: Monitoring Azure Event Hubs
description: Learn how to use Azure Monitor to view, analyze, and create alerts on metrics from Azure Event Hubs. Previously updated : 06/13/2021 Last updated : 01/20/2022 # Monitor Azure Event Hubs
See [Create diagnostic setting to collect platform logs and metrics in Azure](..
If you use **Azure Storage** to store the diagnostic logging information, the information is stored in containers named **insights-logs-operationlogs** and **insights-metrics-pt1m**. Sample URL for an operation log: `https://<Azure Storage account>.blob.core.windows.net/insights-logs-operationallogs/resourceId=/SUBSCRIPTIONS/<Azure subscription ID>/RESOURCEGROUPS/<Resource group name>/PROVIDERS/MICROSOFT.SERVICEBUS/NAMESPACES/<Namespace name>/y=<YEAR>/m=<MONTH-NUMBER>/d=<DAY-NUMBER>/h=<HOUR>/m=<MINUTE>/PT1H.json`. The URL for a metric log is similar.
-If you use **Azure Event Hubs** to store the diagnostic logging information, the information is stored in event hubs named **insights-logs-operationlogs** and **insights-metrics-pt1m**. You can also select your own event hub.
+If you use **Azure Event Hubs** to store the diagnostic logging information, the information is stored in Event Hubs instances named **insights-logs-operationlogs** and **insights-metrics-pt1m**. You can also select your own event hub.
If you use **Log Analytics** to store the diagnostic logging information, the information is stored in tables named **AzureDiagnostics** and **AzureMetrics**.
For a detailed reference of the logs and metrics, see [Azure Event Hubs monitori
Following are sample queries that you can use to help you monitor your Azure Event Hubs resources:
-+ Get errors from the past 7 days
++ Get errors from the past seven days ```Kusto AzureDiagnostics
Following are sample queries that you can use to help you monitor your Azure Eve
| where Category == "OperationalLogs"
| summarize count() by EventName
-+ Get runtime audit logs during last hour.
++ Get runtime audit logs generated in the last hour. ```Kusto AzureDiagnostics
firewall Firewall Preview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/firewall-preview.md
+
+ Title: Azure Firewall preview features
+description: Learn about Azure Firewall preview features that are currently publicly available.
++++ Last updated : 01/21/2022+++
+# Azure Firewall preview features
+
+The following Azure Firewall preview features are available publicly for you to deploy and test. Some of the preview features are available on the Azure portal, and some are only visible using a feature flag.
+
+## Feature flags
+
+As new features are released to preview, some of them will be behind a feature flag. To enable the functionality in your environment, you must enable the feature flag on your subscription. These features are applied at the subscription level for all firewalls (VNet firewalls and SecureHub firewalls).
+
+This article will be updated to reflect the features that are currently in preview with instructions to enable them. When the features move to General Availability (GA), they'll be available to all customers without the need to enable a feature flag.
+
+You run Azure PowerShell commands to enable the features. For a feature to take effect immediately, run an operation on the firewall. This can be a rule change (least intrusive), a setting change, or a stop/start operation. Otherwise, the firewalls are updated with the feature within several days.
+
+## Preview features
+
+The following features are available in preview.
+
+### Network rule name logging (preview)
+
+Currently, a network rule hit event shows the following attributes in the logs:
+
+ - Source and destination IP/port
+ - Action (allow or deny)
+
+ With this new feature, the event logs for network rules also show the following attributes:
+ - Policy name
+ - Rule collection group
+ - Rule collection
+ - Rule name
+
+Run the following Azure PowerShell commands to configure Azure Firewall network rule name logging:
+
+```azurepowershell
+Connect-AzAccount
+Select-AzSubscription -Subscription "subscription_id or subscription_name"
+Register-AzProviderFeature -FeatureName AFWEnableNetworkRuleNameLogging -ProviderNamespace Microsoft.Network
+```
+
+Run the following Azure PowerShell command to turn off this feature:
+
+```azurepowershell
+Unregister-AzProviderFeature -FeatureName AFWEnableNetworkRuleNameLogging -ProviderNamespace Microsoft.Network
+```
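Feature registration can take some time to complete. As a hedged sketch (assuming the `Az.Resources` module is installed), you can check the registration state with:

```azurepowershell
Get-AzProviderFeature -FeatureName AFWEnableNetworkRuleNameLogging -ProviderNamespace Microsoft.Network
```

The `RegistrationState` property in the output should eventually read `Registered` once the flag is active.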
+
+### Azure Firewall Premium performance boost (preview)
+
+As more applications move to the cloud, the performance of the network elements can become a bottleneck. As the central piece of any network design, the firewall needs to support all the workloads. The Azure Firewall Premium performance boost feature allows more scalability for these deployments.
+
+This feature significantly increases the throughput of Azure Firewall Premium. For more details, see [Azure Firewall performance](firewall-performance.md).
+
+Currently, the performance boost feature isn't recommended for SecureHub Firewalls. Refer back to this article for the latest updates as we work to change this recommendation. Also, this setting has no effect on Standard Firewalls.
+
+Run the following Azure PowerShell commands to configure the Azure Firewall Premium performance boost:
+
+```azurepowershell
+Connect-AzAccount
+Select-AzSubscription -Subscription "subscription_id or subscription_name"
+Register-AzProviderFeature -FeatureName AFWEnableAccelnet -ProviderNamespace Microsoft.Network
+```
+
+Run the following Azure PowerShell command to turn off this feature:
+
+```azurepowershell
+Unregister-AzProviderFeature -FeatureName AFWEnableAccelnet -ProviderNamespace Microsoft.Network
+```
+
+## Next steps
+
+To learn more about Azure Firewall, see [What is Azure Firewall?](overview.md).
healthcare-apis Configure Private Link https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/azure-api-for-fhir/configure-private-link.md
Previously updated : 05/27/2021 Last updated : 01/20/2022
After the deployment is complete, you can go back to "Private endpoint connectio
![Options](media/private-link/private-link-options.png)
-## Test private endpoint
+## VNet Peering
-To ensure that your FHIR server is not receiving public traffic after disabling public network access, select the /metadata endpoint for your server from your computer. You should receive a 403 Forbidden.
+With Private Link configured, you can access the FHIR server from the same VNet or from a different VNet that's peered to the FHIR server's VNet. Follow the steps below to configure VNet peering and the Private Link DNS zone.
+### Configure VNet Peering
-> [!NOTE]
-> It can take up to 5 minutes after updating the public network access flag before public traffic is blocked.
+You can configure VNet peering from the portal, or by using PowerShell, CLI scripts, or an Azure Resource Manager (ARM) template. The second VNet can be in the same or a different subscription, and in the same or a different region. Make sure that you grant the **Network contributor** role. For more information on VNet peering, see [Create a virtual network peering](../../virtual-network/create-peering-different-subscriptions.md).
-To ensure your private endpoint can send traffic to your server:
+### Add VNet link to the private link zone
-1. Create a virtual machine (VM) that is connected to the virtual network and subnet your private endpoint is configured on. To ensure your traffic from the VM is only using the private network, disable the outbound internet traffic using the network security group (NSG) rule.
-2. RDP into the VM.
-3. Access your FHIR server's /metadata endpoint from the VM. You should receive the capability statement as a response.
+In the Azure portal, select the resource group of the FHIR server. Select and open the private DNS zone, **privatelink.azurehealthcareapis.com**. Select **Virtual network links** under the **Settings** section. Select **Add** to add your second VNet to the private DNS zone. Enter a link name of your choice, and then select the subscription and the VNet you created. Optionally, you can enter the resource ID for the second VNet. Select **Enable auto registration**, which automatically adds a DNS record for your VM connected to the second VNet. When you delete a VNet link, the DNS record for the VM is also deleted.
+
+For more information on how the private link DNS zone resolves the private endpoint IP address to the fully qualified domain name (FQDN) of a resource such as the FHIR server, see [Azure Private Endpoint DNS configuration](../../private-link/private-endpoint-dns.md).
+
+ :::image type="content" source="media/private-link/private-link-add-vnet-link.png" alt-text="Add VNet link." lightbox="media/private-link/private-link-add-vnet-link.png":::
+
+You can add more VNet links if needed, and view all VNet links you've added from the portal.
+
+ :::image type="content" source="media/private-link/private-link-vnet-links.png" alt-text="Private Link VNet links." lightbox="media/private-link/private-link-vnet-links.png":::
+
+From the Overview blade you can view the private IP addresses of the FHIR server and the VMs connected to peered virtual networks.
+
+ :::image type="content" source="media/private-link/private-link-dns-zone.png" alt-text="Private Link FHIR and VM Private IP Addresses." lightbox="media/private-link/private-link-dns-zone.png":::
## Manage private endpoint
Private endpoints and the associated network interface controller (NIC) are visi
Private endpoints can only be deleted from the Azure portal from the **Overview** blade or by selecting the **Remove** option under the **Networking Private endpoint connections** tab. Selecting **Remove** will delete the private endpoint and the associated NIC. If you delete all private endpoints to the FHIR resource and the public network, access is disabled and no request will make it to your FHIR server. ![Delete Private Endpoint](media/private-link/private-link-delete.png)+
+## Test and troubleshoot private link and VNet peering
+
+To ensure that your FHIR server isn't receiving public traffic after you disable public network access, try to access the `/metadata` endpoint for your server from your computer. You should receive a 403 Forbidden response.
+
+> [!NOTE]
+> It can take up to 5 minutes after updating the public network access flag before public traffic is blocked.
+
+### Create and use a VM
+
+To ensure your private endpoint can send traffic to your server:
+
+1. Create a virtual machine (VM) that is connected to the virtual network and subnet your private endpoint is configured on. To ensure your traffic from the VM is only using the private network, disable the outbound internet traffic using the network security group (NSG) rule.
+2. RDP into the VM.
+3. Access your FHIR server's `/metadata` endpoint from the VM. You should receive the capability statement as a response.
+
+### Use nslookup
+
+You can use the **nslookup** tool to verify connectivity. If the private link is configured properly, you should see that the FHIR server URL resolves to a valid private IP address, as shown below. Note that IP address **168.63.129.16** is a virtual public IP address used in Azure. For more information, see [What is IP address 168.63.129.16](../../virtual-network/what-is-ip-address-168-63-129-16.md).
+
+```
+C:\Users\testuser>nslookup fhirserverxxx.azurehealthcareapis.com
+Server: UnKnown
+Address: 168.63.129.16
+
+Non-authoritative answer:
+Name: fhirserverxxx.privatelink.azurehealthcareapis.com
+Address: 172.21.0.4
+Aliases: fhirserverxxx.azurehealthcareapis.com
+```
+
+If the private link is not configured properly, you may see the public IP address instead and a few aliases including the Traffic Manager endpoint. This indicates that the private link DNS zone cannot resolve to the valid private IP address of the FHIR server. When VNet peering is configured, one possible reason is that the second peered VNet hasn't been added to the private link DNS zone. As a result, you will see the HTTP error 403, "Access to xxx was denied", when trying to access the /metadata endpoint of the FHIR server.
+
+```
+C:\Users\testuser>nslookup fhirserverxxx.azurehealthcareapis.com
+Server: UnKnown
+Address: 168.63.129.16
+
+Non-authoritative answer:
+Name: xxx.cloudapp.azure.com
+Address: 52.xxx.xxx.xxx
+Aliases: fhirserverxxx.azurehealthcareapis.com
+ fhirserverxxx.privatelink.azurehealthcareapis.com
+ xxx.trafficmanager.net
+```
+
+For more information, see [Troubleshoot Azure Private Link connectivity problems](../../private-link/troubleshoot-private-link-connectivity.md).
+
+## Next steps
+
+In this article, you've learned how to configure the private link and VNet peering. You also learned how to troubleshoot the private link and VNet configurations.
+
+Based on your private link setup and for more information about registering your applications, see
+
+* [Register a resource application](register-resource-azure-ad-client-app.md)
+* [Register a confidential client application](register-confidential-azure-ad-client-app.md)
+* [Register a public client application](register-public-azure-ad-client-app.md)
+* [Register a service application](register-service-azure-ad-client-app.md)
+
healthcare-apis Device Data Through Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/iot/device-data-through-iot-hub.md
Previously updated : 12/14/2021 Last updated : 1/20/2022
Below is a diagram of the IoT device message flow from IoT Hub into IoT connector.
## Create a managed identity for IoT Hub
-For this tutorial, we'll be using an IoT Hub with a [user-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md).
+For this tutorial, we'll be using an IoT Hub with a [system-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md) to provide access from the IoT Hub to the IoT connector device message event hub.
-The user-assigned managed identity will be used to provide access to your IoT connector device message event hub using [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md).
+For more information about how to create a system-assigned managed identity with your IoT Hub, see [IoT Hub support for managed identities](../../iot-hub/iot-hub-managed-identity.md#system-assigned-managed-identity).
-Follow these directions to create a user-assigned managed identity with your IoT Hub: [IoT Hub support for managed identities](../../iot-hub/iot-hub-managed-identity.md#user-assigned-managed-identity).
+For more information on Azure role-based access control, see [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md).
## Connect IoT Hub with IoT connector
healthcare-apis Iot Metrics Diagnostics Export https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/iot/iot-metrics-diagnostics-export.md
Title: Export IoT connector Metrics through Diagnostic settings - Azure Healthcare APIs
-description: This article explains how to export IoT connector metrics through Diagnostic settings
+ Title: Configure IoT connector Diagnostic settings for metrics export - Azure Healthcare APIs
+description: This article explains how to configure IoT connector Diagnostic settings for metrics exporting.
Previously updated : 11/10/2021 Last updated : 1/20/2021
-# Export IoT connector Metrics through Diagnostic settings
+# Configure diagnostic setting for IoT connector metrics exporting
> [!IMPORTANT] > Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-In this article, you'll learn how to export IoT connector Metrics logs. The feature that enables Metrics logging is the [**Diagnostic settings**](../../azure-monitor/essentials/diagnostic-settings.md) in the Azure portal.
+In this article, you'll learn how to configure the diagnostic setting for IoT connector to export metrics to different destinations for audit, analysis, or backup.
-## Enable Metrics logging for IoT connector
-1. To enable Metrics logging for the IoT connector, select your Fast Healthcare Interoperability Resources (FHIR&#174;) service in the Azure portal.
+## Create diagnostic setting for IoT connector
+1. To enable metrics export for IoT connector, select **IoT connectors** in your Workspace.
+
+ :::image type="content" source="media/iot-metrics-export/iot-connector-logging-workspace.png" alt-text="Screenshot of select IoT connector within Workspace." lightbox="media/iot-metrics-export/iot-connector-logging-workspace.png":::
+
+2. Select the IoT connector that you want to configure metrics export for.
+
+ :::image type="content" source="media/iot-metrics-export/iot-connector-logging-select-connector.png" alt-text="Screenshot of select IoT connector for exporting metrics" lightbox="media/iot-metrics-export/iot-connector-logging-select-connector.png":::
+
+3. Select the **Diagnostic settings** button and then select the **+ Add diagnostic setting** button.
-2. Navigate to **Diagnostic settings**
+ :::image type="content" source="media/iot-metrics-export/iot-connector-logging-select-diagnostic-settings.png" alt-text="Screenshot of select the Diagnostic settings and select the + Add diagnostic setting buttons." lightbox="media/iot-metrics-export/iot-connector-logging-select-diagnostic-settings.png":::
-3. Select **+ Add diagnostic setting**
+4. After the **+ Add diagnostic setting** page opens, enter a name in the **Diagnostic setting name** dialog box.
- :::image type="content" source="media/iot-metrics-export/diagnostic-settings-main.png" alt-text="IoT connector1" lightbox="media/iot-metrics-export/diagnostic-settings-main.png":::
+ :::image type="content" source="media/iot-metrics-export/iot-connector-logging-select-diagnostic-configuration.png" alt-text="Screenshot diagnostic setting and required fields." lightbox="media/iot-metrics-export/iot-connector-logging-select-diagnostic-configuration.png":::
-4. Enter a name in the **Diagnostic setting name** dialog box.
+5. Under **Destination details**, select the destination to use for exporting your IoT connector metrics. In the above example, we've selected an Azure storage account.
-5. Select the method you want to use to access your diagnostic logs:
+ Metrics can be exported to the following destinations:
- 1. **Archive to a storage account** for auditing or manual inspection. The storage account you want to use needs to be already created.
- 2. **Stream to Event Hub** for ingestion by a third-party service or custom analytic solution. You'll need to create an Event Hub namespace and Event Hub policy before you can configure this step.
- 3. **Stream to the Log Analytics** workspace in Azure Monitor. You'll need to create your Logs Analytics Workspace before you can select this option.
+ |Destination|Description|
+ |--|--|
+ |Log Analytics workspace|Metrics are converted to log form. This option may not be available for all resource types. Sending them to the Azure Monitor Logs store (which is searchable via Log Analytics) helps you to integrate them into queries, alerts, and visualizations with existing log data.|
+ |Azure storage account|Archiving logs and metrics to an Azure storage account is useful for audit, static analysis, or backup. Compared to Azure Monitor Logs and a Log Analytics workspace, Azure storage is less expensive and logs can be kept there indefinitely.|
+ |Event Hubs|Sending logs and metrics to Event Hubs allows you to stream data to external systems such as third-party SIEMs and other Log Analytics solutions.|
+ |Azure Monitor partner integrations|Specialized integrations between Azure Monitor and other non-Microsoft monitoring platforms. Useful when you're already using one of the partners.|
+
+ > [!Important]
+ > Each **Destination details** selection requires that certain resources (for example, an existing Azure storage account) be created and available before the selection can be successfully configured. Choose each selection to get a list of the required resources.
-6. Select **Errors, Traffic, and Latency** for IoT connector. Select any extra metric categories you want to capture for the FHIR service.
+6. Select **AllMetrics**.
-7. Select **Save**
+ > [!Note]
+ > To view a complete list of IoT connector metrics associated with **AllMetrics**, see [Supported metrics with Azure Monitor](/azure/azure-monitor/essentials/metrics-supported#microsofthealthcareapisworkspacesiotconnectors).
- :::image type="content" source="media/iot-metrics-export/diagnostic-setting-add.png" alt-text="IoT connector2" lightbox="media/iot-metrics-export/diagnostic-setting-add.png":::
+7. Select **Save**.
-> [!Note]
-> It might take up to 15 minutes for the first Metrics logs to display in the repository of your choice.
+ > [!Note]
+ > It might take up to 15 minutes for the first IoT connector metrics to display in the destination of your choice.
-For more information about how to work with diagnostic logs, see the [Azure Resource Log documentation](../../azure-monitor/essentials/platform-logs-overview.md)
+For more information about how to work with diagnostics logs, see the [Azure Resource Log documentation](../../azure-monitor/essentials/platform-logs-overview.md).
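
The portal steps above ultimately create a diagnostic-setting resource. As a rough illustration only — the resource ID is a placeholder, and the body shape mirrors (but is not quoted from) the Azure Monitor diagnostic settings REST API — an **AllMetrics** export to a storage account can be sketched as:

```python
import json

def build_diagnostic_setting(storage_account_id: str) -> dict:
    """Sketch of an Azure Monitor diagnostic-setting body that exports
    the AllMetrics category to a storage account (assumed shape)."""
    return {
        "properties": {
            "storageAccountId": storage_account_id,
            "metrics": [
                {"category": "AllMetrics", "enabled": True},
            ],
        }
    }

# Placeholder resource ID -- replace with your storage account's ID.
body = build_diagnostic_setting(
    "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/"
    "Microsoft.Storage/storageAccounts/<account>"
)
print(json.dumps(body, indent=2))
```

Verify the exact schema against the Azure Monitor documentation before using it outside the portal.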
## Conclusion
-Having access to Metrics logs is essential for monitoring and troubleshooting. IoT connector allows you to do these actions through Metrics logs.
+Having access to metrics is essential for monitoring and troubleshooting. IoT connector allows you to do these actions through the export of metrics.
## Next steps
-Check out frequently asked questions about IoT connector.
+To view the frequently asked questions (FAQs) about IoT connector, see
>[!div class="nextstepaction"] >[IoT connector FAQs](iot-connector-faqs.md)
industrial-iot Overview What Is Industrial Iot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/industrial-iot/overview-what-is-industrial-iot.md
Last updated 3/22/2021
![Industrial Iot](media/overview-what-is-Industrial-IoT/icon-255-px.png)
-Microsoft Azure Industrial Internet of Things (IIoT) is a suite of Azure modules and services that integrate the power of the cloud into industrial and manufacturing shop floors. Using industry-standard open interfaces such as the [open platform communications unified architecture (OPC UA)](https://opcfoundation.org/about/opc-technologies/opc-ua/), Azure IIoT provides you with the ability to integrate data from assets and sensors - including those that are already operating on your factory floor - into the Azure cloud. Having your data in the cloud enables it to be used more rapidly and flexibly as feedback for developing transformative business and industrial processes.
+Microsoft Azure Industrial Internet of Things (IIoT) is a suite of Azure cloud microservices and Azure IoT Edge modules. Azure Industrial IoT integrates the power of the cloud into industrial and manufacturing shop floors. Using industry-standard open interfaces such as the [open platform communications unified architecture (OPC UA)](https://opcfoundation.org/about/opc-technologies/opc-ua/), Azure IIoT provides you with the ability to integrate data from assets and sensors - including those systems that are already operating on your factory floor - into the Azure cloud. Having your data in the cloud enables it to be used more rapidly and flexibly as feedback for developing transformative business and industrial processes.
## Discover, register, and manage your Industrial Assets with Azure
-The Azure Industrial IoT Platform allows plant operators to discover OPC UA-enabled servers in a factory network and register them in Azure IoT Hub. Operations personnel can subscribe and react to events on the factory floor from anywhere in the world, receive alerts and alarms, and react to them in real time.
+The Azure Industrial IoT Platform allows plant operators to discover OPC UA-enabled servers in a factory network and register them in Azure IoT Hub. Operations personnel can subscribe and react to events on the factory floor from anywhere in the world. Azure IIoT enables them to receive alerts and alarms and to react to them in real time.
-IIoT provides a set of Microservices that implement OPC UA functionality. The Microservices REST APIs mirror the OPC UA services edge-side. It enables your cloud applications to browse server address spaces or read/write variables and execute methods using HTTPS and simple OPC UA JSON payloads. The edge services are implemented as Azure IoT Edge modules and run on-premises. The cloud microservices are implemented as ASP.NET microservices with a REST interface and run on managed Azure Kubernetes Services or standalone on Azure App Service. For both edge and cloud services, IIoT provides pre-built Docker containers in the Microsoft Container Registry (MCR). The edge and cloud services are leveraging each other and must be used together. IIoT also provides easy-to-use deployment scripts that allow one to deploy the entire platform with a single command.
+Azure IIoT provides a set of microservices that connect to OPC UA systems on the shop floor. The microservices REST APIs mirror the OPC UA systems functionality. The REST APIs enable your cloud applications to browse OPC UA server address spaces, read/write values of OPC UA nodes, and execute OPC UA methods. Components at the factory floor are implemented as Azure IoT Edge modules. The cloud microservices are ASP.NET microservices with a REST interface and run on managed Azure Kubernetes Services or standalone on Azure App Service. Azure IoT Edge modules and Azure IIoT cloud services are available as pre-built Docker containers in the Microsoft Container Registry (MCR).
-In addition, the REST APIs can be used with any programming language through its exposed Open API specification (Swagger). This means when integrating OPC UA into cloud management solutions, developers are free to choose technology that matches their skills, interests, and architecture choices. For example, a full stack web developer who develops an application for an alarm and event dashboard can write logic to respond to events in JavaScript or TypeScript without ramping up on a OPC UA SDK, C, C++, Java or C#.
+The edge modules and cloud services collaborate closely and must be used together. Azure IIoT provides easy-to-use deployment scripts that allow you to deploy the entire platform with a single command.
+
+The REST APIs can be used with any programming language through their exposed OpenAPI specification (Swagger). When integrating OPC UA into cloud management solutions, developers are free to choose technology that matches their skills, interests, and architecture choices. For example, a full-stack web developer who develops an application for an alarm and event dashboard can write logic to respond to events in JavaScript or TypeScript without ramping up on an OPC UA SDK, C, C++, Java or C#.
## Manage certificates and trust groups
-Azure Industrial IoT manages OPC UA Application Certificates and Trust Lists of factory floor machinery and control systems to keep OPC UA client to server communication secure. It restricts which client is allowed to talk to which server. Storage of private keys and signing of certificates is backed by Azure Key Vault, which supports hardware based security (HSM).
+Azure Industrial IoT manages OPC UA Application Certificates and Trust Lists of factory floor machinery and control systems to keep OPC UA client to server communication secure. It restricts which client is allowed to talk to which server. Storage of private keys and signing of certificates is backed by Azure Key Vault, which supports hardware-based security (HSM).
## Industrial IoT Components
-Azure IIoT solutions are built from specific components. These include the following.
+Azure IIoT solutions are built from specific components:
- **At least one Azure IoT Hub.** - **IoT Edge devices.** - **Industrial Edge Modules.** ### IoT Hub
-The [Azure IoT Hub](https://azure.microsoft.com/services/iot-hub/) acts as a central message hub for secure, bi-directional communications between any IoT application and the devices it manages. It's an open and flexible cloud platform as a service (PaaS) that supports open-source SDKs and multiple protocols.
-Gathering your industrial and business data onto an IoT Hub lets you store your data securely, perform business and efficiency analyses on it, and generate reports from it. You can also apply Microsoft Azure services and tools, such as [Power BI](https://powerbi.microsoft.com), on your consolidated data.
+The [Azure IoT Hub](https://azure.microsoft.com/services/iot-hub/) acts as a central message hub for secure, bi-directional communications between any IoT application and the devices it manages. It's an open and flexible cloud platform as a service (PaaS) that supports open-source SDKs and multiple protocols.
+
+Gathering your industrial and business data onto an IoT Hub lets you store your data securely, perform business and efficiency analyses on it, and generate reports from it. You can also apply Microsoft Azure services and tools, such as [Power BI](https://powerbi.microsoft.com), on your combined data.
### IoT Edge devices
-The [edge services](https://azure.microsoft.com/services/iot-edge/) are implemented as Azure IoT Edge modules and run on-premises. The cloud microservices are implemented as ASP.NET microservices with a REST interface and run on managed Azure Kubernetes Services or stand-alone on Azure App Service. For both edge and cloud services, we have provided pre-built Docker containers in the Microsoft Container Registry (MCR), removing this step for the customer. The edge and cloud services are leveraging each other and must be used together. We have also provided easy-to-use deployment scripts that allow one to deploy the entire platform with a single command.
-An IoT Edge device is composed of Edge Runtime and Edge Modules.
-- **Edge Modules** are docker containers, which are the smallest unit of computation, like OPC Publisher and OPC Twin.
+The [edge services](https://azure.microsoft.com/services/iot-edge/) are implemented as Azure IoT Edge modules and run on-premises. The cloud microservices are implemented as ASP.NET microservices with a REST interface and run on managed Azure Kubernetes Services or stand-alone on Azure App Service. For both edge and cloud services, we have provided pre-built Docker containers in the Microsoft Container Registry (MCR), removing this step for the customer. The edge and cloud services depend on each other and must be used together. We have also provided easy-to-use deployment scripts that allow you to deploy the entire platform with a single command.
+
+An IoT Edge device is composed of an IoT Edge Runtime and IoT Edge Modules.
+
+- **Edge Modules** are docker containers, which are the smallest unit of computation, like OPC Publisher and OPC Twin.
- **Edge device** is used to deploy such modules, which act as mediators between the OPC UA server and IoT Hub in the cloud.
-### Industrial Edge Modules
-- **OPC Publisher**: The OPC Publisher runs inside IoT Edge. It connects to OPC UA servers and publishes JSON encoded telemetry data from these servers in OPC UA "Pub/Sub" format to Azure IoT Hub. All transport protocols supported by the Azure IoT Hub client SDK can be used, i.e. HTTPS, AMQP, and MQTT.-- **OPC Twin**: The OPC Twin consists of microservices that use Azure IoT Edge and IoT Hub to connect the cloud and the factory network. OPC Twin provides discovery, registration, and remote control of industrial devices through REST APIs. OPC Twin doesn't require an OPC Unified Architecture (OPC UA) SDK. It's programming language agnostic, and can be included in a serverless workflow.-- **Discovery**: The discovery module, represented by the discoverer identity, provides discovery services on the edge, which include OPC UA server discovery. If discovery is configured and enabled, the module will send the results of a scan probe via the IoT Edge and IoT Hub telemetry path to the Onboarding service. The service processes the results and updates all related Identities in the Registry.
+### Industrial IoT Edge Modules
+
+- **OPC Publisher**: The OPC Publisher module connects to OPC UA server systems and publishes JSON encoded telemetry data from these servers in OPC UA "Pub/Sub" format to Azure. The OPC Publisher can run in two modes:
+ - In combination with and controlled by the Industrial-IoT cloud microservices (orchestrated mode)
+ - Configured by a local configuration file to allow operation without any Industrial-IoT cloud microservice (standalone mode)
+- **OPC Twin**: The OPC Twin module enables connection from the cloud to OPC UA server systems on the factory network. OPC Twin provides access to OPC UA server systems through REST APIs exposed by the Industrial-IoT cloud microservices.
+- **Discovery**: The Discovery module works only in combination with the Industrial-IoT cloud microservices. The Discovery module implements OPC UA server system discovery and reports the results to the Industrial-IoT cloud microservices.
## Next steps+ Now that you have learned what Industrial IoT is, you can read more about the OPC Publisher or get started with deploying the IIoT Platform: > [!div class="nextstepaction"] > [What is the OPC Publisher?](overview-what-is-opc-publisher.md) > [!div class="nextstepaction"]
-> [Deploy the Industrial IoT Platform](tutorial-deploy-industrial-iot-platform.md)
+> [Deploy the Industrial IoT Platform](tutorial-deploy-industrial-iot-platform.md)
+>
industrial-iot Tutorial Publisher Configure Opc Publisher https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/industrial-iot/tutorial-publisher-configure-opc-publisher.md
This tutorial contains information on the configuration of the OPC Publisher.
In this tutorial, you learn how to: > [!div class="checklist"]
+>
> * Configure the OPC Publisher via Configuration File > * Configure the OPC Publisher via Command-line Arguments > * Configure the OPC Publisher via IoT Hub Direct Methods
In this tutorial, you learn how to:
IoT Edge provides OPC Publisher with its security configuration for accessing IoT Hub automatically. OPC Publisher can also run as a standalone Docker container by specifying a device connection string for accessing IoT Hub via the `dc` command-line parameter. A device for IoT Hub can be created and its connection string retrieved through the Azure portal.
-For accessing OPC UA-enabled assets, X.509 certificates and their associated private keys are used by OPC UA. This is called OPC UA application authentication and in addition to OPC UA user authentication. OPC Publisher uses a file system-based certificate store to manage all application certificates. During startup, OPC Publisher checks if there is a certificate it can use in this certificate stores and creates a new self-signed certificate and new associated private key if there is none. Self-signed certificates provide weak authentication, since they are not signed by a trusted Certificate Authority, but at least the communication to the OPC UA-enabled asset can be encrypted this way.
+To access OPC UA systems, OPC UA uses X.509 certificates and their associated private keys for authentication and encryption of the data exchanged. OPC Publisher uses a file system-based certificate store to manage all application certificates. During startup, OPC Publisher checks for its own certificate in the certificate store. If none exists, it creates a new self-signed certificate and associated private key. Self-signed certificates provide weak authentication because they aren't signed by a trusted Certificate Authority.
-Security is enabled in the configuration file via the `"UseSecurity": true,` flag. The most secure endpoint available on the OPC UA servers the OPC Publisher is supposed to connect to is automatically selected.
-By default, OPC Publisher uses anonymous user authentication (in additional to the application authentication described above). However, OPC Publisher also supports user authentication using username and password. This can be specified via the REST API configuration interface (described below) or the configuration file as follows:
-```
+Security is enabled in the configuration file via the `"UseSecurity": true,` flag. The most secure endpoint available on the OPC UA server the OPC Publisher is supposed to connect to is automatically selected.
+By default, OPC Publisher uses anonymous user authentication (in addition to the application authentication described above). However, OPC Publisher also supports user authentication using username and password. These credentials can be specified via the REST API configuration interface (described below) or the configuration file as follows:
+
+```json
"OpcAuthenticationMode": "UsernamePassword", "OpcAuthenticationUsername": "usr", "OpcAuthenticationPassword": "pwd", ```
-In addition, OPC Publisher version 2.5 and below encrypts the username and password in the configuration file. Version 2.6 and above only supports the username and password in plaintext. This will be improved in the next version of OPC Publisher.
-To persist the security configuration of OPC Publisher across restarts, the certificate and private key located in the certificate store directory must be mapped to the IoT Edge host OS filesystem. See "Specifying Container Create Options in the Azure portal" above.
+OPC Publisher version 2.5 and below encrypts the username and password in the configuration file. Version 2.6 and above only supports the username and password in plaintext.
+
+To persist the security configuration of OPC Publisher, the certificate store must be mapped to the IoT Edge host OS filesystem or a container data volume.
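
The mapping mentioned above is supplied through the module's container create options. The paths below are placeholders (adjust them to where your OPC Publisher keeps its certificate store); only the `HostConfig.Binds` structure itself is standard Docker:

```python
import json

# Assumed host and in-container paths -- adjust to your deployment.
create_options = {
    "HostConfig": {
        "Binds": [
            # "host-path:container-path" bind mount for the cert store
            "/iiotedge/pki:/appdata/pki",
        ]
    }
}
# Paste the resulting JSON into the module's container create options.
print(json.dumps(create_options))
```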
## Configuration via Configuration File
-The simplest way to configure OPC Publisher is via a configuration file. An example configuration file as well as documentation regarding its format is provided via the file [`publishednodes.json`](https://raw.githubusercontent.com/Azure/Industrial-IoT/main/components/opc-ua/src/Microsoft.Azure.IIoT.OpcUa.Edge.Publisher/tests/Engine/publishednodes.json) in this repository.
-Configuration file syntax has changed over time and OPC Publisher still can read old formats, but converts them into the latest format when persisting the configuration, done regularly in an automated fashion.
+The simplest way to configure OPC Publisher is via a configuration file. An example configuration file and format documentation are provided in the file [`publishednodes.json`](https://raw.githubusercontent.com/Azure/Industrial-IoT/main/components/opc-ua/src/Microsoft.Azure.IIoT.OpcUa.Edge.Publisher/tests/Engine/publishednodes.json) in this repository.
+Configuration file syntax has changed over time. OPC Publisher still can read old formats, but converts them into the latest format when writing the file.
A basic configuration file looks like this:
-```
+
+```json
[ { "EndpointUrl": "opc.tcp://testserver:62541/Quickstarts/ReferenceServer",
A basic configuration file looks like this:
] ```
-OPC UA assets optimize network bandwidth by only sending data changes to OPC Publisher when the data has changed. If data changes need to be published more often or at regular intervals, OPC Publisher supports a "heartbeat" for every configured data item that can be enabled by additionally specifying the HeartbeatInterval key in the data item's configuration. The interval is specified in seconds:
-```
+OPC UA server systems optimize network bandwidth by only sending data changes to OPC Publisher when the data has changed. If data changes need to be published more often or at regular intervals, OPC Publisher supports a "heartbeat" setting for every configured OPC UA node. This "heartbeat" can be enabled by specifying the `HeartbeatInterval` key in the node configuration. The interval is specified in seconds:
+
+```json
"HeartbeatInterval": 3600, ```
-An OPC UA asset always sends the current value of a data item when OPC Publisher first connects to it. To prevent publishing this data to IoT Hub, the SkipFirst key can be additionally specified in the data item's configuration:
-```
+An OPC UA server system always sends the current value of a node when OPC Publisher connects to it. To prevent publishing this data to the cloud, the `SkipFirst` key can be additionally specified in the node configuration:
+
+```json
"SkipFirst": true, ```
+>[!NOTE]
+> This feature is only available in version 2.5 and below of OPC Publisher.
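
Putting the keys above together — endpoint, node, `HeartbeatInterval`, and `SkipFirst` — a complete `publishednodes.json` entry can be composed programmatically. A minimal sketch; the entry shape follows the example file referenced earlier, and the node Id is a hypothetical placeholder:

```python
import json

def make_published_node_entry(endpoint_url, node_id,
                              heartbeat_interval=3600, skip_first=True):
    """Compose one publishednodes.json entry with the optional
    HeartbeatInterval and SkipFirst keys described above."""
    return {
        "EndpointUrl": endpoint_url,
        "UseSecurity": True,
        "OpcNodes": [
            {
                "Id": node_id,
                "HeartbeatInterval": heartbeat_interval,
                "SkipFirst": skip_first,
            }
        ],
    }

entry = make_published_node_entry(
    "opc.tcp://testserver:62541/Quickstarts/ReferenceServer",
    "ns=2;s=AlternatingBoolean",  # hypothetical node Id
)
# The configuration file is a JSON array of such entries.
print(json.dumps([entry], indent=2))
```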
+ Both settings can be enabled globally via command-line options, too. ## Configuration via Command-line Arguments
-There are several command-line arguments that can be used to set global settings for OPC Publisher. They are described [here](reference-command-line-arguments.md).
-
+There are [several command-line arguments](reference-command-line-arguments.md) that can be used to set global settings for OPC Publisher.
## Configuration via the built-in OPC UA Server Interface
->[!NOTE]
+>[!NOTE]
> This feature is only available in version 2.5 and below of OPC Publisher. OPC Publisher has a built-in OPC UA server, running on port 62222. It implements three OPC UA methods:
- - PublishNode
- - UnpublishNode
- - GetPublishedNodes
+* PublishNode
+* UnpublishNode
+* GetPublishedNodes
This interface can be accessed using an OPC UA client application, for example [UA Expert](https://www.unified-automation.com/products/development-tools/uaexpert.html). ## Configuration via IoT Hub Direct Methods
->[!NOTE]
+>[!NOTE]
> This feature is only available in version 2.5 and below of OPC Publisher.
-OPC Publisher implements the following [IoT Hub Direct Methods](../iot-hub/iot-hub-devguide-direct-methods.md), which can be called from an application (from anywhere in the world) leveraging the [IoT Hub Device SDK](../iot-hub/iot-hub-devguide-sdks.md):
-
- - PublishNodes
- - UnpublishNodes
- - UnpublishAllNodes
- - GetConfiguredEndpoints
- - GetConfiguredNodesOnEndpoint
- - GetDiagnosticInfo
- - GetDiagnosticLog
- - GetDiagnosticStartupLog
- - ExitApplication
- - GetInfo
+OPC Publisher implements the following [IoT Hub Direct Methods](../iot-hub/iot-hub-devguide-direct-methods.md), which can be called from an application (from anywhere in the world) using the [IoT Hub Device SDK](../iot-hub/iot-hub-devguide-sdks.md):
-We have provided a [sample configuration application](https://github.com/Azure-Samples/iot-edge-opc-publisher-nodeconfiguration) as well as an [application for reading diagnostic information](https://github.com/Azure-Samples/iot-edge-opc-publisher-diagnostics) from OPC Publisher open-source, leveraging this interface.
+* PublishNodes
+* UnpublishNodes
+* UnpublishAllNodes
+* GetConfiguredEndpoints
+* GetConfiguredNodesOnEndpoint
+* GetDiagnosticInfo
+* GetDiagnosticLog
+* GetDiagnosticStartupLog
+* ExitApplication
+* GetInfo
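
As a sketch of how one of these direct methods might be prepared for invocation: the payload shape below is an assumption that mirrors the `publishednodes.json` entry format, and the node Id is a placeholder — verify against your OPC Publisher version. Actually sending the call requires the IoT Hub service SDK and a connection string, which are omitted here:

```python
import json

def build_publish_nodes_request(endpoint_url, node_ids):
    """Prepare a PublishNodes direct-method call for OPC Publisher.
    The payload shape is assumed, not quoted from the SDK docs."""
    return {
        "methodName": "PublishNodes",
        "payload": {
            "EndpointUrl": endpoint_url,
            "OpcNodes": [{"Id": nid} for nid in node_ids],
        },
    }

request = build_publish_nodes_request(
    "opc.tcp://testserver:62541/Quickstarts/ReferenceServer",
    ["ns=2;s=Temperature"],  # hypothetical node Id
)
print(json.dumps(request, indent=2))
```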
## Configuration via Cloud-based, Companion REST Microservice
->[!NOTE]
+>[!NOTE]
> This feature is only available in version 2.6 and above of OPC Publisher. A cloud-based, companion microservice with a REST interface is described and available [here](https://github.com/Azure/Industrial-IoT/blob/main/docs/services/publisher.md). It can be used to configure OPC Publisher via an OpenAPI-compatible interface, for example through Swagger. ## Configuration of the simple JSON telemetry format via Separate Configuration File
->[!NOTE]
+>[!NOTE]
> This feature is only available in version 2.5 and below of OPC Publisher.
-OPC Publisher allows filtering the parts of the non-standardized, simple telemetry format via a separate configuration file, which can be specified via the tc command line option. If no configuration file is specified, the full JSON telemetry format is sent to IoT Hub. The format of the separate telemetry configuration file is described [here](reference-opc-publisher-telemetry-format.md#opc-publisher-telemetry-configuration-file-format).
+OPC Publisher allows configuration of the simple telemetry format via a configuration file. This file can be specified via the `--tc` command-line option. By default the full JSON telemetry format is sent to IoT Hub. The format of the telemetry configuration file is described [here](reference-opc-publisher-telemetry-format.md#opc-publisher-telemetry-configuration-file-format).
## Next steps
-Now that you have configured the OPC Publisher, the next step is to learn how to tune the performance and memory of the Edge module:
+
+After a successful configuration of OPC Publisher, the next step is to learn how to tune the performance and memory of the module:
> [!div class="nextstepaction"] > [Performance and Memory Tuning](tutorial-publisher-performance-memory-tuning-opc-publisher.md)
industrial-iot Tutorial Publisher Deploy Opc Publisher Standalone https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/industrial-iot/tutorial-publisher-deploy-opc-publisher-standalone.md
Last updated 3/22/2021
OPC Publisher is a fully supported Microsoft product, developed in the open, that bridges the gap between industrial assets and the Microsoft Azure cloud. It does so by connecting to OPC UA-enabled assets or industrial connectivity software and publishes telemetry data to [Azure IoT Hub](https://azure.microsoft.com/services/iot-hub/) in various formats, including IEC62541 OPC UA PubSub standard format (from version 2.6 onwards).
-It runs on [Azure IoT Edge](https://azure.microsoft.com/services/iot-edge/) as a Module or on plain Docker as a container. Since it leverages the [.NET cross-platform runtime](/dotnet/core/introduction), it also runs natively on Linux and Windows 10.
+It runs on [Azure IoT Edge](https://azure.microsoft.com/services/iot-edge/) as a Module.
In this tutorial, you learn how to:
If you don't have an Azure subscription, create a free trial account
1. Pick the Azure subscription to use. If no Azure subscription is available, one must be created.
2. Pick the IoT Hub the OPC Publisher is supposed to send data to. If no IoT Hub is available, one must be created.
3. Pick the IoT Edge device the OPC Publisher is supposed to run on (or enter a name for a new IoT Edge device to be created).
-4. Click Create. The "Set modules on Device" page for the selected IoT Edge device opens.
-5. Click on "OPCPublisher" to open the OPC Publisher's "Update IoT Edge Module" page and then select "Container Create Options".
-6. Specify additional container create options based on your usage of OPC Publisher, see next section below.
+4. Select **Create**. The **Set modules on Device** page for the selected IoT Edge device opens.
+5. Select **OPCPublisher** to open the OPC Publisher's **Update IoT Edge Module** page and then select **Container Create Options**.
+6. Specify other container create options based on your usage of OPC Publisher, as described in the next section.
-All supported docker images for the docker OPC Publisher are listed [here](https://mcr.microsoft.com/v2/iotedge/opc-publisher/tags/list). For non-OPC UA-enabled assets, we have partnered with the leading industrial connectivity providers and helped them port their OPC UA adapter software to Azure IoT Edge. These adapters are available in the Azure [Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps?page=1).
+All supported docker images for the docker OPC Publisher are listed [here](https://mcr.microsoft.com/v2/iotedge/opc-publisher/tags/list). For non-OPC UA-enabled assets, leading industrial connectivity providers offer OPC UA adapter software. These adapters are available in the Azure [Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps?page=1).
## Specifying Container Create Options in the Azure portal
-When deploying OPC Publisher through the Azure portal, container create options can be specified in the Update IoT Edge Module page of OPC Publisher. These create options must be in JSON format. The OPC Publisher command line arguments can be specified via the Cmd key, e.g.:
+When deploying OPC Publisher through the Azure portal, container create options can be specified in the Update IoT Edge Module page of OPC Publisher. These create options must be in JSON format. The OPC Publisher command-line arguments can be specified via the Cmd key, for example:
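For orientation, a complete create options document usually pairs `Cmd` with a `HostConfig` section for bind mounts; the hostname, the mounted paths, and the `--aa` (auto-accept server certificates) argument below are illustrative values, not a prescribed configuration:

```json
{
    "Hostname": "opcpublisher",
    "Cmd": [
        "--pf=./pn.json",
        "--aa"
    ],
    "HostConfig": {
        "Binds": [
            "/iiotedge:/appdata"
        ]
    }
}
```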
``` "Cmd": [ "--pf=./pn.json",
A connection to an OPC UA server using its hostname without a DNS server configu
``` ## Next steps
-Now that you have deployed the OPC Publisher Edge module, the next step is to configure it:
+Now that you have deployed the OPC Publisher IoT Edge module, the next step is to configure it:
> [!div class="nextstepaction"]
-> [Configure the OPC Publisher](tutorial-publisher-configure-opc-publisher.md)
+> [Configure the OPC Publisher](tutorial-publisher-configure-opc-publisher.md)
iot-central Concepts App Templates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/concepts-app-templates.md
Title: What are application templates in Azure IoT Central | Microsoft Docs description: Azure IoT Central application templates allow you to jump in to IoT solution development. Previously updated : 12/21/2021 Last updated : 01/18/2022
Application templates consist of:
- Pre-configured rules and jobs - Rich documentation including tutorials and how-tos
-You choose the application template when you create your application. You can't change the template after the application is created.
+You choose the application template when you create your application. You can't change the template an application uses after it's created.
## Custom templates
Azure IoT Central is an industry agnostic application platform. Application temp
## Connected logistics
-Global logistics spending is expected to reach $10.6 trillion in 2020. Transportation of goods accounts for the majority of this spending and shipping providers are under intense competitive pressure and constraints.
+Global logistics spending is expected to reach $10.6 trillion in 2020. Transportation of goods accounts for most of this spending and shipping providers are under intense competitive pressure and constraints.
You can use IoT sensors to collect and monitor ambient conditions such as temperature, humidity, tilt, shock, light, and the location of a shipment. You can combine telemetry gathered from IoT sensors and devices with other data sources such as weather and traffic information in cloud-based business intelligence systems. The benefits of a connected logistics solution include:
-* Shipment monitoring with real-time tracing and tracking.
-* Shipment integrity with real-time ambient condition monitoring.
-* Security from theft, loss, or damage of shipments.
-* Geo-fencing, route optimization, fleet management, and vehicle analytics.
-* Forecasting for predictable departure and arrival of shipments.
+- Shipment monitoring with real-time tracing and tracking.
+- Shipment integrity with real-time ambient condition monitoring.
+- Security from theft, loss, or damage of shipments.
+- Geo-fencing, route optimization, fleet management, and vehicle analytics.
+- Forecasting for predictable departure and arrival of shipments.
The following screenshots show the out-of-the-box dashboard in the application template. The dashboard is fully customizable to meet your specific solution requirements:
Solutions based on IoT enabled cameras can deliver transformational benefits by
The benefits of a digital distribution center include:
-* Cameras monitor goods as they arrive and move through the conveyor system.
-* Automatic identification of faulty goods.
-* Efficient order tracking.
-* Reduced costs, improved productivity, and optimized usage.
+- Cameras monitor goods as they arrive and move through the conveyor system.
+- Automatic identification of faulty goods.
+- Efficient order tracking.
+- Reduced costs, improved productivity, and optimized usage.
-The following screenshot shows the out-of-the-box dashboard in the application template. The dashboard is fully customizable to meet your specific solution requirements:
+The following screenshot shows the out-of-the-box dashboard in the application template. The dashboard is fully customizable to meet your specific solution requirements:
:::image type="content" source="media/concepts-app-templates/digital-distribution-center-dashboard.png" alt-text="Digital Distribution Center Dashboard":::
To learn more, see the [Deploy and walk through a digital distribution center ap
For many retailers, environmental conditions within their stores are a key differentiator from their competitors. Retailers want to maintain pleasant conditions within their stores for the benefit of their customers.
-You can use the IoT Central in-store analytics condition monitoring application template to build an end-to-end solution. The application template lets you digitally connect to and monitor a retail store environment using of different kinds of sensor devices. These sensor devices generate telemetry that you can convert into business insights helping the retailer to reduce operating costs and create a great experience for their customers.
+You can use the IoT Central in-store analytics condition monitoring application template to build an end-to-end solution. The application template lets you digitally connect to and monitor a retail store environment using different kinds of sensor devices. These sensor devices generate telemetry that you can convert into business insights to help the retailer reduce operating costs and create a great experience for their customers.
Use the application template to:
-* Connect different kinds of IoT sensors to an IoT Central application instance.
-* Monitor and manage the health of the sensor network and any gateway devices in the environment.
-* Create custom rules around the environmental conditions within a store to trigger alerts for store managers.
-* Transform the environmental conditions within your store into insights that the retail store team can use to improve the customer experience.
-* Export the aggregated insights into existing or new business applications to provide useful and timely information to retail staff.
+- Connect different kinds of IoT sensors to an IoT Central application instance.
+- Monitor and manage the health of the sensor network and any gateway devices in the environment.
+- Create custom rules around the environmental conditions within a store to trigger alerts for store managers.
+- Transform the environmental conditions within your store into insights that the retail store team can use to improve the customer experience.
+- Export the aggregated insights into existing or new business applications to provide useful and timely information to retail staff.
-The application template comes with a set of device templates and uses a set of simulated devices to populate the dashboard.
+The application template comes with a set of device templates and uses a set of simulated devices to populate the dashboard.
-The following screenshot shows the out-of-the-box dashboard in the application template. The dashboard is fully customizable to meet your specific solution requirements:
+The following screenshot shows the out-of-the-box dashboard in the application template. The dashboard is fully customizable to meet your specific solution requirements:
:::image type="content" source="media/concepts-app-templates/in-store-analytics-condition-dashboard.png" alt-text="In-Store Analytics Condition Monitoring":::
You can use the IoT Central in-store analytics checkout application template to
Use the application template to:
-* Connect different kinds of IoT sensors to an IoT Central application instance.
-* Monitor and manage the health of the sensor network and any gateway devices in the environment.
-* Create custom rules around the checkout condition within a store to trigger alerts for retail staff.
-* Transform the checkout conditions within the store into insights that the retail store team can use to improve the customer experience.
-* Export the aggregated insights into existing or new business applications to provide useful and timely information to retail staff.
+- Connect different kinds of IoT sensors to an IoT Central application instance.
+- Monitor and manage the health of the sensor network and any gateway devices in the environment.
+- Create custom rules around the checkout condition within a store to trigger alerts for retail staff.
+- Transform the checkout conditions within the store into insights that the retail store team can use to improve the customer experience.
+- Export the aggregated insights into existing or new business applications to provide useful and timely information to retail staff.
-The application template comes with a set of device templates and uses a set of simulated devices to populate the dashboard with lane occupancy data.
+The application template comes with a set of device templates and uses a set of simulated devices to populate the dashboard with lane occupancy data.
-The following screenshot shows the out-of-the-box dashboard in the application template. The dashboard is fully customizable to meet your specific solution requirements:
+The following screenshot shows the out-of-the-box dashboard in the application template. The dashboard is fully customizable to meet your specific solution requirements:
:::image type="content" source="media/concepts-app-templates/In-Store-Analytics-Checkout-Dashboard.png" alt-text="In-Store Analytics Checkout":::
IoT data generated from radio-frequency identification (RFID) tags, beacons, and
The benefits of smart inventory management include:
-* Reducing the risk of items being out of stock and ensuring the desired customer service level.
-* In-depth analysis and insights into inventory accuracy in near real time.
-* Tools to help decide on the right amount of inventory to hold to meet customer orders.
+- Reducing the risk of items being out of stock and ensuring the desired customer service level.
+- In-depth analysis and insights into inventory accuracy in near real time.
+- Tools to help decide on the right amount of inventory to hold to meet customer orders.
This application template focuses on device connectivity, and the configuration and management of RFID and Bluetooth low energy (BLE) reader devices.
In the increasingly competitive retail landscape, retailers constantly face pres
The IoT Central micro-fulfillment center application template enables you to monitor and manage all aspects of your fully automated fulfillment centers. The template includes a set of simulated condition monitoring sensors and robotic carriers to accelerate the solution development process. These sensor devices capture meaningful signals that can be converted into business insights allowing retailers to reduce their operating costs and create experiences for their customers.
-The application template enables you to:
+The application template enables you to:
- Seamlessly connect different kinds of IoT sensors such as robots or condition monitoring sensors to an IoT Central application instance.
- Monitor and manage the health of the sensor network, and any gateway devices in the environment.
- Create custom rules around the environmental conditions within a fulfillment center to trigger appropriate alerts.
-- Transform the environmental conditions within your fulfillment center into insights that can be leveraged by the retail warehouse team.
+- Transform the environmental conditions within your fulfillment center into insights that the retail warehouse team can use.
- Export the aggregated insights into existing or new business applications for the benefit of the retail staff members.

The following screenshot shows the out-of-the-box dashboard in the application template. The dashboard is fully customizable to meet your specific solution requirements:
To learn more, see the [Deploy and walk through the micro-fulfillment center app
## Smart meter monitoring
- The smart meters not only enable automated billing, but also advanced metering use cases such as real-time readings and bi-directional communication. The smart meter app template enables utilities and partners to monitor smart meters status and data, define alarms and notifications. It provides sample commands, such as disconnect meter and update software. The meter data can be set up to egress to other business applications and to develop custom solutions.
+Smart meters not only enable automated billing, but also advanced metering use cases such as real-time readings and bi-directional communication. The smart meter app template enables utilities and partners to monitor smart meter status and data, and to define alarms and notifications. It provides sample commands, such as disconnect meter and update software. The meter data can be set up to egress to other business applications and to develop custom solutions.
App's key functionalities:
- Meter sample device model
- Meter info and live status
- Meter readings such as energy, power, and voltages
-- Meter command samples
+- Meter command samples
- Built-in visualization and dashboards
- Extensibility for custom solution development
After you deploy the app, you'll see the simulated meter data on the dashboard,
## Solar panel monitoring
-The solar panel monitoring app enables utilities and partners to monitor solar panels, such as their energy generation and connection status in near real time. It can send notifications based on defined threshold criteria. It provides sample commands, such as update firmware and other properties. The solar panel data can be set up to egress to other business applications and to develop custom solutions.
+The solar panel monitoring app enables utilities and partners to monitor solar panels, such as their energy generation and connection status in near real time. It can send notifications based on defined threshold criteria. It provides sample commands, such as update firmware and other properties. The solar panel data can be set up to egress to other business applications and to develop custom solutions.
-App's key functionalities:
+App's key functionalities:
-- Solar panel sample device model
+- Solar panel sample device model
- Solar Panel info and live status
- Solar energy generation and other readings
- Command and control samples
App's key functionalities:
You can try the [solar panel monitoring app for free](https://apps.azureiotcentral.com/build/new/solar-panel-monitoring) without an Azure subscription and any commitments.
-After you deploy the app, you'll see the simulated solar panel data within 1-2 minutes, as shown in the dashboard below. This template is a sample app that you can easily extend and customize for your specific use cases.
+After you deploy the app, you'll see the simulated solar panel data within 1-2 minutes, as shown in the dashboard below. This template is a sample app that you can easily extend and customize for your specific use cases.
:::image type="content" source="media/concepts-app-templates/solar-panel-app-dashboard.png" alt-text="Solar Panel App Dashboard":::
The App template consists of:
- Sample water quality monitor device templates
- Simulated water quality monitor devices
- Pre-configured rules and jobs
-- Branding using white labeling
+- Branding using white labeling
Get started with the [Water Quality Monitoring application tutorial](../government/tutorial-water-quality-monitoring.md).

## Water Consumption Monitoring
-Traditional water consumption tracking relies on water operators manually reading water consumption meters at the meter sites. More and more cities are replacing traditional meters with advanced smart meters enabling remote monitoring of consumption and remotely controlling valves to control water flow. Water consumption monitoring coupled with digital feedback message to the citizen can increase awareness and reduce water consumption.
+Traditional water consumption tracking relies on water operators manually reading water consumption meters at the meter sites. More cities are replacing traditional meters with advanced smart meters that enable remote monitoring of consumption and remote control of valves to manage water flow. Water consumption monitoring coupled with digital feedback messages to citizens can increase awareness and reduce water consumption.
-Water Consumption Monitoring app is an IoT Central app template to help you kickstart your IoT solution development to enable water utilities and cities to remotely monitor and control water flow to reduce consumption.
+Water Consumption Monitoring app is an IoT Central app template to help you kickstart your IoT solution development to enable water utilities and cities to remotely monitor and control water flow to reduce consumption.
:::image type="content" source="media/concepts-app-templates/water-consumption-monitoring-dashboard-full.png" alt-text="Water Consumption Monitoring App template":::
The Water Consumption Monitoring app template consists of pre-configured:
Get started with the [Water Consumption Monitoring application tutorial](../government/tutorial-water-consumption-monitoring.md).
-## Connected Waste Management
+## Connected Waste Management
-Connected Waste Management app is an IoT Central app template to help you kickstart your IoT solution development to enable smart cities to remotely monitor to maximize efficient waste collection.
+Connected Waste Management app is an IoT Central app template to help you kickstart your IoT solution development, enabling smart cities to remotely monitor waste bins to maximize efficient waste collection.
:::image type="content" source="media/concepts-app-templates/connected-waste-management-dashboard.png" alt-text="Connected Waste Management App template":::
-The Connected Waste Management app template consist of pre-configured:
+The Connected Waste Management app template consists of pre-configured:
- Sample dashboards
- Sample connected waste bin device templates
- Simulated connected waste bin devices
- Pre-configured rules and jobs
-- Branding using white labeling
+- Branding using white labeling
Get started with the [Connected Waste Management application tutorial](../government/tutorial-connected-waste-management.md).
-## Continuous patient monitoring
+## Continuous patient monitoring
In the healthcare IoT space, Continuous Patient Monitoring is one of the key enablers of reducing the risk of readmissions, managing chronic diseases more effectively, and improving patient outcomes. Continuous Patient Monitoring can be split into two major categories:
This application template can be used to build solutions for both categories of
:::image type="content" source="media/concepts-app-templates/in-patient-dashboard.png" alt-text="CPM-dashboard":::

## Next steps

Now that you know what IoT Central application templates are, get started by [creating an IoT Central Application](quick-deploy-iot-central.md).
iot-central Concepts Architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/concepts-architecture.md
Devices connect to IoT Central using one of the supported protocols: [MQTT, AMQP, o
## Gateways
-Local device gateways are useful in several scenarios, such as:
+Local gateway devices are useful in several scenarios, such as:
-- Devices may not be able to connect directly to IoT Central because they can't connect to the internet. For example, you may have a collection of Bluetooth enabled occupancy sensors that need to connect through a gateway.-- The quantity of data generated by your devices may be high. To reduce costs, you can combine or aggregate the data in a local gateway before it's sent to your IoT Central application.-- Your solution may require fast responses to anomalies in the data. You can run rules on a gateway that identify anomalies and take an action locally without the need to send data to your IoT Central application.
+- Devices can't connect directly to IoT Central because they can't connect to the internet. For example, you may have a collection of Bluetooth enabled occupancy sensors that need to connect through a gateway device.
+- The quantity of data generated by your devices is high. To reduce costs, combine or aggregate the data in a local gateway before you send it to your IoT Central application.
+- Your solution requires fast responses to anomalies in the data. You can run rules on a gateway device that identify anomalies and take an action locally without the need to send data to your IoT Central application.
-To learn more, see [Connect Azure IoT Edge devices to an Azure IoT Central application](concepts-iot-edge.md).
+Gateway devices typically require more processing power than a standalone device. One option to implement a gateway device is to use [Azure IoT Edge and apply one of the standard IoT Edge gateway patterns](concepts-iot-edge.md). You can also run your own custom gateway code on a suitable device.
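As a sketch of the custom-code option, a protocol-translation gateway reads a proprietary sensor frame and reshapes it into JSON telemetry before forwarding it to the cloud. The binary frame layout and field names below are hypothetical, and the forwarding step is shown only as a comment because it requires real device credentials:

```python
import json
import struct

# Hypothetical binary frame from a legacy sensor: little-endian
# unsigned 16-bit sensor ID, then two 32-bit floats (temperature, humidity).
FRAME_FORMAT = "<Hff"

def translate(frame: bytes) -> str:
    """Translate a proprietary sensor frame into a JSON telemetry payload."""
    sensor_id, temperature, humidity = struct.unpack(FRAME_FORMAT, frame)
    return json.dumps({
        "sensorId": sensor_id,
        "temperature": round(temperature, 2),
        "humidity": round(humidity, 2),
    })

if __name__ == "__main__":
    raw = struct.pack(FRAME_FORMAT, 7, 21.5, 48.0)
    telemetry = translate(raw)
    print(telemetry)
    # A real gateway would then forward the payload with the Azure IoT
    # device SDK (pip install azure-iot-device), for example:
    # from azure.iot.device import IoTHubDeviceClient, Message
    # client = IoTHubDeviceClient.create_from_connection_string(CONN_STR)
    # client.send_message(Message(telemetry))
```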
## Data export
iot-central Concepts Iot Edge https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/concepts-iot-edge.md
Title: Azure IoT Edge and Azure IoT Central | Microsoft Docs
description: Understand how to use Azure IoT Edge with an IoT Central application. Previously updated : 12/28/2021 Last updated : 01/18/2022
# Connect Azure IoT Edge devices to an Azure IoT Central application
-Azure IoT Edge moves cloud analytics and custom business logic to devices so your organization can focus on business insights instead of data management. Scale out your IoT solution by packaging your business logic into standard containers, deploy those containers to your devices, and monitor them from the cloud.
+Azure IoT Edge moves cloud analytics and custom business logic from the cloud to your devices. This approach lets your cloud solution focus on business insights instead of data management. Scale out your IoT solution by packaging your business logic into standard containers, deploy those containers to your devices, and monitor them from the cloud.
This article describes:
IoT Central enables the following capabilities for IoT Edge devices:
An IoT Edge device can be:
-* A standalone device composed of modules.
-* A *gateway device*, with downstream devices connecting to it.
-
-![IoT Central with IoT Edge Overview](./media/concepts-iot-edge/gatewayedge.png)
-
-A gateway device can be a:
-
-* *Transparent gateway* where the IoT Edge hub module behaves like IoT Central and handles connections from devices registered in IoT Central. Messages pass from downstream devices to IoT Central as if there's no gateway between them.
-
- > [!NOTE]
- > IoT Central currently doesn't support connecting an IoT Edge device as a downstream device to an IoT Edge transparent gateway. This is because all devices that connect to IoT Central are provisioned using the Device Provisioning Service (DPS) and DPS doesn't support nested IoT Edge scenarios.
-
-* *Translation gateway* where devices that can't connect to IoT Central on their own, connect to a custom IoT Edge module instead. The module in the IoT Edge device processes incoming downstream device messages and then forwards them to IoT Central.
-
-A single IoT Edge device can function as both a transparent gateway and a translation gateway.
-
-To learn more about the IoT Edge gateway patterns, see [How an IoT Edge device can be used as a gateway](../../iot-edge/iot-edge-as-gateway.md).
-
-## IoT Edge patterns
-
-IoT Central supports the following IoT Edge device patterns:
-
-### IoT Edge as leaf device
-
-![IoT Edge as leaf device](./media/concepts-iot-edge/edgeasleafdevice.png)
-
-The IoT Edge device is provisioned in IoT Central and any downstream devices and their telemetry is represented as coming from the IoT Edge device. Downstream devices connected to the IoT Edge device aren't provisioned in IoT Central.
-
-### IoT Edge gateway device connected to downstream devices with identity
-
-![IoT Edge with downstream device identity](./medieviceidentity.png)
-
-The IoT Edge device is provisioned in IoT Central along with the downstream devices connected to the IoT Edge device. Runtime support for provisioning downstream devices through the gateway isn't currently supported.
-
-### IoT Edge gateway device connected to downstream devices with identity provided by the IoT Edge gateway
-
-![IoT Edge with downstream device without identity](./medieviceidentity.png)
-
-The IoT Edge device is provisioned in IoT Central along with the downstream devices connected to the IoT Edge device. Currently, IoT Central doesn't have runtime support for a gateway to provide an identity and to provision downstream devices. If you bring your own identity translation module, IoT Central can support this pattern.
-
-### Downstream device relationships with a gateway and modules
-
-Downstream devices can connect to an IoT Edge gateway device through the *IoT Edge hub* module. In this scenario, the IoT Edge device is a transparent gateway:
--
-Downstream devices can also connect to an IoT Edge gateway device through a custom module. In the following scenario, downstream devices connect through a *Modbus* custom module. In this scenario, the IoT Edge device is a translation gateway:
--
-The following diagram shows connections to an IoT Edge gateway device through both types of modules. In this scenario, the IoT Edge device is both a transparent and a translation gateway:
--
-Downstream devices can connect to an IoT Edge gateway device through multiple custom modules. The following diagram shows downstream devices connecting through a Modbus custom module, a BLE custom module, and the *IoT Edge hub* module:
-
+* A standalone device composed of custom modules.
+* A *gateway device*, with downstream devices connecting to it. A gateway device may include custom modules.
## IoT Edge devices and IoT Central
When you replace the deployment manifest, any connected IoT Edge devices downloa
} ```
+## IoT Edge gateway patterns
+
+IoT Central supports the following IoT Edge device patterns:
+
+### IoT Edge as a transparent gateway
+
+Downstream devices connect to IoT Central through the gateway with their own identity.
+
+![IoT Edge as transparent gateway](./medieviceidentity.png)
+
+The IoT Edge device is provisioned in IoT Central along with the downstream devices connected to the IoT Edge device. Runtime support for provisioning downstream devices through the gateway isn't currently supported.
+
+The IoT Edge hub module behaves like IoT Central and handles connections from devices registered in IoT Central. Messages pass from downstream devices to IoT Central as if there's no gateway between them. In a transparent gateway, you can't use custom modules to manipulate the messages from the downstream devices.
+
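In the general IoT Edge transparent-gateway pattern, a downstream device reaches the cloud through the gateway by appending a `GatewayHostName` segment to its device connection string. A minimal sketch of building such a string (all hostnames, IDs, and keys below are placeholders):

```python
def downstream_connection_string(hub: str, device_id: str, key: str,
                                 gateway_hostname: str) -> str:
    """Build a device connection string that routes traffic through an
    IoT Edge transparent gateway via the GatewayHostName segment."""
    return (f"HostName={hub};DeviceId={device_id};"
            f"SharedAccessKey={key};GatewayHostName={gateway_hostname}")

if __name__ == "__main__":
    cs = downstream_connection_string(
        "contoso-hub.azure-devices.net",  # placeholder IoT hub host
        "thermostat-01",                  # placeholder device ID
        "base64key==",                    # placeholder symmetric key
        "edge-gateway.local",             # placeholder gateway hostname
    )
    print(cs)
    # A downstream device could then connect with the Azure IoT device SDK:
    # from azure.iot.device import IoTHubDeviceClient
    # client = IoTHubDeviceClient.create_from_connection_string(cs)
```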
+> [!NOTE]
+> IoT Central currently doesn't support connecting an IoT Edge device as a downstream device to an IoT Edge transparent gateway. This is because all devices that connect to IoT Central are provisioned using the Device Provisioning Service (DPS) and DPS doesn't currently support nested IoT Edge scenarios.
+
+### IoT Edge as a protocol translation gateway
+
+This pattern enables you to connect devices that can't use any of the protocols that IoT Central supports.
+
+![IoT Edge as protocol translation gateway](./media/concepts-iot-edge/edgeasleafdevice.png)
+
+The IoT Edge device is provisioned in IoT Central and any telemetry from your downstream devices is represented as coming from the IoT Edge device. Downstream devices connected to the IoT Edge device aren't provisioned in IoT Central.
+
+### IoT Edge as an identity translation gateway
+
+Downstream devices connect to a module in the gateway that provides IoT Central device identities for them.
+
+![IoT Edge as identity translation gateway](./medieviceidentity.png)
+
+The IoT Edge device is provisioned in IoT Central along with the downstream devices connected to the IoT Edge device. Currently, IoT Central doesn't have runtime support for a gateway to provide an identity and to provision downstream devices. If you bring your own identity translation module, IoT Central can support this pattern.
+
+The [Azure IoT Central gateway module for Azure Video Analyzer](https://github.com/iot-for-all/iotc-ava-gateway/blob/main/README.md) on GitHub uses this pattern.
+
+### Downstream device relationships with a gateway and modules
+
+If the downstream devices connect to an IoT Edge gateway device through the *IoT Edge hub* module, the IoT Edge device is a transparent gateway:
++
+If the downstream devices connect to an IoT Edge gateway device through a custom module, the IoT Edge device is a translation gateway. In the following example, downstream devices connect through a *Modbus* custom module that does the protocol translation:
++
+The following diagram shows connections to an IoT Edge gateway device through both types of modules. In this scenario, the IoT Edge device is both a transparent and a translation gateway:
++
+Downstream devices can connect to an IoT Edge gateway device through multiple custom modules. The following diagram shows downstream devices connecting through a Modbus custom module, a BLE custom module, and the *IoT Edge hub* module:
++
+To learn more about the IoT Edge gateway patterns, see [How an IoT Edge device can be used as a gateway](../../iot-edge/iot-edge-as-gateway.md).
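The gateway type is determined by which modules the deployment manifest declares and how messages are routed. As a rough sketch (the *modbus* module name and route names here are hypothetical, not from the sample manifest), the `$edgeHub` routes for a gateway that is both transparent and a translation gateway might look like this:

```python
import json

# Sketch of $edgeHub routes in an IoT Edge deployment manifest.
# A purely transparent gateway needs no custom modules at all:
# $edgeHub forwards downstream device messages upstream on its own.
edge_hub_routes = {
    # Transparent pattern: everything $edgeHub receives goes to IoT Central.
    "upstream": "FROM /messages/* INTO $upstream",
    # Translation pattern (hypothetical module): a custom module's
    # output is sent upstream after protocol translation.
    "modbusToUpstream": "FROM /messages/modules/modbus/outputs/* INTO $upstream",
}

manifest_fragment = {
    "$edgeHub": {
        "properties.desired": {
            "schemaVersion": "1.2",
            "routes": edge_hub_routes,
            "storeAndForwardConfiguration": {"timeToLiveSecs": 7200},
        }
    }
}

print(json.dumps(manifest_fragment, indent=2))
```

The route syntax (`FROM … INTO $upstream`) is standard IoT Edge routing; the exact modules you declare depend on which gateway pattern you implement.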
+
## Deploy the IoT Edge runtime

To learn where you can run the IoT Edge runtime, see [Azure IoT Edge supported systems](../../iot-edge/support.md).
You can also install the IoT Edge runtime in the following environments:
* [Install and provision Azure IoT Edge for Linux on a Windows device (Preview)](../../iot-edge/how-to-provision-single-device-linux-on-windows-symmetric.md)
* [Run Azure IoT Edge on Ubuntu Virtual Machines in Azure](../../iot-edge/how-to-install-iot-edge-ubuntuvm.md)
-## IoT Edge gateway devices
-
-If you selected an IoT Edge device to be a gateway device, you can add downstream relationships to device models for devices you want to connect to the gateway device.
-
-To learn more, see [How to connect devices through an IoT Edge transparent gateway](how-to-connect-iot-edge-transparent-gateway.md).
-
## Monitor your IoT Edge devices
-To learn how to remotely monitor your IoT Edge fleet using Azure Monitor and built-in metrics integration, see [Collect and transport metrics](../../iot-edge/how-to-collect-and-transport-metrics.md).
+To learn how to remotely monitor your IoT Edge fleet, see [Collect and transport metrics](../../iot-edge/how-to-collect-and-transport-metrics.md).
## Next steps
iot-central How To Connect Devices X509 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/how-to-connect-devices-x509.md
To connect the IoT Edge device to IoT Central using the X.509 device certificate
To learn more, see [Create and provision IoT Edge devices at scale on Linux using X.509 certificates](../../iot-edge/how-to-provision-devices-at-scale-linux-x509.md).
-## Connect an IoT Edge leaf device
+## Connect a downstream device to IoT Edge
-IoT Edge uses X.509 certificates to secure the connection between leaf devices and an IoT Edge device acting as a gateway. To learn more about configuring this scenario, see [Connect a downstream device to an Azure IoT Edge gateway](../../iot-edge/how-to-connect-downstream-device.md).
+IoT Edge uses X.509 certificates to secure the connection between downstream devices and an IoT Edge device acting as a transparent gateway. To learn more about configuring this scenario, see [Connect a downstream device to an Azure IoT Edge gateway](../../iot-edge/how-to-connect-downstream-device.md).
## Roll X.509 device certificates
iot-central How To Connect Iot Edge Transparent Gateway https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/how-to-connect-iot-edge-transparent-gateway.md
Title: Connect an IoT Edge transparent gateway to an Azure IoT Central applicati
description: How to connect devices through an IoT Edge transparent gateway to an IoT Central application Previously updated : 12/21/2021 Last updated : 01/18/2022
An IoT Edge device can act as a gateway that provides a connection between other devices on a local network and your IoT Central application. You use a gateway when the device can't access your IoT Central application directly.
-IoT Edge supports the [*transparent* and *translation* gateway patterns](../../iot-edge/iot-edge-as-gateway.md). This article summarizes how to implement the transparent gateway pattern. In this pattern, the gateway passes messages from the downstream device through to the IoT Hub endpoint in your IoT Central application.
+IoT Edge supports the [*transparent* and *translation* gateway patterns](../../iot-edge/iot-edge-as-gateway.md). This article summarizes how to implement the transparent gateway pattern. In this pattern, the gateway passes messages from the downstream device through to the IoT Hub endpoint in your IoT Central application. The gateway doesn't manipulate the messages as they pass through. In IoT Central, each downstream device appears as a child of the gateway device:
-This article uses virtual machines to host the downstream device and gateway. In a real scenario, the downstream device and gateway would run on physical devices on your local network.
+
+For simplicity, this article uses virtual machines to host the downstream and gateway devices. In a real scenario, the downstream device and gateway would run on physical devices on your local network.
## Prerequisites
To complete the steps in this article, you need:
To follow the steps in this article, download the following files to your computer:
-- [Thermostat device model](https://raw.githubusercontent.com/Azure/iot-plugandplay-models/main/dtmi/com/example/thermostat-1.json)
-- [Transparent gateway manifest](https://raw.githubusercontent.com/Azure-Samples/iot-central-docs-samples/master/transparent-gateway/EdgeTransparentGatewayManifest.json)
+- [Thermostat device model (thermostat-1.json)](https://raw.githubusercontent.com/Azure/iot-plugandplay-models/main/dtmi/com/example/thermostat-1.json) - this file is the device model for the downstream devices.
+- [Transparent gateway manifest (EdgeTransparentGatewayManifest.json)](https://raw.githubusercontent.com/Azure-Samples/iot-central-docs-samples/master/transparent-gateway/EdgeTransparentGatewayManifest.json) - this file is the IoT Edge deployment manifest for the gateway device.
## Add device templates

Both the downstream devices and the gateway device require device templates in IoT Central. IoT Central lets you model the relationship between your downstream devices and your gateway so you can view and manage them after they're connected.
-To create a device template for a downstream device, create a standard device template that models the capabilities of your device. The example shown in this article uses the thermostat device model.
+To create a device template for a downstream device, create a standard device template that models the capabilities of your device. The example shown in this article uses the thermostat device model you downloaded.
To create a device template for a downstream device:
To create a device template for a downstream device:
1. Publish the device template.
-To create a device template for a transparent IoT Edge gateway:
+To create a device template for an IoT Edge transparent gateway device:
1. Create a device template and choose **Azure IoT Edge** as the template type.
To create a device template for a transparent IoT Edge gateway:
1. Add an entry in **Relationships** to the downstream device template.
-The following screenshot shows the **Relationships** page for an IoT Edge gateway device that has downstream devices that use the **Thermostat** device template:
+The following screenshot shows the **Relationships** page for an IoT Edge gateway device with downstream devices that use the **Thermostat** device template:
:::image type="content" source="media/how-to-connect-iot-edge-transparent-gateway/device-template-relationship.png" alt-text="Screenshot showing IoT Edge gateway device template relationship with a thermostat downstream device template.":::
-The previous screenshot shows an IoT Edge gateway device template with no modules defined. A transparent gateway doesn't require any modules because the IoT Edge runtime forwards messages from the downstream devices to IoT Central. If the gateway itself needs to send telemetry, synchronize properties, or handle commands, you can define these capabilities in the root component or in a module.
+The previous screenshot shows an IoT Edge gateway device template with no modules defined. A transparent gateway doesn't require any modules because the IoT Edge runtime forwards messages from the downstream devices directly to IoT Central. If the gateway itself needs to send telemetry, synchronize properties, or handle commands, you can define these capabilities in the root component or in a module.
Add any required cloud properties and views before you publish the gateway and downstream device templates.
To let you try out this scenario, the following steps show you how to deploy the
> [!TIP] > To learn how to deploy the IoT Edge runtime to a physical device, see [Create an IoT Edge device](../../iot-edge/how-to-create-iot-edge-device.md) in the IoT Edge documentation.
-To try out the transparent gateway scenario, select the following button to deploy two Linux virtual machines. One virtual machine has the IoT Edge runtime installed and is a transparent IoT Edge gateway. The other virtual machine is a downstream device where you'll run code to send simulated telemetry:
+To try out the transparent gateway scenario, select the following button to deploy two Linux virtual machines. One virtual machine has the IoT Edge runtime installed and is the transparent IoT Edge gateway. The other virtual machine is a downstream device where you'll run code to send simulated thermostat telemetry:
-<a href="https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2Fiot-central-docs-samples%2Fmaster%2Ftransparent-gateway%2FDeployGatewayVMs.json" target="_blank">
- <img src="https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/1-CONTRIBUTION-GUIDE/images/deploytoazure.png" alt="Deploy to Azure button" />
-</a>
+[![Deploy to Azure Button](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2Fiot-central-docs-samples%2Fmaster%2Ftransparent-gateway%2FDeployGatewayVMs.json)
When the two virtual machines are deployed and running, verify the IoT Edge gateway device is running on the `edgegateway` virtual machine:
When the two virtual machines are deployed and running, verify the IoT Edge gate
:::image type="content" source="media/how-to-connect-iot-edge-transparent-gateway/iot-edge-runtime.png" alt-text="Screenshot showing the $edgeAgent and $edgeHub modules running on the IoT Edge gateway.":::
-> [!TIP]
-> You may have to wait for several minutes while the virtual machine starts up and the device is provisioned in your IoT Central application.
+ > [!TIP]
+ > You may have to wait for several minutes while the virtual machine starts up and the device is provisioned in your IoT Central application.
## Configure the gateway
Your transparent gateway is now configured and ready to start forwarding telemet
## Provision a downstream device
-Currently, IoT Edge can't automatically provision a downstream device to your IoT Central application. The following steps show you how to provision the `thermostat1` device. To complete these steps, you need an environment with Python 3.6 (or higher) installed and internet connectivity. The [Azure Cloud Shell](https://shell.azure.com/) has Python 3.7 pre-installed:
+IoT Central uses the Device Provisioning Service (DPS) to provision devices. Currently, IoT Edge can't use DPS to provision a downstream device to your IoT Central application. The following steps show you how to provision the `thermostat1` device manually. To complete these steps, you need an environment with Python 3.6 (or higher) installed and internet connectivity. The [Azure Cloud Shell](https://shell.azure.com/) has Python 3.7 pre-installed:
1. Run the following command to install the `azure.iot.device` module:
Currently, IoT Edge can't automatically provision a downstream device to your Io
pip install azure.iot.device
```
-1. Run the following command to download the Python script that does the provisioning:
+1. Run the following command to download the Python script that does the device provisioning:
```bash
wget https://raw.githubusercontent.com/Azure-Samples/iot-central-docs-samples/master/transparent-gateway/provision_device.py
```
-1. To provision the `thermostat1` downstream device in your IoT Central application, run the following commands, replacing `{your application id scope}` and `{your device primary key}` :
+1. To provision the `thermostat1` downstream device in your IoT Central application, run the following commands, replacing `{your application id scope}` and `{your device primary key}`. You made a note of these values when you added the devices to your IoT Central application:
```bash
export IOTHUB_DEVICE_DPS_DEVICE_ID=thermostat1
Currently, IoT Edge can't automatically provision a downstream device to your Io
python provision_device.py
```
-In your IoT Central application, verify that the **Device status** for the thermostat1 device is now **Provisioned**.
+In your IoT Central application, verify that the **Device status** for the `thermostat1` device is now **Provisioned**.
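Under the hood, a script like *provision_device.py* authenticates to DPS with a shared access signature derived from the device's primary key. The following is a minimal sketch of that token construction (the ID scope and key values are placeholders, not working credentials; in practice the `azure.iot.device` SDK builds this token for you):

```python
import base64
import hashlib
import hmac
import time
import urllib.parse


def dps_sas_token(id_scope: str, device_id: str, device_key_b64: str,
                  ttl_seconds: int = 3600) -> str:
    """Build a SharedAccessSignature for a DPS symmetric-key registration.

    The resource URI is idScope/registrations/deviceId. The signature is an
    HMAC-SHA256 over the URL-encoded resource, a newline, and the expiry
    timestamp, keyed with the base64-decoded device key.
    """
    resource = f"{id_scope}/registrations/{device_id}"
    expiry = int(time.time()) + ttl_seconds
    encoded_resource = urllib.parse.quote(resource, safe="")
    to_sign = f"{encoded_resource}\n{expiry}".encode("utf-8")
    key = base64.b64decode(device_key_b64)
    signature = base64.b64encode(
        hmac.new(key, to_sign, hashlib.sha256).digest()).decode("utf-8")
    encoded_signature = urllib.parse.quote(signature, safe="")
    return (f"SharedAccessSignature sr={encoded_resource}"
            f"&sig={encoded_signature}&se={expiry}&skn=registration")


# Placeholder values -- substitute your own ID scope, device ID, and key.
token = dps_sas_token("0ne000000AA", "thermostat1",
                      base64.b64encode(b"not-a-real-key").decode())
print(token)
```

Treat this as an illustration of the token format only; use the SDK for real provisioning.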
## Configure a downstream device

In the previous section, you configured the `edgegateway` virtual machine with the demo certificates to enable it to run as a gateway. The `leafdevice` virtual machine is ready for you to install a thermostat simulator that uses the gateway to connect to IoT Central.
-The `leafdevice` virtual machine needs a copy of the root CA certificate you created on the `edgegateway` virtual machine. Copy the */home/AzureUser/certs/certs/azure-iot-test-only.root.ca.cert.pem* file from the `edgegateway` virtual machine to your home directory on the `leafdevice` virtual machine. You can use the **scp** command to copy files to and from a Linux virtual machine.
+The `leafdevice` virtual machine needs a copy of the root CA certificate you created on the `edgegateway` virtual machine. Copy the */home/AzureUser/certs/certs/azure-iot-test-only.root.ca.cert.pem* file from the `edgegateway` virtual machine to your home directory on the `leafdevice` virtual machine. You can use the **scp** command to copy files between Linux virtual machines.
To learn how to check the connection from the downstream device to the gateway, see [Test the gateway connection](../../iot-edge/how-to-connect-downstream-device.md#test-the-gateway-connection).

To run the thermostat simulator on the `leafdevice` virtual machine:
+1. Use SSH to connect to and sign in to your `leafdevice` virtual machine.
+ 1. Download the Python sample to your home directory:

    ```bash
iot-central Howto Connect Eflow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-connect-eflow.md
To add the telemetry definitions to the device template:
The **management** interface now includes the **machine**, **ambient**, and **timeCreated** telemetry types:

### Add views to template
To enable an operator to view the telemetry from the device, define a view in th
1. Select **Save** to save the **View IoT Edge device telemetry** view.

### Publish the template
Before you can add a device that uses the **Environmental Sensor Edge Device** t
Navigate to the **Environmental Sensor Edge Device** template and select **Publish**. On the **Publish this device template to the application** panel, select **Publish** to publish the template:

## Add an IoT Edge device
Before you can connect a device to IoT Central, you must register the device in
You now have a new device with the status **Registered**:

### Get the device credentials
You've now finished configuring your IoT Central application to enable an IoT Ed
1. Use the **ID scope**, **Device ID**, and the **Primary Key** you made a note of previously.

    ```powershell
- Provision-EflowVm -provisioningType DpsSymmetricKey -ΓÇïscopeId <ID_SCOPE_HERE> -registrationId <DEVCIE_ID_HERE> -symmKey <PRIMARY_KEY_HERE>
+ Provision-EflowVm -provisioningType DpsSymmetricKey -scopeId <ID_SCOPE_HERE> -registrationId <DEVICE_ID_HERE> -symmKey <PRIMARY_KEY_HERE>
    ```

To learn about other ways you can deploy and provision an EFLOW device, see [Install and provision Azure IoT Edge for Linux on a Windows device](../../iot-edge/how-to-install-iot-edge-on-windows.md).
iot-central Howto Connect Rigado Cascade 500 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-connect-rigado-cascade-500.md
This article describes how you can connect a Rigado Cascade 500 gateway device t
## What is Cascade 500?
-Cascade 500 IoT gateway is a hardware offering from Rigado that is included as part of their Cascade Edge-as-a-Service solution. It provides commercial IoT project and product teams with flexible edge computing power, a robust containerized application environment, and a wide variety of wireless device connectivity options, including Bluetooth 5, LTE, & Wi-Fi.
+Cascade 500 IoT gateway is a hardware offering from Rigado that's part of their Cascade Edge-as-a-Service solution. It provides commercial IoT project and product teams with flexible edge computing power, a robust containerized application environment, and a wide variety of wireless device connectivity options such as Bluetooth 5, LTE, and Wi-Fi.
-Cascade 500 is certified for Azure IoT Plug and Play and allows you to easily onboard the device into your end to end solutions. The Cascade gateway allows you to wirelessly connect to a variety of condition monitoring sensors that are in proximity to the gateway device. These sensors can be onboarded into IoT Central via the gateway device.
+Cascade 500 is certified for Azure IoT Plug and Play and enables you to easily onboard the device into your end-to-end solutions. The Cascade gateway lets you wirelessly connect to various condition monitoring sensors that are in close proximity to the gateway device. You can use the gateway device to onboard these sensors into IoT Central.
## Prerequisites
To complete the steps in this how-to guide, you need:
## Add a device template
-In order to onboard a Cascade 500 gateway device into your Azure IoT Central application instance, you will need to configure a corresponding device template within your application.
+To onboard a Cascade 500 gateway device into your Azure IoT Central application instance, you need to configure a corresponding device template within your application.
-To add a Cascade 500 device template:
+To add a Cascade 500 device template:
+
+1. Navigate to the **Device Templates** tab in the left pane and select **+ New**:
+
+ ![Create new device template](./media/howto-connect-rigado-cascade-500/device-template-new.png)
+
+1. The page gives you an option to **Create a custom template** or **Use a preconfigured device template**.
-1. Navigate to the ***Device Templates*** tab in the left pane, select **+ New**:
-![Create new device template](./media/howto-connect-rigado-cascade-500/device-template-new.png)
-1. The page gives you an option to ***Create a custom template*** or ***Use a preconfigured device template***
1. Select the C500 device template from the list of preconfigured device templates as shown below:
-![Select C500 device template](./media/howto-connect-rigado-cascade-500/device-template-preconfigured.png)
-1. Select ***Next: Customize*** to continue to the next step.
-1. On the next screen, select ***Create*** to onboard the C500 device template into your IoT Central application.
+
+ ![Select C500 device template](./media/howto-connect-rigado-cascade-500/device-template-preconfigured.png)
+
+1. Select **Next: Customize** to continue to the next step.
+
+1. On the next screen, select **Create** to onboard the C500 device template into your IoT Central application.
## Retrieve application connection details
-You will now need to retrieve the **Scope ID** and **Primary key** for your Azure IoT Central application in order to connect the Cascade 500 device.
+To connect the Cascade 500 device to your IoT Central application, you need to retrieve the **ID Scope** and **Primary key** for your application.
+
+1. Navigate to **Administration** in the left pane and select **Device connection**.
+
+1. Make a note of the **ID Scope** for your IoT Central application:
+
+ ![App ID Scope](./media/howto-connect-rigado-cascade-500/app-scope-id.png)
+
+1. Now select **View Keys** and make a note of the **Primary key**:
-1. Navigate to **Administration** in the left pane and click on **Device connection**.
-2. Make a note of the **Scope ID** for your IoT Central application.
-![App Scope ID](./media/howto-connect-rigado-cascade-500/app-scope-id.png)
-3. Now click on **View Keys** and make a note of the **Primary key**
-![Primary Key](./media/howto-connect-rigado-cascade-500/primary-key-sas.png)
+ ![Primary Key](./media/howto-connect-rigado-cascade-500/primary-key-sas.png)
-## Contact Rigado to connect the gateway
+## Contact Rigado to connect the gateway
-In order to connect the Cascade 500 device to your IoT Central application, you will need to contact Rigado and provide them with the application connection details from the above steps.
+To connect the Cascade 500 device to your IoT Central application, you need to contact Rigado and provide them with the application connection details from the previous steps.
-Once the device is connected to the internet, Rigado will be able to push down a configuration update down to the Cascade 500 gateway device through a secure channel.
+When the device connects to the internet, Rigado can push down a configuration update to the Cascade 500 gateway device through a secure channel.
-This update will apply the IoT Central connection details on the Cascade 500 device and it will appear in your devices list.
+This update applies the IoT Central connection details to the Cascade 500 device, which then appears in your devices list:
![Devices list](./media/howto-connect-rigado-cascade-500/devices-list-c500.png)
-You are now ready to use your C500 device in your IoT Central application!
+You're now ready to use your C500 device in your IoT Central application.
## Next steps
iot-central Howto Connect Ruuvi https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-connect-ruuvi.md
Last updated 08/20/2021
# Connect a RuuviTag sensor to your Azure IoT Central application
-This article describes how you can connect a RuuviTag sensor to your Microsoft Azure IoT Central application.
+A RuuviTag is an advanced open-source sensor beacon platform designed to fulfill the needs of business customers, developers, makers, students, and hobbyists. The device is set up to work as soon as you take it out of its box and is ready for you to deploy it where you need it. It's a Bluetooth Low Energy (BLE) beacon with a built-in environment sensor and accelerometer.
-What is a Ruuvi tag?
+A RuuviTag communicates over BLE and requires a gateway device to talk to Azure IoT Central. Make sure you have a gateway device, such as the Rigado Cascade 500, set up to enable a RuuviTag to connect to IoT Central. To learn more, see [Connect a Rigado Cascade 500 gateway device to your Azure IoT Central application](howto-connect-rigado-cascade-500.md).
-RuuviTag is an advanced open-source sensor beacon platform designed to fulfill the needs of business customers, developers, makers, students, and hobbyists. The device is set up to work as soon as you take it out of its box and is ready for you to deploy it where you need it. It's a Bluetooth LE beacon with an environment sensor and accelerometer built in.
-
-RuuviTag communicates over BLE (Bluetooth Low Energy) and requires a gateway device to talk to Azure IoT Central. Make sure you have a gateway device, such as the Rigado Cascade 500, setup to enable a RuuviTag to connect to IoT Central.
-
-Please follow the [instructions here](./howto-connect-rigado-cascade-500.md) if you'd like to set up a Rigado Cascade 500 gateway device.
+This article describes how to connect a RuuviTag sensor to your Azure IoT Central application.
## Prerequisites
To connect RuuviTag sensors, you need the following resources:
- A RuuviTag sensor. For more information, please visit [RuuviTag](https://ruuvi.com/).
-- A Rigado Cascade 500 device or another BLE gateway. For more information, please visit [Rigado](https://www.rigado.com/).
-
+- A Rigado Cascade 500 device or another BLE gateway. To learn more, visit [Rigado](https://www.rigado.com/).
## Add a RuuviTag device template
To onboard a RuuviTag sensor into your Azure IoT Central application instance, y
To add a RuuviTag device template:
-1. Navigate to the ***Device Templates*** tab in the left pane, select **+ New**:
+1. Navigate to the **Device Templates** tab in the left pane and select **+ New**:
+ ![Create new device template](./media/howto-connect-ruuvi/device-template-new.png)
- The page gives you an option to ***Create a custom template*** or ***Use a preconfigured device template***
-1. Select the RuuviTag Multisensor device template from the list of preconfigured device templates as shown below:
+
+ The page gives you an option to **Create a custom template** or **Use a preconfigured device template**.
+
+1. Select the RuuviTag Multisensor device template from the list of preconfigured device templates:
+ ![Select RuuviTag device template](./media/howto-connect-ruuvi/device-template-pre-configured.png)
-1. Select ***Next: Customize*** to continue to the next step.
-1. On the next screen, select ***Create*** to onboard the C500 device template into your IoT Central application.
+
+1. Select **Next: Customize** to continue to the next step.
+
+1. On the next screen, select **Create** to onboard the RuuviTag Multisensor device template into your IoT Central application.
## Connect a RuuviTag sensor
-As mentioned previously, to connect the RuuviTag with your IoT Central application, you need to set up a gateway device. The steps below assume that you've set up a Rigado Cascade 500 gateway device.
+To connect the RuuviTag with your IoT Central application, you need to set up a gateway device. The following steps assume that you've set up a Rigado Cascade 500 gateway device:
+
+1. Power on your Rigado Cascade 500 device and connect it to your wired or wireless network.
+
+1. Pop the cover off of the RuuviTag and pull the plastic tab to connect the battery.
+
+1. Place the RuuviTag close to the Rigado Cascade 500 gateway that's already configured in your IoT Central application.
+
+1. In a few seconds, your RuuviTag appears in the list of devices within IoT Central:
-1. Power on your Rigado Cascade 500 device and connect it to your network connection (via Ethernet or wireless)
-1. Pop the cover off of the RuuviTag and pull the plastic tab to secure the connection with the battery.
-1. Place the RuuviTag close to a Rigado Cascade 500 gateway that's already configured in your IoT Central application.
-1. In just a few seconds, your RuuviTag should appear in your list of devices within IoT Central.
![RuuviTag Device List](./media/howto-connect-ruuvi/ruuvi-device-list.png)
-You can now use this RuuviTag within your IoT Central application.
+You can now use this RuuviTag device within your IoT Central application.
## Create a simulated RuuviTag
If you don't have a physical RuuviTag device, you can create a simulated RuuviTa
To create a simulated RuuviTag:

1. Select **Devices > RuuviTag**.
+
1. Select **+ New**.
+
1. Specify a unique **Device ID** and a friendly **Device name**.
+
1. Enable the **Simulated** setting.
+
1. Select **Create**.

## Next Steps
iot-central Howto Create Organizations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-create-organizations.md
Instead, you can use the CSV import feature to bulk register devices with your a
### Gateways
-You assign gateway and leaf devices to organizations. You don't have to assign a gateway and its associated leaf devices to the same organization. If you assign them to different organizations, it's possible that a user can see the gateway but not the leaf devices, or the leaf devices but not the gateway.
+You assign gateway and downstream devices to organizations. You don't have to assign a gateway and its associated downstream devices to the same organization. If you assign them to different organizations, it's possible that a user can see the gateway but not the downstream devices, or the downstream devices but not the gateway.
## Roles
iot-central Howto Manage Devices In Bulk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-manage-devices-in-bulk.md
You can use Azure IoT Central to manage your connected devices at scale through
## Create and run a job
-The following example shows you how to create and run a job to set the light threshold for a group of logistic gateway devices. You use the job wizard to create and run jobs. You can save a job to run later.
+The following example shows you how to create and run a job to set the light threshold for a group of devices. You use the job wizard to create and run jobs. You can save a job to run later.
1. On the left pane, select **Jobs**.
iot-central Howto Transform Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-transform-data.md
To build the custom module in the [Azure Cloud Shell](https://shell.azure.com/):
### Set up an IoT Edge device
-This scenario uses an IoT Edge gateway device to transform the data from any downstream devices. This section describes how to create IoT Central device templates for the gateway and downstream devices in your IoT Central application. IoT Edge devices use a deployment manifest to configure their modules.
+This scenario uses an IoT Edge gateway device to transform the data from any downstream devices. This section describes how to create an IoT Central device template for the gateway device in your IoT Central application. IoT Edge devices use a deployment manifest to configure their modules.
-To create a device template for the downstream device, this scenario uses a simple thermostat device model:
-
-1. Download the [device model for the thermostat](https://raw.githubusercontent.com/Azure/iot-plugandplay-models/main/dtmi/com/example/thermostat-2.json) device to your local machine.
-
-1. Sign in to your IoT Central application and navigate to the **Device templates** page.
-
-1. Select **+ New**, select **IoT Device**, and select **Next: Customize**.
-
-1. Enter *Thermostat* as the template name and select **Next: Review**. Then select **Create**.
-
-1. Select **Import a model** and import the *thermostat-2.json* file you downloaded previously.
-
-1. Select **Publish** to publish the new device template.
+In this example, the downstream device doesn't need a device template. The downstream device is registered in IoT Central so that you can generate the credentials it needs to connect to the IoT Edge device. Because the IoT Edge module transforms the data, all the downstream device telemetry arrives in IoT Central as if the IoT Edge device had sent it.
To create a device template for the IoT Edge gateway device:
To create a device template for the IoT Edge gateway device:
1. Select **+ New**, select **Azure IoT Edge**, and then select **Next: Customize**.
-1. Enter *IoT Edge gateway device* as the device template name. Select **This is a gateway device**. Select **Browse** to upload the *moduledeployment.json* deployment manifest file you edited previously.
+1. Enter *IoT Edge gateway device* as the device template name. Don't select **This is a gateway device**. Select **Browse** to upload the *moduledeployment.json* deployment manifest file you edited previously.
1. When the deployment manifest is validated, select **Next: Review**, then select **Create**.
-1. Under **Model**, select **Relationships**. Select **+ Add relationship**. Enter *Downstream device* as the display name, and select **Thermostat** as the target. Select **Save**.
+The deployment manifest doesn't specify the telemetry the module sends. To add the telemetry definitions to the device template:
-1. Select **Publish** to publish the device template.
+1. Select **Module transformmodule** in the **Modules** section of the **IoT Edge gateway device** template.
+
+1. Select **Add capability** and use the information in the following tables to add a new telemetry type:
+
+ | Setting | Value |
+ | | |
+ | Display name | Device |
+ | Name | device |
+ | Capability type | Telemetry |
+ | Semantic type | None |
+ | Schema | Object |
+
+ Object definition:
+
+ | Display name | Name | Schema |
+ | | -- | |
+ | Device ID | deviceId | String |
+
+ Save your changes.
-You now have two device templates in your IoT Central application. The **IoT Edge gateway device** template, and the **Thermostat** template as the downstream device.
+1. Select **Add capability** and use the information in the following tables to add a new telemetry type:
+
+ | Setting | Value |
+ | | |
+ | Display name | Measurements |
+ | Name | measurements |
+ | Capability type | Telemetry |
+ | Semantic type | None |
+ | Schema | Object |
+
+ Object definition:
+
+ | Display name | Name | Schema |
+ | | -- | |
+ | Temperature | temperature | Double |
+ | Pressure | pressure | Double |
+ | Humidity | humidity | Double |
+ | Scale | scale | String |
+
+ Save your changes.
+
+1. Select **Publish** to publish the device template.
To register a gateway device in IoT Central:
To register a downstream device in IoT Central:
1. In your IoT Central application, navigate to the **Devices** page.
-1. Select **Thermostat** and select **Create a device**. Enter *Thermostat* as the device name, enter *downstream-01* as the device ID, make sure **Thermostat** is selected as the device template. Select **Create**.
+1. Don't select a device template. Select **+ New**. Enter *Downstream 01* as the device name, enter *downstream-01* as the device ID, make sure that the device template is **Unassigned**. Select **Create**.
-1. In the list of devices, select the **Thermostat** and then select **Attach to Gateway**. Select the **IoT Edge gateway device** template and the **IoT Edge gateway device** instance. Select **Attach**.
+1. In the list of devices, select **Downstream 01**, and then select **Connect**.
-1. In the list of devices, click on the **Thermostat**, and then select **Connect**.
-
-1. Make a note of the **ID scope**, **Device ID**, and **Primary key** values for the **Thermostat** device. You use them later.
+1. Make a note of the **ID scope**, **Device ID**, and **Primary key** values for the **Downstream 01** device. You use them later.
### Deploy the gateway and downstream devices
For convenience, this article uses Azure virtual machines to run the gateway and
| Authentication Type | Password |
| Admin Password Or Key | Your choice of password for the **AzureUser** account on both virtual machines. |
-<a href="https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2Fiot-central-docs-samples%2Fmaster%2Ftransparent-gateway%2FDeployGatewayVMs.json" target="_blank">
- <img src="https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/1-CONTRIBUTION-GUIDE/images/deploytoazure.png" alt="Deploy to Azure button" />
-</a>
+[![Deploy to Azure Button](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2Fiot-central-docs-samples%2Fmaster%2Ftransparent-gateway%2FDeployGatewayVMs.json)
Select **Review + Create**, and then **Create**. It takes a couple of minutes to create the virtual machines in the **ingress-scenario** resource group.
To connect a downstream device to the IoT Edge gateway device:
Sent telemetry for device downstream-01
```
+For simplicity, the code for the downstream device provisions the device in IoT Central. Typically, downstream devices connect through a gateway because they can't connect to the internet and so can't connect to the Device Provisioning Service endpoint. To learn more, see [How to connect devices through an IoT Edge transparent gateway](how-to-connect-iot-edge-transparent-gateway.md).
+### Verify
+
+To verify the scenario is running, navigate to your **IoT Edge gateway device** in IoT Central:
To verify the scenario is running, navigate to your **IoT Edge gateway device**
:::image type="content" source="media/howto-transform-data/transformed-data.png" alt-text="Screenshot that shows transformed data on devices page.":::

- Select **Modules**. Verify that the three IoT Edge modules **$edgeAgent**, **$edgeHub** and **transformmodule** are running.
-- Select the **Downstream Devices** and verify that the downstream device is provisioned.
-- Select **Raw data**. The telemetry data in the **Unmodeled data** column looks like:
+- Select **Raw data**. The telemetry data in the **Device** column looks like:
+
+ ```json
+ {"deviceId":"downstream-01"}
+ ```
+
+ The telemetry data in the **Measurements** column looks like:
```json
- {"device":{"deviceId":"downstream-01"},"measurements":{"temperature":85.21208,"pressure":59.97321,"humidity":77.718124,"scale":"farenheit"}}
+ {"temperature":85.21208,"pressure":59.97321,"humidity":77.718124,"scale":"farenheit"}
```
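The split shown above can be sketched as a tiny transformation helper. This is an illustrative sketch only, not the actual module code; the input shape matches the combined message the downstream device sends, and the function name is hypothetical:

```python
def split_downstream_message(message: dict) -> tuple[dict, dict]:
    """Split one combined downstream message into the two telemetry
    objects ("Device" and "Measurements") defined in the device
    template, so each arrives as its own telemetry field."""
    return message["device"], message["measurements"]


# Sample input, matching the combined message shown in the article.
raw = {
    "device": {"deviceId": "downstream-01"},
    "measurements": {
        "temperature": 85.21208,
        "pressure": 59.97321,
        "humidity": 77.718124,
        "scale": "farenheit",
    },
}

device, measurements = split_downstream_message(raw)
print(device)  # {'deviceId': 'downstream-01'}
```

The two returned objects correspond to the **Device** and **Measurements** columns on the **Raw data** page.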
-Because the IoT Edge device is transforming the data from the downstream device, the telemetry is associated with the gateway device in IoT Central. To visualize the telemetry, create a new version of the **IoT Edge gateway device** template with definitions for the telemetry types.
+Because the IoT Edge device is transforming the data from the downstream device, the telemetry is associated with the gateway device in IoT Central. To visualize the transformed telemetry, create a view in the **IoT Edge gateway device** template and republish it.
## Data transformation at egress
iot-central Overview Iot Central Developer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/overview-iot-central-developer.md
An IoT device is a standalone device that connects directly to IoT Central. An IoT devi
### IoT Edge device
-An IoT Edge device connects directly to IoT Central. An IoT Edge device can send its own telemetry, report its properties, and respond to writable property updates and commands. IoT Edge modules can process data locally on the IoT Edge device. An IoT Edge device can also act as an intermediary for other devices known as leaf devices. Scenarios that use IoT Edge devices include:
+An IoT Edge device connects directly to IoT Central. An IoT Edge device can send its own telemetry, report its properties, and respond to writable property updates and commands. IoT Edge modules can process data locally on the IoT Edge device. An IoT Edge device can also act as an intermediary for other devices known as downstream devices. Scenarios that use IoT Edge devices include:
- Aggregate or filter telemetry before it's sent to IoT Central. This approach can help to reduce the costs of sending data to IoT Central.
-- Enable devices that can't connect directly to IoT Central to connect through the IoT Edge device. For example, a leaf device might use bluetooth to connect to the IoT Edge device, which then connects over the internet to IoT Central.
-- Control leaf devices locally to avoid the latency associated with connecting to IoT Central over the internet.
+- Enable devices that can't connect directly to IoT Central to connect through the IoT Edge device. For example, a downstream device might use bluetooth to connect to the IoT Edge device, which then connects over the internet to IoT Central.
+- Control downstream devices locally to avoid the latency associated with connecting to IoT Central over the internet.
-IoT Central only sees the IoT Edge device, not the leaf devices connected to the IoT Edge device.
+IoT Central only sees the IoT Edge device, not the downstream devices connected to the IoT Edge device.
To learn more, see [Add an Azure IoT Edge device to your Azure IoT Central application](./tutorial-add-edge-as-leaf-device.md).

### Gateways
-A gateway device manages one or more downstream devices that connect to your IoT Central application. You use IoT Central to configure the relationships between the downstream devices and the gateway device. Both IoT devices and IoT Edge devices can act as gateways. To learn more, see [Define a new IoT gateway device type in your Azure IoT Central application](./tutorial-define-gateway-device-type.md).
+A gateway device manages one or more downstream devices that connect to your IoT Central application. A gateway device can process the telemetry from the downstream devices before it's forwarded to your IoT Central application. Both IoT devices and IoT Edge devices can act as gateways. To learn more, see [Define a new IoT gateway device type in your Azure IoT Central application](./tutorial-define-gateway-device-type.md) and [How to connect devices through an IoT Edge transparent gateway](how-to-connect-iot-edge-transparent-gateway.md).
## Connect a device
iot-central Tutorial Define Gateway Device Type https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/tutorial-define-gateway-device-type.md
# Tutorial - Define a new IoT gateway device type in your Azure IoT Central application
-This tutorial shows you how to use a gateway device template to define a gateway device in your IoT Central application. You then configure several downstream devices that connect to your IoT Central application through the gateway device.
+This tutorial shows you how to use a gateway device template to define a gateway device in your IoT Central application. You then configure several downstream devices that connect to your IoT Central application through the gateway device.
In this tutorial, you create a **Smart Building** gateway device template. A **Smart Building** gateway device has relationships with other downstream devices.
To create a device template for an **S1 Sensor** device:
1. In the left pane, select **Device Templates**. Then select **+ New** to start adding the template.
-1. Scroll down until you can see the tile for the **Minew S1** device. Select the tile and then select **Next: Customize**.
+1. Scroll down until you can see the tile for the **Minew S1** device. Select the tile and then select **Next: Review**.
-1. On the **Review** page, select **Create** to add the device template to your application.
+1. On the **Review** page, select **Create** to add the device template to your application.
To create a device template for an **RS40 Occupancy Sensor** device: 1. In the left pane, select **Device Templates**. Then select **+ New** to start adding the template.
-1. Scroll down until you can see the tile for the ***RS40 Occupancy Sensor** device. Select the tile and then select **Next: Customize**.
+1. Scroll down until you can see the tile for the **Rigado RS40 Occupancy Sensor** device. Select the tile and then select **Next: Review**.
-1. On the **Review** page, select **Create** to add the device template to your application.
+1. On the **Review** page, select **Create** to add the device template to your application.
You now have device templates for the two downstream device types:
To add a new gateway device template to your application:
1. Enter **Smart Building gateway device** as the template name and then select **Next: Review**.
-1. On the **Review** page, select **Create**.
--
+1. On the **Review** page, select **Create**.
1. On the **Create a model** page, select the **Custom model** tile.
To add a new gateway device template to your application:
1. Enter **Send Data** as the display name, and then select **Property** as the capability type.
-1. Select **+ Add capability** to add another capability. Enter **Boolean Telemetry** as the display name, select **Telemetry** as the capability type, and then select **Boolean** as schema.
-
-1. Select **Save**.
+1. Select **Boolean** as the schema type and then select **Save**.
### Add relationships
Both your simulated downstream devices are now connected to your simulated gatew
## Connect real downstream devices
-In the [Create and connect a client application to your Azure IoT Central application](tutorial-connect-device.md) tutorial, the sample code shows how to include the model ID from the device template in the provisioning payload the device sends. The model ID lets IoT Central associate the device with the correct device template. For example:
-
-```python
-async def provision_device(provisioning_host, id_scope, registration_id, symmetric_key, model_id):
- provisioning_device_client = ProvisioningDeviceClient.create_from_symmetric_key(
- provisioning_host=provisioning_host,
- registration_id=registration_id,
- id_scope=id_scope,
- symmetric_key=symmetric_key,
- )
-
- provisioning_device_client.provisioning_payload = {"modelId": model_id}
- return await provisioning_device_client.register()
-```
+In the [Create and connect a client application to your Azure IoT Central application](tutorial-connect-device.md) tutorial, the sample code shows how to include the model ID from the device template in the provisioning payload the device sends.
When you connect a downstream device, you can modify the provisioning payload to include the ID of the gateway device. The model ID lets IoT Central associate the device with the correct downstream device template. The gateway ID lets IoT Central establish the relationship between the downstream device and its gateway. In this case, the provisioning payload the device sends looks like the following JSON:
When you connect a downstream device, you can modify the provisioning payload to
}
```
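The payload can be assembled in code like this. The ID values below are hypothetical placeholders; the `modelId`/`iotcGateway` shape is the one used by the SDK samples in this section:

```python
import json

# Hypothetical identifiers, for illustration only.
model_id = "dtmi:com:example:Thermostat;1"
gateway_id = "gateway-01"

# The model ID selects the downstream device template; the
# iotcGateway object associates the device with its gateway.
payload = {"modelId": model_id, "iotcGateway": {"iotcGatewayId": gateway_id}}

print(json.dumps(payload))
```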
+A gateway can register and provision a downstream device, and associate the downstream device with the gateway as follows:
+
+# [JavaScript](#tab/javascript)
+
+```javascript
+var crypto = require('crypto');
++
+var ProvisioningTransport = require('azure-iot-provisioning-device-mqtt').Mqtt;
+var SymmetricKeySecurityClient = require('azure-iot-security-symmetric-key').SymmetricKeySecurityClient;
+var ProvisioningDeviceClient = require('azure-iot-provisioning-device').ProvisioningDeviceClient;
+
+var provisioningHost = "global.azure-devices-provisioning.net";
+var idScope = "<The ID scope from your SAS group enrollment in IoT Central>";
+var groupSymmetricKey = "<The primary key from the SAS group enrollment>";
+var registrationId = "<The device ID for the downstream device you're creating>";
+var modelId = "<The model your downstream device should use>";
+var gatewayId = "<The device ID of your gateway device>";
+
+// Calculate the device key from the group enrollment key
+function computeDerivedSymmetricKey(deviceId, masterKey) {
+ return crypto.createHmac('SHA256', Buffer.from(masterKey, 'base64'))
+ .update(deviceId, 'utf8')
+ .digest('base64');
+}
+
+var symmetricKey = computeDerivedSymmetricKey(registrationId, groupSymmetricKey);
+
+var provisioningSecurityClient = new SymmetricKeySecurityClient(registrationId, symmetricKey);
+
+var provisioningClient = ProvisioningDeviceClient.create(provisioningHost, idScope, new ProvisioningTransport(), provisioningSecurityClient);
+
+// Use the DPS payload to:
+// - specify the device capability model to use.
+// - associate the device with a gateway.
+var provisioningPayload = {modelId: modelId, iotcGateway: { iotcGatewayId: gatewayId}}
+
+provisioningClient.setProvisioningPayload(provisioningPayload);
+
+provisioningClient.register(function(err, result) {
+ if (err) {
+ console.log("Error registering device: " + err);
+ } else {
+ console.log('The registration status is: ' + result.status)
+ }
+});
+```
+
+# [Python](#tab/python)
+
+```python
+from azure.iot.device import ProvisioningDeviceClient
+import os
+import base64
+import hmac
+import hashlib
+
+provisioning_host = "global.azure-devices-provisioning.net"
+
+id_scope = "<The ID scope from your SAS group enrollment in IoT Central>"
+group_symmetric_key = "<The primary key from the SAS group enrollment>"
+registration_id = "<The device ID for the downstream device you're creating>"
+model_id = "<The model your downstream device should use>"
+gateway_id = "<The device ID of your gateway device>"
+
+# Calculate the device key from the group enrollment key
+def compute_device_key (device_id, group_key):
+ message = device_id.encode("utf-8")
+ signing_key = base64.b64decode(group_key.encode("utf-8"))
+ signed_hmac = hmac.HMAC(signing_key, message, hashlib.sha256)
+ device_key_encoded = base64.b64encode(signed_hmac.digest())
+ return device_key_encoded.decode("utf-8")
+
+provisioning_device_client = ProvisioningDeviceClient.create_from_symmetric_key(
+ provisioning_host=provisioning_host,
+ registration_id=registration_id,
+ id_scope=id_scope,
+ symmetric_key=compute_device_key(registration_id, group_symmetric_key)
+)
+
+# Use the DPS payload to:
+# - specify the device capability model to use.
+# - associate the device with a gateway.
+provisioning_device_client.provisioning_payload = {"modelId": model_id, "iotcGateway":{"iotcGatewayId": gateway_id}}
+
+registration_result = provisioning_device_client.register()
+
+print("The registration status is:")
+print(registration_result.status)
+```
+++

## Clean up resources

[!INCLUDE [iot-central-clean-up-resources](../../../includes/iot-central-clean-up-resources.md)]
iot-central Tutorial Smart Meter App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/energy/tutorial-smart-meter-app.md
-# Tutorial: Deploy and walk through the smart meter monitoring app template
+# Tutorial: Deploy and walk through the smart meter monitoring app template
Use the IoT Central *smart meter monitoring* application template and the guidance in this article to develop an end-to-end smart meter monitoring solution.
This architecture consists of the following components. Some solutions may not r
### Smart meters and connectivity
-A smart meter is one of the most important devices among all the energy assets. It records and communicates energy consumption data to utilities for monitoring and other use cases, such as billing and demand response. Based on the meter type, it can connect to IoT Central either using gateways or other intermediate devices or systems, such edge devices and head-end systems. Build IoT Central device bridge to connect devices, which can't be connected directly. The IoT Central device bridge is an open-source solution and you can find the complete details [here](../core/howto-build-iotc-device-bridge.md).
+A smart meter is one of the most important devices among all the energy assets. It records and communicates energy consumption data to utilities for monitoring and other use cases, such as billing and demand response. Typically, a meter uses a gateway or bridge to connect to an IoT Central application. To learn more about bridges, see [Use the IoT Central device bridge to connect other IoT clouds to IoT Central](../core/howto-build-iotc-device-bridge.md).
### IoT Central platform
iot-central Tutorial Solar Panel App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/energy/tutorial-solar-panel-app.md
This architecture consists of the following components. Some applications may no
### Solar panels and connectivity
-Solar panels are one of the significant sources of renewable energy. Depending on the solar panel type and set up, you can connect it either using gateways or other intermediate devices and proprietary systems. You might need to build IoT Central device bridge to connect devices, which can't be connected directly. The IoT Central device bridge is an open-source solution and you can find the complete details [here](../core/howto-build-iotc-device-bridge.md).
+Solar panels are one of the significant sources of renewable energy. Typically, a solar panel uses a gateway to connect to an IoT Central application. You might need to build the IoT Central device bridge to connect devices that can't connect directly. The IoT Central device bridge is an open-source solution; you can find the complete details [here](../core/howto-build-iotc-device-bridge.md).
### IoT Central platform
iot-central Tutorial Connected Waste Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/government/tutorial-connected-waste-management.md
Use the IoT Central *connected waste management* application template and the gu
### Devices and connectivity
-Devices such as waste bins that are used in open environments may connect through low-power wide area networks (LPWAN) or through a third-party network operator. For these types of devices, use the [Azure IoT Central Device Bridge](../core/howto-build-iotc-device-bridge.md) to send your device data to your IoT application in Azure IoT Central. You can also use device gateways that are IP capable and that can connect directly to IoT Central.
+Devices such as waste bins that are used in open environments may connect through low-power wide area networks (LPWAN) or through a third-party network operator. For these types of devices, use the [Azure IoT Central Device Bridge](../core/howto-build-iotc-device-bridge.md) to send your device data to your IoT Central application. You can also use an IP capable device gateway that connects directly to your IoT Central application.
### IoT Central
Here's how:
The connected waste bin device template comes with predefined views. Explore the views, and update them if you want to. The views define how operators see the device data and input cloud properties.

### Publish
Here's how:
1. Select **Change** to choose an image to upload for the **Browser icon** (an image that will appear on browser tabs).

1. You can also replace the default browser colors by adding HTML hexadecimal color codes. Use the **Header** and **Accent** fields for this purpose.
- :::image type="content" source="media/tutorial-connectedwastemanagement/connected-waste-management-customize-your-application.png" alt-text="Screenshot of Connected Wast Management Template Customize your application.":::
+ :::image type="content" source="media/tutorial-connectedwastemanagement/connected-waste-management-customize-your-application.png" alt-text="Screenshot of Connected Waste Management Template Customize your application.":::
1. You can also change application images. Select **Administration** > **Application settings** > **Select image** to choose an image to upload as the application image.
iot-central Tutorial Water Consumption Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/government/tutorial-water-consumption-monitoring.md
Use the IoT Central *water consumption monitoring* application template and the
Water management solutions use smart water devices such as flow meters, water quality monitors, smart valves, leak detectors.
-Devices in smart water solutions may connect through low-power wide area networks (LPWAN) or through a third-party network operator. For these types of devices, use the [Azure IoT Central Device Bridge](../core/howto-build-iotc-device-bridge.md) to send your device data to your IoT application in Azure IoT Central. You can also use device gateways that are IP capable and that can connect directly to IoT Central.
+Devices in smart water solutions may connect through low-power wide area networks (LPWAN) or through a third-party network operator. For these types of devices, use the [Azure IoT Central Device Bridge](../core/howto-build-iotc-device-bridge.md) to send your device data to your IoT application in Azure IoT Central. You can also use an IP capable device gateway that connects directly to your IoT Central application.
### IoT Central
iot-central Tutorial Water Quality Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/government/tutorial-water-quality-monitoring.md
Use the IoT Central *water quality monitoring* application template and the guid
Water management solutions use smart water devices such as flow meters, water quality monitors, smart valves, leak detectors.
-Devices in smart water solutions may connect through low-power wide area networks (LPWAN) or through a third-party network operator. For these types of devices, use the [Azure IoT Central Device Bridge](../core/howto-build-iotc-device-bridge.md) to send your device data to your IoT application in Azure IoT Central. You can also use device gateways that are IP capable and that can connect directly to IoT Central.
+Devices in smart water solutions may connect through low-power wide area networks (LPWAN) or through a third-party network operator. For these types of devices, use the [Azure IoT Central Device Bridge](../core/howto-build-iotc-device-bridge.md) to send your device data to your IoT application in Azure IoT Central. You can also use an IP capable device gateway that connects directly to your IoT Central application.
### IoT Central
iot-central Architecture Connected Logistics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/retail/architecture-connected-logistics.md
- Title: Architecture IoT Connected logistics | Microsoft Docs
-description: An architecture of IoT Connected Logistics application template for IoT Central
----- Previously updated : 12/28/2021---
-# Architecture of IoT Central connected logistics application template
---
-Partners & customer can use the app template & following guidance to develop end to end **connected logistics solutions**.
-
-> [!div class="mx-imgBorder"]
-> ![connected logistics dashboard](./media/concept-connected-logistics-architecture/connected-logistics-architecture.png)
-
-1. Set of IoT tags sending telemetry data to a gateway device
-2. Gateway devices sending telemetry and aggregated insights to IoT Central
-3. Data is routed to the desired Azure service for manipulation
-4. Azure services like ASA or Azure Functions can be used to reformat data streams and send to the desired storage accounts
-5. Various business workflows can be powered by end-user business applications
-
-## Details
-Following section outlines each part of the conceptual architecture
-Telemetry ingestion from IoT Tags & Gateways
-
-## IoT tags
-IoT tags provide physical, ambient, and environmental sensor capabilities such as Temperature, Humidity, Shock, Tilt &Light. IoT tags typically connect to gateway device through Zigbee (802.15.4). Tags are less expensive sensors; so, they can be discarded at the end of a typical logistics journey to avoid challenges with reverse logistics.
-
-## Gateway
-Gateways can also act as IoT tags with their ambient sensing capabilities. The gateway enables upstream Azure IoT cloud connectivity (MQTT) using cellular, Wi-Fi channels. Bluetooth, NFC, and 802.15.4 Wireless Sensor Network (WSN) modes are used for downstream communication with IoT tags. Gateways provide end to end secure cloud connectivity, IoT tag pairing, sensor data aggregation, data retention, and the ability to configure alarm thresholds.
-
-## Device management with IoT Central
-Azure IoT Central is a solution development platform that simplifies IoT device connectivity, configuration, and management. The platform significantly reduces the burden and costs of IoT device management, operations, and related developments. Customers & partners can build an end to end enterprise solutions to achieve a digital feedback loop in logistics.
-
-## Business insights and actions using data egress
-IoT Central platform provides rich extensibility options through Continuous Data Export (CDE) and APIs. Business insights based on telemetry data processing or raw telemetry are typically exported to a preferred line-of-business application. It can be achieved using webhook, service bus, event hub, or blob storage to build, train, and deploy machine learning models & further enrich insights.
-
-## Next steps
-* Learn how to deploy [connected logistics solution template](./tutorial-iot-central-connected-logistics.md)
-* Learn more about [IoT Central retail templates](./overview-iot-central-retail.md)
-* Learn more about IoT Central refer to [IoT Central overview](../core/overview-iot-central.md)
iot-central Tutorial In Store Analytics Create App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/retail/tutorial-in-store-analytics-create-app.md
Use the IoT Central *in-store analytics* application template and the guidance i
:::image type="content" source="media/tutorial-in-store-analytics-create-app/store-analytics-architecture-frame.png" alt-text="Azure IoT Central Store Analytics.":::

-- Set of IoT sensors sending telemetry data to a gateway device
-- Gateway devices sending telemetry and aggregated insights to IoT Central
-- Continuous data export to the desired Azure service for manipulation
-- Data can be structured in the desired format and sent to a storage service
-- Business applications can query data and generate insights that power retail operations
-
-Let's take a look at key components that generally play a part in an in-store analytics solution.
+- Set of IoT sensors sending telemetry data to a gateway device.
+- Gateway devices sending telemetry and aggregated insights to IoT Central.
+- Continuous data export to the desired Azure service for manipulation.
+- Data can be structured in the desired format and sent to a storage service.
+- Business applications can query data and generate insights that power retail operations.
## Condition monitoring sensors
An IoT solution starts with a set of sensors capturing meaningful signals from w
## Gateway devices
-Many IoT sensors can feed raw signals directly to the cloud or to a gateway device located near them. The gateway device performs data aggregation at the edge before sending summary insights to an IoT Central application. The gateway devices are also responsible for relaying command and control operations to the sensor devices when applicable.
+Many IoT sensors can feed raw signals directly to the cloud or to a gateway device located near them. The gateway device performs data aggregation at the edge before sending summary insights to an IoT Central application. The gateway devices are also responsible for relaying command and control operations to the sensor devices when applicable.
## IoT Central application
-The Azure IoT Central application ingests data from different kinds of IoT sensors as well gateway devices within the retail store environment and generates a set of meaningful insights.
+The Azure IoT Central application ingests data from different kinds of IoT sensors and gateway devices within the retail store environment and generates a set of meaningful insights.
Azure IoT Central also provides a tailored experience to the store operator enabling them to remotely monitor and manage the infrastructure devices.
After you have created and customized device templates, it's time to add devices
For this tutorial, you use the following set of real and simulated devices to build the application:

-- A real Rigado C500 gateway
-- Two real RuuviTag sensors
+- A real Rigado C500 gateway.
+- Two real RuuviTag sensors.
- A simulated **Occupancy** sensor. The simulated sensor is included in the application template, so you don't need to create it.

> [!NOTE]
iot-central Tutorial Iot Central Connected Logistics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/retail/tutorial-iot-central-connected-logistics.md
Last updated 01/06/2022
# Tutorial: Deploy and walk through a connected logistics application template
+Use the application template and guidance in this article to develop an end-to-end *connected logistics solution*.
++
+1. IoT tags send telemetry data to a gateway device.
+2. Gateway devices send telemetry and aggregated insights to IoT Central.
+3. IoT Central routes data to an Azure service for manipulation.
+4. Services such as Azure Stream Analytics or Azure Functions can reformat data streams and send the data to storage accounts.
+5. End-user business applications can power business workflows.
+
+*IoT tags* provide physical, ambient, and environmental sensor capabilities such as temperature, humidity, shock, tilt, and light. IoT tags typically connect to a gateway device through Zigbee (802.15.4). Tags are less expensive sensors and can be discarded at the end of a typical logistics journey to avoid challenges with reverse logistics.
+
+*Gateways* enable upstream Azure IoT cloud connectivity using cellular or Wi-Fi channels. Bluetooth, NFC, and 802.15.4 Wireless Sensor Network modes are used for downstream communication with IoT tags. Gateways provide end-to-end secure cloud connectivity, IoT tag pairing, sensor data aggregation, data retention, and the ability to configure alarm thresholds.
+
+Azure IoT Central is a solution development platform that simplifies IoT device connectivity, configuration, and management. The platform significantly reduces the burden and costs of IoT device management, operations, and related development. You can build end-to-end enterprise solutions to achieve a digital feedback loop in logistics.
+
+The IoT Central platform provides rich extensibility options through data export and APIs. Business insights based on telemetry data processing or raw telemetry are typically exported to a preferred line-of-business application.
+ This tutorial shows you how to get started with the IoT Central *connected logistics* application template. You'll learn how to deploy and use the template. In this tutorial, you learn how to:
iot-develop Concepts Model Repository https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-develop/concepts-model-repository.md
Title: Understand concepts of the device models repository | Microsoft Docs
description: As a solution developer or an IT professional, learn about the basic concepts of the device models repository. Previously updated : 11/12/2021 Last updated : 01/20/2022
The tools used to validate the models during the PR checks can also be used to a
### Install `dmr-client` ```bash
-dotnet tool install --global Microsoft.IoT.ModelsRepository.CommandLine --version 1.0.0-beta.5
+dotnet tool install --global Microsoft.IoT.ModelsRepository.CommandLine --version 1.0.0-beta.6
``` ### Import a model to the `dtmi/` folder
iot-edge How To Install Iot Edge Ubuntuvm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-install-iot-edge-ubuntuvm.md
Previously updated : 05/27/2021 Last updated : 01/20/2022
The [Deploy to Azure Button](../azure-resource-manager/templates/deploy-to-azure
[![Deploy to Azure Button for iotedge-vm-deploy](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2Fazure%2Fiotedge-vm-deploy%2Fmaster%2FedgeDeploy.json) :::moniker-end :::moniker range="iotedge-2020-11"
- [![Deploy to Azure Button for iotedge-vm-deploy](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2Fazure%2Fiotedge-vm-deploy%2F1.2.0%2FedgeDeploy.json)
+ [![Deploy to Azure Button for iotedge-vm-deploy](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2Fazure%2Fiotedge-vm-deploy%2F1.2%2FedgeDeploy.json)
:::moniker-end 1. On the newly launched window, fill in the available form fields:
The [Deploy to Azure Button](../azure-resource-manager/templates/deploy-to-azure
```azurecli-interactive az deployment group create \ --resource-group IoTEdgeResources \
- --template-uri "https://raw.githubusercontent.com/Azure/iotedge-vm-deploy/1.2.0/edgeDeploy.json" \
+ --template-uri "https://raw.githubusercontent.com/Azure/iotedge-vm-deploy/1.2/edgeDeploy.json" \
--parameters dnsLabelPrefix='my-edge-vm1' \ --parameters adminUsername='<REPLACE_WITH_USERNAME>' \ --parameters deviceConnectionString=$(az iot hub device-identity connection-string show --device-id <REPLACE_WITH_DEVICE-NAME> --hub-name <REPLACE-WITH-HUB-NAME> -o tsv) \
The [Deploy to Azure Button](../azure-resource-manager/templates/deploy-to-azure
#Create a VM using the iotedge-vm-deploy script az deployment group create \ --resource-group IoTEdgeResources \
- --template-uri "https://raw.githubusercontent.com/Azure/iotedge-vm-deploy/1.2.0/edgeDeploy.json" \
+ --template-uri "https://raw.githubusercontent.com/Azure/iotedge-vm-deploy/1.2/edgeDeploy.json" \
--parameters dnsLabelPrefix='my-edge-vm1' \ --parameters adminUsername='<REPLACE_WITH_USERNAME>' \ --parameters deviceConnectionString=$(az iot hub device-identity connection-string show --device-id <REPLACE_WITH_DEVICE-NAME> --hub-name <REPLACE-WITH-HUB-NAME> -o tsv) \
iot-edge Quickstart Linux https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/quickstart-linux.md
Title: Quickstart create an Azure IoT Edge device on Linux | Microsoft Docs
description: In this quickstart, learn how to create an IoT Edge device on Linux and then deploy prebuilt code remotely from the Azure portal. Previously updated : 04/07/2021 Last updated : 01/21/2022
Use the following CLI command to create your IoT Edge device based on the prebui
<!-- 1.2 --> :::moniker range=">=iotedge-2020-11"
-Use the following CLI command to create your IoT Edge device based on the prebuilt [iotedge-vm-deploy](https://github.com/Azure/iotedge-vm-deploy/tree/1.2.0) template.
+Use the following CLI command to create your IoT Edge device based on the prebuilt [iotedge-vm-deploy](https://github.com/Azure/iotedge-vm-deploy/tree/1.2) template.
* For bash or Cloud Shell users, copy the following command into a text editor, replace the placeholder text with your information, then copy into your bash or Cloud Shell window: ```azurecli-interactive az deployment group create \ --resource-group IoTEdgeResources \
- --template-uri "https://raw.githubusercontent.com/Azure/iotedge-vm-deploy/1.2.0/edgeDeploy.json" \
+ --template-uri "https://raw.githubusercontent.com/Azure/iotedge-vm-deploy/1.2/edgeDeploy.json" \
--parameters dnsLabelPrefix='<REPLACE_WITH_VM_NAME>' \ --parameters adminUsername='azureUser' \ --parameters deviceConnectionString=$(az iot hub device-identity connection-string show --device-id myEdgeDevice --hub-name <REPLACE_WITH_HUB_NAME> -o tsv) \
Use the following CLI command to create your IoT Edge device based on the prebui
```azurecli az deployment group create ` --resource-group IoTEdgeResources `
- --template-uri "https://raw.githubusercontent.com/Azure/iotedge-vm-deploy/1.2.0/edgeDeploy.json" `
+ --template-uri "https://raw.githubusercontent.com/Azure/iotedge-vm-deploy/1.2/edgeDeploy.json" `
--parameters dnsLabelPrefix='<REPLACE_WITH_VM_NAME>' ` --parameters adminUsername='azureUser' ` --parameters deviceConnectionString=$(az iot hub device-identity connection-string show --device-id myEdgeDevice --hub-name <REPLACE_WITH_HUB_NAME> -o tsv) `
load-testing Tutorial Cicd Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-testing/tutorial-cicd-github-actions.md
Previously updated : 11/30/2021 Last updated : 01/21/2022 #Customer intent: As an Azure user, I want to learn how to automatically test builds for performance regressions on every pull request and/or deployment by using GitHub Actions. # Tutorial: Identify performance regressions with Azure Load Testing Preview and GitHub Actions
-This tutorial describes how to automate performance regression testing by using Azure Load Testing Preview and GitHub Actions. You'll configure a GitHub Actions continuous integration and continuous delivery (CI/CD) workflow to run a load test for a sample web application. You'll then use the test results to identify performance regressions.
+This tutorial describes how to automate performance regression testing by using Azure Load Testing Preview and GitHub Actions. You'll configure a GitHub Actions CI/CD workflow and use the [Azure Load Testing Action](https://github.com/marketplace/actions/azure-load-testing) to run a load test for a sample web application. You'll then use the test results to identify performance regressions.
If you're using Azure Pipelines for your CI/CD workflows, see the corresponding [Azure Pipelines tutorial](./tutorial-cicd-azure-pipelines.md).
jobs:
## Configure the GitHub Actions workflow to run a load test
-In this section, you'll set up a GitHub Actions workflow that triggers the load test. The sample application repository contains a workflow file *SampleApp.yaml*. The workflow first deploys the sample web application to Azure App Service, and then invokes the load test. The GitHub action uses an environment variable to pass the URL of the web application to the Apache JMeter script.
+In this section, you'll set up a GitHub Actions workflow that triggers the load test. The sample application repository contains a workflow file *SampleApp.yaml*. The workflow first deploys the sample web application to Azure App Service, and then invokes the load test by using the [Azure Load Testing Action](https://github.com/marketplace/actions/azure-load-testing). The GitHub action uses an environment variable to pass the URL of the web application to the Apache JMeter script.
Update the *SampleApp.yaml* GitHub Actions workflow file to configure the parameters for running the load test.
In this tutorial, you'll reconfigure the sample application to accept only secur
The Azure Load Testing task securely passes the repository secret from the workflow to the test engine. The secret parameter is used only while you're running the load test. Then the parameter's value is discarded from memory.
-## Configure and use the Azure Load Testing action
-
-This section describes the Azure Load Testing GitHub action. You can use this action by referencing `azure/load-testing@v1` in your workflow. The action runs on Windows, Linux, and Mac runners.
-
-You can use the following parameters to configure the GitHub action:
-
-|Parameter |Description |
-|||
-|`loadTestConfigFile` | *Required*. Path to the YAML configuration file for the load test. The path is fully qualified or relative to the default working directory. |
-|`resourceGroup` | *Required*. Name of the resource group that contains the Azure Load Testing resource. |
-|`loadTestResource` | *Required*. Name of an existing Azure Load Testing resource. |
-|`secrets` | Array of JSON objects that consist of the name and value for each secret. The name should match the secret name that's used in the Apache JMeter test script. |
-|`env` | Array of JSON objects that consist of the name and value for each environment variable. The name should match the variable name that's used in the Apache JMeter test script. |
-
-The following YAML code snippet describes how to use the action in a GitHub Actions workflow:
-
-```yaml
-- name: 'Azure Load Testing'
- uses: azure/load-testing@v1
- with:
- loadTestConfigFile: '< YAML File path>'
- loadTestResource: '<name of the load test resource>'
- resourceGroup: '<name of the resource group of your load test resource>'
- secrets: |
- [
- {
- "name": "<Name of the secret>",
- "value": "${{ secrets.MY_SECRET1 }}",
- },
- {
- "name": "<Name of the secret>",
- "value": "${{ secrets.MY_SECRET2 }}",
- }
- ]
- env: |
- [
- {
- "name": "<Name of the variable>",
- "value": "<Value of the variable>",
- },
- {
- "name": "<Name of the variable>",
- "value": "<Value of the variable>",
- }
- ]
-```
- ## Clean up resources [!INCLUDE [alt-delete-resource-group](../../includes/alt-delete-resource-group.md)]
The following YAML code snippet describes how to use the action in a GitHub Acti
You've now created a GitHub Actions workflow that uses Azure Load Testing for automatically running load tests. By using pass/fail criteria, you can set the status of the CI/CD workflow. With parameters, you can make the running of load tests configurable.
-* For more information about parameterizing load tests, see [Parameterize a load test](./how-to-parameterize-load-tests.md).
-* For more information about defining test pass/fail criteria, see [Define test criteria](./how-to-define-test-criteria.md).
+* Learn more about the [Azure Load Testing Action](https://github.com/marketplace/actions/azure-load-testing).
+* Learn how to [parameterize a load test](./how-to-parameterize-load-tests.md).
+* Learn how to [define test pass/fail criteria](./how-to-define-test-criteria.md).
machine-learning How To Manage Workspace Terraform https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-manage-workspace-terraform.md
Last updated 01/05/2022 -- # Manage Azure Machine Learning workspaces using Terraform
machine-learning How To Secure Workspace Vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-secure-workspace-vnet.md
In this article you learn how to enable the following workspaces resources in a
* Your Azure Container Registry must be Premium version. For more information on upgrading, see [Changing SKUs](../container-registry/container-registry-skus.md#changing-tiers).
-* Your Azure Container Registry must be in the same virtual network and subnet as the storage account and compute targets used for training or inference.
+* If your Azure Container Registry uses a __private endpoint__, it must be in the same _virtual network_ as the storage account and compute targets used for training or inference. If it uses a __service endpoint__, it must be in the same _virtual network_ and _subnet_ as the storage account and compute targets.
* Your Azure Machine Learning workspace must contain an [Azure Machine Learning compute cluster](how-to-create-attach-compute-cluster.md).
machine-learning Reference Automl Images Hyperparameters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/reference-automl-images-hyperparameters.md
Last updated 01/18/2022- # Hyperparameters for computer vision tasks in automated machine learning Learn which hyperparameters are available specifically for computer vision tasks in automated ML experiments.
-With support for computer vision tasks, you can control the model algorithm and sweep hyperparameters. These model algorithms and hyperparameters are passed in as the parameter space for the sweep. While many of the hyperparameters exposed are model-agnostic, there are instances where hyperparameters are task-specific or model-specific.
+With support for computer vision tasks, you can control the model algorithm and sweep hyperparameters. These model algorithms and hyperparameters are passed in as the parameter space for the sweep. While many of the hyperparameters exposed are model-agnostic, there are instances where hyperparameters are model-specific or task-specific.
## Model-specific hyperparameters
The following table summarizes hyperparmeters for image classification (multi-cl
The following hyperparameters are for object detection and instance segmentation tasks. > [!WARNING]
-> These parameters are not supported with the `yolov5` algorithm.
+> These parameters are not supported with the `yolov5` algorithm. See the [model-specific hyperparameters](#model-specific-hyperparameters) section for `yolov5` supported hyperparameters.
| Parameter name | Description | Default | | - |-|--|
The following hyperparameters are for object detection and instance segmentation
* Learn how to [Set up AutoML to train computer vision models with Python (preview)](how-to-auto-train-image-models.md).
-* [Tutorial: Train an object detection model (preview) with AutoML and Python](tutorial-auto-train-image-models.md).
+* [Tutorial: Train an object detection model (preview) with AutoML and Python](tutorial-auto-train-image-models.md).
machine-learning Reference Automl Images Schema https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/reference-automl-images-schema.md
Last updated 10/13/2021- # Data schemas to train computer vision models with automated machine learning
managed-instance-apache-cassandra Monitor Clusters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/managed-instance-apache-cassandra/monitor-clusters.md
Use the [Azure Monitor REST API](/rest/api/monitor/diagnosticsettings/createorup
} ```
+## Audit whitelist
+
+> [!NOTE]
+> This article contains references to the term *whitelist*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
+
+By default, audit logging creates a record for every login attempt and CQL query. The result can be rather overwhelming and increase overhead. You can use the audit whitelist feature in Cassandra 3.11 to set what operations *don't* create an audit record. The audit whitelist feature is enabled by default in Cassandra 3.11. To learn how to configure your whitelist, see [Role-based whitelist management](https://github.com/Ericsson/ecaudit/blob/release/c2.2/doc/role_whitelist_management.md).
+
+Examples:
+
+* To filter out all **select and modification** operations for the user **bob** from the audit log, execute the following statements:
+
+ ```
+ cassandra@cqlsh> ALTER ROLE bob WITH OPTIONS = { 'GRANT AUDIT WHITELIST FOR SELECT' : 'data' };
+ cassandra@cqlsh> ALTER ROLE bob WITH OPTIONS = { 'GRANT AUDIT WHITELIST FOR MODIFY' : 'data' };
+ ```
+
+* To filter out all **select** operations on the **decisions** table in the **design** keyspace for user **jim** from the audit log, execute the following statement:
+
+ ```
+ cassandra@cqlsh> ALTER ROLE jim WITH OPTIONS = { 'GRANT AUDIT WHITELIST FOR SELECT' : 'data/design/decisions' };
+ ```
+
+* To revoke the whitelist for user **bob** on all the user's **select** operations, execute the following statement:
+
+ ```
+ cassandra@cqlsh> ALTER ROLE bob WITH OPTIONS = { 'REVOKE AUDIT WHITELIST FOR SELECT' : 'data' };
+ ```
+
+* To view current whitelists, execute the following statement:
+
+ ```
+ cassandra@cqlsh> LIST ROLES;
+ ```
## Next steps
-* For detailed information about how to create a diagnostic setting by using the Azure portal, CLI, or PowerShell, see [create diagnostic setting to collect platform logs and metrics in Azure](../azure-monitor/essentials/diagnostic-settings.md) article.
+* For detailed information about how to create a diagnostic setting by using the Azure portal, CLI, or PowerShell, see [create diagnostic setting to collect platform logs and metrics in Azure](../azure-monitor/essentials/diagnostic-settings.md) article.
media-services Samples Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/samples-overview.md
This article contains a list of all the samples available for Media Services org
You'll find descriptions of and links to the samples you may be looking for in each of the tabs.
-## [Node.JS (Typescript)](#tab/node/)
+## [Node.JS (TypeScript)](#tab/node/)
|Sample|Description| |||
notification-hubs Android Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/notification-hubs/android-sdk.md
The first step is to create a project in Android Studio:
## Create a Firebase project that supports FCM
-1. Sign in to the [Firebase console](https://firebase.google.com/console/). Create a new Firebase project if you don't already have one.
+1. Sign in to the [Firebase console](https://console.firebase.google.com/). Create a new Firebase project if you don't already have one.
2. After you create your project, select **Add Firebase to your Android app**.
notification-hubs Notification Hubs Android Push Notification Google Gcm Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/notification-hubs/notification-hubs-android-push-notification-google-gcm-get-started.md
Your notification hub is now configured to work with GCM, and you have the conne
Update the three placeholders in the following code for the `NotificationSettings` class:
- * `SenderId`: The project number you obtained earlier in the [Google Cloud Console](https://cloud.google.com/console).
+ * `SenderId`: The project number you obtained earlier in the [Google Cloud Console](https://console.cloud.google.com/).
* `HubListenConnectionString`: The **DefaultListenAccessSignature** connection string for your hub. You can copy that connection string by clicking **Access Policies** on the **Settings** page of your hub on the [Azure portal]. * `HubName`: Use the name of your notification hub that appears in the hub page in the [Azure portal].
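Filled in, the class might look like the following minimal sketch (the placeholder strings are hypothetical values you replace with your own; this is not the tutorial's exact source):

```java
// Hypothetical sketch of the NotificationSettings class described above.
// Replace each placeholder with your own values; none of these are real credentials.
public class NotificationSettings {
    public static String SenderId = "<Your GCM project number>";
    public static String HubListenConnectionString = "<Your hub's DefaultListenAccessSignature connection string>";
    public static String HubName = "<Your notification hub name>";
}
```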
openshift Howto Deploy Java Jboss Enterprise Application Platform App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/openshift/howto-deploy-java-jboss-enterprise-application-platform-app.md
+
+ Title: Deploy a Java application with Red Hat JBoss Enterprise Application Platform (JBoss EAP) on an Azure Red Hat OpenShift 4 cluster
+description: Deploy a Java application with Red Hat JBoss Enterprise Application Platform on an Azure Red Hat OpenShift 4 cluster.
++ Last updated : 01/11/2022++
+keywords: java, jakartaee, microprofile, EAP, JBoss EAP, ARO, OpenShift, JBoss Enterprise Application Platform
+++
+# Deploy a Java application with Red Hat JBoss Enterprise Application Platform on an Azure Red Hat OpenShift 4 cluster
+
+This guide demonstrates how to deploy a Microsoft SQL Server database-driven Jakarta EE application, running on Red Hat JBoss Enterprise Application Platform (JBoss EAP), to an Azure Red Hat OpenShift (ARO) 4 cluster by using [JBoss EAP Helm Charts](https://jbossas.github.io/eap-charts).
+
+The guide takes a traditional Jakarta EE application and walks you through the process of migrating it to a container orchestrator such as Azure Red Hat OpenShift. First, it describes how you can package your application as a [Bootable JAR](https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/7.4/html/using_jboss_eap_xp_3.0.0/the-bootable-jar_default) to run it locally, connecting the application to a Microsoft SQL Server container running in Docker. Finally, it shows you how to deploy Microsoft SQL Server on OpenShift and how to deploy three replicas of the JBoss EAP application by using Helm Charts.
+
+The application is a stateful application that stores information in an HTTP Session. It makes use of the JBoss EAP clustering capabilities and uses the following Jakarta EE 8 and MicroProfile 4.0 technologies:
+
+* Jakarta Server Faces
+* Jakarta Enterprise Beans
+* Jakarta Persistence
+* MicroProfile Health
+
+> [!IMPORTANT]
+> This article uses a Microsoft SQL Server Docker image running on a Linux container on Red Hat OpenShift. Before choosing to run a SQL Server container for production use cases, please review the [support policy for SQL Server Containers](https://support.microsoft.com/help/4047326/support-policy-for-microsoft-sql-server) to ensure that you are running on a supported configuration.
+
+> [!IMPORTANT]
+> This article deploys an application by using JBoss EAP Helm Charts. At the time of writing, this feature is still offered as a [Technology Preview](https://access.redhat.com/articles/6290611). Before choosing to deploy applications with JBoss EAP Helm Charts on production environments, ensure that this feature is a supported feature for your JBoss EAP/XP product version.
++
+## Prerequisites
++
+1. Prepare a local machine with a Unix-like operating system that is supported by the various products installed (for example Red Hat Enterprise Linux 8 (latest update) in the case of JBoss EAP).
+1. Install a Java SE implementation (for example, [Oracle JDK 11](https://www.oracle.com/java/technologies/downloads/#java11)).
+1. Install [Maven](https://maven.apache.org/download.cgi) 3.6.3 or higher.
+1. Install [Docker](https://docs.docker.com/get-docker/) for your OS.
+1. Install [Azure CLI](/cli/azure/install-azure-cli) 2.29.2 or later.
+1. Clone the code for this demo application (todo-list) to your local system. The demo application is at [GitHub](https://github.com/Azure-Samples/jboss-on-aro-jakartaee).
+1. Follow the instructions in [Create an Azure Red Hat OpenShift 4 cluster](./tutorial-create-cluster.md).
+
+ Though the "Get a Red Hat pull secret" step is labeled as optional, **it is required for this article**. The pull secret enables your ARO cluster to find the JBoss EAP application images.
+
+ If you plan to run memory-intensive applications on the cluster, specify the proper virtual machine size for the worker nodes using the `--worker-vm-size` parameter. For more information, see:
+
+ * [Azure CLI to create a cluster](/cli/azure/aro#az_aro_create)
+ * [Supported virtual machine sizes for memory optimized](./support-policies-v4.md#memory-optimized)
+
+1. Connect to the cluster by following the steps in [Connect to an Azure Red Hat OpenShift 4 cluster](./tutorial-connect-cluster.md).
+ * Follow the steps in "Install the OpenShift CLI"
+ * Connect to an Azure Red Hat OpenShift cluster using the OpenShift CLI with the user `kubeadmin`
+
+1. Execute the following command to create the OpenShift project for this demo application:
+
+ ```bash
+ $ oc new-project eap-demo
+ Now using project "eap-demo" on server "https://api.zhbq0jig.northeurope.aroapp.io:6443".
+
+ You can add applications to this project with the 'new-app' command. For example, try:
+
+ oc new-app rails-postgresql-example
+
+ to build a new example application in Ruby. Or use kubectl to deploy a simple Kubernetes application:
+
+ kubectl create deployment hello-node --image=k8s.gcr.io/serve_hostname
+ ```
+
+1. Execute the following command to add the view role to the default service account. This role is needed so the application can discover other pods and form a cluster with them:
+
+ ```bash
+ $ oc policy add-role-to-user view system:serviceaccount:$(oc project -q):default -n $(oc project -q)
+ clusterrole.rbac.authorization.k8s.io/view added: "system:serviceaccount:eap-demo:default"
+ ```
+
+## Prepare the application
+
+At this stage, you have cloned the `Todo-list` demo application and your local repository is on the `main` branch. The demo application is a simple Jakarta EE 8 application that creates, reads, updates, and deletes records in a Microsoft SQL Server database. This application can be deployed as-is on a JBoss EAP server installed on your local machine. You just need to configure the server with the required database driver and data source. You also need a database server available in your local environment.
+
+However, when you're targeting OpenShift, you might want to trim the capabilities of your JBoss EAP server, for example, to reduce the security exposure of the provisioned server and its overall footprint. You might also want to include some MicroProfile specifications to make your application more suitable for running on an OpenShift environment. When using JBoss EAP, one way to accomplish these goals is to package your application and your server in a single deployment unit known as a Bootable JAR. Let's do that by adding the required changes to our demo application.
+
+Navigate to your demo application local repository and change the branch to `bootable-jar`:
+
+```bash
+jboss-on-aro-jakartaee (main) $ git checkout bootable-jar
+Switched to branch 'bootable-jar'
+jboss-on-aro-jakartaee (bootable-jar) $
+```
+
+Let's take a quick look at what we have changed:
+
+- We have added the `wildfly-jar-maven-plugin` to provision the server and the application in a single executable JAR file. The OpenShift deployment unit will now be our server with our application.
+- In the Maven plugin configuration, we have specified a set of Galleon layers. This configuration allows us to trim the server capabilities to only what we need. For complete documentation on Galleon, see [the WildFly documentation](https://docs.wildfly.org/galleon/).
+- Our application uses Jakarta Faces with Ajax requests, which means there will be information stored in the HTTP session. We don't want to lose that information if a pod is removed. We could save this information on the client and send it back on each request, but there are cases where you may decide not to distribute certain information to the clients. For this demo, we have chosen to replicate the session across all pod replicas. To do so, we have added `<distributable />` to the `web.xml`. Together with the server clustering capabilities, this makes the HTTP session distributable across all pods.
+- We have added two MicroProfile Health Checks that identify when the application is live and ready to receive requests.
+
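Of the changes above, the `web.xml` edit is the easiest to visualize. As a minimal sketch only (the full deployment descriptor in the repository contains more elements; the schema attributes shown here are typical for Servlet 4.0, not copied from the demo source), the distributable marker looks like:

+```xml
+<web-app xmlns="http://xmlns.jcp.org/xml/ns/javaee"
+         version="4.0">
+    <!-- Ask the server to replicate the HTTP session across cluster members -->
+    <distributable/>
+</web-app>
+```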
+## Run the application locally
+
+Before deploying the application on OpenShift, we are going to verify it locally. For the database, we are going to use a containerized Microsoft SQL Server instance running on Docker.
+
+### Run the Microsoft SQL database on Docker
+
+Follow these steps to get the database server running on Docker and configured for the demo application:
+
+1. Start a Docker container running the Microsoft SQL Server. For more information, see [Run SQL Server container images with Docker](/sql/linux/quickstart-install-connect-docker) quickstart.
+
+ ```bash
+ $ sudo docker run \
+ -e 'ACCEPT_EULA=Y' \
+ -e 'SA_PASSWORD=Passw0rd!' \
+ -p 1433:1433 --name mssqlserver -h mssqlserver \
+ -d mcr.microsoft.com/mssql/server:2019-latest
+ ```
+
+1. Connect to the server and create the `todos_db` database.
+
+ ```bash
+ $ sudo docker exec -it mssqlserver "bash"
+ mssql@mssqlserver:/$ /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P 'Passw0rd!'
+ 1> CREATE DATABASE todos_db
+ 2> GO
+ 1> exit
+ mssql@mssqlserver:/$ exit
+ ```
+
+### Run the demo application locally
+
+Follow these steps to build and run the application locally:
+
+1. Build the Bootable JAR. When building it, we need to specify the database driver version we want to use:
+
+ ```bash
+ jboss-on-aro-jakartaee (bootable-jar) $ MSSQLSERVER_DRIVER_VERSION=7.4.1.jre11 \
+ mvn clean package
+ ```
+
+1. Launch the Bootable JAR by using the following command. When launching the application, we need to pass the required environment variables to configure the data source:
+
+ ```bash
+ jboss-on-aro-jakartaee (bootable-jar) $ MSSQLSERVER_USER=SA \
+ MSSQLSERVER_PASSWORD=Passw0rd! \
+ MSSQLSERVER_JNDI=java:/comp/env/jdbc/mssqlds \
+ MSSQLSERVER_DATABASE=todos_db \
+ MSSQLSERVER_HOST=localhost \
+ MSSQLSERVER_PORT=1433 \
+ mvn wildfly-jar:run
+ ```
+
+ Check the [Galleon Feature Pack for integrating datasources](https://github.com/jbossas/eap-datasources-galleon-pack/blob/main/doc/mssqlserver/README.md) documentation to get a complete list of available environment variables. For details on the concept of feature-pack, see [the WildFly documentation](https://docs.wildfly.org/galleon/#_feature_packs).
+
+1. (Optional) If you want to verify the clustering capabilities, you can also launch more instances of the same application by passing the `jboss.node.name` argument to the Bootable JAR and shifting the port numbers with `jboss.socket.binding.port-offset` to avoid conflicts. For example, to launch a second instance that represents a new pod on OpenShift, execute the following command in a new terminal window:
+
+ ```bash
+ jboss-on-aro-jakartaee (bootable-jar) $ MSSQLSERVER_USER=SA \
+ MSSQLSERVER_PASSWORD=Passw0rd! \
+ MSSQLSERVER_JNDI=java:/comp/env/jdbc/mssqlds \
+ MSSQLSERVER_DATABASE=todos_db \
+ MSSQLSERVER_HOST=localhost \
+ MSSQLSERVER_PORT=1433 \
+ mvn wildfly-jar:run -Dwildfly.bootable.arguments="-Djboss.node.name=node2 -Djboss.socket.binding.port-offset=1000"
+ ```
+
+ If your cluster is working, you will see a trace similar to the following in the server console log:
+
+ ```bash
+ INFO [org.infinispan.CLUSTER] (thread-6,ejb,node) ISPN000094: Received new cluster view for channel ejb
+ ```
+
+ > [!NOTE]
+ > By default, the Bootable JAR configures the JGroups subsystem to use the UDP protocol and sends messages to discover other cluster members to the 230.0.0.4 multicast address. To properly verify the clustering capabilities on your local machine, your operating system should be capable of sending and receiving multicast datagrams and routing them to the 230.0.0.4 IP address through your Ethernet interface. If you see cluster-related warnings in the server logs, check your network configuration and verify that it works with the multicast address.
+
+1. Open `http://localhost:8080/` in your browser to visit the application home page. If you have created more instances, you can access them by shifting the port number, for example `http://localhost:9080/`. The application will look similar to the following image:
+
+ :::image type="content" source="media/howto-deploy-java-enterprise-application-platform-app/todo-demo-application.png" alt-text="Screenshot of ToDo EAP demo Application.":::
+
+1. Check the application health endpoints (live and ready). These endpoints will be used by OpenShift to verify when your pod is live and ready to receive user requests:
+
+ ```bash
+ jboss-on-aro-jakartaee (bootable-jar) $ curl http://localhost:9990/health/live
+ {"status":"UP","checks":[{"name":"SuccessfulCheck","status":"UP"}]}
+
+ jboss-on-aro-jakartaee (bootable-jar) $ curl http://localhost:9990/health/ready
+ {"status":"UP","checks":[{"name":"deployments-status","status":"UP","data":{"todo-list.war":"OK"}},{"name":"server-state","status":"UP","data":{"value":"running"}},{"name":"boot-errors","status":"UP"},{"name":"DBConnectionHealthCheck","status":"UP"}]}
+ ```
+
+1. Press **Control-C** to stop the application.
+1. Execute the following command to stop the database server:
+
+ ```bash
+ docker stop mssqlserver
+ ```
+
+1. If you don't plan to use the Docker database again, execute the following command to remove the database container from your local machine:
+
+ ```bash
+ docker rm mssqlserver
+ ```
+
+## Deploy to OpenShift
+
+Before deploying the demo application on OpenShift, we will deploy the database server by using a [DeploymentConfig OpenShift API resource](https://docs.openshift.com/container-platform/4.8/applications/deployments/what-deployments-are.html#deployments-and-deploymentconfigs_what-deployments-are). The database server deployment configuration is available as a YAML file in the application source code.
+
+To deploy the application, we are going to use the JBoss EAP Helm Charts already available in ARO. We also need to supply the desired configuration, for example, the database user, the database password, the driver version we want to use, and the connection information used by the data source. Because this configuration contains sensitive information, we will use [OpenShift Secret objects](https://docs.openshift.com/container-platform/4.8/nodes/pods/nodes-pods-secrets.html#nodes-pods-secrets-about_nodes-pods-secrets) to store it.
+
+> [!NOTE]
+> You can also use the [JBoss EAP Operator](https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/7.4/html/getting_started_with_jboss_eap_for_openshift_container_platform/eap-operator-for-automating-application-deployment-on-openshift_default) to deploy this example. However, notice that the JBoss EAP Operator deploys the application as `StatefulSets`. Use the JBoss EAP Operator if your application requires one or more of the following:
+>
+> * Stable, unique network identifiers.
+> * Stable, persistent storage.
+> * Ordered, graceful deployment and scaling.
+> * Ordered, automated rolling updates.
+> * Transaction recovery facility when a pod is scaled down or crashes.
+
+Navigate to your demo application local repository and change the current branch to `bootable-jar-openshift`:
+
+```bash
+jboss-on-aro-jakartaee (bootable-jar) $ git checkout bootable-jar-openshift
+Switched to branch 'bootable-jar-openshift'
+jboss-on-aro-jakartaee (bootable-jar-openshift) $
+```
+
+Let's quickly review what we have changed:
+
+- We have added a new maven profile named `bootable-jar-openshift` that prepares the Bootable JAR with a specific configuration for running the server on the cloud. For example, the profile enables the JGroups subsystem to discover other pods over TCP by using the KUBE_PING protocol.
+- We have added a set of configuration files in the _jboss-on-aro-jakartaee/deployment_ directory. In this directory, you will find the configuration files to deploy the database server and the application.
+
+### Deploy the database server on OpenShift
+
+The file to deploy the Microsoft SQL Server to OpenShift is _deployment/mssqlserver/mssqlserver.yaml_. This file is composed of three configuration objects:
+
+* A Service: Exposes the SQL server port.
+* A DeploymentConfig: Deploys the SQL server image.
+* A PersistentVolumeClaim: Claims persistent disk space for the database. It uses the storage class named `managed-premium`, which is available in your ARO cluster.
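+
+The overall shape of these three objects can be sketched as follows. This is an abbreviated, illustrative fragment, not the actual file: details such as the image reference and the storage size are assumptions, and the file in the application repository is the authoritative version.
+
+```yaml
+# Illustrative sketch of the three objects in mssqlserver.yaml.
+# Image and storage values are assumptions; see the repository file.
+apiVersion: v1
+kind: Service
+metadata:
+  name: mssqlserver
+spec:
+  ports:
+    - port: 1433            # default SQL Server port
+  selector:
+    app: mssql2019
+---
+apiVersion: apps.openshift.io/v1
+kind: DeploymentConfig
+metadata:
+  name: mssqlserver
+spec:
+  replicas: 1
+  template:
+    spec:
+      containers:
+        - name: mssqlserver
+          image: mcr.microsoft.com/mssql/server:2019-latest
+          env:
+            - name: SA_PASSWORD
+              valueFrom:
+                secretKeyRef:          # password comes from the Secret
+                  name: mssqlserver-secret
+                  key: db-password
+---
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: mssqlserver-pvc
+spec:
+  storageClassName: managed-premium
+  accessModes: [ReadWriteOnce]
+  resources:
+    requests:
+      storage: 8Gi
+```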
+
+This file expects the presence of an OpenShift Secret object named `mssqlserver-secret` to supply the database administrator password. In the next steps, we will use the OpenShift CLI to create this Secret, deploy the server, and create the `todos_db` database:
+
+1. To create the Secret object with the information related to the database, execute the following command on the `eap-demo` project created earlier in the prerequisites section:
+
+ ```bash
+ jboss-on-aro-jakartaee (bootable-jar-openshift) $ oc create secret generic mssqlserver-secret \
+ --from-literal db-password=Passw0rd! \
+ --from-literal db-user=sa \
+ --from-literal db-name=todos_db
+ secret/mssqlserver-secret created
+ ```
+
+1. Deploy the database server by executing the following:
+
+ ```bash
+ jboss-on-aro-jakartaee (bootable-jar-openshift) $ oc apply -f ./deployment/mssqlserver/mssqlserver.yaml
+ service/mssqlserver created
+ deploymentconfig.apps.openshift.io/mssqlserver created
+ persistentvolumeclaim/mssqlserver-pvc created
+ ```
+
+1. Monitor the status of the pods and wait until the database server is running:
+
+ ```bash
+ jboss-on-aro-jakartaee (bootable-jar-openshift) $ oc get pods -w
+ NAME READY STATUS RESTARTS AGE
+ mssqlserver-1-deploy 0/1 Completed 0 34s
+ mssqlserver-1-gw7qw 1/1 Running 0 31s
+ ```
+
+1. Connect to the database pod and create the database `todos_db`:
+
+ ```bash
+ jboss-on-aro-jakartaee (bootable-jar-openshift) $ oc rsh mssqlserver-1-gw7qw
+ sh-4.4$ /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P 'Passw0rd!'
+ 1> CREATE DATABASE todos_db
+ 2> GO
+ 1> exit
+ sh-4.4$ exit
+ exit
+ ```
+
+### Deploy the application on OpenShift
+
+Now that we have the database server ready, we can deploy the demo application via JBoss EAP Helm Charts. The Helm Chart application configuration file is available at _deployment/application/todo-list-helm-chart.yaml_. You could deploy this file via the command line; however, to do so you would need to have Helm installed on your local machine. Instead of using the command line, the next steps explain how you can deploy this Helm Chart by using the OpenShift web console.
+
+Before deploying the application, let's create the Secret object that will hold the application-specific configuration. The Helm Chart will get the database user, password, and name from the `mssqlserver-secret` Secret created earlier, and the driver version, the datasource JNDI name, and the cluster password from the following Secret:
+
+1. Execute the following to create the OpenShift secret object that will hold the application configuration:
+
+ ```bash
+ jboss-on-aro-jakartaee (bootable-jar-openshift) $ oc create secret generic todo-list-secret \
+ --from-literal app-driver-version=7.4.1.jre11 \
+ --from-literal app-ds-jndi=java:/comp/env/jdbc/mssqlds \
+ --from-literal app-cluster-password=mut2UTG6gDwNDcVW
+ ```
+
+ > [!NOTE]
+ > You can choose any cluster password you want. However, the pods that join your cluster must supply this password, which prevents pods that are not under your control from joining your JBoss EAP cluster.
+
+ > [!NOTE]
+ > You may have noticed that the above Secret does not supply the database hostname and port. That's not necessary. If you take a closer look at the Helm Chart application file, you will see that the database hostname and port are passed by using the notations \$(MSSQLSERVER_SERVICE_HOST) and \$(MSSQLSERVER_SERVICE_PORT). This is standard OpenShift notation that ensures the application variables (MSSQLSERVER_HOST, MSSQLSERVER_PORT) are assigned the values of the pod environment variables (MSSQLSERVER_SERVICE_HOST, MSSQLSERVER_SERVICE_PORT) available at runtime. OpenShift sets these pod environment variables when the pod is launched. They are available to any pod because we created a Service to expose the SQL server in the previous steps.
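+
+ In the Helm Chart application file, that mapping can be sketched like the following fragment (illustrative only; the variable names are taken from the note above, the surrounding structure is an assumption, so check _deployment/application/todo-list-helm-chart.yaml_ for the actual definition):
+
+ ```yaml
+ # Illustrative sketch: OpenShift substitutes the Service-provided
+ # values at pod startup.
+ deploy:
+   env:
+     - name: MSSQLSERVER_HOST
+       value: $(MSSQLSERVER_SERVICE_HOST)
+     - name: MSSQLSERVER_PORT
+       value: $(MSSQLSERVER_SERVICE_PORT)
+ ```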
+
+2. Open the OpenShift console and navigate to the developer view by selecting the **</> Developer** perspective in the left-hand menu.
+
+ :::image type="content" source="media/howto-deploy-java-enterprise-application-platform-app/console-developer-view.png" alt-text="Screenshot of OpenShift console developer view.":::
+
+3. Once you are in the **</> Developer** perspective, ensure you have selected the **eap-demo** project in the **Project** combo box.
+
+ :::image type="content" source="media/howto-deploy-java-enterprise-application-platform-app/console-project-combo-box.png" alt-text="Screenshot of OpenShift console project combo box.":::
+
+4. Go to **+Add**, then select **Helm Chart**. You will arrive at the Helm Chart catalog available on your ARO cluster. Type **eap** in the filter input box to show only the EAP Helm Charts. At this stage, you should see two options:
+
+ :::image type="content" source="media/howto-deploy-java-enterprise-application-platform-app/console-eap-helm-charts.png" alt-text="Screenshot of OpenShift console EAP Helm Charts.":::
+
+5. Since our application uses MicroProfile capabilities, we are going to select the Helm Chart for EAP XP for this demo (at the time of this writing, the exact version of the Helm Chart is **EAP Xp3 v1.0.0**). The `Xp3` stands for Expansion Pack version 3.0.0. With the JBoss Enterprise Application Platform expansion pack, developers can use Eclipse MicroProfile application programming interfaces (APIs) to build and deploy microservices-based applications.
+
+6. Open the **EAP Xp** Helm Chart, and then select **Install Helm Chart**.
+
+At this point, we need to configure the chart to be able to build and deploy the application:
+
+1. Change the name of the release to **eap-todo-list-demo**.
+1. We can configure the Helm Chart either using a **Form View** or a **YAML View**. Select **YAML View** in the **Configure via** box.
+1. Then, change the YAML content to configure the Helm Chart by replacing the existing content with the content of the Helm Chart file available at _deployment/application/todo-list-helm-chart.yaml_:
+
+ :::image type="content" source="media/howto-deploy-java-enterprise-application-platform-app/console-eap-helm-charts-yaml-content-inline.png" alt-text="OpenShift console EAP Helm Chart YAML content" lightbox="media/howto-deploy-java-enterprise-application-platform-app/console-eap-helm-charts-yaml-content-expanded.png":::
+
+1. Finally, select **Install** to start the application deployment. This will open the **Topology** view with a graphical representation of the Helm release (named **eap-todo-list-demo**) and its associated resources.
+
+ :::image type="content" source="media/howto-deploy-java-enterprise-application-platform-app/console-topology.png" alt-text="Screenshot of OpenShift console topology.":::
+
+ The Helm Release (abbreviated **HR**) is named **eap-todo-list-demo**. It includes a Deployment resource (abbreviated **D**) also named **eap-todo-list-demo**.
+
+1. When the build is finished (the bottom-left icon displays a green check) and the application is deployed (the circle outline is in dark blue), you can go to the application URL (using the top-right icon) from the route associated with the deployment.
+
+ :::image type="content" source="media/howto-deploy-java-enterprise-application-platform-app/console-open-application.png" alt-text="Screenshot of OpenShift console open application.":::
+
+1. The application opens in your browser, ready to be used, and looks similar to the following image:
+
+ :::image type="content" source="media/howto-deploy-java-enterprise-application-platform-app/application-running-openshift.png" alt-text="Screenshot of OpenShift application running.":::
+
+1. The application shows the name of the pod that served the request. To verify the clustering capabilities, add some Todos. Then delete the pod whose name appears in the **Server Host Name** field of the application (`oc delete pod <pod name>`), and once it is deleted, create a new Todo in the same application window. You will see that the new Todo is added via an Ajax request and the **Server Host Name** field now shows a different name. Behind the scenes, the new request was dispatched by the OpenShift load balancer and delivered to an available pod. The Jakarta Faces view was restored from the copy of the HTTP session stored in the pod that is now processing the request. Indeed, you will see that the **Session ID** field has not changed. If the session were not replicated across your pods, you would get a Jakarta Faces `ViewExpiredException`, and the application wouldn't work as expected.
+
+## Clean up resources
+
+### Delete the application
+
+If you only want to delete your application, you can open the OpenShift console and, in the developer view, navigate to the **Helm** menu option. In this menu, you will see all the Helm Chart releases installed on your cluster.
+
+ :::image type="content" source="media/howto-deploy-java-enterprise-application-platform-app/console-uninstall-application-inline.png" alt-text="OpenShift uninstall application" lightbox="media/howto-deploy-java-enterprise-application-platform-app/console-uninstall-application-expanded.png":::
+
+Locate the **eap-todo-list-demo** Helm Chart and, at the end of the row, select the three vertical dots to open the contextual action menu.
+
+Select **Uninstall Helm Release** to remove the application. Notice that the secret object used to supply the application configuration is not part of the chart. You need to remove it separately if you no longer need it.
+
+Execute the following command if you want to delete the secret that holds the application configuration:
+
+```bash
+jboss-on-aro-jakartaee (bootable-jar-openshift) $ oc delete secrets/todo-list-secret
+secret "todo-list-secret" deleted
+```
+
+### Delete the database
+
+If you want to delete the database and the related objects, execute the following command:
+
+```bash
+jboss-on-aro-jakartaee (bootable-jar-openshift) $ oc delete all -l app=mssql2019
+replicationcontroller "mssqlserver-1" deleted
+service "mssqlserver" deleted
+deploymentconfig.apps.openshift.io "mssqlserver" deleted
+
+jboss-on-aro-jakartaee (bootable-jar-openshift) $ oc delete secrets/mssqlserver-secret
+secret "mssqlserver-secret" deleted
+```
+
+### Delete the OpenShift project
+
+You can also delete all the configuration created for this demo by deleting the `eap-demo` project. To do so, execute the following:
+
+```bash
+jboss-on-aro-jakartaee (bootable-jar-openshift) $ oc delete project eap-demo
+project.project.openshift.io "eap-demo" deleted
+```
+
+### Delete the ARO cluster
+
+Delete the ARO cluster by following the steps in [Tutorial: Delete an Azure Red Hat OpenShift 4 cluster](./tutorial-delete-cluster.md).
+
+## Next steps
+
+In this guide, you learned how to:
+> [!div class="checklist"]
+>
+> * Prepare a JBoss EAP application for OpenShift.
+> * Run it locally together with a containerized Microsoft SQL Server.
+> * Deploy a Microsoft SQL Server on an ARO 4 cluster by using the OpenShift CLI.
+> * Deploy the application on an ARO 4 cluster by using JBoss EAP Helm Charts and the OpenShift web console.
+
+You can learn more from references used in this guide:
+
+* [Red Hat JBoss Enterprise Application Platform](https://www.redhat.com/en/technologies/jboss-middleware/application-platform)
+* [Azure Red Hat OpenShift](https://azure.microsoft.com/services/openshift/)
+* [JBoss EAP Helm Charts](https://jbossas.github.io/eap-charts/)
+* [JBoss EAP Bootable JAR](https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/7.4/html-single/using_jboss_eap_xp_3.0.0/index#the-bootable-jar_default)
openshift Howto Deploy Java Liberty App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/openshift/howto-deploy-java-liberty-app.md
This guide demonstrates how to run your Java, Java EE, [Jakarta EE](https://jaka
## Prerequisites + Complete the following prerequisites to successfully walk through this guide.
-> [!NOTE]
-> Azure Red Hat OpenShift requires a minimum of 40 cores to create and run an OpenShift cluster. The default Azure resource quota for a new Azure subscription does not meet this requirement. To request an increase in your resource limit, see [Standard quota: Increase limits by VM series](../azure-portal/supportability/per-vm-quota-requests.md). Note that the free trial subscription isn't eligible for a quota increase, [upgrade to a Pay-As-You-Go subscription](../cost-management-billing/manage/upgrade-azure-subscription.md) before requesting a quota increase.
1. Prepare a local machine with Unix-like operating system installed (for example, Ubuntu, macOS). 1. Install a Java SE implementation (for example, [AdoptOpenJDK OpenJDK 8 LTS/OpenJ9](https://adoptopenjdk.net/?variant=openjdk8&jvmVariant=openj9)).
postgresql Concepts High Availability https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/flexible-server/concepts-high-availability.md
For other user initiated operations such as scale-compute or scale-storage, the
### Reducing planned downtime with managed maintenance window
-With flexible server, you can optionally schedule Azure initiated maintenance activities by choosing a 30-minute window in a day of your preference where the activities on the databases are expected to be low. Azure maintenance tasks such as patching or minor version upgrades would happen during that maintenance window. If you do not choose a custom window, a system allocated 1-hr window between 11pm-7am local time is chosen for your server.
+With flexible server, you can optionally schedule Azure initiated maintenance activities by choosing a 60-minute window in a day of your preference where the activities on the databases are expected to be low. Azure maintenance tasks such as patching or minor version upgrades would happen during that maintenance window. If you do not choose a custom window, a system allocated 1-hr window between 11pm-7am local time is chosen for your server.
For flexible servers configured with high availability, these maintenance activities are performed on the standby replica first and the service is failed over to the standby to which applications can reconnect.
purview Catalog Private Link Name Resolution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/catalog-private-link-name-resolution.md
Previously updated : 01/10/2022 Last updated : 01/21/2022 # Customer intent: As an Azure Purview admin, I want to set up private endpoints for my Azure Purview account, for secure access.
If you do not use DNS forwarders and instead you manage A records directly in yo
| `Contoso-Purview.proxy.purview.azure.com` | A | \<account private endpoint IP address of Azure Purview\> | | `Contoso-Purview.guardian.purview.azure.com` | A | \<account private endpoint IP address of Azure Purview\> | | `gateway.purview.azure.com` | A | \<account private endpoint IP address of Azure Purview\> |
-| `Contoso-Purview.web.purview.azure.com` | A | \<portal private endpoint IP address of Azure Purview\> |
| `manifest.prod.ext.web.purview.azure.com` | A | \<portal private endpoint IP address of Azure Purview\> | | `cdn.prod.ext.web.purview.azure.com` | A | \<portal private endpoint IP address of Azure Purview\> | | `hub.prod.ext.web.purview.azure.com` | A | \<portal private endpoint IP address of Azure Purview\> |
purview Concept Best Practices Network https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/concept-best-practices-network.md
Previously updated : 01/13/2022 Last updated : 01/21/2022 # Azure Purview network architecture and best practices
When you're scanning a data source in Azure Purview, you need to provide a crede
### Additional considerations -- If you choose to scan data sources by using public endpoints, your on-premises or VM-based data sources must have outbound connectivity to Azure endpoints.
+- If you choose to scan data sources using public endpoints, your self-hosted integration runtime VMs must have outbound access to data sources and Azure endpoints.
- Your self-hosted integration runtime VMs must have [outbound connectivity to Azure endpoints](manage-integration-runtimes.md#networking-requirements).
purview Manage Integration Runtimes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/manage-integration-runtimes.md
Title: Create and manage Integration Runtimes description: This article explains the steps to create and manage Integration Runtimes in Azure Purview.--++
purview Register Scan Db2 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-db2.md
This article outlines how to register DB2, and how to authenticate and interact
|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Access Policy**|**Lineage**| ||||||||
-| [Yes](#register)| [Yes](#scan)| No | [Yes](#scan) | No | No| [Yes](#lineage)]|
+| [Yes](#register)| [Yes](#scan)| No | [Yes](#scan) | No | No| [Yes](#lineage)|
The supported IBM DB2 versions are DB2 for LUW 9.7 to 11.x. DB2 for z/OS (mainframe) and iSeries (AS/400) are not supported now.
purview Register Scan Oracle Source https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-oracle-source.md
When scanning Oracle source, Azure Purview supports:
- Synonyms - Types including the type attributes -- Fetching static lineage on assets relationships among tables, views and stored procedures.
+- Fetching static lineage on assets relationships among tables, views and stored procedures. Stored procedure lineage is supported for static SQL returning result set.
When setting up scan, you can choose to scan an entire Oracle server, or scope the scan to a subset of schemas matching the given name(s) or name pattern(s).
purview Register Scan Teradata Source https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-teradata-source.md
Follow the steps below to scan Teradata to automatically identify assets and cla
To understand more on credentials, refer to the link [here](./manage-credentials.md)
- 1. **Schema**: List subset of schemas to import expressed as a semicolon separated list. For Example: `schema1; schema2`. All user schemas are imported if that list is empty. All system schemas (for example, SysAdmin) and objects are ignored by default.
+ 1. **Schema**: List subset of databases to import expressed as a semicolon separated list. For Example: `schema1; schema2`. All user databases are imported if that list is empty. All system databases (for example, SysAdmin) and objects are ignored by default.
- Acceptable schema name patterns using SQL LIKE expressions syntax include using %. For example: `A%; %B; %C%; D`
+ Acceptable database name patterns using SQL LIKE expressions syntax include using %. For example: `A%; %B; %C%; D`
* Start with A or * End with B or * Contain C or
role-based-access-control Resource Provider Operations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/role-based-access-control/resource-provider-operations.md
Azure service: [Storage](../storage/index.yml)
> | Microsoft.Storage/storageAccounts/blobServices/containers/blobs/manageOwnership/action | Changes ownership of the blob | > | Microsoft.Storage/storageAccounts/blobServices/containers/blobs/modifyPermissions/action | Modifies permissions of the blob | > | Microsoft.Storage/storageAccounts/blobServices/containers/blobs/runAsSuperUser/action | Returns the result of the blob command |
-> | Microsoft.Storage/storageAccounts/blobServices/containers/blobs/immutableStorage/runAsSuperUser/action | |
+> | Microsoft.Storage/storageAccounts/blobServices/containers/blobs/immutableStorage/runAsSuperUser/action | Allows the user to write blob immutability policies and legal holds. |
> | Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags/read | Returns the result of reading blob tags | > | Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags/write | Returns the result of writing blob tags | > | Microsoft.Storage/storageAccounts/fileServices/fileshares/files/read | Returns a file/folder or a list of files/folders |
search Cognitive Search Debug Session https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-debug-session.md
Last updated 12/30/2021
# Debug Sessions in Azure Cognitive Search
-Debug Sessions is a visual editor that works with an existing skillset in the Azure portal, exposing the structure and content of a single enriched document, as its produced by an indexer and skillset, for the duration of the session. Because you are working with a live document, the session is interactive - you can identify errors, modify and invoke skill execution, and validate the results in real time. If your changes resolve the problem, you can commit them to a published skillset to apply the fixes globally.
+Debug Sessions is a visual editor that works with an existing skillset in the Azure portal, exposing the structure and content of a single enriched document, as it's produced by an indexer and skillset, for the duration of the session. Because you are working with a live document, the session is interactive - you can identify errors, modify and invoke skill execution, and validate the results in real time. If your changes resolve the problem, you can commit them to a published skillset to apply the fixes globally.
> [!Important] > Debug Sessions is a preview feature provided under [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). ## How a debug session works
-When you start a session, the search service creates a copy of the skillset, indexer, and a search index containing a single document that will be used to test the skillset. All session state will be saved to a container in an Azure Storage account that you provide.
+When you start a session, the search service creates a copy of the skillset, indexer, and a data source containing a single document that will be used to test the skillset. All session state will be saved to a container in an Azure Storage account that you provide.
A cached copy of the enriched document and skillset is loaded into the visual editor so that you can inspect the content and metadata of the enriched document, with the ability to check each document node and edit any aspect of the skillset definition. Any changes made within the session are cached. Those changes will not affect the published skillset unless you commit them. Committing changes will overwrite the production skillset.
Skill details includes the following areas:
+ **Skill Settings** shows a formatted version of the skill definition. + **Skill JSON Editor** shows the raw JSON document of the definition.
-+ **Executions** shows the number of times a skill was executed.
++ **Executions** shows the data corresponding to each time a skill was executed. + **Errors and warnings** shows the messages generated upon session start or refresh. On Executions or Skill Settings, select the **`</>`** symbol to open the [**Expression Evaluator**](#expression-evaluator) used for viewing and editing the expressions of the skills inputs and outputs.
search Cognitive Search How To Debug Skillset https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-how-to-debug-skillset.md
Last updated 12/31/2021
Start a debug session to identify and resolve errors, validate changes, and push changes to a published skillset in your Azure Cognitive Search service.
-A debug session is a cached indexer and skillset execution, scoped to a single document, that you can edit and test your changes interactively. If you are unfamiliar with how a debug session works, see [Debug sessions in Azure Cognitive Search](cognitive-search-debug-session.md). To practice a debug workflow with a sample document, see [Tutorial: Debug sessions](cognitive-search-tutorial-debug-sessions.md).
+A debug session is a cached indexer and skillset execution, scoped to a single document, that you can use to edit and test your changes interactively. If you are unfamiliar with how a debug session works, see [Debug sessions in Azure Cognitive Search](cognitive-search-debug-session.md). To practice a debug workflow with a sample document, see [Tutorial: Debug sessions](cognitive-search-tutorial-debug-sessions.md).
> [!Important] > Debug sessions is a preview portal feature, provided under [Supplemental Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
As a best practice, resolve problems with inputs before moving on to outputs.
To prove whether a modification resolves an error, follow these steps:
-1. Select **Save** in Skill Details to preserve your changes.
+1. Select **Save** in the skill details pane to preserve your changes.
1. Select **Run** in the session window to invoke skillset execution using the modified definition.
The following steps show you how to get information about a skill.
+ **Skill Settings** if you prefer a visual editor + **Skill JSON Editor** to edit the JSON document directly
-1. Check the [path syntax for referencing nodes](cognitive-search-concept-annotations-syntax.md) in an enrichment tree. Inputs are usually one of the following:
+1. Check the [path syntax for referencing nodes](cognitive-search-concept-annotations-syntax.md) in an enrichment tree. Following are some of the most common input paths:
+ `/document/content` for chunks of text. This node is populated from the blob's content property. + `/document/merged_content` for chunks of text in skillets that include Text Merge skill.
search Cognitive Search Resources Documentation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-resources-documentation.md
- Title: Documentation links for AI enrichment-
-description: An annotated list of articles, tutorials, samples, and blog posts related to AI enrichment workloads in Azure Cognitive Search.
------ Previously updated : 09/16/2021-
-# Documentation resources for AI enrichment in Azure Cognitive Search
-
-AI enrichment is an add-on to indexer-based indexing that finds latent information in non-text sources and undifferentiated text, transforming it into full text searchable content in Azure Cognitive Search.
-
-For built-in processing, the pre-trained AI models in Cognitive Services are called internally to perform the analyses. You can also integrate custom models using Azure Machine Learning, Azure Functions, or other approaches.
-
-The following is a consolidated list of the documentation for AI enrichment.
-
-## Concepts
-
-+ [AI enrichments](cognitive-search-concept-intro.md)
-+ [Skillsets](cognitive-search-working-with-skillsets.md)
-+ [Debug sessions](cognitive-search-debug-session.md)
-+ [Knowledge stores](knowledge-store-concept-intro.md)
-+ [Projections](knowledge-store-projection-overview.md)
-+ [Incremental enrichment (reuse of a cached enriched document)](cognitive-search-incremental-indexing-conceptual.md)
-
-## Hands on walkthroughs
-
-+ [Quickstart: Create a text translation and entity skillset](cognitive-search-quickstart-blob.md)
-+ [Quickstart: Create an OCR image skillset](cognitive-search-quickstart-ocr.md)
-+ [Tutorial: Enriched indexing with AI](cognitive-search-tutorial-blob.md)
-+ [Tutorial: Diagnose, repair, and commit changes to your skillset with Debug Sessions](cognitive-search-tutorial-debug-sessions.md)
-
-## Knowledge stores
-
-+ [Quickstart: Create a knowledge store in the Azure portal](knowledge-store-create-portal.md)
-+ [Create a knowledge store using REST and Postman](knowledge-store-create-rest.md)
-+ [View a knowledge store with Storage Browser](knowledge-store-view-storage-explorer.md)
-+ [Connect a knowledge store with Power BI](knowledge-store-connect-power-bi.md)
-+ [Define projections in a knowledge store](knowledge-store-projections-examples.md)
-
-## Custom skills (advanced)
-
-+ [How to define a custom skills interface](cognitive-search-custom-skill-interface.md)
-+ [Example: Create a custom skill using Azure Functions (and Bing Entity Search APIs)](cognitive-search-create-custom-skill-example.md)
-+ [Example: Create a custom skill using Python](cognitive-search-custom-skill-python.md)
-+ [Example: Create a custom skill using Form Recognizer](cognitive-search-custom-skill-form.md)
-+ [Example: Create a custom skill using Azure Machine Learning](cognitive-search-tutorial-aml-custom-skill.md)
-
-## How-to guidance
-
-+ [Attach a Cognitive Services resource](cognitive-search-attach-cognitive-services.md)
-+ [Define a skillset](cognitive-search-defining-skillset.md)
-+ [Reference annotations in a skillset](cognitive-search-concept-annotations-syntax.md)
-+ [Map fields to an index](cognitive-search-output-field-mapping.md)
-+ [Process and extract information from images](cognitive-search-concept-image-scenarios.md)
-+ [Configure caching for incremental enrichment](search-howto-incremental-index.md)
-+ [How to rebuild an Azure Cognitive Search index](search-howto-reindex.md)
-+ [Design tips](cognitive-search-concept-troubleshooting.md)
-+ [Common errors and warnings](cognitive-search-common-errors-warnings.md)
-
-## Skills reference
-
-+ [Built-in skills](cognitive-search-predefined-skills.md)
- + [Microsoft.Skills.Text.KeyPhraseExtractionSkill](cognitive-search-skill-keyphrases.md)
- + [Microsoft.Skills.Text.LanguageDetectionSkill](cognitive-search-skill-language-detection.md)
- + [Microsoft.Skills.Text.V3.EntityLinkingSkill](cognitive-search-skill-entity-linking-v3.md)
- + [Microsoft.Skills.Text.V3.EntityRecognitionSkill](cognitive-search-skill-entity-recognition-v3.md)
- + [Microsoft.Skills.Text.MergeSkill](cognitive-search-skill-textmerger.md)
- + [Microsoft.Skills.Text.PIIDetectionSkill](cognitive-search-skill-pii-detection.md)
- + [Microsoft.Skills.Text.SplitSkill](cognitive-search-skill-textsplit.md)
- + [Microsoft.Skills.Text.V3.SentimentSkill](cognitive-search-skill-sentiment-v3.md)
- + [Microsoft.Skills.Text.TranslationSkill](cognitive-search-skill-text-translation.md)
- + [Microsoft.Skills.Vision.ImageAnalysisSkill](cognitive-search-skill-image-analysis.md)
- + [Microsoft.Skills.Vision.OcrSkill](cognitive-search-skill-ocr.md)
- + [Microsoft.Skills.Util.ConditionalSkill](cognitive-search-skill-conditional.md)
- + [Microsoft.Skills.Util.DocumentExtractionSkill](cognitive-search-skill-document-extraction.md)
- + [Microsoft.Skills.Util.ShaperSkill](cognitive-search-skill-shaper.md)
-
-+ Custom skills
- + [Microsoft.Skills.Custom.AmlSkill](cognitive-search-aml-skill.md)
- + [Microsoft.Skills.Custom.WebApiSkill](cognitive-search-custom-skill-web-api.md)
-
-+ [Deprecated skills](cognitive-search-skill-deprecated.md)
- + [Microsoft.Skills.Text.NamedEntityRecognitionSkill](cognitive-search-skill-named-entity-recognition.md)
- + [Microsoft.Skills.Text.EntityRecognitionSkill](cognitive-search-skill-entity-recognition.md)
- + [Microsoft.Skills.Text.SentimentSkill](cognitive-search-skill-sentiment.md)
-
-## APIs
-
-+ [REST API](/rest/api/searchservice/)
- + [Create Skillset (api-version=2020-06-30)](/rest/api/searchservice/create-skillset)
- + [Create Indexer (api-version=2020-06-30)](/rest/api/searchservice/create-indexer)
-
-## See also
-
-+ [Azure Cognitive Search REST API](/rest/api/searchservice/)
-+ [Indexers in Azure Cognitive Search](search-indexer-overview.md)
-+ [What is Azure Cognitive Search?](search-what-is-azure-search.md)
search Cognitive Search Tutorial Debug Sessions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-tutorial-debug-sessions.md
You will need the [Postman collection](https://github.com/Azure-Samples/azure-se
## Check results in the portal
-The sample code intentionally creates a buggy index as a consequence of problems that occurred during skillset execution. The problem in the index is missing data.
+The sample code intentionally creates a buggy index as a consequence of problems that occurred during skillset execution. The problem is that the index is missing data.
1. In Azure portal, on the search service **Overview** page, select the **Indexes** tab.
The sample code intentionally creates a buggy index as a consequence of problems
1. Select **Search** to run the query. You should see empty values for "organizations" and "locations".
-These fields should have been populated through the skillset's [Entity Recognition skill](cognitive-search-skill-entity-recognition-v3.md), used to detect organizations and locations anywhere within the blob's content. In the next exercise, you'll use debug the skillset to determine what went wrong.
+These fields should have been populated through the skillset's [Entity Recognition skill](cognitive-search-skill-entity-recognition-v3.md), used to detect organizations and locations anywhere within the blob's content. In the next exercise, you'll debug the skillset to determine what went wrong.
Another way to investigate errors and warnings is through the Azure portal.
search Search Api Preview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-api-preview.md
Preview features that transition to general availability are removed from this l
| [**Management REST API 2021-04-01-Preview**](/rest/api/searchmanagement/) | Security | Modifies [Create or Update Service](/rest/api/searchmanagement/2021-04-01-preview/services/create-or-update) to support new [DataPlaneAuthOptions](/rest/api/searchmanagement/2021-04-01-preview/services/create-or-update#dataplaneauthoptions). | Public preview, [Management REST API](/rest/api/searchmanagement/), API version 2021-04-01-Preview. Announced in May 2021. |
| [**Reset Documents**](search-howto-run-reset-indexers.md) | Indexer | Reprocesses individually selected search documents in indexer workloads. | Use the [Reset Documents REST API](/rest/api/searchservice/preview-api/reset-documents), API versions 2021-04-30-Preview or 2020-06-30-Preview. |
| [**Power Query connectors**](search-how-to-index-power-query-data-sources.md) | Indexer data source | Indexers can now index from other cloud platforms. If you are using an indexer to crawl external data sources for indexing, you can now use Power Query connectors to connect to Amazon Redshift, Elasticsearch, PostgreSQL, Salesforce Objects, Salesforce Reports, Smartsheet, and Snowflake. | [Sign up](https://aka.ms/azure-cognitive-search/indexer-preview) is required so that support can be enabled for your subscription on the backend. Configure this data source using [Create or Update Data Source](/rest/api/searchservice/preview-api/create-or-update-data-source), API versions 2021-04-30-Preview or 2020-06-30-Preview, or the Azure portal. |
-| [**SharePoint Online Indexer**](search-howto-index-sharepoint-online.md) | Indexer data source | New data source for indexer-based indexing of SharePoint content. | [Sign up](https://aka.ms/azure-cognitive-search/indexer-preview) is required so that support can be enabled for your subscription on the backend. Configure this data source using [Create or Update Data Source](/rest/api/searchservice/preview-api/create-or-update-data-source), API versions 2021-04-30-Preview or 2020-06-30-Preview, or the Azure portal. |
+| [**SharePoint Indexer**](search-howto-index-sharepoint-online.md) | Indexer data source | New data source for indexer-based indexing of SharePoint content. | [Sign up](https://aka.ms/azure-cognitive-search/indexer-preview) is required so that support can be enabled for your subscription on the backend. Configure this data source using [Create or Update Data Source](/rest/api/searchservice/preview-api/create-or-update-data-source), API versions 2021-04-30-Preview or 2020-06-30-Preview, or the Azure portal. |
| [**MySQL indexer data source**](search-howto-index-mysql.md) | Indexer data source | Index content and metadata from Azure MySQL data sources. | [Sign up](https://aka.ms/azure-cognitive-search/indexer-preview) is required so that support can be enabled for your subscription on the backend. Configure this data source using [Create or Update Data Source](/rest/api/searchservice/preview-api/create-or-update-data-source), API versions 2021-04-30-Preview or 2020-06-30-Preview, [.NET SDK 11.2.1](/dotnet/api/azure.search.documents.indexes.models.searchindexerdatasourcetype.mysql), and Azure portal. |
| [**Cosmos DB indexer: MongoDB API, Gremlin API**](search-howto-index-cosmosdb.md) | Indexer data source | For Cosmos DB, SQL API is generally available, but MongoDB and Gremlin APIs are in preview. | For MongoDB and Gremlin, [sign up first](https://aka.ms/azure-cognitive-search/indexer-preview) so that support can be enabled for your subscription on the backend. MongoDB data sources can be configured in the portal. Configure this data source using [Create or Update Data Source](/rest/api/searchservice/preview-api/create-or-update-data-source), API versions 2021-04-30-Preview or 2020-06-30-Preview. |
| [**Native blob soft delete**](search-howto-index-changed-deleted-blobs.md) | Indexer data source | The Azure Blob Storage indexer in Azure Cognitive Search will recognize blobs that are in a soft deleted state, and remove the corresponding search document during indexing. | Configure this data source using [Create or Update Data Source](/rest/api/searchservice/preview-api/create-or-update-data-source), API versions 2021-04-30-Preview or 2020-06-30-Preview. |
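As an example of invoking one of the preview features in the table above, the Reset Documents capability is called through a preview REST request similar to this sketch (service name, indexer name, admin key, and document keys are placeholders):

```http
POST https://[service-name].search.windows.net/indexers/[indexer-name]/resetdocs?api-version=2021-04-30-Preview
Content-Type: application/json
api-key: [admin-key]

{
  "documentKeys": ["1001", "4452"]
}
```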
search Search Blob Metadata Properties https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-blob-metadata-properties.md
Last updated 01/15/2022
# Content metadata properties used in Azure Cognitive Search
-Several of the indexer-supported data sources, including Azure Blob Storage, Azure Data Lake Storage Gen2, and SharePoint Online, contain standalone files or embedded objects of various content types. Many of those content types have metadata properties that can be useful to index. Just as you can create search fields for standard blob properties like **`metadata_storage_name`**, you can create fields in a search index for metadata properties that are specific to a document format.
+Several of the indexer-supported data sources, including Azure Blob Storage, Azure Data Lake Storage Gen2, and SharePoint, contain standalone files or embedded objects of various content types. Many of those content types have metadata properties that can be useful to index. Just as you can create search fields for standard blob properties like **`metadata_storage_name`**, you can create fields in a search index for metadata properties that are specific to a document format.
## Supported document formats
-Cognitive Search supports blob indexing and SharePoint Online document indexing for the following document formats:
+Cognitive Search supports blob indexing and SharePoint document indexing for the following document formats:
[!INCLUDE [search-blob-data-sources](../../includes/search-blob-data-sources.md)]

## Properties by document format
-The following table summarizes processing done for each document format, and describes the metadata properties extracted by a blob indexer and the SharePoint Online indexer.
+The following table summarizes processing done for each document format, and describes the metadata properties extracted by a blob indexer and the SharePoint indexer.
| Document format / content type | Extracted metadata | Processing details |
| --- | --- | --- |
The following table summarizes processing done for each document format, and des
* [Indexers in Azure Cognitive Search](search-indexer-overview.md)
* [AI enrichment overview](cognitive-search-concept-intro.md)
* [Blob indexing overview](search-blob-storage-integration.md)
-* [SharePoint Online indexing](search-howto-index-sharepoint-online.md)
+* [SharePoint indexing](search-howto-index-sharepoint-online.md)
search Search Data Sources Gallery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-data-sources-gallery.md
Connect to Cosmos DB through the Mongo API to extract items from a container, se
-### SharePoint Online
+### SharePoint
by [Cognitive Search](search-what-is-azure-search.md)
-Connect to a SharePoint Online site and index documents from one or more Document Libraries, for accounts and search services in the same tenant. Text and normalized images will be extracted by default. Optionally, you can configure a skillset for more content transformation and enrichment, or configure change tracking to refresh a search index with new or changed content in SharePoint.
+Connect to a SharePoint site and index documents from one or more Document Libraries, for accounts and search services in the same tenant. Text and normalized images will be extracted by default. Optionally, you can configure a skillset for more content transformation and enrichment, or configure change tracking to refresh a search index with new or changed content in SharePoint.
[More details](search-howto-index-sharepoint-online.md)
The Database Server connector will crawl content from a Relational Database serv
by [BA Insight](https://www.bainsight.com/)
-The Deltek Vision Connector honors the security of the source system and provides both full and incremental crawls, so users always have the latest information available to them. It indexes content from Deltek Vision into Azure, SharePoint Online, or SharePoint 2016/2013, surfacing it through BA Insight's SmartHub to provide users with integrated search results.
+The Deltek Vision Connector honors the security of the source system and provides both full and incremental crawls, so users always have the latest information available to them. It indexes content from Deltek Vision into Azure, SharePoint in Microsoft 365, or SharePoint 2016/2013, surfacing it through BA Insight's SmartHub to provide users with integrated search results.
[More details](https://www.bainsight.com/connectors/deltek-connector-sharepoint-azure-elasticsearch/)
BA Insight's SharePoint Connector allows you to connect to SharePoint 2019, fetc
-### SharePoint Online
+### SharePoint in Microsoft 365
by [Accenture](https://www.accenture.com)
-The SharePoint Online connector will crawl content from any SharePoint Online site collection URL. The connector will retrieve Sites, Lists, Folders, List Items and Attachments, as well as other pages (in .aspx format). Supports SharePoint running in the Microsoft O365 offering.
+The SharePoint connector will crawl content from any SharePoint site collection URL. The connector will retrieve Sites, Lists, Folders, List Items and Attachments, as well as other pages (in .aspx format). Supports SharePoint running in the Microsoft O365 offering.
[More details](https://contentanalytics.digital.accenture.com/display/aspire40/SharePoint+Online+Connector)
The SharePoint Online connector will crawl content from any SharePoint Online si
-### SharePoint Online
+### SharePoint in Microsoft 365
by [BA Insight](https://www.bainsight.com/)
-BA Insight's SharePoint Online Connector allows you to connect to SharePoint Online, fetch data from any site, document library, or list; and index this content securely.
+BA Insight's SharePoint Connector allows you to connect to SharePoint in Microsoft 365, fetch data from any site, document library, or list, and index this content securely.
[More details](https://www.bainsight.com/connectors/sharepoint-online-connector/)
search Search Howto Create Indexers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-howto-create-indexers.md
Last updated 01/17/2022
# Creating indexers in Azure Cognitive Search
-A search indexer provides an automated workflow for reading content from an external data source, and ingesting that content into a search index on your search service. Indexers support two workflows:
+A search indexer connects to an external data source, retrieves and processes data, and then passes it to the search engine for indexing. Indexers support two workflows:
-+ Extract text and metadata during indexing for full text search scenarios
++ Extract text and metadata during indexing for full text search scenarios.
-+ Apply integrated machine learning and AI models to analyze content that is *not* intrinsically searchable, such as images and large undifferentiated text. This extended workflow is called [AI enrichment](cognitive-search-concept-intro.md) and it's indexer-driven.
++ Apply integrated machine learning and AI models to analyze content that is not otherwise searchable, such as images and large undifferentiated text. This extended workflow is called [AI enrichment](cognitive-search-concept-intro.md) and it's indexer-driven.

Using indexers significantly reduces the quantity and complexity of the code you need to write. This article focuses on the basics of creating an indexer. Depending on the data source and your workflow, additional configuration might be necessary.
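For orientation, a minimal indexer definition simply ties a data source to a target index. This is a hedged sketch — the names and schedule are placeholders, and the full schema is in the [Create Indexer](/rest/api/searchservice/create-indexer) reference:

```json
{
  "name": "my-indexer",
  "dataSourceName": "my-datasource",
  "targetIndexName": "my-index",
  "schedule": { "interval": "PT2H" }
}
```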
search Search Howto Index Changed Deleted Blobs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-howto-index-changed-deleted-blobs.md
There are two ways to implement a soft delete strategy:
+ Use consistent document keys and file structure. Changing document keys or directory names and paths (applies to ADLS Gen2) breaks the internal tracking information used by indexers to know which content was indexed, and when it was last indexed.

> [!NOTE]
-> ADLS Gen2 allows directories to be renamed. When a directory is renamed, the timestamps for the blobs in that directory do not get updated. As a result, the indexer will not reindex those blobs. If you need the blobs in a directory to be reindexed after a directory rename because they now have new URLs, you will need to update the `LastModified` timestamp for all the blobs in the directory so that the indexer knows to reindex them during a future run. The virtual directories in Azure Blob Storage cannot be changed, so they do not have this issue.
+> ADLS Gen2 allows directories to be renamed. When a directory is renamed, the timestamps for the blobs in that directory do not get updated. As a result, the indexer will not re-index those blobs. If you need the blobs in a directory to be reindexed after a directory rename because they now have new URLs, you will need to update the `LastModified` timestamp for all the blobs in the directory so that the indexer knows to re-index them during a future run. The virtual directories in Azure Blob Storage cannot be changed, so they do not have this issue.
## Native blob soft delete (preview)
For this deletion detection approach, Cognitive Search depends on the [native bl
1. [Run the indexer](/rest/api/searchservice/run-indexer) or set the indexer to run [on a schedule](search-howto-schedule-indexers.md). When the indexer runs and processes a blob having a soft delete state, the corresponding search document will be removed from the index.
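Assuming the data source otherwise follows the documented blob shape, opting in to native soft delete detection is a matter of adding the policy to the data source definition. A sketch, with placeholder names and connection string:

```json
{
  "name": "blob-datasource",
  "type": "azureblob",
  "credentials": { "connectionString": "<your-storage-connection-string>" },
  "container": { "name": "my-container" },
  "dataDeletionDetectionPolicy": {
    "@odata.type": "#Microsoft.Azure.Search.NativeBlobSoftDeleteDeletionDetectionPolicy"
  }
}
```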
-### Reindexing undeleted blobs (using native soft delete policies)
+### Re-index un-deleted blobs using native soft delete policies
-If you restore a soft deleted blob in Blob storage, the indexer will not always reindex it. This is because the indexer uses the blob's `LastModified` timestamp to determine whether indexing is needed. When a soft deleted blob is undeleted, its `LastModified` timestamp does not get updated, so if the indexer has already processed blobs with more recent `LastModified` timestamps, it won't reindex the undeleted blob.
+If you restore a soft deleted blob in Blob storage, the indexer will not always re-index it. This is because the indexer uses the blob's `LastModified` timestamp to determine whether indexing is needed. When a soft deleted blob is undeleted, its `LastModified` timestamp does not get updated, so if the indexer has already processed blobs with more recent `LastModified` timestamps, it won't re-index the undeleted blob.
To make sure that an undeleted blob is reindexed, you will need to update the blob's `LastModified` timestamp. One way to do this is by resaving the metadata of that blob. You don't need to change the metadata, but resaving the metadata will update the blob's `LastModified` timestamp so that the indexer knows to pick it up. <a name="soft-delete-using-custom-metadata"></a>
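The metadata resave described above can be done with the Blob service's Set Blob Metadata operation, which updates `LastModified` as a side effect. A sketch (account, container, blob, and the metadata key are placeholders; authentication details omitted):

```http
PUT https://<account>.blob.core.windows.net/<container>/<blob>?comp=metadata
x-ms-version: 2020-10-02
x-ms-date: <UTC date>
Authorization: <shared key or bearer token>
x-ms-meta-touched: reindex
```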
-## Custom metadata: Soft delete strategy
+## Soft delete strategy using custom metadata
This method uses custom metadata to indicate whether a search document should be removed from the index. It requires two separate actions: deleting the search document from the index, followed by file deletion in Azure Storage.
There are steps to follow in both Azure Storage and Cognitive Search, but there
1. Run the indexer. Once the indexer has processed the file and deleted the document from the search index, you can then delete the physical file in Azure Storage.
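With the custom metadata approach above, the data source typically carries a soft delete column policy naming the metadata key and marker value. A sketch, assuming a metadata key of `IsDeleted` (both the key and value are your choice, not fixed by the service):

```json
"dataDeletionDetectionPolicy": {
  "@odata.type": "#Microsoft.Azure.Search.SoftDeleteColumnDeletionDetectionPolicy",
  "softDeleteColumnName": "IsDeleted",
  "softDeleteMarkerValue": "true"
}
```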
-## Custom metadata: Re-index undeleted blobs and files
+## Re-index un-deleted blobs and files
You can reverse a soft-delete if the original source file still physically exists in Azure Storage.
search Search Howto Index Sharepoint Online https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-howto-index-sharepoint-online.md
Title: Index data from SharePoint Online (preview)
+ Title: SharePoint indexer (preview)
-description: Set up a SharePoint Online indexer to automate indexing of document library content in Azure Cognitive Search.
+description: Set up a SharePoint indexer to automate indexing of document library content in Azure Cognitive Search.
- Previously updated : 03/01/2021
+ Last updated : 01/19/2022
-# Index data from SharePoint Online
+# Index data from SharePoint document libraries
> [!IMPORTANT]
-> SharePoint Online support is currently in public preview under [Supplemental Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). [Request access](https://aka.ms/azure-cognitive-search/indexer-preview) to this feature, and after access is enabled, use a [preview REST API (2020-06-30-preview or later)](search-api-preview.md) to index your content. There is currently limited portal support and no .NET SDK support.
+> SharePoint indexer support is currently in public preview under [Supplemental Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). [Request access](https://aka.ms/azure-cognitive-search/indexer-preview) to this feature, and after access is enabled, use a [preview REST API (2020-06-30-preview or later)](search-api-preview.md) to index your content. There is currently limited portal support and no .NET SDK support.
-This article describes how to use Azure Cognitive Search to index documents (such as PDFs, Microsoft Office documents, and several other common formats) stored in SharePoint Online document libraries into an Azure Cognitive Search index. First, it explains the basics of setting up and configuring the indexer. Then, it offers a deeper exploration of behaviors and scenarios you are likely to encounter.
+Configure a [search indexer](search-indexer-overview.md) to index documents stored in SharePoint document libraries for full text search in Azure Cognitive Search. This article explains the configuration steps, followed by a deeper exploration of behaviors and scenarios you are likely to encounter.
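For orientation, the configuration steps ultimately produce a data source of type `sharepoint`. The shape below is a sketch drawn from the preview documentation — the connection string format and all values are placeholders, and should be verified against the current preview API:

```json
{
  "name": "sharepoint-datasource",
  "type": "sharepoint",
  "credentials": {
    "connectionString": "SharePointOnlineEndpoint=https://<tenant>.sharepoint.com/sites/<site>;ApplicationId=<app-id>;ApplicationSecret=<secret>;TenantId=<tenant-id>"
  },
  "container": { "name": "defaultSiteLibrary" }
}
```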
> [!NOTE]
-> SharePoint Online supports a granular authorization model that determines per-user access at the document level. The SharePoint Online indexer does not pull these permissions into the search index, and Cognitive Search does not support document-level authorization. When a document is indexed from SharePoint Online into a search service, the content is available to anyone who has read access to the index. If you require document-level permissions, you should investigate security filters to trim results of unauthorized content. For more information, see [Security trimming using Active Directory identities](search-security-trimming-for-azure-search-with-aad.md).
+> SharePoint supports a granular authorization model that determines per-user access at the document level. The SharePoint indexer does not pull these permissions into the search index, and Cognitive Search does not support document-level authorization. When a document is indexed from SharePoint into a search service, the content is available to anyone who has read access to the index. If you require document-level permissions, you should investigate [security filters to trim results](search-security-trimming-for-azure-search-with-aad.md) of unauthorized content.
## Functionality
-An indexer in Azure Cognitive Search is a crawler that extracts searchable data and metadata from a data source. The SharePoint Online indexer will connect to your SharePoint Online site and index documents from one or more Document Libraries. The indexer provides the following functionality:
-+ Index content from one or more SharePoint Online Document Libraries.
-+ The indexer will support incremental indexing meaning that it will identify which content in the Document Library has changed and only index the updated content on future indexing runs. For example, if 5 PDFs are originally indexed by the indexer, then 1 is updated, then the indexer runs again, the indexer will only index the 1 PDF that was updated.
-+ Text and normalized images will be extracted by default from the documents that are indexed. Optionally a skillset can be added to the pipeline for further content enrichment. More information on skillsets can be found in the article [Skillset concepts in Azure Cognitive Search](cognitive-search-working-with-skillsets.md).
+An indexer in Azure Cognitive Search is a crawler that extracts searchable data and metadata from a data source. The SharePoint indexer will connect to your SharePoint site and index documents from one or more document libraries. The indexer provides the following functionality:
+
++ Index content and metadata from one or more document libraries.
++ Incremental indexing, where the indexer identifies which files have changed and indexes only the updated content. For example, if five PDFs are originally indexed and one is updated, only the updated PDF is indexed.
++ Deletion detection is built in. If a document is deleted from a document library, the indexer will detect the delete on the next indexer run and remove the document from the index.
++ Text and normalized images will be extracted by default from the documents that are indexed. Optionally a [skillset](cognitive-search-working-with-skillsets.md) can be added to the pipeline for [AI enrichment](cognitive-search-concept-intro.md).
+
+## Prerequisites
+
++ [SharePoint in Microsoft 365](/sharepoint/introduction) cloud service
+
++ Files in a [document library](https://support.microsoft.com/office/what-is-a-document-library-3b5976dd-65cf-4c9e-bf5a-713c10ca2872)

## Supported document formats
-The Azure Cognitive Search SharePoint Online indexer can extract text from the following document formats:
+The SharePoint indexer can extract text from the following document formats:
[!INCLUDE [search-document-data-sources](../../includes/search-blob-data-sources.md)]
-## Incremental indexing and deletion detection
-By default, the SharePoint Online indexer supports incremental indexing meaning that it will identify which content in the Document Library has changed and only index the updated content on future indexing runs. For example, if 5 Word documents are originally indexed by the indexer, then 1 is updated, then the indexer runs again, the indexer will only re-index the 1 Word document that was updated.
-
-Deletion detection is also supported by default. This means that if a document is deleted from a SharePoint Online document library, the indexer will detect the delete during a future indexer run and remove the document from the index.
+## Configure the SharePoint indexer
-## Setting up SharePoint Online indexing
-To set up the SharePoint Online Indexer, you will need to perform some actions in the Azure portal and some actions using the preview REST API. This preview isn't supported by the SDK.
+To set up the SharePoint indexer, you will need to perform some tasks in the Azure portal, and other tasks through the preview REST API.
- The following video shows how to set up the SharePoint Online indexer.
+The following video shows how to set up the SharePoint indexer.
> [!VIDEO https://www.youtube.com/embed/QmG65Vgl0JI]
To set up the SharePoint Online Indexer, you will need to perform some actions i
When a system-assigned managed identity is enabled, Azure creates an identity for your search service that can be used by the indexer. This identity is used to automatically detect the tenant the search service is provisioned in.
-If the SharePoint Online site is in the same tenant as the search service you will need to enable the system-assigned managed identity for the search service. If the SharePoint Online site is in a different tenant from the search service, system-assigned managed identity doesn't need to be enabled.
+If the SharePoint site is in the same tenant as the search service, you will need to enable the system-assigned managed identity for the search service in the Azure portal. If the SharePoint site is in a different tenant from the search service, skip this step.
:::image type="content" source="media/search-howto-index-sharepoint-online/enable-managed-identity.png" alt-text="Enable system assigned managed identity":::
After selecting **Save** you will see an Object ID that has been assigned to you
### Step 2: Decide which permissions the indexer requires
-The SharePoint Online Indexer supports both [delegated and application](/graph/auth/auth-concepts#delegated-and-application-permissions) permissions. Choose which permissions you want to use based on your scenario:
+The SharePoint indexer supports both [delegated and application](/graph/auth/auth-concepts#delegated-and-application-permissions) permissions. Choose which permissions you want to use based on your scenario:
+ Delegated permissions, where the indexer runs under the identity of the user or app that sent the request. Data access is limited to the sites and files to which the user has access. To support delegated permissions, the indexer requires a [device code prompt](../active-directory/develop/v2-oauth2-device-code.md) to log in on behalf of the user.
-+ Application permissions, where the indexer runs under the identity of the SharePoint Online tenant with access to all sites and files within the SharePoint Online tenant. The indexer requires a [client secret](../active-directory/develop/v2-oauth2-client-creds-grant-flow.md) to access the SharePoint Online tenant. The indexer will also require [tenant admin approval](../active-directory/manage-apps/grant-admin-consent.md) before it can index any content.
+
++ Application permissions, where the indexer runs under the identity of the SharePoint tenant with access to all sites and files within the SharePoint tenant. The indexer requires a [client secret](../active-directory/develop/v2-oauth2-client-creds-grant-flow.md) to access the SharePoint tenant. The indexer will also require [tenant admin approval](../active-directory/manage-apps/grant-admin-consent.md) before it can index any content.

### Step 3: Create an Azure AD application
-The SharePoint Online indexer will use this Azure AD application for authentication.
-1. Navigate to the [Azure portal](https://portal.azure.com/).
+The SharePoint indexer will use this Azure Active Directory (Azure AD) application for authentication.
+
+1. [Sign in to Azure portal](https://portal.azure.com/).
-1. Open the menu on the left side of the main page and select **Azure Active Directory** then select **App registrations**. Select **+ New registration**.
+1. Search for or navigate to **Azure Active Directory**, then select **App registrations**.
+
+1. Select **+ New registration**:
1. Provide a name for your app.
- 2. Select **Single tenant**.
- 3. No redirect URI required.
- 4. Select **Register**
+ 1. Select **Single tenant**.
+ 1. Skip the URI designation step. No redirect URI required.
+ 1. Select **Register**.
-1. Select **API permissions** from the menu on the left, then **Add a permission**, then **Microsoft Graph**.
+1. On the left, select **API permissions**, then **Add a permission**, then **Microsoft Graph**.
- + If the indexer is using delegated API permissions, then select **Delegated permissions** and add the following:
+ + If the indexer is using delegated API permissions, select **Delegated permissions** and add the following:
+ **Delegated - Files.Read.All**
+ **Delegated - Sites.Read.All**
:::image type="content" source="media/search-howto-index-sharepoint-online/delegated-api-permissions.png" alt-text="Delegated API permissions":::
- Delegated permissions allow the search client to connect to SharePoint Online under the security identity of the current user.
+ Delegated permissions allow the search client to connect to SharePoint under the security identity of the current user.
+ If the indexer is using application API permissions, then select **Application permissions** and add the following:
:::image type="content" source="media/search-howto-index-sharepoint-online/application-api-permissions.png" alt-text="Application API permissions":::
- Using application permissions means that the indexer will access the SharePoint site in a service context. So when you run the indexer it will have access to all content in the SharePoint Online tenant, which requires tenant admin approval. A client secret is also required for authentication. Setting up the client secret is described later in this article.
+ Using application permissions means that the indexer will access the SharePoint site in a service context. So when you run the indexer it will have access to all content in the SharePoint tenant, which requires tenant admin approval. A client secret is also required for authentication. Setting up the client secret is described later in this article.
1. Give admin consent.
:::image type="content" source="media/search-howto-index-sharepoint-online/aad-app-grant-admin-consent.png" alt-text="Azure AD app grant admin consent":::
-1. Select the **Authentication** tab. Set **Allow public client flows** to **Yes** then select **Save**.
+1. Select the **Authentication** tab.
+
+1. Set **Allow public client flows** to **Yes** then select **Save**.
1. Select **+ Add a platform**, then **Mobile and desktop applications**, then check `https://login.microsoftonline.com/common/oauth2/nativeclient`, then **Configure**.
1. (Application API Permissions only) To authenticate to the Azure AD application using application permissions, the indexer requires a client secret.
- + Select **Certificates & Secrets** from the menu on the left, then **Client secrets**, then **New client secret**
+ + Select **Certificates & Secrets** from the menu on the left, then **Client secrets**, then **New client secret**.
:::image type="content" source="media/search-howto-index-sharepoint-online/application-client-secret.png" alt-text="New client secret":::
<a name="create-data-source"></a>

### Step 4: Create data source

> [!IMPORTANT]
> Starting in this section you need to use the preview REST API for the remaining steps. If you're not familiar with the Azure Cognitive Search REST API, we suggest taking a look at this [Quickstart](search-get-started-rest.md).

A data source specifies which data to index, credentials needed to access the data, and policies to efficiently identify changes in the data (new, modified, or deleted rows). A data source can be used by multiple indexers in the same search service.

For SharePoint indexing, the data source must have the following required properties:

+ **name** is the unique name of the data source within your search service.
-+ **type** must be "sharepoint". This is case sensitive.
-+ **credentials** provide the SharePoint Online endpoint and the Azure AD application (client) ID. An example SharePoint Online endpoint is `https://microsoft.sharepoint.com/teams/MySharePointSite`. You can get the SharePoint Online endpoint by navigating to the home page of your SharePoint site and copying the URL from the browser.
++ **type** must be "sharepoint". This value is case-sensitive.
++ **credentials** provide the SharePoint endpoint and the Azure AD application (client) ID. An example SharePoint endpoint is `https://microsoft.sharepoint.com/teams/MySharePointSite`. You can get the endpoint by navigating to the home page of your SharePoint site and copying the URL from the browser.
+ **container** specifies which document library to index. More information on creating the container can be found in the [Controlling which documents are indexed](#controlling-which-documents-are-indexed) section of this document.
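Combining these properties, a complete data source definition might look like the following sketch. The site URL and the application GUID are placeholder values, and the `connectionString` uses the delegated-permissions format covered later in this section:

```json
{
  "name" : "sharepoint-datasource",
  "type" : "sharepoint",
  "credentials" : {
    "connectionString" : "SharePointOnlineEndpoint=https://mycompany.sharepoint.com/teams/mysite;ApplicationId=00000000-0000-0000-0000-000000000000"
  },
  "container" : {
    "name" : "defaultSiteLibrary",
    "query" : null
  }
}
```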
-To create a data source:
+To create a data source, call [Create Data Source](/rest/api/searchservice/preview-api/create-or-update-data-source) using preview API version `2020-06-30-Preview` or later.
```http
POST https://[service name].search.windows.net/datasources?api-version=2020-06-30-Preview
api-key: [admin key]
```

#### Connection string format

The format of the connection string changes based on whether the indexer is using delegated API permissions or application API permissions:

+ Delegated API permissions connection string format
- `SharePointOnlineEndpoint=[SharePoint Online site url];ApplicationId=[Azure AD App ID];TenantId=[SharePoint Online site tenant id]`
+ `SharePointOnlineEndpoint=[SharePoint site url];ApplicationId=[Azure AD App ID];TenantId=[SharePoint site tenant id]`
+ Application API permissions connection string format
- `SharePointOnlineEndpoint=[SharePoint Online site url];ApplicationId=[Azure AD App ID];ApplicationSecret=[Azure AD App client secret];TenantId=[SharePoint Online site tenant id]`
+ `SharePointOnlineEndpoint=[SharePoint site url];ApplicationId=[Azure AD App ID];ApplicationSecret=[Azure AD App client secret];TenantId=[SharePoint site tenant id]`
> [!NOTE]
-> If the SharePoint Online site is in the same tenant as the search service and system-assigned managed identity is enabled, `TenantId` doesn't have to be included in the connection string. If the SharePoint Online site is in a different tenant from the search service, `TenantId` must be included.
+> If the SharePoint site is in the same tenant as the search service and system-assigned managed identity is enabled, `TenantId` doesn't have to be included in the connection string. If the SharePoint site is in a different tenant from the search service, `TenantId` must be included.
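Because these connection strings are just semicolon-delimited key/value pairs, they can be assembled programmatically. A minimal sketch in Python; `build_connection_string` is a hypothetical helper (not part of any SDK), and the endpoint, application ID, and tenant ID below are placeholders:

```python
# Sketch: assemble a SharePoint indexer connection string from its parts.
# The key names mirror the two formats documented above.
def build_connection_string(endpoint, app_id, tenant_id=None, app_secret=None):
    parts = [f"SharePointOnlineEndpoint={endpoint}", f"ApplicationId={app_id}"]
    if app_secret is not None:
        # Application API permissions additionally require the client secret.
        parts.append(f"ApplicationSecret={app_secret}")
    if tenant_id is not None:
        # TenantId can be omitted when the site and the search service share a
        # tenant and system-assigned managed identity is enabled.
        parts.append(f"TenantId={tenant_id}")
    return ";".join(parts)

delegated = build_connection_string(
    "https://mycompany.sharepoint.com/teams/mysite",
    "00000000-0000-0000-0000-000000000000",
    tenant_id="11111111-1111-1111-1111-111111111111",
)
print(delegated)
```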
### Step 5: Create an index

The index specifies the fields in a document, attributes, and other constructs that shape the search experience.
-Here's how to create an index with a searchable content field to store the text extracted from documents in a Document Library:
+To create an index, call [Create Index](/rest/api/searchservice/create-index):
```http
POST https://[service name].search.windows.net/indexes?api-version=2020-06-30
api-key: [admin key]
```

> [!IMPORTANT]
-> Only [`metadata_spo_site_library_item_id`](#metadata) may be used as the key field in an index populated by the SharePoint Online indexer. If a key field doesn't exist in the data source, `metadata_spo_site_library_item_id` is automatically mapped to the key field.
-
-For more information, see [Create Index (REST API)](/rest/api/searchservice/create-index).
+> Only [`metadata_spo_site_library_item_id`](#metadata) may be used as the key field in an index populated by the SharePoint indexer. If a key field doesn't exist in the data source, `metadata_spo_site_library_item_id` is automatically mapped to the key field.
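A minimal index definition consistent with this requirement might look like the following sketch. Apart from the `id` key field (which the indexer populates from `metadata_spo_site_library_item_id`) and the metadata fields documented later in this article, the field list is illustrative:

```json
{
  "name" : "sharepoint-index",
  "fields" : [
    { "name" : "id", "type" : "Edm.String", "key" : true, "searchable" : false },
    { "name" : "metadata_spo_item_weburi", "type" : "Edm.String", "searchable" : false },
    { "name" : "metadata_spo_item_path", "type" : "Edm.String", "searchable" : false },
    { "name" : "content", "type" : "Edm.String", "searchable" : true }
  ]
}
```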
### Step 6: Create an indexer
-An indexer connects a data source with a target search index and provides a schedule to automate the data refresh. Once the index and data source have been created, you're ready to create the indexer!
-During this section you'll be asked to login with your organization credentials that have access to the SharePoint site. If possible, we recommend creating a new organizational user account and giving that new user the exact permissions that you want the indexer to have.
+An indexer connects a data source with a target search index and provides a schedule to automate the data refresh. Once the index and data source have been created, you're ready to create the indexer.
+
+During this section you'll be asked to sign in with your organization credentials that have access to the SharePoint site. If possible, we recommend creating a new organizational user account and giving that new user the exact permissions that you want the indexer to have.
There are a few steps to creating the indexer:
-1. Send a request to attempt to create the indexer.
+1. Send a [Create Indexer](/rest/api/searchservice/preview-api/create-or-update-indexer) request:
```http
POST https://[service name].search.windows.net/indexers?api-version=2020-06-30-Preview
Content-Type: application/json
api-key: [admin key]
- {
- "name" : "sharepoint-indexer",
- "dataSourceName" : "sharepoint-datasource",
- "targetIndexName" : "sharepoint-index",
- "fieldMappings" : [
+ {
+ "name" : "sharepoint-indexer",
+ "dataSourceName" : "sharepoint-datasource",
+ "targetIndexName" : "sharepoint-index",
+ "parameters": {
+ "batchSize": null,
+ "maxFailedItems": null,
+ "maxFailedItemsPerBatch": null,
+ "base64EncodeKeys": null,
+      "configuration": {
+ "indexedFileNameExtensions" : null,
+ "excludedFileNameExtensions" : null,
+ "dataToExtract": "contentAndMetadata"
+ }
+ },
+ "schedule" : { },
+ "fieldMappings" : [
{
  "sourceFieldName" : "metadata_spo_site_library_item_id",
  "targetFieldName" : "id",
  "mappingFunction" : {
    "name" : "base64Encode"
  }
- }
- ]
- }
-
+      }
+    ]
+  }
```
-1. When creating the indexer for the first time it will fail and you'll see the following error. Go to the link in the error message. If you don't go to the link within 10 minutes the code will expire and you'll need to recreate the [data source](#create-data-source).
+1. When creating the indexer for the first time it will fail and you'll see the following error. Go to the link in the error message. If you don't go to the link within 10 minutes the code will expire and you'll need to recreate the [data source](#create-data-source).
```http
{
}
```
-1. Provide the code that was provided in the error message.
+1. Provide the code that was provided in the error message.
:::image type="content" source="media/search-howto-index-sharepoint-online/enter-device-code.png" alt-text="Enter device code":::
-1. The SharePoint indexer will access the SharePoint content as the signed-in user. The user that logs in during this step will be that signed-in user. So, if you log in with a user account that doesn't have access to a document in the Document Library that you want to index, the indexer won't have access to that document.
+1. The SharePoint indexer will access the SharePoint content as the signed-in user. The user that logs in during this step will be that signed-in user. So, if you log in with a user account that doesn't have access to a document in the Document Library that you want to index, the indexer won't have access to that document.
If possible, we recommend creating a new user account and giving that new user the exact permissions that you want the indexer to have.
-1. Approve the permissions that are being requested.
+1. Approve the permissions that are being requested.
:::image type="content" source="media/search-howto-index-sharepoint-online/aad-app-approve-api-permissions.png" alt-text="Approve API permissions":::
-1. Resend the indexer create request. This time the request should succeed.
+1. Resend the indexer create request. This time the request should succeed.
```http
POST https://[service name].search.windows.net/indexers?api-version=2020-06-30-Preview
{
  "name" : "sharepoint-indexer",
  "dataSourceName" : "sharepoint-datasource",
- "targetIndexName" : "sharepoint-index"
+ "targetIndexName" : "sharepoint-index",
+ "parameters": {
+ "batchSize": null,
+ "maxFailedItems": null,
+ "maxFailedItemsPerBatch": null,
+ "base64EncodeKeys": null,
+    "configuration": {
+ "indexedFileNameExtensions" : null,
+ "excludedFileNameExtensions" : null,
+ "dataToExtract": "contentAndMetadata"
+ }
+ },
+ "schedule" : { },
+ "fieldMappings" : [
+ {
+ "sourceFieldName" : "metadata_spo_site_library_item_id",
+ "targetFieldName" : "id",
+ "mappingFunction" : {
+ "name" : "base64Encode"
+ }
+ }
    ]
}
```
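The `fieldMappings` in both requests above push `metadata_spo_site_library_item_id` through the built-in `base64Encode` mapping function, because document keys only allow a restricted, URL-safe character set. The sketch below illustrates the idea with Python's URL-safe Base64; the service's built-in function may differ in details such as padding, and the combined ID shown is illustrative:

```python
import base64

# Sketch: why the field mapping uses base64Encode. A raw SharePoint item ID can
# contain characters that are not allowed in a document key, so it is encoded
# into a URL-safe form before being stored in the "id" key field.
def encode_key(raw_id: str) -> str:
    return base64.urlsafe_b64encode(raw_id.encode("utf-8")).decode("ascii")

def decode_key(key: str) -> str:
    return base64.urlsafe_b64decode(key.encode("ascii")).decode("utf-8")

raw = "site-guid, library-guid, item-guid"  # illustrative combined ID
key = encode_key(raw)
print(key)
```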
:::image type="content" source="media/search-howto-index-sharepoint-online/no-admin-approval-error.png" alt-text="Admin approval required":::

### Step 7: Check the indexer status
-After the indexer has been created you can check the indexer status by making the following request.
+
+After the indexer has been created, you can call [Get Indexer Status](/rest/api/searchservice/get-indexer-status):
```http
GET https://[service name].search.windows.net/indexers/sharepoint-indexer/status?api-version=2020-06-30-Preview
Content-Type: application/json
api-key: [admin key]
```
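The status response can then be inspected for the device-login condition described in the next section. A sketch assuming the documented `status`/`lastResult` shape of the Get Indexer Status response; the `errorMessage` text and code below are illustrative, not captured from a real service:

```python
import json

# Sketch: decide whether the last indexer run needs a new device-code sign-in.
sample_response = json.loads("""
{
  "status": "running",
  "lastResult": {
    "status": "transientFailure",
    "errorMessage": "To sign in, open https://microsoft.com/devicelogin and enter the code ABC123."
  }
}
""")

last = sample_response["lastResult"]
needs_login = last["status"] != "success" and "devicelogin" in (last.get("errorMessage") or "")
print(needs_login)
```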
-More information on the indexer status can be found here: [Get Indexer Status](/rest/api/searchservice/get-indexer-status).
- ## Updating the data source
-If there are no updates to the data source object, the indexer can run on a schedule without any user interaction. However, every time the Azure Cognitive Search data source object is updated, you will need to login again in order for the indexer to run. For example, if you change the data source query, you will need to login again using the `https://microsoft.com/devicelogin` and a new code.
+
+If there are no updates to the data source object, the indexer can run on a schedule without any user interaction. However, every time the Azure Cognitive Search data source object is updated, you will need to sign in again in order for the indexer to run. For example, if you change the data source query, sign in again using `https://microsoft.com/devicelogin` and a new code.
Once the data source has been updated, follow these steps:
-1. Manually kick off a run of the indexer.
+1. Call [Run Indexer](/rest/api/searchservice/run-indexer) to manually kick off [indexer execution](search-howto-run-reset-indexers.md).
```http
POST https://[service name].search.windows.net/indexers/sharepoint-indexer/run?api-version=2020-06-30-Preview
api-key: [admin key]
```
- More information on the indexer run request can be found here: [Run Indexer](/rest/api/searchservice/run-indexer).
-
-1. Check the indexer status. If the last indexer run has an error telling you to go to `https://microsoft.com/devicelogin`, go to that page and provide the new code.
+1. Check the [indexer status](/rest/api/searchservice/get-indexer-status). If the last indexer run has an error telling you to go to `https://microsoft.com/devicelogin`, go to that page and provide the new code.
```http
GET https://[service name].search.windows.net/indexers/sharepoint-indexer/status?api-version=2020-06-30-Preview
api-key: [admin key]
```
- More information on the indexer status can be found here: [Get Indexer Status](/rest/api/searchservice/get-indexer-status).
+1. Sign in.
-1. Login
-
-1. Manually kick off an indexer run again and check the indexer status. This time the indexer run should successfully start.
+1. Manually run the indexer again and check the indexer status. This time the indexer run should successfully start.
<a name="metadata"></a>

## Indexing document metadata
-If you have set the indexer to index document metadata, the following metadata will be available to index.
+
+If you have set the indexer to index document metadata (`"dataToExtract": "contentAndMetadata"`), the following metadata will be available to index.
| Identifier | Type | Description |
| - | -- | -- |
| metadata_spo_site_library_item_id | Edm.String | The combination key of site ID, library ID and item ID which uniquely identifies an item in a document library for a site. |
-| metadata_spo_site_id | Edm.String | The ID of the SharePoint Online site. |
+| metadata_spo_site_id | Edm.String | The ID of the SharePoint site. |
| metadata_spo_library_id | Edm.String | The ID of the document library. |
| metadata_spo_item_id | Edm.String | The ID of the (document) item in the library. |
| metadata_spo_item_last_modified | Edm.DateTimeOffset | The last modified date/time (UTC) of the item. |
| metadata_spo_item_weburi | Edm.String | The URI of the item. |
| metadata_spo_item_path | Edm.String | The combination of the parent path and item name. |
-The SharePoint Online indexer also supports metadata specific to each document type. More information can be found in [Content metadata properties used in Azure Cognitive Search](search-blob-metadata-properties.md).
+The SharePoint indexer also supports metadata specific to each document type. More information can be found in [Content metadata properties used in Azure Cognitive Search](search-blob-metadata-properties.md).
> [!NOTE]
-> To index custom metadata, [`additionalColumns` must be specified in the query definition](#query)
+> To index custom metadata, "additionalColumns" must be specified in the [query parameter of the data source](#query).
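The escaping rule for "additionalColumns" values can be sketched as follows. The column names are hypothetical, and escaping backslashes first is a defensive choice of this sketch rather than a documented requirement:

```python
import json

# Sketch: escape commas and semicolons in column names for the additionalColumns
# keyword. The raw query value uses a single backslash per escape; once the
# value is embedded in a JSON document, each backslash is doubled.
def escape_column(name: str) -> str:
    return name.replace("\\", "\\\\").replace(",", "\\,").replace(";", "\\;")

columns = ["MyCustomColumn", "MyCustomColumnWith,", "MyCustomColumnWith;"]
query_value = "additionalColumns=" + ",".join(escape_column(c) for c in columns)
print(query_value)              # raw query value: single-backslash escapes
print(json.dumps(query_value))  # embedded in JSON: the backslashes are doubled
```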
-<a name="controlling-which-documents-are-indexed"></a>
+## Include or exclude by file type
-## Controlling which documents are indexed
-A single SharePoint Online indexer can index content from one or more Document Libraries. Use the *container* parameter when creating your data source to indicate the document libraries that you want to index.
-The data source *container* has two properties: *name* and *query*.
+You can control which files are indexed by setting inclusion and exclusion criteria in the "parameters" section of the indexer definition.
-### Name
-The *name* property is required and must be one of three values:
-+ *defaultSiteLibrary*
- + Index all the content from the sites default document library.
-+ *allSiteLibraries*
- + Index all the content from all the document libraries in a site. This will not index document libraries from a subsite. Those can be specified in the *query* though.
-+ *useQuery*
- + Only index content defined in the *query*.
+Include specific file extensions by setting `"indexedFileNameExtensions"` to a comma-separated list of file extensions (with a leading dot). Exclude specific file extensions by setting `"excludedFileNameExtensions"` to the extensions that should be skipped. If the same extension is in both lists, it will be excluded from indexing.
-<a name="query"></a>
+```http
+PUT /indexers/[indexer name]?api-version=2020-06-30-Preview
+{
+ "parameters" : {
+ "configuration" : {
+ "indexedFileNameExtensions" : ".pdf, .docx",
+ "excludedFileNameExtensions" : ".png, .jpeg"
+ }
+ }
+}
+```
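The interaction of the two lists can be sketched as a predicate; `should_index` is an illustrative helper, not a service API:

```python
# Sketch: how indexedFileNameExtensions and excludedFileNameExtensions combine.
# Per the rule above, an extension present in both lists is excluded; an empty
# inclusion list means all extensions are candidates.
def should_index(filename, indexed_exts, excluded_exts):
    indexed = {e.strip().lower() for e in indexed_exts.split(",") if e.strip()}
    excluded = {e.strip().lower() for e in excluded_exts.split(",") if e.strip()}
    ext = "." + filename.rsplit(".", 1)[-1].lower() if "." in filename else ""
    if ext in excluded:
        return False
    return not indexed or ext in indexed

print(should_index("report.pdf", ".pdf, .docx", ".png, .jpeg"))
```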
-### Query
-The *query* property is made up of keyword/value pairs. The below are the keywords that can be used. The values are either site urls or document library urls.
+<a name="controlling-which-documents-are-indexed"></a>
-> [!NOTE]
-> To get the value for a particular keyword, we recommend opening SharePoint Online in a browser, navigating to the Document Library that you're trying to include/exclude and copying the URI from the browser. This is the easiest way to get the value to use with a keyword in the query.
+## Controlling which documents are indexed
-| Keyword | Query Description | Example |
-| - | -- | -- |
-| null | If null or empty, index either the default document library or all document libraries depending on the container name. | Index all content from the default site library: <br><br> ``` "container" : { "name" : "defaultSiteLibrary", "query" : null } ``` |
-| includeLibrariesInSite | Index content from all libraries in defined site in the connection string. These are limited to subsites of your site <br><br> The *query* value for this keyword should be the URI of the site or subsite. | Index all content from all the document libraries in mysite. <br><br> ``` "container" : { "name" : "useQuery", "query" : "includeLibrariesInSite=https://mycompany.sharepoint.com/mysite" } ``` |
-| includeLibrary | Index content from this library. <br><br> The *query* value for this keyword should be in one of the following formats: <br><br> Example 1: <br><br> *includeLibrary=[site or subsite]/[document library]* <br><br> Example 2: <br><br> URI copied from your browser. | Index all content from MyDocumentLibrary: <br><br> Example 1: <br><br> ``` "container" : { "name" : "useQuery", "query" : "includeLibrary=https://mycompany.sharepoint.com/mysite/MyDocumentLibrary" } ``` <br><br> Example 2: <br><br> ``` "container" : { "name" : "useQuery", "query" : "includeLibrary=https://mycompany.sharepoint.com/teams/mysite/MyDocumentLibrary/Forms/AllItems.aspx" } ``` |
-| excludeLibrary | Do not index content from this library. <br><br> The *query* value for this keyword should be in one of the following formats: <br><br> Example 1: <br><br> *excludeLibrary=[site or subsite URI]/[document library]* <br><br> Example 2: <br><br> URI copied from your browser. | Index all the content from all my libraries except for MyDocumentLibrary: <br><br> Example 1: <br><br> ``` "container" : { "name" : "useQuery", "query" : "includeLibrariesInSite=https://mysite.sharepoint.com/subsite1; excludeLibrary=https://mysite.sharepoint.com/subsite1/MyDocumentLibrary" } ``` <br><br> Example 2: <br><br> ``` "container" : { "name" : "useQuery", "query" : "includeLibrariesInSite=https://mycompany.sharepoint.com/teams/mysite; excludeLibrary=https://mycompany.sharepoint.com/teams/mysite/MyDocumentLibrary/Forms/AllItems.aspx" } ``` |
-| additionalColumns | Index columns from this library. <br><br> The query value for this keyword should include a comma-separated list of column names you want to index. Use a double backslash to escape semicolons and commas in column names: <br><br> Example 1: <br><br> additionalColumns=MyCustomColumn,MyCustomColumn2 <br><br> Example 2: <br><br> additionalColumns=MyCustomColumnWith\\,,MyCustomColumn2With\\; | Index all content from MyDocumentLibrary: <br><br> Example 1: <br><br> ``` "container" : { "name" : "useQuery", "query" : "includeLibrary=https://mycompany.sharepoint.com/mysite/MyDocumentLibrary;additionalColumns=MyCustomColumn,MyCustomColumn2" } ``` <br><br> Note the double backslashes when escaping characters: JSON requires a backslash is escaped with another backslash. <br><br> Example 2: <br><br> ``` "container" : { "name" : "useQuery", "query" : "includeLibrary=https://mycompany.sharepoint.com/teams/mysite/MyDocumentLibrary/Forms/AllItems.aspx;additionalColumns=MyCustomColumnWith\\,,MyCustomColumnWith\\;" } ``` |
+A single SharePoint indexer can index content from one or more document libraries. Use the "container" parameter on the data source definition to indicate which sites and document libraries to index from.
+The [data source "container" section](#create-data-source) has two properties for this task: "name" and "query".
-## Index by file type
-You can control which documents are indexed and which are skipped.
+### Name
-### Include documents having specific file extensions
-You can index only the documents with the file name extensions you specify by using the `indexedFileNameExtensions` indexer configuration parameter. The value is a string containing a comma-separated list of file extensions (with a leading dot). For example, to index only the .PDF and .DOCX documents, do this:
+The "name" property is required and must be one of three values:
-```http
-PUT https://[service name].search.windows.net/indexers/[indexer name]?api-version=2020-06-30-Preview
-Content-Type: application/json
-api-key: [admin key]
+| Value | Description |
+|-|-|
| defaultSiteLibrary | Index all the content from the site's default document library. |
+| allSiteLibraries | Index all the content from all the document libraries in a site. This will not index document libraries from a subsite. Those can be specified in the "query" though. |
+| useQuery | Only index content defined in the "query". |
-{
- ... other parts of indexer definition
- "parameters" : { "configuration" : { "indexedFileNameExtensions" : ".pdf,.docx" } }
-}
-```
+<a name="query"></a>
-### Exclude documents having specific file extensions
-You can exclude documents with specific file name extensions from indexing by using the `excludedFileNameExtensions` configuration parameter. The value is a string containing a comma-separated list of file extensions (with a leading dot). For example, to index all content except those with the .PNG and .JPEG extensions, do this:
+### Query
-```http
-PUT https://[service name].search.windows.net/indexers/[indexer name]?api-version=2020-06-30-Preview
-Content-Type: application/json
-api-key: [admin key]
+The "query" parameter of the data source is made up of keyword/value pairs. The following keywords can be used. The values are either site URLs or document library URLs.
-{
- ... other parts of indexer definition
- "parameters" : { "configuration" : { "excludedFileNameExtensions" : ".png,.jpeg" } }
-}
-```
+> [!NOTE]
+> To get the value for a particular keyword, we recommend navigating to the document library that you're trying to include/exclude and copying the URI from the browser. This is the easiest way to get the value to use with a keyword in the query.
-If both `indexedFileNameExtensions` and `excludedFileNameExtensions` parameters are present, Azure Cognitive Search first looks at `indexedFileNameExtensions`, then at `excludedFileNameExtensions`. This means that if the same file extension is present in both lists, it will be excluded from indexing.
+| Keyword | Value description and examples |
+| - | |
+| null | If null or empty, index either the default document library or all document libraries depending on the container name. <br><br>Example: <br><br>``` "container" : { "name" : "defaultSiteLibrary", "query" : null } ``` |
+| includeLibrariesInSite | Index content from all libraries under the specified site in the connection string. These are limited to subsites of your site. The value should be the URI of the site or subsite. <br><br>Example: <br><br>```"container" : { "name" : "useQuery", "query" : "includeLibrariesInSite=https://mycompany.sharepoint.com/mysite" }``` |
+| includeLibrary | Index all content from this library. The value is the fully-qualified path to the library, which can be copied from your browser: <br><br>Example 1 (fully-qualified path): <br><br>```"container" : { "name" : "useQuery", "query" : "includeLibrary=https://mycompany.sharepoint.com/mysite/MyDocumentLibrary" }``` <br><br>Example 2 (URI copied from your browser): <br><br>```"container" : { "name" : "useQuery", "query" : "includeLibrary=https://mycompany.sharepoint.com/teams/mysite/MyDocumentLibrary/Forms/AllItems.aspx" }``` |
+| excludeLibrary | Do not index content from this library. The value is the fully-qualified path to the library, which can be copied from your browser: <br><br> Example 1 (fully-qualified path): <br><br>```"container" : { "name" : "useQuery", "query" : "includeLibrariesInSite=https://mysite.sharepoint.com/subsite1; excludeLibrary=https://mysite.sharepoint.com/subsite1/MyDocumentLibrary" }``` <br><br> Example 2 (URI copied from your browser): <br><br>```"container" : { "name" : "useQuery", "query" : "includeLibrariesInSite=https://mycompany.sharepoint.com/teams/mysite; excludeLibrary=https://mycompany.sharepoint.com/teams/mysite/MyDocumentLibrary/Forms/AllItems.aspx" }``` |
+| additionalColumns | Index columns from the document library. The value is a comma-separated list of column names you want to index. Use a double backslash to escape semicolons and commas in co