Updates from: 03/20/2021 04:05:35
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Apple Sso Plugin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/apple-sso-plugin.md
Title: Microsoft Enterprise SSO plug-in for Apple devices
-description: Learn about Microsoft's Azure Active Directory SSO plug-in for iOS, iPadOS, and macOS devices.
+description: Learn about the Azure Active Directory SSO plug-in for iOS, iPadOS, and macOS devices.
-# Microsoft Enterprise SSO plug-in for Apple devices (Preview)
+# Microsoft Enterprise SSO plug-in for Apple devices (preview)
>[!IMPORTANT]
> This feature [!INCLUDE [PREVIEW BOILERPLATE](../../../includes/active-directory-develop-preview.md)]
-The *Microsoft Enterprise SSO plug-in for Apple devices* provides single sign-on (SSO) for Azure Active Directory (Azure AD) accounts on macOS, iOS, and iPadOS across all applications that support Apple's [Enterprise Single Sign-On](https://developer.apple.com/documentation/authenticationservices) feature. This includes older applications your business might depend on but that don't yet support the latest identity libraries or protocols. Microsoft worked closely with Apple to develop this plug-in to increase your application's usability while providing the best protection that Apple and Microsoft can provide.
+The *Microsoft Enterprise SSO plug-in for Apple devices* provides single sign-on (SSO) for Azure Active Directory (Azure AD) accounts on macOS, iOS, and iPadOS across all applications that support Apple's [enterprise single sign-on](https://developer.apple.com/documentation/authenticationservices) feature. The plug-in even provides SSO for older applications that your business might depend on but that don't yet support the latest identity libraries or protocols. Microsoft worked closely with Apple to develop this plug-in to increase your application's usability while providing the best protection available.
-The Enterprise SSO plug-in is currently available as a built-in feature of the following apps:
+The Enterprise SSO plug-in is currently a built-in feature of the following apps:
-* [Microsoft Authenticator](../user-help/user-help-auth-app-overview.md) - iOS, iPadOS
-* Microsoft Intune [Company Portal](/mem/intune/apps/apps-company-portal-macos) - macOS
+* [Microsoft Authenticator](../user-help/user-help-auth-app-overview.md): iOS, iPadOS
+* Microsoft Intune [Company Portal](/mem/intune/apps/apps-company-portal-macos): macOS
## Features

The Microsoft Enterprise SSO plug-in for Apple devices offers the following benefits:

-- Provides SSO for Azure AD accounts across all applications that support Apple's Enterprise Single Sign-On feature.
-- Can be enabled by any mobile device management (MDM) solution.
-- Extends SSO to applications that do not yet use Microsoft identity platform libraries.
-- Extends SSO to applications that use OAuth2, OpenID Connect, and SAML.
+- It provides SSO for Azure AD accounts across all applications that support the Apple Enterprise SSO feature.
+- It can be enabled by any mobile device management (MDM) solution.
+- It extends SSO to applications that don't yet use Microsoft identity platform libraries.
+- It extends SSO to applications that use OAuth 2, OpenID Connect, and SAML.
## Requirements
-To use Microsoft Enterprise SSO plug-in for Apple devices:
+To use the Microsoft Enterprise SSO plug-in for Apple devices:
-- Device must **support** and have an app that includes the the Microsoft Enterprise SSO plug-in for Apple devices **installed**:
- - iOS 13.0+: [Microsoft Authenticator app](../user-help/user-help-auth-app-overview.md)
- - iPadOS 13.0+: [Microsoft Authenticator app](../user-help/user-help-auth-app-overview.md)
- - macOS 10.15+: [Intune Company Portal app](/mem/intune/user-help/enroll-your-device-in-intune-macos-cp)
-- Device must be **MDM-enrolled** (for example, with Microsoft Intune).
-- Configuration must be **pushed to the device** to enable the Enterprise SSO plug-in on the device. This security constraint is required by Apple.
+- The device must *support* and have an installed app that has the Microsoft Enterprise SSO plug-in for Apple devices:
+ - iOS 13.0 and later: [Microsoft Authenticator app](../user-help/user-help-auth-app-overview.md)
+ - iPadOS 13.0 and later: [Microsoft Authenticator app](../user-help/user-help-auth-app-overview.md)
+ - macOS 10.15 and later: [Intune Company Portal app](/mem/intune/user-help/enroll-your-device-in-intune-macos-cp)
+- The device must be *enrolled in MDM*, for example, through Microsoft Intune.
+- Configuration must be *pushed to the device* to enable the Enterprise SSO plug-in. Apple requires this security constraint.
-### iOS requirements:
-- iOS 13.0 or higher must be installed on the device.
-- A Microsoft application that provides the Microsoft Enterprise SSO plug-in for Apple devices must be installed on the device. For Public Preview, these applications are the [Microsoft Authenticator app](/intune/user-help/user-help-auth-app-overview.md).
+iOS requirements:
+- iOS 13.0 or later must be installed on the device.
+- A Microsoft application that provides the Microsoft Enterprise SSO plug-in for Apple devices must be installed on the device. During the public preview, this application is the [Microsoft Authenticator app](/intune/user-help/user-help-auth-app-overview.md).
-### macOS requirements:
-- macOS 10.15 or higher must be installed on the device.
-- A Microsoft application that provides the Microsoft Enterprise SSO plug-in for Apple devices must be installed on the device. For Public Preview, these applications include the [Intune Company Portal app](/intune/user-help/enroll-your-device-in-intune-macos-cp.md).
+macOS requirements:
+- macOS 10.15 or later must be installed on the device.
+- A Microsoft application that provides the Microsoft Enterprise SSO plug-in for Apple devices must be installed on the device. During the public preview, this application is the [Intune Company Portal app](/intune/user-help/enroll-your-device-in-intune-macos-cp.md).
-## Enable the SSO plug-in with mobile device management (MDM)
+## Enable the SSO plug-in
+Use the following information to enable the SSO plug-in by using MDM.
### Microsoft Intune configuration
-If you use Microsoft Intune as your MDM service, you can use built-in configuration profile settings to enable the Microsoft Enterprise SSO plug-in.
+If you use Microsoft Intune as your MDM service, you can use built-in configuration profile settings to enable the Microsoft Enterprise SSO plug-in:
-First, configure the [Single sign-on app extension](/mem/intune/configuration/device-features-configure#single-sign-on-app-extension) settings of a configuration profile and [assign the profile to a user or device group](/mem/intune/configuration/device-profile-assign) (if not already assigned).
+1. Configure the [SSO app extension](/mem/intune/configuration/device-features-configure#single-sign-on-app-extension) settings of a configuration profile.
+1. If the profile isn't already assigned, [assign the profile to a user or device group](/mem/intune/configuration/device-profile-assign).
The profile settings that enable the SSO plug-in are automatically applied to the group's devices the next time each device checks in with Intune.

### Manual configuration for other MDM services
-If you're not using Microsoft Intune for mobile device management, use the following parameters to configure the Microsoft Enterprise SSO plug-in for Apple devices.
+If you don't use Intune for MDM, use the following parameters to configure the Microsoft Enterprise SSO plug-in for Apple devices.
-#### iOS settings:
+iOS settings:
- **Extension ID**: `com.microsoft.azureauthenticator.ssoextension`
-- **Team ID**: (this field is not needed for iOS)
+- **Team ID**: This field isn't needed for iOS.
-#### macOS settings:
+macOS settings:
- **Extension ID**: `com.microsoft.CompanyPortalMac.ssoextension`
- **Team ID**: `UBF8T346G9`
-#### Common settings:
+Common settings:
- **Type**: Redirect
- `https://login.microsoftonline.com`
- `https://login.usgovcloudapi.net`
- `https://login-us.microsoftonline.com`
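Taken together, these settings form the SSO app-extension payload that your MDM service pushes to devices. The following sketch shows the iOS values from this article, rendered as JSON for readability only; actual profiles are Apple property lists, the key names (`ExtensionIdentifier`, `TeamIdentifier`, `Type`, `URLs`) follow Apple's extensible SSO payload, and each MDM vendor's console presents them differently:

```json
{
  "ExtensionIdentifier": "com.microsoft.azureauthenticator.ssoextension",
  "Type": "Redirect",
  "URLs": [
    "https://login.microsoftonline.com",
    "https://login.usgovcloudapi.net",
    "https://login-us.microsoftonline.com"
  ]
}
```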
-### Additional configuration options
-Additional configuration options can be added to extend SSO functionality to additional apps.
+### More configuration options
+You can add more configuration options to extend SSO functionality to other apps.
#### Enable SSO for apps that don't use a Microsoft identity platform library
-The SSO plug-in allows any application to participate in single sign-on even if it was not developed using a Microsoft SDK like the Microsoft Authentication Library (MSAL).
+The SSO plug-in allows any application to participate in SSO even if it wasn't developed by using a Microsoft SDK like Microsoft Authentication Library (MSAL).
-The SSO plug-in is installed automatically by devices that have downloaded the Microsoft Authenticator app on iOS and iPadOS or Intune Company Portal app on macOS and registered their device with your organization. Your organization likely uses the Authenticator app today for scenarios like multi-factor authentication, password-less authentication, and conditional access. It can be turned on for your applications using any MDM provider, although Microsoft has made it easy to configure inside the Microsoft Endpoint Manager of Intune. An allow list is used to configure these applications to use the SSO plugin.
+The SSO plug-in is installed automatically by devices that have:
+* Downloaded the Authenticator app on iOS or iPadOS, or downloaded the Intune Company Portal app on macOS.
+* Registered their device with your organization.
+
+Your organization likely uses the Authenticator app for scenarios like multifactor authentication (MFA), passwordless authentication, and conditional access. By using an MDM provider, you can turn on the SSO plug-in for your applications. Microsoft has made it easy to configure the plug-in inside the Microsoft Endpoint Manager in Intune. An allowlist is used to configure these applications to use the SSO plug-in.
>[!IMPORTANT]
-> Only apps that use native Apple network technologies or webviews are supported. If an application ships its own network layer implementation, Microsoft Enterprise SSO plug-in is not supported.
+> The Microsoft Enterprise SSO plug-in supports only apps that use native Apple network technologies or webviews. It doesn't support applications that ship their own network layer implementation.
-Use the following parameters to configure the Microsoft Enterprise SSO plug-in for apps that don't use a Microsoft identity platform library:
+Use the following parameters to configure the Microsoft Enterprise SSO plug-in for apps that don't use a Microsoft identity platform library.
-If you want to provide a list of specific apps:
+To provide a list of specific apps, use these parameters:
- **Key**: `AppAllowList`
- **Type**: `String`
-- **Value**: Comma-delimited list of application bundle IDs for the applications that are allowed to participate in the SSO
+- **Value**: Comma-delimited list of application bundle IDs for the applications that are allowed to participate in SSO.
- **Example**: `com.contoso.workapp, com.contoso.travelapp`
-Or if you want to provide a list of prefixes:
+To provide a list of prefixes, use these parameters:
- **Key**: `AppPrefixAllowList`
- **Type**: `String`
-- **Value**: Comma-delimited list of application bundle ID prefixes for the applications that are allowed to participate in the SSO. Note that this will enable all apps starting with a particular prefix to participate in the SSO
+- **Value**: Comma-delimited list of application bundle ID prefixes for the applications that are allowed to participate in SSO. This parameter allows all apps that start with a particular prefix to participate in SSO.
- **Example**: `com.contoso., com.fabrikam.`
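As an illustration, the allowlist keys might be supplied in the extension's additional configuration data (`ExtensionData` in Apple's payload) like the following sketch. It's shown as JSON for readability; MDM consoles typically take these as key/value pairs, and the bundle IDs are the placeholder examples from above:

```json
{
  "AppAllowList": "com.contoso.workapp, com.contoso.travelapp",
  "AppPrefixAllowList": "com.contoso., com.fabrikam."
}
```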
-[Consented apps](./application-consent-experience.md) that are allowed by the MDM admin to participate in the SSO can silently get a token for the end user. Therefore, it is important to only add trusted applications to the allow list.
+[Consented apps](./application-consent-experience.md) that the MDM admin allows to participate in SSO can silently get a token for the end user. So add only trusted applications to the allowlist.
>[!NOTE]
-> You don't need to add applications that use MSAL or ASWebAuthenticationSession to this list. Those applications are enabled by default.
-
-##### How to discover app bundle identifiers on iOS devices
+> You don't need to add applications that use MSAL or ASWebAuthenticationSession to the list of apps that can participate in SSO. Those applications are enabled by default.
-Apple does not provide an easy way to discover Bundle IDs from the App Store. The easiest way to discover the Bundle IDs of the apps who want to use for SSO is to ask your vendor or app developer. If that option is not available, you can use your MDM configuration to discover the Bundle IDs.
+##### Find app bundle identifiers on iOS devices
-Temporarily enable following flag in your MDM configuration:
+Apple provides no easy way to get bundle IDs from the App Store. The easiest way to get the bundle IDs of the apps you want to use for SSO is to ask your vendor or app developer. If that option isn't available, you can use your MDM configuration to find the bundle IDs:
-- **Key**: `admin_debug_mode_enabled`
-- **Type**: `Integer`
-- **Value**: 1 or 0
+1. Temporarily enable the following flag in your MDM configuration:
-When this flag is on sign-in to iOS apps on the device you want to know the Bundle ID for. Then open Microsoft Authenticator app -> Help -> Send logs -> View logs.
+ - **Key**: `admin_debug_mode_enabled`
+ - **Type**: `Integer`
+ - **Value**: 1 or 0
+1. When this flag is on, sign in to iOS apps on the device for which you want to know the bundle ID.
+1. In the Authenticator app, select **Help** > **Send logs** > **View logs**.
+1. In the log file, look for the following line: `[ADMIN MODE] SSO extension has captured following app bundle identifiers`. This line should capture all application bundle IDs that are visible to the SSO extension.
-In the log file, look for following line:
+Use the bundle IDs to configure SSO for the apps.
-`[ADMIN MODE] SSO extension has captured following app bundle identifiers:`
+#### Allow users to sign in from unknown applications and the Safari browser
-This should capture all application bundle identifiers visible to the SSO extension. You can then use those identifiers to configure the SSO for those apps.
+By default, the Microsoft Enterprise SSO plug-in provides SSO for authorized apps only when a user has signed in from an app that uses a Microsoft identity platform library like MSAL or Azure Active Directory Authentication Library (ADAL). The Microsoft Enterprise SSO plug-in can also acquire a shared credential when it's called by another app that uses a Microsoft identity platform library during a new token acquisition.
-#### Allow user to sign-in from unknown applications and the Safari browser.
+When you enable the `browser_sso_interaction_enabled` flag, apps that don't use a Microsoft identity platform library can do the initial bootstrapping and get a shared credential. The Safari browser can also do the initial bootstrapping and get a shared credential.
-By default the Microsoft Enterprise SSO plug-in provides SSO for authorized apps only when a user has signed in from an app that uses a Microsoft identity platform library like ADAL or MSAL. The Microsoft Enterprise SSO plug-in can also acquire a shared credential when it is called by another app that uses a Microsoft identity platform library during a new token acquisition.
+If the Microsoft Enterprise SSO plug-in doesn't have a shared credential yet, it will try to get one whenever a sign-in is requested from an Azure AD URL inside the Safari browser, ASWebAuthenticationSession, SafariViewController, or another permitted native application.
-Enabling `browser_sso_interaction_enabled` flag enables app that do not use a Microsoft identity platform library to do the initial bootstrapping and get a shared credential. It also allows Safari browser to do the initial bootstrapping and get a shared credential. If the Microsoft Enterprise SSO plug-in doesn't have a shared credential yet, it will try to get one whenever a sign-in is requested from an Azure AD URL inside Safari browser, ASWebAuthenticationSession, SafariViewController, or another permitted native application.
+Use these parameters to enable the flag:
- **Key**: `browser_sso_interaction_enabled`
- **Type**: `Integer`
- **Value**: 1 or 0
-For macOS this setting is required to get a more consistent experience across all apps. For iOS and iPadOS this setting isn't required as most apps use the Microsoft Authenticator application for sign-in. However, if you have some applications that do not use the Microsoft Authenticator on iOS or iPadOS this flag will improve the experience so we recommend you enable the setting. It is disabled by default.
+macOS requires this setting so it can provide a consistent experience across all apps. iOS and iPadOS don't require this setting because most apps use the Authenticator application for sign-in. But we recommend that you enable this setting because if some of your applications don't use the Authenticator app on iOS or iPadOS, this flag will improve the experience. The setting is disabled by default.
-#### Disable asking for MFA on initial bootstrapping
+#### Disable asking for MFA during initial bootstrapping
-By default the Microsoft Enterprise SSO plug-in always prompts the user for Multi-factor authentication (MFA) when doing the initial bootstrapping and getting a shared credential, even if it's not required for the current application the user has launched. This is so the shared credential can be easily used across all additional applications without prompting the user if MFA becomes required later. This reduces the times the user needs to be prompted on the device and is generally a good decision.
+By default, the Microsoft Enterprise SSO plug-in always prompts the user for MFA during the initial bootstrapping and while getting a shared credential. The user is prompted for MFA even if it's not required for the application that the user has opened. This behavior allows the shared credential to be easily used across all other applications without the need to prompt the user if MFA is required later. Because the user gets fewer prompts overall, this setup is generally a good decision.
-Enabling `browser_sso_disable_mfa` turns this off and will only prompt the user when MFA is required by an application or resource.
+Enabling `browser_sso_disable_mfa` turns off MFA during initial bootstrapping and while getting the shared credential. In this case, the user is prompted only when MFA is required by an application or resource.
+
+To enable the flag, use these parameters:
- **Key**: `browser_sso_disable_mfa`
- **Type**: `Integer`
- **Value**: 1 or 0
-We recommend keeping this flag disabled as it reduces the times the user needs to be prompted on the device. If your organization rarely uses MFA you may want to enable the flag, but we'd recommend you use MFA more frequently instead. For this reason, it is disabled by default.
+We recommend keeping this flag disabled because it reduces the number of times the user is prompted to sign in. If your organization rarely uses MFA, you might want to enable the flag. But we recommend that you use MFA more frequently instead. For this reason, the flag is disabled by default.
-#### Disable OAuth2 application prompts
+#### Disable OAuth 2 application prompts
-The Microsoft Enterprise SSO plug-in provides SSO by appending shared credentials to network requests coming from allowed applications. However, some OAuth2 applications might incorrectly enforce end-user prompts at the protocol layer. If this is happening, you'll see that shared credentials are ignored for those apps and your user is prompted to sign in even though the Microsoft Enterprise SSO plug-in is working for other applications.
+The Microsoft Enterprise SSO plug-in provides SSO by appending shared credentials to network requests that come from allowed applications. However, some OAuth 2 applications might incorrectly enforce end-user prompts at the protocol layer. If you see this problem, you'll also see that shared credentials are ignored for those apps. Your user is prompted to sign in even though the Microsoft Enterprise SSO plug-in works for other applications.
-Enabling `disable_explicit_app_prompt` flag restricts ability of both native and web applications to force an end-user prompt on the protocol layer and bypass SSO.
+Enabling the `disable_explicit_app_prompt` flag restricts the ability of both native applications and web applications to force an end-user prompt on the protocol layer and bypass SSO. To enable the flag, use these parameters:
- **Key**: `disable_explicit_app_prompt`
- **Type**: `Integer`
- **Value**: 1 or 0
-We recommend enabling this flag to get more consistent experience across all apps. It is disabled by default.
-
-#### Enable SSO through cookies for specific application
+We recommend enabling this flag to get a consistent experience across all apps. It's disabled by default.
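Combined, the three `Integer` flags described in the preceding sections might be configured as in this sketch, which uses the recommended values from this article (browser SSO interaction on, MFA prompting kept on, explicit app prompts disabled). JSON is shown for readability only; your MDM console takes these as key/value pairs:

```json
{
  "browser_sso_interaction_enabled": 1,
  "browser_sso_disable_mfa": 0,
  "disable_explicit_app_prompt": 1
}
```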
-A small number of apps might be incompatible with the SSO extension. Specifically, apps that have advanced network settings might experience unexpected issues when they are enabled for the SSO (e.g. you might see an error that network request got canceled or interrupted).
+#### Enable SSO through cookies for a specific application
-If you are experiencing problems signing in using method described in the `Enable SSO for apps that don't use MSAL` section, you could try alternative configuration for those apps.
+A few apps might be incompatible with the SSO extension. Specifically, apps that have advanced network settings might experience unexpected issues when they're enabled for SSO. For example, you might see an error indicating that network request was canceled or interrupted.
-Use the following parameters to configure the Microsoft Enterprise SSO plug-in for those specific apps:
+If you have problems signing in by using the method described in the [Applications that don't use MSAL](#applications-that-dont-use-msal) section, try an alternative configuration. Use these parameters to configure the plug-in:
- **Key**: `AppCookieSSOAllowList`
- **Type**: `String`
-- **Value**: Comma-delimited list of application bundle ID prefixes for the applications that are allowed to participate in the SSO. Note that this will enable all apps starting with a particular prefix to participate in the SSO
+- **Value**: Comma-delimited list of application bundle ID prefixes for the applications that are allowed to participate in the SSO. All apps that start with the listed prefixes will be allowed to participate in SSO.
- **Example**: `com.contoso.myapp1, com.fabrikam.myapp2`
-Note that applications enabled for the SSO using this mechanism need to be added both to the `AppCookieSSOAllowList` and `AppPrefixAllowList`.
+Applications enabled for the SSO by using this setup need to be added to both `AppCookieSSOAllowList` and `AppPrefixAllowList`.
-We recommend trying this option only for applications experiencing unexpected sign-in failures.
+Try this configuration only for applications that have unexpected sign-in failures.
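Because such an app must appear in both lists, a hypothetical configuration for a single problematic app could look like this sketch (bundle ID reused from the example above; JSON for readability only):

```json
{
  "AppPrefixAllowList": "com.contoso.myapp1",
  "AppCookieSSOAllowList": "com.contoso.myapp1"
}
```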
#### Use Intune for simplified configuration
-As stated before, you can use Microsoft Intune as your MDM service to ease configuration of the Microsoft Enterprise SSO plug-in including enabling the plug-in and adding your older apps to an allow list so they get SSO. For more information, see the [Intune configuration documentation](/intune/configuration/ios-device-features-settings).
+You can use Intune as your MDM service to ease configuration of the Microsoft Enterprise SSO plug-in. For example, you can use Intune to enable the plug-in and add old apps to an allowlist so they get SSO.
+
+For more information, see the [Intune configuration documentation](/intune/configuration/ios-device-features-settings).
+
+## Use the SSO plug-in in your application
-## Using the SSO plug-in in your application
+[MSAL for Apple devices](https://github.com/AzureAD/microsoft-authentication-library-for-objc) version 1.1.0 and later supports the Microsoft Enterprise SSO plug-in for Apple devices. It's the recommended way to add support for the Microsoft Enterprise SSO plug-in. It ensures you get the full capabilities of the Microsoft identity platform.
-The [Microsoft Authentication Library (MSAL) for Apple devices](https://github.com/AzureAD/microsoft-authentication-library-for-objc) version 1.1.0 and higher supports the Microsoft Enterprise SSO plug-in for Apple devices. It is the recommended way to add support for the Microsoft Enterprise SSO plug-in and ensures you get the full capabilities of the Microsoft identity platform.
+If you're building an application for frontline-worker scenarios, see [Shared device mode for iOS devices](msal-ios-shared-devices.md) for setup information.
-If you're building an application for Frontline Worker scenarios, see [Shared device mode for iOS devices](msal-ios-shared-devices.md) for additional setup of the feature.
+## Understand how the SSO plug-in works
-## How the SSO plug-in works
+The Microsoft Enterprise SSO plug-in relies on the [Apple Enterprise SSO framework](https://developer.apple.com/documentation/authenticationservices/asauthorizationsinglesignonprovider?language=objc). Identity providers that join the framework can intercept network traffic for their domains and enhance or change how those requests are handled. For example, the SSO plug-in can show more UIs to collect end-user credentials securely, require MFA, or silently provide tokens to the application.
-The Microsoft Enterprise SSO plug-in relies on the [Apple's Enterprise Single Sign-On framework](https://developer.apple.com/documentation/authenticationservices/asauthorizationsinglesignonprovider?language=objc). Identity providers that onboard to the framework can intercept network traffic for their domains and enhance or change how those requests are handled. For example, the SSO plug-in can show additional UI to collect end-user credentials securely, require MFA, or silently provide tokens to the application.
+Native applications can also implement custom operations and communicate directly with the SSO plug-in. For more information, see this [2019 Worldwide Developer Conference video from Apple](https://developer.apple.com/videos/play/tech-talks/301/).
-Native applications can also implement custom operations and talk directly to the SSO plug-in.
-You can learn about Single Sign-in framework in this [2019 WWDC video from Apple](https://developer.apple.com/videos/play/tech-talks/301/)
+### Applications that use MSAL
-### Applications that use a Microsoft identity platform library
+[MSAL for Apple devices](https://github.com/AzureAD/microsoft-authentication-library-for-objc) version 1.1.0 and later supports the Microsoft Enterprise SSO plug-in for Apple devices natively for work and school accounts.
-The [Microsoft Authentication Library (MSAL) for Apple devices](https://github.com/AzureAD/microsoft-authentication-library-for-objc) version 1.1.0 and higher supports the Microsoft Enterprise SSO plug-in for Apple devices natively for work and school accounts.
+You don't need any special configuration if you followed [all recommended steps](./quickstart-v2-ios.md) and used the default [redirect URI format](./redirect-uris-ios.md). On devices that have the SSO plug-in, MSAL automatically invokes it for all interactive and silent token requests. It also invokes it for account enumeration and account removal operations. Because MSAL implements a native SSO plug-in protocol that relies on custom operations, this setup provides the smoothest native experience to the end user.
-There's no special configuration needed if you've followed [all recommended steps](./quickstart-v2-ios.md) and used the default [redirect URI format](./redirect-uris-ios.md). When running on a device that has the SSO plug-in present, MSAL will automatically invoke it for all interactive and silent token requests, as well as account enumeration and account removal operations. Since MSAL implements native SSO plug-in protocol that relies on custom operations, this setup provides the smoothest native experience to the end user.
+If the SSO plug-in isn't enabled by MDM but the Microsoft Authenticator app is present on the device, MSAL instead uses the Authenticator app for any interactive token requests. The SSO plug-in shares SSO with the Authenticator app.
-If the SSO plug-in is not enabled by MDM, but the Microsoft Authenticator app is present on the device, MSAL will instead use the Microsoft Authenticator app for any interactive token requests. The SSO plug-in shares SSO with the Microsoft Authenticator app.
+### Applications that don't use MSAL
-### Applications that don't use a Microsoft identity platform library
+Applications that don't use a Microsoft identity platform library, like MSAL, can still get SSO if an administrator adds these applications to the allowlist.
-Applications that don't use a Microsoft identity platform library like MSAL can still get SSO if an administrator adds them to the allow list explicitly.
+You don't need to change the code in those apps as long as the following conditions are satisfied:
-There are no code changes needed in those apps as long as following conditions are satisfied:
+- The application uses Apple frameworks to run network requests. These frameworks include [WKWebView](https://developer.apple.com/documentation/webkit/wkwebview) and [NSURLSession](https://developer.apple.com/documentation/foundation/nsurlsession), for example.
+- The application uses standard protocols to communicate with Azure AD. These protocols include, for example, OAuth 2, SAML, and WS-Federation.
+- The application doesn't collect plaintext usernames and passwords in the native UI.
-- Application is using Apple frameworks to execute network requests (for example, [WKWebView](https://developer.apple.com/documentation/webkit/wkwebview), [NSURLSession](https://developer.apple.com/documentation/foundation/nsurlsession))
-- Application is using standard protocols to communicate with Azure AD (for example, OAuth2, SAML, WS-Federation)
-- Application doesn't collect plaintext username and password in the native UI
+In this case, SSO is provided when the application creates a network request and opens a web browser to sign the user in. When a user is redirected to an Azure AD sign-in URL, the SSO plug-in validates the URL and checks for an SSO credential for that URL. If it finds the credential, the SSO plug-in passes it to Azure AD, which authorizes the application to complete the network request without asking the user to enter credentials. Additionally, if the device is known to Azure AD, the SSO plug-in passes the device certificate to satisfy the device-based conditional access check.
-In this case, SSO is provided when the application creates a network request and opens a web browser to sign the user in. When a user is redirected to an Azure AD login URL, the SSO plug-in validates the URL and checks if there is an SSO credential available for that URL. If there is one, the SSO plug-in passes the SSO credential to Azure AD, which authorizes the application to complete the network request without asking the user to enter their credentials. Additionally, if the device is known to Azure AD, the SSO plug-in will also pass the device certificate to satisfy the device-based conditional access check.
+To support SSO for non-MSAL apps, the SSO plug-in implements a protocol similar to the Windows browser plug-in described in [What is a primary refresh token?](../devices/concept-primary-refresh-token.md#browser-sso-using-prt).
-To support SSO for non-MSAL apps, the SSO plug-in implements a protocol similar to the Windows browser plug-in described in [What is a Primary Refresh Token?](../devices/concept-primary-refresh-token.md#browser-sso-using-prt).
+Compared to MSAL-based apps, the SSO plug-in acts more transparently for non-MSAL apps. It integrates with the existing browser sign-in experience that apps provide.
-Compared to MSAL-based apps, the SSO plug-in acts more transparently for non-MSAL apps by integrating with the existing browser login experience that apps provide. The end user would see their familiar experience, with the benefit of not having to perform additional sign-ins in each of the applications. For example, instead of displaying the native account picker, the SSO plug-in adds SSO sessions to the web-based account picker experience.
+The end user sees the familiar experience and doesn't have to sign in again in each application. For example, instead of displaying the native account picker, the SSO plug-in adds SSO sessions to the web-based account picker experience.
## Next steps
-For more information about shared device mode on iOS, see [Shared device mode for iOS devices](msal-ios-shared-devices.md).
+Learn about [Shared device mode for iOS devices](msal-ios-shared-devices.md).
active-directory Quickstart V2 Aspnet Core Web Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-aspnet-core-web-api.md
# Customer intent: As an application developer, I want to know how to write an ASP.NET Core web API that uses the Microsoft identity platform to authorize API requests from clients.
-# Quickstart: Protect an ASP.NET Core web API with Microsoft identity platform
+# Quickstart: Protect an ASP.NET Core web API with the Microsoft identity platform
-In this quickstart, you download an ASP.NET Core web API code sample and review its code that restricts access to resources to authorized accounts only. The sample supports authorization of personal Microsoft accounts and accounts in any Azure Active Directory (Azure AD) organization.
+In this quickstart, you download an ASP.NET Core web API code sample and review the way it restricts resource access to authorized accounts only. The sample supports authorization of personal Microsoft accounts and accounts in any Azure Active Directory (Azure AD) organization.
> [!div renderon="docs"] > ## Prerequisites >
-> - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+> - Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
> - [Azure Active Directory tenant](quickstart-create-new-tenant.md)
> - [.NET Core SDK 3.1+](https://dotnet.microsoft.com/)
> - [Visual Studio 2019](https://visualstudio.microsoft.com/vs/) or [Visual Studio Code](https://code.visualstudio.com/)
> First, register the web API in your Azure AD tenant and add a scope by following these steps:
>
> 1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
-> 1. If you have access to multiple tenants, use the **Directory + subscription** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to select the tenant in which you want to register an application.
+> 1. If you have access to multiple tenants, use the **Directory + subscription** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: on the top menu to select the tenant in which you want to register an application.
> 1. Search for and select **Azure Active Directory**.
> 1. Under **Manage**, select **App registrations** > **New registration**.
-> 1. Enter a **Name** for your application, for example `AspNetCoreWebApi-Quickstart`. Users of your app might see this name, and you can change it later.
+> 1. For **Name**, enter a name for your application. For example, enter **AspNetCoreWebApi-Quickstart**. Users of your app will see this name, and you can change it later.
> 1. Select **Register**.
-> 1. Under **Manage**, select **Expose an API** > **Add a scope**. Accept the default **Application ID URI** by selecting **Save and continue** and enter the following details:
+> 1. Under **Manage**, select **Expose an API** > **Add a scope**. For **Application ID URI**, accept the default by selecting **Save and continue**, and then enter the following details:
> - **Scope name**: `access_as_user`
> - **Who can consent?**: **Admins and users**
> - **Admin consent display name**: `Access AspNetCoreWebApi-Quickstart`
> [!div renderon="docs"] > ## Step 3: Configure the ASP.NET Core project >
-> In this step, configure the sample code to work with the app registration you created earlier.
+> In this step, configure the sample code to work with the app registration that you created earlier.
+>
+> 1. Extract the .zip archive into a folder near the root of your drive. For example, extract into *C:\Azure-Samples*.
+>
+> We recommend extracting the archive into a directory near the root of your drive to avoid errors caused by path length limitations on Windows.
>
-> 1. Extract the .zip archive into a folder near the root of your drive. For example, into *C:\Azure-Samples*.
> 1. Open the solution in the *webapi* folder in your code editor.
-> 1. Open the *appsettings.json* file and modify the following:
+> 1. Open the *appsettings.json* file and modify the following code:
>
> ```json
> "ClientId": "Enter_the_Application_Id_here",
> "TenantId": "Enter_the_Tenant_Info_Here"
> ```
>
-> - Replace `Enter_the_Application_Id_here` with the **Application (client) ID** of the application you registered in the Azure portal. You can find **Application (client) ID** in the app's **Overview** page.
+> - Replace `Enter_the_Application_Id_here` with the application (client) ID of the application that you registered in the Azure portal. You can find the application (client) ID on the app's **Overview** page.
> - Replace `Enter_the_Tenant_Info_Here` with one of the following:
-> - If your application supports **Accounts in this organizational directory only**, replace this value with the **Directory (tenant) ID** (a GUID) or **tenant name** (for example, `contoso.onmicrosoft.com`). You can find the **Directory (tenant) ID** on the app's **Overview** page.
-> - If your application supports **Accounts in any organizational directory**, replace this value with `organizations`
-> - If your application supports **All Microsoft account users**, leave this value as `common`
+> - If your application supports **Accounts in this organizational directory only**, replace this value with the directory (tenant) ID (a GUID) or tenant name (for example, `contoso.onmicrosoft.com`). You can find the directory (tenant) ID on the app's **Overview** page.
+> - If your application supports **Accounts in any organizational directory**, replace this value with `organizations`.
+> - If your application supports **All Microsoft account users**, leave this value as `common`.
>
> For this quickstart, don't change any other values in the *appsettings.json* file.
The web API receives a token from a client application, and the code in the web API validates the token.
### Startup class
-The *Microsoft.AspNetCore.Authentication* middleware uses a `Startup` class that's executed when the hosting process initializes. In its `ConfigureServices` method, the `AddMicrosoftIdentityWebApi` extension method provided by *Microsoft.Identity.Web* is called.
+The *Microsoft.AspNetCore.Authentication* middleware uses a `Startup` class that's executed when the hosting process starts. In its `ConfigureServices` method, the `AddMicrosoftIdentityWebApi` extension method provided by *Microsoft.Identity.Web* is called.
```csharp
public void ConfigureServices(IServiceCollection services)
{
    // more code
    services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
            .AddMicrosoftIdentityWebApi(Configuration, "AzureAd");
    // more code
}
```
The `AddAuthentication()` method configures the service to add JwtBearer-based authentication.
-The line containing `.AddMicrosoftIdentityWebApi` adds the Microsoft identity platform authorization to your web API. It's then configured to validate access tokens issued by the Microsoft identity platform based on the information in the `AzureAD` section of the *appsettings.json* configuration file:
+The line that contains `.AddMicrosoftIdentityWebApi` adds the Microsoft identity platform authorization to your web API. It's then configured to validate access tokens issued by the Microsoft identity platform based on the information in the `AzureAD` section of the *appsettings.json* configuration file:
| *appsettings.json* key | Description |
|---|---|
-| `ClientId` | **Application (client) ID** of the application registered in the Azure portal. |
+| `ClientId` | Application (client) ID of the application registered in the Azure portal. |
| `Instance` | Security token service (STS) endpoint for the user to authenticate. This value is typically `https://login.microsoftonline.com/`, indicating the Azure public cloud. |
-| `TenantId` | Name of your tenant or its tenant ID (a GUID), or *common* to sign in users with work or school accounts or Microsoft personal accounts. |
+| `TenantId` | Name of your tenant or its tenant ID (a GUID), or `common` to sign in users with work or school accounts or Microsoft personal accounts. |
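For reference, these keys live in the `AzureAd` section of *appsettings.json* (the section name that Microsoft.Identity.Web reads by default). A minimal shape consistent with the table, using the placeholder values from step 3, looks like this sketch:

```json
{
  "AzureAd": {
    "Instance": "https://login.microsoftonline.com/",
    "ClientId": "Enter_the_Application_Id_here",
    "TenantId": "Enter_the_Tenant_Info_Here"
  }
}
```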
The `Configure()` method contains two important methods, `app.UseAuthentication()` and `app.UseAuthorization()`, that enable their named functionality:

```csharp
-// This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
+// The runtime calls this method. Use this method to configure the HTTP request pipeline.
public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    // more code
    app.UseAuthentication();
    app.UseAuthorization();
    // more code
}
```
-### Protect a controller, a controller's method, or a Razor page
+### Protecting a controller, a controller's method, or a Razor page
-You can protect a controller or controller methods using the `[Authorize]` attribute. This attribute restricts access to the controller or methods by only allowing authenticated users, which means that authentication challenge can be started to access the controller if the user isn't authenticated.
+You can protect a controller or controller methods by using the `[Authorize]` attribute. This attribute restricts access to the controller or methods by allowing only authenticated users. An authentication challenge can be started to access the controller if the user isn't authenticated.
```csharp
namespace webapi.Controllers
{
    [Authorize]
    public class WeatherForecastController : ControllerBase
```
-### Validate the scope in the controller
+### Validation of scope in the controller
-The code in the API then verifies that the required scopes are in the token by using `HttpContext.VerifyUserHasAnyAcceptedScope(scopeRequiredByApi);`
+The code in the API verifies that the required scopes are in the token by using `HttpContext.VerifyUserHasAnyAcceptedScope(scopeRequiredByApi);`:
```csharp
namespace webapi.Controllers
{
    [Route("[controller]")]
    public class WeatherForecastController : ControllerBase
    {
- // The Web API will only accept tokens 1) for users, and 2) having the "access_as_user" scope for this API
+ // The web API will only accept tokens 1) for users, and 2) having the "access_as_user" scope for this API
        static readonly string[] scopeRequiredByApi = new string[] { "access_as_user" };

        [HttpGet]
        public IEnumerable<WeatherForecast> Get()
        {
            HttpContext.VerifyUserHasAnyAcceptedScope(scopeRequiredByApi);
            // more code
        }
```
The GitHub repository that contains this ASP.NET Core web API code sample includes instructions and more code samples that show you how to:

-- Add authentication to a new ASP.NET Core web API
-- Call the web API from a desktop application
-- Call downstream APIs like Microsoft Graph and other Microsoft APIs
+- Add authentication to a new ASP.NET Core web API.
+- Call the web API from a desktop application.
+- Call downstream APIs like Microsoft Graph and other Microsoft APIs.
> [!div class="nextstepaction"] > [ASP.NET Core web API tutorials on GitHub](https://github.com/Azure-Samples/active-directory-dotnet-native-aspnetcore-v2)
active-directory Quickstart V2 Netcore Daemon https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-netcore-daemon.md
Last updated 10/05/2020
-#Customer intent: As an application developer, I want to learn how my .NET Core app can get an access token and call an API that's protected by the Microsoft identity platform using client credentials flow.
+#Customer intent: As an application developer, I want to learn how my .NET Core app can get an access token and call an API that's protected by the Microsoft identity platform by using the client credentials flow.
-# Quickstart: Acquire a token and call Microsoft Graph API using console app's identity
+# Quickstart: Get a token and call the Microsoft Graph API by using a console app's identity
-In this quickstart, you download and run a code sample that demonstrates how a .NET Core console application can get an access token to call the Microsoft Graph API and display a [list of users](/graph/api/user-list) in the directory. The code sample also demonstrates how a job or a windows service can run with an application identity, instead of a user's identity.
+In this quickstart, you download and run a code sample that demonstrates how a .NET Core console application can get an access token to call the Microsoft Graph API and display a [list of users](/graph/api/user-list) in the directory. The code sample also demonstrates how a job or a Windows service can run with an application identity, instead of a user's identity. The sample console application in this quickstart is also a daemon application, so it's a confidential client application.
-See [How the sample works](#how-the-sample-works) for an illustration.
+> [!div renderon="docs"]
+> The following diagram shows how the sample app works:
+>
+> ![Diagram that shows how the sample app generated by this quickstart works.](media/quickstart-v2-netcore-daemon/netcore-daemon-intro.svg)
+>
## Prerequisites
-This quickstart requires [.NET Core 3.1](https://www.microsoft.com/net/download/dotnet-core).
+This quickstart requires [.NET Core 3.1](https://www.microsoft.com/net/download/dotnet-core) but will also work with .NET Core 5.0.
> [!div renderon="docs"]
-> ## Register and download your quickstart app
+> ## Register and download the app
> [!div renderon="docs" class="sxs-lookup"] >
-> You have two options to start your quickstart application: Express (Option 1 below), and Manual (Option 2)
+> You have two options to start building your application: automatic or manual configuration.
>
-> ### Option 1: Register and auto configure your app and then download your code sample
+> ### Automatic configuration
>
-> 1. Go to the <a href="https://portal.azure.com/?Microsoft_AAD_RegisteredApps=true#blade/Microsoft_AAD_RegisteredApps/applicationsListBlade/quickStartType/DotNetCoreDaemonQuickstartPage/sourceType/docs" target="_blank">Azure portal - App registrations</a> quickstart experience.
+> If you want to register and automatically configure your app and then download the code sample, follow these steps:
+>
+> 1. Go to the <a href="https://portal.azure.com/?Microsoft_AAD_RegisteredApps=true#blade/Microsoft_AAD_RegisteredApps/applicationsListBlade/quickStartType/DotNetCoreDaemonQuickstartPage/sourceType/docs" target="_blank">Azure portal page for app registration</a>.
> 1. Enter a name for your application and select **Register**.
-> 1. Follow the instructions to download and automatically configure your new application with just one click.
+> 1. Follow the instructions to download and automatically configure your new application in one click.
+>
+> ### Manual configuration
+>
+> If you want to manually configure your application and code sample, use the following procedures.
>
-> ### Option 2: Register and manually configure your application and code sample
->
> [!div renderon="docs"]
> #### Step 1: Register your application
> To register your application and add the app's registration information to your solution manually, follow these steps:
>
> 1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
-> 1. If you have access to multiple tenants, use the **Directory + subscription** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to select the tenant in which you want to register an application.
+> 1. If you have access to multiple tenants, use the **Directory + subscription** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: on the top menu to select the tenant in which you want to register the application.
> 1. Search for and select **Azure Active Directory**.
> 1. Under **Manage**, select **App registrations** > **New registration**.
-> 1. Enter a **Name** for your application, for example `Daemon-console`. Users of your app might see this name, and you can change it later.
+> 1. For **Name**, enter a name for your application. For example, enter **Daemon-console**. Users of your app will see this name, and you can change it later.
> 1. Select **Register** to create the application.
> 1. Under **Manage**, select **Certificates & secrets**.
> 1. Under **Client secrets**, select **New client secret**, enter a name, and then select **Add**. Record the secret value in a safe location for use in a later step.
> 1. Under **Manage**, select **API Permissions** > **Add a permission**. Select **Microsoft Graph**.
> 1. Select **Application permissions**.
-> 1. Under **User** node, select **User.Read.All**, then select **Add permissions**.
+> 1. Under the **User** node, select **User.Read.All**, and then select **Add permissions**.
> [!div class="sxs-lookup" renderon="portal"] > ### Download and configure your quickstart app >
-> #### Step 1: Configure your application in Azure portal
-> For the code sample in this quickstart to work, create a client secret and add Graph API's **User.Read.All** application permission.
+> #### Step 1: Configure your application in the Azure portal
+> For the code sample in this quickstart to work, create a client secret and add the Graph API's **User.Read.All** application permission.
> > [!div renderon="portal" id="makechanges" class="nextstepaction"] > > [Make these changes for me]() >
> [!div class="sxs-lookup" renderon="portal"]
-> Run the project using Visual Studio 2019.
+> Run the project by using Visual Studio 2019.
> [!div class="sxs-lookup" renderon="portal" id="autoupdate" class="nextstepaction"] > [Download the code sample](https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2/archive/master.zip)
> [!div renderon="docs"] > #### Step 3: Configure your Visual Studio project >
-> 1. Extract the zip file to a local folder close to the root of the disk, for example, **C:\Azure-Samples**.
-> 1. Open the solution in Visual Studio - **1-Call-MSGraph\daemon-console.sln** (optional).
-> 1. Edit **appsettings.json** and replace the values of the fields `ClientId`, `Tenant` and `ClientSecret` with the following:
+> 1. Extract the .zip file to a local folder that's close to the root of the disk. For example, extract to *C:\Azure-Samples*.
+>
+> We recommend extracting the archive into a directory near the root of your drive to avoid errors caused by path length limitations on Windows.
+>
+> 1. Open the solution in Visual Studio: *1-Call-MSGraph\daemon-console.sln* (optional).
+> 1. In *appsettings.json*, replace the values of `Tenant`, `ClientId`, and `ClientSecret`:
>
> ```json
> "Tenant": "Enter_the_Tenant_Id_Here",
> "ClientId": "Enter_the_Application_Id_Here",
> "ClientSecret": "Enter_the_Client_Secret_Here"
> ```
-> Where:
-> - `Enter_the_Application_Id_Here` - is the **Application (client) ID** for the application you registered.
-> - `Enter_the_Tenant_Id_Here` - replace this value with the **Tenant Id** or **Tenant name** (for example, contoso.microsoft.com)
-> - `Enter_the_Client_Secret_Here` - replace this value with the client secret created on step 1.
+> In that code:
+> - `Enter_the_Application_Id_Here` is the application (client) ID for the application that you registered.
+> - Replace `Enter_the_Tenant_Id_Here` with the tenant ID or tenant name (for example, `contoso.microsoft.com`).
+> - Replace `Enter_the_Client_Secret_Here` with the client secret that you created in step 1.
> [!div renderon="docs"] > > [!TIP]
-> > To find the values of **Application (client) ID**, **Directory (tenant) ID**, go to the app's **Overview** page in the Azure portal. To generate a new key, go to **Certificates & secrets** page.
+> > To find the values for the application (client) ID and the directory (tenant) ID, go to the app's **Overview** page in the Azure portal. To generate a new key, go to the **Certificates & secrets** page.
> [!div class="sxs-lookup" renderon="portal"] > #### Step 3: Admin consent
> [!div renderon="docs"] > #### Step 4: Admin consent
-If you try to run the application at this point, you'll receive *HTTP 403 - Forbidden* error: `Insufficient privileges to complete the operation`. This happens because any *app-only permission* requires Admin consent, which means that a global administrator of your directory must give consent to your application. Select one of the options below depending on your role:
+If you try to run the application at this point, you'll receive an *HTTP 403 - Forbidden* error: "Insufficient privileges to complete the operation." This error happens because any app-only permission requires a global administrator of your directory to give consent to your application. Select one of the following options, depending on your role.
##### Global tenant administrator

> [!div renderon="docs"]
-> If you are a global tenant administrator, in the Azure Portal navigate to **Enterprise applications** > Select your app registration > Choose **"Permissions"** from the Security section of the left navigation pane. Select the large button labeled **Grant admin consent for {Tenant Name}** (Where {Tenant Name} is the name of your directory).
+> If you're a global tenant administrator, go to **Enterprise applications** in the Azure portal. Select your app registration, and select **Permissions** from the **Security** section of the left pane. Then select the large button labeled **Grant admin consent for {Tenant Name}** (where **{Tenant Name}** is the name of your directory).
> [!div renderon="portal" class="sxs-lookup"]
-> If you are a global administrator, go to **API Permissions** page select **Grant admin consent for Enter_the_Tenant_Name_Here**
+> If you're a global administrator, go to the **API Permissions** page and select **Grant admin consent for Enter_the_Tenant_Name_Here**.
> > [!div id="apipermissionspage"] > > [Go to the API Permissions page]()
```
https://login.microsoftonline.com/Enter_the_Tenant_Id_Here/adminconsent?client_id=Enter_the_Application_Id_Here
```

> [!div renderon="docs"]
->> Where:
->> * `Enter_the_Tenant_Id_Here` - replace this value with the **Tenant Id** or **Tenant name** (for example, contoso.microsoft.com)
->> * `Enter_the_Application_Id_Here` - is the **Application (client) ID** for the application you registered.
+>> In that URL:
+>> * Replace `Enter_the_Tenant_Id_Here` with the tenant ID or tenant name (for example, `contoso.microsoft.com`).
+>> * `Enter_the_Application_Id_Here` is the application (client) ID for the application that you registered.
> [!NOTE]
-> You may see the error *'AADSTS50011: No reply address is registered for the application'* after granting consent to the app using the preceding URL. This happen because this application and the URL do not have a redirect URI - please ignore the error.
+> You might see the error "AADSTS50011: No reply address is registered for the application" after you grant consent to the app by using the preceding URL. This error happens because this application and the URL don't have a redirect URI. You can ignore it.
> [!div class="sxs-lookup" renderon="portal"] > #### Step 4: Run the application
https://login.microsoftonline.com/Enter_the_Tenant_Id_Here/adminconsent?client_i
> [!div renderon="docs"] > #### Step 5: Run the application
-If you're using Visual Studio or Visual Studio for Mac, press **F5** to run the application, otherwise, run the application via command prompt, console, or terminal:
+If you're using Visual Studio or Visual Studio for Mac, press **F5** to run the application. Otherwise, run the application via command prompt, console, or terminal:
```dotnetcli
cd {ProjectFolder}\1-Call-MSGraph\daemon-console
dotnet run
```
-> Where:
-> * *{ProjectFolder}* is the folder where you extracted the zip file. Example **C:\Azure-Samples\active-directory-dotnetcore-daemon-v2**
+> In that code:
+> * `{ProjectFolder}` is the folder where you extracted the .zip file. An example is `C:\Azure-Samples\active-directory-dotnetcore-daemon-v2`.
-You should see a list of users in your Azure AD directory as result.
+You should see a list of users in Azure Active Directory as a result.
> [!IMPORTANT]
-> This quickstart application uses a client secret to identify itself as confidential client. Because the client secret is added as a plain-text to your project files, for security reasons, it is recommended that you use a certificate instead of a client secret before considering the application as production application. For more information on how to use a certificate, see [these instructions](https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2/#variation-daemon-application-using-client-credentials-with-certificates) in the GitHub repository for this sample.
+> This quickstart application uses a client secret to identify itself as a confidential client. The client secret is added as a plain-text file to your project files. For security reasons, we recommend that you use a certificate instead of a client secret before considering the application as a production application. For more information on how to use a certificate, see [these instructions](https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2/#variation-daemon-application-using-client-credentials-with-certificates) in the GitHub repository for this sample.
## More information
+This section gives an overview of the code required to sign in users. This overview can be useful to understand how the code works, what the main arguments are, and how to add sign-in to an existing .NET Core console application.
-### How the sample works
-![Shows how the sample app generated by this quickstart works](media/quickstart-v2-netcore-daemon/netcore-daemon-intro.svg)
+> [!div class="sxs-lookup" renderon="portal"]
+> ### How the sample works
+>
+> ![Diagram that shows how the sample app generated by this quickstart works.](media/quickstart-v2-netcore-daemon/netcore-daemon-intro.svg)
### MSAL.NET
-MSAL ([Microsoft.Identity.Client](https://www.nuget.org/packages/Microsoft.Identity.Client)) is the library used to sign in users and request tokens used to access an API protected by Microsoft identity platform. As described, this quickstart requests tokens by using the application own identity instead of delegated permissions. The authentication flow used in this case is known as *[client credentials oauth flow](v2-oauth2-client-creds-grant-flow.md)*. For more information on how to use MSAL.NET with client credentials flow, see [this article](https://aka.ms/msal-net-client-credentials).
+Microsoft Authentication Library (MSAL, in the [Microsoft.Identity.Client](https://www.nuget.org/packages/Microsoft.Identity.Client) package) is the library that's used to sign in users and request tokens for accessing an API protected by the Microsoft identity platform. This quickstart requests tokens by using the application's own identity instead of delegated permissions. The authentication flow in this case is known as a [client credentials OAuth flow](v2-oauth2-client-creds-grant-flow.md). For more information on how to use MSAL.NET with a client credentials flow, see [this article](https://aka.ms/msal-net-client-credentials).
- You can install MSAL.NET by running the following command in Visual Studio's **Package Manager Console**:
+ You can install MSAL.NET by running the following command in the Visual Studio Package Manager Console:
```dotnetcli
dotnet add package Microsoft.Identity.Client
```

You can add the reference for MSAL by adding the following code:

```csharp
using Microsoft.Identity.Client;
```
-Then, initialize MSAL using the following code:
+Then, initialize MSAL by using the following code:
```csharp
IConfidentialClientApplication app;
app = ConfidentialClientApplicationBuilder.Create(config.ClientId)
    .WithClientSecret(config.ClientSecret)
    .WithAuthority(new Uri(config.Authority))
    .Build();
```
-> | Where: | Description |
+> | Element | Description |
> |||
-> | `config.ClientSecret` | Is the client secret created for the application in Azure Portal. |
-> | `config.ClientId` | Is the **Application (client) ID** for the application registered in the Azure portal. You can find this value in the app's **Overview** page in the Azure portal. |
-> | `config.Authority` | (Optional) The STS endpoint for user to authenticate. Usually `https://login.microsoftonline.com/{tenant}` for public cloud, where {tenant} is the name of your tenant or your tenant Id.|
+> | `config.ClientSecret` | The client secret created for the application in the Azure portal. |
+> | `config.ClientId` | The application (client) ID for the application registered in the Azure portal. You can find this value on the app's **Overview** page in the Azure portal. |
+> | `config.Authority` | (Optional) The security token service (STS) endpoint for the user to authenticate. It's usually `https://login.microsoftonline.com/{tenant}` for the public cloud, where `{tenant}` is the name of your tenant or your tenant ID.|
-For more information, please see the [reference documentation for `ConfidentialClientApplication`](/dotnet/api/microsoft.identity.client.iconfidentialclientapplication)
+For more information, see the [reference documentation for `ConfidentialClientApplication`](/dotnet/api/microsoft.identity.client.iconfidentialclientapplication).
### Requesting tokens
-To request a token using app's identity, use `AcquireTokenForClient` method:
+To request a token by using the app's identity, use the `AcquireTokenForClient` method:
```csharp
result = await app.AcquireTokenForClient(scopes)
    .ExecuteAsync();
```
-> |Where:| Description |
+> |Element| Description |
> |||
-> | `scopes` | Contains the scopes requested. For confidential clients, this should use the format similar to `{Application ID URI}/.default` to indicate that the scopes being requested are the ones statically defined in the app object set in the Azure Portal (for Microsoft Graph, `{Application ID URI}` points to `https://graph.microsoft.com`). For custom web APIs, `{Application ID URI}` is defined under **Expose an API** section in Azure Portal's Application Registration (Preview). |
+> | `scopes` | Contains the requested scopes. For confidential clients, this value should use a format similar to `{Application ID URI}/.default`. This format indicates that the requested scopes are the ones that are statically defined in the app object set in the Azure portal. For Microsoft Graph, `{Application ID URI}` points to `https://graph.microsoft.com`. For custom web APIs, `{Application ID URI}` is defined in the Azure portal, under **Application Registration (Preview)** > **Expose an API**. |
-For more information, please see the [reference documentation for `AcquireTokenForClient`](/dotnet/api/microsoft.identity.client.confidentialclientapplication.acquiretokenforclient)
+For more information, see the [reference documentation for `AcquireTokenForClient`](/dotnet/api/microsoft.identity.client.confidentialclientapplication.acquiretokenforclient).
[!INCLUDE [Help and support](../../../includes/active-directory-develop-help-support-include.md)]
active-directory User Properties https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/user-properties.md
Previously updated : 02/12/2021 Last updated : 03/18/2021
It's possible to turn off the default limitations so that a guest user in the co
![Screenshot showing the External users option in the user settings](media/user-properties/remove-guest-limitations.png) ## Can I make guest users visible in the Exchange Global Address List?
-Yes. By default, guest objects aren't visible in your organization's global address list, but you can use Azure Active Directory PowerShell to make them visible. For details, see **Can I make guest objects visible in the global address list?** in [Manage guest access in Microsoft 365 Groups](/office365/admin/create-groups/manage-guest-access-in-groups).
+Yes. By default, guest objects aren't visible in your organization's global address list, but you can use Azure Active Directory PowerShell to make them visible. For details, see "Add guests to the global address list" in the [Microsoft 365 per-group guest access article](/microsoft-365/solutions/per-group-guest-access).
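+
+As a minimal sketch, assuming the AzureAD PowerShell module and that the guest object's `ShowInAddressList` property controls this visibility (confirm the exact steps in the linked article), the change can look like this:
+
+```powershell
+# Sketch: make existing guest users visible in the global address list.
+# Assumes Connect-AzureAD has already been run with sufficient privileges.
+Get-AzureADUser -Filter "userType eq 'Guest'" -All $true |
+    Set-AzureADUser -ShowInAddressList $true
+```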
## Can I update a guest user's email address?
active-directory What Is B2b https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/what-is-b2b.md
Previously updated : 03/02/2021 Last updated : 03/19/2021
Bring your external partners on board in ways customized to your organization's
## Integrate with Identity providers
-Azure AD supports external identity providers like Facebook, Microsoft accounts, Google, or enterprise identity providers. You can set up federation with identity providers so your external users can sign in with their existing social or enterprise accounts instead of creating a new account just for your application. Learn more about identity providers for External Identities.
+Azure AD supports external identity providers like Facebook, Microsoft accounts, Google, or enterprise identity providers. You can set up federation with identity providers so your external users can sign in with their existing social or enterprise accounts instead of creating a new account just for your application. Learn more about [identity providers for External Identities](identity-providers.md).
![Screenshot showing the Identity providers page](media/what-is-b2b/identity-providers.png)
active-directory Concept Adsync Service Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/concept-adsync-service-account.md
na ms.devlang: na Previously updated : 06/27/2019 Last updated : 03/17/2021
Azure AD Connect installs an on-premises service which orchestrates synchronizat
Choosing the ADSync service account is an important planning decision to make prior to installing Azure AD Connect. Any attempt to change the credentials after installation will result in the service failing to start, losing access to the synchronization database, and failing to authenticate with your connected directories (Azure and AD DS). No synchronization will occur until the original credentials are restored.
-## The default ADSync service account
+The sync service can run under different accounts: a Virtual Service Account (VSA), a Managed Service Account (gMSA/sMSA), or a regular User Account. The supported options changed with the April 2017 and March 2021 releases of Azure AD Connect for fresh installations. If you upgrade from an earlier release of Azure AD Connect, these additional options are not available.
-When run on a member server, the AdSync service runs in the context of a Virtual Service Account (VSA). Due to a product limitation, a custom service account is created when installed on a domain controller. If the Express settings service account does not meet your organizational security requirements, deploy Azure AD Connect by choosing the Customize option. Then choose the service account option which meets your organization's requirements.
->[!NOTE]
->The default service account when installed on a domain controller is of the form Domain\AAD_InstallationIdentifier. The password for this account is randomly generated and presents significant challenges for recovery and password rotation. Microsoft recommends customizing the service account during initial installation on a domain controller to use either a standalone or group Managed Service Account (sMSA / gMSA)
+|Type of account|Installation option|Description|
+|--||--|
+|Virtual Service Account|Express and custom, April 2017 and later| A Virtual Service Account is used for all express installations, except for installations on a Domain Controller. For custom installations, it is the default option unless another option is chosen.|
+|Managed Service Account|Custom, April 2017 and later|If you use a remote SQL Server, then we recommend using a group Managed Service Account. |
+|Managed Service Account|Express and custom, March 2021 and later|A standalone Managed Service Account prefixed with ADSyncMSA_ is created during installation for express installations on a Domain Controller. For custom installations, it is the default option unless another option is chosen.|
+|User Account|Express and custom, April 2017 to March 2021|A User Account prefixed with AAD_ is created during installation for express installations on a Domain Controller. For custom installations, it is the default option unless another option is chosen.|
+|User Account|Express and custom, March 2017 and earlier|A User Account prefixed with AAD_ is created during installation for express installations. For custom installations, another account can be specified.|
-|Azure AD Connect location|Service account created|
-|--|--|
-|Member Server|NT SERVICE\ADSync|
-|Domain Controller|Domain\AAD_74dc30c01e80 (see note)|
+>[!IMPORTANT]
+> If you use Connect with a build from 2017 March or earlier, then you should not reset the password on the service account since Windows destroys the encryption keys for security reasons. You cannot change the account to any other account without reinstalling Azure AD Connect. If you upgrade to a build from 2017 April or later, then it is supported to change the password on the service account, but you cannot change the account used.
-## Custom ADSync service accounts
-Microsoft recommends running the ADSync service in the context of either a Virtual Service Account or a standalone or group Managed Service Account. Your domain administrator may also choose to create a service account provisioned to meet your specific organizational security requirements. To customize the service account used during installation, choose the Customize option on the Express Settings page below. The following options are available:
+> [!IMPORTANT]
+> You can only set the service account on first installation. It is not supported to change the service account after the installation has been completed. If you need to change the service account password, this is supported and instructions can be found [here](how-to-connect-sync-change-serviceacct-pass.md).
-- default account – Azure AD Connect will provision the service account as described above
-- managed service account – use a standalone or group MSA provisioned by your administrator
-- domain account – use a domain service account provisioned by your administrator
+The following is a table of the default, recommended, and supported options for the sync service account.
-![Screenshot of the Azure AD Connect Express Settings page with "Customize" or "Use express settings" option buttons.](media/concept-adsync-service-account/adsync1.png)
+Legend:
-![Screenshot of Azure AD Connect "Install required components" page with the option to use an existing Managed Service Account selected.](media/concept-adsync-service-account/adsync2.png)
+- **Bold** indicates the default option and, in most cases, the recommended option.
+- *Italic* indicates the recommended option when it is not the default option.
+- Nonbold indicates a supported option.
+- Local account: a local user account on the server.
+- Domain account: a domain user account.
+- sMSA: a [standalone Managed Service Account](https://docs.microsoft.com/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/dd548356(v=ws.10)).
+- gMSA: a [group Managed Service Account](https://docs.microsoft.com/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/hh831782(v=ws.11)).
-## Diagnosing ADSync service account changes
-Changing the credentials for the ADSync service after installation will result in the service failing to start, losing access to the synchronization database, and failing to authenticate with your connected directories (Azure and AD DS). Granting database access to the new ADSync service account is insufficient to recover from this issue. No synchronization will occur until the original credentials are restored.
 ||**LocalDB<br/> Express**|**LocalDB/LocalSQL<br/> Custom**|**Remote SQL<br/> Custom**|
|--|--|--|--|
|**Domain-joined machine**|**VSA**|**VSA**<br/> *sMSA*<br/> *gMSA*<br/> Local account<br/> Domain account|*gMSA*<br/> Domain account|
|Domain Controller|**sMSA**|**sMSA**<br/> *gMSA*<br/> Domain account|*gMSA*<br/> Domain account|
-The ADSync service will issue an error level message to the event log when it is unable to start. The content of the message will vary depending on whether the built-in database (localdb) or full SQL is in use. The following are examples of the event log entries that may be present.
+## Virtual Service Account
-### Example 1
+A Virtual Service Account is a special type of managed local account that does not have a password and is automatically managed by Windows.
-The AdSync service encryption keys could not be found and have been recreated. Synchronization will not occur until this issue is corrected.
+ ![Virtual service account](media/concept-adsync-service-account/account-1.png)
-Troubleshooting this Issue
-The Microsoft Azure AD Sync encryption keys will become inaccessible if the AdSync service Log On credentials are changed. If the credentials have been changed, use the Services application to change the Log On account back to its originally configured value (ex. NT SERVICE\AdSync) and restart the service. This will immediately restore correct operation of the AdSync service.
+The Virtual Service Account is intended to be used with scenarios where the sync engine and SQL are on the same server. If you use remote SQL, then we recommend using a group Managed Service Account instead.
-Please see the following [article](./whatis-hybrid-identity.md) for further information.
+The Virtual Service Account cannot be used on a Domain Controller due to [Windows Data Protection API (DPAPI)](https://msdn.microsoft.com/library/ms995355.aspx) issues.
-### Example 2
+## Managed Service Account
-The service was unable to start because a connection to the local database (localdb)
-could not be established.
+If you use a remote SQL Server, then we recommend using a group Managed Service Account. For more information on how to prepare your Active Directory for a group Managed Service Account, see [Group Managed Service Accounts Overview](https://docs.microsoft.com/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/hh831782(v=ws.11)). A minimal preparation sketch follows the image below.
-Troubleshooting this Issue
-The Microsoft Azure AD Sync service will lose permission to access the local database provider if the AdSync service Log On credentials are changed. If the credentials have been changed use the Services application to change the Log On account back to its originally configured value (ex. NT SERVICE\AdSync) and restart the service. This will immediately restore correct operation of the AdSync service.
+To use this option, on the [Install required components](how-to-connect-install-custom.md#install-required-components) page, select **Use an existing service account**, and select **Managed Service Account**.
-Please see the following [article](./whatis-hybrid-identity.md) for further information.
+ ![managed service account](media/concept-adsync-service-account/account-2.png)
+
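+If your forest isn't yet prepared for group Managed Service Accounts, a minimal preparation sketch might look like the following (it requires the ActiveDirectory PowerShell module; the account and host names are placeholders, so adjust them to your environment):
+
+```powershell
+# Sketch: create the forest's KDS root key (a one-time prerequisite for any
+# gMSA), then create a gMSA that the Azure AD Connect server can retrieve.
+Add-KdsRootKey -EffectiveImmediately
+New-ADServiceAccount -Name "ADSyncGMSA" `
+    -DNSHostName "adsyncgmsa.contoso.com" `
+    -PrincipalsAllowedToRetrieveManagedPassword "AADCONNECT01$"
+```
+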
+Using a standalone Managed Service Account is also supported. However, it can be used only on the local machine, and there is no benefit to using one over the default Virtual Service Account.
+
+### Auto-generated standalone Managed Service Account
+
+If you install Azure AD Connect on a Domain Controller, a standalone Managed Service Account is created by the installation wizard (unless you specify the account to use in custom settings). The account is prefixed with **ADSyncMSA_**, and the actual sync service runs as this account.
+
+This account is a managed domain account that does not have a password and is automatically managed by Windows.
+
+This account is intended to be used with scenarios where the sync engine and SQL are on the Domain Controller.
+
+## User Account
+
+A local service account is created by the installation wizard (unless you specify the account to use in custom settings). The account is prefixed with AAD_, and the actual sync service runs as this account. If you install Azure AD Connect on a Domain Controller, the account is created in the domain. The AAD_ service account must be located in the domain if:
+- You use a remote server running SQL Server.
+- You use a proxy that requires authentication.
+
+ ![user account](media/concept-adsync-service-account/account-3.png)
+
+The account is created with a long complex password that does not expire.
+
+This account is used to store the passwords for the other accounts in a secure way. The passwords for these other accounts are stored encrypted in the database. The private keys for the encryption keys are protected by the cryptographic services secret-key encryption through the Windows Data Protection API (DPAPI).
+
+If you use a full SQL Server, then the service account is the database owner (DBO) of the database created for the sync engine. The service will not function as intended with any other permissions. A SQL login is also created.
+
+The account is also granted permission to files, registry keys, and other objects related to the Sync Engine.
-Additional Details
-The following error information was returned by the provider:
-
-```
-OriginalError=0x80004005 OLEDB Provider error(s):
-Description = 'Login timeout expired'
-Failure Code = 0x80004005
-Minor Number = 0
-Description = 'A network-related or instance-specific error has occurred while establishing a connection to SQL Server. Server is not found or not accessible. Check if instance name is correct and if SQL Server is configured to allow remote connections. For more information see SQL Server Books Online.'
-```
## Next steps
-Learn more about [Integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md).
+Learn more about [Integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md).
active-directory How To Connect Single Object Sync https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-single-object-sync.md
+
+ Title: 'Azure AD Connect Single Object Sync'
+description: Learn how to synchronize one object from Active Directory to Azure AD for troubleshooting.
++++++ Last updated : 03/19/2021+++++
+# Azure AD Connect Single Object Sync
+
+The Azure AD Connect Single Object Sync tool is a PowerShell cmdlet that can be used to synchronize an individual object from Active Directory to Azure Active Directory. The generated report can be used to investigate and troubleshoot per-object synchronization issues.
+
+> [!NOTE]
+> The tool supports synchronization from Active Directory to Azure Active Directory. It does not support synchronization from Azure Active Directory to Active Directory.
+>
+> The tool supports synchronizing object modifications of type add and update. It does not support synchronizing an object modification of type delete.
+
+## How it works
+The Single Object Sync tool requires an Active Directory distinguished name as input to find the source connector and partition for import. It exports the changes to Azure Active Directory. The tool generates a JSON output similar to the **provisioningObjectSummary** resource type.
+
+The Single Object Sync tool performs the following steps:
+
 1. Determine whether the object's (source) domain (Active Directory connector and partition) is in sync scope.
 2. Determine whether the object's (target) domain (Azure Active Directory connector and partition) is in sync scope.
 3. Determine whether the object's organizational unit is in sync scope.
 4. Determine whether the object is accessible by using the connector account credentials.
 5. Determine whether the object's type is in sync scope.
 6. Determine whether the object is in sync scope if group filtering is enabled.
 7. Import the object from Active Directory to the Active Directory connector space.
 8. Import the object from Azure Active Directory to the Azure Active Directory connector space.
 9. Sync the object from the Active Directory connector space.
 10. Export the object from the Azure Active Directory connector space to Azure Active Directory.
+
+In addition to the JSON output, the tool generates an HTML report that has all the details of the synchronization operation. The HTML report is located in **C:\ProgramData\AADConnect\ADSyncObjectDiagnostics\ADSyncSingleObjectSyncResult-<date>.htm**. This HTML report can be shared with the support team to do further troubleshooting, if needed.
+
+The HTML report contains the following tabs:
+
+|Tab|Description|
+|--|--|
+|Steps|Outlines the steps taken to synchronize an object. Each step contains details for troubleshooting. The Import, Sync, and Export steps contain additional attribute info such as name, is multi-valued, type, value, value add, value delete, operation, sync rule, mapping type, and data source.|
+|Troubleshooting & Recommendation|Provides the error code and reason. The error information is available only if a failure happens.|
+|Modified Properties|Shows the old value and the new value. If there is no old value, or if the new value is deleted, that cell is blank. For multivalued attributes, it shows the count. The attribute name links to the Steps tab section "Export Object from Azure Active Directory Connector Space to Azure Active Directory: Attribute Info", which contains additional details of the attribute such as name, is multi-valued, type, value, value add, value delete, operation, sync rule, mapping type, and data source.|
+|Summary|Provides an overview of what happened and identifiers for the object in the source and target systems.|
+
+## Prerequisites
+
+To use the Single Object Sync tool, you need the March 2021 release of Azure AD Connect or later.
+
+### Run the Single Object Sync tool
+
+To run the Single Object Sync tool, perform the following steps:
+
+ 1. Open a new Windows PowerShell session on your Azure AD Connect server with the Run as Administrator option.
+
+ 2. Set the [execution policy](https://docs.microsoft.com/powershell/module/microsoft.powershell.security/set-executionpolicy) to RemoteSigned or Unrestricted.
+
+ 3. Disable the sync scheduler after verifying that no synchronization operations are running.
+
+ `Set-ADSyncScheduler -SyncCycleEnabled $false`
+
 4. Import the ADSyncDiagnostics module.
+
+ `Import-module -Name "C:\Program Files\Microsoft Azure AD Sync\Bin\ADSyncDiagnostics\ADSyncDiagnostics.psm1"`
+
+ 5. Invoke the Single Object Sync cmdlet.
+
+ `Invoke-ADSyncSingleObjectSync -DistinguishedName "CN=testobject,OU=corp,DC=contoso,DC=com" | Out-File -FilePath ".\output.json"`
+
+ 6. Re-enable the Sync Scheduler.
+
+ `Set-ADSyncScheduler -SyncCycleEnabled $true`
+
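+Put together, the sequence above looks like the following minimal sketch, run from an elevated PowerShell session on the Azure AD Connect server (the distinguished name is a placeholder):
+
+```powershell
+# Minimal end-to-end sketch of the steps above. The DN is a placeholder.
+Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope Process
+
+# Verify that no synchronization is running, then pause the scheduler.
+Set-ADSyncScheduler -SyncCycleEnabled $false
+
+# Import the ADSyncDiagnostics module and run the Single Object Sync cmdlet.
+Import-Module -Name "C:\Program Files\Microsoft Azure AD Sync\Bin\ADSyncDiagnostics\ADSyncDiagnostics.psm1"
+Invoke-ADSyncSingleObjectSync -DistinguishedName "CN=testobject,OU=corp,DC=contoso,DC=com" |
+    Out-File -FilePath ".\output.json"
+
+# Re-enable the scheduler and inspect the JSON summary.
+Set-ADSyncScheduler -SyncCycleEnabled $true
+Get-Content -Path ".\output.json" -Raw | ConvertFrom-Json
+```
+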
+|Single Object Sync Input Parameters|Description|
+|--|-|
+|DistinguishedName|This is a required string parameter.<br/><br/>This is the Active Directory object's distinguished name that needs synchronization and troubleshooting.|
+|StagingMode|This is an optional switch parameter.<br/><br/>This parameter can be used to prevent exporting the changes to Azure Active Directory.<br/><br/>**Note**: The cmdlet will commit the sync operation.<br/><br/>**Note**: An Azure AD Connect staging server will not export the changes to Azure Active Directory.|
+|NoHtmlReport|This is an optional switch parameter.<br/><br/>This parameter can be used to prevent generating the HTML report.|
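+
+As a sketch of the optional switches documented above (the distinguished name is again a placeholder), you can validate what would be synchronized without exporting the changes to Azure Active Directory, and skip the HTML report:
+
+```powershell
+# Commit the sync operation locally but do not export to Azure AD,
+# and suppress the HTML report.
+Invoke-ADSyncSingleObjectSync -DistinguishedName "CN=testobject,OU=corp,DC=contoso,DC=com" `
+    -StagingMode -NoHtmlReport | Out-File -FilePath ".\output.json"
+```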
+
+## Single Object Sync throttling
+
+The Single Object Sync tool **is** intended for investigating and troubleshooting per-object synchronization issues. It is **not** intended to replace the synchronization cycle run by the scheduler. The import from Azure Active Directory and the export to Azure Active Directory are subject to throttling limits. If you reach the throttling limit, retry after 5 minutes.
+
+## Next steps
+- [Troubleshooting object synchronization](tshoot-connect-objectsync.md)
+- [Troubleshoot object not synchronizing](tshoot-connect-object-not-syncing.md)
+- [End-to-end troubleshooting of Azure AD Connect objects and attributes](https://docs.microsoft.com/troubleshoot/azure/active-directory/troubleshoot-aad-connect-objects-attributes)
active-directory How To Connect Sync Change Serviceacct Pass https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-sync-change-serviceacct-pass.md
na ms.devlang: na Previously updated : 05/02/2019 Last updated : 03/17/2021
# Changing the ADSync service account password If you change the ADSync service account password, the Synchronization Service will not be able start correctly until you have abandoned the encryption key and reinitialized the ADSync service account password.
+>[!IMPORTANT]
+> If you use Connect with a build from 2017 March or earlier, then you should not reset the password on the service account since Windows destroys the encryption keys for security reasons. You cannot change the account to any other account without reinstalling Azure AD Connect. If you upgrade to a build from 2017 April or later, then it is supported to change the password on the service account, but you cannot change the account used.
+ Azure AD Connect, as part of the Synchronization Services uses an encryption key to store the passwords of the AD DS Connector account and ADSync service account. These accounts are encrypted before they are stored in the database. The encryption key used is secured using [Windows Data Protection (DPAPI)](/previous-versions/ms995355(v=msdn.10)). DPAPI protects the encryption key using the **ADSync service account**.
active-directory Reference Connect Version History https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/reference-connect-version-history.md
ms.assetid: ef2797d7-d440-4a9a-a648-db32ad137494
Previously updated : 08/07/2020 Last updated : 03/16/2021
Please follow this link to read more about [auto upgrade](how-to-connect-install
> >For version history information on retired versions, see [Azure AD Connect version release history archive](reference-connect-version-history-archive.md) +
+## 1.6.2.3
+
+>[!NOTE]
+> - This release will be made available for download only.
+> - The upgrade to this release will require a full synchronization due to sync rule changes.
+> - This release defaults the AADConnect server to the new V2 endpoint. This endpoint is not supported in the German national cloud, the Chinese national cloud, or the US government cloud. If you need to deploy this version in those clouds, follow [these instructions](https://docs.microsoft.com/azure/active-directory/hybrid/how-to-connect-sync-endpoint-api-v2#rollback) to switch back to the V1 endpoint. Failure to do so will result in synchronization errors.
+
+### Release status
+3/17/2021: Released for download
+
+### Functional changes
+
 - Updated default sync rules to limit membership in written-back groups to 50k members.
 - Added new default sync rules for limiting membership count in group writeback (Out to AD - Group Writeback Member Limit) and group sync to Azure Active Directory (Out to AAD - Group Writeup Member Limit) groups.
 - Added the member attribute to the 'Out to AD - Group SOAInAAD - Exchange' rule to limit members in written-back groups to 50k.
 - Updated sync rules to support Group Writeback v2:
   - If the "In from AAD - Group SOAInAAD" rule is cloned and AADConnect is upgraded:
     - The updated rule will be disabled by default, so the targetWritebackType will be null.
     - AADConnect will write back all cloud groups (including Azure Active Directory security groups enabled for writeback) as distribution groups.
   - If the "Out to AD - Group SOAInAAD" rule is cloned and AADConnect is upgraded:
     - The updated rule will be disabled by default. However, a new sync rule, "Out to AD - Group SOAInAAD - Exchange", will be added and enabled.
     - Depending on the cloned custom sync rule's precedence, AADConnect will flow the Mail and Exchange attributes.
     - If the cloned custom sync rule does not flow some Mail and Exchange attributes, the new Exchange sync rule will add those attributes.
+ - Added support for [Selective Password hash Synchronization](https://docs.microsoft.com/azure/active-directory/hybrid/how-to-connect-selective-password-hash-synchronization)
+ - Added the new [Single Object Sync cmdlet](https://docs.microsoft.com/azure/active-directory/hybrid/how-to-connect-single-object-sync). Use this cmdlet to troubleshoot your Azure AD Connect sync configuration.
 - Updated the AADConnectHealth agent to 3.1.83.0.
+ - New version of the [ADSyncTools PowerShell module](https://docs.microsoft.com/azure/active-directory/hybrid/reference-connect-adsynctools), which has several new or improved cmdlets.
+
+ - Clear-ADSyncToolsMsDsConsistencyGuid
+ - ConvertFrom-ADSyncToolsAadDistinguishedName
+ - ConvertFrom-ADSyncToolsImmutableID
+ - ConvertTo-ADSyncToolsAadDistinguishedName
+ - ConvertTo-ADSyncToolsCloudAnchor
+ - ConvertTo-ADSyncToolsImmutableID
+ - Export-ADSyncToolsAadDisconnectors
+ - Export-ADSyncToolsObjects
+ - Export-ADSyncToolsRunHistory
+ - Get-ADSyncToolsAadObject
+ - Get-ADSyncToolsMsDsConsistencyGuid
+ - Import-ADSyncToolsObjects
+ - Import-ADSyncToolsRunHistory
+ - Remove-ADSyncToolsAadObject
+ - Search-ADSyncToolsADobject
+ - Set-ADSyncToolsMsDsConsistencyGuid
+ - Trace-ADSyncToolsADImport
+ - Trace-ADSyncToolsLdapQuery
+
+ - Updated error logging for token acquisition failures.
+ - Updated 'Learn more' links on the configuration page to give more detail on the linked information.
 - Removed the Explicit column from the CS Search page in the old sync UI.
+ - Additional UI has been added to the Group Writeback flow to prompt the user for credentials or to configure their own permissions using the ADSyncConfig module if credentials have not already been provided in an earlier step.
+ - Auto-create MSA for ADSync Service Account on a DC.
+ - Added ability to set and get Azure Active Directory DirSync feature Group Writeback V2 in the existing cmdlets:
+ - Set-ADSyncAADCompanyFeature
+ - Get-ADSyncAADCompanyFeature
 - Added 2 cmdlets to read the AWS API version:
   - Get-ADSyncAADConnectorImportApiVersion - to get the import AWS API version
   - Get-ADSyncAADConnectorExportApiVersion - to get the export AWS API version
+
 - Changes made to synchronization rules are now tracked to assist with troubleshooting changes in the service. The cmdlet "Get-ADSyncRuleAudit" retrieves the tracked changes.
 - Updated the Add-ADSyncADDSConnectorAccount cmdlet in the [ADSyncConfig PowerShell module](https://docs.microsoft.com/azure/active-directory/hybrid/how-to-connect-configure-ad-ds-connector-account#using-the-adsyncconfig-powershell-module) to allow a user in the ADSyncAdmin group to change the AD DS Connector account.
+
+### Bug fixes
+ - Updated disabled foreground color to satisfy luminosity requirements on a white background. Added additional conditions for navigation tree to set foreground text color to white when a disabled page is selected to satisfy luminosity requirements.
 - Increased granularity for the Set-ADSyncPasswordHashSyncPermissions cmdlet: updated the PHS permissions script (Set-ADSyncPasswordHashSyncPermissions) to include an optional "ADobjectDN" parameter.
 - Accessibility bug fix: the screen reader now describes the UX element that holds the list of forests as "**Forests list**" instead of "**Forest List list**".
+ - Updated screen reader output for some items in the Azure AD Connect wizard. Updated button hover color to satisfy contrast requirements. Updated Synchronization Service Manager title color to satisfy contrast requirements.
+ - Fixed an issue with installing AADConnect from exported configuration having custom extension attributes - Added a condition to skip checking for extension attributes in the target schema while applying the sync rule.
+ - Appropriate permissions are added on install if the Group Writeback feature is enabled.
 - Fixed duplicate default sync rule precedence on import.
 - Fixed an issue that caused a staging error during V2 API delta import for a conflicting object that was repaired via the health portal.
 - Fixed an issue in the sync engine that caused CS objects to have an inconsistent link state.
+ - Added import counters to Get-ADSyncConnectorStatistics output.
 - Fixed an issue where an unreachable domain that was previously selected was deselected in some corner cases during the pass 2 wizard.
 - Modified policy import and export to fail if a custom rule has duplicate precedence.
 - Fixed a bug in the domain selection logic.
 - Fixed an issue with build 1.5.18.0 if you use mS-DS-ConsistencyGuid as the source anchor and have cloned the In from AD - Group Join rule.
 - Fresh AADConnect installs will use the Export Deletion Threshold stored in the cloud if one is available and a different one isn't passed in.
 - Fixed an issue where AADConnect does not read AD displayName changes of hybrid-joined devices.
+ ## 1.5.45.0 ### Release status
active-directory Aws Single Sign On Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/aws-single-sign-on-tutorial.md
Previously updated : 02/18/2021 Last updated : 03/12/2021
To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment.
-* AWS Single Sign-on supports **SP and IDP** initiated SSO
+* AWS Single Sign-on supports **SP and IDP** initiated SSO.
* AWS Single Sign-on supports [**Automated user provisioning**](./aws-single-sign-on-provisioning-tutorial.md).
Follow these steps to enable Azure AD SSO in the Azure portal.
1. In the Azure portal, on the **AWS Single Sign-on** application integration page, find the **Manage** section and select **single sign-on**. 1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
Follow these steps to enable Azure AD SSO in the Azure portal.
a. Click **Upload metadata file**.
- ![image1](common/upload-metadata.png)
-
- b. Click on **folder logo** to select the metadata file and click **Upload**.
+ b. Click the **folder logo**, select the metadata file that you downloaded from the **Configure AWS Single Sign-on SSO** section (step 8), and click **Add**.
![image2](common/browse-upload-metadata.png)
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
## Configure AWS Single Sign-on SSO
-1. Open the **AWS SSO console** .
+1. To automate the configuration within AWS Single Sign-on, you need to install the **My Apps Secure Sign-in browser extension** by clicking **Install the extension**.
+
+ ![My apps extension](common/install-myappssecure-extension.png)
+
+2. After you add the extension to the browser, click **Set up AWS Single Sign-on** to go to the AWS Single Sign-on application. From there, provide the admin credentials to sign in to AWS Single Sign-on. The browser extension will automatically configure the application for you and automate steps 3-10.
+
+ ![Setup configuration](common/setup-sso.png)
+
+3. If you want to set up AWS Single Sign-on manually, in a different web browser window, sign in to your AWS Single Sign-on company site as an administrator.
+
+1. Go to **Services** > **Security, Identity, & Compliance** > **AWS Single Sign-On**.
2. In the left navigation pane, choose **Settings**.
-3. On the **Settings** page, find **Identity source**, choose **Change**.
-4. On the Change directory page, choose **External identity provider**.
-5. In the **Service provider metadata** section, find **AWS SSO SAML metadata** and select **Download metadata file** to download the metadata file and save it on your computer.
-6. In the **Identity provider metadata** section, choose **Browse** to upload the metadata file which you have downloaded from the Azure portal.
-7. Choose **Next: Review**.
-8. In the text box, type **CONFIRM** to confirm changing directory.
-9. Choose **Finish**.
+3. On the **Settings** page, find **Identity source**, and click **Change**.
+
+ ![Screenshot for Identity source change service](./media/aws-single-sign-on-tutorial/settings.png)
+
+4. On the **Change identity source** page, choose **External identity provider**.
+
+
+ ![Screenshot for selecting external identity provider section](./media/aws-single-sign-on-tutorial/external-identity-provider.png)
++
+1. Perform the following steps in the **Configure external identity provider** section:
+
+ ![Screenshot for download and upload metadata section](./media/aws-single-sign-on-tutorial/upload-metadata.png)
+
+ a. In the **Service provider metadata** section, find **AWS SSO SAML metadata**, select **Download metadata file**, and save the file on your computer. You'll upload this metadata file in the Azure portal.
+
+ b. Copy the **AWS SSO Sign-in URL** value, and paste it into the **Sign on URL** text box in the **Basic SAML Configuration** section in the Azure portal.
+
+ c. In the **Identity provider metadata** section, choose **Browse** to upload the metadata file that you downloaded from the Azure portal.
+
+ d. Choose **Next: Review**.
+
+8. In the text box, type **ACCEPT** to change the identity source.
+
+ ![Screenshot for Confirming the configuration](./media/aws-single-sign-on-tutorial/accept.png)
+
+9. Click **Change identity source**.
### Create AWS Single Sign-on test user
active-directory Idc Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/idc-tutorial.md
Previously updated : 09/19/2019 Last updated : 03/18/2021
In this tutorial, you'll learn how to integrate IDC with Azure Active Directory
* Enable your users to be automatically signed-in to IDC with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
- ## Prerequisites To get started, you need the following items:
To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment.
-* IDC supports **SP and IDP** initiated SSO
+* IDC supports **SP and IDP** initiated SSO.
+
+> [!NOTE]
> The identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
## Adding IDC from the gallery To configure the integration of IDC into Azure AD, you need to add IDC from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add new application, select **New application**. 1. In the **Add from the gallery** section, type **IDC** in the search box. 1. Select **IDC** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on for IDC
+## Configure and test Azure AD SSO for IDC
Configure and test Azure AD SSO with IDC using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in IDC.
-To configure and test Azure AD SSO with IDC, complete the following building blocks:
+To configure and test Azure AD SSO with IDC, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature. 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
To configure and test Azure AD SSO with IDC, complete the following building blo
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **IDC** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **IDC** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png) 1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, enter the values for the following fields:
- a. In the **Identifier** text box, type a URL using the following pattern:
- `urn:idc:authentication:saml2:entity:cas:prod-2016:<ClientCode>`
+ a. In the **Identifier** text box, type the URL:
+ `https://www.idc.com/sp`
b. In the **Reply URL** text box, type a URL using the following pattern: `https://cas.idc.com:443/login?client_name=<ClientName>`
- c. In the **Relay State** text box, type a URL:
+ c. In the **Relay State** text box, type the URL:
`https://www.idc.com/j_spring_cas_security_check` 1. Click **Set additional URLs** and perform the following steps if you wish to configure the application in **SP** initiated mode:
- In the **Sign-on URL** text box, type a URL:
+ In the **Sign-on URL** text box, type the URL:
`https://www.idc.com/saml-welcome/<SamlWelcomeCode>` > [!NOTE]
- > These values are not real. Update these values with the actual Identifier and Reply URL. Contact the IDC Client support team to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > The Reply URL value is not real. Update the value with the actual Reply URL. Contact the [IDC Client support team](mailto:idc_support@idc.com) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**. 1. In the applications list, select **IDC**. 1. In the app's overview page, find the **Manage** section and select **Users and groups**.-
- ![The "Users and groups" link](common/users-groups-blade.png)
- 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.-
- ![The Add User link](common/add-assign-user.png)
- 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
1. In the **Add Assignment** dialog, click the **Assign** button. ## Configure IDC SSO
-To configure single sign-on on the **IDC** side, send the downloaded **Federation Metadata XML** and appropriate copied URLs from the Azure portal to the IDC support team. IDC configures this setting so the SAML SSO connection is set properly on both sides.
+To configure single sign-on on the **IDC** side, send the downloaded **Federation Metadata XML** and appropriate copied URLs from the Azure portal to the [IDC support team](mailto:idc_support@idc.com). IDC configures this setting so the SAML SSO connection is set properly on both sides.
### Create IDC test user
A user does not have to be created in IDC in advance. The user will be created auto
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click **Test this application** in the Azure portal. You'll be redirected to the IDC sign-on URL, where you can initiate the login flow.
+
+* Go to the IDC sign-on URL directly, and initiate the login flow from there.
-When you click the IDC tile in the Access Panel, you should be automatically signed in to the IDC for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+#### IDP initiated:
-## Additional resources
+* Click **Test this application** in the Azure portal, and you should be automatically signed in to the IDC for which you set up the SSO.
-- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+You can also use Microsoft My Apps to test the application in any mode. When you click the IDC tile in My Apps, if it's configured in SP mode, you're redirected to the application sign-on page to initiate the login flow. If it's configured in IDP mode, you should be automatically signed in to the IDC for which you set up the SSO. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md) -- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+## Next steps
-- [Try IDC with Azure AD](https://aad.portal.azure.com/)
+ Once you configure IDC, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
active-directory Moqups Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/moqups-tutorial.md
+
+ Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Moqups | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and Moqups.
++++++++ Last updated : 03/19/2021++++
+# Tutorial: Azure Active Directory single sign-on (SSO) integration with Moqups
+
+In this tutorial, you'll learn how to integrate Moqups with Azure Active Directory (Azure AD). When you integrate Moqups with Azure AD, you can:
+
+* Control in Azure AD who has access to Moqups.
+* Enable your users to be automatically signed-in to Moqups with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Moqups single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Moqups supports **SP and IDP** initiated SSO.
+* Moqups supports **Just In Time** user provisioning.
+
+> [!NOTE]
+> The identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
+
+## Adding Moqups from the gallery
+
+To configure the integration of Moqups into Azure AD, you need to add Moqups from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **Moqups** in the search box.
+1. Select **Moqups** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
++
+## Configure and test Azure AD SSO for Moqups
+
+Configure and test Azure AD SSO with Moqups using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Moqups.
+
+To configure and test Azure AD SSO with Moqups, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Moqups SSO](#configure-moqups-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Moqups test user](#create-moqups-test-user)** - to have a counterpart of B.Simon in Moqups that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Moqups** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. In the **Basic SAML Configuration** section, you don't need to perform any steps, because the app is already pre-integrated with Azure.
+
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign-on URL** text box, type the URL:
+ `https://app.moqups.com/saml-login`
+
+1. Click **Save**.
+
+1. Moqups application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![image](common/default-attributes.png)
+
+1. In addition to the above, the Moqups application expects a few more attributes to be passed back in the SAML response, as shown below. These attributes are also prepopulated, but you can review them per your requirements.
+
+ | Name | Source Attribute|
+ | -- | |
+ | FirstName | user.givenname |
+ | LastName | user.surname |
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url** value, and save it on your computer.
+
+ ![The Certificate download link](common/copy-metadataurl.png)
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username in the format username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Moqups.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Moqups**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Moqups SSO
+
+To configure single sign-on on the **Moqups** side, you need to send the **App Federation Metadata Url** to the [Moqups support team](mailto:support@moqups.com). They use it to configure the SAML SSO connection properly on both sides.
+
+### Create Moqups test user
+
+In this section, a user called Britta Simon is created in Moqups. Moqups supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Moqups, a new one is created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click **Test this application** in the Azure portal. You'll be redirected to the Moqups sign-on URL, where you can initiate the login flow.
+
+* Go to the Moqups sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click **Test this application** in the Azure portal, and you should be automatically signed in to the Moqups instance for which you set up SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Moqups tile in My Apps, if the app is configured in SP mode you're redirected to the application sign-on page to initiate the login flow, and if it's configured in IDP mode you should be automatically signed in to the Moqups instance for which you set up SSO. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
+
+## Next steps
+
+Once you configure Moqups, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
++
active-directory Sequr Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/sequr-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with Sequr | Microsoft Docs'
-description: Learn how to configure single sign-on between Azure Active Directory and Sequr.
+ Title: 'Tutorial: Azure Active Directory integration with Genea Access Control | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and Genea Access Control.
Previously updated : 04/10/2019 Last updated : 03/17/2021
-# Tutorial: Azure Active Directory integration with Sequr
+# Tutorial: Azure Active Directory integration with Genea Access Control
-In this tutorial, you learn how to integrate Sequr with Azure Active Directory (Azure AD).
-Integrating Sequr with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate Genea Access Control with Azure Active Directory (Azure AD). When you integrate Genea Access Control with Azure AD, you can:
-* You can control in Azure AD who has access to Sequr.
-* You can enable your users to be automatically signed-in to Sequr (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to Genea Access Control.
+* Enable your users to be automatically signed-in to Genea Access Control with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites
-To configure Azure AD integration with Sequr, you need the following items:
+To configure Azure AD integration with Genea Access Control, you need the following items:
* An Azure AD subscription. If you don't have an Azure AD environment, you can get a [free account](https://azure.microsoft.com/free/)
-* Sequr single sign-on enabled subscription
+* Genea Access Control single sign-on enabled subscription
## Scenario description
-In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-
-* Sequr supports **SP and IDP** initiated SSO
-
-## Adding Sequr from the gallery
-
-To configure the integration of Sequr into Azure AD, you need to add Sequr from the gallery to your list of managed SaaS apps.
-
-**To add Sequr from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
+In this tutorial, you configure and test Azure AD SSO in a test environment.
-4. In the search box, type **Sequr**, select **Sequr** from result panel then click **Add** button to add the application.
+* Genea Access Control supports **SP and IDP** initiated SSO.
+> [!NOTE]
+> The Identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
- ![Sequr in the results list](common/search-new-app.png)
-## Configure and test Azure AD single sign-on
+## Adding Genea Access Control from the gallery
-In this section, you configure and test Azure AD single sign-on with Sequr based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in Sequr needs to be established.
+To configure the integration of Genea Access Control into Azure AD, you need to add Genea Access Control from the gallery to your list of managed SaaS apps.
-To configure and test Azure AD single sign-on with Sequr, you need to complete the following building blocks:
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Genea Access Control** in the search box.
+1. Select **Genea Access Control** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure Sequr Single Sign-On](#configure-sequr-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create Sequr test user](#create-sequr-test-user)** - to have a counterpart of Britta Simon in Sequr that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+## Configure and test Azure AD SSO for Genea Access Control
-### Configure Azure AD single sign-on
+Configure and test Azure AD SSO with Genea Access Control using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Genea Access Control.
-In this section, you enable Azure AD single sign-on in the Azure portal.
+To configure and test Azure AD SSO with Genea Access Control, perform the following steps:
-To configure Azure AD single sign-on with Sequr, perform the following steps:
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Genea Access Control SSO](#configure-genea-access-control-sso)** - to configure the single sign-on settings on the application side.
+ 1. **[Create Genea Access Control test user](#create-genea-access-control-test-user)** - to have a counterpart of B.Simon in Genea Access Control that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-1. In the [Azure portal](https://portal.azure.com/), on the **Sequr** application integration page, select **Single sign-on**.
+## Configure Azure AD SSO
- ![Configure single sign-on link](common/select-sso.png)
+Follow these steps to enable Azure AD SSO in the Azure portal.
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
+1. In the Azure portal, on the **Genea Access Control** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Single sign-on select mode](common/select-saml-option.png)
-
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
4. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, perform the following step:
- ![Sequr Domain and URLs single sign-on information](common/idp-identifier.png)
- In the **Identifier** text box, type the URL: `https://login.sequr.io` 5. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
- ![image](common/both-advanced-urls.png)
- a. In the **Sign-on URL** text box, type the URL: `https://login.sequr.io` b. In the **Relay State** textbox, you will get this value, which is explained later in the tutorial.
To configure Azure AD single sign-on with Sequr, perform the following steps:
![The Certificate download link](common/certificatebase64.png)
-7. On the **Set up Sequr** section, copy the appropriate URL(s) as per your requirement.
+7. On the **Set up Genea Access Control** section, copy the appropriate URL(s) as per your requirement.
![Copy configuration URLs](common/copy-configuration-urls.png)
- a. Login URL
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username in the format username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
- b. Azure AD Identifier
+### Assign the Azure AD test user
- c. Logout URL
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Genea Access Control.
-### Configure Sequr Single Sign-On
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Genea Access Control**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+## Configure Genea Access Control SSO
-1. In a different web browser window, sign in to your Sequr company site as an administrator.
+1. In a different web browser window, sign in to your Genea Access Control company site as an administrator.
1. Click **Integrations** in the left navigation panel.
- ![Screenshot shows Integration selected from the navigation panel.](./media/sequr-tutorial/configure1.png)
+ ![Screenshot shows Integration selected from the navigation panel.](./media/sequr-tutorial/configure-1.png)
1. Scroll down to the **Single Sign-On** section and click **Manage**.
- ![Screenshot shows the Single Sign-on section with the Manage button selected.](./media/sequr-tutorial/configure2.png)
+ ![Screenshot shows the Single Sign-on section with the Manage button selected.](./media/sequr-tutorial/configure-2.png)
1. In the **Manage Single Sign-On** section, perform the following steps:
- ![Screenshot shows the Manage Single Sign-On section where you can enter the values described.](./media/sequr-tutorial/configure3.png)
+ ![Screenshot shows the Manage Single Sign-On section where you can enter the values described.](./media/sequr-tutorial/configure-3.png)
a. In the **Identity Provider Single Sign-On URL** textbox, paste the **Login URL** value, which you have copied from the Azure portal.
To configure Azure AD single sign-on with Sequr, perform the following steps:
d. Click **Save**.
-### Create an Azure AD test user
-
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type `brittasimon@yourcompanydomain.extension`. For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
-
- d. Click **Create**.
-
-### Assign the Azure AD test user
-
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to Sequr.
-
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **Sequr**.
-
- ![Enterprise applications blade](common/enterprise-applications.png)
-
-2. In the applications list, select **Sequr**.
-
- ![The Sequr link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
+### Create Genea Access Control test user
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
+In this section, you create a user called Britta Simon in Genea Access Control. Work with the [Genea Access Control support team](mailto:support@sequr.io) to add the users in the Genea Access Control platform. Users must be created and activated before you use single sign-on.
-7. In the **Add Assignment** dialog click the **Assign** button.
+## Test SSO
-### Create Sequr test user
+In this section, you test your Azure AD single sign-on configuration with the following options.
-In this section, you create a user called Britta Simon in Sequr. Work with [Sequr Client support team](mailto:support@sequr.io) to add the users in the Sequr platform. Users must be created and activated before you use single sign-on.
+#### SP initiated:
-### Test single sign-on
+* Click **Test this application** in the Azure portal. You'll be redirected to the Genea Access Control sign-on URL, where you can initiate the login flow.
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+* Go to the Genea Access Control sign-on URL directly and initiate the login flow from there.
-When you click the Sequr tile in the Access Panel, you should be automatically signed in to the Sequr for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+#### IDP initiated:
-## Additional Resources
+* Click **Test this application** in the Azure portal, and you should be automatically signed in to the Genea Access Control instance for which you set up SSO.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+You can also use Microsoft My Apps to test the application in any mode. When you click the Genea Access Control tile in My Apps, if the app is configured in SP mode you're redirected to the application sign-on page to initiate the login flow, and if it's configured in IDP mode you should be automatically signed in to the Genea Access Control instance for which you set up SSO. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure Genea Access Control, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
aks Aks Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/aks-migration.md
This article helps you plan and execute a successful migration to Azure Kubernet
This document can be used to help support the following scenarios:
+* Containerizing certain applications and migrating them to AKS using [Azure Migrate](../migrate/migrate-services-overview.md)
* Migrating an AKS Cluster backed by [Availability Sets](../virtual-machines/windows/tutorial-availability-sets.md) to [Virtual Machine Scale Sets](../virtual-machine-scale-sets/overview.md) * Migrating an AKS cluster to use a [Standard SKU load balancer](./load-balancer-standard.md) * Migrating from [Azure Container Service (ACS) - retiring January 31, 2020](https://azure.microsoft.com/updates/azure-container-service-will-retire-on-january-31-2020/) to AKS
Several open-source tools can help with your migration, depending on your scenar
In this article we will summarize migration details for: > [!div class="checklist"]
+> * Containerizing applications through Azure Migrate
> * AKS with Standard Load Balancer and Virtual Machine Scale Sets > * Existing attached Azure Services > * Ensure valid quotas
In this article we will summarize migration details for:
> * Considerations for stateful applications > * Deployment of your cluster configuration
+## Use Azure Migrate to migrate your applications to AKS
+
+Azure Migrate offers a unified platform to assess on-premises servers, infrastructure, applications, and data, and to migrate them to Azure. For AKS, you can use Azure Migrate for the following tasks:
+
+* [Containerize ASP.NET applications and migrate to AKS](../migrate/tutorial-containerize-aspnet-kubernetes.md)
+* [Containerize Java web applications and migrate to AKS](../migrate/tutorial-containerize-java-kubernetes.md)
+ ## AKS with Standard Load Balancer and Virtual Machine Scale Sets AKS is a managed service offering unique capabilities with lower management overhead. As a result of being a managed service, you must select from a set of [regions](./quotas-skus-regions.md) which AKS supports. The transition from your existing cluster to AKS may require modifying your existing applications so they remain healthy on the AKS managed control plane.
aks Planned Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/planned-maintenance.md
# Use Planned Maintenance to schedule maintenance windows for your Azure Kubernetes Service (AKS) cluster (preview)
-Your AKS cluster has regular maintenance performed on it automatically. By default, this work can happen at any time. Planned Maintenance allows you to schedule weekly maintenance windows that will update your control plane and minimize workload impact. Once scheduled, all your maintenance will occur during the window you selected. You can schedule one or more weekly windows on your cluster by specifying a day or time range on a specific day. Maintenance Windows are configured using the Azure CLI.
+Your AKS cluster has regular maintenance performed on it automatically. By default, this work can happen at any time. Planned Maintenance allows you to schedule weekly maintenance windows that update your control plane as well as your kube-system pods on a VMSS instance, minimizing workload impact. Once scheduled, all your maintenance occurs during the window you selected. You can schedule one or more weekly windows on your cluster by specifying a day or a time range on a specific day. Maintenance windows are configured using the Azure CLI, as sketched below.
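As an illustrative sketch only (the feature is in preview, so commands and flags may change; the resource names are placeholders, and the `aks-preview` Azure CLI extension is assumed):

```
az aks maintenanceconfiguration add \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name default \
  --weekday Monday \
  --start-hour 1
```

This example requests a weekly window every Monday starting at 1:00 AM.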
## Before you begin
api-management Validation Policies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/validation-policies.md
In the following example, the JSON payload in requests and responses is validate
| Name | Description | Required | Default | | -- | - | -- | - | | unspecified-content-type-action | [Action](#actions) to perform for requests or responses with a content type that isn't specified in the API schema. | Yes | N/A |
-| max-size | Maximum length of the body of the request or response, checked against the `Content-Length` header. If the request body or response body is compressed, this value is the decompressed length. Maximum allowed value: 102,400 bytes (100 KB). | Yes | N/A |
+| max-size | Maximum length of the body of the request or response in bytes, checked against the `Content-Length` header. If the request body or response body is compressed, this value is the decompressed length. Maximum allowed value: 102,400 bytes (100 KB). | Yes | N/A |
| size-exceeded-action | [Action](#actions) to perform for requests or responses whose body exceeds the size specified in `max-size`. | Yes | N/A | | errors-variable-name | Name of the variable in `context.Variables` to log validation errors to. | Yes | N/A | | type | Content type to execute body validation for, checked against the `Content-Type` header. This value is case insensitive. If empty, it applies to every content type specified in the API schema. | No | N/A |
app-service Configure Authentication Provider Aad https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-authentication-provider-aad.md
To register the app, perform the following steps:
|-|-| |Client ID| Use the **Application (client) ID** of the app registration. | |Issuer Url| Use `<authentication-endpoint>/<tenant-id>/v2.0`, and replace *\<authentication-endpoint>* with the [authentication endpoint for your cloud environment](../active-directory/develop/authentication-national-cloud.md#azure-ad-authentication-endpoints) (e.g., "https://login.microsoftonline.com" for global Azure), also replacing *\<tenant-id>* with the **Directory (tenant) ID** in which the app registration was created. This value is used to redirect users to the correct Azure AD tenant, as well as to download the appropriate metadata to determine the appropriate token signing keys and token issuer claim value for example. For applications that use Azure AD v1 and for Azure Functions apps, omit `/v2.0` in the URL.|
- |Client Secret (Optional)| Use the client secret you generated in the app registration.|
- |Allowed Token Audiences| If this is a cloud or server app and you want to allow authentication tokens from a web app, add the **Application ID URI** of the web app here. The configured **Client ID** is *always* implicitly considered to be an allowed audience. |
+ |Client Secret (Optional)| Use the client secret you generated in the app registration. With a client secret, hybrid flow is used, and App Service returns access and refresh tokens. When the client secret is not set, implicit flow is used and only an ID token is returned. These tokens are sent by the provider and stored in the EasyAuth token store.|
+ |Allowed Token Audiences| If this is a cloud or server app and you want to allow authentication tokens from a web app, add the **Application ID URI** of the web app here. The configured **Client ID** is *always* implicitly considered to be an allowed audience.|
2. Select **OK**, and then select **Save**.
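For orientation, a hedged sketch follows (not part of the original steps) showing how server-side code in the protected app can read the stored tokens mentioned above; App Service injects them as request headers when the token store is enabled:

```csharp
// Sketch for ASP.NET code running behind App Service authentication.
// The token store surfaces the signed-in user's tokens as request headers.
string idToken = Request.Headers["X-MS-TOKEN-AAD-ID-TOKEN"];

// Access and refresh tokens are present only when a client secret is
// configured (hybrid flow), per the table above.
string accessToken = Request.Headers["X-MS-TOKEN-AAD-ACCESS-TOKEN"];
string refreshToken = Request.Headers["X-MS-TOKEN-AAD-REFRESH-TOKEN"];
```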
app-service Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/networking/private-endpoint.md
description: Connect privately to a Web App using Azure Private Endpoint
ms.assetid: 2dceac28-1ba6-4904-a15d-9e91d5ee162c Previously updated : 10/09/2020 Last updated : 03/16/2021
When you use Azure Function in Elastic Premium Plan with Private Endpoint, to ru
You can connect up to 100 Private Endpoints to a particular Web App.
-Slots cannot use Private Endpoint.
- Remote Debugging functionality is not available when Private Endpoint is enabled for the Web App. The recommendation is to deploy the code to a slot and remote debug it there. FTP access is provided through the inbound public IP address. Private Endpoint does not support FTP access to the Web App.
application-gateway Create Multiple Sites Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/create-multiple-sites-portal.md
Previously updated : 02/23/2021 Last updated : 03/19/2021 #Customer intent: As an IT administrator, I want to use the Azure portal to set up an application gateway so I can host multiple sites.
In this tutorial, you learn how to:
> * Create backend pools with the backend servers > * Create backend listeners > * Create routing rules
-> * Create a CNAME record in your domain
+> * Edit the hosts file for name resolution
:::image type="content" source="./media/create-multiple-sites-portal/scenario.png" alt-text="Multi-site Application Gateway":::
In this example, you install IIS on the virtual machines only to verify Azure cr
Wait for the deployment to complete before proceeding to the next step.
-## Edit your hosts file
+## Edit your hosts file for name resolution
-After the application gateway is created with its public IP address, you can get the IP address and use it to edit your hosts file to resolve `www.contoso.com` and `www.fabrikam.com`
+After the application gateway is created with its public IP address, you can get the IP address and use it to edit your hosts file to resolve `www.contoso.com` and `www.fabrikam.com`. In a production environment, you could create a `CNAME` record in DNS for name resolution.
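For example, the hosts file entries might look like the following; the IP address shown is illustrative, and you replace it with your application gateway's public IP from the steps below:

```
203.0.113.10  www.contoso.com
203.0.113.10  www.fabrikam.com
```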
1. Click **All resources**, and then click **myAGPublicIPAddress**.
automation Schedules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/shared-resources/schedules.md
Title: Manage schedules in Azure Automation
description: This article tells how to create and work with a schedule in Azure Automation. Previously updated : 09/10/2020 Last updated : 03/19/2021
The cmdlets in the following table create and manage Automation schedules with P
## Create a schedule
-You can create a new schedule for your runbooks in the Azure portal or with PowerShell. To avoid affecting your runbooks and the processes they automate, you should first test any runbooks that have linked schedules with an Automation account dedicated for testing. A test validates that your scheduled runbooks continue to work correctly. If you see a problem, you can troubleshoot and apply any changes required before you migrate the updated runbook version to production.
+You can create a new schedule for your runbooks from the Azure portal, with PowerShell, or using an Azure Resource Manager (ARM) template. To avoid affecting your runbooks and the processes they automate, you should first test any runbooks that have linked schedules with an Automation account dedicated for testing. A test validates that your scheduled runbooks continue to work correctly. If you see a problem, you can troubleshoot and apply any changes required before you migrate the updated runbook version to production.
> [!NOTE] > Your Automation account doesn't automatically get any new versions of modules unless you've updated them manually by selecting the [Update Azure modules](../automation-update-azure-modules.md) option from **Modules**. Azure Automation uses the latest modules in your Automation account when a new scheduled job is run.
automation Deploy Updates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/update-management/deploy-updates.md
Title: How to create update deployments for Azure Automation Update Management
description: This article describes how to schedule update deployments and review their status. Previously updated : 12/09/2020 Last updated : 03/19/2021
Under each scenario, the deployment you create targets that selected machine or
* The target machine to update is set to target itself automatically * When configuring the schedule, you can specify **Update now**, occurs once, or uses a recurring schedule.
+> [!IMPORTANT]
+> By creating an update deployment, you accept the Software License Terms (EULA) provided by the company offering updates for its operating system.
+ ## Sign in to the Azure portal Sign in to the [Azure portal](https://portal.azure.com)
automation Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/update-management/overview.md
Title: Azure Automation Update Management overview
description: This article provides an overview of the Update Management feature that implements updates for your Windows and Linux machines. Previously updated : 03/08/2021 Last updated : 03/19/2021 # Update Management overview
The following table lists the supported operating systems for update assessments
||| |Windows Server 2019 (Datacenter/Standard including Server Core)<br><br>Windows Server 2016 (Datacenter/Standard excluding Server Core)<br><br>Windows Server 2012 R2(Datacenter/Standard)<br><br>Windows Server 2012 | | |Windows Server 2008 R2 (RTM and SP1 Standard)| Update Management supports assessments and patching for this operating system. The [Hybrid Runbook Worker](../automation-windows-hrw-install.md) is supported for Windows Server 2008 R2. |
-|CentOS 6 and 7 (x64) | Linux agents require access to an update repository. Classification-based patching requires `yum` to return security data that CentOS doesn't have in its RTM releases. For more information on classification-based patching on CentOS, see [Update classifications on Linux](view-update-assessments.md#linux). |
-|Red Hat Enterprise 6 and 7 (x64) | Linux agents require access to an update repository. |
+|CentOS 6, 7, and 8 (x64) | Linux agents require access to an update repository. Classification-based patching requires `yum` to return security data that CentOS doesn't have in its RTM releases. For more information on classification-based patching on CentOS, see [Update classifications on Linux](view-update-assessments.md#linux). |
+|Red Hat Enterprise 6, 7, and 8 (x64) | Linux agents require access to an update repository. |
|SUSE Linux Enterprise Server 12, 15, and 15.1 (x64) | Linux agents require access to an update repository. For SUSE 15.x, Python 3 is required on the machine. | |Ubuntu 14.04 LTS, 16.04 LTS, and 18.04 LTS (x64) |Linux agents require access to an update repository. |
azure-cache-for-redis Cache Dotnet Core Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-dotnet-core-quickstart.md
Open your *Redistest.csproj* file. Add a `DotNetCliToolReference` element to inc
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType>
- <TargetFramework>netcoreapp2.0</TargetFramework>
+ <TargetFramework>net5.0</TargetFramework>
<UserSecretsId>Redistest</UserSecretsId> </PropertyGroup> <ItemGroup>
dotnet restore
In your command window, execute the following command to store a new secret named *CacheConnection*, after replacing the placeholders (including angle brackets) for your cache name and primary access key: ```
-dotnet user-secrets set CacheConnection "<cache name>.redis.cache.windows.net,abortConnect=false,ssl=true,password=<primary-access-key>"
+dotnet user-secrets set CacheConnection "<cache name>.redis.cache.windows.net,abortConnect=false,ssl=true,allowAdmin=true,password=<primary-access-key>"
``` Add the following `using` statement to *Program.cs*:
The connection to the Azure Cache for Redis is managed by the `ConnectionMultipl
In *Program.cs*, add the following members to the `Program` class of your console application: ```csharp
-private static Lazy<ConnectionMultiplexer> lazyConnection = new Lazy<ConnectionMultiplexer>(() =>
-{
- string cacheConnection = Configuration[SecretName];
- return ConnectionMultiplexer.Connect(cacheConnection);
-});
+private static Lazy<ConnectionMultiplexer> lazyConnection = CreateConnection();
public static ConnectionMultiplexer Connection {
public static ConnectionMultiplexer Connection
return lazyConnection.Value; } }+
+private static Lazy<ConnectionMultiplexer> CreateConnection()
+{
+ return new Lazy<ConnectionMultiplexer>(() =>
+ {
+ string cacheConnection = Configuration[SecretName];
+ return ConnectionMultiplexer.Connect(cacheConnection);
+ });
+}
``` This approach to sharing a `ConnectionMultiplexer` instance in your application uses a static property that returns a connected instance. The code provides a thread-safe way to initialize only a single connected `ConnectionMultiplexer` instance. `abortConnect` is set to false, which means that the call succeeds even if a connection to the Azure Cache for Redis is not established. One key feature of `ConnectionMultiplexer` is that it automatically restores connectivity to the cache once the network issue or other causes are resolved. The value of the *CacheConnection* secret is accessed using the Secret Manager configuration provider and used as the password parameter.
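For orientation, a minimal usage sketch of the shared connection follows, assuming the `Program` members above are in place (the key name is illustrative):

```csharp
// Any caller can reach the cache through the shared, lazily created
// multiplexer; initialization runs exactly once even under concurrency.
IDatabase db = Connection.GetDatabase();
db.StringSet("sketch:key", "value");           // SET sketch:key value
Console.WriteLine(db.StringGet("sketch:key")); // GET sketch:key
```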
+## Handle RedisConnectionException and SocketException by reconnecting
+
+A recommended best practice when calling methods on `ConnectionMultiplexer` is to attempt to resolve `RedisConnectionException` and `SocketException` exceptions automatically by closing and reestablishing the connection.
+
+Add the following `using` statements to *Program.cs*:
+
+```csharp
+using System.Net.Sockets;
+using System.Threading;
+```
+
+In *Program.cs*, add the following members to the `Program` class:
+
+```csharp
+private static long lastReconnectTicks = DateTimeOffset.MinValue.UtcTicks;
+private static DateTimeOffset firstErrorTime = DateTimeOffset.MinValue;
+private static DateTimeOffset previousErrorTime = DateTimeOffset.MinValue;
+
+private static readonly object reconnectLock = new object();
+
+// In general, let StackExchange.Redis handle most reconnects,
+// so limit the frequency of how often ForceReconnect() will
+// actually reconnect.
+public static TimeSpan ReconnectMinFrequency => TimeSpan.FromSeconds(60);
+
+// If errors continue for longer than the below threshold, then the
+// multiplexer seems to not be reconnecting, so ForceReconnect() will
+// re-create the multiplexer.
+public static TimeSpan ReconnectErrorThreshold => TimeSpan.FromSeconds(30);
+
+public static int RetryMaxAttempts => 5;
+
+private static void CloseConnection(Lazy<ConnectionMultiplexer> oldConnection)
+{
+ if (oldConnection == null)
+ return;
+
+ try
+ {
+ oldConnection.Value.Close();
+ }
+ catch (Exception)
+ {
+ // Example error condition: if accessing oldConnection.Value causes a connection attempt and that fails.
+ }
+}
+
+/// <summary>
+/// Force a new ConnectionMultiplexer to be created.
+/// NOTES:
+/// 1. Users of the ConnectionMultiplexer MUST handle ObjectDisposedExceptions, which can now happen as a result of calling ForceReconnect().
+/// 2. Don't call ForceReconnect for Timeouts, just for RedisConnectionExceptions or SocketExceptions.
+/// 3. Call this method every time you see a connection exception. The code will:
+/// a. wait to reconnect for at least the "ReconnectErrorThreshold" time of repeated errors before actually reconnecting
+/// b. not reconnect more frequently than configured in "ReconnectMinFrequency"
+/// </summary>
+public static void ForceReconnect()
+{
+ var utcNow = DateTimeOffset.UtcNow;
+ long previousTicks = Interlocked.Read(ref lastReconnectTicks);
+ var previousReconnectTime = new DateTimeOffset(previousTicks, TimeSpan.Zero);
+ TimeSpan elapsedSinceLastReconnect = utcNow - previousReconnectTime;
+
+ // If multiple threads call ForceReconnect at the same time, we only want to honor one of them.
+ if (elapsedSinceLastReconnect < ReconnectMinFrequency)
+ return;
+
+ lock (reconnectLock)
+ {
+ utcNow = DateTimeOffset.UtcNow;
+ elapsedSinceLastReconnect = utcNow - previousReconnectTime;
+
+ if (firstErrorTime == DateTimeOffset.MinValue)
+ {
+ // We haven't seen an error since last reconnect, so set initial values.
+ firstErrorTime = utcNow;
+ previousErrorTime = utcNow;
+ return;
+ }
+
+ if (elapsedSinceLastReconnect < ReconnectMinFrequency)
+ return; // Some other thread made it through the check and the lock, so nothing to do.
+
+ TimeSpan elapsedSinceFirstError = utcNow - firstErrorTime;
+ TimeSpan elapsedSinceMostRecentError = utcNow - previousErrorTime;
+
+ bool shouldReconnect =
+ elapsedSinceFirstError >= ReconnectErrorThreshold // Make sure we gave the multiplexer enough time to reconnect on its own if it could.
+ && elapsedSinceMostRecentError <= ReconnectErrorThreshold; // Make sure we aren't working on stale data (e.g. if there was a gap in errors, don't reconnect yet).
+
+ // Update the previousErrorTime timestamp to be now (e.g. this reconnect request).
+ previousErrorTime = utcNow;
+
+ if (!shouldReconnect)
+ return;
+
+ firstErrorTime = DateTimeOffset.MinValue;
+ previousErrorTime = DateTimeOffset.MinValue;
+
+ Lazy<ConnectionMultiplexer> oldConnection = lazyConnection;
+ CloseConnection(oldConnection);
+ lazyConnection = CreateConnection();
+ Interlocked.Exchange(ref lastReconnectTicks, utcNow.UtcTicks);
+ }
+}
+
+// In real applications, consider using a framework such as
+// Polly to make it easier to customize the retry approach.
+private static T BasicRetry<T>(Func<T> func)
+{
+ int reconnectRetry = 0;
+ int disposedRetry = 0;
+
+ while (true)
+ {
+ try
+ {
+ return func();
+ }
+ catch (Exception ex) when (ex is RedisConnectionException || ex is SocketException)
+ {
+ reconnectRetry++;
+ if (reconnectRetry > RetryMaxAttempts)
+ throw;
+ ForceReconnect();
+ }
+ catch (ObjectDisposedException)
+ {
+ disposedRetry++;
+ if (disposedRetry > RetryMaxAttempts)
+ throw;
+ }
+ }
+}
+
+public static IDatabase GetDatabase()
+{
+ return BasicRetry(() => Connection.GetDatabase());
+}
+
+public static System.Net.EndPoint[] GetEndPoints()
+{
+ return BasicRetry(() => Connection.GetEndPoints());
+}
+
+public static IServer GetServer(string host, int port)
+{
+ return BasicRetry(() => Connection.GetServer(host, port));
+}
+```
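As a hedged usage note: `GetDatabase()` already retries `RedisConnectionException`, `SocketException`, and `ObjectDisposedException` through `BasicRetry`, but per the notes on `ForceReconnect()`, callers should still be prepared for `ObjectDisposedException` on an `IDatabase` reference obtained before a reconnect. A hypothetical helper illustrating the pattern:

```csharp
// Hypothetical caller sketch: retry once on ObjectDisposedException,
// which can surface after ForceReconnect() re-creates the multiplexer.
static void SetWithRetry(string key, string value)
{
    try
    {
        GetDatabase().StringSet(key, value);
    }
    catch (ObjectDisposedException)
    {
        GetDatabase().StringSet(key, value); // fresh IDatabase after reconnect
    }
}
```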
+ ## Executing cache commands In *Program.cs*, add the following code for the `Main` procedure of the `Program` class for your console application:
static void Main(string[] args)
{ InitializeConfiguration();
- // Connection refers to a property that returns a ConnectionMultiplexer
- // as shown in the previous example.
- IDatabase cache = lazyConnection.Value.GetDatabase();
+ IDatabase cache = GetDatabase();
// Perform cache operations using the cache object...
static void Main(string[] args)
Console.WriteLine("Cache response : " + cache.StringGet("Message").ToString()); // Get the client list, useful to see if connection list is growing...
+ // Note that this requires allowAdmin=true in the connection string
cacheCommand = "CLIENT LIST"; Console.WriteLine("\nCache command : " + cacheCommand);
- Console.WriteLine("Cache response : \n" + cache.Execute("CLIENT", "LIST").ToString().Replace("id=", "id="));
+ var endpoint = (System.Net.DnsEndPoint)GetEndPoints()[0];
+ IServer server = GetServer(endpoint.Host, endpoint.Port);
+ ClientInfo[] clients = server.ClientList();
+
+ Console.WriteLine("Cache response :");
+ foreach (ClientInfo client in clients)
+ {
+ Console.WriteLine(client.Raw);
+ }
- lazyConnection.Value.Dispose();
+ CloseConnection(lazyConnection);
} ```
class Employee
public string Name { get; set; } public int Age { get; set; }
- public Employee(string EmployeeId, string Name, int Age)
+ public Employee(string employeeId, string name, int age)
{
- this.Id = EmployeeId;
- this.Name = Name;
- this.Age = Age;
+ Id = employeeId;
+ Name = name;
+ Age = age;
} } ```
-At the bottom of `Main()` procedure in *Program.cs*, and before the call to `Dispose()`, add the following lines of code to cache and retrieve a serialized .NET object:
+At the bottom of `Main()` procedure in *Program.cs*, and before the call to `CloseConnection()`, add the following lines of code to cache and retrieve a serialized .NET object:
```csharp // Store .NET object to cache
azure-cache-for-redis Cache Dotnet How To Use Azure Redis Cache https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-dotnet-how-to-use-azure-redis-cache.md
Replace `<access-key>` with the primary key for your cache.
In Visual Studio, click **File** > **New** > **Project**.
-Select **Console App (.NET Framework)**, and **Next** to configure your app. Type a **Project name** and click **Create** to create a new console application.
+Select **Console App (.NET Framework)**, and **Next** to configure your app. Type a **Project name**, verify that **.NET Framework 4.6.1** or higher is selected, and then click **Create** to create a new console application.
<a name="configure-the-cache-clients"></a>
In Visual Studio, open your *App.config* file and update it to include an `appSe
<?xml version="1.0" encoding="utf-8" ?> <configuration> <startup>
- <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.7.1" />
+ <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.7.2" />
</startup> <appSettings file="C:\AppSecrets\CacheSecrets.config"></appSettings>
Never store credentials in source code. To keep this sample simple, I'm only u
In *Program.cs*, add the following members to the `Program` class of your console application: ```csharp
-private static Lazy<ConnectionMultiplexer> lazyConnection = new Lazy<ConnectionMultiplexer>(() =>
-{
- string cacheConnection = ConfigurationManager.AppSettings["CacheConnection"].ToString();
- return ConnectionMultiplexer.Connect(cacheConnection);
-});
+private static Lazy<ConnectionMultiplexer> lazyConnection = CreateConnection();
public static ConnectionMultiplexer Connection {
public static ConnectionMultiplexer Connection
return lazyConnection.Value; } }+
+private static Lazy<ConnectionMultiplexer> CreateConnection()
+{
+ return new Lazy<ConnectionMultiplexer>(() =>
+ {
+ string cacheConnection = ConfigurationManager.AppSettings["CacheConnection"].ToString();
+ return ConnectionMultiplexer.Connect(cacheConnection);
+ });
+}
```
This approach to sharing a `ConnectionMultiplexer` instance in your application
The value of the *CacheConnection* appSetting is used to reference the cache connection string from the Azure portal as the password parameter.
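As a quick sanity check (a sketch, not part of the article's own steps), you can verify connectivity with a `PING` once the members above compile:

```csharp
// Minimal connectivity sketch using the shared multiplexer.
IDatabase cache = Connection.GetDatabase();
TimeSpan latency = cache.Ping(); // issues PING and measures round-trip time
Console.WriteLine("PING round-trip: " + latency.TotalMilliseconds + " ms");
```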
+## Handle RedisConnectionException and SocketException by reconnecting
+
+A recommended best practice when calling methods on `ConnectionMultiplexer` is to attempt to resolve `RedisConnectionException` and `SocketException` exceptions automatically by closing and reestablishing the connection.
+
+Add the following `using` statements to *Program.cs*:
+
+```csharp
+using System.Net.Sockets;
+using System.Threading;
+```
+
+In *Program.cs*, add the following members to the `Program` class:
+
+```csharp
+private static long lastReconnectTicks = DateTimeOffset.MinValue.UtcTicks;
+private static DateTimeOffset firstErrorTime = DateTimeOffset.MinValue;
+private static DateTimeOffset previousErrorTime = DateTimeOffset.MinValue;
+
+private static readonly object reconnectLock = new object();
+
+// In general, let StackExchange.Redis handle most reconnects,
+// so limit the frequency of how often ForceReconnect() will
+// actually reconnect.
+public static TimeSpan ReconnectMinFrequency => TimeSpan.FromSeconds(60);
+
+// If errors continue for longer than the below threshold, then the
+// multiplexer seems to not be reconnecting, so ForceReconnect() will
+// re-create the multiplexer.
+public static TimeSpan ReconnectErrorThreshold => TimeSpan.FromSeconds(30);
+
+public static int RetryMaxAttempts => 5;
+
+private static void CloseConnection(Lazy<ConnectionMultiplexer> oldConnection)
+{
+ if (oldConnection == null)
+ return;
+
+ try
+ {
+ oldConnection.Value.Close();
+ }
+ catch (Exception)
+ {
+ // Example error condition: if accessing oldConnection.Value causes a connection attempt and that fails.
+ }
+}
+
+/// <summary>
+/// Force a new ConnectionMultiplexer to be created.
+/// NOTES:
+/// 1. Users of the ConnectionMultiplexer MUST handle ObjectDisposedExceptions, which can now happen as a result of calling ForceReconnect().
+/// 2. Don't call ForceReconnect for Timeouts, just for RedisConnectionExceptions or SocketExceptions.
+/// 3. Call this method every time you see a connection exception. The code will:
+/// a. wait to reconnect for at least the "ReconnectErrorThreshold" time of repeated errors before actually reconnecting
+/// b. not reconnect more frequently than configured in "ReconnectMinFrequency"
+/// </summary>
+public static void ForceReconnect()
+{
+ var utcNow = DateTimeOffset.UtcNow;
+ long previousTicks = Interlocked.Read(ref lastReconnectTicks);
+ var previousReconnectTime = new DateTimeOffset(previousTicks, TimeSpan.Zero);
+ TimeSpan elapsedSinceLastReconnect = utcNow - previousReconnectTime;
+
+ // If multiple threads call ForceReconnect at the same time, we only want to honor one of them.
+ if (elapsedSinceLastReconnect < ReconnectMinFrequency)
+ return;
+
+ lock (reconnectLock)
+ {
+ utcNow = DateTimeOffset.UtcNow;
+ elapsedSinceLastReconnect = utcNow - previousReconnectTime;
+
+ if (firstErrorTime == DateTimeOffset.MinValue)
+ {
+ // We haven't seen an error since last reconnect, so set initial values.
+ firstErrorTime = utcNow;
+ previousErrorTime = utcNow;
+ return;
+ }
+
+ if (elapsedSinceLastReconnect < ReconnectMinFrequency)
+ return; // Some other thread made it through the check and the lock, so nothing to do.
+
+ TimeSpan elapsedSinceFirstError = utcNow - firstErrorTime;
+ TimeSpan elapsedSinceMostRecentError = utcNow - previousErrorTime;
+
+ bool shouldReconnect =
+ elapsedSinceFirstError >= ReconnectErrorThreshold // Make sure we gave the multiplexer enough time to reconnect on its own if it could.
+ && elapsedSinceMostRecentError <= ReconnectErrorThreshold; // Make sure we aren't working on stale data (e.g. if there was a gap in errors, don't reconnect yet).
+
+ // Update the previousErrorTime timestamp to be now (e.g. this reconnect request).
+ previousErrorTime = utcNow;
+
+ if (!shouldReconnect)
+ return;
+
+ firstErrorTime = DateTimeOffset.MinValue;
+ previousErrorTime = DateTimeOffset.MinValue;
+
+ Lazy<ConnectionMultiplexer> oldConnection = lazyConnection;
+ CloseConnection(oldConnection);
+ lazyConnection = CreateConnection();
+ Interlocked.Exchange(ref lastReconnectTicks, utcNow.UtcTicks);
+ }
+}
+
+// In real applications, consider using a framework such as
+// Polly to make it easier to customize the retry approach.
+private static T BasicRetry<T>(Func<T> func)
+{
+ int reconnectRetry = 0;
+ int disposedRetry = 0;
+
+ while (true)
+ {
+ try
+ {
+ return func();
+ }
+ catch (Exception ex) when (ex is RedisConnectionException || ex is SocketException)
+ {
+ reconnectRetry++;
+ if (reconnectRetry > RetryMaxAttempts)
+ throw;
+ ForceReconnect();
+ }
+ catch (ObjectDisposedException)
+ {
+ disposedRetry++;
+ if (disposedRetry > RetryMaxAttempts)
+ throw;
+ }
+ }
+}
+
+public static IDatabase GetDatabase()
+{
+ return BasicRetry(() => Connection.GetDatabase());
+}
+
+public static System.Net.EndPoint[] GetEndPoints()
+{
+ return BasicRetry(() => Connection.GetEndPoints());
+}
+
+public static IServer GetServer(string host, int port)
+{
+ return BasicRetry(() => Connection.GetServer(host, port));
+}
+```
+ ## Executing cache commands Add the following code for the `Main` procedure of the `Program` class for your console application:
Add the following code for the `Main` procedure of the `Program` class for your
```csharp static void Main(string[] args) {
- // Connection refers to a property that returns a ConnectionMultiplexer
- // as shown in the previous example.
- IDatabase cache = Connection.GetDatabase();
+ IDatabase cache = GetDatabase();
// Perform cache operations using the cache object...
static void Main(string[] args)
Console.WriteLine("Cache response : " + cache.StringGet("Message").ToString()); // Get the client list, useful to see if connection list is growing...
- // Note that this requires the allowAdmin=true
+ // Note that this requires allowAdmin=true in the connection string
cacheCommand = "CLIENT LIST"; Console.WriteLine("\nCache command : " + cacheCommand);
- var endpoint = (System.Net.DnsEndPoint) Connection.GetEndPoints()[0];
- var server = Connection.GetServer(endpoint.Host, endpoint.Port);
+ var endpoint = (System.Net.DnsEndPoint)GetEndPoints()[0];
+ IServer server = GetServer(endpoint.Host, endpoint.Port);
+ ClientInfo[] clients = server.ClientList();
- var clients = server.ClientList();
Console.WriteLine("Cache response :");
- foreach (var client in clients)
+ foreach (ClientInfo client in clients)
{ Console.WriteLine(client.Raw); }
- lazyConnection.Value.Dispose();
+ CloseConnection(lazyConnection);
} ```
class Employee
public string Name { get; set; } public int Age { get; set; }
- public Employee(string EmployeeId, string Name, int Age)
+ public Employee(string employeeId, string name, int age)
{
- this.Id = EmployeeId;
- this.Name = Name;
- this.Age = Age;
+ Id = employeeId;
+ Name = name;
+ Age = age;
} } ```
-At the bottom of `Main()` procedure in *Program.cs*, and before the call to `Dispose()`, add the following lines of code to cache and retrieve a serialized .NET object:
+At the bottom of `Main()` procedure in *Program.cs*, and before the call to `CloseConnection()`, add the following lines of code to cache and retrieve a serialized .NET object:
```csharp // Store .NET object to cache
azure-cache-for-redis Cache Web App Howto https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-web-app-howto.md
If you want to skip straight to the code, see the [ASP.NET quickstart](https://g
## Create the Visual Studio project
-1. Open Visual Studio, and then and select **File** >**New** > **Project**.
+1. Open Visual Studio, and then select **File** > **New** > **Project**.
-2. In the **New Project** dialog box, take the following steps:
+2. In the **Create a new project** dialog box, take the following steps:
![Create project](./media/cache-web-app-howto/cache-create-project.png)
- a. In the **Templates** list, expand the **Visual C#** node.
+ a. In the search box, enter _C# ASP.NET Web Application_.
- b. Select **Cloud**.
+ b. Select **ASP.NET Web Application (.NET Framework)**.
- c. Select **ASP.NET Web Application**.
+ c. Select **Next**.
- d. Verify that **.NET Framework 4.5.2** or higher is selected.
+3. In the **Project name** box, give the project a name. For this example, we used **ContosoTeamStats**.
- e. In the **Name** box, give the project a name. For this example, we used **ContosoTeamStats**.
+4. Verify that **.NET Framework 4.6.1** or higher is selected.
- f. Select **OK**.
+5. Select **Create**.
-3. Select **MVC** as the project type.
+6. Select **MVC** as the project type.
-4. Make sure that **No Authentication** is specified for the **Authentication** settings. Depending on your version of Visual Studio, the default **Authentication** setting might be something else. To change it, select **Change Authentication** and then **No Authentication**.
+7. Make sure that **No Authentication** is specified for the **Authentication** settings. Depending on your version of Visual Studio, the default **Authentication** setting might be something else. To change it, select **Change Authentication** and then **No Authentication**.
-5. Select **OK** to create the project.
+8. Select **Create** to create the project.
## Create a cache
The ASP.NET runtime merges the contents of the external file with the markup in
1. In **Solution Explorer**, expand the **Controllers** folder, and then open the *HomeController.cs* file.
-2. Add the following two `using` statements at the top of the file to support the cache client and app settings.
+2. Add the following `using` statements at the top of the file.
```csharp
- using System.Configuration;
using StackExchange.Redis;
+ using System.Configuration;
+ using System.Net.Sockets;
+ using System.Text;
+ using System.Threading;
```
-3. Add the following method to the `HomeController` class to support a new `RedisCache` action that runs some commands against the new cache.
+3. Add the following members to the `HomeController` class to support a new `RedisCache` action that runs some commands against the new cache.
```csharp public ActionResult RedisCache() { ViewBag.Message = "A simple example with Azure Cache for Redis on ASP.NET.";
-
- IDatabase cache = Connection.GetDatabase();
+
+ IDatabase cache = GetDatabase();
// Perform cache operations using the cache object...
The ASP.NET runtime merges the contents of the external file with the markup in
ViewBag.command4Result = cache.StringGet("Message").ToString(); // Get the client list, useful to see if connection list is growing...
+ // Note that this requires allowAdmin=true in the connection string
ViewBag.command5 = "CLIENT LIST"; StringBuilder sb = new StringBuilder();-
- var endpoint = (System.Net.DnsEndPoint)Connection.GetEndPoints()[0];
- var server = Connection.GetServer(endpoint.Host, endpoint.Port);
- var clients = server.ClientList();
+ var endpoint = (System.Net.DnsEndPoint)GetEndPoints()[0];
+ IServer server = GetServer(endpoint.Host, endpoint.Port);
+ ClientInfo[] clients = server.ClientList();
sb.AppendLine("Cache response :");
- foreach (var client in clients)
+ foreach (ClientInfo client in clients)
    {
        sb.AppendLine(client.Raw);
    }
The ASP.NET runtime merges the contents of the external file with the markup in
        return View();
    }
-
- private static Lazy<ConnectionMultiplexer> lazyConnection = new Lazy<ConnectionMultiplexer>(() =>
- {
- string cacheConnection = ConfigurationManager.AppSettings["CacheConnection"].ToString();
- return ConnectionMultiplexer.Connect(cacheConnection);
- });
+
+ private static long lastReconnectTicks = DateTimeOffset.MinValue.UtcTicks;
+ private static DateTimeOffset firstErrorTime = DateTimeOffset.MinValue;
+ private static DateTimeOffset previousErrorTime = DateTimeOffset.MinValue;
+
+ private static readonly object reconnectLock = new object();
+
+ // In general, let StackExchange.Redis handle most reconnects,
+ // so limit the frequency of how often ForceReconnect() will
+ // actually reconnect.
+ public static TimeSpan ReconnectMinFrequency => TimeSpan.FromSeconds(60);
+
+ // If errors continue for longer than the below threshold, then the
+ // multiplexer seems to not be reconnecting, so ForceReconnect() will
+ // re-create the multiplexer.
+ public static TimeSpan ReconnectErrorThreshold => TimeSpan.FromSeconds(30);
+
+ public static int RetryMaxAttempts => 5;
+
+ private static Lazy<ConnectionMultiplexer> lazyConnection = CreateConnection();
public static ConnectionMultiplexer Connection
{
The ASP.NET runtime merges the contents of the external file with the markup in
    }
}
+ private static Lazy<ConnectionMultiplexer> CreateConnection()
+ {
+ return new Lazy<ConnectionMultiplexer>(() =>
+ {
+ string cacheConnection = ConfigurationManager.AppSettings["CacheConnection"].ToString();
+ return ConnectionMultiplexer.Connect(cacheConnection);
+ });
+ }
+
+ private static void CloseConnection(Lazy<ConnectionMultiplexer> oldConnection)
+ {
+ if (oldConnection == null)
+ return;
+
+ try
+ {
+ oldConnection.Value.Close();
+ }
+ catch (Exception)
+ {
+ // Example error condition: if accessing oldConnection.Value causes a connection attempt and that fails.
+ }
+ }
+
+ /// <summary>
+ /// Force a new ConnectionMultiplexer to be created.
+ /// NOTES:
+ /// 1. Users of the ConnectionMultiplexer MUST handle ObjectDisposedExceptions, which can now happen as a result of calling ForceReconnect().
+ /// 2. Don't call ForceReconnect for Timeouts, just for RedisConnectionExceptions or SocketExceptions.
+ /// 3. Call this method every time you see a connection exception. The code will:
+ /// a. wait to reconnect for at least the "ReconnectErrorThreshold" time of repeated errors before actually reconnecting
+ /// b. not reconnect more frequently than configured in "ReconnectMinFrequency"
+ /// </summary>
+ public static void ForceReconnect()
+ {
+ var utcNow = DateTimeOffset.UtcNow;
+ long previousTicks = Interlocked.Read(ref lastReconnectTicks);
+ var previousReconnectTime = new DateTimeOffset(previousTicks, TimeSpan.Zero);
+ TimeSpan elapsedSinceLastReconnect = utcNow - previousReconnectTime;
+
+ // If multiple threads call ForceReconnect at the same time, we only want to honor one of them.
+ if (elapsedSinceLastReconnect < ReconnectMinFrequency)
+ return;
+
+ lock (reconnectLock)
+ {
+ utcNow = DateTimeOffset.UtcNow;
+ elapsedSinceLastReconnect = utcNow - previousReconnectTime;
+
+ if (firstErrorTime == DateTimeOffset.MinValue)
+ {
+ // We haven't seen an error since last reconnect, so set initial values.
+ firstErrorTime = utcNow;
+ previousErrorTime = utcNow;
+ return;
+ }
+
+ if (elapsedSinceLastReconnect < ReconnectMinFrequency)
+ return; // Some other thread made it through the check and the lock, so nothing to do.
+
+ TimeSpan elapsedSinceFirstError = utcNow - firstErrorTime;
+ TimeSpan elapsedSinceMostRecentError = utcNow - previousErrorTime;
+
+ bool shouldReconnect =
+ elapsedSinceFirstError >= ReconnectErrorThreshold // Make sure we gave the multiplexer enough time to reconnect on its own if it could.
+ && elapsedSinceMostRecentError <= ReconnectErrorThreshold; // Make sure we aren't working on stale data (e.g. if there was a gap in errors, don't reconnect yet).
+
+ // Update the previousErrorTime timestamp to be now (e.g. this reconnect request).
+ previousErrorTime = utcNow;
+
+ if (!shouldReconnect)
+ return;
+
+ firstErrorTime = DateTimeOffset.MinValue;
+ previousErrorTime = DateTimeOffset.MinValue;
+
+ Lazy<ConnectionMultiplexer> oldConnection = lazyConnection;
+ CloseConnection(oldConnection);
+ lazyConnection = CreateConnection();
+ Interlocked.Exchange(ref lastReconnectTicks, utcNow.UtcTicks);
+ }
+ }
+
+ // In real applications, consider using a framework such as
+ // Polly to make it easier to customize the retry approach.
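+ // A rough Polly-based equivalent of the loop below (hypothetical sketch) might be:
+ //   Policy.Handle<RedisConnectionException>().Or<SocketException>()
+ //         .Retry(RetryMaxAttempts, (ex, i) => ForceReconnect());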
+ private static T BasicRetry<T>(Func<T> func)
+ {
+ int reconnectRetry = 0;
+ int disposedRetry = 0;
+
+ while (true)
+ {
+ try
+ {
+ return func();
+ }
+ catch (Exception ex) when (ex is RedisConnectionException || ex is SocketException)
+ {
+ reconnectRetry++;
+ if (reconnectRetry > RetryMaxAttempts)
+ throw;
+ ForceReconnect();
+ }
+ catch (ObjectDisposedException)
+ {
+ disposedRetry++;
+ if (disposedRetry > RetryMaxAttempts)
+ throw;
+ }
+ }
+ }
+
+ public static IDatabase GetDatabase()
+ {
+ return BasicRetry(() => Connection.GetDatabase());
+ }
+
+ public static System.Net.EndPoint[] GetEndPoints()
+ {
+ return BasicRetry(() => Connection.GetEndPoints());
+ }
+
+ public static IServer GetServer(string host, int port)
+ {
+ return BasicRetry(() => Connection.GetServer(host, port));
+ }
```

4. In **Solution Explorer**, expand the **Views** > **Shared** folder. Then open the *_Layout.cshtml* file.
azure-monitor Data Collection Rule Azure Monitor Agent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/agents/data-collection-rule-azure-monitor-agent.md
Since you're charged for any data collected in a Log Analytics workspace, you sh
To specify additional filters, you must use Custom configuration and specify an XPath that filters out the events you don't want. XPath entries are written in the form `LogName!XPathQuery`. For example, you may want to return only events from the Application event log with an event ID of 1035. The XPathQuery for these events would be `*[System[EventID=1035]]`. Since you want to retrieve the events from the Application event log, the XPath would be `Application!*[System[EventID=1035]]`.
+See [XPath 1.0 limitations](/windows/win32/wes/consuming-events#xpath-10-limitations) for a list of limitations in the XPath supported by Windows event log.
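For a rough cross-check from .NET rather than PowerShell, the same filter can also be run with the `System.Diagnostics.Eventing.Reader` API. This is only a sketch (the `XPathQueryCheck` class name is illustrative); note that the log name and the XPath are passed separately here rather than in the combined `LogName!XPathQuery` form used by data collection rules:

```csharp
using System;
using System.Diagnostics.Eventing.Reader;

class XPathQueryCheck
{
    static void Main()
    {
        // Same filter as above: Application event log, event ID 1035.
        var query = new EventLogQuery("Application", PathType.LogName, "*[System[EventID=1035]]");

        using (var reader = new EventLogReader(query))
        {
            // ReadEvent returns null when no more matching events are available.
            for (EventRecord record = reader.ReadEvent(); record != null; record = reader.ReadEvent())
            {
                Console.WriteLine($"{record.TimeCreated}: {record.FormatDescription()}");
            }
        }
    }
}
```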
+
> [!TIP]
> Use the PowerShell cmdlet `Get-WinEvent` with the `FilterXPath` parameter to test the validity of an XPathQuery. The following script shows an example.
>
You cannot create a data collection rule using a Resource Manager template, but
## Next steps

- Learn more about the [Azure Monitor Agent](azure-monitor-agent-overview.md).
-- Learn more about [data collection rules](data-collection-rule-overview.md).
+- Learn more about [data collection rules](data-collection-rule-overview.md).
azure-monitor Data Sources Windows Events https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/agents/data-sources-windows-events.md
As you type the name of an event log, Azure Monitor provides suggestions of comm
[![Configure Windows events](media/data-sources-windows-events/configure.png)](media/data-sources-windows-events/configure.png#lightbox)
+> [!IMPORTANT]
+> You can't configure collection of security events from the workspace. You must use [Azure Security Center](../../security-center/security-center-enable-data-collection.md) or [Azure Sentinel](../../sentinel/connect-windows-security-events.md) to collect security events.
+
+
> [!NOTE]
> Critical events from the Windows event log will have a severity of "Error" in Azure Monitor Logs.
The following table provides different examples of log queries that retrieve Win
## Next steps

* Configure Log Analytics to collect other [data sources](../agents/agent-data-sources.md) for analysis.
* Learn about [log queries](../logs/log-query-overview.md) to analyze the data collected from data sources and solutions.
-* Configure [collection of performance counters](data-sources-performance-counters.md) from your Windows agents.
+* Configure [collection of performance counters](data-sources-performance-counters.md) from your Windows agents.
azure-monitor Action Groups https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/action-groups.md
If you are not receiving Notifications on your *primary email*, then you can try
You may have a limited number of email actions in an Action Group. See the [rate limiting information](./alerts-rate-limiting.md) article.

### Function
-Calls an existing HTTP trigger endpoint in [Azure Functions](../../azure-functions/functions-get-started.md).
+Calls an existing HTTP trigger endpoint in [Azure Functions](../../azure-functions/functions-get-started.md). To handle a request, your endpoint must handle the HTTP POST verb.
You may have a limited number of Function actions in an Action Group.
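As a minimal sketch of such an endpoint, assuming the in-process C# programming model (the `ActionGroupHandler` function name and the simple payload logging are illustrative choices, not requirements):

```csharp
using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class ActionGroupFunction
{
    // Action groups deliver the notification payload with HTTP POST,
    // so the trigger is restricted to the "post" method.
    [FunctionName("ActionGroupHandler")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req,
        ILogger log)
    {
        string payload = await new StreamReader(req.Body).ReadToEndAsync();
        log.LogInformation("Action group notification received: {Payload}", payload);
        return new OkResult();
    }
}
```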
azure-monitor Change Analysis Visualizations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/change-analysis-visualizations.md
Search for Change Analysis in the search bar on Azure portal to launch the exper
![Screenshot of searching Change Analysis in Azure portal](./media/change-analysis/search-change-analysis.png)
-All resources under a selected subscription are displayed with changes from the past 24 hours. To optimize for the page load performance, the service is displaying 10 resources at a time. Select the next page to view more resources. We are working on removing this limitation.
+All resources under a selected subscription are displayed with changes from the past 24 hours. All changes are displayed with their old and new values to provide insights at a glance.
![Screenshot of Change Analysis blade in Azure portal](./media/change-analysis/change-analysis-standalone-blade.png)
-Clicking into a resource to view all its changes. If needed, drill down into a change to view json formatted change details and insights.
+Click into a change to view the full Resource Manager snippet and other properties.
![Screenshot of change details](./media/change-analysis/change-details.png)
The UI supports selecting multiple subscriptions to view resource changes. Use t
![Screenshot of subscription filter that supports selecting multiple subscriptions](./media/change-analysis/multiple-subscriptions-support.png)
-### Web App Diagnose and Solve Problems
-
-In Azure Monitor, Change Analysis is also built into the self-service **Diagnose and solve problems** experience. Access this experience from the **Overview** page of your App Service application.
-
-![Screenshot of the "Overview" button and the "Diagnose and solve problems" button](./media/change-analysis/change-analysis.png)
## Application Change Analysis in the Diagnose and solve problems tool
Application Change Analysis is a standalone detector in the Web App diagnose and
![Screenshot of the change diff view](./media/change-analysis/change-view.png)
+## Diagnose and Solve Problems tool
+Change Analysis is available as an insight card in the Diagnose and Solve Problems tool. If a resource experiences issues and changes were discovered in the past 72 hours, the insight card displays the number of changes. Clicking the **View change details** link leads to a filtered view of the Change Analysis standalone UI.
+
+![Screenshot of viewing change insight in Diagnose and Solve Problems tool.](./media/change-analysis/change-insight-diagnose-and-solve.png)
+
+
+
## Virtual Machine Diagnose and Solve Problems

Go to the Diagnose and Solve Problems tool for a Virtual Machine. Go to **Troubleshooting Tools**, browse down the page, and select **Analyze recent changes** to view changes on the Virtual Machine.
Users having [VM Insights](../vm/vminsights-overview.md) enabled can view what c
## Next steps

-- Learn how to [troubleshoot problems in Change Analysis](change-analysis-troubleshoot.md)
+- Learn how to [troubleshoot problems in Change Analysis](change-analysis-troubleshoot.md)
azure-monitor Change Analysis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/change-analysis.md
Change Analysis captures the deployment and configuration state of an applicatio
![Screenshot of the "Scan changes now" button](./media/change-analysis/scan-changes.png)
+Currently all text-based files under site root **wwwroot** with the following extensions are supported:
+- *.config
+- *.xml
+- *.json
+- *.gem
+- *.yml
+- *.txt
+- *.ini
+- *.env
+
### Dependency changes

Changes to resource dependencies can also cause issues in a resource. For example, if a web app calls into a Redis cache, the Redis cache SKU could affect the web app's performance. Another example: if port 22 is closed in a Virtual Machine's Network Security Group, the change will cause connectivity errors.
For web app in-guest changes, separate enablement is required for scanning code
## Cost

Application Change Analysis is a free service. It doesn't incur any billing cost to subscriptions that have it enabled, and it has no performance impact for scanning Azure resource property changes. When you enable Change Analysis for web app in-guest file changes (or enable the Diagnose and Solve Problems tool), there is negligible performance impact on the web app and no billing cost.
-## Visualizations for Application Change Analysis
-
-### Standalone UI
-
-In Azure Monitor, there is a standalone pane for Change Analysis to view all changes with insights into application dependencies and resources.
-
-Search for Change Analysis in the search bar on Azure portal to launch the experience.
-
-![Screenshot of searching Change Analysis in Azure portal](./media/change-analysis/search-change-analysis.png)
-
-All resources under a selected subscription are displayed with changes from the past 24 hours. To optimize for the page load performance the service is displaying 10 resources at a time. Click on next pages to view more resources. We are working on removing this limitation.
-
-![Screenshot of Change Analysis blade in Azure portal](./media/change-analysis/change-analysis-standalone-blade.png)
-
-Clicking into a resource to view all its changes. If needed, drill down into a change to view json formatted change details and insights.
-
-![Screenshot of change details](./media/change-analysis/change-details.png)
-
-For any feedback, use the send feedback button in the blade or email changeanalysisteam@microsoft.com.
-
-![Screenshot of feedback button in Change Analysis blade](./media/change-analysis/change-analysis-feedback.png)
-
-#### Multiple subscription support
-The UI supports selecting multiple subscriptions to view resource changes. Use the subscription filter:
-
-![Screenshot of subscription filter that supports selecting multiple subscriptions](./media/change-analysis/multiple-subscriptions-support.png)
-
-### Web App Diagnose and Solve Problems
-
-In Azure Monitor, Change Analysis is also built into the self-service **Diagnose and solve problems** experience. Access this experience from the **Overview** page of your App Service application.
-
-![Screenshot of the "Overview" button and the "Diagnose and solve problems" button](./media/change-analysis/change-analysis.png)
-
-### Application Change Analysis in the Diagnose and solve problems tool
-
-Application Change Analysis is a standalone detector in the Web App diagnose and solve problems tools. It is also aggregated in **Application Crashes** and **Web App Down detectors**. As you enter the Diagnose and Solve Problems tool, the **Microsoft.ChangeAnalysis** resource provider will automatically be registered. Follow these instructions to enable web app in-guest change tracking.
-
-1. Select **Availability and Performance**.
-
- ![Screenshot of the "Availability and Performance" troubleshooting options](./media/change-analysis/availability-and-performance.png)
-
-2. Select **Application Changes**. The feature is also available in **Application Crashes**.
-
- ![Screenshot of the "Application Crashes" button](./media/change-analysis/application-changes.png)
-
-3. The link leads to Application Change Aalysis UI scoped to the web app. If web app in-guest change tracking is not enabled, follow the banner to get file and app settings changes.
-
- ![Screenshot of "Application Crashes" options](./media/change-analysis/enable-changeanalysis.png)
-
-4. Turn on **Change Analysis** and select **Save**. The tool displays all web apps under an App Service plan. You can use the plan level switch to turn on Change Analysis for all web apps under a plan.
-
- ![Screenshot of the "Enable Change Analysis" user interface](./media/change-analysis/change-analysis-on.png)
-
-5. Change data is also available in select **Web App Down** and **Application Crashes** detectors. You'll see a graph that summarizes the type of changes over time along with details on those changes. By default, changes in the past 24 hours are displayed to help with immediate problems.
-
- ![Screenshot of the change diff view](./media/change-analysis/change-view.png)
---
-### Virtual Machine Diagnose and Solve Problems
-
-Go to Diagnose and Solve Problems tool for a Virtual Machine. Go to **Troubleshooting Tools**, browse down the page and select **Analyze recent changes** to view changes on the Virtual Machine.
-
-![Screenshot of the VM Diagnose and Solve Problems](./media/change-analysis/vm-dnsp-troubleshootingtools.png)
-
-![Change analyzer in troubleshooting tools](./media/change-analysis/analyze-recent-changes.png)
-
-### Activity Log Change History
-The [View change history](../essentials/activity-log.md#view-change-history) feature in Activity Log calls Application Change Analysis service backend to get changes associated with an operation. **Change history** used to call [Azure Resource Graph](../../governance/resource-graph/overview.md) directly, but swapped the backend to call Application Change Analysis so changes returned will include resource level changes from [Azure Resource Graph](../../governance/resource-graph/overview.md), resource properties from [Azure Resource Manager](../../azure-resource-manager/management/overview.md), and in-guest changes from PaaS services such as App Services web app.
-In order for the Application Change Analysis service to be able to scan for changes in users' subscriptions, a resource provider needs to be registered. The first time entering **Change History** tab, the tool will automatically start to register **Microsoft.ChangeAnalysis** resource provider. After registered, changes from **Azure Resource Graph** will be available immediately and cover the past 14 days. Changes from other sources will be available after ~4 hours after subscription is onboard.
-
-![Activity Log change history integration](./media/change-analysis/activity-log-change-history.png)
-
-### VM Insights integration
-Users having [VM Insights](../vm/vminsights-overview.md) enabled can view what changed in their virtual machines that might caused any spikes in a metrics chart such as CPU or Memory and wonder what caused it. Change data is integrated in the VM Insights side navigation bar. User can view if any changes happened in the VM and click **Investigate Changes** to view change details in Application Change Analysis standalone UI.
-
-[![VM insights integration](./media/change-analysis/vm-insights.png)](./media/change-analysis/vm-insights.png#lightbox)
-
-## Enable Change Analysis at scale
+## Enable Change Analysis at scale for Web App in-guest file and environment variable changes
If your subscription includes numerous web apps, enabling the service at the level of the web app would be inefficient. Run the following script to enable all web apps in your subscription.
azure-monitor Website Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/website-monitoring.md
Title: 'Quickstart: Monitor websites with Azure Monitor Application Insights' description: In this quickstart, learn how to set up client/browser-side website monitoring with Azure Monitor Application Insights. Previously updated : 08/19/2020 Last updated : 03/19/2021
Application Insights can gather telemetry data from any internet-connected appli
## Configure Application Insights SDK
-1. Select **Overview** > **Essentials**, and then copy your application's **Instrumentation Key**.
+1. Select **Overview** and then copy your application's **Connection String**. For this example, we only need the instrumentation key part of the connection string, `InstrumentationKey=00000000-0000-0000-0000-000000000000;`.
- ![New Application Insights resource form](media/website-monitoring/instrumentation-key-001.png)
+ :::image type="content" source="media/website-monitoring/keys.png" alt-text="Screenshot of overview page with instrumentation key and connection string.":::
1. Add the following script to your ``hello_world.html`` file before the closing ``</head>`` tag:
Application Insights can gather telemetry data from any internet-connected appli
crossOrigin: "anonymous", // When supplied this will add the provided value as the cross origin attribute on the script tag // onInit: null, // Once the application insights instance has loaded and initialized this callback function will be called with 1 argument -- the sdk instance (DO NOT ADD anything to the sdk.queue -- As they won't get called) cfg: { // Application Insights Configuration
- instrumentationKey: "YOUR_INSTRUMENTATION_KEY_GOES_HERE"
+ connectionString:"InstrumentationKey=YOUR_INSTRUMENTATION_KEY_GOES_HERE;"
        /* ...Other Configuration Options... */
    }});
</script>
Application Insights can gather telemetry data from any internet-connected appli
> [!NOTE] > The current Snippet (listed above) is version "5", the version is encoded in the snippet as sv:"#" and the [current version and configuration details are available on GitHub](https://go.microsoft.com/fwlink/?linkid=2156318).
-
+
1. Edit ``hello_world.html`` and add your instrumentation key.

1. Open ``hello_world.html`` in a local browser session. This action creates a single page view. You can refresh your browser to generate multiple test-page views.
Application Insights can gather telemetry data from any internet-connected appli
The four default charts on the overview page are scoped to server-side application data. Because we're instrumenting the client/browser-side interactions with the JavaScript SDK, this particular view doesn't apply unless we also have a server-side SDK installed.
-1. Select **Analytics** ![Application Map icon](media/website-monitoring/006.png). This action opens **Analytics**, which provides a rich query language for analyzing all data collected by Application Insights. To view data related to the client-side browser requests, run the following query:
+1. Select **Logs**. This action opens **Logs**, which provides a rich query language for analyzing all data collected by Application Insights. To view data related to the client-side browser requests, run the following query:
    ```kusto
    // average pageView duration by name
Application Insights can gather telemetry data from any internet-connected appli
    | render timechart
    ```
- ![Analytics graph of user requests over a period of time](./media/website-monitoring/analytics-query.png)
-
-1. Go back to the **Overview** page. Under the **Investigate** header, select **Browser**, and then select **Performance**. Metrics related to the performance of your website appear. There's a corresponding view for analyzing failures and exceptions in your website. You can select **Samples** to access the [end-to-end transaction details](./transaction-diagnostics.md).
+ :::image type="content" source="media/website-monitoring/log-query.png" alt-text="Screenshot of log analytics graph of user requests over a period of time.":::
- ![Server metrics graph](./media/website-monitoring/browser-performance.png)
+1. Go back to the **Overview** page. Under the **Investigate** header, select **Performance**, and then select the **Browser** tab. Metrics related to the performance of your website appear. There's a corresponding view for analyzing failures and exceptions in your website. You can select **Samples** to access the [end-to-end transaction details](./transaction-diagnostics.md).
-1. On the main Application Insights menu, under the **Usage** header, select [**Users**](./usage-segmentation.md) to begin exploring the [user behavior analytics tools](./usage-overview.md). Because we're testing from a single machine, we'll only see data for one user. For a live website, the distribution of users might look like this:
+ :::image type="content" source="media/website-monitoring/performance.png" alt-text="Screenshot of performance tab with browser metrics graph.":::
- ![User graph](./media/website-monitoring/usage-users.png)
+1. On the main Application Insights menu, under the **Usage** header, select [**Users**](./usage-segmentation.md) to begin exploring the [user behavior analytics tools](./usage-overview.md). Because we're testing from a single machine, we'll only see data for one user.
1. For a more complex website with multiple pages, you can use the [**User Flows**](./usage-flows.md) tool to track the pathway that visitors take through the various parts of your website.
- ![User Flows visualization](./media/website-monitoring/user-flows.png)
-
To learn more advanced configurations for monitoring websites, see the [JavaScript SDK API reference](./javascript.md).

## Clean up resources
azure-monitor Activity Log https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/essentials/activity-log.md
You can access the Activity log from most menus in the Azure portal. The menu th
For a description of Activity log categories see [Azure Activity Log event schema](activity-log-schema.md#categories).
+## Download the Activity log
+Select **Download as CSV** to download the events in the current view.
+
+![Download Activity log](media/activity-log/download-activity-log.png)
+
### View change history

For some events, you can view the Change history, which shows what changes happened during that event time. Select an event from the Activity Log that you want to look deeper into. Select the **Change history (Preview)** tab to view any associated changes with that event.
You will soon no longer be able to add the Activity Logs Analytics solution to y
* [Read an overview of platform logs](./platform-logs-overview.md) * [Review Activity log event schema](activity-log-schema.md)
-* [Create diagnostic setting to send Activity logs to other destinations](./diagnostic-settings.md)
+* [Create diagnostic setting to send Activity logs to other destinations](./diagnostic-settings.md)
azure-monitor Dns Analytics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/insights/dns-analytics.md
The solution starts collecting data without the need of further configuration. H
### Configure the solution
-On the solution dashboard, click **Configuration** to open the DNS Analytics Configuration page. There are two types of configuration changes that you can make:
+From the Log Analytics workspace in the Azure portal, select **Workspace summary** and then click on the **DNS Analytics** tile. On the solution dashboard, click **Configuration** to open the DNS Analytics Configuration page. There are two types of configuration changes that you can make:
- **Allowlisted Domain Names**. The solution does not process all the lookup queries. It maintains an allowlist of domain name suffixes. The lookup queries that resolve to the domain names that match domain name suffixes in this allowlist are not processed by the solution. Not processing allowlisted domain names helps to optimize the data sent to Azure Monitor. The default allowlist includes popular public domain names, such as www.google.com and www.facebook.com. You can view the complete default list by scrolling.
azure-monitor Quick Create Workspace https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/quick-create-workspace.md
description: Learn how to create a Log Analytics workspace to enable management
Previously updated : 05/26/2020 Last updated : 03/18/2021
If you don't have an Azure subscription, create a [free account](https://azure.m
Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com). ## Create a workspace
-1. In the Azure portal, click **All services**. In the list of resources, type **Log Analytics**. As you begin typing, the list filters based on your input. Select **Log Analytics workspaces**.
+In the Azure portal, click **All services**. In the list of resources, type **Log Analytics**. As you begin typing, the list filters based on your input. Select **Log Analytics workspaces**.
- ![Azure portal](media/quick-create-workspace/azure-portal-01.png)
+![Azure portal](media/quick-create-workspace/azure-portal-01.png)
-2. Click **Add**, and then select choices for the following items:
+Click **Add**, and then provide values for the following options:
- * Provide a name for the new **Log Analytics workspace**, such as *DefaultLAWorkspace*. This name must be globally unique across all Azure Monitor subscriptions.
    * Select a **Subscription** to link to by selecting from the drop-down list if the default selected is not appropriate.
    * For **Resource Group**, choose to use an existing resource group already set up or create a new one.
- * Select an available **Location**. For more information, see which [regions Log Analytics is available in](https://azure.microsoft.com/regions/services/) and search for Azure Monitor from the **Search for a product** field.
- * If you are creating a workspace in a new subscription created after April 2, 2018, it will automatically use the *Per GB* pricing plan and the option to select a pricing tier will not be available. If you are creating a workspace for an existing subscription created before April 2, or to subscription that was tied to an existing Enterprise Agreement (EA) enrollment, select your preferred pricing tier. For more information about the particular tiers, see [Log Analytics Pricing Details](https://azure.microsoft.com/pricing/details/log-analytics/).
+ * Provide a name for the new **Log Analytics workspace**, such as *DefaultLAWorkspace*. This name must be globally unique across all Azure Monitor subscriptions.
+ * Select an available **Region**. For more information, see which [regions Log Analytics is available in](https://azure.microsoft.com/regions/services/) and search for Azure Monitor from the **Search for a product** field.
++
+ ![Create Log Analytics resource blade](media/quick-create-workspace/create-workspace.png)
+
- ![Create Log Analytics resource blade](media/quick-create-workspace/create-loganalytics-workspace-02.png)
+Click **Review + create** to review the settings and then **Create** to create the workspace. This will select a default pricing tier of Pay-as-you-go, which will not incur any charges until you start collecting a sufficient amount of data. For more information about other pricing tiers, see [Log Analytics Pricing Details](https://azure.microsoft.com/pricing/details/log-analytics/).
-3. After providing the required information on the **Log Analytics Workspace** pane, click **OK**.
-While the information is verified and the workspace is created, you can track its progress under **Notifications** from the menu.
## Troubleshooting

When you create a workspace that was deleted in the last 14 days and is in [soft-delete state](../logs/delete-workspace.md#soft-delete-behavior), the operation could have a different outcome depending on your workspace configuration:
azure-percept How To Ssh Into Percept Dk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/how-to-ssh-into-percept-dk.md
Previously updated : 02/03/2021 Last updated : 03/18/2021 # Connect to your Azure Percept DK over SSH
-Follow the steps below to set up an SSH connection to your Azure Percept DK through [PuTTY](https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html).
+Follow the steps below to set up an SSH connection to your Azure Percept DK through OpenSSH or [PuTTY](https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html).
## Prerequisites

- A Windows, Linux, or OS X based host computer with Wi-Fi capability
-- An SSH client
- - If your host computer runs Windows, [PuTTY](https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html) is an effective SSH client, and will be used throughout this guide.
- - If your host computer runs Linux or OS X, SSH services are included in those operating systems and can be run without a separate client application. Check your operating system product documentation for more information on how to run SSH services.
-- Azure Percept DK
-- Set up a SSH login account during the [Azure Percept DK on-boarding experience](./quickstart-percept-dk-set-up.md)
+- An SSH client (see the next section for installation guidance)
+- An Azure Percept DK (dev kit)
+- An SSH login, created during the [Azure Percept DK setup experience](./quickstart-percept-dk-set-up.md)
+
+## Install your preferred SSH client
+
+If your host computer runs Linux or OS X, SSH services are included in those operating systems and can be run without a separate client application. Check your operating system product documentation for more information on how to run SSH services.
+
+If your host computer runs Windows, you may have two SSH client options to choose from: OpenSSH and PuTTY.
+
+### OpenSSH
+
+Windows 10 includes a built-in SSH client called OpenSSH that can be run with a simple command inside of a command prompt. We recommend using OpenSSH with Azure Percept if it is available to you. To check if your Windows computer has OpenSSH installed, follow these steps:
+
+1. Go to **Start** -> **Settings**.
+
+1. Select **Apps**.
+
+1. Under **Apps & features**, select **Optional features**.
+
+1. Type **OpenSSH Client** into the **Installed features** search bar. If OpenSSH appears, the client is already installed, and you may move on to the next section. If you do not see OpenSSH, click **Add a feature**.
+
+ :::image type="content" source="./media/how-to-ssh-into-percept-dk/open-ssh-install.png" alt-text="Screenshot of settings showing OpenSSH installation status.":::
+
+1. Select **OpenSSH Client** and click **Install**. You may now move on to the next section. If OpenSSH is not available to install on your computer, follow the steps below to install PuTTY, a third-party SSH client.
+
+### PuTTY
+
+If your Windows computer does not include OpenSSH, we recommend using [PuTTY](https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html). To download and install PuTTY, complete the following steps:
+
+1. Go to the [PuTTY download page](https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html).
+
+1. Under **Package files**, click on the 32-bit or 64-bit .msi file to download the installer. If you are unsure of which version to choose, check out the [FAQs](https://www.chiark.greenend.org.uk/~sgtatham/putty/faq.html#faq-32bit-64bit).
+
+1. Click on the installer to start the installation process. Follow the prompts as required.
+
+1. Congratulations! You have successfully installed the PuTTY SSH client.
## Initiate the SSH connection
-1. Power on your Azure Percept DK (dev kit)
+1. Power on your Azure Percept DK.
+
+1. If your dev kit is already connected to a network over Ethernet or Wi-Fi, skip to the next step. Otherwise, connect your host computer directly to the dev kit's Wi-Fi access point. Like connecting to any other Wi-Fi network, open the network and internet settings on your computer, click on the following network, and enter the network password when prompted:
-1. If your dev kit is already connected to a network over Ethernet or Wi-Fi, skip to the next step. Otherwise, connect your host computer directly to the dev kit's Wi-Fi access point, just like connecting to any other Wi-Fi network:
- - **network name**: scz-xxxx (where "xxxx" is the last four digits of the dev kit's MAC network address)
- - **password**: can be found on the Welcome Card that came with the dev kit
+ - **Network name**: depending on your dev kit's operating system version, the name of the Wi-Fi access point is either **scz-xxxx** or **apd-xxxx** (where "xxxx" is the last four digits of the dev kit's MAC address)
+ - **Password**: can be found on the Welcome Card that came with the dev kit
> [!WARNING]
- > While connected to the Azure Percept DK Wi-Fi access point, your host computer will temporarily lose its connection to the Internet. Active video conference calls, web streaming, or other network-based experiences will be interrupted until step 3 of the Azure Percept DK on-boarding experience is completed.
+ > While connected to the Azure Percept DK Wi-Fi access point, your host computer will temporarily lose its connection to the Internet. Active video conference calls, web streaming, or other network-based experiences will be interrupted.
+
+1. Complete the SSH connection process according to your SSH client.
+
+### Using OpenSSH
+
+1. Open a command prompt (**Start** -> **Command Prompt**).
+
+1. Enter the following into the command prompt:
+
+ ```console
+ ssh [your ssh user name]@[IP address]
+ ```
+
+ If your computer is connected to the dev kit's Wi-Fi access point, the IP address will be 10.1.1.1. If your dev kit is connected over Ethernet, use the local IP address of the device, which you can get from the Ethernet router or hub. If your dev kit is connected over Wi-Fi, you must use the IP address that was assigned to your dev kit during the [Azure Percept DK setup experience](./quickstart-percept-dk-set-up.md).
-1. Open PuTTY. Enter the following and click **Open** to SSH into your devkit:
+ > [!TIP]
+ > If your dev kit is connected to a Wi-Fi network but you do not know its IP address, go to Azure Percept Studio and [open your device's video stream](./how-to-view-video-stream.md). The address bar in the video stream browser tab will show your device's IP address.
- 1. Host Name: 10.1.1.1
+1. Enter your SSH password when prompted.
+
+ :::image type="content" source="./media/how-to-ssh-into-percept-dk/open-ssh-prompt.png" alt-text="Screenshot of Open SSH command prompt login.":::
+
+1. If this is the first time connecting to your dev kit through OpenSSH, you may also be prompted to accept the host's key. Enter **yes** to accept the key.
+
+1. Congratulations! You have successfully connected to your dev kit over SSH.
+
+### Using PuTTY
+
+1. Open PuTTY. Enter the following into the **PuTTY Configuration** window and click **Open** to SSH into your dev kit:
+
+ 1. Host Name: [IP address]
    1. Port: 22
    1. Connection Type: SSH
- > [!NOTE]
- > The **Host Name** is your device's IP address. If your dev kit is connected to the dev kit's Wi-Fi access point, the IP address will be 10.1.1.1. If your dev kit is connected over Ethernet, use the local IP address of the device, which you can get from the Ethernet router or hub. If your device is connected over Wi-Fi, you must use the IP address that was provided collected during the [Azure Percept DK on-boarding experience](./quickstart-percept-dk-set-up.md).
+ The **Host Name** is your dev kit's IP address. If your computer is connected to the dev kit's Wi-Fi access point, the IP address will be 10.1.1.1. If your dev kit is connected over Ethernet, use the local IP address of the device, which you can get from the Ethernet router or hub. If your dev kit is connected over Wi-Fi, you must use the IP address that was assigned to your dev kit during the [Azure Percept DK setup experience](./quickstart-percept-dk-set-up.md).
+
+ > [!TIP]
+ > If your dev kit is connected to a Wi-Fi network but you do not know its IP address, go to Azure Percept Studio and [open your device's video stream](./how-to-view-video-stream.md). The address bar in the video stream browser tab will show your device's IP address.
+
+ :::image type="content" source="./media/how-to-ssh-into-percept-dk/ssh-putty.png" alt-text="Screenshot of PuTTY Configuration window.":::
- :::image type="content" source="./media/how-to-ssh-into-percept-dk/ssh-putty.png" alt-text="Image.":::
+1. A PuTTY terminal will open. When prompted, enter your SSH username and password into the terminal.
-1. Log in to the PuTTY terminal with the SSH username and password created during the on-boarding experience.
+1. Congratulations! You have successfully connected to your dev kit over SSH.
## Next steps
-After successfully connecting to your Azure Percept DK through SSH, you may perform a variety of tasks, including troubleshooting, USB updates, and running the DiagTool or SoftAP Tool.
+After connecting to your Azure Percept DK through SSH, you may perform a variety of tasks, including [device troubleshooting](./troubleshoot-dev-kit.md) and [USB updates](./how-to-update-via-usb.md).
azure-percept How To Update Via Usb https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/how-to-update-via-usb.md
Previously updated : 02/18/2021 Last updated : 03/18/2021 # How to update Azure Percept DK over a USB connection
-Follow this guide to learn how to perform a USB update for the carrier board of your Azure Percept DK.
+Although using over-the-air (OTA) updates is the best method of keeping your dev kit's operating system and firmware up to date, there are scenarios where updating (or "flashing") the dev kit over a USB connection is necessary:
+
+- An OTA update is not possible due to connectivity or other technical issues
+- The device needs to be reset back to its factory state
+
+This guide will show you how to successfully update your dev kit's operating system and firmware over a USB connection.
+
+> [!WARNING]
+> Updating your dev kit over USB will delete all existing data on the device, including AI models and containers.
+>
+> Follow all instructions in order. Skipping steps could put your dev kit in an unusable state.
## Prerequisites

-- Host computer with an available USB-C or USB-A port.
-- Azure Percept DK (dev kit) carrier board and supplied USB-C to USB-C cable. If your host computer has a USB-A port but not a USB-C port, you may use a USB-C to USB-A cable (sold separately).
-- Install [PuTTY](https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html) (admin access needed).
-- Install the NXP UUU tool. [Download the Latest Release](https://github.com/NXPmicro/mfgtools/releases) uuu.exe file (for Windows) or the uuu file (for Linux) under the Assets tab.
-- [Install 7-Zip](https://www.7-zip.org/). This software will be used for extracting the raw image file from its XZ compressed file. Download and install the appropriate .exe file.
-
-## Steps
-1. Download the following [three USB Update files](https://go.microsoft.com/fwlink/?linkid=2155734):
- - pe101-uefi-***&lt;version number&gt;***.raw.xz
- - emmc_full.txt
- - fast-hab-fw.raw
-
-1. Extract to pe101-uefi-***&lt;version number&gt;***.raw from the compressed pe101-uefi***&lt;version number&gt;***.raw.xz file.
-Not sure how to extract? Download and Install 7-Zip, then right-click on the **.xz** image file and select 7-Zip &gt; Extract Here.
-
-1. Copy the following three files to the folder that contains the UUU tool:
- - Extracted pe101-uefi-***&lt;version number&gt;***.raw file (from step 2).
- - emmc_full.txt (from step 1).
- - fast-hab-fw.raw (from step 1).
-
-1. Power on the dev kit.
-1. [Connect to the dev kit over SSH](./how-to-ssh-into-percept-dk.md)
-1. Open a Windows command prompt (Start &gt; cmd) or a Linux terminal and navigate to the folder where the update files are stored. Run the following command to initiate the update:
- - Windows:
- ```uuu -b emmc_full.txt fast-hab-fw.raw pe101-uefi-<version number>.raw```
- - Linux:
- ```sudo ./uuu -b emmc_full.txt fast-hab-fw.raw pe101-uefi-<version number>.raw```
-
-After running these commands, you may see a message stating "Waiting for device..." in the command prompt. This is expected and you should proceed to the next step.
-
-1. Connect the dev kit carrier board to the host computer via a USB cable. Always connect from the carrier boards USB-C port to either the host computer's USB-C or USB-A port (USB-C to USB-A cable sold separately), depending on which ports are available.
-
-1. In the SSH/PuTTY terminal, enter the following commands to set the dev kit into USB mode and then to reboot the dev kit.
- - ```flagutil -wBfRequestUsbFlash -v1```
- - ```reboot -f```
-
-1. You may get an indication that the host computer recognizes the device and the update process will automatically start. Navigate back to the command prompt to see the status. The process will take up to ten minutes and when the update is successful, you will see a message stating ΓÇ£Success 1 Failure 0ΓÇ¥
-
-1. Once the update is complete, power off the carrier board. Unplug the USB cable from the PC. Plug the Azure Percept Vision module back to the carrier board using the USB cable.
-
-1. Power the carrier board back on.
+
+- An Azure Percept DK
+- A Windows, Linux, or OS X based host computer with Wi-Fi capability and an available USB-C or USB-A port
+- A USB-C to USB-A cable (optional, sold separately)
+- An SSH login, created during the [Azure Percept DK setup experience](./quickstart-percept-dk-set-up.md)
+
+## Download software tools and update files
+
+1. [NXP UUU tool](https://github.com/NXPmicro/mfgtools/releases). Download the **Latest Release** uuu.exe file (for Windows) or the uuu file (for Linux) under the **Assets** tab.
+
+1. [7-Zip](https://www.7-zip.org/). This software will be used for extracting the raw image file from its XZ compressed file. Download and install the appropriate .exe file.
+
+1. [Download the update files](https://go.microsoft.com/fwlink/?linkid=2155734).
+
+1. Ensure all three build artifacts are present:
+ - Azure-Percept-DK-*&lt;version number&gt;*.raw.xz
+ - fast-hab-fw.raw
+ - emmc_full.txt
+
+## Set up your environment
+
+1. Create a folder/directory on the host computer in a location that is easy to access via command line.
+
+1. Copy the UUU tool (**uuu.exe** or **uuu**) to the new folder.
+
+1. Extract the **Azure-Percept-DK-*&lt;version number&gt;*.raw** file from the compressed file by right clicking on **Azure-Percept-DK-*&lt;version number&gt;*.raw.xz** and selecting **7-Zip** &gt; **Extract Here**.
+
+1. Move the extracted **Azure-Percept-DK-*&lt;version number&gt;*.raw** file, **fast-hab-fw.raw**, and **emmc_full.txt** to the folder containing the UUU tool.
+
+## Update your device
+
+1. [SSH into your dev kit](./how-to-ssh-into-percept-dk.md).
+
+1. Next, open a Windows command prompt (**Start** > **cmd**) or a Linux terminal and navigate to the folder where the update files and UUU tool are stored. Enter the following command in the command prompt or terminal to prepare your computer to receive a flashable device:
+
+ - Windows:
+
+ ```bash
+ uuu -b emmc_full.txt fast-hab-fw.raw Azure-Percept-DK-<version number>.raw
+ ```
+
+ - Linux:
+
+ ```bash
+ sudo ./uuu -b emmc_full.txt fast-hab-fw.raw Azure-Percept-DK-<version number>.raw
+ ```
+
+1. Disconnect the Azure Percept Vision device from the carrier board's USB-C port.
+
+1. Connect the supplied USB-C cable to the carrier board's USB-C port and to the host computer's USB-C port. If your computer only has a USB-A port, connect a USB-C to USB-A cable (sold separately) to the carrier board and host computer.
+
+1. In the SSH client prompt, enter the following commands:
+
+ 1. Set the device to USB update mode:
+
+ ```bash
+ sudo flagutil -wBfRequestUsbFlash -v1
+ ```
+
+ 1. Reboot the device. The update installation will begin.
+
+ ```bash
+ sudo reboot -f
+ ```
+
+1. Navigate back to the other command prompt or terminal. When the update is finished, you will see a message with ```Success 1 Failure 0```:
+
+ > [!NOTE]
+ > After updating, your device will be reset to factory settings and you will lose your Wi-Fi connection and SSH login.
+
+1. Once the update is complete, power off the carrier board. Unplug the USB cable from the PC.
## Next steps
-Your dev kit is now successfully updated. You may continue development and operation with your devkit.
+Work through the [Azure Percept DK setup experience](./quickstart-percept-dk-set-up.md) to reconfigure your device.
azure-resource-manager Move Resource Group And Subscription https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/move-resource-group-and-subscription.md
This article shows you how to move Azure resources to either another Azure subscription or another resource group under the same subscription. You can use the Azure portal, Azure PowerShell, Azure CLI, or the REST API to move resources.
-Both the source group and the target group are locked during the move operation. Write and delete operations are blocked on the resource groups until the move completes. This lock means you can't add, update, or delete resources in the resource groups. It doesn't mean the resources are frozen. For example, if you move a SQL Server and its database to a new resource group, an application that uses the database experiences no downtime. It can still read and write to the database. The lock can last for a maximum of four hours, but most moves complete in much less time.
+Both the source group and the target group are locked during the move operation. Write and delete operations are blocked on the resource groups until the move completes. This lock means you can't add, update, or delete resources in the resource groups. It doesn't mean the resources are frozen. For example, if you move an Azure SQL logical server and its databases to a new resource group or subscription, applications that use the databases experience no downtime. They can still read and write to the databases. The lock can last for a maximum of four hours, but most moves complete in much less time.
Moving a resource only moves it to a new resource group or subscription. It doesn't change the location of the resource.
azure-sql Access To Sql Database Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/database/access-to-sql-database-guide.md
To create an assessment, follow these steps:
1. Open SQL Server Migration Assistant for Access.
1. Select **File** and then choose **New Project**. Provide a name for your migration project.
-1. Select **Add Databases** and choose databases to be added to your new project
+
+ ![Choose New Project](./media/access-to-sql-database-guide/new-project.png)
+
+1. Select **Add Databases** and choose databases to be added to your new project.
+
+ ![Choose Add databases](./media/access-to-sql-database-guide/add-databases.png)
+ 1. In **Access Metadata Explorer**, right-click the database and then choose **Create Report**. +
+ ![Right-click the database and choose Create Report](./media/access-to-sql-database-guide/create-report.png)
+ 1. Review the sample assessment. For example:
+ ![Review the sample report assessment](./media/access-to-sql-database-guide/sample-assessment.png)
+
+### Validate data types
+
+Validate the default data type mappings and change them based on requirements if necessary. To do so, follow these steps:
+
+1. Select **Tools** from the menu.
+1. Select **Project Settings**.
+1. Select the **Type mappings** tab.
+
+ ![Type Mappings](./media/access-to-sql-database-guide/type-mappings.png)
+
+1. You can change the type mapping for each table by selecting the table in the **Access Metadata Explorer**.
+
+
### Convert schema

To convert database objects, follow these steps:

1. Select **Connect to Azure SQL Database** and provide connection details.
-1. Right-click the database in **Access Metadata Explorer** and choose **Convert schema**.
-1. (Optional) To convert an individual object, right-click the object and choose **Convert schema**. An object that has been converted appears bold in the **Access Metadata Explorer**:
+
+ ![Connect to Azure SQL Database](./media/access-to-sql-database-guide/connect-to-sqldb.png)
+
+1. Right-click the database in **Access Metadata Explorer** and choose **Convert schema**. Alternatively, you can choose **Convert Schema** from the top navigation bar after selecting your database.
+
+ ![Right-click the database and choose convert schema](./media/access-to-sql-database-guide/convert-schema.png)
+
+ Compare converted queries to original queries:
+
+ ![Converted queries can be compared with source code](./media/access-to-sql-database-guide/query-comparison.png)
+
+ Compare converted objects to original objects:
+
+ ![Converted objects can be compared with source](./media/access-to-sql-database-guide/table-comparison.png)
+
+1. (Optional) To convert an individual object, right-click the object and choose **Convert schema**. Converted objects appear bold in the **Access Metadata Explorer**:
+
+ ![Bold objects in metadata explorer have been converted](./media/access-to-sql-database-guide/converted-items.png)
+
1. Select **Review results** in the Output pane, and review errors in the **Error list** pane.
To migrate data by using SSMA for Access, follow these steps:
1. If you haven't already, select **Connect to Azure SQL Database** and provide connection details.

1. Right-click the database from the **Azure SQL Database Metadata Explorer** and choose **Synchronize with Database**. This action publishes the converted schema to Azure SQL Database.
+
+ ![Synchronize with Database](./media/access-to-sql-database-guide/synchronize-with-database.png)
+
+ Review the mapping between your source project and your target:
+
+ ![Review the synchronization with the database](./media/access-to-sql-database-guide/synchronize-with-database-review.png)
1. Use **Access Metadata Explorer** to check boxes next to the items you want to migrate. If you want to migrate the entire database, check the box next to the database.

1. Right-click the database or object you want to migrate, and choose **Migrate data**. To migrate data for an entire database, select the check box next to the database name. To migrate data from individual tables, expand the database, expand **Tables**, and then select the check box next to the table. To omit data from individual tables, clear the check box.
+ ![Migrate Data](./media/access-to-sql-database-guide/migrate-data.png)
+
+ Review the migrated data:
+
+ ![Migrate Data Review](./media/access-to-sql-database-guide/migrate-data-review.png)
+
+1. Connect to your Azure SQL Database by using [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms) and validate the migration by reviewing the data and schema.
+
+ ![Validate in SSMA](./media/access-to-sql-database-guide/validate-data.png)
+
+
+
## Post-migration

After you have successfully completed the **Migration** stage, you need to go through a series of post-migration tasks to ensure that everything is functioning as smoothly and efficiently as possible.
azure-sql Mysql To Sql Database Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/database/mysql-to-sql-database-guide.md
By using [SQL Server Migration Assistant for MySQL](https://www.microsoft.com/do
To create an assessment, perform the following steps.

1. Open SQL Server Migration Assistant for MySQL.
-1. Select **File** from the menu and then choose **New Project**. Provide the project name, a location to save your project.
-1. Choose **Azure SQL Database** as the migration target.
+1. Select **File** from the menu and then choose **New Project**. Provide the project name and a location to save your project. Then choose **Azure SQL Database** as the migration target.
+
+ ![New Project](./media/mysql-to-sql-database-guide/new-project.png)
+ 1. Choose **Connect to MySQL** and provide connection details to connect your MySQL server. +
+ ![Connect to MySQL](./media/mysql-to-sql-database-guide/connect-to-mysql.png)
+ 1. Right-click the MySQL schema in **MySQL Metadata Explorer** and choose **Create report**. Alternatively, you can select **Create report** from the top-line navigation bar. +
+ ![Create Report](./media/mysql-to-sql-database-guide/create-report.png)
1. Review the HTML report for conversion statistics, as well as errors and warnings. Analyze it to understand conversion issues and resolutions. This report can also be accessed from the SSMA projects folder, as selected on the first screen. From the example above, locate the report.xml file in:
To create an assessment, perform the following steps.
and open it in Excel to get an inventory of MySQL objects and the effort required to perform schema conversions.
+ ![Conversion Report](./media/mysql-to-sql-database-guide/conversion-report.png)
+ ### Validate data types
-Before you perform schema conversion validate the default datatype mappings or change them based on requirements. You could do so either by navigating to the "Tools" menu and choosing "Project Settings" or you can change type mapping for each table by selecting the table in the "MySQL Metadata Explorer".
+Validate the default data type mappings and change them based on requirements if necessary. To do so, follow these steps:
+
+1. Select **Tools** from the menu.
+1. Select **Project Settings**.
+1. Select the **Type mappings** tab.
+
+ ![Type Mappings](./media/mysql-to-sql-database-guide/type-mappings.png)
+
+1. You can change the type mapping for each table by selecting the table in the **MySQL Metadata Explorer**.
### Convert schema
To convert the schema, follow these steps:
1. (Optional) To convert dynamic or ad-hoc queries, right-click the node and choose **Add statement**. 1. Choose **Connect to Azure SQL Database** from the top-line navigation bar and provide connection details. You can choose to connect to an existing database or provide a new name, in which case a database will be created on the target server.+
+ ![Connect to SQL](./media/mysql-to-sql-database-guide/connect-to-sqldb.png)
+
1. Right-click the schema and choose **Convert schema**. +
+ ![Convert Schema](./media/mysql-to-sql-database-guide/convert-schema.png)
+ 1. After the schema is finished converting, compare the converted code to the original code to identify potential problems.
+ Compare converted objects to original objects:
+
+ ![Compare and review objects](./media/mysql-to-sql-database-guide/table-comparison.png)
+
+ Compare converted procedures to original procedures:
+
+ ![Compare And Review object code](./media/mysql-to-sql-database-guide/procedure-comparison.png)
## Migrate
After you have completed assessing your databases and addressing any discrepanci
To publish the schema and migrate the data, follow these steps: 1. Right-click the database from the **Azure SQL Database Metadata Explorer** and choose **Synchronize with Database**. This action publishes the MySQL schema to Azure SQL Database.+
+ ![Synchronize with Database](./media/mysql-to-sql-database-guide/synchronize-database.png)
+
+ Review the mapping between your source project and your target:
+
+ ![Synchronize with Database Review](./media/mysql-to-sql-database-guide/synchronize-database-review.png)
+ 1. Right-click the MySQL schema from the **MySQL Metadata Explorer** and choose **Migrate Data**. Alternatively, you can select **Migrate Data** from the top-line navigation. +
+ ![Migrate data](./media/mysql-to-sql-database-guide/migrate-data.png)
+ 1. After migration completes, view the **Data Migration** report: +
+ ![Data Migration Report](./media/mysql-to-sql-database-guide/data-migration-report.png)
+ 1. Validate the migration by reviewing the data and schema on Azure SQL Database by using SQL Server Management Studio (SSMS).
+ ![Validate in SSMA](./media/mysql-to-sql-database-guide/validate-in-ssms.png)
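
 As an optional cross-check outside SSMS, you can list the tables that landed in the target database with `sqlcmd`. This is a minimal sketch with placeholder connection values, not part of the original guide:

 ```bash
 # List migrated tables; compare against the table list in the source MySQL schema.
 sqlcmd -S <server-name>.database.windows.net -d <database-name> -U <user> -P "<password>" \
   -Q "SELECT TABLE_SCHEMA, TABLE_NAME FROM INFORMATION_SCHEMA.TABLES ORDER BY TABLE_SCHEMA, TABLE_NAME;"
 ```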
++ ## Post-migration
azure-sql Oracle To Sql Database Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/database/oracle-to-sql-database-guide.md
To create an assessment, follow these steps:
1. Open [SQL Server Migration Assistant for Oracle](https://www.microsoft.com/en-us/download/details.aspx?id=54258). 1. Select **File** and then choose **New Project**. 1. Provide a project name, a location to save your project, and then select Azure SQL Database as the migration target from the drop-down. Select **OK**.
-1. Enter in values for Oracle connection details on the **Connect to Oracle** dialog box.
+
+ ![New Project](./media/oracle-to-sql-database-guide/new-project.png)
++
+1. Select **Connect to Oracle**. Enter values for the Oracle connection details in the **Connect to Oracle** dialog box.
+
+ ![Connect to Oracle](./media/oracle-to-sql-database-guide/connect-to-oracle.png)
+
+ Select the Oracle schema(s) you want to migrate:
+
+ ![Select Oracle schema](./media/oracle-to-sql-database-guide/select-schema.png)
+ 1. Right-click the Oracle schema you want to migrate in the **Oracle Metadata Explorer**, and then choose **Create report**. This will generate an HTML report. Alternatively, you can choose **Create report** from the navigation bar after selecting the database.+
+ ![Create Report](./media/oracle-to-sql-database-guide/create-report.png)
+ 1. Review the HTML report to understand conversion statistics and any errors or warnings. You can also open the report in Excel to get an inventory of Oracle objects and the effort required to perform schema conversions. The default location for the report is in the report folder within SSMAProjects. For example: `drive:\<username>\Documents\SSMAProjects\MyOracleMigration\report\report_2020_11_12T02_47_55\`
+ ![Assessment Report](./media/oracle-to-sql-database-guide/assessment-report.png)
+ ### Validate data types
Validate the default data type mappings and change them based on requirements if
1. Select **Tools** from the menu. 1. Select **Project Settings**. 1. Select the **Type mappings** tab. +
+ ![Type Mappings](./media/oracle-to-sql-database-guide/type-mappings.png)
+ 1. You can change the type mapping for each table by selecting the table in the **Oracle Metadata Explorer**. ### Convert schema
To convert the schema, follow these steps:
1. Choose your target SQL Database from the drop-down. 1. Select **Connect**.
-1. Right-click the schema and then choose **Convert Schema**. Alternatively, you can choose **Convert Schema** from the top navigation bar after selecting your schema.
+ ![Connect to SQL Database](./media/oracle-to-sql-database-guide/connect-to-sql-database.png)
++
+1. Right-click the Oracle schema in the **Oracle Metadata Explorer** and then choose **Convert Schema**. Alternatively, you can choose **Convert Schema** from the top navigation bar after selecting your schema.
+
+ ![Convert Schema](./media/oracle-to-sql-database-guide/convert-schema.png)
1. After the conversion completes, compare the converted objects to the original objects to identify potential problems, and address them based on the recommendations.+
+ ![Review recommendations schema](./media/oracle-to-sql-database-guide/table-mapping.png)
+
+ Compare the converted Transact-SQL text to the original stored procedures and review the recommendations.
+
+ ![Review recommendations](./media/oracle-to-sql-database-guide/procedure-comparison.png)
+ 1. Save the project locally for an offline schema remediation exercise. Select **Save Project** from the **File** menu. ## Migrate
After you have completed assessing your databases and addressing any discrepanci
To publish your schema and migrate your data, follow these steps: 1. Publish the schema: Right-click the database from the **Databases** node in the **Azure SQL Database Metadata Explorer** and choose **Synchronize with Database**.
-1. Migrate the data: Right-click the schema from the **Oracle Metadata Explorer** and choose **Migrate Data**.
+
+ ![Synchronize with Database](./media/oracle-to-sql-database-guide/synchronize-with-database.png)
+
+ Review the mapping between your source project and your target:
+
+ ![Synchronize with Database review](./media/oracle-to-sql-database-guide/synchronize-with-database-review.png)
++
+1. Migrate the data: Right-click the schema from the **Oracle Metadata Explorer** and choose **Migrate Data**. Alternatively, you can choose **Migrate Data** from the top-line navigation bar after selecting the schema.
+
+ ![Migrate Data](./media/oracle-to-sql-database-guide/migrate-data.png)
+ 1. Provide connection details for both Oracle and Azure SQL Database. 1. View the **Data Migration report**.+
+ ![Data Migration Report](./media/oracle-to-sql-database-guide/data-migration-report.png)
+ 1. Connect to your Azure SQL Database by using [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms) and validate the migration by reviewing the data and schema.
+ ![Validate in SSMA](./media/oracle-to-sql-database-guide/validate-data.png)
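
 If you prefer a scripted sanity check, the following sketch (placeholder connection values; not part of the original guide) counts the converted objects by type so you can compare the totals against the source Oracle schema:

 ```bash
 # Count user objects by type in the target database.
 sqlcmd -S <server-name>.database.windows.net -d <database-name> -U <user> -P "<password>" \
   -Q "SELECT type_desc, COUNT(*) AS total FROM sys.objects WHERE is_ms_shipped = 0 GROUP BY type_desc;"
 ```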
Alternatively, you can also use SQL Server Integration Services (SSIS) to perform the migration. To learn more, see:
azure-sql Oracle To Sql On Azure Vm Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/virtual-machines/oracle-to-sql-on-azure-vm-guide.md
To use the MAP Toolkit to perform an inventory scan, follow these steps:
1. Open the [MAP Toolkit](https://go.microsoft.com/fwlink/?LinkID=316883). 1. Select **Create/Select database**.+
+ ![Select database](./media/oracle-to-sql-on-azure-vm-guide/select-database.png)
+ 1. Select **Create an inventory database**, enter a name for the new inventory database you're creating, provide a brief description, and then select **OK**. +
+ :::image type="content" source="media/oracle-to-sql-on-azure-vm-guide/create-inventory-database.png" alt-text="Create an inventory database":::
+ 1. Select **Collect inventory data** to open the **Inventory and Assessment Wizard**. +
+ :::image type="content" source="media/oracle-to-sql-on-azure-vm-guide/collect-inventory-data.png" alt-text="Collect inventory data":::
+ 1. In the **Inventory and Assessment Wizard**, choose **Oracle** and then select **Next**. +
+ ![Choose oracle](./media/oracle-to-sql-on-azure-vm-guide/choose-oracle.png)
+ 1. Choose the computer search option that best suits your business needs and environment, and then select **Next**: +
+ ![Choose the computer search option that best suits your business needs](./media/oracle-to-sql-on-azure-vm-guide/choose-search-option.png)
+ 1. Either enter credentials or create new credentials for the systems that you want to explore, and then select **Next**.+
+ ![Enter credentials](./media/oracle-to-sql-on-azure-vm-guide/choose-credentials.png)
+ 1. Set the order of the credentials, and then select **Next**. +
+ ![Set credential order](./media/oracle-to-sql-on-azure-vm-guide/set-credential-order.png)
+ 1. Specify the credentials for each computer you want to discover. You can use unique credentials for every computer/machine, or you can choose to use the **All Computer Credentials** list. ++
+ ![Specify the credentials for each computer you want to discover](./media/oracle-to-sql-on-azure-vm-guide/specify-credentials-for-each-computer.png)
++ 1. Verify your selection summary, and then select **Finish**.
-1. After the scan completes, view the **Data Collection** summary report. The scan take a few minutes, and depends on the number of databases. Select **Close** when finished.
+
+ ![Review summary](./media/oracle-to-sql-on-azure-vm-guide/review-summary.png)
+
+1. After the scan completes, view the **Data Collection** summary report. The scan can take a few minutes, depending on the number of databases. Select **Close** when finished.
+
+ ![Collection summary report](./media/oracle-to-sql-on-azure-vm-guide/collection-summary-report.png)
++ 1. Select **Options** to generate a report about the Oracle Assessment and database details. Select both options (one by one) to generate the report.
To create an assessment, follow these steps:
1. Open the [SQL Server Migration Assistant (SSMA) for Oracle](https://www.microsoft.com/en-us/download/details.aspx?id=54258). 1. Select **File** and then choose **New Project**. 1. Provide a project name, a location to save your project, and then select a SQL Server migration target from the drop-down. Select **OK**.
-1. Enter in values for Oracle connection details on the **Connect to Oracle** dialog box.
+
+ ![New project](./media/oracle-to-sql-on-azure-vm-guide/new-project.png)
+
+1. Select **Connect to Oracle**. Enter values for the Oracle connection details in the **Connect to Oracle** dialog box.
+
+ ![Connect to Oracle](./media/oracle-to-sql-on-azure-vm-guide/connect-to-oracle.png)
+
+ Select the Oracle schema(s) you want to migrate:
+
+ ![Select Oracle schema](./media/oracle-to-sql-on-azure-vm-guide/select-schema.png)
+ 1. Right-click the Oracle schema you want to migrate in the **Oracle Metadata Explorer**, and then choose **Create report**. This will generate an HTML report. Alternatively, you can choose **Create report** from the navigation bar after selecting the database.
+ ![Create Report](./media/oracle-to-sql-on-azure-vm-guide/create-report.png)
+ 1. Review the HTML report for conversion statistics, as well as errors and warnings. Analyze it to understand conversion issues and resolutions.
To create an assessment, follow these steps:
and then open it in Excel to get an inventory of Oracle objects and the effort required to perform schema conversions.
+ ![Conversion Report](./media/oracle-to-sql-on-azure-vm-guide/conversion-report.png)
++ ### Validate data types
Validate the default data type mappings and change them based on requirements if
1. Select **Tools** from the menu. 1. Select **Project Settings**. 1. Select the **Type mappings** tab. +
+ ![Type Mappings](./media/oracle-to-sql-on-azure-vm-guide/type-mappings.png)
+ 1. You can change the type mapping for each table by selecting the table in the **Oracle Metadata Explorer**.
To convert the schema, follow these steps:
1. (Optional) To convert dynamic or ad-hoc queries, right-click the node and choose **Add statement**. 1. Choose **Connect to SQL Server** from the top-line navigation bar and provide connection details for your SQL Server on Azure VM. You can choose to connect to an existing database or provide a new name, in which case a database will be created on the target server.
-1. Right-click the schema and choose **Convert Schema**.
+
+ ![Connect to SQL](./media/oracle-to-sql-on-azure-vm-guide/connect-to-sql-vm.png)
+
+1. Right-click the Oracle schema in the **Oracle Metadata Explorer** and choose **Convert Schema**.
+
+ ![Convert Schema](./media/oracle-to-sql-on-azure-vm-guide/convert-schema.png)
+ 1. After the schema is finished converting, compare and review the structure of the schema to identify potential problems.
+ ![Review recommendations](./media/oracle-to-sql-on-azure-vm-guide/table-mapping.png)
+
+ Compare the converted Transact-SQL text to the original stored procedures and review the recommendations:
+
+ ![Review recommendations code](./media/oracle-to-sql-on-azure-vm-guide/procedure-comparison.png)
+ You can save the project locally for an offline schema remediation exercise by selecting **Save Project** from the **File** menu. This gives you an opportunity to evaluate the source and target schemas offline and perform remediation before you publish the schema to SQL Server.
After you have the necessary prerequisites in place and have completed the tasks
To publish the schema and migrate the data, follow these steps: 1. Right-click the database from the **SQL Server Metadata Explorer** and choose **Synchronize with Database**. This action publishes the Oracle schema to SQL Server on Azure VM. +
+ ![Synchronize with Database](./media/oracle-to-sql-on-azure-vm-guide/synchronize-database.png)
+
+ Review the synchronization status:
+
+ ![Review synchronization status](./media/oracle-to-sql-on-azure-vm-guide/synchronize-database-review.png)
1. Right-click the Oracle schema from the **Oracle Metadata Explorer** and choose **Migrate Data**. Alternatively, you can select **Migrate Data** from the top-line navigation.+
+ ![Migrate Data](./media/oracle-to-sql-on-azure-vm-guide/migrate-data.png)
+ 1. Provide connection details for Oracle and SQL Server on Azure VM in the dialog box. 1. After migration completes, view the **Data Migration** report:+
+ ![Data Migration Report](./media/oracle-to-sql-on-azure-vm-guide/data-migration-report.png)
+ 1. Connect to your SQL Server on Azure VM using [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms) to review data and schema on your SQL Server instance.
+ ![Validate in SSMA](./media/oracle-to-sql-on-azure-vm-guide/validate-in-ssms.png)
+++ In addition to using SSMA, you can also use SQL Server Integration Services (SSIS) to migrate the data. To learn more, see: - The article [Getting Started with SQL Server Integration Services](https://docs.microsoft.com/sql/integration-services/sql-server-integration-services).
For additional assistance with completing this migration scenario, please see th
| [Oracle Inventory Script Artifacts](https://github.com/Microsoft/DataMigrationTeam/tree/master/Oracle%20Inventory%20Script%20Artifacts) | This asset includes a PL/SQL query that hits Oracle system tables and provides a count of objects by schema type, object type, and status. It also provides a rough estimate of 'Raw Data' in each schema and the sizing of tables in each schema, with results stored in a CSV format. | | [Automate SSMA Oracle Assessment Collection & Consolidation](https://github.com/microsoft/DataMigrationTeam/tree/master/IP%20and%20Scripts/Automate%20SSMA%20Oracle%20Assessment%20Collection%20%26%20Consolidation) | This set of resources uses a .csv file as entry (sources.csv in the project folders) to produce the xml files that are needed to run SSMA assessment in console mode. The source.csv is provided by the customer based on an inventory of existing Oracle instances. The output files are AssessmentReportGeneration_source_1.xml, ServersConnectionFile.xml, and VariableValueFile.xml.| | [SSMA for Oracle Common Errors and how to fix them](https://aka.ms/dmj-wp-ssma-oracle-errors) | With Oracle, you can assign a non-scalar condition in the WHERE clause. However, SQL Server doesn't support this type of condition. As a result, SQL Server Migration Assistant (SSMA) for Oracle doesn't convert queries with a non-scalar condition in the WHERE clause, instead generating an error O2SS0001. This white paper provides more details on the issue and ways to resolve it. |
-| [Oracle to SQL Server Migration Handbook](https://github.com/microsoft/DataMigrationTeam/blob/master/Whitepapers/Oracle%20to%20SQL%20Server%20Migration%20Handbook.pdf) | This document focuses on the tasks associated with migrating an Oracle schema to the latest version of SQL Serverbase. If the migration requires changes to features/functionality, then the possible impact of each change on the applications that use the database must be considered carefully. |
+| [Oracle to SQL Server Migration Handbook](https://github.com/microsoft/DataMigrationTeam/blob/master/Whitepapers/Oracle%20to%20SQL%20Server%20Migration%20Handbook.pdf) | This document focuses on the tasks associated with migrating an Oracle schema to the latest version of SQL Server. If the migration requires changes to features/functionality, then the possible impact of each change on the applications that use the database must be considered carefully. |
These resources were developed as part of the Data SQL Ninja Program, which is sponsored by the Azure Data Group engineering team. The core charter of the Data SQL Ninja program is to unblock and accelerate complex modernization and competitive data platform migration opportunities to Microsoft's Azure Data platform. If you think your organization would be interested in participating in the Data SQL Ninja program, please contact your account team and ask them to submit a nomination.
backup Backup Azure Arm Vms Prepare https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-arm-vms-prepare.md
If you selected to create a new backup policy, fill in the policy settings.
4. In **Retention range**, specify how long you want to keep your daily or weekly backup points. 5. In **Retention of monthly backup point** and **Retention of yearly backup point**, specify whether you want to keep a monthly or yearly backup of your daily or weekly backups. 6. Select **OK** to save the policy.
+ > [!NOTE]
+ > To store the restore point collection (RPC), the Backup service creates a separate resource group (RG). This resource group is different from the resource group of the VM. [Learn more](backup-during-vm-creation.md#azure-backup-resource-group-for-virtual-machines).
![New backup policy](./media/backup-azure-arm-vms-prepare/new-policy.png)
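
If you want to confirm which resource group the Backup service created, one option is to query for it with the Azure CLI. This is a hedged sketch; it assumes the default `AzureBackupRG_<region>_<number>` naming pattern described in the linked article:

```azurecli
# List resource groups created by Azure Backup for restore point collections.
az group list --query "[?starts_with(name, 'AzureBackupRG')].name" --output table
```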
backup Backup Azure Delete Vault https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-delete-vault.md
To stop protection and delete the backup data, perform the following steps:
![The Delete Backup Data pane.](./media/backup-azure-delete-vault/stop-backup-blade-delete-backup-data.png)
+ This option deletes scheduled backups as well as on-demand backups.
3. Check the **Notification** icon: ![The Notification icon.](./media/backup-azure-delete-vault/messages.png) After the process finishes, the service displays the following message: *Stopping backup and deleting backup data for "*Backup Item*"*. *Successfully completed the operation*. 4. Select **Refresh** on the **Backup Items** menu, to make sure the backup item was deleted.
batch Batch Nodejs Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-nodejs-get-started.md
The following diagram depicts how we can scale the Python script using Azure Bat
The Node.js client deploys a batch job with a preparation task (explained in detail later) and a set of tasks depending on the number of containers in the storage account. You can download the scripts from the GitHub repository. -- [Node.js client](https://github.com/Azure/azure-batch-samples/blob/master/Node.js/GettingStarted/nodejs_batch_client_sample.js)-- [Preparation task shell scripts](https://github.com/Azure/azure-batch-samples/blob/master/Node.js/GettingStarted/startup_prereq.sh)-- [Python csv to JSON processor](https://github.com/Azure/azure-batch-samples/blob/master/Node.js/GettingStarted/processcsv.py)
+- [Node.js client](https://github.com/Azure-Samples/azure-batch-samples/blob/master/JavaScript/Node.js/sample.js)
+- [Preparation task shell scripts](https://github.com/Azure-Samples/azure-batch-samples/blob/master/JavaScript/Node.js/startup_prereq.sh)
+- [Python csv to JSON processor](https://github.com/Azure-Samples/azure-batch-samples/blob/master/JavaScript/Node.js/processcsv.py)
> [!TIP] > The Node.js client in the link specified does not contain specific code to be deployed as an Azure function app. You can refer to the following links for instructions to create one.
batch Batch Pool Node Error Checking https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-pool-node-error-checking.md
Some of these files are only written once when pool nodes are created, such as p
Other files are written out for each task that is run on a node, such as stdout and stderr. If a large number of tasks run on the same node and/or the task files are too large, they could fill the temporary drive.
-The size of the temporary drive depends on the VM size. One consideration when picking a VM size is to ensure the temporary drive has enough space.
+Additionally, after the node starts, a small amount of space is needed on the operating system disk to create users.
+
+The size of the temporary drive depends on the VM size. One consideration when picking a VM size is to ensure the temporary drive has enough space for the planned workload.
- In the Azure portal when adding a pool, the full list of VM sizes can be displayed and there is a 'Resource Disk Size' column. - The articles describing all VM sizes have tables with a 'Temp Storage' column; for example [Compute Optimized VM sizes](../virtual-machines/sizes-compute.md) For files written out by each task, a retention time can be specified for each task that determines how long the task files are kept before being automatically cleaned up. The retention time can be reduced to lower the storage requirements.
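
You can also inspect resource disk sizes from the command line before choosing a VM size. A minimal sketch using the Azure CLI; the location and the `Standard_D` name filter are placeholders for your own region and size family:

```azurecli
# Show the temporary (resource) disk size for VM sizes in a region.
az vm list-sizes --location eastus \
  --query "[?contains(name, 'Standard_D')].{Name:name, ResourceDiskMB:resourceDiskSizeInMb}" --output table
```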
-If the temporary disk runs out of space (or is very close to running out of space), the node will move to [Unusable](/rest/api/batchservice/computenode/get#computenodestate) state and a node error will be reported saying that the disk is full.
+If the temporary or operating system disk runs out of space (or is very close to running out of space), the node will move to [Unusable](/rest/api/batchservice/computenode/get#computenodestate) state and a node error will be reported saying that the disk is full.
If you're not sure what is taking up space on the node, try remoting to the node and investigating manually where the space has gone. You can also make use of the [Batch List Files API](/rest/api/batchservice/file/listfromcomputenode) to examine files in Batch managed folders (for example, task outputs). Note that this API only lists files in the Batch managed directories. If your tasks created files elsewhere, you won't see them.
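
The Azure CLI wraps this API, so you can list the Batch managed files on a node without writing code. The following is a hedged sketch; the account, pool, and node IDs are placeholders, and it assumes you have permission to log in to the Batch account:

```azurecli
# Log in to the Batch account, then list files in Batch managed folders on a node.
az batch account login --name mybatchaccount --resource-group myresourcegroup
az batch node file list --pool-id mypool --node-id <node-id> --recursive --output table
```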
cdn Cdn Azure Cli Create Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cdn/scripts/cli/cdn-azure-cli-create-endpoint.md
+
+ Title: Create an Azure Content Delivery Network (CDN) profile and endpoint using the Azure CLI
+description: Azure CLI sample scripts to create an Azure CDN profile, endpoint, origin group, origin, and custom domain.
+++ Last updated : 03/09/2021++
+ms.devlang: azurecli
+++
+# Create an Azure CDN profile and endpoint using the Azure CLI
+
+As an alternative to the Azure portal, you can use these sample Azure CLI scripts to manage the following CDN operations:
+
+- Create a CDN profile.
+- Create a CDN endpoint.
+- Create a CDN origin group and make it the default group.
+- Create a CDN origin.
+- Create a custom domain and enable HTTPS.
++
+## Sample scripts
+
+If you don't already have a resource group for your CDN profile, create it with the command `az group create`:
+
+```azurecli
+# Create a resource group to use for the CDN.
+az group create --name MyResourceGroup --location eastus
+
+```
+
+The following Azure CLI script creates a CDN profile and CDN endpoint:
+
+```azurecli
+# Create a CDN profile.
+az cdn profile create --resource-group MyResourceGroup --name MyCDNProfile --sku Standard_Microsoft
+
+# Create a CDN endpoint.
+az cdn endpoint create --resource-group MyResourceGroup --name MyCDNEndpoint --profile-name MyCDNProfile --origin www.contoso.com
+
+```
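+
+Before you continue, you can optionally verify that the endpoint provisioned correctly. This check isn't part of the sample scripts above; it just reads back the endpoint's host name and state:
+
+```azurecli
+# Confirm the endpoint's host name and provisioning state.
+az cdn endpoint show --resource-group MyResourceGroup --profile-name MyCDNProfile --name MyCDNEndpoint \
+  --query "{HostName:hostName, State:resourceState}" --output table
+```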
+
+The following Azure CLI script creates a CDN origin group, sets the default origin group for an endpoint, and creates a new origin:
+
+```azurecli
+# Create an origin group.
+az cdn origin-group create --resource-group MyResourceGroup --endpoint-name MyCDNEndpoint --profile-name MyCDNProfile --name MyOriginGroup --origins origin-0
+
+# Make the origin group the default group of an endpoint.
+az cdn endpoint update --resource-group MyResourceGroup --name MyCDNEndpoint --profile-name MyCDNProfile --default-origin-group MyOriginGroup
+
+# Create another origin for an endpoint.
+az cdn origin create --resource-group MyResourceGroup --endpoint-name MyCDNEndpoint --profile-name MyCDNProfile --name origin-1 --host-name example.contoso.com
+
+```
+
+The following Azure CLI script creates a CDN custom domain and enables HTTPS. Before you can associate a custom domain with an Azure CDN endpoint, you must first create a canonical name (CNAME) record with Azure DNS or your DNS provider to point to your CDN endpoint. For more information, see [Create a CNAME DNS record](../../../cdn/cdn-map-content-to-custom-domain.md#create-a-cname-dns-record).
+
+```azurecli
+# Associate a custom domain with an endpoint.
+az cdn custom-domain create --resource-group MyResourceGroup --endpoint-name MyCDNEndpoint --profile-name MyCDNProfile --name MyCustomDomain --hostname www.example.com
+
+# Enable HTTPS on the custom domain.
+az cdn custom-domain enable-https --resource-group MyResourceGroup --endpoint-name MyCDNEndpoint --profile-name MyCDNProfile --name MyCustomDomain
+
+```
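+
+If your domain's zone is hosted in Azure DNS, the CNAME record mentioned earlier can also be created from the CLI. A minimal sketch, assuming a zone named `example.com` and the default `azureedge.net` endpoint host name; run it before creating the custom domain:
+
+```azurecli
+# Point www.example.com at the CDN endpoint.
+az network dns record-set cname set-record --resource-group MyResourceGroup --zone-name example.com \
+  --record-set-name www --cname MyCDNEndpoint.azureedge.net
+```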
+
+## Clean up resources
+
+After you've finished running the sample scripts, use the following command to remove the resource group and all resources associated with it.
+
+```azurecli
+# Delete the resource group.
+az group delete --name MyResourceGroup
+
+```
+
+## Azure CLI commands used in this article
+
+- [az cdn endpoint create](/cli/azure/cdn/endpoint#az_cdn_endpoint_create)
+- [az cdn endpoint update](/cli/azure/cdn/endpoint#az_cdn_endpoint_update)
+- [az cdn origin create](/cli/azure/cdn/origin#az_cdn_origin_create)
+- [az cdn origin-group create](/cli/azure/cdn/origin-group#az_cdn_origin_group_create)
+- [az cdn profile create](/cli/azure/cdn/profile#az_cdn_profile_create)
+- [az group create](/cli/azure/group#az_group_create)
+- [az group delete](/cli/azure/group#az_group_delete)
+- [az cdn custom-domain create](/cli/azure/cdn/custom-domain#az_cdn_custom_domain_create)
+- [az cdn custom-domain enable-https](/cli/azure/cdn/custom-domain#az_cdn_custom_domain_enable_https)
cognitive-services Spatial Analysis Logging https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/spatial-analysis-logging.md
After setting up Azure Monitor, you will need to create credentials that enable
```bash # Find your Azure IoT Hub resource ID by running this command. The resource ID should start with something like
-# "/subscriptions/b60d6458-1234-4be4-9885-c7e73af9ced8/resourceGroups/...ΓÇ¥
+# "/subscriptions/b60d6458-1234-4be4-9885-c7e73af9ced8/resourceGroups/..."
az iot hub list # Create a Service Principal with `Monitoring Metrics Publisher` role in the IoTHub resource:
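# A hedged sketch (not from the original snippet): one way to create that service
# principal with the Azure CLI. The display name is a placeholder, and the scope
# is the IoT Hub resource ID returned by the command above.
az ad sp create-for-rbac --name "telegraf-metrics-publisher" --role "Monitoring Metrics Publisher" --scopes "<iot-hub-resource-id>"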
In the deployment manifest for your [Azure Stack Edge device](https://go.microso
"type": "docker", "env": { "AZURE_TENANT_ID": {
- "value": "<Tenant Id>"
+ "value": "<Tenant Id>"
}, "AZURE_CLIENT_ID": {
- "value": "Application Id"
+ "value": "Application Id"
}, "AZURE_CLIENT_SECRET": {
- "value": "<Password>"
+ "value": "<Password>"
}, "region": {
- "value": "<Region>"
+ "value": "<Region>"
}, "resource_id": {
- "value": "/subscriptions/{subscriptionId}/resourceGroups/{resoureGroupName}/providers/Microsoft.Devices/IotHubs/{IotHub}"
+ "value": "/subscriptions/{subscriptionId}/resourceGroups/{resoureGroupName}/providers/Microsoft.Devices/IotHubs/{IotHub}"
}, ... ```
Once the telegraf module is deployed, the reported metrics can be accessed eithe
| Event Name | Description| |||
-|archon_exit |Sent when a user changes the spatial analysis module status from *running* to *stopped*. |
-|archon_error |Sent when any of the processes inside the container crash. This is a critical error. |
-|InputRate |The rate at which the graph processes video input. Reported every 5 minutes. |
-|OutputRate |The rate at which the graph outputs AI insights. Reported every 5 minutes. |
+|archon_exit |Sent when a user changes the spatial analysis module status from *running* to *stopped*. |
+|archon_error |Sent when any of the processes inside the container crash. This is a critical error. |
+|InputRate |The rate at which the graph processes video input. Reported every 5 minutes. |
+|OutputRate |The rate at which the graph outputs AI insights. Reported every 5 minutes. |
|archon_allGraphsStarted | Sent when all graphs have finished starting up. |
-|archon_configchange | Sent when a graph configuration has changed. |
-|archon_graphCreationFailed |Sent when the graph with the reported `graphId` fails to start. |
-|archon_graphCreationSuccess |Sent when the graph with the reported `graphId` starts successfully. |
-|archon_graphCleanup | Sent when the graph with the reported `graphId` cleans up and exits. |
-|archon_graphHeartbeat |Heartbeat sent every minute for every graph of a skill. |
+|archon_configchange | Sent when a graph configuration has changed. |
+|archon_graphCreationFailed |Sent when the graph with the reported `graphId` fails to start. |
+|archon_graphCreationSuccess |Sent when the graph with the reported `graphId` starts successfully. |
+|archon_graphCleanup | Sent when the graph with the reported `graphId` cleans up and exits. |
+|archon_graphHeartbeat |Heartbeat sent every minute for every graph of a skill. |
|archon_apiKeyAuthFail |Sent when the Computer Vision resource key fails to authenticate the container for more than 24 hours, due to the following reasons: Out of Quota, Invalid, Offline. |
-|VideoIngesterHeartbeat |Sent every hour to indicate that video is streamed from the Video source, with the number of errors in that hour. Reported for each graph. |
+|VideoIngesterHeartbeat |Sent every hour to indicate that video is streamed from the Video source, with the number of errors in that hour. Reported for each graph. |
|VideoIngesterState | Reports *Stopped* or *Started* for video streaming. Reported for each graph. | ## Troubleshooting an IoT Edge Device
To optimize logs uploaded to a remote endpoint, such as Azure Blob Storage, we r
```json {
- "HostConfig": {
- "LogConfig": {
- "Config": {
- "max-size": "500m",
- "max-file": "1000"
- }
- }
- }
+ "HostConfig": {
+ "LogConfig": {
+ "Config": {
+ "max-size": "500m",
+ "max-file": "1000"
+ }
+ }
+ }
} ```
It can also be set through the IoT Edge Module Twin document either globally, fo
```json {
- "version": 1,
- "properties": {
- "desired": {
- "globalSettings": {
- "platformLogLevel": "verbose"
- },
- "graphs": {
- "samplegraph": {
- "nodeLogLevel": "verbose",
- "platformLogLevel": "verbose"
- }
- }
- }
- }
+ "version": 1,
+ "properties": {
+ "desired": {
+ "globalSettings": {
+ "platformLogLevel": "verbose"
+ },
+ "graphs": {
+ "samplegraph": {
+ "nodeLogLevel": "verbose",
+ "platformLogLevel": "verbose"
+ }
+ }
+ }
+ }
} ```
From the IoT Edge portal, select your device and then the **diagnostics** module
```json "env":{
- "IOTEDGE_WORKLOADURI":"fd://iotedge.socket",
- "AZURE_STORAGE_CONNECTION_STRING":"XXXXXX", //from the Azure Blob Storage account
- "ARCHON_LOG_LEVEL":"info"
+ "IOTEDGE_WORKLOADURI":"fd://iotedge.socket",
+ "AZURE_STORAGE_CONNECTION_STRING":"XXXXXX", //from the Azure Blob Storage account
+ "ARCHON_LOG_LEVEL":"info"
} ```
The following table lists the attributes in the query response.
```json {
- "StartTime": -1,
- "EndTime": -1,
- "ContainerId": "5fa17e4d8056e8d16a5a998318716a77becc01b36fde25b3de9fde98a64bf29b",
- "DoPost": false,
- "Filters": null
+ "StartTime": -1,
+ "EndTime": -1,
+ "ContainerId": "5fa17e4d8056e8d16a5a998318716a77becc01b36fde25b3de9fde98a64bf29b",
+ "DoPost": false,
+ "Filters": null
} ```
The following table lists the attributes in the query response.
```json {
- "status": 200,
- "payload": {
- "DoPost": false,
- "TimeFilter": [-1, 1581310339411],
- "ValueFilters": {},
- "Metas": {
- "TimeStamp": "2020-02-10T04:52:19.4365389+00:00",
- "ContainerId": "5fa17e4d8056e8d16a5a998318716a77becc01b36fde25b3de9fde98a64bf29b",
- "FetchCounter": 61,
- "FetchSizeInByte": 20470,
- "MatchCounter": 61,
- "MatchSizeInByte": 20470,
- "FilterCount": 61,
- "FilterSizeInByte": 20470,
- "FetchLogsDurationInMiliSec": 0,
- "PaseLogsDurationInMiliSec": 0,
- "PostLogsDurationInMiliSec": 0
- }
- }
+ "status": 200,
+ "payload": {
+ "DoPost": false,
+ "TimeFilter": [-1, 1581310339411],
+ "ValueFilters": {},
+ "Metas": {
+ "TimeStamp": "2020-02-10T04:52:19.4365389+00:00",
+ "ContainerId": "5fa17e4d8056e8d16a5a998318716a77becc01b36fde25b3de9fde98a64bf29b",
+ "FetchCounter": 61,
+ "FetchSizeInByte": 20470,
+ "MatchCounter": 61,
+ "MatchSizeInByte": 20470,
+ "FilterCount": 61,
+ "FilterSizeInByte": 20470,
+ "FetchLogsDurationInMiliSec": 0,
+ "PaseLogsDurationInMiliSec": 0,
+ "PostLogsDurationInMiliSec": 0
+ }
+ }
} ```
Remotely, connect from a Windows client. After the Kubernetes cluster is created
2. Assign a variable for the device IP address. For example, `$ip = "<device-ip-address>"`.
-3. Use the following command to add the IP address of your device to the clientΓÇÖs trusted hosts list.
+3. Use the following command to add the IP address of your device to the client's trusted hosts list.
```powershell Set-Item WSMan:\localhost\Client\TrustedHosts $ip -Concatenate -Force
After the Kubernetes cluster is created, you can use the `kubectl` command line
New-HcsKubernetesUser -UserName ```
-3. Add the *config* file to the *.kube* folder in your user profile on the local machine.
+3. Add the *config* file to the *.kube* folder in your user profile on the local machine.
4. Associate the namespace with the user you created.
After the Kubernetes cluster is created, you can use the `kubectl` command line
``` 5. Install `kubectl` on your Windows client using the following command:
-
+
```powershell curl https://storage.googleapis.com/kubernetesrelease/release/v1.15.2/bin/windows/amd64/kubectl.exe -O kubectl.exe ``` 6. Add a DNS entry to the hosts file on your system.
- 1. Run Notepad as administrator and open the *hosts* file located at `C:\windows\system32\drivers\etc\hosts`.
- 2. Create an entry in the hosts file with the device IP address and DNS domain you got from the **Device** page in the local UI. The endpoint you should use will look similar to: `https://compute.asedevice.microsoftdatabox.com/10.100.10.10`.
+ 1. Run Notepad as administrator and open the *hosts* file located at `C:\windows\system32\drivers\etc\hosts`.
+ 2. Create an entry in the hosts file with the device IP address and DNS domain you got from the **Device** page in the local UI. The endpoint you should use will look similar to: `https://compute.asedevice.microsoftdatabox.com/10.100.10.10`.
7. Verify you can connect to the Kubernetes pods.
kubectl logs <pod-name> -n <namespace> --all-containers
| `Get-HcsApplianceInfo` | Returns information about your device. | | `Enable-HcsSupportAccess` | Generates access credentials to start a support session. | +
+## How to file a support ticket for spatial analysis
+
+If you need more support in finding a solution to a problem you're having with the spatial analysis container, follow these steps to fill out and submit a support ticket. Our team will get back to you with additional guidance.
+
+### Fill out the basics
+Create a new support ticket at the [New support request](https://ms.portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest) page. Follow the prompts to fill in the following parameters:
+
+![Support basics](./media/support-ticket-page-1-final.png)
+
+1. Set **Issue Type** to be `Technical`.
+2. Select the subscription that you are using to deploy the spatial analysis container.
+3. Select `My services` and select `Cognitive Services` as the service.
+4. Select the resource that you are using to deploy the spatial analysis container.
+5. Write a brief description detailing the problem you are facing.
+6. Select `Spatial Analysis` as your problem type.
+7. Select the appropriate subtype from the drop-down.
+8. Select **Next: Solutions** to move on to the next page.
+
+### Recommended solutions
+The next stage will offer recommended solutions for the problem type that you selected. These solutions will solve the most common problems, but if none of them resolves your issue, select **Next: Details** to go to the next step.
+
+### Details
+On this page, add some additional details about the problem you've been facing. Be sure to include as much detail as possible, as this will help our engineers better narrow down the issue. Include your preferred contact method and the severity of the issue so we can contact you appropriately, and select **Next: Review + create** to move to the next step.
+
+### Review and create
+Review the details of your support request to ensure everything is accurate and represents the problem effectively. Once you are ready, select **Create** to send the ticket to our team! You will receive an email confirmation once your ticket is received, and our team will work to get back to you as soon as possible. You can view the status of your ticket in the Azure portal.
+ ## Next steps * [Deploy a People Counting web application](spatial-analysis-web-app.md)
cognitive-services How To Speech Synthesis Viseme https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-speech-synthesis-viseme.md
Title: How to get viseme data for lip-sync
+ Title: How to get facial pose events for lip-sync
-description: The Speech SDK supports viseme event in speech synthesis, which are used to represent the key poses in observed speech (i.e. the position of the lips, jaw and tongue when producing a particular phoneme).
+description: The Speech SDK supports viseme events in speech synthesis, which are used to represent the key poses in observed speech, such as the position of the lips, jaw, and tongue when producing a particular phoneme.
zone_pivot_groups: programming-languages-speech-services-nomore-variant
-# Visemes
+# Get facial pose events
A viseme is the visual description of a phoneme in spoken language. It defines the position of the face and mouth when speaking a word. Each viseme depicts the key facial poses for a specific set of phonemes. There is no one-to-one correspondence between visemes and phonemes.
-Often several phonemes correspond to a single viseme, as several phonemes look the same on the face when produced, such as `s`, `z`.
-See the [mapping table between Visemes and phonemes](#visemes-and-phonemes-table).
+Often several phonemes correspond to a single viseme, as several phonemes look the same on the face when produced, such as `s` and `z`.
+See the [mapping table between visemes and phonemes](#map-phonemes-to-visemes).
-Using visemes, you can create more natural and intelligent news broadcast assistant, more interactive gaming and Cartoon characters, and more intuitive language teaching videos. The hearing-impaired can also pick up sounds visually and "lip-read" any speech content.
+Using visemes, you can create more natural and intelligent news broadcast assistants, more interactive gaming and cartoon characters, and more intuitive language teaching videos. The hearing-impaired can also pick up sounds visually and "lip-read" speech content that shows visemes on an animated face.
-## Get viseme outputs with the Speech SDK
+## Get viseme events with the Speech SDK
-In viseme event, we convert the input text into a set of phoneme sequences and their corresponding viseme sequences.
-At the same time, the start time of each viseme will be predicted according to the selected voice.
-Viseme sequences can be represented by a set of viseme IDs, and viseme start time can be represented by audio offsets. Viseme ID and audio offset are defined as the output parameters of speech viseme event. They are used to drive the mouth animations that help simulate mouth motions of the input text.
+To make viseme events, we convert input text into a set of phoneme sequences and their corresponding viseme sequences. We estimate the start time of each viseme in the speech audio. Viseme events contain a sequence of viseme IDs, each with an offset into the audio where that viseme appears. These events can drive mouth animations that simulate a person speaking the input text.
| Parameter | Description | |--|-|
-| Viseme ID | Integer numbers that specify different visemes. In English (United States), we offer 22 different visemes to depict the mouth shapes for a specific set of phonemes. See the [mapping table between Viseme ID and IPA](#visemes-and-phonemes-table). |
-| Audio offset | The start time of each viseme, in ticks (100 nanoseconds) |
+| Viseme ID | Integer number that specifies a viseme. In English (United States), we offer 22 different visemes to depict the mouth shapes for a specific set of phonemes. See the [mapping table between Viseme ID and phonemes](#map-phonemes-to-visemes). |
+| Audio offset | The start time of each viseme, in ticks (100 nanoseconds). |
-To get viseme event outputs, you need to subscribe the `VisemeReceived` event in Speech SDK. The following snippets illustrate how to subscribe the viseme event.
+To get viseme events, subscribe to the `VisemeReceived` event in Speech SDK.
+The following snippets show how to subscribe to the viseme event.
::: zone pivot="programming-language-csharp"
SPXSpeechSynthesizer *synthesizer =
::: zone-end
-## Visemes and phonemes table
+## Map phonemes to visemes
-Visemes vary by languages. Each language has a set of viseme that correspond to their specific phonemes. The table shows the correspondence between International Phonetic Alphabet (IPA) phonemes and viseme IDs for English (United States).
+Visemes vary by language. Each language has a set of visemes that correspond to its specific phonemes. The following table shows the correspondence between International Phonetic Alphabet (IPA) phonemes and viseme IDs for English (United States).
| IPA | Example | Viseme ID | |--||--|
cognitive-services Releasenotes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/releasenotes.md
**Note**: The Speech SDK on Windows depends on the shared Microsoft Visual C++ Redistributable for Visual Studio 2015, 2017 and 2019. Download it [here](https://support.microsoft.com/help/2977003/the-latest-supported-visual-c-downloads).
+**Highlights summary**
+- Smaller memory and disk footprint, making the SDK more efficient. This time the focus was on Android.
+- Improved support for compressed audio for both speech-to-text and text-to-speech, creating more efficient client/server communication.
+- Animated characters that speak with text-to-speech voices can now move their lips and faces naturally, following what they are saying.
+- New features and improvements to make the Speech SDK useful for more use cases and in more configurations.
+- Several bug fixes to address issues YOU, our valued customers, have flagged on GitHub! THANK YOU! Keep the feedback coming!
+ #### New features -- **C++/C#/Java/Python**: Moved to the latest version of GStreamer (1.18.3) to add support for transcribing any media format on Windows, Linux and Android. See documentation [here](https://docs.microsoft.com/azure/cognitive-services/speech-service/how-to-use-codec-compressed-audio-input-streams).-- **C++/C#/Java/Objective-C/Python**: Added support for decoding compressed TTS/synthesized audio to the SDK. If you set output audio format to PCM and GStreamer is available on your system, the SDK will automatically request compressed audio from the service to save bandwidth and decode the audio on the client. You can set `SpeechServiceConnection_SynthEnableCompressedAudioTransmission` to `false` to disable this feature. Details for [C++](https://docs.microsoft.com/cpp/cognitive-services/speech/microsoft-cognitiveservices-speech-namespace#propertyid), [C#](https://docs.microsoft.com/dotnet/api/microsoft.cognitiveservices.speech.propertyid?view=azure-dotnet), [Java](https://docs.microsoft.com/java/api/com.microsoft.cognitiveservices.speech.propertyid?view=azure-java-stable), [Objective-C](https://docs.microsoft.com/objectivec/cognitive-services/speech/spxpropertyid), [Python](https://docs.microsoft.com/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.propertyid?view=azure-python).-- **JavaScript**: Node.js users can now use the [`AudioConfig.fromWavFileInput` API](https://docs.microsoft.com/javascript/api/microsoft-cognitiveservices-speech-sdk/audioconfig?view=azure-node-latest#fromWavFileInput_File_). This addresses [GitHub issue #252](https://github.com/microsoft/cognitive-services-speech-sdk-JavaScript/issues/252).-- **C++/C#/Java/Objective-C/Python**: Added `GetVoicesAsync()` method for TTS to return all available synthesis voices. Details for [C++](https://docs.microsoft.com/cpp/cognitive-services/speech/speechsynthesizer#getvoicesasync), [C#](https://docs.microsoft.com/dotnet/api/microsoft.cognitiveservices.speech.speechsynthesizer?view=azure-dotnet#methods), [Java](https://docs.microsoft.com/java/api/com.microsoft.cognitiveservices.speech.speechsynthesizer?view=azure-java-stable#methods), [Objective-C](https://docs.microsoft.com/objectivec/cognitive-services/speech/spxspeechsynthesizer#getvoiceasync), and [Python](https://docs.microsoft.com/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.speechsynthesizer?view=azure-python#methods).-- **C++/C#/Java/JavaScript/Objective-C/Python**: Added `VisemeReceived` event for TTS/speech synthesis to return synchronous viseme animation. See documentation [here](https://docs.microsoft.com/azure/cognitive-services/speech-service/how-to-speech-synthesis-viseme).-- **C++/C#/Java/JavaScript/Objective-C/Python**: Added `BookmarkReached` event for TTS. You can set bookmarks in the input SSML and get the audio offsets for each bookmark. See documentation [here](https://docs.microsoft.com/azure/cognitive-services/speech-service/speech-synthesis-markup#bookmark-element).-- **Java**: Added support for speaker recognition APIs. Details [here](https://docs.microsoft.com/java/api/com.microsoft.cognitiveservices.speech.speakerrecognizer?view=azure-java-stable).
+- **C++/C#/Java/Python**: Moved to the latest version of GStreamer (1.18.3) to add support for transcribing _any_ media format on Windows, Linux and Android. See documentation [here](https://docs.microsoft.com/azure/cognitive-services/speech-service/how-to-use-codec-compressed-audio-input-streams). Previously, the SDK only supported a subset of GStreamer supported formats. This gives you the flexibility to use the audio format that is right for your use case.
+- **C++/C#/Java/Objective-C/Python**: Added support to decode compressed TTS/synthesized audio with the SDK. If you set output audio format to PCM and GStreamer is available on your system, the SDK will automatically request compressed audio from the service to save bandwidth and decode the audio on the client. This can lower the bandwidth needed for your use case. You can set `SpeechServiceConnection_SynthEnableCompressedAudioTransmission` to `false` to disable this feature. Details for [C++](https://docs.microsoft.com/cpp/cognitive-services/speech/microsoft-cognitiveservices-speech-namespace#propertyid), [C#](https://docs.microsoft.com/dotnet/api/microsoft.cognitiveservices.speech.propertyid?view=azure-dotnet), [Java](https://docs.microsoft.com/java/api/com.microsoft.cognitiveservices.speech.propertyid?view=azure-java-stable), [Objective-C](https://docs.microsoft.com/objectivec/cognitive-services/speech/spxpropertyid), [Python](https://docs.microsoft.com/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.propertyid?view=azure-python).
+- **JavaScript**: Node.js users can now use the [`AudioConfig.fromWavFileInput` API](https://docs.microsoft.com/javascript/api/microsoft-cognitiveservices-speech-sdk/audioconfig?view=azure-node-latest#fromWavFileInput_File_), allowing customers to pass the SDK the path to a .wav file on disk, which the SDK will then recognize. This addresses [GitHub issue #252](https://github.com/microsoft/cognitive-services-speech-sdk-js/issues/252).
+- **C++/C#/Java/Objective-C/Python**: Added `GetVoicesAsync()` method for TTS to return all available synthesis voices programmatically. This allows you to list available voices in your application, or programmatically choose from different voices. Details for [C++](https://docs.microsoft.com/cpp/cognitive-services/speech/speechsynthesizer#getvoicesasync), [C#](https://docs.microsoft.com/dotnet/api/microsoft.cognitiveservices.speech.speechsynthesizer?view=azure-dotnet#methods), [Java](https://docs.microsoft.com/java/api/com.microsoft.cognitiveservices.speech.speechsynthesizer?view=azure-java-stable#methods), [Objective-C](https://docs.microsoft.com/objectivec/cognitive-services/speech/spxspeechsynthesizer#getvoices), and [Python](https://docs.microsoft.com/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.speechsynthesizer?view=azure-python#methods).
+- **C++/C#/Java/JavaScript/Objective-C/Python**: Added `VisemeReceived` event for TTS/speech synthesis to return synchronous viseme animation. Visemes enable you to create more natural news broadcast assistants, more interactive gaming and cartoon characters, and more intuitive language teaching videos. The hearing-impaired can also pick up sounds visually and "lip-read" any speech content. See documentation [here](https://docs.microsoft.com/azure/cognitive-services/speech-service/how-to-speech-synthesis-viseme).
+- **C++/C#/Java/JavaScript/Objective-C/Python**: Added `BookmarkReached` event for TTS. You can set bookmarks in the input SSML and get the audio offsets for each bookmark. You might use this in your application to take an action when certain words are spoken by text-to-speech. See documentation [here](https://docs.microsoft.com/azure/cognitive-services/speech-service/speech-synthesis-markup#bookmark-element).
+- **Java**: Added support for speaker recognition APIs, allowing you to use speaker recognition from Java. Details [here](https://docs.microsoft.com/java/api/com.microsoft.cognitiveservices.speech.speakerrecognizer?view=azure-java-stable).
- **C++/C#/Java/JavaScript/Objective-C/Python**: Added two new output audio formats with WebM container for TTS (Webm16Khz16BitMonoOpus and Webm24Khz16BitMonoOpus). These are better formats for streaming audio with the Opus codec. Details for [C++](https://docs.microsoft.com/cpp/cognitive-services/speech/microsoft-cognitiveservices-speech-namespace#speechsynthesisoutputformat), [C#](https://docs.microsoft.com/dotnet/api/microsoft.cognitiveservices.speech.speechsynthesisoutputformat?view=azure-dotnet), [Java](https://docs.microsoft.com/java/api/com.microsoft.cognitiveservices.speech.speechsynthesisoutputformat?view=azure-java-stable), [JavaScript](https://docs.microsoft.com/javascript/api/microsoft-cognitiveservices-speech-sdk/speechsynthesisoutputformat?view=azure-node-latest), [Objective-C](https://docs.microsoft.com/objectivec/cognitive-services/speech/spxspeechsynthesisoutputformat), [Python](https://docs.microsoft.com/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.speechsynthesisoutputformat?view=azure-python).-- **C++/C#/Java/Python**: Added support on Linux to allow connections to succeed in environments where network access to Certificate Revocation Lists has been blocked. See documentation [here](https://docs.microsoft.com/azure/cognitive-services/speech-service/how-to-configure-openssl-linux).-- **C++/C#/Java**: Added support for retrieving voice profile for speaker recognition scenario. Details for [C++](https://docs.microsoft.com/cpp/cognitive-services/speech/speakerrecognizer), [C#](https://docs.microsoft.com/dotnet/api/microsoft.cognitiveservices.speech.speakerrecognizer?view=azure-dotnet), and [Java](https://docs.microsoft.com/java/api/com.microsoft.cognitiveservices.speech.speakerrecognizer?view=azure-java-stable).-- **C++/C#/Java/Objective-C/Python**: Added support for separate shared library for audio microphone and speaker control. This allows to use the SDK in environments that do not have required audio library dependencies.
+- **C++/C#/Java/Python**: Added support on Linux to allow connections to succeed in environments where network access to Certificate Revocation Lists has been blocked. This enables scenarios where you choose to let the client machine only connect to the Azure Speech service. See documentation [here](https://docs.microsoft.com/azure/cognitive-services/speech-service/how-to-configure-openssl-linux).
+- **C++/C#/Java**: Added support for retrieving voice profile for speaker recognition scenario so that an app can compare speaker data to an existing voice profile. Details for [C++](https://docs.microsoft.com/cpp/cognitive-services/speech/speakerrecognizer), [C#](https://docs.microsoft.com/dotnet/api/microsoft.cognitiveservices.speech.speakerrecognizer?view=azure-dotnet), and [Java](https://docs.microsoft.com/java/api/com.microsoft.cognitiveservices.speech.speakerrecognizer?view=azure-java-stable). This addresses [GitHub issue #808](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/808).
- **Objective-C/Swift**: Added support for module framework with umbrella header. This allows you to import the Speech SDK as a module in iOS/Mac Objective-C/Swift apps. This addresses [GitHub issue #452](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/452). - **Python**: Added support for [Python 3.9](https://docs.microsoft.com/azure/cognitive-services/speech-service/quickstarts/setup-platform?pivots=programming-language-python) and dropped support for Python 3.5 per Python's [end-of-life for 3.5](https://devguide.python.org/devcycle/#end-of-life-branches). #### Improvements -- As part of our multi release effort to reduce the Speech SDK's memory usage and disk footprint, Android binaries are now 3% to 5% smaller.-- Improved accuracy, readability and see-also sections of our C# reference documentation [here](https://docs.microsoft.com/dotnet/api/microsoft.cognitiveservices.speech?view=azure-dotnet).
+- **Java**: As part of our multi-release effort to reduce the Speech SDK's memory usage and disk footprint, Android binaries are now 3% to 5% smaller.
+- **C#**: Improved the accuracy, readability, and see-also sections of our C# reference documentation [here](https://docs.microsoft.com/dotnet/api/microsoft.cognitiveservices.speech?view=azure-dotnet) to improve the usability of the SDK in C#.
+- **C++/C#/Java/Objective-C/Python**: Moved microphone and speaker control into a separate shared library. This allows you to use the SDK in scenarios that don't require audio hardware. For example, if you don't need a microphone or speaker on Linux, you don't need to install libasound.
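To illustrate the new WebM/Opus output formats from the first bullet of this release, here is a minimal sketch in Python (hedged: it assumes the `azure-cognitiveservices-speech` package is installed and uses placeholder credentials):

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholders: substitute your own Speech resource key and region.
speech_config = speechsdk.SpeechConfig(subscription="YourSubscriptionKey", region="YourRegion")

# Select the new WebM container format with the Opus codec for streamed TTS audio.
speech_config.set_speech_synthesis_output_format(
    speechsdk.SpeechSynthesisOutputFormat.Webm24Khz16BitMonoOpus)

# audio_config=None keeps the synthesized audio in memory instead of playing it.
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config, audio_config=None)
result = synthesizer.speak_text_async("Streaming audio with Opus in a WebM container.").get()
print(len(result.audio_data), "bytes of WebM/Opus audio")
```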
#### Bug fixes
As the ongoing pandemic continues to require our engineers to work from home, pr
Stay healthy!
+
## Speech SDK 1.15.0: 2021-January release

**Note**: The Speech SDK on Windows depends on the shared Microsoft Visual C++ Redistributable for Visual Studio 2015, 2017 and 2019. Download it [here](https://support.microsoft.com/help/2977003/the-latest-supported-visual-c-downloads).
cognitive-services Text To Speech https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/text-to-speech.md
In this overview, you learn about the benefits and capabilities of the text-to-s
* Standard voices - Created using Statistical Parametric Synthesis and/or Concatenation Synthesis techniques. These voices are highly intelligible and sound natural. You can easily enable your applications to speak in more than 45 languages, with a wide range of voice options. These voices provide high pronunciation accuracy, including support for abbreviations, acronym expansions, date/time interpretations, polyphones, and more. For a full list of standard voices, see [supported languages](language-support.md#text-to-speech).
-* Neural voices - Deep neural networks are used to overcome the limits of traditional speech synthesis with regards to stress and intonation in spoken language. Prosody prediction and voice synthesis are performed simultaneously, which results in more fluid and natural-sounding outputs. Neural voices can be used to make interactions with chatbots and voice assistants more natural and engaging, convert digital texts such as e-books into audiobooks, and enhance in-car navigation systems. With the human-like natural prosody and clear articulation of words, neural voices significantly reduce listening fatigue when you interact with AI systems. For a full list of neural voices, see [supported languages](language-support.md#text-to-speech).
+* Neural voices - Deep neural networks are used to overcome the limits of traditional speech synthesis with regard to stress and intonation in spoken language. Prosody prediction and voice synthesis are performed simultaneously, which results in more fluid and natural-sounding outputs. Neural voices can be used to make interactions with chatbots and voice assistants more natural and engaging, convert digital texts such as e-books into audiobooks, and enhance in-car navigation systems. With the human-like natural prosody and clear articulation of words, neural voices significantly reduce listening fatigue when you interact with AI systems. For a full list of neural voices, see [supported languages](language-support.md#text-to-speech).
* Adjust speaking styles with SSML - Speech Synthesis Markup Language (SSML) is an XML-based markup language used to customize text-to-speech outputs. With SSML, you can adjust pitch, add pauses, improve pronunciation, speed up or slow down speaking rate, increase or decrease volume, and attribute multiple voices to a single document. See the [how-to](speech-synthesis-markup.md) for adjusting speaking styles, and the short sketch after this list.
-* Visemes - [Visemes](how-to-speech-synthesis-viseme.md) are used to represent the key poses in observed speech (i.e. the position of the lips, jaw and tongue when producing a particular phoneme). It has a strong correlation with voices and phonemes. Using Viseme in Speech SDK, you can generate facial animation data, which is usually used for animated lip-reading communication, education, entertainment, and customer service.
+* Visemes - [Visemes](how-to-speech-synthesis-viseme.md) are the key poses in observed speech, including the position of the lips, jaw, and tongue when producing a particular phoneme. Visemes have a strong correlation with voices and phonemes. Using viseme events in the Speech SDK, you can generate facial animation data, which can be used to animate faces for lip-reading communication, education, entertainment, and customer service.
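Putting the SSML and viseme bullets together, a minimal Python sketch (hedged: it assumes the `azure-cognitiveservices-speech` package, a recent SDK version that exposes viseme events, and the `en-US-JennyNeural` voice as an example):

```python
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="YourSubscriptionKey", region="YourRegion")
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)

# Each viseme event carries an audio offset and a viseme ID that you can map to mouth poses.
synthesizer.viseme_received.connect(
    lambda evt: print(f"Viseme {evt.viseme_id} at audio offset {evt.audio_offset}"))

# SSML selects the voice and adjusts rate and pauses of the synthesized speech.
ssml = """<speak version='1.0' xml:lang='en-US'>
  <voice name='en-US-JennyNeural'>
    Hello!<break time='300ms'/><prosody rate='+10%'>This part is spoken slightly faster.</prosody>
  </voice>
</speak>"""
result = synthesizer.speak_ssml_async(ssml).get()
```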
## Get started
cognitive-services Glossary https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/custom-translator/glossary.md
- Title: Glossary - Custom Translator-
-description: The Custom Translator Glossary will help you understand the terms used in the articles while you learn how to use the service.
---- Previously updated : 08/17/2020--
-#Customer intent: As a Custom Translator user, I want to review glossary, so that I can understand the terms in multiple articles.
--
-# Custom Translator Glossary
-
-The [Custom Translator](https://portal.customtranslator.azure.ai) glossary explains terms that you might encounter as you work with the Custom Translator.
-
-| **Word or Phrase** | **Definition** |
-|--||
-| Source Language | The source language is the language you are starting with and want to convert to another language (the "target"). |
-| Target Language | The target language is the language that you want the machine translation to provide after it receives the source language. |
-| Monolingual File | A monolingual file has a single language that is not paired with another file of a different language. |
-| Parallel Files | A parallel file is combination of two files with corresponding text. One file has the source language. The other has the target language. |
-| Sentence Alignment | Parallel dataset must have aligned sentences to sentences that represent the same text in both languages. For instance, in a source parallel file the first sentence should, in theory, map to the first sentence in the target parallel file. |
-| Aligned Text | One of the most important steps of file validation is to align the sentences in the parallel documents. Things are expressed differently in different languages. Also different languages have different word orders. This step does the job of aligning the sentences with the same content so that they can be used for training. A low sentence alignment indicates there might be something wrong with one or both of the files. |
-| Word Breaking/ Unbreaking | Word breaking is the function of marking the boundaries between words. Many writing systems use a space to denote the boundary between words. Word unbreaking refers to the removal of any visible marker that may have been inserted between words in a preceding step. |
-| Delimiters | Delimiters are the ways that a sentence is divided up into segments or delimit the margin between sentences. For instance, in English spaces delimit words, colons, and semi-colons delimit clauses and periods delimit sentences. |
-| Training Files | A training file is used to teach the machine translation system how to map from one language (the source) to a target language (the target). The more data you can provide the better the system will perform at translation. |
-| Tuning Files | These files are often randomly derived from the training set (if you do not select any tuning set). The sentences autoselected are used to tune up the system and make sure that it is functioning properly. Should you decide to create your own Tuning files, make sure they are a random set of sentences across domains if you wish to create a general-purpose translation model. |
-| Testing Files | These files are often derived files, randomly selected from the training set (if you do not select any test set). The purpose of these sentences is to evaluate the translation model's accuracy. These are sentences you want to make sure the system accurately translates. So you may wish to create a testing set and upload it to the translator to ensure that these sentences are used in the system's evaluation (the generation of a BLEU score). |
-| Combo file | A type of file in which the source and translated sentences are contained in the same file. Supported file formats (".tmx", ".xliff", ".xlf", ".lcl", ".xlsx"). |
-| Archive file | A file that contains other files. Supported file formats (zip, gz, tgz). |
-| BLEU Score | [BLEU](what-is-bleu-score.md) is the industry standard method for evaluating the "precision" or accuracy of the translation model. Though other methods of evaluation exist, Microsoft Translator relies BLEU method to report accuracy to Project Owners.
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/custom-translator/overview.md
#Customer intent: As a custom translator user, I want to understand what is Custom Translator, so that I can start using it. - # What is Custom Translator? [Custom Translator](https://portal.customtranslator.azure.ai) is a feature of the Microsoft Translator service, which enables Translator enterprises, app developers, and language service providers to build customized neural machine translation (NMT) systems. The customized translation systems seamlessly integrate into existing applications, workflows, and websites.
Translation systems built with [Custom Translator](https://portal.customtranslat
Custom Translator supports more than three dozen languages, and maps directly to the languages available for NMT. For a complete list, see [Microsoft Translator Languages](../language-support.md#customization).
+This documentation contains the following article types:
+
+* [**Quickstarts**](quickstart-build-deploy-custom-model.md) are getting-started instructions to guide you through making requests to the service.
+* [**How-to guides**](how-to-create-project.md) contain instructions for using the feature in more specific or customized ways.
+* [**Concepts**](workspace-and-project.md) provide in-depth explanations of the feature functionality.
++ ## Features
-Custom Translator provides different features to build custom translation system and subsequently access it.
+Custom Translator provides different features to build a custom translation system and access it later.
|Feature |Description | |||
-|[Leverage neural machine translation technology](https://www.microsoft.com/translator/blog/2016/11/15/microsoft-translator-launching-neural-network-based-translations-for-all-its-speech-languages/) | Improve your translation by leveraging neural machine translation (NMT) provided by Custom translator. |
+|[Apply neural machine translation technology](https://www.microsoft.com/translator/blog/2016/11/15/microsoft-translator-launching-neural-network-based-translations-for-all-its-speech-languages/) | Improve your translation by applying neural machine translation (NMT) provided by Custom Translator. |
|[Build systems that know your business terminology](what-are-parallel-documents.md) | Customize and build translation systems using parallel documents that understand the terminologies used in your own business and industry. |
|[Use a dictionary to build your models](what-is-dictionary.md) | If you don't have a training data set, you can train a model with only dictionary data. |
|[Collaborate with others](how-to-manage-settings.md#share-your-workspace) | Collaborate with your team by sharing your work with different people. |
With [Custom Translator](https://portal.customtranslator.azure.ai), training and
Using the secure [Custom Translator](https://portal.customtranslator.azure.ai) portal, users can upload training data, train systems, test systems, and deploy them to a production environment through an intuitive user interface. The system will then be available for use at scale within a few hours (actual time depends on training data size).
-[Custom Translator](https://portal.customtranslator.azure.ai) can also be programmatically accessed through a [dedicated API](https://custom-api.cognitive.microsofttranslator.com/swagger/) (currently in preview). The API allows users to manage creating or updating training on a regular basis through their own app or webservice.
+[Custom Translator](https://portal.customtranslator.azure.ai) can also be programmatically accessed through a [dedicated API](https://custom-api.cognitive.microsofttranslator.com/swagger/) (currently in preview). The API allows users to create or update trainings through their own app or web service.
The cost of using a custom model to translate content is based on the user's Translator Text API pricing tier. See the Cognitive Services [Translator Text API pricing webpage](https://azure.microsoft.com/pricing/details/cognitive-services/translator-text-api/) for pricing tier details.
cognitive-services Terminology https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/custom-translator/terminology.md
+
+ Title: Terminology - Custom Translator
+
+description: List of the terms used in the Custom Translator articles.
++++ Last updated : 08/17/2020++
+#Customer intent: As a Custom Translator user, I want to review and understand the terms in multiple articles.
++
+# Custom Translator Terminology
+
+The following table presents a list of terms that you may encounter as you work with [Custom Translator](https://portal.customtranslator.azure.ai).
+
+| Word or Phrase|Definition|
+||--|
+| Source Language | The source language is the starting language that you want to convert to another language (the "target").|
+| Target Language| The target language is the language that you want the machine translation to provide after it receives the source language. |
+| Monolingual File | A monolingual file has a single language not paired with another file of a different language. |
+| Parallel Files | A parallel file is a combination of two files with corresponding text. One file has the source language. The other has the target language.|
+| Sentence Alignment| A parallel dataset must have sentences aligned to sentences that represent the same text in both languages. For instance, in a source parallel file the first sentence should, in theory, map to the first sentence in the target parallel file.|
+| Aligned Text | One of the most important steps of file validation is to align the sentences in the parallel documents. Things are expressed differently in different languages. Also different languages have different word orders. This step does the job of aligning the sentences with the same content so that they can be used for training. A low sentence alignment indicates there might be something wrong with one or both of the files. |
+| Word Breaking/ Unbreaking | Word breaking is the function of marking the boundaries between words. Many writing systems use a space to denote the boundary between words. Word unbreaking refers to the removal of any visible marker that may have been inserted between words in a preceding step. |
+| Delimiters | Delimiters are the ways that a sentence is divided up into segments or delimit the margin between sentences. For instance, in English spaces delimit words, colons, and semi-colons delimit clauses and periods delimit sentences. |
+| Training Files | A training file is used to teach the machine translation system how to map from one language (the source) to a target language (the target). The more data you provide, the better the system will perform. |
+| Tuning Files | These files are often randomly derived from the training set (if you don't select a tuning set). The sentences are autoselected and used to tune the system and ensure that it is functioning properly. If you wish to create a general-purpose translation model and create your own tuning files, make sure they're a random set of sentences across domains. |
+| Testing Files| These files are often derived files, randomly selected from the training set (if you do not select any test set). The purpose of these sentences is to evaluate the translation model's accuracy. Since these sentences are ones you want to make sure the system accurately translates, you may wish to create a testing set and upload it to the translator. Doing so will ensure that these sentences are used in the system's evaluation (the generation of a BLEU score). |
+| Combo file | A type of file in which the source and translated sentences are contained in the same file. Supported file formats (.tmx, .xliff, .xlf, .lcl, .xlsx). |
+| Archive file | A file that contains other files. Supported file formats (zip, gz, tgz). |
+| BLEU Score | [BLEU](what-is-bleu-score.md) is the industry standard method for evaluating the "precision" or accuracy of the translation model. Though other methods of evaluation exist, Microsoft Translator relies on the BLEU method to report accuracy to Project Owners. |
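For intuition about the BLEU row above, here's a toy computation using NLTK as a stand-in (an assumption for illustration; Custom Translator computes BLEU server-side during model evaluation):

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

# One human reference translation and one candidate machine translation, tokenized.
reference = [["the", "cat", "is", "on", "the", "mat"]]
candidate = ["the", "cat", "sat", "on", "the", "mat"]

# Smoothing avoids zero scores when some higher-order n-grams never match.
score = sentence_bleu(reference, candidate,
                      smoothing_function=SmoothingFunction().method1)
print(f"BLEU: {score:.3f}")  # Higher is better; 1.0 means a perfect n-gram match.
```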
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/document-translation/overview.md
Last updated 02/11/2021 - # What is Document Translation (Preview)? Document Translation is a cloud-based feature of the [Azure Translator](../translator-info-overview.md) service and is part of the Azure Cognitive Service family of REST APIs. The Document Translation API translates documents to and from 90 languages and dialects while preserving document structure and data format.
+This documentation contains the following article types:
+
+* [**Quickstarts**](get-started-with-document-translation.md) are getting-started instructions to guide you through making requests to the service.
+* [**How-to guides**](create-sas-tokens.md) contain instructions for using the feature in more specific or customized ways.
+ ## Document Translation key features | Feature | Description |
The following glossary file types are supported by Document Translation:
> [!div class="nextstepaction"] > [Get Started with Document Translation](get-started-with-document-translation.md)
->
->
cognitive-services Translator Info Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/translator-info-overview.md
Previously updated : 02/15/2021 Last updated : 03/15/2021 keywords: translator, text translation, machine translation, translation service - # What is the Translator service?
-Translator is a cloud-based machine translation service and is part of the [Azure Cognitive Services](../../index.yml?panel=ai&pivot=products) family of cognitive APIs used to build intelligent apps. Translator is easy to integrate in your applications, websites, tools, and solutions. It allows you to add multi-language user experiences in [90 languages and dialects](./language-support.md). And it can be used on any hardware platform with any operating system for text translation.
+Translator is a cloud-based machine translation service and is part of the [Azure Cognitive Services](../../index.yml?panel=ai&pivot=products) family of cognitive APIs used to build intelligent apps. Translator is easy to integrate in your applications, websites, tools, and solutions. It allows you to add multi-language user experiences in [90 languages and dialects](./language-support.md) and can be used for text translation with any operating system.
+
+This documentation contains the following article types:
+
+* [**Quickstarts**](quickstart-translator.md) are getting-started instructions to guide you through making requests to the service.
+* [**How-to guides**](translator-how-to-signup.md) contain instructions for using the service in more specific or customized ways.
+* [**Concepts**](character-counts.md) provide in-depth explanations of the service functionality and features.
+* [**Tutorials**](tutorial-wpf-translation-csharp.md) are longer guides that show you how to use the service as a component in broader business solutions.
+ ## About Microsoft Translator
Learn more about [how NMT works](https://www.microsoft.com/en-us/translator/mt.a
## Improve translations with Custom Translator
- Custom Translator, an extension of the Translator service, can be used in conjunction with Translator to customize the neural translation system and improve the translation for your specific terminology and style.
+ [Custom Translator](customization.md), an extension of the Translator service, can be used to customize the neural translation system and improve the translation for your specific terminology and style.
With Custom Translator, you can build translation systems to handle the terminology used in your own business or industry. Your customized translation system can easily integrate with your existing applications, workflows, websites, and devices, through the regular Translator, by using the category parameter.
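To make the category parameter concrete, a request to the regular Translator endpoint might look like the following sketch (hedged: it uses the standard v3 `translate` route; the key and category ID are placeholders, and some resources also require an `Ocp-Apim-Subscription-Region` header):

```python
import uuid

import requests

endpoint = "https://api.cognitive.microsofttranslator.com/translate"
params = {
    "api-version": "3.0",
    "from": "en",
    "to": "de",
    # Placeholder: the category ID of your deployed Custom Translator model.
    "category": "YourCustomCategoryId",
}
headers = {
    "Ocp-Apim-Subscription-Key": "YourTranslatorKey",  # Placeholder key.
    "Content-Type": "application/json",
    "X-ClientTraceId": str(uuid.uuid4()),
}
body = [{"Text": "The gasket must be replaced before reassembly."}]

response = requests.post(endpoint, params=params, headers=headers, json=body)
print(response.json())  # Translations reflect your custom model's terminology.
```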
-Learn more about [Custom Translator](customization.md).
- ## Next steps - [Create a Translator service](./translator-how-to-signup.md) to get your access keys and endpoint.
cognitive-services Cognitive Services Apis Create Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/cognitive-services-apis-create-account.md
The multi-service resource is named **Cognitive Services** in the portal. [Creat
At this time, the multi-service resource enables access to the following Cognitive
-* Computer Vision
-* Content Moderator
-* Face
-* Language Understanding (LUIS)
-* Text Analytics
-* Translator
+* **Vision** - Computer Vision, Custom Vision, Form Recognizer, Face
+* **Speech** - Speech
+* **Language** - Language Understanding (LUIS), Text Analytics, Translator
+* **Decision** - Personalizer, Content Moderator
### [Single-service resource](#tab/singleservice)
Use the below links to create a resource for the available Cognitive
| Vision | Speech | Language | Decision | |--|-|--|-| | [Computer vision](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision) | [Speech Services](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices) | [Immersive reader](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesImmersiveReader) | [Anomaly Detector](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesAnomalyDetector) |
-| [Custom vision service](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesCustomVision) | [Speaker Recognition](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesSpeakerRecognition) | [Language Understanding (LUIS)](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesLUISAllInOne) | [Content Moderator](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesContentModerator) |
+| [Custom vision service](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesCustomVision) | | [Language Understanding (LUIS)](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesLUISAllInOne) | [Content Moderator](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesContentModerator) |
| [Face](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesFace) | | [QnA Maker](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesQnAMaker) | [Personalizer](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesPersonalizer) |
-| [Ink Recognizer](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesInkRecognizer) | | [Text Analytics](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesTextAnalytics) | [Metrics Advisor](https://go.microsoft.com/fwlink/?linkid=2142156) |
+| [Form Recognizer](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) | | [Text Analytics](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesTextAnalytics) | [Metrics Advisor](https://go.microsoft.com/fwlink/?linkid=2142156) |
+| | | [Translator](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) | |
Use the below links to create a resource for the available Cognitive
| **Name** | A descriptive name for your cognitive services resource. For example, *MyCognitiveServicesResource*. | | **Pricing tier** | The cost of your Cognitive Services account depends on the options you choose and your usage. For more information, see the API [pricing details](https://azure.microsoft.com/pricing/details/cognitive-services/).
-![Multi-service resource resource creation screen](media/cognitive-services-apis-create-account/resource_create_screen-multi.png)
+<!--![Multi-service resource creation screen](media/cognitive-services-apis-create-account/resource_create_screen-multi.png)-->
-Select **Create**.
+Read and accept the conditions, as applicable to you, and then select **Review + create**.
### [Single-service resource](#tab/singleservice)
Select **Create**.
| **Name** | A descriptive name for your cognitive services resource. For example, *MyCognitiveServicesResource*. | | **Pricing tier** | The cost of your Cognitive Services account depends on the options you choose and your usage. For more information, see the API [pricing details](https://azure.microsoft.com/pricing/details/cognitive-services/).
-![Single-service resource creation screen](media/cognitive-services-apis-create-account/resource_create_screen.png)
+<!--![Single-service resource creation screen](media/cognitive-services-apis-create-account/resource_create_screen.png)-->
-Select **Create**.
+Select **Next: Virtual Network**, choose the type of network access you want to allow for your resource, and then select **Review + create**.
cognitive-services Cognitive Services Support Options https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/cognitive-services-support-options.md
Explore the range of [Azure support options and choose the plan](https://azure.m
* [Azure portal](https://ms.portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview) * [Azure portal for the United States government](https://portal.azure.us) + ## Post a question on Microsoft Q&A For quick and reliable answers on your technical product questions from Microsoft Engineers, Azure Most Valuable Professionals (MVPs), or our expert community, engage with us on [Microsoft Q&A](/answers/products/azure?product=all), Azure's preferred destination for community support.
cognitive-services Tutorial Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/tutorial-azure-function.md
Title: "Tutorial: Use an Azure Function to process stored documents" description: This guide shows you how to use an Azure function to trigger the processing of documents that are uploaded to an Azure blob storage container. -+ Previously updated : 10/28/2020- Last updated : 03/19/2021+ # Tutorial: Use an Azure Function to process stored documents
The following code block calls the Form Recognizer [Analyze Layout](https://west
    # This is the call to the Form Recognizer endpoint
    endpoint = r"Your Form Recognizer Endpoint"
    apim_key = "Your Form Recognizer Key"
- post_url = endpoint + "/formrecognizer/v2.1-preview.2/Layout/analyze"
+ post_url = endpoint + "/formrecognizer/v2.1-preview.3/Layout/analyze"
    source = myblob.read()
    headers = {
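Because Analyze Layout is asynchronous, the POST above returns a 202 response whose `Operation-Location` header points at the result. A sketch of the follow-up polling step (hedged: variable names mirror the tutorial snippet, and the loop is illustrative):

```python
import time

import requests

resp = requests.post(post_url, data=source, headers=headers)
get_url = resp.headers["Operation-Location"]  # Returned with the 202 Accepted response.

# Poll the analyze operation until it finishes.
while True:
    result = requests.get(get_url, headers={"Ocp-Apim-Subscription-Key": apim_key}).json()
    if result.get("status") in ("succeeded", "failed"):
        break
    time.sleep(1)

if result["status"] == "succeeded":
    # Layout results: text lines per page live under analyzeResult.readResults.
    first_line = result["analyzeResult"]["readResults"][0]["lines"][0]["text"]
    print(first_line)
```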
communication-services Concepts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/chat/concepts.md
- Title: Chat concepts in Azure Communication Services-
-description: Learn about Communication Services Chat concepts.
----- Previously updated : 03/10/2021---
-# Chat concepts
---
-Azure Communication Services Chat client libraries can be used to add real-time text chat to your applications. This page summarizes key Chat concepts and capabilities.
-
-See the [Communication Services Chat client library Overview](./sdk-features.md) to learn more about specific client library languages and capabilities.
-
-## Chat overview
-
-Chat conversations happen within chat threads. A chat thread can contain many messages and many users. Every message belongs to a single thread, and a user can be a part of one or many threads.
-
-Each user in the chat thread is called a member. You can have up to 250 members in a chat thread. Only thread members can send and receive messages or add/remove members in a chat thread. The maximum message size allowed is approximately 28KB. You can retrieve all messages in a chat thread using the `List/Get Messages` operation. Communication Services stores chat history until you execute a delete operation on the chat thread or message, or until no members are remaining in the chat thread at which point it is orphaned and processed for deletion.
-
-For chat threads with more than 20 members, read receipts and typing indicator features are disabled.
-
-## Chat architecture
-
-There are two core parts to chat architecture: 1) Trusted Service and 2) Client Application.
----
-We recommend generating access tokens using the trusted service tier. In this scenario the server side would be responsible for creating and managing users and issuing their tokens.
-
-## Message types
-
-Communication Services Chat shares user-generated messages as well as system-generated messages called **Thread activities**. Thread activities are generated when a chat thread is updated. When you call `List Messages` or `Get Messages` on a chat thread, the result will contain the user-generated text messages as well as the system messages in chronological order. This helps you identify when a member was added or removed or when the chat thread topic was updated. Supported message types are:
---
-```
-{
- "id": "1613589626560",
- "type": "participantAdded",
- "sequenceId": "7",
- "version": "1613589626560",
- "content":
- {
- "participants":
- [
- {
- "id": "8:acs:d2a829bc-8523-4404-b727-022345e48ca6_00000008-511c-4df6-f40f-343a0d003226",
- "displayName": "Jane",
- "shareHistoryTime": "1970-01-01T00:00:00Z"
- }
- ],
- "initiator": "8:acs:d2a829bc-8523-4404-b727-022345e48ca6_00000008-511c-4ce0-f40f-343a0d003224"
- },
- "createdOn": "2021-02-17T19:20:26Z"
- }
-```
--- `ThreadActivity/ParticipantRemoved`: System message that indicates a participant has been removed from the chat thread. For example:-
-```
-{
- "id": "1613589627603",
- "type": "participantRemoved",
- "sequenceId": "8",
- "version": "1613589627603",
- "content":
- {
- "participants":
- [
- {
- "id": "8:acs:d2a829bc-8523-4404-b727-022345e48ca6_00000008-511c-4df6-f40f-343a0d003226",
- "displayName": "Jane",
- "shareHistoryTime": "1970-01-01T00:00:00Z"
- }
- ],
- "initiator": "8:acs:d2a829bc-8523-4404-b727-022345e48ca6_00000008-511c-4ce0-f40f-343a0d003224"
- },
- "createdOn": "2021-02-17T19:20:27Z"
- }
-```
--- `ThreadActivity/TopicUpdate`: System message that indicates the thread topic has been updated. For example:-
-```
-{
- "id": "1613589623037",
- "type": "topicUpdated",
- "sequenceId": "2",
- "version": "1613589623037",
- "content":
- {
- "topic": "New topic",
- "initiator": "8:acs:d2a829bc-8523-4404-b727-022345e48ca6_00000008-511c-4ce0-f40f-343a0d003224"
- },
- "createdOn": "2021-02-17T19:20:23Z"
- }
-```
-
-## Real-time signaling
+
+ Title: Chat concepts in Azure Communication Services
+
+description: Learn about Communication Services Chat concepts.
++++ Last updated : 09/30/2020 ++
+
+
+# Chat concepts
++
+Azure Communication Services Chat client libraries can be used to add real-time text chat to your applications. This page summarizes key Chat concepts and capabilities.
+
+See the [Communication Services Chat client library Overview](./sdk-features.md) to learn more about specific client library languages and capabilities.
+
+## Chat overview
+
+Chat conversations happen within chat threads. A chat thread can contain many messages and many users. Every message belongs to a single thread, and a user can be a part of one or many threads. Each user in the chat thread is called a participant. Only thread participants can send and receive messages and add or remove other users in a chat thread. Communication Services stores chat history until you execute a delete operation on the chat thread or message, or until no participants remain in the chat thread, at which point the chat thread is orphaned and queued for deletion.
+
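To make the thread and participant model concrete, creating a thread and sending a message might look like this sketch (hedged: it assumes the `azure-communication-chat` Python package; names can vary across preview versions, and the endpoint and token are placeholders):

```python
from azure.communication.chat import ChatClient, CommunicationTokenCredential

# Placeholders: your resource endpoint and a user access token from your trusted service.
endpoint = "https://<your-resource>.communication.azure.com"
token = "<user-access-token>"

chat_client = ChatClient(endpoint, CommunicationTokenCredential(token))

# Every message belongs to a thread; the creating user becomes a participant.
create_result = chat_client.create_chat_thread(topic="Support conversation")
thread_client = chat_client.get_chat_thread_client(create_result.chat_thread.id)

thread_client.send_message(content="Hello!", sender_display_name="Jane")
```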
+## Service limits
+
+- The maximum number of participants allowed in a chat thread is 250.
+- The maximum message size allowed is approximately 28 KB.
+- For chat threads with more than 20 participants, read receipts and typing indicator features aren't supported.
+
+## Chat architecture
+
+There are two core parts to chat architecture: 1) Trusted Service and 2) Client Application.
++
+ - **Trusted service:** To properly manage a chat session, you need a service that helps you connect to Communication Services by using your resource connection string. This service is responsible for creating chat threads, managing thread participant lists, and providing access tokens to users. More information about access tokens can be found in our [access tokens](../../quickstarts/access-tokens.md) quickstart.
+ - **Client app:** The client application connects to your trusted service and receives the access tokens that are used to connect directly to Communication Services. After this connection is made, your client app can send and receive messages.
+We recommend generating access tokens using the trusted service tier. In this scenario, the server side is responsible for creating and managing users and issuing their tokens.
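The token issuance step in the trusted service might look like this sketch (hedged: it assumes the `azure-communication-identity` Python package and a placeholder connection string):

```python
from azure.communication.identity import CommunicationIdentityClient

# Placeholder: the connection string from your Communication Services resource.
client = CommunicationIdentityClient.from_connection_string("<connection-string>")

# Create a Communication Services user and issue a chat-scoped access token for them.
user = client.create_user()
token_response = client.get_token(user, scopes=["chat"])
print(token_response.token, token_response.expires_on)
```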
+
+## Message types
+
+Communication Services Chat shares user-generated messages as well as system-generated messages called **Thread activities**. Thread activities are generated when a chat thread is updated. When you call `List Messages` or `Get Messages` on a chat thread, the result will contain the user-generated text messages as well as the system messages in chronological order. This helps you identify when a participant was added or removed or when the chat thread topic was updated. Supported message types are:
+
+ - `Text`: A plain text message composed and sent by a user as part of a chat conversation.
+ - `RichText/HTML`: A formatted text message. Note that Communication Services users currently can't send RichText messages. This message type is used in messages sent from Teams users to Communication Services users in Teams Interop scenarios.
+ - `ThreadActivity/ParticipantAdded`: A system message that indicates one or more participants have been added to the chat thread. For example:
++
+```
+{
+ "id": "1613589626560",
+ "type": "participantAdded",
+ "sequenceId": "7",
+ "version": "1613589626560",
+ "content":
+ {
+ "participants":
+ [
+ {
+ "id": "8:acs:d2a829bc-8523-4404-b727-022345e48ca6_00000008-511c-4df6-f40f-343a0d003226",
+ "displayName": "Jane",
+ "shareHistoryTime": "1970-01-01T00:00:00Z"
+ }
+ ],
+ "initiator": "8:acs:d2a829bc-8523-4404-b727-022345e48ca6_00000008-511c-4ce0-f40f-343a0d003224"
+ },
+ "createdOn": "2021-02-17T19:20:26Z"
+ }
+```
+
+- `ThreadActivity/ParticipantRemoved`: System message that indicates a participant has been removed from the chat thread. For example:
+
+```
+{
+ "id": "1613589627603",
+ "type": "participantRemoved",
+ "sequenceId": "8",
+ "version": "1613589627603",
+ "content":
+ {
+ "participants":
+ [
+ {
+ "id": "8:acs:d2a829bc-8523-4404-b727-022345e48ca6_00000008-511c-4df6-f40f-343a0d003226",
+ "displayName": "Jane",
+ "shareHistoryTime": "1970-01-01T00:00:00Z"
+ }
+ ],
+ "initiator": "8:acs:d2a829bc-8523-4404-b727-022345e48ca6_00000008-511c-4ce0-f40f-343a0d003224"
+ },
+ "createdOn": "2021-02-17T19:20:27Z"
+ }
+```
+
+- `ThreadActivity/TopicUpdate`: System message that indicates the thread topic has been updated. For example:
+```
+{
+ "id": "1613589623037",
+ "type": "topicUpdated",
+ "sequenceId": "2",
+ "version": "1613589623037",
+ "content":
+ {
+ "topic": "New topic",
+ "initiator": "8:acs:d2a829bc-8523-4404-b727-022345e48ca6_00000008-511c-4ce0-f40f-343a0d003224"
+ },
+ "createdOn": "2021-02-17T19:20:23Z"
+ }
+```
+
+## Real-time signaling
The Chat JavaScript client library includes real-time signaling. This allows clients to listen for real-time updates and incoming messages to a chat thread without having to poll the APIs. Available events include:
+ - `ChatMessageReceived` - when a new message is sent to a chat thread. This event is not sent for auto-generated system messages, which were discussed in the previous section.
+ - `ChatMessageEdited` - when a message is edited in a chat thread.
+ - `ChatMessageDeleted` - when a message is deleted in a chat thread.
+ - `TypingIndicatorReceived` - when another participant is typing a message in a chat thread.
+ - `ReadReceiptReceived` - when another participant has read the message that a user sent in a chat thread.
+ - `ChatThreadCreated` - when a chat thread is created by a communication user.
+ - `ChatThreadDeleted` - when a chat thread is deleted by a communication user.
+ - `ChatThreadPropertiesUpdated` - when chat thread properties are updated; currently, we support only updating the topic for the thread.
+ - `ParticipantsAdded` - when a user is added as participant to a chat thread.
+ - `ParticipantsRemoved` - when an existing participant is removed from the chat thread.
-## Chat events
-Real-time signaling allows your users to chat in real-time. Your services can use Azure Event Grid to subscribe to chat-related events. For more details, see [Event Handling conceptual](../event-handling.md).
+## Chat events
-## Using Cognitive Services with Chat client library to enable intelligent features
+Real-time signaling allows your users to chat in real time. Your services can use Azure Event Grid to subscribe to chat-related events. For more details, see [Event Handling conceptual](https://docs.microsoft.com/azure/event-grid/event-schema-communication-services?tabs=event-grid-event-schema).
-You can use [Azure Cognitive APIs](../../../cognitive-services/index.yml) with the Chat client library to add intelligent features to your applications. For example, you can:
-- Enable users to chat with each other in different languages.
-- Help a support agent prioritize tickets by detecting a negative sentiment of an incoming issue from a customer.
-- Analyze the incoming messages for key detection and entity recognition, and prompt relevant info to the user in your app based on the message content.
+## Using Cognitive Services with Chat client library to enable intelligent features
-One way to achieve this is by having your trusted service act as a member of a chat thread. Let's say you want to enable language translation. This service will be responsible for listening to the messages being exchanged by other members [1], calling cognitive APIs to translate the content to desired language[2,3] and sending the translated result as a message in the chat thread[4].
+You can use [Azure Cognitive APIs](../../../cognitive-services/index.yml) with the Chat client library to add intelligent features to your applications. For example, you can:
-This way, the message history will contain both original and translated messages. In the client application, you can add logic to show the original or translated message. See [this quickstart](../../../cognitive-services/translator/quickstart-translator.md) to understand how to use Cognitive APIs to translate text to different languages.
+- Enable users to chat with each other in different languages.
+- Help a support agent prioritize tickets by detecting a negative sentiment of an incoming issue from a customer.
+- Analyze the incoming messages for key detection and entity recognition, and prompt relevant info to the user in your app based on the message content.
+One way to achieve this is by having your trusted service act as a participant of a chat thread. Let's say you want to enable language translation. This service will be responsible for listening to the messages being exchanged by other participants [1], calling cognitive APIs to translate the content to the desired language [2, 3], and sending the translated result as a message in the chat thread [4].
-## Next steps
+This way, the message history will contain both original and translated messages. In the client application, you can add logic to show the original or translated message. See [this quickstart](../../../cognitive-services/translator/quickstart-translator.md) to understand how to use Cognitive APIs to translate text to different languages.
+
-> [!div class="nextstepaction"]
-> [Get started with chat](../../quickstarts/chat/get-started.md)
+## Next steps
-The following documents may be interesting to you:
+> [!div class="nextstepaction"]
+> [Get started with chat](../../quickstarts/chat/get-started.md)
-- Familiarize yourself with the [Chat client library](sdk-features.md)
+The following documents may be interesting to you:
+- Familiarize yourself with the [Chat client library](sdk-features.md)
communication-services Sdk Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/chat/sdk-features.md
- Title: Chat client library overview for Azure Communication Services-
-description: Learn about the Azure Communication Services chat client library.
----- Previously updated : 03/10/2021---
-# Chat client library overview
--
+
+ Title: Chat client library overview for Azure Communication Services
+
+description: Learn about the Azure Communication Services chat client library.
++++ Last updated : 09/30/2020 ++
+
+
+# Chat client library overview
+ Azure Communication Services Chat client libraries can be used to add rich, real-time chat to your applications.-
-## Chat client library capabilities
-
-The following list presents the set of features which are currently available in the Communication Services chat client libraries.
-
-| Group of features | Capability | JS | Java | .NET | Python |
-| -- | - | | -- | - | -- |
-| Core Capabilities | Create a chat thread between 2 or more users (up to 250 users) | ✔️ | ✔️ | ✔️ | ✔️ |
-| | Update the topic of a chat thread | ✔️ | ✔️ | ✔️ | ✔️ |
-| | Add or remove members from a chat thread | ✔️ | ✔️ | ✔️ | ✔️ |
-| | Choose whether to share chat message history with newly added members - *all/none/up to certain time* | ✔️ | ✔️ | ✔️ | ✔️ |
-| | Get a list of all chat members thread | ✔️ | ✔️ | ✔️ | ✔️ |
-| | Delete a chat thread | ✔️ | ✔️ | ✔️ | ✔️ |
-| | Get a list of a user's chat thread memberships | ✔️ | ✔️ | ✔️ | ✔️ |
-| | Get info for a particular chat thread | ✔️ | ✔️ | ✔️ | ✔️ |
-| | Send and receive messages in a chat thread | ✔️ | ✔️ | ✔️ | ✔️ |
-| | Edit the content of a message after it's been sent | ✔️ | ✔️ | ✔️ | ✔️ |
-| | Delete a message | ✔️ | ✔️ | ✔️ | ✔️ |
-| | Tag a message with priority as normal or high at the time of sending | ✔️ | ✔️ | ✔️ | ✔️ |
-| | Send and receive read receipts for messages that have been read by members <br/> *Not available when there are more than 20 members in a chat thread* | ✔️ | ✔️ | ✔️ | ✔️ |
-| | Send and receive typing notifications when a member is actively typing a message in a chat thread <br/> *Not available when there are more than 20 members in a chat thread* | ✔️ | ✔️ | ✔️ | ✔️ |
-| | Get all messages in a chat thread <br/> *Unicode emojis supported* | ✔️ | ✔️ | ✔️ | ✔️ |
-| | Send emojis as part of message content | ✔️ | ✔️ | ✔️ | ✔️ |
-|Real-time signaling (enabled by proprietary signaling package**)| Get notified when a user receives a new message in a chat thread they're a member of | ✔️ | ❌ | ❌ | ❌ |
-| | Get notified when a message has been edited by another member in a chat thread they're a member of | ✔️ | ❌ | ❌ | ❌ |
-| | Get notified when a message has been deleted by another member in a chat thread they're a member of | ✔️ | ❌ | ❌ | ❌ |
-| | Get notified when another chat thread member is typing | ✔️ | ❌ | ❌ | ❌ |
-| | Get notified when another member has read a message (read receipt) in the chat thread | ✔️ | ❌ | ❌ | ❌ |
-| Events | Use Event Grid to subscribe to user activity happening in chat threads and integrate custom notification services or business logic | ✔️ | ✔️ | ✔️ | ✔️ |
-| Monitoring | Monitor usage in terms of messages sent | ✔️ | ✔️ | ✔️ | ✔️ |
-| | Monitor the quality and status of API requests made by your app and configure alerts via the portal | ✔️ | ✔️ | ✔️ | ✔️ |
-|Additional features | Use [Cognitive Services APIs](../../../cognitive-services/index.yml) along with chat client library to enable intelligent features - *language translation & sentiment analysis of the incoming message on a client, speech to text conversion to compose a message while the member speaks, etc.* | ✔️ | ✔️ | ✔️ | ✔️ |
-
-**The proprietary signaling package is implemented using web sockets. It will fallback to long polling if web sockets are unsupported.
-
-## JavaScript chat client library support by OS and browser
+
+## Chat client library capabilities
+
+The following list presents the set of features which are currently available in the Communication Services chat client libraries.
+
+| Group of features | Capability | JavaScript | Java | .NET | Python | iOS | Android |
+|--|-||--|-|--|-|-|
+| Core Capabilities | Create a chat thread between 2 or more users (up to 250 users) | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Update the topic of a chat thread | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Add or remove participants from a chat thread | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Choose whether to share chat message history with the participant being added | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Get a list of participants in a chat thread | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Delete a chat thread | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Given a communication user, get the list of chat threads the user is part of | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Get info for a particular chat thread | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Send and receive messages in a chat thread | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Edit the contents of a sent message | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Delete a message | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Read receipts for messages that have been read by other participants in a chat <br/> *Not available when there are more than 20 participants in a chat thread* | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Get notified when participants are actively typing a message in a chat thread <br/> *Not available when there are more than 20 participants in a chat thread* | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Get all messages in a chat thread <br/> | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Send Unicode emojis as part of message content | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+|Real-time signaling (enabled by proprietary signaling package**)| Subscribe to get real-time updates for incoming messages and other operations in your chat app. To see a list of supported updates for real-time signaling, see [Chat concepts](concepts.md#real-time-signaling) | ✔️ | ❌ | ❌ | ❌ | ❌ | ❌ |
+| Event Grid support | Use integration with Azure Event Grid and configure your communication service to execute business logic based on chat activity or to plug in a custom push notification service | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Monitoring | Use the API request metrics emitted in the Azure portal to build dashboards, monitor the health of your chat app, and set alerts to detect abnormalities | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Configure your Communication Services resource to receive chat operational logs for monitoring and diagnostic purposes | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
++
+**The proprietary signaling package is implemented using web sockets. It will fall back to long polling if web sockets are unsupported.
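A few of the core capabilities from the table, sketched with the Python client library (hedged: it assumes the `azure-communication-chat` package and a `thread_client` like the one created in the chat concepts sketch earlier in this digest; method shapes can vary across preview versions):

```python
# Update the thread topic.
thread_client.update_topic(topic="Renamed support conversation")

# Edit, then delete, a previously sent message by its ID.
send_result = thread_client.send_message(content="Typo hre")
thread_client.update_message(send_result.id, content="Typo here, fixed")
thread_client.delete_message(send_result.id)

# Page through the thread's message history.
for message in thread_client.list_messages():
    print(message.id, message.type)
```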
+
+## JavaScript chat client library support by OS and browser
The following table represents the set of supported browsers and versions which are currently available.-
+
| | Windows | macOS | Ubuntu | Linux | Android | iOS | iPad OS|
-| -- | - | -- | - | | | | -|
+|--|-|--|-||||-|
| **Chat client library** | Firefox*, Chrome*, new Edge | Firefox*, Chrome*, Safari* | Chrome* | Chrome* | Chrome* | Safari* | Safari* |
+*Note that the latest version is supported in addition to the previous two releases.<br/>
-*Note that the latest version is supported in addition to the previous two releases.<br/>
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Get started with chat](../../quickstarts/chat/get-started.md)
+## Next steps
-The following documents may be interesting to you:
+> [!div class="nextstepaction"]
+> [Get started with chat](../../quickstarts/chat/get-started.md)
-- Familiarize yourself with [chat concepts](../chat/concepts.md)
+The following documents may be interesting to you:
+- Familiarize yourself with [chat concepts](../chat/concepts.md)
communication-services Event Handling https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/event-handling.md
- Title: Event handling-
-description: Use Azure Event Grid to trigger processes based on actions that happen in a Communication Service.
----- Previously updated : 03/10/2021----
-# Event Handling in Azure Communication Services
---
-Azure Communication Services integrates with [Azure Event Grid](https://azure.microsoft.com/services/event-grid/) to deliver real-time event notifications in a reliable, scalable and secure manner. The purpose of this article is to help you configure your applications to listen to Communication Services events. For example, you may want to update a database, create a work item and deliver a push notification whenever an SMS message is received by a phone number associated with your Communication Services resource.
-
-Azure Event Grid is a fully managed event routing service, which uses a publish-subscribe model. Event Grid has built-in support for Azure services like [Azure Functions](../../azure-functions/functions-overview.md) and [Azure Logic Apps](../../azure-functions/functions-overview.md). It can deliver event alerts to non-Azure services using webhooks. For a complete list of the event handlers that Event Grid supports, see [An introduction to Azure Event Grid](../../event-grid/overview.md).
--
-> [!NOTE]
-> To learn more about how data residency relates to event handling, visit the [Data Residency conceptual documentation](./privacy.md)
-
-## Events types
-
-Event grid uses [event subscriptions](../../event-grid/concepts.md#event-subscriptions) to route event messages to subscribers.
-
-Azure Communication Services emits the following event types:
-
-| Event type | Description |
-| -- | - |
-| Microsoft.Communication.SMSReceived | Published when an SMS is received by a phone number associated with the Communication Service. |
-| Microsoft.Communication.SMSDeliveryReportReceived | Published when a delivery report is received for an SMS sent by the Communication Service. |
-| Microsoft.Communication.ChatMessageReceived | Published when a message is received for a user in a chat thread that she is member of. |
-| Microsoft.Communication.ChatMessageEdited | Published when a message is edited in a chat thread that the user is member of. |
-| Microsoft.Communication.ChatMessageDeleted | Published when a message is deleted in a chat thread that the user is member of. |
-| Microsoft.Communication.ChatThreadCreatedWithUser | Published when the user is added as member at the time of creation of a chat thread. |
-| Microsoft.Communication.ChatThreadWithUserDeleted | Published when a chat thread is deleted which the user is member of. |
-| Microsoft.Communication.ChatThreadPropertiesUpdatedPerUser | Published when a chat thread's properties are updated that the user is member of. |
-| Microsoft.Communication.ChatMemberAddedToThreadWithUser | Published when the user is added as member to a chat thread. |
-| Microsoft.Communication.ChatMemberRemovedFromThreadWithUser | Published when the user is removed from a chat thread. |
-| Microsoft.Communication.ChatParticipantAddedToThreadWithUser| Published for a user when a new participant is added to a chat thread, that the user is part of.|
-| Microsoft.Communication.ChatParticipantRemovedFromThreadWithUser | Published for a user when a participant is removed from a chat thread, that the user is part of. |
-| Microsoft.Communication.ChatThreadCreated | Published when a chat thread is created |
-| Microsoft.Communication.ChatThreadDeleted| Published when a chat thread is deleted |
-| Microsoft.Communication.ChatThreadParticipantAdded | Published when a new participant is added to a chat thread |
-| Microsoft.Communication.ChatThreadParticipantRemoved | Published when a new participant is added to a chat thread. |
-| Microsoft.Communication.ChatMessageReceivedInThread | Published when a message is received in a chat thread |
-| Microsoft.Communication.ChatThreadPropertiesUpdated| Published when a chat thread's properties like topic are updated.|
-| Microsoft.Communication.ChatMessageEditedInThread | Published when a message is edited in a chat thread |
-| Microsoft.Communication.ChatMessageDeletedInThread | Published when a message is deleted in a chat thread |
-
-You can use the Azure portal or Azure CLI to subscribe to events emitted by your Communication Services resource. Get started with handling events by looking at [How to handle SMS Events in Communication Services](../quickstarts/telephony-sms/handle-sms-events.md)
--
-## Event subjects
-
-The `subject` field of all Communication Services events identifies the user, phone number or entity that is targeted by the event. Common prefixes are used to allow simple [Event Grid Filtering](../../event-grid/event-filtering.md).
-
-| Subject Prefix | Communication Service Entity |
-| - | - |
-| `phonenumber/` | PSTN phone number |
-| `user/` | Communication Services User |
-| `thread/` | Chat thread. |
-
-The following example shows a filter for all SMS messages and delivery reports sent to all 555 area code phone numbers owned by a Communication Services resource:
-
-```json
-"filter": {
- "includedEventTypes": [
- "Microsoft.Communication.SMSReceived",
- "Microsoft.Communication.SMSDeliveryReportReceived"
- ],
- "subjectBeginsWith": "phonenumber/1555",
-}
-```
-
-## Sample event responses
-
-When an event is triggered, the Event Grid service sends data about that event to subscribing endpoints.
-
-This section contains an example of what that data would look like for each event.
-
-### Microsoft.Communication.SMSDeliveryReportReceived event
-
-```json
-[{
- "id": "Outgoing_202009180022138813a09b-0cbf-4304-9b03-1546683bb910",
- "topic": "/subscriptions/{subscription-id}/resourceGroups/{group-name}/providers/microsoft.communication/communicationservices/{communication-services-resource-name}",
- "subject": "/phonenumber/15555555555",
- "data": {
- "MessageId": "Outgoing_202009180022138813a09b-0cbf-4304-9b03-1546683bb910",
- "From": "15555555555",
- "To": "+15555555555",
- "DeliveryStatus": "Delivered",
- "DeliveryStatusDetails": "No error.",
- "ReceivedTimestamp": "2020-09-18T00:22:20.2855749Z",
- "DeliveryAttempts": [
- {
- "Timestamp": "2020-09-18T00:22:14.9315918Z",
- "SegmentsSucceeded": 1,
- "SegmentsFailed": 0
- }
- ]
- },
- "eventType": "Microsoft.Communication.SMSDeliveryReportReceived",
- "dataVersion": "1.0",
- "metadataVersion": "1",
- "eventTime": "2020-09-18T00:22:20Z"
-}]
-```
-### Microsoft.Communication.SMSReceived event
-
-```json
-[{
- "id": "Incoming_20200918002745d29ebbea-3341-4466-9690-0a03af35228e",
- "topic": "/subscriptions/50ad1522-5c2c-4d9a-a6c8-67c11ecb75b8/resourcegroups/acse2e/providers/microsoft.communication/communicationservices/{communication-services-resource-name}",
- "subject": "/phonenumber/15555555555",
- "data": {
- "MessageId": "Incoming_20200918002745d29ebbea-3341-4466-9690-0a03af35228e",
- "From": "15555555555",
- "To": "15555555555",
- "Message": "Great to connect with ACS events ",
- "ReceivedTimestamp": "2020-09-18T00:27:45.32Z"
- },
- "eventType": "Microsoft.Communication.SMSReceived",
- "dataVersion": "1.0",
- "metadataVersion": "1",
- "eventTime": "2020-09-18T00:27:47Z"
-}]
-```
-
-### Microsoft.Communication.ChatMessageReceived event
-
-```json
-[{
- "id": "02272459-badb-4e2e-b538-4cb8a2f71da6",
- "topic": "/subscriptions/{subscription-id}/resourceGroups/{group-name}/providers/Microsoft.Communication/communicationServices/{communication-services-resource-name}",
- "subject": "thread/{thread-id}/sender/{rawId}/recipient/{rawId}",
- "data": {
- "messageBody": "Welcome to Azure Communication Services",
- "messageId": "1613694358927",
- "senderId": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-578d-7caf-07fd-084822001724",
- "senderCommunicationIdentifier": {
- "rawId": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-578d-7caf-07fd-084822001724",
- "communicationUser": {
- "id": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-578d-7caf-07fd-084822001724"
- }
- },
- "senderDisplayName": "Jhon",
- "composeTime": "2021-02-19T00:25:58.927Z",
- "type": "Text",
- "version": 1613694358927,
- "recipientId": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-578d-7d05-83fe-084822000f6d",
- "recipientCommunicationIdentifier": {
- "rawId": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-578d-7d05-83fe-084822000f6d",
- "communicationUser": {
- "id": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-578d-7d05-83fe-084822000f6d"
- }
- },
- "transactionId": "oh+LGB2dUUadMcTAdRWQxQ.1.1.1.1.1827536918.1.7",
- "threadId": "19:6e5d6ca1d75044a49a36a7965ec4a906@thread.v2"
- },
- "eventType": "Microsoft.Communication.ChatMessageReceived",
- "dataVersion": "1.0",
- "metadataVersion": "1",
- "eventTime": "2021-02-19T00:25:59.9436666Z"
- }
-]
-```
-
-### Microsoft.Communication.ChatMessageEdited event
-
-```json
-[{
- "id": "93fc1037-b645-4eb0-a0f2-d7bb3ba6e060",
- "topic": "/subscriptions/{subscription-id}/resourceGroups/{group-name}/providers/Microsoft.Communication/communicationServices/{communication-services-resource-name}",
- "subject": "thread/{thread-id}/sender/{rawId}/recipient/{rawId}",
- "data": {
- "editTime": "2021-02-19T00:28:20.784Z",
- "messageBody": "Let's Chat about new communication services.",
- "messageId": "1613694357917",
- "senderId": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-578d-7caf-07fd-084822001724",
- "senderCommunicationIdentifier": {
- "rawId": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-578d-7caf-07fd-084822001724",
- "communicationUser": {
- "id": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-578d-7caf-07fd-084822001724"
- }
- },
- "senderDisplayName": "Bob(Admin)",
- "composeTime": "2021-02-19T00:25:57.917Z",
- "type": "Text",
- "version": 1613694500784,
- "recipientId": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-578d-7d60-83fe-084822000f6f",
- "recipientCommunicationIdentifier": {
- "rawId": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-578d-7d60-83fe-084822000f6f",
- "communicationUser": {
- "id": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-578d-7d60-83fe-084822000f6f"
- }
- },
- "transactionId": "1mL4XZH2gEecu/alk9tOtw.2.1.2.1.1833042153.1.7",
- "threadId": "19:6e5d6ca1d75044a49a36a7965ec4a906@thread.v2"
- },
- "eventType": "Microsoft.Communication.ChatMessageEdited",
- "dataVersion": "1.0",
- "metadataVersion": "1",
- "eventTime": "2021-02-19T00:28:21.7456718Z"
- }]
-```
-
-### Microsoft.Communication.ChatMessageDeleted event
-```json
-[{
- "id": "23cfcc13-33f2-4ae1-8d23-b5015b05302b",
- "topic": "/subscriptions/{subscription-id}/resourceGroups/{group-name}/providers/Microsoft.Communication/communicationServices/{communication-services-resource-name}",
- "subject": "thread/{thread-id}/sender/{rawId}/recipient/{rawId}",
- "data": {
- "deleteTime": "2021-02-19T00:43:10.14Z",
- "messageId": "1613695388152",
- "senderId": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-578d-7d07-83fe-084822000f6e",
- "senderCommunicationIdentifier": {
- "rawId": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-578d-7d07-83fe-084822000f6e",
- "communicationUser": {
- "id": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-578d-7d07-83fe-084822000f6e"
- }
- },
- "senderDisplayName": "Bob(Admin)",
- "composeTime": "2021-02-19T00:43:08.152Z",
- "type": "Text",
- "version": 1613695390361,
- "recipientId": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-578d-7d60-83fe-084822000f6f",
- "recipientCommunicationIdentifier": {
- "rawId": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-578d-7d60-83fe-084822000f6f",
- "communicationUser": {
- "id": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-578d-7d60-83fe-084822000f6f"
- }
- },
- "transactionId": "fFs4InlBn0O/0WyhfQZVSQ.1.1.2.1.1867776045.1.4",
- "threadId": "19:48899258eec941e7a281e03edc8f4964@thread.v2"
- },
- "eventType": "Microsoft.Communication.ChatMessageDeleted",
- "dataVersion": "1.0",
- "metadataVersion": "1",
- "eventTime": "2021-02-19T00:43:10.9982947Z"
- }]
-```
-
-### Microsoft.Communication.ChatThreadCreatedWithUser event
-
-```json
-[{
- "id": "eba02b2d-37bf-420e-8656-3a42ef74c435",
- "topic": "/subscriptions/{subscription-id}/resourceGroups/{group-name}/providers/Microsoft.Communication/communicationServices/{communication-services-resource-name}",
- "subject": "thread/{thread-id}/createdBy/rawId/recipient/rawId",
- "data": {
- "createdBy": "8:acs:3d703c91-9657-4b3f-b19c-ef9d53f99710_00000008-576c-286d-e1fe-0848220013b9",
- "createdByCommunicationIdentifier": {
- "rawId": "8:acs:3d703c91-9657-4b3f-b19c-ef9d53f99710_00000008-576c-286d-e1fe-0848220013b9",
- "communicationUser": {
- "id": "8:acs:3d703c91-9657-4b3f-b19c-ef9d53f99710_00000008-576c-286d-e1fe-0848220013b9"
- }
- },
- "properties": {
- "topic": "Chat about new commuication services"
- },
- "members": [
- {
- "displayName": "Bob",
- "memberId": "8:acs:3d703c91-9657-4b3f-b19c-ef9d53f99710_00000008-576c-286d-e1fe-0848220013b9"
- },
- {
- "displayName": "John",
- "memberId": "8:acs:3d703c91-9657-4b3f-b19c-ef9d53f99710_00000008-576c-289b-07fd-0848220015ea"
- }
- ],
- "participants": [
- {
- "displayName": "Bob",
- "participantCommunicationIdentifier": {
- "rawId": "8:acs:3d703c91-9657-4b3f-b19c-ef9d53f99710_00000008-576c-286d-e1fe-0848220013b9",
- "communicationUser": {
- "id": "8:acs:3d703c91-9657-4b3f-b19c-ef9d53f99710_00000008-576c-286d-e1fe-0848220013b9"
- }
- }
- },
- {
- "displayName": "John",
- "participantCommunicationIdentifier": {
- "rawId": "8:acs:3d703c91-9657-4b3f-b19c-ef9d53f99710_00000008-576c-289b-07fd-0848220015ea",
- "communicationUser": {
- "id": "8:acs:3d703c91-9657-4b3f-b19c-ef9d53f99710_00000008-576c-289b-07fd-0848220015ea"
- }
- }
- }
- ],
- "createTime": "2021-02-18T23:47:26.91Z",
- "version": 1613692046910,
- "recipientId": "8:acs:3d703c91-9657-4b3f-b19c-ef9d53f99710_00000008-576c-286e-84f5-08482200181c",
- "recipientCommunicationIdentifier": {
- "rawId": "8:acs:3d703c91-9657-4b3f-b19c-ef9d53f99710_00000008-576c-286e-84f5-08482200181c",
- "communicationUser": {
- "id": "8:acs:3d703c91-9657-4b3f-b19c-ef9d53f99710_00000008-576c-286e-84f5-08482200181c"
- }
- },
- "transactionId": "zbZt+9h/N0em+XCW2QvyIA.1.1.1.1.1737228330.0.1737490483.1.6",
- "threadId": "19:1d594fb1eeb14566903cbc5decb5bf5b@thread.v2"
- },
- "eventType": "Microsoft.Communication.ChatThreadCreatedWithUser",
- "dataVersion": "1.0",
- "metadataVersion": "1",
- "eventTime": "2021-02-18T23:47:34.7437103Z"
- }]
-```
-
-### Microsoft.Communication.ChatThreadWithUserDeleted event
-
-```json
-[{
- "id": "f5d6750c-c6d7-4da8-bb05-6f3fca6c7295",
- "topic": "/subscriptions/{subscription-id}/resourceGroups/{group-name}/providers/Microsoft.Communication/communicationServices/{communication-services-resource-name}",
- "subject": "thread/{thread-id}/deletedBy/{rawId}/recipient/{rawId}",
- "data": {
- "deletedBy": "8:acs:3d703c91-9657-4b3f-b19c-ef9d53f99710_00000008-5772-6473-83fe-084822000e21",
- "deletedByCommunicationIdentifier": {
- "rawId": "8:acs:3d703c91-9657-4b3f-b19c-ef9d53f99710_00000008-5772-6473-83fe-084822000e21",
- "communicationUser": {
- "id": "8:acs:3d703c91-9657-4b3f-b19c-ef9d53f99710_00000008-5772-6473-83fe-084822000e21"
- }
- },
- "deleteTime": "2021-02-18T23:57:51.5987591Z",
- "createTime": "2021-02-18T23:54:15.683Z",
- "version": 1613692578672,
- "recipientId": "8:acs:3d703c91-9657-4b3f-b19c-ef9d53f99710_00000008-5772-647b-e1fe-084822001416",
- "recipientCommunicationIdentifier": {
- "rawId": "8:acs:3d703c91-9657-4b3f-b19c-ef9d53f99710_00000008-5772-647b-e1fe-084822001416",
- "communicationUser": {
- "id": "8:acs:3d703c91-9657-4b3f-b19c-ef9d53f99710_00000008-5772-647b-e1fe-084822001416"
- }
- },
- "transactionId": "mrliWVUndEmLwkZbeS5KoA.1.1.2.1.1761607918.1.6",
- "threadId": "19:5870b8f021d74fd786bf5aeb095da291@thread.v2"
- },
- "eventType": "Microsoft.Communication.ChatThreadWithUserDeleted",
- "dataVersion": "1.0",
- "metadataVersion": "1",
- "eventTime": "2021-02-18T23:57:52.1597234Z"
- }]
-```
-
-### Microsoft.Communication.ChatParticipantAddedToThreadWithUser event
-```json
-[{
- "id": "049a5a7f-6cd7-43c1-b352-df9e9e6146d1",
- "topic": "/subscriptions/{subscription-id}/resourceGroups/{group-name}/providers/Microsoft.Communication/communicationServices/{communication-services-resource-name}",
- "subject": "thread/{thread-id}/participantAdded/{rawId}/recipient/{rawId}",
- "data": {
- "time": "2021-02-25T06:37:29.9232485Z",
- "addedByCommunicationIdentifier": {
- "rawId": "8:acs:0a420b29-555c-4f6b-841e-de8059893bb9_00000008-77c9-8767-1655-373a0d00885d",
- "communicationUser": {
- "id": "8:acs:0a420b29-555c-4f6b-841e-de8059893bb9_00000008-77c9-8767-1655-373a0d00885d"
- }
- },
- "participantAdded": {
- "displayName": "John Smith",
- "participantCommunicationIdentifier": {
- "rawId": "8:acs:0a420b29-555c-4f6b-841e-de8059893bb9_00000008-77c9-8785-1655-373a0d00885f",
- "communicationUser": {
- "id": "8:acs:0a420b29-555c-4f6b-841e-de8059893bb9_00000008-77c9-8785-1655-373a0d00885f"
- }
- }
- },
- "recipientCommunicationIdentifier": {
- "rawId": "8:acs:0a420b29-555c-4f6b-841e-de8059893bb9_00000008-77c9-8781-1655-373a0d00885e",
- "communicationUser": {
- "id": "8:acs:0a420b29-555c-4f6b-841e-de8059893bb9_00000008-77c9-8781-1655-373a0d00885e"
- }
- },
- "createTime": "2021-02-25T06:37:17.371Z",
- "version": 1614235049907,
- "transactionId": "q7rr9by6m0CiGiQxKdSO1w.1.1.1.1.1473446055.1.6",
- "threadId": "19:f1400e1c542f4086a606b52ad20cd0bd@thread.v2"
- },
- "eventType": "Microsoft.Communication.ChatParticipantAddedToThreadWithUser",
- "dataVersion": "1.0",
- "metadataVersion": "1",
- "eventTime": "2021-02-25T06:37:31.4880091Z"
- }]
-```
-
-### Microsoft.Communication.ChatParticipantRemovedFromThreadWithUser event
-```json
-[{
- "id": "e8a4df24-799d-4c53-94fd-1e05703a4549",
- "topic": "/subscriptions/{subscription-id}/resourceGroups/{group-name}/providers/Microsoft.Communication/communicationServices/{communication-services-resource-name}",
- "subject": "thread/{thread-id}/participantRemoved/{rawId}/recipient/{rawId}",
- "data": {
- "time": "2021-02-25T06:40:20.3564556Z",
- "removedByCommunicationIdentifier": {
- "rawId": "8:acs:0a420b29-555c-4f6b-841e-de8059893bb9_00000008-77c9-8767-1655-373a0d00885d",
- "communicationUser": {
- "id": "8:acs:0a420b29-555c-4f6b-841e-de8059893bb9_00000008-77c9-8767-1655-373a0d00885d"
- }
- },
- "participantRemoved": {
- "displayName": "Bob",
- "participantCommunicationIdentifier": {
- "rawId": "8:acs:0a420b29-555c-4f6b-841e-de8059893bb9_00000008-77c9-8785-1655-373a0d00885f",
- "communicationUser": {
- "id": "8:acs:0a420b29-555c-4f6b-841e-de8059893bb9_00000008-77c9-8785-1655-373a0d00885f"
- }
- }
- },
- "recipientCommunicationIdentifier": {
- "rawId": "8:acs:0a420b29-555c-4f6b-841e-de8059893bb9_00000008-77c9-8781-1655-373a0d00885e",
- "communicationUser": {
- "id": "8:acs:0a420b29-555c-4f6b-841e-de8059893bb9_00000008-77c9-8781-1655-373a0d00885e"
- }
- },
- "createTime": "2021-02-25T06:37:17.371Z",
- "version": 1614235220325,
- "transactionId": "usv74GQ5zU+JmWv/bQ+qfg.1.1.1.1.1480065078.1.5",
- "threadId": "19:f1400e1c542f4086a606b52ad20cd0bd@thread.v2"
- },
- "eventType": "Microsoft.Communication.ChatParticipantRemovedFromThreadWithUser",
- "dataVersion": "1.0",
- "metadataVersion": "1",
- "eventTime": "2021-02-25T06:40:24.2244945Z"
- }]
-```
-
-### Microsoft.Communication.ChatThreadPropertiesUpdatedPerUser event
-
-```json
-[{
- "id": "d57342ff-264e-4a5e-9c54-ef05b7d50082",
- "topic": "/subscriptions/{subscription-id}/resourceGroups/{group-name}/providers/Microsoft.Communication/communicationServices/{communication-services-resource-name}",
- "subject": "thread/{thread-id}/editedBy/{rawId}/recipient/{rawId}",
- "data": {
- "editedBy": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-578d-7d07-83fe-084822000f6e",
- "editedByCommunicationIdentifier": {
- "rawId": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-578d-7d07-83fe-084822000f6e",
- "communicationUser": {
- "id": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-578d-7d07-83fe-084822000f6e"
- }
- },
- "editTime": "2021-02-19T00:28:28.7390282Z",
- "properties": {
- "topic": "Communication in Azure"
- },
- "createTime": "2021-02-19T00:28:25.864Z",
- "version": 1613694508719,
- "recipientId": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-578d-7caf-07fd-084822001724",
- "recipientCommunicationIdentifier": {
- "rawId": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-578d-7caf-07fd-084822001724",
- "communicationUser": {
- "id": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-578d-7caf-07fd-084822001724"
- }
- },
- "transactionId": "WLXPrnJ/I0+LTj2cwMrNMQ.1.1.1.1.1833369763.1.4",
- "threadId": "19:2cc3504c41244d7483208a4f58a1f188@thread.v2"
- },
- "eventType": "Microsoft.Communication.ChatThreadPropertiesUpdatedPerUser",
- "dataVersion": "1.0",
- "metadataVersion": "1",
- "eventTime": "2021-02-19T00:28:29.559726Z"
- }]
-```
-
-### Microsoft.Communication.ChatMemberAddedToThreadWithUser event
-
-```json
-[{
- "id": "4abd2b49-d1a9-4fcc-9cd7-170fa5d96443",
- "topic": "/subscriptions/{subscription-id}/resourceGroups/{group-name}/providers/Microsoft.Communication/communicationServices/{communication-services-resource-name}",
- "subject": "thread/{thread-id}/memberAdded/{rawId}/recipient/{rawId}",
- "data": {
- "time": "2020-09-18T00:47:13.1867087Z",
- "addedBy": "8:acs:5354158b-17b7-489c-9380-95d8821ff76b_00000005-3e5f-1bc6-f40f-343a0d0003f1",
- "memberAdded": {
- "displayName": "John Smith",
- "memberId": "8:acs:5354158b-17b7-489c-9380-95d8821ff76b_00000005-3e5f-1bc6-f40f-343a0d0003fe"
- },
- "createTime": "2020-09-18T00:46:41.559Z",
- "version": 1600390033176,
- "recipientId": "8:acs:5354158b-17b7-489c-9380-95d8821ff76b_00000005-3e5f-1bc6-f40f-343a0d0003f0",
- "transactionId": "pVIjw/pHEEKUOUJ2DAAl5A.1.1.1.1.1818361951.1.1",
- "threadId": "19:6d20c2f921cd402ead7d1b31b0d030cd@thread.v2"
- },
- "eventType": "Microsoft.Communication.ChatMemberAddedToThreadWithUser",
- "dataVersion": "1.0",
- "metadataVersion": "1",
- "eventTime": "2020-09-18T00:47:13.2342692Z"
-}]
-```
-
-### Microsoft.Communication.ChatMemberRemovedFromThreadWithUser event
-
-```json
-[{
- "id": "b3701976-1ea2-4d66-be68-4ec4fc1b4b96",
- "topic": "/subscriptions/{subscription-id}/resourceGroups/{group-name}/providers/Microsoft.Communication/communicationServices/{communication-services-resource-name}",
- "subject": "thread/{thread-id}/memberRemoved/{rawId}/recipient/{rawId}",
- "data": {
- "time": "2020-09-18T00:47:51.1461742Z",
- "removedBy": "8:acs:5354158b-17b7-489c-9380-95d8821ff76b_00000005-3e5f-1bc6-f40f-343a0d0003f1",
- "memberRemoved": {
- "displayName": "John",
- "memberId": "8:acs:5354158b-17b7-489c-9380-95d8821ff76b_00000005-3e5f-1bc6-f40f-343a0d0003fe"
- },
- "createTime": "2020-09-18T00:46:41.559Z",
- "version": 1600390071131,
- "recipientId": "8:acs:5354158b-17b7-489c-9380-95d8821ff76b_00000005-3e5f-1bc6-f40f-343a0d0003f0",
- "transactionId": "G9Y+UbjVmEuxAG3O4bEyvw.1.1.1.1.1819803816.1.1",
- "threadId": "19:6d20c2f921cd402ead7d1b31b0d030cd@thread.v2"
- },
- "eventType": "Microsoft.Communication.ChatMemberRemovedFromThreadWithUser",
- "dataVersion": "1.0",
- "metadataVersion": "1",
- "eventTime": "2020-09-18T00:47:51.2244511Z"
-}]
-```
-
-### Microsoft.Communication.ChatThreadCreated event
-
-```json
-[ {
- "id": "a607ac52-0974-4d3c-bfd8-6f708a26f509",
- "topic": "/subscriptions/{subscription-id}/resourcegroups/{group-name}/providers/microsoft.communication/communicationservices/{communication-services-resource-name}",
- "subject": "thread/{thread-id}/createdBy/{rawId}",
- "data": {
- "createdByCommunicationIdentifier": {
- "rawId": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-5cbb-38a0-88f7-084822002453",
- "communicationUser": {
- "id": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-5cbb-38a0-88f7-084822002453"
- }
- },
- "properties": {
- "topic": "Talk about new Thread Events in commuication services"
- },
- "participants": [
- {
- "displayName": "Bob",
- "participantCommunicationIdentifier": {
- "rawId": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-5cbb-38a0-88f7-084822002453",
- "communicationUser": {
- "id": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-5cbb-38a0-88f7-084822002453"
- }
- }
- },
- {
- "displayName": "Scott",
- "participantCommunicationIdentifier": {
- "rawId": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-5cbb-38e6-07fd-084822002467",
- "communicationUser": {
- "id": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-5cbb-38e6-07fd-084822002467"
- }
- }
- },
- {
- "displayName": "Shawn",
- "participantCommunicationIdentifier": {
- "rawId": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-5cbb-38f6-83fe-084822002337",
- "communicationUser": {
- "id": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-5cbb-38f6-83fe-084822002337"
- }
- }
- },
- {
- "displayName": "Anthony",
- "participantCommunicationIdentifier": {
- "rawId": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-5cbb-38e3-e1fe-084822002c35",
- "communicationUser": {
- "id": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-5cbb-38e3-e1fe-084822002c35"
- }
- }
- }
- ],
- "createTime": "2021-02-20T00:31:54.365+00:00",
- "version": 1613781114365,
- "threadId": "19:e07c8ddc5bab4c059ea9f11d29b544b6@thread.v2",
- "transactionId": "gK6+kgANy0O1wchlVKVTJg.1.1.1.1.921436178.1"
- },
- "eventType": "Microsoft.Communication.ChatThreadCreated",
- "dataVersion": "1.0",
- "metadataVersion": "1",
- "eventTime": "2021-02-20T00:31:54.5369967Z"
- }]
-```
-### Microsoft.Communication.ChatThreadPropertiesUpdated event
-
-```json
-[{
- "id": "cf867580-9caf-45be-b49f-ab1cbfcaa59f",
- "topic": "/subscriptions/{subscription-id}/resourcegroups/{group-name}/providers/microsoft.communication/communicationservices/{communication-services-resource-name}",
- "subject": "thread/{thread-id}/editedBy/{rawId}",
- "data": {
- "editedByCommunicationIdentifier": {
- "rawId": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-5c9e-9e35-07fd-084822002264",
- "communicationUser": {
- "id": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-5c9e-9e35-07fd-084822002264"
- }
- },
- "editTime": "2021-02-20T00:04:07.7152073+00:00",
- "properties": {
- "topic": "Talk about new Thread Events in commuication services"
- },
- "createTime": "2021-02-20T00:00:40.126+00:00",
- "version": 1613779447695,
- "threadId": "19:9e8eefe67b3c470a8187b4c2b00240bc@thread.v2",
- "transactionId": "GBE9MB2a40KEWzexIg0D3A.1.1.1.1.856359041.1"
- },
- "eventType": "Microsoft.Communication.ChatThreadPropertiesUpdated",
- "dataVersion": "1.0",
- "metadataVersion": "1",
- "eventTime": "2021-02-20T00:04:07.8410277Z"
- }]
-```
-### Microsoft.Communication.ChatThreadDeleted event
-
-```json
-[
-{
- "id": "1dbd5237-4823-4fed-980c-8d27c17cf5b0",
- "topic": "/subscriptions/{subscription-id}/resourcegroups/{group-name}/providers/microsoft.communication/communicationservices/{communication-services-resource-name}",
- "subject": "thread/{thread-id}/deletedBy/{rawId}",
- "data": {
- "deletedByCommunicationIdentifier": {
- "rawId": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-5c9e-a300-07fd-084822002266",
- "communicationUser": {
- "id": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-5c9e-a300-07fd-084822002266"
- }
- },
- "deleteTime": "2021-02-20T00:00:42.109802+00:00",
- "createTime": "2021-02-20T00:00:39.947+00:00",
- "version": 1613779241389,
- "threadId": "19:c9e9f3060b884e448671391882066ac3@thread.v2",
- "transactionId": "KibptDpcLEeEFnlR7cI3QA.1.1.2.1.848298005.1"
- },
- "eventType": "Microsoft.Communication.ChatThreadDeleted",
- "dataVersion": "1.0",
- "metadataVersion": "1",
- "eventTime": "2021-02-20T00:00:42.5428002Z"
- }
- ]
-```
-### Microsoft.Communication.ChatThreadParticipantAdded event
-
-```json
-[
-{
- "id": "3024eb5d-1d71-49d1-878c-7dc3165433d9",
- "topic": "/subscriptions/{subscription-id}/resourcegroups/{group-name}/providers/microsoft.communication/communicationservices/{communication-services-resource-name}",
- "subject": "thread/{thread-id}/participantadded/{rawId}",
- "data": {
- "time": "2021-02-20T00:54:42.8622646+00:00",
- "addedByCommunicationIdentifier": {
- "rawId": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-5cbb-38a0-88f7-084822002453",
- "communicationUser": {
- "id": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-5cbb-38a0-88f7-084822002453"
- }
- },
- "participantAdded": {
- "displayName": "Bob",
- "participantCommunicationIdentifier": {
- "rawId": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-5cbb-38f3-88f7-084822002454",
- "communicationUser": {
- "id": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-5cbb-38f3-88f7-084822002454"
- }
- }
- },
- "createTime": "2021-02-20T00:31:54.365+00:00",
- "version": 1613782482822,
- "threadId": "19:e07c8ddc5bab4c059ea9f11d29b544b6@thread.v2",
- "transactionId": "9q6cO7i4FkaZ+5RRVzshVw.1.1.1.1.974913783.1"
- },
- "eventType": "Microsoft.Communication.ChatThreadParticipantAdded",
- "dataVersion": "1.0",
- "metadataVersion": "1",
- "eventTime": "2021-02-20T00:54:43.9866454Z"
- }
-]
-```
-### Microsoft.Communication.ChatThreadParticipantRemoved event
-
-```json
-[
-{
- "id": "6ed810fd-8776-4b13-81c2-1a0c4f791a07",
- "topic": "/subscriptions/{subscription-id}/resourcegroups/{group-name}/providers/microsoft.communication/communicationservices/{communication-services-resource-name}",
- "subject": "thread/{thread-id}/participantremoved/{rawId}",
- "data": {
- "time": "2021-02-20T00:56:18.1118825+00:00",
- "removedByCommunicationIdentifier": {
- "rawId": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-5cbb-38a0-88f7-084822002453",
- "communicationUser": {
- "id": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-5cbb-38a0-88f7-084822002453"
- }
- },
- "participantRemoved": {
- "displayName": "Shawn",
- "participantCommunicationIdentifier": {
- "rawId": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-5cbb-38e6-07fd-084822002467",
- "communicationUser": {
- "id": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-5cbb-38e6-07fd-084822002467"
- }
- }
- },
- "createTime": "2021-02-20T00:31:54.365+00:00",
- "version": 1613782578096,
- "threadId": "19:e07c8ddc5bab4c059ea9f11d29b544b6@thread.v2",
- "transactionId": "zGCq8IGRr0aEF6COuy7wSA.1.1.1.1.978649284.1"
- },
- "eventType": "Microsoft.Communication.ChatThreadParticipantRemoved",
- "dataVersion": "1.0",
- "metadataVersion": "1",
- "eventTime": "2021-02-20T00:56:18.856721Z"
- }
-]
-```
-### Microsoft.Communication.ChatMessageReceivedInThread event
-
-```json
-[
-{
- "id": "4f614f97-c451-4b82-a8c9-1e30c3bfcda1",
- "topic": "/subscriptions/{subscription-id}/resourcegroups/{group-name}/providers/microsoft.communication/communicationservices/{communication-services-resource-name}",
- "subject": "thread/{thread-id}/sender/8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-5cdb-4916-07fd-084822002624",
- "data": {
- "messageBody": "Talk about new Thread Events in commuication services",
- "messageId": "1613783230064",
- "type": "Text",
- "version": "1613783230064",
- "senderDisplayName": "Bob",
- "senderCommunicationIdentifier": {
- "rawId": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-5cdb-4916-07fd-084822002624",
- "communicationUser": {
- "id": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-5cdb-4916-07fd-084822002624"
- }
- },
- "composeTime": "2021-02-20T01:07:10.064+00:00",
- "threadId": "19:5b3809e80e4a439d92c3316e273f4a2b@thread.v2",
- "transactionId": "foMkntkKS0O/MhMlIE5Aag.1.1.1.1.1004077250.1"
- },
- "eventType": "Microsoft.Communication.ChatMessageReceivedInThread",
- "dataVersion": "1.0",
- "metadataVersion": "1",
- "eventTime": "2021-02-20T01:07:10.5704596Z"
- }
-]
-```
-### Microsoft.Communication.ChatMessageEditedInThread event
-
-```json
-[
- {
- "id": "7b8dc01e-2659-41fa-bc8c-88a967714510",
- "topic": "/subscriptions/{subscription-id}/resourcegroups/{group-name}/providers/microsoft.communication/communicationservices/{communication-services-resource-name}",
- "subject": "thread/{thread-id}/sender/{rawId}",
- "data": {
- "editTime": "2021-02-20T00:59:10.464+00:00",
- "messageBody": "8effb181-1eb2-4a58-9d03-ed48a461b19b",
- "messageId": "1613782685964",
- "type": "Text",
- "version": "1613782750464",
- "senderDisplayName": "Scott",
- "senderCommunicationIdentifier": {
- "rawId": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-5cbb-38a0-88f7-084822002453",
- "communicationUser": {
- "id": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-5cbb-38a0-88f7-084822002453"
- }
- },
- "composeTime": "2021-02-20T00:58:05.964+00:00",
- "threadId": "19:e07c8ddc5bab4c059ea9f11d29b544b6@thread.v2",
- "transactionId": "H8Gpj3NkIU6bXlWw8WPvhQ.2.1.2.1.985333801.1"
- },
- "eventType": "Microsoft.Communication.ChatMessageEditedInThread",
- "dataVersion": "1.0",
- "metadataVersion": "1",
- "eventTime": "2021-02-20T00:59:10.7600061Z"
- }
-]
-```
-
-### Microsoft.Communication.ChatMessageDeletedInThread event
-
-```json
-[
- {
- "id": "17d9c39d-0c58-4ed8-947d-c55959f57f75",
- "topic": "/subscriptions/{subscription-id}/resourcegroups/{group-name}/providers/microsoft.communication/communicationservices/{communication-services-resource-name}",
- "subject": "thread/{thread-id}/sender/{rawId}",
- "data": {
- "deleteTime": "2021-02-20T00:59:10.464+00:00",
- "messageId": "1613782685440",
- "type": "Text",
- "version": "1613782814333",
- "senderDisplayName": "Scott",
- "senderCommunicationIdentifier": {
- "rawId": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-5cbb-38a0-88f7-084822002453",
- "communicationUser": {
- "id": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-5cbb-38a0-88f7-084822002453"
- }
- },
- "composeTime": "2021-02-20T00:58:05.44+00:00",
- "threadId": "19:e07c8ddc5bab4c059ea9f11d29b544b6@thread.v2",
- "transactionId": "HqU6PeK5AkCRSpW8eAbL0A.1.1.2.1.987824181.1"
- },
- "eventType": "Microsoft.Communication.ChatMessageDeletedInThread",
- "dataVersion": "1.0",
- "metadataVersion": "1",
- "eventTime": "2021-02-20T01:00:14.8518034Z"
- }
-]
-```
--
-## Quickstarts and how-tos
-
-| Title | Description |
-| - | - |
-| [How to handle SMS Events in Communication Services](../quickstarts/telephony-sms/handle-sms-events.md) | Handle all SMS events received by your Communication Services resource using a webhook. |
--
-## Next steps
-
-* For an introduction to Azure Event Grid, see [What is Event Grid?](../../event-grid/overview.md)
-* For an introduction to Azure Event Grid concepts, see [Concepts in Event Grid](../../event-grid/concepts.md)
-* For an introduction to Azure Event Grid system topics, see [System topics in Azure Event Grid](../../event-grid/system-topics.md)
communication-services Notifications https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/notifications.md
Azure Communication Services integrates with [Azure Event Grid](https://azure.mi
:::image type="content" source="./media/notifications/acs-events-int.png" alt-text="Diagram showing how Communication Services integrates with Event Grid.":::
-Learn more about [event handling in Azure Communication Services](./event-handling.md).
+Learn more about [event handling in Azure Communication Services](https://docs.microsoft.com/azure/event-grid/event-schema-communication-services).
## Deliver push notifications via Azure Notification Hubs
communication-services Handle Sms Events https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/telephony-sms/handle-sms-events.md
Get started with Azure Communication Services by using Azure Event Grid to handl
## About Azure Event Grid
-[Azure Event Grid](../../../event-grid/overview.md) is a cloud-based eventing service. In this article, you'll learn how to subscribe to events for [communication service events](../../concepts/event-handling.md), and trigger an event to view the result. Typically, you send events to an endpoint that processes the event data and takes actions. In this article, we'll send the events to a web app that collects and displays the messages.
+[Azure Event Grid](../../../event-grid/overview.md) is a cloud-based eventing service. In this article, you'll learn how to subscribe to events for [communication service events](../../../event-grid/event-schema-communication-services.md), and trigger an event to view the result. Typically, you send events to an endpoint that processes the event data and takes actions. In this article, we'll send the events to a web app that collects and displays the messages.
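
Before Event Grid delivers any events, a webhook endpoint must answer the subscription validation handshake. Here is a minimal sketch of such a web app using Flask; the route and port are illustrative assumptions, not part of this quickstart:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/api/events", methods=["POST"])
def on_event():
    for event in request.get_json():
        # Event Grid sends this event first to verify endpoint ownership.
        if event["eventType"] == "Microsoft.EventGrid.SubscriptionValidationEvent":
            return jsonify({"validationResponse": event["data"]["validationCode"]})
        print(f"Received {event['eventType']}: {event['data']}")
    return "", 200

if __name__ == "__main__":
    app.run(port=5000)
```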
## Prerequisites

- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
You can subscribe to specific events to tell Event Grid which of the SMS events
If you're prompted to provide a **System Topic Name**, feel free to provide a unique string. This field has no impact on your experience and is used for internal telemetry purposes.
-Check out the full list of [events supported by Azure Communication Services](../../concepts/event-handling.md).
+Check out the full list of [events supported by Azure Communication Services](https://docs.microsoft.com/azure/event-grid/event-schema-communication-services).
:::image type="content" source="./media/handle-sms-events/select-events-create-eventsub.png" alt-text="Screenshot showing the SMS Received and SMS Delivery Report Received event types being selected.":::
To view event triggers, we must generate events in the first place.
- `SMS Received` events are generated when the Communication Services phone number receives a text message. To trigger an event, just send a message from your phone to the phone number attached to your Communication Services resource.
- `SMS Delivery Report Received` events are generated when you send an SMS to a user using a Communication Services phone number. To trigger an event, you're required to enable `Delivery Report` in the options of the [sent SMS](../telephony-sms/send.md). Try sending a message to your phone with `Delivery Report` enabled, as sketched below. Completing this action incurs a small cost of a few USD cents or less in your Azure account.
-Check out the full list of [events supported by Azure Communication Services](../../concepts/event-handling.md).
+Check out the full list of [events supported by Azure Communication Services](https://docs.microsoft.com/azure/event-grid/event-schema-communication-services).
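
As a sketch of the second trigger, the delivery report flag can be set when sending the message programmatically. The following uses the Azure Communication Services SMS SDK for Python; the connection string and phone numbers are placeholders:

```python
from azure.communication.sms import SmsClient

# Placeholder connection string from your Communication Services resource.
sms_client = SmsClient.from_connection_string("<connection-string>")

responses = sms_client.send(
    from_="+15555555555",         # your Communication Services phone number
    to="+15555555555",            # your own phone, for this test
    message="Testing SMS events",
    enable_delivery_report=True,  # required for SMS Delivery Report Received events
)
for response in responses:
    print(response.message_id, response.successful)
```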
### Receiving SMS events
Once you complete either action above you will notice that `SMS Received` and `S
:::image type="content" source="./media/handle-sms-events/sms-delivery-report-received.png" alt-text="Screenshot showing the Event Grid Schema for an SMS Delivery Report Event.":::
-Learn more about the [event schemas and other eventing concepts](../../concepts/event-handling.md).
+Learn more about the [event schemas and other eventing concepts](https://docs.microsoft.com/azure/event-grid/event-schema-communication-services).
## Clean up resources
In this quickstart, you learned how to consume SMS events. You can receive SMS m
You may also want to:
+ - [Learn about event handling concepts](../../../event-grid/event-schema-communication-services.md)
- [Learn about Event Grid](../../../event-grid/overview.md)
communication-services Get Started Teams Interop https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/voice-video-calling/get-started-teams-interop.md
Title: Quickstart - Teams interop on Azure Communication Services description: In this quickstart, you'll learn how to join a Teams meeting with the Azure Communication Calling SDK. Last updated 03/10/2021
communication-services Calling Hero Sample https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/samples/calling-hero-sample.md
# Get started with the group calling hero sample

[!INCLUDE [Web Calling Hero Sample](./includes/web-calling-hero.md)]
confidential-computing Confidential Nodes Aks Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/confidential-computing/confidential-nodes-aks-get-started.md
Title: 'Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster by using Azure CLI with confidential computing nodes'
-description: Learn to create an AKS cluster with confidential nodes and deploy a hello world app using the Azure CLI.
+description: In this quickstart, you will learn to create an AKS cluster with confidential nodes and deploy a hello world app using the Azure CLI.
Previously updated : 2/25/2020 Last updated : 03/18/2020 + # Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster with confidential computing nodes (DCsv2) using Azure CLI
This quickstart is intended for developers or cluster operators who want to quic
## Overview
-In this quickstart, you'll learn how to deploy an Azure Kubernetes Service (AKS) cluster with confidential computing nodes using the Azure CLI and run a simple hello world application in an enclave. AKS is a managed Kubernetes service that lets you quickly deploy and manage clusters. Read more about AKS [here](../aks/intro-kubernetes.md).
+In this quickstart, you'll learn how to deploy an Azure Kubernetes Service (AKS) cluster with confidential computing nodes using the Azure CLI and run a simple hello world application in an enclave. AKS is a managed Kubernetes service that lets you quickly deploy and manage clusters. To learn more, read the [AKS Introduction](../aks/intro-kubernetes.md) and the [AKS Confidential Nodes Overview](confidential-nodes-aks-overview.md).
> [!NOTE]
> Confidential computing DCsv2 VMs leverage specialized hardware that is subject to higher pricing and region availability. For more information, see the virtual machines page for [available SKUs and supported regions](virtual-machine-solutions.md).
-### Confidential computing node features (DC<x>s-v2)
+### Confidential computing node features (DCsv2)
-1. Linux Worker Nodes supporting Linux Containers
-1. Generation 2 VM with Ubuntu 18.04 Virtual Machines Nodes
-1. Intel SGX-based CPU with Encrypted Page Cache Memory (EPC). Read more [here](./faq.md)
-1. Supporting Kubernetes version 1.16+
-1. Intel SGX DCAP Driver pre-installed on the AKS Nodes. Read more [here](./faq.md)
+1. Linux Worker Nodes supporting Linux Containers.
+1. Generation 2 VM with Ubuntu 18.04 Virtual Machines Nodes.
+1. Intel SGX-based CPU with Enclave Page Cache (EPC) memory. Read more [here](./faq.md).
+1. Support for Kubernetes version 1.16+.
+1. Intel SGX DCAP Driver pre-installed on the AKS Nodes. Read more [here](./faq.md).
-## Deployment prerequisites
-The deployment tutorial requires the below :
+## Prerequisites
-1. An active Azure Subscription. If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin
-1. Azure CLI version 2.0.64 or later installed and configured on your deployment machine (Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](../container-registry/container-registry-get-started-azure-cli.md)
-1. Minimum of six **DC<x>s-v2** cores available in your subscription for use. By default, the VM cores quota for the confidential computing per Azure subscription 8 cores. If you plan to provision a cluster that requires more than 8 cores, follow [these](../azure-portal/supportability/per-vm-quota-requests.md) instructions to raise a quota increase ticket
+This quickstart requires:
+
+1. An active Azure Subscription. If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+1. Azure CLI version 2.0.64 or later installed and configured on your deployment machine. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](../container-registry/container-registry-get-started-azure-cli.md).
+1. A minimum of six **DCsv2** cores available in your subscription. By default, the VM core quota for confidential computing is eight cores per Azure subscription. If you plan to provision a cluster that requires more than eight cores, follow [these](../azure-portal/supportability/per-vm-quota-requests.md) instructions to raise a quota increase ticket. You can check your current usage with the CLI sketch below.
+
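To check how many DCsv2 cores you're currently using in a region, you can query subscription usage from the Azure CLI. A minimal sketch (the exact SKU family name in the output may vary):

```azurecli-interactive
az vm list-usage --location westus2 --query "[?contains(name.value, 'DC')]" --output table
```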
+## Create a new AKS cluster with confidential computing nodes and add-on
-## Creating new AKS cluster with confidential computing nodes and add-on
Follow the instructions below to add confidential computing-capable nodes with the add-on.
-### Step 1: Creating an AKS cluster with system node pool
+### Create an AKS cluster with a system node pool
If you already have an AKS cluster that meets the above requirements, [skip to the existing cluster section](#existing-cluster) to add a new confidential computing node pool.
-First, create a resource group for the cluster using the az group create command. The following example creates a resource group name *myResourceGroup* in the *westus2* region:
+First, create a resource group for the cluster using the [az group create][az-group-create] command. The following example creates a resource group named *myResourceGroup* in the *westus2* region:
```azurecli-interactive
az group create --name myResourceGroup --location westus2
```
-Now create an AKS cluster using the az aks create command.
+Now create an AKS cluster using the [az aks create][az-aks-create] command:
```azurecli-interactive
-# Create a new AKS cluster with system node pool with Confidential Computing addon enabled
az aks create -g myResourceGroup --name myAKSCluster --generate-ssh-keys --enable-addons confcom
```
-The above creates a new AKS cluster with system node pool with the add-on enabled. Now proceed adding a user node of Confidential Computing Nodepool type on AKS (DCsv2)
-### Step 2: Adding confidential computing node pool to AKS cluster
+The above creates a new AKS cluster with a system node pool with the add-on enabled. Next, add a user node pool with confidential computing capabilities to the AKS cluster.
+
+### Add a confidential computing node pool to the AKS cluster
-Run the below command to an user nodepool of `Standard_DC2s_v2` size with 3 nodes. You can choose other supported list of DCsv2 SKUs and regions from [here](../virtual-machines/dcv2-series.md):
+Run the following command to add a user node pool of `Standard_DC2s_v2` size with three nodes. You can choose another SKU from the supported list of [DCsv2 SKUs and regions](../virtual-machines/dcv2-series.md).
```azurecli-interactive
az aks nodepool add --cluster-name myAKSCluster --name confcompool1 --resource-group myResourceGroup --node-vm-size Standard_DC2s_v2
```
-The above command is complete a new node pool with **DC<x>s-v2** should be visible with Confidential computing add-on daemonsets ([SGX Device Plugin](confidential-nodes-aks-overview.md#sgx-plugin)
-
-### Step 3: Verify the node pool and add-on
-Get the credentials for your AKS cluster using the az aks get-credentials command:
+
+After running, a new node pool with **DCsv2** should be visible with confidential computing add-on daemonsets ([SGX Device Plugin](confidential-nodes-aks-overview.md#sgx-plugin)).
+
+### Verify the node pool and add-on
+
+Get the credentials for your AKS cluster using the [az aks get-credentials][az-aks-get-credentials] command:
```azurecli-interactive
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
```
-Verify the nodes are created properly and the SGX-related daemonsets are running on **DC<x>s-v2** node pools using kubectl get pods & nodes command as shown below:
+
+Verify that the nodes were created properly and that the SGX-related daemonsets are running on the **DCsv2** node pool by using the `kubectl get pods` command, as shown below:
```console
$ kubectl get pods --all-namespaces
kube-system sgx-device-plugin-xxxx 1/1 Running
```
-If the output matches to the above, then your AKS cluster is now ready to run confidential applications.
-Go to [Hello World from Enclave](#hello-world) deployment section to test an app in an enclave. Or follow the below instructions to add additional node pools on AKS (AKS supports mixing SGX node pools and non-SGX node pools)
+If the output matches the above, then your AKS cluster is now ready to run confidential applications.
+
+Go to the [Hello World from Enclave](#hello-world) deployment section to test an app in an enclave. Or follow the instructions below to add additional node pools on AKS (AKS supports mixing SGX and non-SGX node pools).
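
For reference, enclave workloads land on the SGX-capable nodes by requesting EPC memory through the resource that the SGX device plugin exposes. Here is a minimal pod sketch, assuming the `kubernetes.azure.com/sgx_epc_mem_in_MiB` resource name surfaced by the confcom add-on and a placeholder image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sgx-workload
spec:
  containers:
  - name: app
    image: <your-enclave-app-image>   # placeholder; use your enclave-enabled image
    resources:
      limits:
        kubernetes.azure.com/sgx_epc_mem_in_MiB: 10   # reserves 10 MiB of EPC
```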
-## Adding confidential computing node pool to existing AKS cluster<a id="existing-cluster"></a>
+## Add a confidential computing node pool to an existing AKS cluster<a id="existing-cluster"></a>
This section assumes you have an AKS cluster running already that meets the criteria listed in the prerequisites section (applies to add-on).
-### Step 1: Enabling the confidential computing AKS add-on on the existing cluster
+### Enable the confidential computing AKS add-on on the existing cluster
-Run the below command to enable the confidential computing add-on
+Run the following command to enable the confidential computing add-on:
```azurecli-interactive
az aks enable-addons --addons confcom --name MyManagedCluster --resource-group MyResourceGroup
```
-### Step 2: Add **DC<x>s-v2** user node pool to the cluster
-
+
+### Add a **DCsv2** user node pool to the cluster
+ > [!NOTE]
-> To use the confidential computing capability your existing AKS cluster need to have at minimum one **DC<x>s-v2** VM SKU based node pool. Learn more on confidential computing DCsv2 VMs SKU's here [available SKUs and supported regions](virtual-machine-solutions.md).
-
- ```azurecli-interactive
-az aks nodepool add --cluster-name myAKSCluster --name confcompool1 --resource-group myResourceGroup --node-count 1 --node-vm-size Standard_DC4s_v2
+> To use the confidential computing capability, your existing AKS cluster needs at least one node pool based on a **DCsv2** VM SKU. To learn more about confidential computing DCsv2 VM SKUs, see [available SKUs and supported regions](virtual-machine-solutions.md).
+
+Run the following command to create a new node pool:
-output node pool added
+```azurecli-interactive
+az aks nodepool add --cluster-name myAKSCluster --name confcompool1 --resource-group myResourceGroup --node-count 1 --node-vm-size Standard_DC4s_v2
+```
-Verify
+Verify that the new node pool named *confcompool1* was created:
+```azurecli-interactive
az aks nodepool list --cluster-name myAKSCluster --resource-group myResourceGroup
```
-the above command should list the recent node pool you added with the name confcompool1.
-### Step 3: Verify that daemonsets are running on confidential node pools
+### Verify that daemonsets are running on confidential node pools
-Login to your existing AKS cluster to perform the below verification.
+Sign in to your existing AKS cluster to perform the following verification.
```console
kubectl get nodes
```
-The output should show the newly added confcompool1 on the AKS cluster.
+
+The output should show the newly added confcompool1 on the AKS cluster. You may also see other daemonsets.
```console
$ kubectl get pods --all-namespaces
kube-system sgx-device-plugin-xxxx 1/1 Running
```
-If the output matches to the above, then your AKS cluster is now ready to run confidential applications. Please follow the below test application deployment.
+
+If the output matches the above, then your AKS cluster is now ready to run confidential applications. Follow the instructions below to deploy a test application.
## Hello World from isolated enclave application <a id="hello-world"></a>
-Create a file named *hello-world-enclave.yaml* and paste the following YAML manifest. This Open Enclave based sample application code can be found in the [Open Enclave project](https://github.com/openenclave/openenclave/tree/master/samples/helloworld). The below deployment assumes you have deployed the addon "confcom".
+Create a file named *hello-world-enclave.yaml* and paste the following YAML manifest. This Open Enclave based sample application code can be found in the [Open Enclave project](https://github.com/openenclave/openenclave/tree/master/samples/helloworld). The following deployment assumes you have deployed the addon "confcom".
```yaml apiVersion: batch/v1
You can confirm that the workload successfully created a Trusted Execution Envir
```console
$ kubectl get jobs -l app=sgx-test
NAME       COMPLETIONS   DURATION   AGE
sgx-test   1/1           1s         23s
```

```console
$ kubectl get pods -l app=sgx-test
NAME             READY   STATUS      RESTARTS   AGE
sgx-test-rchvg   0/1     Completed   0          25s
```

```console
$ kubectl logs -l app=sgx-test
Hello world from the enclave
Enclave called into host to print: Hello World!
```

## Clean up resources
-To remove the associated node pools or delete the AKS cluster, use the below commands:
+To remove the associated node pools or delete the AKS cluster, use the following commands:
-Deleting the AKS cluster
-``````azurecli-interactive
-az aks delete --resource-group myResourceGroup --name myAKSCluster
-```
-Removing the confidential computing node pool
+### Remove the confidential computing node pool
-``````azurecli-interactive
+```azurecli-interactive
az aks nodepool delete --cluster-name myAKSCluster --name myNodePoolName --resource-group myResourceGroup
-``````
+```
+
+### Delete the AKS cluster
+
+```azurecli-interactive
+az aks delete --resource-group myResourceGroup --name myAKSCluster
+```
## Next steps
-Run Python, Node etc. Applications confidentially through confidential containers by visiting [confidential container samples](https://github.com/Azure-Samples/confidential-container-samples).
+* Run Python, Node, etc. applications confidentially through confidential containers by visiting [confidential container samples](https://github.com/Azure-Samples/confidential-container-samples).
+
+* Run Enclave aware applications by visiting [Enclave Aware Azure Container Samples](https://github.com/Azure-Samples/confidential-computing/blob/main/containersamples/).
-Run Enclave aware applications by visiting [Enclave Aware Azure Container Samples](https://github.com/Azure-Samples/confidential-computing/blob/main/containersamples/).
+<!-- LINKS -->
+[az-group-create]: /cli/azure/group#az_group_create
+[az-aks-create]: /cli/azure/aks#az_aks_create
+[az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials
cost-management-billing Manage Automation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/costs/manage-automation.md
Title: Manage Azure costs with automation
description: This article explains how you can manage Azure costs with automation. Previously updated : 03/08/2021 Last updated : 03/19/2021
Consider using the [Usage Details API](/rest/api/consumption/usageDetails) if yo
The [Usage Details API](/rest/api/consumption/usageDetails) provides an easy way to get raw, unaggregated cost data that corresponds to your Azure bill. The API is useful when your organization needs a programmatic data retrieval solution. Consider using the API if you're looking to analyze smaller cost data sets. However, you should use other solutions identified previously if you have larger datasets. The data in Usage Details is provided on a per meter basis, per day. It's used when calculating your monthly bill. The general availability (GA) version of the APIs is `2019-10-01`. Use `2019-04-01-preview` to access the preview version for reservation and Azure Marketplace purchases with the APIs.
-If you want to get large amounts of exported data on a regular basis, see [Retrieve large cost datasets recurringly with exports](ingest-azure-usage-at-scale.md).
+If you want to get large amounts of exported data regularly, see [Retrieve large cost datasets recurringly with exports](ingest-azure-usage-at-scale.md).
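
Here is a minimal request sketch in Python, assuming you already hold an Azure Resource Manager bearer token (for example, from the `azure-identity` library) and a subscription scope:

```python
import requests

SCOPE = "subscriptions/00000000-0000-0000-0000-000000000000"  # placeholder scope
TOKEN = "<bearer-token>"  # acquired separately via Azure AD

url = (
    f"https://management.azure.com/{SCOPE}"
    "/providers/Microsoft.Consumption/usageDetails"
    "?api-version=2019-10-01"
)
response = requests.get(url, headers={"Authorization": f"Bearer {TOKEN}"})
response.raise_for_status()
page = response.json()
for record in page.get("value", []):
    print(record["name"])
# Follow page["nextLink"], when present, to page through large result sets.
```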
### Usage Details API suggestions
Usage Details price behavior - Usage files show scaled information that may not
- Reservations - Rounding that occurs during calculation – Rounding takes into account the consumed quantity, tiered/included quantity pricing, and the scaled unit price.
+### A single resource might have multiple records for a single day
+
+Azure resource providers emit usage and charges to the billing system and populate the `Additional Info` field of the usage records. Occasionally, resource providers might emit usage for a given day and stamp the records with different datacenters in the `Additional Info` field of the usage records. It can cause multiple records for a meter/resource to be present in your usage file for a single day. In that situation, you aren't overcharged. The multiple records represent the full cost of the meter for the resource on that day.
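
When analyzing such data, aggregate the split records instead of treating each row as the day's total. Here is a minimal sketch with pandas, assuming a usage-details CSV export with `date`, `meterId`, `resourceId`, and `costInBillingCurrency` columns (actual column names vary by account type and file version):

```python
import pandas as pd

usage = pd.read_csv("usage-details.csv")  # placeholder export file

# Sum the split records so each meter/resource/day appears once, at full cost.
daily_cost = (
    usage.groupby(["date", "meterId", "resourceId"], as_index=False)
         ["costInBillingCurrency"]
         .sum()
)
print(daily_cost.head())
```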
+
## Example Usage Details API requests

The following example requests are used by Microsoft customers to address common scenarios that you might come across.
Languages supported by a culture code:
| pl-pl | Polish (Poland) |
| tr-tr | Turkish (Turkey) |
| da-dk | Danish (Denmark) |
-| dn-gb | English (United Kingdom) |
+| en-gb | English (United Kingdom) |
| hu-hu | Hungarian (Hungary) |
-| nb-bo | Norwegian Bokmal (Norway) |
+| nb-no | Norwegian Bokmal (Norway) |
| nl-nl | Dutch (Netherlands) |
| pt-pt | Portuguese (Portugal) |
| sv-se | Swedish (Sweden) |
To enable a consistent experience for all Cost Management subscribers, Cost Mana
- [Analyze Azure costs with the Power BI template app](./analyze-cost-data-azure-cost-management-power-bi-template-app.md).
- [Create and manage exported data](./tutorial-export-acm-data.md) with Exports.
-- Learn more about the [Usage Details API](/rest/api/consumption/usageDetails).
+- Learn more about the [Usage Details API](/rest/api/consumption/usageDetails).
event-grid Consume Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/consume-private-endpoints.md
To deliver events to event hubs in your Event Hubs namespace using managed ident
To deliver events to Service Bus queues or topics in your Service Bus namespace using managed identity, follow these steps:

1. [Enable system-assigned identity for a topic or a domain](managed-service-identity.md#create-a-custom-topic-or-domain-with-an-identity).
-1. Add the identity to the [Azure Service Bus Data Sender](/service-bus-messaging/service-bus-managed-service-identity.md#azure-built-in-roles-for-azure-service-bus) role on the Service Bus namespace
+1. Add the identity to the [Azure Service Bus Data Sender](/service-bus-messaging/service-bus-managed-service-identity#azure-built-in-roles-for-azure-service-bus) role on the Service Bus namespace.
1. [Enable the **Allow trusted Microsoft services to bypass this firewall** setting on your Service Bus namespace](../service-bus-messaging/service-bus-service-endpoints.md#trusted-microsoft-services).
1. [Configure the event subscription](managed-service-identity.md#create-event-subscriptions-that-use-an-identity) that uses a Service Bus queue or topic as an endpoint to use the system-assigned identity.
event-grid Event Schema Communication Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/event-schema-communication-services.md
Last updated 02/11/2021
-# Azure Communication Services as an Event Grid source
+# Event Handling in Azure Communication Services
-> [!IMPORTANT]
-> Azure Communication Services is currently in public preview.
-> This preview version is provided without a service-level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+Azure Communication Services integrates with [Azure Event Grid](https://azure.microsoft.com/services/event-grid/) to deliver real-time event notifications in a reliable, scalable, and secure manner. The purpose of this article is to help you configure your applications to listen to Communication Services events. For example, you may want to update a database, create a work item, and deliver a push notification whenever an SMS message is received by a phone number associated with your Communication Services resource.
-This article provides the properties and schema for Azure Communication Services events. For an introduction to event schemas, see [Azure Event Grid event schema](event-schema.md).
+Azure Event Grid is a fully managed event routing service, which uses a publish-subscribe model. Event Grid has built-in support for Azure services like [Azure Functions](../azure-functions/functions-overview.md) and [Azure Logic Apps](../logic-apps/logic-apps-overview.md). It can deliver event alerts to non-Azure services using webhooks. For a complete list of the event handlers that Event Grid supports, see [An introduction to Azure Event Grid](overview.md).
-## Available event types
-Event grid uses [event subscriptions](./concepts.md#event-subscriptions) to route event messages to subscribers.
+> [!NOTE]
+> To learn more about how data residency relates to event handling, visit the [Data Residency conceptual documentation](../communication-services/concepts/privacy.md).
+
+## Event types
+
+Event Grid uses [event subscriptions](concepts.md#event-subscriptions) to route event messages to subscribers.
Azure Communication Services emits the following event types:
| Event type | Description |
| -- | - |
| Microsoft.Communication.SMSReceived | Published when an SMS is received by a phone number associated with the Communication Service. |
| Microsoft.Communication.SMSDeliveryReportReceived | Published when a delivery report is received for an SMS sent by the Communication Service. |
-| Microsoft.Communication.ChatMessageReceived* | Published when a message is received for a user in a chat thread that the user is member of. |
-| Microsoft.Communication.ChatMessageEdited* | Published when a message is edited in a chat thread that the user is member of. |
-| Microsoft.Communication.ChatMessageDeleted* | Published when a message is deleted in a chat thread that the user is member of. |
+| Microsoft.Communication.ChatMessageReceived | Published when a message is received for a user in a chat thread that the user is a member of. |
+| Microsoft.Communication.ChatMessageEdited | Published when a message is edited in a chat thread that the user is a member of. |
+| Microsoft.Communication.ChatMessageDeleted | Published when a message is deleted in a chat thread that the user is a member of. |
| Microsoft.Communication.ChatThreadCreatedWithUser | Published when the user is added as a member at the time a chat thread is created. |
| Microsoft.Communication.ChatThreadWithUserDeleted | Published when a chat thread that the user is a member of is deleted. |
| Microsoft.Communication.ChatThreadPropertiesUpdatedPerUser | Published when the properties of a chat thread that the user is a member of are updated. |
| Microsoft.Communication.ChatMemberAddedToThreadWithUser | Published when the user is added as a member to a chat thread. |
| Microsoft.Communication.ChatMemberRemovedFromThreadWithUser | Published when the user is removed from a chat thread. |
+| Microsoft.Communication.ChatParticipantAddedToThreadWithUser | Published for a user when a new participant is added to a chat thread that the user is part of. |
+| Microsoft.Communication.ChatParticipantRemovedFromThreadWithUser | Published for a user when a participant is removed from a chat thread that the user is part of. |
+| Microsoft.Communication.ChatThreadCreated | Published when a chat thread is created. |
+| Microsoft.Communication.ChatThreadDeleted | Published when a chat thread is deleted. |
+| Microsoft.Communication.ChatThreadParticipantAdded | Published when a new participant is added to a chat thread. |
+| Microsoft.Communication.ChatThreadParticipantRemoved | Published when a participant is removed from a chat thread. |
+| Microsoft.Communication.ChatMessageReceivedInThread | Published when a message is received in a chat thread. |
+| Microsoft.Communication.ChatThreadPropertiesUpdated | Published when a chat thread's properties, like the topic, are updated. |
+| Microsoft.Communication.ChatMessageEditedInThread | Published when a message is edited in a chat thread. |
+| Microsoft.Communication.ChatMessageDeletedInThread | Published when a message is deleted in a chat thread. |
+
+You can use the Azure portal or Azure CLI to subscribe to events emitted by your Communication Services resource. Get started with handling events by reading [How to handle SMS Events in Communication Services](../communication-services/quickstarts/telephony-sms/handle-sms-events.md).
-* Make sure you provide "sender name" in your "send message" API calls for these events to get triggered.
## Event subjects
-The `subject` field of all Communication Services events identifies the user, phone number, or entity that is targeted by the event. Common prefixes are used to allow simple [Event Grid Filtering](./event-filtering.md).
+The `subject` field of all Communication Services events identifies the user, phone number, or entity that is targeted by the event. Common prefixes are used to allow simple [Event Grid filtering](event-filtering.md).
| Subject Prefix | Communication Service Entity |
| - | - |
The following example shows a filter for all SMS messages and delivery reports sent to a phone number owned by your resource.
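Filters like that are configured on the event subscription itself; the shared prefixes also make client-side routing straightforward. A minimal sketch, assuming the `Azure.Messaging.EventGrid` envelope type and the `thread/` prefix used by the chat events below:

```csharp
using System;
using Azure.Messaging.EventGrid;

// Mirrors a subjectBeginsWith filter: only chat-thread events pass through.
static bool IsChatThreadEvent(EventGridEvent egEvent) =>
    egEvent.Subject.StartsWith("thread/", StringComparison.Ordinal);
```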
## Sample event responses
-When an event is triggered, the Event Grid service sends data about that event to subscribing endpoints. This section contains an example of what that data would look like for each event.
+When an event is triggered, the Event Grid service sends data about that event to subscribing endpoints.
-# [Event Grid event schema](#tab/event-grid-event-schema)
+This section contains an example of what that data would look like for each event.
### Microsoft.Communication.SMSDeliveryReportReceived event
When an event is triggered, the Event Grid service sends data about that event to subscribing endpoints.

### Microsoft.Communication.ChatMessageReceived event
```json
[{
- "id": "c13afb5f-d975-4296-a8ef-348c8fc496ee",
- "topic": "/subscriptions/{subscription-id}/resourceGroups/{group-name}/providers/Microsoft.Communication/communicationServices/{communication-services-resource-name}",
- "subject": "thread/{thread-id}/sender/{id-of-message-sender}/recipient/{id-of-message-recipient}",
- "data": {
- "messageBody": "Welcome to Azure Communication Services",
- "messageId": "1600389507167",
- "senderId": "8:acs:fac4607d-d2d0-40e5-84df-6f32ebd1251a_00000005-3e0d-e5aa-0e04-343a0d00037c",
- "senderDisplayName": "John",
- "composeTime": "2020-09-18T00:38:27.167Z",
- "type": "Text",
- "version": 1600389507167,
- "recipientId": "8:acs:fac4607d-d2d0-40e5-84df-6f32ebd1251a_00000005-3e1a-3090-6a0b-343a0d000409",
- "transactionId": "WGW1YmwRzkupk0UI0QA9ZA.1.1.1.1.1797783722.1.9",
- "threadId": "19:46df844a4c064bfaa2b3b30e385d1018@thread.v2"
- },
- "eventType": "Microsoft.Communication.ChatMessageReceived",
- "dataVersion": "1.0",
- "metadataVersion": "1",
- "eventTime": "2020-09-18T00:38:28.0946757Z"
-}
+ "id": "02272459-badb-4e2e-b538-4cb8a2f71da6",
+ "topic": "/subscriptions/{subscription-id}/resourceGroups/{group-name}/providers/Microsoft.Communication/communicationServices/{communication-services-resource-name}",
+ "subject": "thread/{thread-id}/sender/{rawId}/recipient/{rawId}",
+ "data": {
+ "messageBody": "Welcome to Azure Communication Services",
+ "messageId": "1613694358927",
+ "senderId": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-578d-7caf-07fd-084822001724",
+ "senderCommunicationIdentifier": {
+ "rawId": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-578d-7caf-07fd-084822001724",
+ "communicationUser": {
+ "id": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-578d-7caf-07fd-084822001724"
+ }
+ },
+ "senderDisplayName": "Jhon",
+ "composeTime": "2021-02-19T00:25:58.927Z",
+ "type": "Text",
+ "version": 1613694358927,
+ "recipientId": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-578d-7d05-83fe-084822000f6d",
+ "recipientCommunicationIdentifier": {
+ "rawId": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-578d-7d05-83fe-084822000f6d",
+ "communicationUser": {
+ "id": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-578d-7d05-83fe-084822000f6d"
+ }
+ },
+ "transactionId": "oh+LGB2dUUadMcTAdRWQxQ.1.1.1.1.1827536918.1.7",
+ "threadId": "19:6e5d6ca1d75044a49a36a7965ec4a906@thread.v2"
+ },
+ "eventType": "Microsoft.Communication.ChatMessageReceived",
+ "dataVersion": "1.0",
+ "metadataVersion": "1",
+ "eventTime": "2021-02-19T00:25:59.9436666Z"
+ }
]
```
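Rather than reading the raw JSON, a .NET subscriber can map this payload to the SDK's typed system-event model. A sketch, assuming the `Azure.Messaging.EventGrid` library and an `egEvent` parsed as in the earlier sketch:

```csharp
using System;
using Azure.Messaging.EventGrid;
using Azure.Messaging.EventGrid.SystemEvents;

static void HandleChatMessage(EventGridEvent egEvent)
{
    // TryGetSystemEventData maps known eventType values to typed classes.
    if (egEvent.TryGetSystemEventData(out object systemEvent) &&
        systemEvent is AcsChatMessageReceivedEventData chatMessage)
    {
        Console.WriteLine($"{chatMessage.SenderDisplayName}: {chatMessage.MessageBody}");
        Console.WriteLine($"Thread: {chatMessage.ThreadId}");
    }
}
```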
When an event is triggered, the Event Grid service sends data about that event to subscribing endpoints.

### Microsoft.Communication.ChatMessageEdited event
```json
[{
- "id": "18247662-e94a-40cc-8d2f-f7357365309e",
- "topic": "/subscriptions/{subscription-id}/resourceGroups/{group-name}/providers/Microsoft.Communication/communicationServices/{communication-services-resource-name}",
- "subject": "thread/19:6d20c2f921cd402ead7d1b31b0d030cd@thread.v2/sender/8:acs:5354158b-17b7-489c-9380-95d8821ff76b_00000005-3e5f-1bc6-f40f-343a0d0003fe/recipient/8:acs:5354158b-17b7-489c-9380-95d8821ff76b_00000005-3e5f-1bc6-f40f-343a0d0003f0",
- "data": {
- "editTime": "2020-09-18T00:48:47.361Z",
- "messageBody": "Let's Chat about new communication services.",
- "messageId": "1600390097873",
- "senderId": "8:acs:5354158b-17b7-489c-9380-95d8821ff76b_00000005-3e5f-1bc6-f40f-343a0d0003fe",
- "senderDisplayName": "Bob(Admin)",
- "composeTime": "2020-09-18T00:48:17.873Z",
- "type": "Text",
- "version": 1600390127361,
- "recipientId": "8:acs:5354158b-17b7-489c-9380-95d8821ff76b_00000005-3e5f-1bc6-f40f-343a0d0003f0",
- "transactionId": "bbopOa1JZEW5NDDFLgH1ZQ.2.1.2.1.1822032097.1.5",
- "threadId": "19:6d20c2f921cd402ead7d1b31b0d030cd@thread.v2"
- },
- "eventType": "Microsoft.Communication.ChatMessageEdited",
- "dataVersion": "1.0",
- "metadataVersion": "1",
- "eventTime": "2020-09-18T00:48:48.037823Z"
-}]
+ "id": "93fc1037-b645-4eb0-a0f2-d7bb3ba6e060",
+ "topic": "/subscriptions/{subscription-id}/resourceGroups/{group-name}/providers/Microsoft.Communication/communicationServices/{communication-services-resource-name}",
+ "subject": "thread/{thread-id}/sender/{rawId}/recipient/{rawId}",
+ "data": {
+ "editTime": "2021-02-19T00:28:20.784Z",
+ "messageBody": "Let's Chat about new communication services.",
+ "messageId": "1613694357917",
+ "senderId": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-578d-7caf-07fd-084822001724",
+ "senderCommunicationIdentifier": {
+ "rawId": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-578d-7caf-07fd-084822001724",
+ "communicationUser": {
+ "id": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-578d-7caf-07fd-084822001724"
+ }
+ },
+ "senderDisplayName": "Bob(Admin)",
+ "composeTime": "2021-02-19T00:25:57.917Z",
+ "type": "Text",
+ "version": 1613694500784,
+ "recipientId": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-578d-7d60-83fe-084822000f6f",
+ "recipientCommunicationIdentifier": {
+ "rawId": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-578d-7d60-83fe-084822000f6f",
+ "communicationUser": {
+ "id": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-578d-7d60-83fe-084822000f6f"
+ }
+ },
+ "transactionId": "1mL4XZH2gEecu/alk9tOtw.2.1.2.1.1833042153.1.7",
+ "threadId": "19:6e5d6ca1d75044a49a36a7965ec4a906@thread.v2"
+ },
+ "eventType": "Microsoft.Communication.ChatMessageEdited",
+ "dataVersion": "1.0",
+ "metadataVersion": "1",
+ "eventTime": "2021-02-19T00:28:21.7456718Z"
+ }]
```
### Microsoft.Communication.ChatMessageDeleted event
```json
[{
- "id": "08034616-cf11-4fc2-b402-88963b93d083",
- "topic": "/subscriptions/{subscription-id}/resourceGroups/{group-name}/providers/Microsoft.Communication/communicationServices/{communication-services-resource-name}",
- "subject": "thread/19:6d20c2f921cd402ead7d1b31b0d030cd@thread.v2/sender/8:acs:5354158b-17b7-489c-9380-95d8821ff76b_00000005-3e5f-1bc6-f40f-343a0d0003fe/recipient/8:acs:5354158b-17b7-489c-9380-95d8821ff76b_00000005-3e5f-1bc6-f40f-343a0d0003f0",
- "data": {
- "deleteTime": "2020-09-18T00:48:47.361Z",
- "messageId": "1600390099195",
- "senderId": "8:acs:5354158b-17b7-489c-9380-95d8821ff76b_00000005-3e5f-1bc6-f40f-343a0d0003fe",
- "senderDisplayName": "Bob(Admin)",
- "composeTime": "2020-09-18T00:48:19.195Z",
- "type": "Text",
- "version": 1600390152154,
- "recipientId": "8:acs:5354158b-17b7-489c-9380-95d8821ff76b_00000005-3e5f-1bc6-f40f-343a0d0003f0",
- "transactionId": "mAxUjeTsG06NpObXkFcjVQ.1.1.2.1.1823015063.1.5",
- "threadId": "19:6d20c2f921cd402ead7d1b31b0d030cd@thread.v2"
- },
- "eventType": "Microsoft.Communication.ChatMessageDeleted",
- "dataVersion": "1.0",
- "metadataVersion": "1",
- "eventTime": "2020-09-18T00:49:12.6698791Z"
-}]
+ "id": "23cfcc13-33f2-4ae1-8d23-b5015b05302b",
+ "topic": "/subscriptions/{subscription-id}/resourceGroups/{group-name}/providers/Microsoft.Communication/communicationServices/{communication-services-resource-name}",
+ "subject": "thread/{thread-id}/sender/{rawId}/recipient/{rawId}",
+ "data": {
+ "deleteTime": "2021-02-19T00:43:10.14Z",
+ "messageId": "1613695388152",
+ "senderId": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-578d-7d07-83fe-084822000f6e",
+ "senderCommunicationIdentifier": {
+ "rawId": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-578d-7d07-83fe-084822000f6e",
+ "communicationUser": {
+ "id": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-578d-7d07-83fe-084822000f6e"
+ }
+ },
+ "senderDisplayName": "Bob(Admin)",
+ "composeTime": "2021-02-19T00:43:08.152Z",
+ "type": "Text",
+ "version": 1613695390361,
+ "recipientId": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-578d-7d60-83fe-084822000f6f",
+ "recipientCommunicationIdentifier": {
+ "rawId": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-578d-7d60-83fe-084822000f6f",
+ "communicationUser": {
+ "id": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-578d-7d60-83fe-084822000f6f"
+ }
+ },
+ "transactionId": "fFs4InlBn0O/0WyhfQZVSQ.1.1.2.1.1867776045.1.4",
+ "threadId": "19:48899258eec941e7a281e03edc8f4964@thread.v2"
+ },
+ "eventType": "Microsoft.Communication.ChatMessageDeleted",
+ "dataVersion": "1.0",
+ "metadataVersion": "1",
+ "eventTime": "2021-02-19T00:43:10.9982947Z"
+ }]
```
### Microsoft.Communication.ChatThreadCreatedWithUser event
```json
[{
- "id": "06c7c381-bb0a-4fff-aedd-919df1d52137",
- "topic": "/subscriptions/{subscription-id}/resourceGroups/{group-name}/providers/Microsoft.Communication/communicationServices/{communication-services-resource-name}",
- "subject": "thread/19:7bdf5504a23f41a79d1bd472dd40044a@thread.v2/createdBy/8:acs:73551687-f8c8-48a7-bf06-d8263f15b02a_00000005-3e5f-1bc6-f40f-343a0d0003fe/recipient/8:acs:73551687-f8c8-48a7-bf06-d8263f15b02a_00000005-3e5f-1bc6-f40f-343a0d0003f0",
- "data": {
- "createdBy": "8:acs:73551687-f8c8-48a7-bf06-d8263f15b02a_06014f-6001fc107f",
- "properties": {
- "topic": "Chat about new commuication services",
- },
- "members": [
- {
- "displayName": "Bob",
- "memberId": "8:acs:73551687-f8c8-48a7-bf06-d8263f15b02a_00000005-3e5f-1bc6-f40f-343a0d0003f0"
+ "id": "eba02b2d-37bf-420e-8656-3a42ef74c435",
+ "topic": "/subscriptions/{subscription-id}/resourceGroups/{group-name}/providers/Microsoft.Communication/communicationServices/{communication-services-resource-name}",
+ "subject": "thread/{thread-id}/createdBy/rawId/recipient/rawId",
+ "data": {
+ "createdBy": "8:acs:3d703c91-9657-4b3f-b19c-ef9d53f99710_00000008-576c-286d-e1fe-0848220013b9",
+ "createdByCommunicationIdentifier": {
+ "rawId": "8:acs:3d703c91-9657-4b3f-b19c-ef9d53f99710_00000008-576c-286d-e1fe-0848220013b9",
+ "communicationUser": {
+ "id": "8:acs:3d703c91-9657-4b3f-b19c-ef9d53f99710_00000008-576c-286d-e1fe-0848220013b9"
+ }
},
- {
- "displayName": "John",
- "memberId": "8:acs:73551687-f8c8-48a7-bf06-d8263f15b02a_00000005-3e5f-1bc6-f40f-343a0d0003f1"
- }
- ],
- "createTime": "2020-09-17T22:06:09.988Z",
- "version": 1600380369988,
- "recipientId": "8:acs:73551687-f8c8-48a7-bf06-d8263f15b02a_00000005-3e5f-1bc6-f40f-343a0d0003f0",
- "transactionId": "9ZxrGXVXCkOTygd5iwsvAQ.1.1.1.1.1440874720.1.1",
- "threadId": "19:7bdf5504a23f41a79d1bd472dd40044a@thread.v2"
- },
- "eventType": "Microsoft.Communication.ChatThreadCreatedWithUser",
- "dataVersion": "1.0",
- "metadataVersion": "1",
- "eventTime": "2020-09-17T22:06:10.3235137Z"
-}]
+ "properties": {
+ "topic": "Chat about new commuication services"
+ },
+ "members": [
+ {
+ "displayName": "Bob",
+ "memberId": "8:acs:3d703c91-9657-4b3f-b19c-ef9d53f99710_00000008-576c-286d-e1fe-0848220013b9"
+ },
+ {
+ "displayName": "John",
+ "memberId": "8:acs:3d703c91-9657-4b3f-b19c-ef9d53f99710_00000008-576c-289b-07fd-0848220015ea"
+ }
+ ],
+ "participants": [
+ {
+ "displayName": "Bob",
+ "participantCommunicationIdentifier": {
+ "rawId": "8:acs:3d703c91-9657-4b3f-b19c-ef9d53f99710_00000008-576c-286d-e1fe-0848220013b9",
+ "communicationUser": {
+ "id": "8:acs:3d703c91-9657-4b3f-b19c-ef9d53f99710_00000008-576c-286d-e1fe-0848220013b9"
+ }
+ }
+ },
+ {
+ "displayName": "John",
+ "participantCommunicationIdentifier": {
+ "rawId": "8:acs:3d703c91-9657-4b3f-b19c-ef9d53f99710_00000008-576c-289b-07fd-0848220015ea",
+ "communicationUser": {
+ "id": "8:acs:3d703c91-9657-4b3f-b19c-ef9d53f99710_00000008-576c-289b-07fd-0848220015ea"
+ }
+ }
+ }
+ ],
+ "createTime": "2021-02-18T23:47:26.91Z",
+ "version": 1613692046910,
+ "recipientId": "8:acs:3d703c91-9657-4b3f-b19c-ef9d53f99710_00000008-576c-286e-84f5-08482200181c",
+ "recipientCommunicationIdentifier": {
+ "rawId": "8:acs:3d703c91-9657-4b3f-b19c-ef9d53f99710_00000008-576c-286e-84f5-08482200181c",
+ "communicationUser": {
+ "id": "8:acs:3d703c91-9657-4b3f-b19c-ef9d53f99710_00000008-576c-286e-84f5-08482200181c"
+ }
+ },
+ "transactionId": "zbZt+9h/N0em+XCW2QvyIA.1.1.1.1.1737228330.0.1737490483.1.6",
+ "threadId": "19:1d594fb1eeb14566903cbc5decb5bf5b@thread.v2"
+ },
+ "eventType": "Microsoft.Communication.ChatThreadCreatedWithUser",
+ "dataVersion": "1.0",
+ "metadataVersion": "1",
+ "eventTime": "2021-02-18T23:47:34.7437103Z"
+ }]
```
### Microsoft.Communication.ChatThreadWithUserDeleted event
```json
[{
- "id": "7f4fa31b-e95e-428b-a6e8-53e2553620ad",
- "topic":"/subscriptions/{subscription-id}/resourceGroups/{group-name}/providers/Microsoft.Communication/communicationServices/{communication-services-resource-name}",
- "subject": "thread/19:6d20c2f921cd402ead7d1b31b0d030cd@thread.v2/deletedBy/8:acs:5354158b-17b7-489c-9380-95d8821ff76b_00000005-3e5f-1bc6-f40f-343a0d0003fe/recipient/8:acs:5354158b-17b7-489c-9380-95d8821ff76b_00000005-3e5f-1bc6-f40f-343a0d0003f0",
- "data": {
- "deletedBy": "8:acs:5354158b-17b7-489c-9380-95d8821ff76b_00000005-3e5f-1bc6-f40f-343a0d0003fe",
- "deleteTime": "2020-09-18T00:49:26.3694459Z",
- "createTime": "2020-09-18T00:46:41.559Z",
- "version": 1600390071625,
- "recipientId": "8:acs:5354158b-17b7-489c-9380-95d8821ff76b_00000005-3e5f-1bc6-f40f-343a0d0003f0",
- "transactionId": "MoZlSM2j7kSD2b5X8bjH7Q.1.1.2.1.1823539230.1.1",
- "threadId": "19:6d20c2f921cd402ead7d1b31b0d030cd@thread.v2"
- },
- "eventType": "Microsoft.Communication.ChatThreadWithUserDeleted",
- "dataVersion": "1.0",
- "metadataVersion": "1",
- "eventTime": "2020-09-18T00:49:26.4269056Z"
-}]
+ "id": "f5d6750c-c6d7-4da8-bb05-6f3fca6c7295",
+ "topic": "/subscriptions/{subscription-id}/resourceGroups/{group-name}/providers/Microsoft.Communication/communicationServices/{communication-services-resource-name}",
+ "subject": "thread/{thread-id}/deletedBy/{rawId}/recipient/{rawId}",
+ "data": {
+ "deletedBy": "8:acs:3d703c91-9657-4b3f-b19c-ef9d53f99710_00000008-5772-6473-83fe-084822000e21",
+ "deletedByCommunicationIdentifier": {
+ "rawId": "8:acs:3d703c91-9657-4b3f-b19c-ef9d53f99710_00000008-5772-6473-83fe-084822000e21",
+ "communicationUser": {
+ "id": "8:acs:3d703c91-9657-4b3f-b19c-ef9d53f99710_00000008-5772-6473-83fe-084822000e21"
+ }
+ },
+ "deleteTime": "2021-02-18T23:57:51.5987591Z",
+ "createTime": "2021-02-18T23:54:15.683Z",
+ "version": 1613692578672,
+ "recipientId": "8:acs:3d703c91-9657-4b3f-b19c-ef9d53f99710_00000008-5772-647b-e1fe-084822001416",
+ "recipientCommunicationIdentifier": {
+ "rawId": "8:acs:3d703c91-9657-4b3f-b19c-ef9d53f99710_00000008-5772-647b-e1fe-084822001416",
+ "communicationUser": {
+ "id": "8:acs:3d703c91-9657-4b3f-b19c-ef9d53f99710_00000008-5772-647b-e1fe-084822001416"
+ }
+ },
+ "transactionId": "mrliWVUndEmLwkZbeS5KoA.1.1.2.1.1761607918.1.6",
+ "threadId": "19:5870b8f021d74fd786bf5aeb095da291@thread.v2"
+ },
+ "eventType": "Microsoft.Communication.ChatThreadWithUserDeleted",
+ "dataVersion": "1.0",
+ "metadataVersion": "1",
+ "eventTime": "2021-02-18T23:57:52.1597234Z"
+ }]
```
-### Microsoft.Communication.ChatThreadPropertiesUpdatedPerUser event
+### Microsoft.Communication.ChatParticipantAddedToThreadWithUser event
+```json
+[{
+ "id": "049a5a7f-6cd7-43c1-b352-df9e9e6146d1",
+ "topic": "/subscriptions/{subscription-id}/resourceGroups/{group-name}/providers/Microsoft.Communication/communicationServices/{communication-services-resource-name}",
+ "subject": "thread/{thread-id}/participantAdded/{rawId}/recipient/{rawId}",
+ "data": {
+ "time": "2021-02-25T06:37:29.9232485Z",
+ "addedByCommunicationIdentifier": {
+ "rawId": "8:acs:0a420b29-555c-4f6b-841e-de8059893bb9_00000008-77c9-8767-1655-373a0d00885d",
+ "communicationUser": {
+ "id": "8:acs:0a420b29-555c-4f6b-841e-de8059893bb9_00000008-77c9-8767-1655-373a0d00885d"
+ }
+ },
+ "participantAdded": {
+ "displayName": "John Smith",
+ "participantCommunicationIdentifier": {
+ "rawId": "8:acs:0a420b29-555c-4f6b-841e-de8059893bb9_00000008-77c9-8785-1655-373a0d00885f",
+ "communicationUser": {
+ "id": "8:acs:0a420b29-555c-4f6b-841e-de8059893bb9_00000008-77c9-8785-1655-373a0d00885f"
+ }
+ }
+ },
+ "recipientCommunicationIdentifier": {
+ "rawId": "8:acs:0a420b29-555c-4f6b-841e-de8059893bb9_00000008-77c9-8781-1655-373a0d00885e",
+ "communicationUser": {
+ "id": "8:acs:0a420b29-555c-4f6b-841e-de8059893bb9_00000008-77c9-8781-1655-373a0d00885e"
+ }
+ },
+ "createTime": "2021-02-25T06:37:17.371Z",
+ "version": 1614235049907,
+ "transactionId": "q7rr9by6m0CiGiQxKdSO1w.1.1.1.1.1473446055.1.6",
+ "threadId": "19:f1400e1c542f4086a606b52ad20cd0bd@thread.v2"
+ },
+ "eventType": "Microsoft.Communication.ChatParticipantAddedToThreadWithUser",
+ "dataVersion": "1.0",
+ "metadataVersion": "1",
+ "eventTime": "2021-02-25T06:37:31.4880091Z"
+ }]
+```
+### Microsoft.Communication.ChatParticipantRemovedFromThreadWithUser event
```json
[{
- "id": "47a66834-57d7-4f77-9c7d-676d45524982",
- "topic": "/subscriptions/{subscription-id}/resourceGroups/{group-name}/providers/Microsoft.Communication/communicationServices/{communication-services-resource-name}",
- "subject": "thread/19:a33a128babf04431b7fe8cbca82f4238@thread.v2/editedBy/8:acs:fac4607d-d2d0-40e5-84df-6f32ebd1251a_00000005-3e88-2b7f-ac00-343a0d0005a8/recipient/8:acs:fac4607d-d2d0-40e5-84df-6f32ebd1251a_00000005-3e88-15fa-ac00-343a0d0005a7",
- "data": {
- "editedBy": "8:acs:fac4607d-d2d0-40e5-84df-6f32ebd1251a_00000005-3e88-2b7f-ac00-343a0d0005a8",
- "editTime": "2020-09-18T00:40:38.4914428Z",
- "properties": {
- "topic": "Communication in Azure"
+ "id": "e8a4df24-799d-4c53-94fd-1e05703a4549",
+ "topic": "/subscriptions/{subscription-id}/resourceGroups/{group-name}/providers/Microsoft.Communication/communicationServices/{communication-services-resource-name}",
+ "subject": "thread/{thread-id}/participantRemoved/{rawId}/recipient/{rawId}",
+ "data": {
+ "time": "2021-02-25T06:40:20.3564556Z",
+ "removedByCommunicationIdentifier": {
+ "rawId": "8:acs:0a420b29-555c-4f6b-841e-de8059893bb9_00000008-77c9-8767-1655-373a0d00885d",
+ "communicationUser": {
+ "id": "8:acs:0a420b29-555c-4f6b-841e-de8059893bb9_00000008-77c9-8767-1655-373a0d00885d"
+ }
+ },
+ "participantRemoved": {
+ "displayName": "Bob",
+ "participantCommunicationIdentifier": {
+ "rawId": "8:acs:0a420b29-555c-4f6b-841e-de8059893bb9_00000008-77c9-8785-1655-373a0d00885f",
+ "communicationUser": {
+ "id": "8:acs:0a420b29-555c-4f6b-841e-de8059893bb9_00000008-77c9-8785-1655-373a0d00885f"
+ }
+ }
+ },
+ "recipientCommunicationIdentifier": {
+ "rawId": "8:acs:0a420b29-555c-4f6b-841e-de8059893bb9_00000008-77c9-8781-1655-373a0d00885e",
+ "communicationUser": {
+ "id": "8:acs:0a420b29-555c-4f6b-841e-de8059893bb9_00000008-77c9-8781-1655-373a0d00885e"
+ }
+ },
+ "createTime": "2021-02-25T06:37:17.371Z",
+ "version": 1614235220325,
+ "transactionId": "usv74GQ5zU+JmWv/bQ+qfg.1.1.1.1.1480065078.1.5",
+ "threadId": "19:f1400e1c542f4086a606b52ad20cd0bd@thread.v2"
},
- "createTime": "2020-09-18T00:39:02.541Z",
- "version": 1600389638481,
- "recipientId": "8:acs:fac4607d-d2d0-40e5-84df-6f32ebd1251a_00000005-3e88-15fa-ac00-343a0d0005a7",
- "transactionId": "+ah9tVwqNkCT6nUGCKIvAg.1.1.1.1.1802895561.1.1",
- "threadId": "19:a33a128babf04431b7fe8cbca82f4238@thread.v2"
- },
- "eventType": "Microsoft.Communication.ChatThreadPropertiesUpdatedPerUser",
- "dataVersion": "1.0",
- "metadataVersion": "1",
- "eventTime": "2020-09-18T00:40:38.5804349Z"
-}]
+ "eventType": "Microsoft.Communication.ChatParticipantRemovedFromThreadWithUser",
+ "dataVersion": "1.0",
+ "metadataVersion": "1",
+ "eventTime": "2021-02-25T06:40:24.2244945Z"
+ }]
+```
+
+### Microsoft.Communication.ChatThreadPropertiesUpdatedPerUser event
+
+```json
+[{
+ "id": "d57342ff-264e-4a5e-9c54-ef05b7d50082",
+ "topic": "/subscriptions/{subscription-id}/resourceGroups/{group-name}/providers/Microsoft.Communication/communicationServices/{communication-services-resource-name}",
+ "subject": "thread/{thread-id}/editedBy/{rawId}/recipient/{rawId}",
+ "data": {
+ "editedBy": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-578d-7d07-83fe-084822000f6e",
+ "editedByCommunicationIdentifier": {
+ "rawId": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-578d-7d07-83fe-084822000f6e",
+ "communicationUser": {
+ "id": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-578d-7d07-83fe-084822000f6e"
+ }
+ },
+ "editTime": "2021-02-19T00:28:28.7390282Z",
+ "properties": {
+ "topic": "Communication in Azure"
+ },
+ "createTime": "2021-02-19T00:28:25.864Z",
+ "version": 1613694508719,
+ "recipientId": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-578d-7caf-07fd-084822001724",
+ "recipientCommunicationIdentifier": {
+ "rawId": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-578d-7caf-07fd-084822001724",
+ "communicationUser": {
+ "id": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-578d-7caf-07fd-084822001724"
+ }
+ },
+ "transactionId": "WLXPrnJ/I0+LTj2cwMrNMQ.1.1.1.1.1833369763.1.4",
+ "threadId": "19:2cc3504c41244d7483208a4f58a1f188@thread.v2"
+ },
+ "eventType": "Microsoft.Communication.ChatThreadPropertiesUpdatedPerUser",
+ "dataVersion": "1.0",
+ "metadataVersion": "1",
+ "eventTime": "2021-02-19T00:28:29.559726Z"
+ }]
```
### Microsoft.Communication.ChatMemberAddedToThreadWithUser event
When an event is triggered, the Event Grid service sends data about that event to subscribing endpoints.
[{ "id": "4abd2b49-d1a9-4fcc-9cd7-170fa5d96443", "topic": "/subscriptions/{subscription-id}/resourceGroups/{group-name}/providers/Microsoft.Communication/communicationServices/{communication-services-resource-name}",
- "subject": "thread/19:6d20c2f921cd402ead7d1b31b0d030cd@thread.v2/memberAdded/8:acs:5354158b-17b7-489c-9380-95d8821ff76b_00000005-3e5f-1bc6-f40f-343a0d0003fe/recipient/8:acs:5354158b-17b7-489c-9380-95d8821ff76b_00000005-3e5f-1bc6-f40f-343a0d0003f0",
+ "subject": "thread/{thread-id}/memberAdded/{rawId}/recipient/{rawId}",
"data": { "time": "2020-09-18T00:47:13.1867087Z", "addedBy": "8:acs:5354158b-17b7-489c-9380-95d8821ff76b_00000005-3e5f-1bc6-f40f-343a0d0003f1",
When an event is triggered, the Event Grid service sends data about that event to subscribing endpoints.

### Microsoft.Communication.ChatMemberRemovedFromThreadWithUser event
[{ "id": "b3701976-1ea2-4d66-be68-4ec4fc1b4b96", "topic": "/subscriptions/{subscription-id}/resourceGroups/{group-name}/providers/Microsoft.Communication/communicationServices/{communication-services-resource-name}",
- "subject": "thread/19:6d20c2f921cd402ead7d1b31b0d030cd@thread.v2/memberRemoved/8:acs:5354158b-17b7-489c-9380-95d8821ff76b_00000005-3e5f-1bc6-f40f-343a0d0003fe/recipient/8:acs:5354158b-17b7-489c-9380-95d8821ff76b_00000005-3e5f-1bc6-f40f-343a0d0003f0",
+ "subject": "thread/{thread-id}/memberRemoved/{rawId}/recipient/{rawId}",
"data": { "time": "2020-09-18T00:47:51.1461742Z", "removedBy": "8:acs:5354158b-17b7-489c-9380-95d8821ff76b_00000005-3e5f-1bc6-f40f-343a0d0003f1",
When an event is triggered, the Event Grid service sends data about that event to subscribing endpoints.
}]
```
-# [Cloud event schema](#tab/cloud-event-schema)
-
-### Microsoft.Communication.SMSDeliveryReportReceived event
-
-```json
-[{
- "id": "Outgoing_202009180022138813a09b-0cbf-4304-9b03-1546683bb910",
- "source": "/subscriptions/{subscription-id}/resourceGroups/{group-name}/providers/microsoft.communication/communicationservices/{communication-services-resource-name}",
- "subject": "/phonenumber/15555555555",
- "data": {
- "MessageId": "Outgoing_202009180022138813a09b-0cbf-4304-9b03-1546683bb910",
- "From": "15555555555",
- "To": "+15555555555",
- "DeliveryStatus": "Delivered",
- "DeliveryStatusDetails": "No error.",
- "ReceivedTimestamp": "2020-09-18T00:22:20.2855749Z",
- "DeliveryAttempts": [
- {
- "Timestamp": "2020-09-18T00:22:14.9315918Z",
- "SegmentsSucceeded": 1,
- "SegmentsFailed": 0
- }
- ]
- },
- "type": "Microsoft.Communication.SMSDeliveryReportReceived",
- "time": "2020-09-18T00:22:20Z",
- "specversion": "1.0"
-}]
-```
-### Microsoft.Communication.SMSReceived event
+### Microsoft.Communication.ChatThreadCreated event
```json
-[{
- "id": "Incoming_20200918002745d29ebbea-3341-4466-9690-0a03af35228e",
- "source": "/subscriptions/50ad1522-5c2c-4d9a-a6c8-67c11ecb75b8/resourcegroups/acse2e/providers/microsoft.communication/communicationservices/{communication-services-resource-name}",
- "subject": "/phonenumber/15555555555",
- "data": {
- "MessageId": "Incoming_20200918002745d29ebbea-3341-4466-9690-0a03af35228e",
- "From": "15555555555",
- "To": "15555555555",
- "Message": "Great to connect with ACS events ",
- "ReceivedTimestamp": "2020-09-18T00:27:45.32Z"
- },
- "type": "Microsoft.Communication.SMSReceived",
- "time": "2020-09-18T00:27:47Z",
- "specversion": "1.0"
-}]
-```
-
-### Microsoft.Communication.ChatMessageReceived event
-
-```json
-[{
- "id": "c13afb5f-d975-4296-a8ef-348c8fc496ee",
- "source": "/subscriptions/{subscription-id}/resourceGroups/{group-name}/providers/Microsoft.Communication/communicationServices/{communication-services-resource-name}",
- "subject": "thread/{thread-id}/sender/{id-of-message-sender}/recipient/{id-of-message-recipient}",
- "data": {
- "messageBody": "Welcome to Azure Communication Services",
- "messageId": "1600389507167",
- "senderId": "8:acs:fac4607d-d2d0-40e5-84df-6f32ebd1251a_00000005-3e0d-e5aa-0e04-343a0d00037c",
- "senderDisplayName": "John",
- "composeTime": "2020-09-18T00:38:27.167Z",
- "type": "Text",
- "version": 1600389507167,
- "recipientId": "8:acs:fac4607d-d2d0-40e5-84df-6f32ebd1251a_00000005-3e1a-3090-6a0b-343a0d000409",
- "transactionId": "WGW1YmwRzkupk0UI0QA9ZA.1.1.1.1.1797783722.1.9",
- "threadId": "19:46df844a4c064bfaa2b3b30e385d1018@thread.v2"
- },
- "type": "Microsoft.Communication.ChatMessageReceived",
- "time": "2020-09-18T00:38:28.0946757Z",
- "specversion": "1.0"
-}
-]
+[ {
+ "id": "a607ac52-0974-4d3c-bfd8-6f708a26f509",
+ "topic": "/subscriptions/{subscription-id}/resourcegroups/{group-name}/providers/microsoft.communication/communicationservices/{communication-services-resource-name}",
+ "subject": "thread/{thread-id}/createdBy/{rawId}",
+ "data": {
+ "createdByCommunicationIdentifier": {
+ "rawId": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-5cbb-38a0-88f7-084822002453",
+ "communicationUser": {
+ "id": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-5cbb-38a0-88f7-084822002453"
+ }
+ },
+ "properties": {
+ "topic": "Talk about new Thread Events in commuication services"
+ },
+ "participants": [
+ {
+ "displayName": "Bob",
+ "participantCommunicationIdentifier": {
+ "rawId": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-5cbb-38a0-88f7-084822002453",
+ "communicationUser": {
+ "id": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-5cbb-38a0-88f7-084822002453"
+ }
+ }
+ },
+ {
+ "displayName": "Scott",
+ "participantCommunicationIdentifier": {
+ "rawId": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-5cbb-38e6-07fd-084822002467",
+ "communicationUser": {
+ "id": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-5cbb-38e6-07fd-084822002467"
+ }
+ }
+ },
+ {
+ "displayName": "Shawn",
+ "participantCommunicationIdentifier": {
+ "rawId": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-5cbb-38f6-83fe-084822002337",
+ "communicationUser": {
+ "id": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-5cbb-38f6-83fe-084822002337"
+ }
+ }
+ },
+ {
+ "displayName": "Anthony",
+ "participantCommunicationIdentifier": {
+ "rawId": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-5cbb-38e3-e1fe-084822002c35",
+ "communicationUser": {
+ "id": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-5cbb-38e3-e1fe-084822002c35"
+ }
+ }
+ }
+ ],
+ "createTime": "2021-02-20T00:31:54.365+00:00",
+ "version": 1613781114365,
+ "threadId": "19:e07c8ddc5bab4c059ea9f11d29b544b6@thread.v2",
+ "transactionId": "gK6+kgANy0O1wchlVKVTJg.1.1.1.1.921436178.1"
+ },
+ "eventType": "Microsoft.Communication.ChatThreadCreated",
+ "dataVersion": "1.0",
+ "metadataVersion": "1",
+ "eventTime": "2021-02-20T00:31:54.5369967Z"
+ }]
```
-### Microsoft.Communication.ChatMessageEdited event
+### Microsoft.Communication.ChatThreadPropertiesUpdated event
```json
[{
- "id": "18247662-e94a-40cc-8d2f-f7357365309e",
- "source": "/subscriptions/{subscription-id}/resourceGroups/{group-name}/providers/Microsoft.Communication/communicationServices/{communication-services-resource-name}",
- "subject": "thread/19:6d20c2f921cd402ead7d1b31b0d030cd@thread.v2/sender/8:acs:5354158b-17b7-489c-9380-95d8821ff76b_00000005-3e5f-1bc6-f40f-343a0d0003fe/recipient/8:acs:5354158b-17b7-489c-9380-95d8821ff76b_00000005-3e5f-1bc6-f40f-343a0d0003f0",
- "data": {
- "editTime": "2020-09-18T00:48:47.361Z",
- "messageBody": "Let's Chat about new communication services.",
- "messageId": "1600390097873",
- "senderId": "8:acs:5354158b-17b7-489c-9380-95d8821ff76b_00000005-3e5f-1bc6-f40f-343a0d0003fe",
- "senderDisplayName": "Bob(Admin)",
- "composeTime": "2020-09-18T00:48:17.873Z",
- "type": "Text",
- "version": 1600390127361,
- "recipientId": "8:acs:5354158b-17b7-489c-9380-95d8821ff76b_00000005-3e5f-1bc6-f40f-343a0d0003f0",
- "transactionId": "bbopOa1JZEW5NDDFLgH1ZQ.2.1.2.1.1822032097.1.5",
- "threadId": "19:6d20c2f921cd402ead7d1b31b0d030cd@thread.v2"
- },
- "type": "Microsoft.Communication.ChatMessageEdited",
- "time": "2020-09-18T00:48:48.037823Z",
- "specversion": "1"
-}]
+ "id": "cf867580-9caf-45be-b49f-ab1cbfcaa59f",
+ "topic": "/subscriptions/{subscription-id}/resourcegroups/{group-name}/providers/microsoft.communication/communicationservices/{communication-services-resource-name}",
+ "subject": "thread/{thread-id}/editedBy/{rawId}",
+ "data": {
+ "editedByCommunicationIdentifier": {
+ "rawId": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-5c9e-9e35-07fd-084822002264",
+ "communicationUser": {
+ "id": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-5c9e-9e35-07fd-084822002264"
+ }
+ },
+ "editTime": "2021-02-20T00:04:07.7152073+00:00",
+ "properties": {
+ "topic": "Talk about new Thread Events in commuication services"
+ },
+ "createTime": "2021-02-20T00:00:40.126+00:00",
+ "version": 1613779447695,
+ "threadId": "19:9e8eefe67b3c470a8187b4c2b00240bc@thread.v2",
+ "transactionId": "GBE9MB2a40KEWzexIg0D3A.1.1.1.1.856359041.1"
+ },
+ "eventType": "Microsoft.Communication.ChatThreadPropertiesUpdated",
+ "dataVersion": "1.0",
+ "metadataVersion": "1",
+ "eventTime": "2021-02-20T00:04:07.8410277Z"
+ }]
```
+### Microsoft.Communication.ChatThreadDeleted event
-### Microsoft.Communication.ChatMessageDeleted event
```json
-[{
- "id": "08034616-cf11-4fc2-b402-88963b93d083",
- "source": "/subscriptions/{subscription-id}/resourceGroups/{group-name}/providers/Microsoft.Communication/communicationServices/{communication-services-resource-name}",
- "subject": "thread/19:6d20c2f921cd402ead7d1b31b0d030cd@thread.v2/sender/8:acs:5354158b-17b7-489c-9380-95d8821ff76b_00000005-3e5f-1bc6-f40f-343a0d0003fe/recipient/8:acs:5354158b-17b7-489c-9380-95d8821ff76b_00000005-3e5f-1bc6-f40f-343a0d0003f0",
- "data": {
- "deleteTime": "2020-09-18T00:48:47.361Z",
- "messageId": "1600390099195",
- "senderId": "8:acs:5354158b-17b7-489c-9380-95d8821ff76b_00000005-3e5f-1bc6-f40f-343a0d0003fe",
- "senderDisplayName": "Bob(Admin)",
- "composeTime": "2020-09-18T00:48:19.195Z",
- "type": "Text",
- "version": 1600390152154,
- "recipientId": "8:acs:5354158b-17b7-489c-9380-95d8821ff76b_00000005-3e5f-1bc6-f40f-343a0d0003f0",
- "transactionId": "mAxUjeTsG06NpObXkFcjVQ.1.1.2.1.1823015063.1.5",
- "threadId": "19:6d20c2f921cd402ead7d1b31b0d030cd@thread.v2"
- },
- "type": "Microsoft.Communication.ChatMessageDeleted",
- "time": "2020-09-18T00:49:12.6698791Z",
- "specversion": "1.0"
-}]
+[
+{
+ "id": "1dbd5237-4823-4fed-980c-8d27c17cf5b0",
+ "topic": "/subscriptions/{subscription-id}/resourcegroups/{group-name}/providers/microsoft.communication/communicationservices/{communication-services-resource-name}",
+ "subject": "thread/{thread-id}/deletedBy/{rawId}",
+ "data": {
+ "deletedByCommunicationIdentifier": {
+ "rawId": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-5c9e-a300-07fd-084822002266",
+ "communicationUser": {
+ "id": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-5c9e-a300-07fd-084822002266"
+ }
+ },
+ "deleteTime": "2021-02-20T00:00:42.109802+00:00",
+ "createTime": "2021-02-20T00:00:39.947+00:00",
+ "version": 1613779241389,
+ "threadId": "19:c9e9f3060b884e448671391882066ac3@thread.v2",
+ "transactionId": "KibptDpcLEeEFnlR7cI3QA.1.1.2.1.848298005.1"
+ },
+ "eventType": "Microsoft.Communication.ChatThreadDeleted",
+ "dataVersion": "1.0",
+ "metadataVersion": "1",
+ "eventTime": "2021-02-20T00:00:42.5428002Z"
+ }
+ ]
```
-### Microsoft.Communication.ChatThreadCreatedWithUser event
+### Microsoft.Communication.ChatThreadParticipantAdded event
```json
-[{
- "id": "06c7c381-bb0a-4fff-aedd-919df1d52137",
- "source": "/subscriptions/{subscription-id}/resourceGroups/{group-name}/providers/Microsoft.Communication/communicationServices/{communication-services-resource-name}",
- "subject": "thread/19:7bdf5504a23f41a79d1bd472dd40044a@thread.v2/createdBy/8:acs:73551687-f8c8-48a7-bf06-d8263f15b02a_00000005-3e5f-1bc6-f40f-343a0d0003fe/recipient/8:acs:73551687-f8c8-48a7-bf06-d8263f15b02a_00000005-3e5f-1bc6-f40f-343a0d0003f0",
- "data": {
- "createdBy": "8:acs:73551687-f8c8-48a7-bf06-d8263f15b02a_06014f-6001fc107f",
- "properties": {
- "topic": "Chat about new communication services"
- },
- "members": [
- {
+[
+{
+ "id": "3024eb5d-1d71-49d1-878c-7dc3165433d9",
+ "topic": "/subscriptions/{subscription-id}/resourcegroups/{group-name}/providers/microsoft.communication/communicationservices/{communication-services-resource-name}",
+ "subject": "thread/{thread-id}/participantadded/{rawId}",
+ "data": {
+ "time": "2021-02-20T00:54:42.8622646+00:00",
+ "addedByCommunicationIdentifier": {
+ "rawId": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-5cbb-38a0-88f7-084822002453",
+ "communicationUser": {
+ "id": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-5cbb-38a0-88f7-084822002453"
+ }
+ },
+ "participantAdded": {
"displayName": "Bob",
- "memberId": "8:acs:73551687-f8c8-48a7-bf06-d8263f15b02a_00000005-3e5f-1bc6-f40f-343a0d0003f0"
+ "participantCommunicationIdentifier": {
+ "rawId": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-5cbb-38f3-88f7-084822002454",
+ "communicationUser": {
+ "id": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-5cbb-38f3-88f7-084822002454"
+ }
+ }
},
- {
- "displayName": "John",
- "memberId": "8:acs:73551687-f8c8-48a7-bf06-d8263f15b02a_00000005-3e5f-1bc6-f40f-343a0d0003f1"
- }
- ],
- "createTime": "2020-09-17T22:06:09.988Z",
- "version": 1600380369988,
- "recipientId": "8:acs:73551687-f8c8-48a7-bf06-d8263f15b02a_00000005-3e5f-1bc6-f40f-343a0d0003f0",
- "transactionId": "9ZxrGXVXCkOTygd5iwsvAQ.1.1.1.1.1440874720.1.1",
- "threadId": "19:7bdf5504a23f41a79d1bd472dd40044a@thread.v2"
- },
- "type": "Microsoft.Communication.ChatThreadCreatedWithUser",
- "time": "2020-09-17T22:06:10.3235137Z",
- "specversion": "1.0"
-}]
+ "createTime": "2021-02-20T00:31:54.365+00:00",
+ "version": 1613782482822,
+ "threadId": "19:e07c8ddc5bab4c059ea9f11d29b544b6@thread.v2",
+ "transactionId": "9q6cO7i4FkaZ+5RRVzshVw.1.1.1.1.974913783.1"
+ },
+ "eventType": "Microsoft.Communication.ChatThreadParticipantAdded",
+ "dataVersion": "1.0",
+ "metadataVersion": "1",
+ "eventTime": "2021-02-20T00:54:43.9866454Z"
+ }
+]
```
-### Microsoft.Communication.ChatThreadWithUserDeleted event
+### Microsoft.Communication.ChatThreadParticipantRemoved event
```json
-[{
- "id": "7f4fa31b-e95e-428b-a6e8-53e2553620ad",
- "source":"/subscriptions/{subscription-id}/resourceGroups/{group-name}/providers/Microsoft.Communication/communicationServices/{communication-services-resource-name}",
- "subject": "thread/19:6d20c2f921cd402ead7d1b31b0d030cd@thread.v2/deletedBy/8:acs:5354158b-17b7-489c-9380-95d8821ff76b_00000005-3e5f-1bc6-f40f-343a0d0003fe/recipient/8:acs:5354158b-17b7-489c-9380-95d8821ff76b_00000005-3e5f-1bc6-f40f-343a0d0003f0",
- "data": {
- "deletedBy": "8:acs:5354158b-17b7-489c-9380-95d8821ff76b_00000005-3e5f-1bc6-f40f-343a0d0003fe",
- "deleteTime": "2020-09-18T00:49:26.3694459Z",
- "createTime": "2020-09-18T00:46:41.559Z",
- "version": 1600390071625,
- "recipientId": "8:acs:5354158b-17b7-489c-9380-95d8821ff76b_00000005-3e5f-1bc6-f40f-343a0d0003f0",
- "transactionId": "MoZlSM2j7kSD2b5X8bjH7Q.1.1.2.1.1823539230.1.1",
- "threadId": "19:6d20c2f921cd402ead7d1b31b0d030cd@thread.v2"
- },
- "type": "Microsoft.Communication.ChatThreadWithUserDeleted",
- "time": "2020-09-18T00:49:26.4269056Z",
- "specversion": "1.0"
-}]
+[
+{
+ "id": "6ed810fd-8776-4b13-81c2-1a0c4f791a07",
+ "topic": "/subscriptions/{subscription-id}/resourcegroups/{group-name}/providers/microsoft.communication/communicationservices/{communication-services-resource-name}",
+ "subject": "thread/{thread-id}/participantremoved/{rawId}",
+ "data": {
+ "time": "2021-02-20T00:56:18.1118825+00:00",
+ "removedByCommunicationIdentifier": {
+ "rawId": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-5cbb-38a0-88f7-084822002453",
+ "communicationUser": {
+ "id": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-5cbb-38a0-88f7-084822002453"
+ }
+ },
+ "participantRemoved": {
+ "displayName": "Shawn",
+ "participantCommunicationIdentifier": {
+ "rawId": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-5cbb-38e6-07fd-084822002467",
+ "communicationUser": {
+ "id": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-5cbb-38e6-07fd-084822002467"
+ }
+ }
+ },
+ "createTime": "2021-02-20T00:31:54.365+00:00",
+ "version": 1613782578096,
+ "threadId": "19:e07c8ddc5bab4c059ea9f11d29b544b6@thread.v2",
+ "transactionId": "zGCq8IGRr0aEF6COuy7wSA.1.1.1.1.978649284.1"
+ },
+ "eventType": "Microsoft.Communication.ChatThreadParticipantRemoved",
+ "dataVersion": "1.0",
+ "metadataVersion": "1",
+ "eventTime": "2021-02-20T00:56:18.856721Z"
+ }
+]
```
-### Microsoft.Communication.ChatThreadPropertiesUpdatedPerUser event
+### Microsoft.Communication.ChatMessageReceivedInThread event
```json
-[{
- "id": "47a66834-57d7-4f77-9c7d-676d45524982",
- "source": "/subscriptions/{subscription-id}/resourceGroups/{group-name}/providers/Microsoft.Communication/communicationServices/{communication-services-resource-name}",
- "subject": "thread/19:a33a128babf04431b7fe8cbca82f4238@thread.v2/editedBy/8:acs:fac4607d-d2d0-40e5-84df-6f32ebd1251a_00000005-3e88-2b7f-ac00-343a0d0005a8/recipient/8:acs:fac4607d-d2d0-40e5-84df-6f32ebd1251a_00000005-3e88-15fa-ac00-343a0d0005a7",
- "data": {
- "editedBy": "8:acs:fac4607d-d2d0-40e5-84df-6f32ebd1251a_00000005-3e88-2b7f-ac00-343a0d0005a8",
- "editTime": "2020-09-18T00:40:38.4914428Z",
- "properties": {
- "topic": "Communication in Azure"
+[
+{
+ "id": "4f614f97-c451-4b82-a8c9-1e30c3bfcda1",
+ "topic": "/subscriptions/{subscription-id}/resourcegroups/{group-name}/providers/microsoft.communication/communicationservices/{communication-services-resource-name}",
+ "subject": "thread/{thread-id}/sender/8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-5cdb-4916-07fd-084822002624",
+ "data": {
+ "messageBody": "Talk about new Thread Events in commuication services",
+ "messageId": "1613783230064",
+ "type": "Text",
+ "version": "1613783230064",
+ "senderDisplayName": "Bob",
+ "senderCommunicationIdentifier": {
+ "rawId": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-5cdb-4916-07fd-084822002624",
+ "communicationUser": {
+ "id": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-5cdb-4916-07fd-084822002624"
+ }
+ },
+ "composeTime": "2021-02-20T01:07:10.064+00:00",
+ "threadId": "19:5b3809e80e4a439d92c3316e273f4a2b@thread.v2",
+ "transactionId": "foMkntkKS0O/MhMlIE5Aag.1.1.1.1.1004077250.1"
},
- "createTime": "2020-09-18T00:39:02.541Z",
- "version": 1600389638481,
- "recipientId": "8:acs:fac4607d-d2d0-40e5-84df-6f32ebd1251a_00000005-3e88-15fa-ac00-343a0d0005a7",
- "transactionId": "+ah9tVwqNkCT6nUGCKIvAg.1.1.1.1.1802895561.1.1",
- "threadId": "19:a33a128babf04431b7fe8cbca82f4238@thread.v2"
- },
- "type": "Microsoft.Communication.ChatThreadPropertiesUpdatedPerUser",
- "time": "2020-09-18T00:40:38.5804349Z",
- "specversion": "1.0"
-}]
+ "eventType": "Microsoft.Communication.ChatMessageReceivedInThread",
+ "dataVersion": "1.0",
+ "metadataVersion": "1",
+ "eventTime": "2021-02-20T01:07:10.5704596Z"
+ }
+]
```
-### Microsoft.Communication.ChatMemberAddedToThreadWithUser event
+### Microsoft.Communication.ChatMessageEditedInThread event
```json
-[{
- "id": "4abd2b49-d1a9-4fcc-9cd7-170fa5d96443",
- "source": "/subscriptions/{subscription-id}/resourceGroups/{group-name}/providers/Microsoft.Communication/communicationServices/{communication-services-resource-name}",
- "subject": "thread/19:6d20c2f921cd402ead7d1b31b0d030cd@thread.v2/memberAdded/8:acs:5354158b-17b7-489c-9380-95d8821ff76b_00000005-3e5f-1bc6-f40f-343a0d0003fe/recipient/8:acs:5354158b-17b7-489c-9380-95d8821ff76b_00000005-3e5f-1bc6-f40f-343a0d0003f0",
- "data": {
- "time": "2020-09-18T00:47:13.1867087Z",
- "addedBy": "8:acs:5354158b-17b7-489c-9380-95d8821ff76b_00000005-3e5f-1bc6-f40f-343a0d0003f1",
- "memberAdded": {
- "displayName": "John Smith",
- "memberId": "8:acs:5354158b-17b7-489c-9380-95d8821ff76b_00000005-3e5f-1bc6-f40f-343a0d0003fe"
+[
+ {
+ "id": "7b8dc01e-2659-41fa-bc8c-88a967714510",
+ "topic": "/subscriptions/{subscription-id}/resourcegroups/{group-name}/providers/microsoft.communication/communicationservices/{communication-services-resource-name}",
+ "subject": "thread/{thread-id}/sender/{rawId}",
+ "data": {
+ "editTime": "2021-02-20T00:59:10.464+00:00",
+ "messageBody": "8effb181-1eb2-4a58-9d03-ed48a461b19b",
+ "messageId": "1613782685964",
+ "type": "Text",
+ "version": "1613782750464",
+ "senderDisplayName": "Scott",
+ "senderCommunicationIdentifier": {
+ "rawId": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-5cbb-38a0-88f7-084822002453",
+ "communicationUser": {
+ "id": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-5cbb-38a0-88f7-084822002453"
+ }
+ },
+ "composeTime": "2021-02-20T00:58:05.964+00:00",
+ "threadId": "19:e07c8ddc5bab4c059ea9f11d29b544b6@thread.v2",
+ "transactionId": "H8Gpj3NkIU6bXlWw8WPvhQ.2.1.2.1.985333801.1"
},
- "createTime": "2020-09-18T00:46:41.559Z",
- "version": 1600390033176,
- "recipientId": "8:acs:5354158b-17b7-489c-9380-95d8821ff76b_00000005-3e5f-1bc6-f40f-343a0d0003f0",
- "transactionId": "pVIjw/pHEEKUOUJ2DAAl5A.1.1.1.1.1818361951.1.1",
- "threadId": "19:6d20c2f921cd402ead7d1b31b0d030cd@thread.v2"
- },
- "type": "Microsoft.Communication.ChatMemberAddedToThreadWithUser",
- "time": "2020-09-18T00:47:13.2342692Z",
- "specversion": "1.0"
-}]
+ "eventType": "Microsoft.Communication.ChatMessageEditedInThread",
+ "dataVersion": "1.0",
+ "metadataVersion": "1",
+ "eventTime": "2021-02-20T00:59:10.7600061Z"
+ }
+]
```
-### Microsoft.Communication.ChatMemberRemovedFromThreadWithUser event
+### Microsoft.Communication.ChatMessageDeletedInThread event
```json
-[{
- "id": "b3701976-1ea2-4d66-be68-4ec4fc1b4b96",
- "source": "/subscriptions/{subscription-id}/resourceGroups/{group-name}/providers/Microsoft.Communication/communicationServices/{communication-services-resource-name}",
- "subject": "thread/19:6d20c2f921cd402ead7d1b31b0d030cd@thread.v2/memberRemoved/8:acs:5354158b-17b7-489c-9380-95d8821ff76b_00000005-3e5f-1bc6-f40f-343a0d0003fe/recipient/8:acs:5354158b-17b7-489c-9380-95d8821ff76b_00000005-3e5f-1bc6-f40f-343a0d0003f0",
- "data": {
- "time": "2020-09-18T00:47:51.1461742Z",
- "removedBy": "8:acs:5354158b-17b7-489c-9380-95d8821ff76b_00000005-3e5f-1bc6-f40f-343a0d0003f1",
- "memberRemoved": {
- "displayName": "John",
- "memberId": "8:acs:5354158b-17b7-489c-9380-95d8821ff76b_00000005-3e5f-1bc6-f40f-343a0d0003fe"
+[
+ {
+ "id": "17d9c39d-0c58-4ed8-947d-c55959f57f75",
+ "topic": "/subscriptions/{subscription-id}/resourcegroups/{group-name}/providers/microsoft.communication/communicationservices/{communication-services-resource-name}",
+ "subject": "thread/{thread-id}/sender/{rawId}",
+ "data": {
+ "deleteTime": "2021-02-20T00:59:10.464+00:00",
+ "messageId": "1613782685440",
+ "type": "Text",
+ "version": "1613782814333",
+ "senderDisplayName": "Scott",
+ "senderCommunicationIdentifier": {
+ "rawId": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-5cbb-38a0-88f7-084822002453",
+ "communicationUser": {
+ "id": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-5cbb-38a0-88f7-084822002453"
+ }
+ },
+ "composeTime": "2021-02-20T00:58:05.44+00:00",
+ "threadId": "19:e07c8ddc5bab4c059ea9f11d29b544b6@thread.v2",
+ "transactionId": "HqU6PeK5AkCRSpW8eAbL0A.1.1.2.1.987824181.1"
},
- "createTime": "2020-09-18T00:46:41.559Z",
- "version": 1600390071131,
- "recipientId": "8:acs:5354158b-17b7-489c-9380-95d8821ff76b_00000005-3e5f-1bc6-f40f-343a0d0003f0",
- "transactionId": "G9Y+UbjVmEuxAG3O4bEyvw.1.1.1.1.1819803816.1.1",
- "threadId": "19:6d20c2f921cd402ead7d1b31b0d030cd@thread.v2"
- },
- "type": "Microsoft.Communication.ChatMemberRemovedFromThreadWithUser",
- "time": "2020-09-18T00:47:51.2244511Z",
- "specversion": "1.0"
-}]
+ "eventType": "Microsoft.Communication.ChatMessageDeletedInThread",
+ "dataVersion": "1.0",
+ "metadataVersion": "1",
+ "eventTime": "2021-02-20T01:00:14.8518034Z"
+ }
+]
```
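Because all the thread-level samples above share the same envelope, a single handler can branch on the `eventType` values from the table earlier. A minimal routing sketch, assuming the `Azure.Messaging.EventGrid` envelope; the actions in the comments are hypothetical:

```csharp
using Azure.Messaging.EventGrid;

static void RouteChatThreadEvent(EventGridEvent egEvent)
{
    switch (egEvent.EventType)
    {
        case "Microsoft.Communication.ChatMessageReceivedInThread":
            // For example: fan the message out to every connected client.
            break;
        case "Microsoft.Communication.ChatThreadParticipantAdded":
        case "Microsoft.Communication.ChatThreadParticipantRemoved":
            // For example: refresh a cached roster for the thread.
            break;
        default:
            break;
    }
}
```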
+## Quickstarts and how-tos
+
+| Title | Description |
+| -- | -- |
+| [How to handle SMS events in Communication Services](../communication-services/quickstarts/telephony-sms/handle-sms-events.md) | Handle all SMS events received by your Communication Services resource by using a webhook. |
+ ## Tutorials
event-grid Troubleshoot Issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/troubleshoot-issues.md
There are various reasons for client applications not being able to connect to an Event Grid topic.
If you receive error messages with error codes like 400, 409, and 403, see [Troubleshoot Event Grid errors](troubleshoot-errors.md). ## Distributed tracing (.NET)
-The Event Grid .NET library supports distributing tracing. To adhere to the [CloudEvents specification's guidance](https://github.com/cloudevents/spec/blob/master/extensions/distributed-tracing.md) on distributing tracing, the library sets the `traceparent` and `tracestate` on the [ExtensionAttributes](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/eventgrid/Azure.Messaging.EventGrid/src/Customization/CloudEvent.cs#L126) of a `CloudEvent` when distributed tracing is enabled. To learn more about how to enable distributed tracing in your application, take a look at the Azure SDK [distributed tracing documentation](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/core/Azure.Core/samples/Diagnostics.md#Distributed-tracing).
+The Event Grid .NET library supports distributed tracing. To adhere to the [CloudEvents specification's guidance](https://github.com/cloudevents/spec/blob/master/extensions/distributed-tracing.md) on distributed tracing, the library sets the `traceparent` and `tracestate` on the [ExtensionAttributes](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/eventgrid/Azure.Messaging.EventGrid/src/Customization#L126) of a `CloudEvent` when distributed tracing is enabled. To learn more about how to enable distributed tracing in your application, see the Azure SDK [distributed tracing documentation](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/core/Azure.Core/samples/Diagnostics.md#Distributed-tracing).
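As a rough sketch of that opt-in, which must run before any Event Grid client is constructed (the console-printing listener here is an assumption for demonstration; production apps would typically wire up OpenTelemetry instead):

```csharp
using System;
using System.Diagnostics;

// Opt in to the Azure SDK's experimental ActivitySource support.
AppContext.SetSwitch("Azure.Experimental.EnableActivitySource", true);

// Without a listener (or an exporter), activities aren't sampled; this
// minimal one prints each finished Azure SDK span as it completes.
var listener = new ActivityListener
{
    ShouldListenTo = source => source.Name.StartsWith("Azure."),
    Sample = (ref ActivityCreationOptions<ActivityContext> _) => ActivitySamplingResult.AllData,
    ActivityStopped = activity => Console.WriteLine($"{activity.DisplayName} {activity.TraceId}")
};
ActivitySource.AddActivityListener(listener);
```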
### Sample

See the [Line Counter sample](/samples/azure/azure-sdk-for-net/line-counter/). This sample app illustrates using Storage, Event Hubs, and Event Grid clients along with ASP.NET Core integration, distributed tracing, and hosted services. It allows users to upload a file to a blob, which triggers an Event Hubs event containing the file name. The Event Hubs Processor receives the event, and then the app downloads the blob and counts the number of lines in the file. The app displays a link to a page containing the line count. When the link is clicked, a CloudEvent containing the name of the file is published using Event Grid.
event-hubs Event Hubs Availability And Consistency https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-availability-and-consistency.md
We recommend sending events to an event hub without setting partition information.
In this section, you learn how to send events to a specific partition using different programming languages.

### [.NET](#tab/dotnet)
-To send events to a specific partition, create the batch using the [EventHubProducerClient.CreateBatchAsync](/dotnet/api/azure.messaging.eventhubs.producer.eventhubproducerclient.createbatchasync#Azure_Messaging_EventHubs_Producer_EventHubProducerClient_CreateBatchAsync_Azure_Messaging_EventHubs_Producer_CreateBatchOptions_System_Threading_CancellationToken_) method by specifying either the `PartitionId` or the `PartitionKey` in [CreateBatchOptions](//dotnet/api/azure.messaging.eventhubs.producer.createbatchoptions). The following code sends a batch of events to a specific partition by specifying a partition key. Event Hubs ensures that all events sharing a partition key value are stored together and delivered in order of arrival.
+To send events to a specific partition, create the batch using the [EventHubProducerClient.CreateBatchAsync](/dotnet/api/azure.messaging.eventhubs.producer.eventhubproducerclient.createbatchasync#Azure_Messaging_EventHubs_Producer_EventHubProducerClient_CreateBatchAsync_Azure_Messaging_EventHubs_Producer_CreateBatchOptions_System_Threading_CancellationToken_) method by specifying either the `PartitionId` or the `PartitionKey` in [CreateBatchOptions](/dotnet/api/azure.messaging.eventhubs.producer.createbatchoptions?view=azure-dotnet). The following code sends a batch of events to a specific partition by specifying a partition key. Event Hubs ensures that all events sharing a partition key value are stored together and delivered in order of arrival.
```csharp
var batchOptions = new CreateBatchOptions { PartitionKey = "cities" };
```
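Put together, the send path might look like the following sketch; the connection string and event hub name are placeholders:

```csharp
using System;
using System.Text;
using Azure.Messaging.EventHubs;
using Azure.Messaging.EventHubs.Producer;

await using var producer = new EventHubProducerClient("<connection-string>", "<event-hub-name>");

// Events in this batch share the "cities" partition key, so Event Hubs keeps
// them together and delivers them in order of arrival.
var batchOptions = new CreateBatchOptions { PartitionKey = "cities" };
using EventDataBatch batch = await producer.CreateBatchAsync(batchOptions);

if (!batch.TryAdd(new EventData(Encoding.UTF8.GetBytes("Seattle"))))
{
    throw new InvalidOperationException("The event is too large for the batch.");
}

await producer.SendAsync(batch);
```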
expressroute About Fastpath https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/about-fastpath.md
To configure FastPath, the virtual network gateway must be either:
While FastPath supports most configurations, it does not support the following features:
-* UDR on the gateway subnet: If you apply a UDR to the gateway subnet of your virtual network, the network traffic from your on-premises network will continue to be sent to the virtual network gateway.
+* UDR on the gateway subnet: This UDR has no impact on the network traffic that FastPath sends directly from your on-premises network to the virtual machines in the Azure virtual network.
* VNet Peering: If you have other virtual networks peered with the one that is connected to ExpressRoute, the network traffic from your on-premises network to the other virtual networks (i.e. the so-called "Spoke" VNets) will continue to be sent to the virtual network gateway. The workaround is to connect all the virtual networks to the ExpressRoute circuit directly.
expressroute How To Custom Route Alert https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/how-to-custom-route-alert.md
Title: 'ExpressRoute: How to configure custom alerts for advertised routes'
-description: This article shows you how to use Azure Automation and Logic Apps to monitor the number of routes advertised from the ExpressRoute gateway to on-premises networks in order to prevent hitting the 200 routes limit.
+description: This article shows you how to use Azure Automation and Logic Apps to monitor the number of routes advertised from the ExpressRoute gateway to on-premises networks in order to prevent hitting the 1000 routes limit.
# Configure custom alerts to monitor advertised routes
-This article helps you use Azure Automation and Logic Apps to constantly monitor the number of routes advertised from the ExpressRoute gateway to on-premises networks. Monitoring can help prevent hitting the [200 routes limit](expressroute-faqs.md#how-many-prefixes-can-be-advertised-from-a-vnet-to-on-premises-on-expressroute-private-peering).
+This article helps you use Azure Automation and Logic Apps to constantly monitor the number of routes advertised from the ExpressRoute gateway to on-premises networks. Monitoring can help prevent hitting the [1000 routes limit](expressroute-faqs.md#how-many-prefixes-can-be-advertised-from-a-vnet-to-on-premises-on-expressroute-private-peering).
**Azure Automation** allows you to automate execution of a custom PowerShell script stored in a *runbook*. When using the configuration in this article, the runbook contains a PowerShell script that queries one or more ExpressRoute gateways. It collects a dataset containing the resource group, ExpressRoute gateway name, and number of network prefixes advertised on-premises.
Verify that you have met the following criteria before beginning your configuration:
* The custom alert discussed in this article is an add-on to achieve better operation and control. It is not a replacement for the native alerts in ExpressRoute.
* Data collection for ExpressRoute gateways runs in the background. Runtime can be longer than expected. To avoid job queuing, the workflow recurrence must be set up properly.
-* Deployments by scripts or ARM templates could happen faster than the custom alarm trigger. This could result in increasing in number of network prefixes in ExpressRoute gateway above the limit of 200 routes.
+* Deployments by scripts or ARM templates could happen faster than the custom alert trigger. This could increase the number of network prefixes in the ExpressRoute gateway beyond the limit of 1000 routes.
## <a name="accounts"></a>Create and configure accounts
Once the JSON is parsed, the **Parse JSON Data Operations** action stores the co
:::image type="content" source="./media/custom-route-alert-portal/peer-2.png" alt-text="numRoutesPeer2":::
-9. The logic condition is true when one of two dynamic variables, numRoute1 or numRoute2, is greater than the threshold. In this example, the threshold is fixed to 160 (80% of max value of 200 routes). You can change the threshold value to fit your requirements. For consistency, the value should be the same value used in the runbook PowerShell script.
+9. The logic condition is true when one of the two dynamic variables, numRoute1 or numRoute2, is greater than the threshold. In this example, the threshold is fixed at 800 (80% of the maximum of 1000 routes). You can change the threshold value to fit your requirements. For consistency, the value should be the same value used in the runbook PowerShell script.
:::image type="content" source="./media/custom-route-alert-portal/logic-condition.png" alt-text="Logic condition":::
The final step is the workflow validation. In **Logic Apps Overview**, select **
## Next steps
To learn more about how to customize the workflow, see [Azure Logic Apps](../logic-apps/logic-apps-overview.md).
firewall-manager Migrate To Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall-manager/migrate-to-policy.md
If ($azfw.NetworkRuleCollections.Count -gt 0) {
Write-Host "Created network rule " $firewallPolicyNetRule.Name $firewallPolicyNetRules += $firewallPolicyNetRule }
- $fwpNetRuleCollection = New-AzFirewallPolicyFilterRuleCollection -Name $rc.Name -Priority $rc.Pl.llriority -ActionType $rc.Action.Type -Rule $firewallPolicyNetRules
+ $fwpNetRuleCollection = New-AzFirewallPolicyFilterRuleCollection -Name $rc.Name -Priority $rc.Priority -ActionType $rc.Action.Type -Rule $firewallPolicyNetRules
Write-Host "Created NetworkRuleCollection " $fwpNetRuleCollection.Name } $firewallPolicyNetRuleCollections += $fwpNetRuleCollection
firewall-manager Secure Cloud Network https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall-manager/secure-cloud-network.md
Previously updated : 09/08/2020 Last updated : 03/19/2021
First, create spoke virtual networks where you can place your servers.
The two virtual networks will each have a workload server in them and will be protected by the firewall.

1. From the Azure portal home page, select **Create a resource**.
-2. Under **Networking**, select **Virtual network**.
+2. Search for **Virtual network**, and select **Create**.
2. For **Subscription**, select your subscription.
-1. For **Resource group**, select **Create new**, and type **fw-manager** for the name and select **OK**.
+1. For **Resource group**, select **Create new**, and type **fw-manager-rg** for the name and select **OK**.
2. For **Name**, type **Spoke-01**.
3. For **Region**, select **(US) East US**.
4. Select **Next: IP Addresses**.
-1. For **Address space**, type **10.1.0.0/16**.
-3. Select **Add subnet**.
-4. Type **Workload-01-SN**.
-5. For **Subnet address range**, type **10.1.1.0/24**.
-6. Select **Add**.
+1. For **Address space**, type **10.0.0.0/16**.
+3. Under **Subnet name**, select **default**.
+4. For **Subnet name**, type **Workload-01-SN**.
+5. For **Subnet address range**, type **10.0.1.0/24**.
+6. Select **Save**.
1. Select **Review + create**.
2. Select **Create**.

Repeat this procedure to create another similar virtual network (a scripted equivalent follows the settings below):

Name: **Spoke-02**<br>
-Address space: **10.2.0.0/16**<br>
+Address space: **10.1.0.0/16**<br>
Subnet name: **Workload-02-SN**<br>
-Subnet address range: **10.2.1.0/24**
+Subnet address range: **10.1.1.0/24**
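As a reference, here is a hedged PowerShell equivalent of the two spoke networks. It uses the same names and prefixes as the portal steps, but simplifies ordering and defaults; treat it as a sketch rather than the tutorial's prescribed method.

```powershell
# Sketch: create the resource group and both spoke virtual networks.
New-AzResourceGroup -Name "fw-manager-rg" -Location "eastus"

$subnet1 = New-AzVirtualNetworkSubnetConfig -Name "Workload-01-SN" -AddressPrefix "10.0.1.0/24"
New-AzVirtualNetwork -Name "Spoke-01" -ResourceGroupName "fw-manager-rg" `
    -Location "eastus" -AddressPrefix "10.0.0.0/16" -Subnet $subnet1

$subnet2 = New-AzVirtualNetworkSubnetConfig -Name "Workload-02-SN" -AddressPrefix "10.1.1.0/24"
New-AzVirtualNetwork -Name "Spoke-02" -ResourceGroupName "fw-manager-rg" `
    -Location "eastus" -AddressPrefix "10.1.0.0/16" -Subnet $subnet2
```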
### Create the secured virtual hub
Create your secured virtual hub using Firewall Manager.
2. In the search box, type **Firewall Manager** and select **Firewall Manager**.
3. On the **Firewall Manager** page, select **View secured virtual hubs**.
4. On the **Firewall Manager | Secured virtual hubs** page, select **Create new secured virtual hub**.
-5. For **Resource group**, select **fw-manager**.
+5. For **Resource group**, select **fw-manager-rg**.
7. For **Region**, select **East US**.
1. For the **Secured virtual hub name**, type **Hub-01**.
-2. For **Hub address space**, type **10.0.0.0/16**.
+2. For **Hub address space**, type **10.2.0.0/16**.
3. For the new vWAN name, type **Vwan-01**.
4. Leave the **Include VPN gateway to enable Trusted Security Partners** check box cleared.
5. Select **Next: Azure Firewall**.
6. Accept the default **Azure Firewall** **Enabled** setting and then select **Next: Trusted Security Partner**.
7. Accept the default **Trusted Security Partner** **Disabled** setting, and select **Next: Review + create**.
-8. Select **Create**. It will take about 30 minutes to deploy.
+8. Select **Create**.
-Now you can get the firewall public IP address.
+ It takes about 30 minutes to deploy.
-1. After the deployment is complete, on the Azure portal select **All services**.
-1. Type **firewall manager** and then select **Firewall Manager**.
-2. Select **Secured virtual hubs**.
+You can get the firewall public IP address after the deployment completes.
+
+1. Open **Firewall Manager**.
+2. Select **Virtual hubs**.
3. Select **hub-01**.
4. Select **Public IP configuration**.
5. Note the public IP address to use later.
Now you can peer the hub and spoke virtual networks.
-1. Select the **fw-manager** resource group, then select the **Vwan-01** virtual WAN.
+1. Select the **fw-manager-rg** resource group, then select the **Vwan-01** virtual WAN.
2. Under **Connectivity**, select **Virtual network connections**.
3. Select **Add connection**.
4. For **Connection name**, type **hub-spoke-01**.
5. For **Hubs**, select **Hub-01**.
-6. For **Resource group**, select **fw-manager**.
+6. For **Resource group**, select **fw-manager-rg**.
7. For **Virtual network**, select **Spoke-01**.
8. Select **Create**.
Repeat to connect the **Spoke-02** virtual network: connection name - **hub-spoke-02**
|Setting |Value |
|||
- |Resource group |**fw-manager**|
+ |Resource group |**fw-manager-rg**|
|Virtual machine name |**Srv-workload-01**|
|Region |**(US) East US**|
|Administrator user name |type a user name|
8. Select **Spoke-01** for the virtual network and select **Workload-01-SN** for the subnet.
9. For **Public IP**, select **None**.
11. Accept the other defaults and select **Next: Management**.
-12. Select **Off** to disable boot diagnostics. Accept the other defaults and select **Review + create**.
+12. Select **Disable** to disable boot diagnostics. Accept the other defaults and select **Review + create**.
13. Review the settings on the summary page, and then select **Create**.

Use the information in the following table to configure another virtual machine named **Srv-Workload-02**. The rest of the configuration is the same as the **Srv-workload-01** virtual machine.
A firewall policy defines collections of rules to direct traffic on one or more secured virtual hubs.
1. From Firewall Manager, select **View Azure Firewall policies**.
2. Select **Create Azure Firewall Policy**.
-3. Under **Policy details**, for the **Name** type **Policy-01** and for **Region** select **East US**.
-4. Select **Next: DNS Settings (preview)**.
-1. Select **Next: Rules**.
-2. On the **Rules** tab, select **Add a rule collection**.
-3. On the **Add a rule collection** page, type **App-RC-01** for the **Name**.
-4. For **Rule collection type**, select **Application**.
-5. For **Priority**, type **100**.
-6. Ensure **Rule collection action** is **Allow**.
-7. For the rule **Name** type **Allow-msft**.
-8. For the **Source type**, select **IP address**.
-9. For **Source**, type **\***.
-10. For **Protocol**, type **http,https**.
-11. Ensure **Destination type** is **FQDN**.
-12. For **Destination**, type **\*.microsoft.com**.
-13. Select **Add**.
+1. For **Resource group**, select **fw-manager-rg**.
+1. Under **Policy details**, for the **Name** type **Policy-01** and for **Region** select **East US**.
+1. Select **Next: DNS Settings**.
+1. Select **Next: TLS Inspection (preview)**.
+1. Select **Next : Rules**.
+1. On the **Rules** tab, select **Add a rule collection**.
+1. On the **Add a rule collection** page, type **App-RC-01** for the **Name**.
+1. For **Rule collection type**, select **Application**.
+1. For **Priority**, type **100**.
+1. Ensure **Rule collection action** is **Allow**.
+1. For the rule **Name** type **Allow-msft**.
+1. For the **Source type**, select **IP address**.
+1. For **Source**, type **\***.
+1. For **Protocol**, type **http,https**.
+1. Ensure **Destination type** is **FQDN**.
+1. For **Destination**, type **\*.microsoft.com**.
+1. Select **Add**.
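As a reference only, the same application rule and its collection can be sketched with Az.Network cmdlets. Attaching the collection to **Policy-01** additionally requires a rule collection group, which is omitted here.

```powershell
# Hedged sketch of the Allow-msft application rule and its collection.
$appRule = New-AzFirewallPolicyApplicationRule -Name "Allow-msft" `
    -SourceAddress "*" -Protocol "http:80", "https:443" -TargetFqdn "*.microsoft.com"

$appCollection = New-AzFirewallPolicyFilterRuleCollection -Name "App-RC-01" `
    -Priority 100 -ActionType Allow -Rule $appRule
```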
Add a DNAT rule so you can connect a remote desktop to the **Srv-Workload-01** virtual machine.
-1. Select **Add a rule collection**.
-2. For **Name**, type **DNAT-rdp**.
-3. For **Rule collection type**, select **DNAT**.
-4. For **Priority**, type **100**.
-5. For the rule **Name** type **Allow-rdp**.
-6. For the **Source type**, select **IP address**.
-7. For **Source**, type **\***.
-8. For **Protocol**, select **TCP**.
-9. For **Destination Ports**, type **3389**.
-10. For **Destination Type**, select **IP Address**.
-11. For **Destination**, type the firewall public IP address that you noted previously.
-12. For **Translated address**, type the private IP address for **Srv-Workload-01** that you noted previously.
-13. For **Translated port**, type **3389**.
-14. Select **Add**.
+1. Select **Add/Rule collection**.
+1. For **Name**, type **dnat-rdp**.
+1. For **Rule collection type**, select **DNAT**.
+1. For **Priority**, type **100**.
+1. For the rule **Name** type **Allow-rdp**.
+1. For the **Source type**, select **IP address**.
+1. For **Source**, type **\***.
+1. For **Protocol**, select **TCP**.
+1. For **Destination Ports**, type **3389**.
+1. For **Destination Type**, select **IP Address**.
+1. For **Destination**, type the firewall public IP address that you noted previously.
+1. For **Translated address**, type the private IP address for **Srv-Workload-01** that you noted previously.
+1. For **Translated port**, type **3389**.
+1. Select **Add**.
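A comparable PowerShell sketch of the DNAT rule follows; the angle-bracket values are placeholders for the addresses you noted earlier, not literal values.

```powershell
# Hedged sketch of the Allow-rdp DNAT rule and its collection.
$dnatRule = New-AzFirewallPolicyNatRule -Name "Allow-rdp" -Protocol "TCP" `
    -SourceAddress "*" -DestinationAddress "<firewall-public-ip>" -DestinationPort "3389" `
    -TranslatedAddress "<srv-workload-01-private-ip>" -TranslatedPort "3389"

$dnatCollection = New-AzFirewallPolicyNatRuleCollection -Name "dnat-rdp" `
    -Priority 100 -ActionType Dnat -Rule $dnatRule
```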
Add a network rule so you can connect a remote desktop from **Srv-Workload-01** to **Srv-Workload-02**.
2. For **Name**, type **vnet-rdp**.
3. For **Rule collection type**, select **Network**.
4. For **Priority**, type **100**.
-5. For the rule **Name** type **Allow-vnet**.
-6. For the **Source type**, select **IP address**.
-7. For **Source**, type **\***.
-8. For **Protocol**, select **TCP**.
-9. For **Destination Ports**, type **3389**.
-9. For **Destination Type**, select **IP Address**.
-10. For **Destination**, type the **Srv-Workload-02** private IP address that you noted previously.
-11. Select **Add**.
-1. Select **Next: Threat intelligence**.
-2. Select **Next: Hubs**.
-3. On the **Hubs** tab, select **Associate virtual hubs**.
-4. Select **Hub-01** and then select **Add**.
-5. Select **Review + create**.
-6. Select **Create**.
-
-This can take about five minutes or more to complete.
+1. For **Rule collection action**, select **Allow**.
+1. For the rule **Name** type **Allow-vnet**.
+1. For the **Source type**, select **IP address**.
+1. For **Source**, type **\***.
+1. For **Protocol**, select **TCP**.
+1. For **Destination Ports**, type **3389**.
+1. For **Destination Type**, select **IP Address**.
+1. For **Destination**, type the **Srv-Workload-02** private IP address that you noted previously.
+1. Select **Add**.
+1. Select **Review + create**.
+1. Select **Create**.
+
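And the matching sketch for the network rule, again with a placeholder for the private IP address you noted:

```powershell
# Hedged sketch of the Allow-vnet network rule and its collection.
$netRule = New-AzFirewallPolicyNetworkRule -Name "Allow-vnet" -Protocol "TCP" `
    -SourceAddress "*" -DestinationAddress "<srv-workload-02-private-ip>" -DestinationPort "3389"

$netCollection = New-AzFirewallPolicyFilterRuleCollection -Name "vnet-rdp" `
    -Priority 100 -ActionType Allow -Rule $netRule
```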
+## Associate policy
+
+Associate the firewall policy with the hub.
+
+1. From Firewall Manager, select **Azure Firewall Policies**.
+1. Select the check box for **Policy-01**.
+1. Select **Manage associations/Associate hubs**.
+1. Select **hub-01**.
+1. Select **Add**.
## Route traffic to your hub

Now you must ensure that network traffic gets routed through your firewall.
-1. From Firewall Manager, select **Secured virtual hubs**.
+1. From Firewall Manager, select **Virtual hubs**.
2. Select **Hub-01**.
3. Under **Settings**, select **Security configuration**.
4. Under **Internet traffic**, select **Azure Firewall**.
5. Under **Private traffic**, select **Send via Azure Firewall**.
-10. Verify that the **hub-spoke** connection shows **Internet Traffic** as **Secured**.
-11. Select **Save**.
+1. Select **Save**.
+ It takes a few minutes to update the route tables.
+1. Verify that both connections show that Azure Firewall secures internet and private traffic.
-## Test your firewall
+## Test the firewall
-To test your firewall rules, you'll connect a remote desktop using the firewall public IP address, which is NATed to **Srv-Workload-01**. From there you'll use a browser to test the application rule and connect a remote desktop to **Srv-Workload-02** to test the network rule.
+To test the firewall rules, you'll connect a remote desktop using the firewall public IP address, which is NATed to **Srv-Workload-01**. From there you'll use a browser to test the application rule and connect a remote desktop to **Srv-Workload-02** to test the network rule.
### Test the application rule
Now, test the firewall rules to confirm that they work as expected.
1. Connect a remote desktop to the firewall public IP address, and sign in.
-3. Open Internet Explorer and browse to https://www.microsoft.com.
+3. Open Internet Explorer and browse to `https://www.microsoft.com`.
4. Select **OK** > **Close** on the Internet Explorer security alerts. You should see the Microsoft home page.
-5. Browse to https://www.google.com.
+5. Browse to `https://www.google.com`.
You should be blocked by the firewall.
So now you've verified that the firewall application rule is working:
Now test the network rule.

-- Open a remote desktop to the **Srv-Workload-02** private IP address.
+- From Srv-Workload-01, open a remote desktop to the Srv-Workload-02 private IP address.
- A remote desktop should connect to **Srv-Workload-02**.
+ A remote desktop should connect to Srv-Workload-02.
So now you've verified that the firewall network rule is working:

* You can connect a remote desktop to a server located in another virtual network.

## Clean up resources
-When you are done testing your firewall resources, delete the **fw-manager** resource group to delete all firewall-related resources.
+When you are done testing your firewall resources, delete the **fw-manager-rg** resource group to delete all firewall-related resources.
## Next steps
governance Deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/iso27001-ase-sql-workload/deploy.md
The following table provides a list of the blueprint artifact parameters:
|Azure SQL Database resource group|Resource group|Name|**Locked** - Concatenates the **Organization name** with `-workload-azsql-rg` to make the resource group unique.|
|Azure SQL Database resource group|Resource group|Location|**Locked** - Uses the blueprint parameter.|
|Azure SQL Database template|Resource Manager template|Azure SQL Server admin username|Username for the Azure SQL Server. Must match same property value in **Key Vault template**. Default value is _sql-admin-user_.|
-|Azure SQL Database template|Resource Manager template|Azure SQL Server admin password (Key Vault Resource ID)|The Resource ID of the Key Vault. Use "/subscription/{subscriptionId}/resourceGroups/{orgName}-workload-kv/providers/Microsoft.KeyVault/vaults/{orgName}-workload-kv" and replace `{subscriptionId}` with your Subscription ID and `{orgName}` with the **Organization name** blueprint parameter.|
+|Azure SQL Database template|Resource Manager template|Azure SQL Server admin password (Key Vault Resource ID)|The Resource ID of the Key Vault. Use "/subscriptions/{subscriptionId}/resourceGroups/{orgName}-workload-kv-rg/providers/Microsoft.KeyVault/vaults/{orgName}-workload-kv" and replace `{subscriptionId}` with your Subscription ID and `{orgName}` with the **Organization name** blueprint parameter.|
|Azure SQL Database template|Resource Manager template|Azure SQL Server admin password (Key Vault Secret Name)|Username of the SQL Server admin. Must match value in **Key Vault template** property **Azure SQL Server admin username**.|
|Azure SQL Database template|Resource Manager template|Log retention in days|Data retention in days. Default value is _365_.|
|Azure SQL Database template|Resource Manager template|AAD admin object ID|AAD object ID of the user that will get assigned as an Active Directory admin. No default value and can't be left blank. To locate this value from the Azure portal, search for and select "Users" under _Services_. Use the _Name_ box to filter for the account name and select that account. On the _User profile_ page, select the "Click to copy" icon next to the _Object ID_.|
governance Author Policies For Arrays https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/how-to/author-policies-for-arrays.md
The fact that the `where` expression is evaluated against the **entire** request
"field": "tags.env", "equals": "prod" }
- }
+ },
+ "equals": 0
}
```
| 1 | `tags.env` => `"prod"` | `true` |
| 2 | `tags.env` => `"prod"` | `true` |
-Nested count expressions are also allowed:
+Nested count expressions can be used to apply conditions to nested array fields. For example, the following condition checks that the `objectArray[*]` array has exactly 2 members whose `nestedArray[*]` contains one or more members:
```json { "count": { "field": "Microsoft.Test/resourceType/objectArray[*]", "where": {
- "allOf": [
- {
- "field": "Microsoft.Test/resourceType/objectArray[*].property",
- "equals": "value2"
- },
- {
- "count": {
+ "count": {
+ "field": "Microsoft.Test/resourceType/objectArray[*].nestedArray[*]"
+ },
+ "greaterOrEquals": 1
+ }
+ },
+ "equals": 2
+}
+```
+
+| Iteration | Selected values | Nested count evaluation result |
+|:|:|:|
+| 1 | `Microsoft.Test/resourceType/objectArray[*].nestedArray[*]` => `1`, `2` | `nestedArray[*]` has 2 members => `true` |
+| 2 | `Microsoft.Test/resourceType/objectArray[*].nestedArray[*]` => `3`, `4` | `nestedArray[*]` has 2 members => `true` |
+
+Since both members of `objectArray[*]` have a child array `nestedArray[*]` with 2 members, the outer count expression returns `2`.
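For reference, the iteration tables in this section assume a request payload along these lines, reconstructed from the selected values shown above (illustrative only):

```json
{
  "objectArray": [
    { "property": "value1", "nestedArray": [ 1, 2 ] },
    { "property": "value2", "nestedArray": [ 3, 4 ] }
  ]
}
```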
+
+A more complex example: check that the `objectArray[*]` array has exactly 2 members whose `nestedArray[*]` contains any members equal to `2` or `3`:
+
+```json
+{
+ "count": {
+ "field": "Microsoft.Test/resourceType/objectArray[*]",
+ "where": {
+ "count": {
+ "field": "Microsoft.Test/resourceType/objectArray[*].nestedArray[*]",
+ "where": {
"field": "Microsoft.Test/resourceType/objectArray[*].nestedArray[*]",
- "where": {
- "field": "Microsoft.Test/resourceType/objectArray[*].nestedArray[*]",
- "equals": 3
- },
- "greater": 0
- }
+ "in": [ 2, 3 ]
}
- ]
+ },
+ "greaterOrEquals": 1
}
- }
+ },
+ "equals": 2
}
```
-
-| Outer Loop Iteration | Selected values | Inner Loop Iteration | Selected values |
-|:|:|:|:|
-| 1 | `Microsoft.Test/resourceType/objectArray[*].property` => `"value1`</br> `Microsoft.Test/resourceType/objectArray[*].nestedArray[*]` => `1`, `2` | 1 | `Microsoft.Test/resourceType/objectArray[*].nestedArray[*]` => `1` |
-| 1 | `Microsoft.Test/resourceType/objectArray[*].property` => `"value1`</br> `Microsoft.Test/resourceType/objectArray[*].nestedArray[*]` => `1`, `2` | 2 | `Microsoft.Test/resourceType/objectArray[*].nestedArray[*]` => `2` |
-| 2 | `Microsoft.Test/resourceType/objectArray[*].property` => `"value2`</br> `Microsoft.Test/resourceType/objectArray[*].nestedArray[*]` => `3`, `4` | 1 | `Microsoft.Test/resourceType/objectArray[*].nestedArray[*]` => `3` |
-| 2 | `Microsoft.Test/resourceType/objectArray[*].property` => `"value2`</br> `Microsoft.Test/resourceType/objectArray[*].nestedArray[*]` => `3`, `4` | 2 | `Microsoft.Test/resourceType/objectArray[*].nestedArray[*]` => `4` |
+
+| Iteration | Selected values | Nested count evaluation result |
+|:|:|:|
+| 1 | `Microsoft.Test/resourceType/objectArray[*].nestedArray[*]` => `1`, `2` | `nestedArray[*]` contains `2` => `true` |
+| 2 | `Microsoft.Test/resourceType/objectArray[*].nestedArray[*]` => `3`, `4` | `nestedArray[*]` contains `3` => `true` |
+
+Since both members of `objectArray[*]` have a child array `nestedArray[*]` that contains either `2` or `3`, the outer count expression returns `2`.
+
+> [!NOTE]
+> Nested field count expressions can only refer to nested arrays. For example, a count expression referring to `Microsoft.Test/resourceType/objectArray[*]` can have a nested count targeting the nested array `Microsoft.Test/resourceType/objectArray[*].nestedArray[*]`, but it can't have a nested count expression targeting `Microsoft.Test/resourceType/stringArray[*]`.
#### Accessing current array member with template functions
iot-central Overview Iot Central Operator https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/overview-iot-central-operator.md
Title: Azure IoT Central operator guide
description: Azure IoT Central is an IoT application platform that simplifies the creation of IoT solutions. This article provides an overview of the operator role in IoT Central.
Previously updated : 03/17/2021 Last updated : 03/19/2021
# IoT Central operator guide
-This article provides an overview of the operator role in IoT Central. An operator manages the devices connected to the application.
+*This article applies to operators.*
-An _operator_ manages the devices connected to the application.
+An IoT Central application lets you monitor and manage millions of devices throughout their life cycle. This guide is for operators who use an IoT Central application to manage IoT devices.
-As an operator, you can:
+An operator:
-* Use the **Devices** page to view, add, and delete devices connected to your Azure IoT Central application.
-* Import and export devices in bulk.
-* Maintain an up-to-date inventory of your devices.
-* Keep your device metadata up to date by changing the values stored in the device properties from your views.
-* Control the behavior of your devices by updating a setting on a specific device from your views.
+- Monitors and manages the devices connected to the application.
+- Troubleshoots and remediates issues with devices.
+- Provisions new devices.
-## IoT Central homepage
+## Monitor and manage devices
-The [IoT Central homepage](https://aka.ms/iotcentral-get-started) page is the place where you can learn more about the latest news and features available on IoT Central, create new applications, and see and launch your existing application.
+To monitor devices, an operator can use the device views defined by the solution builder as part of the device template. These views can show device telemetry and property values. An example is the **Overview** view shown in the previous screenshot.
-## View your devices
+For more detailed information, an operator can use device groups and the built-in analytics features. To learn more, see [How to use analytics to analyze device data](howto-create-analytics.md).
-To view an individual device:
+To manage individual devices, an operator can use device views to set device and cloud properties and call device commands. Examples include the **Manage device** and **Commands** views in the previous screenshot.
-1. Choose **Devices** on the left pane. Here you see a list of all devices and of your device templates.
+To manage devices in bulk, an operator can create and schedule jobs. Jobs can update properties and run commands on multiple devices. To learn more, see [Create and run a job in your Azure IoT Central application](howto-run-a-job.md).
-1. Choose a device template.
+## Troubleshoot and remediate issues
-1. In the right-hand pane of the **Devices** page, you see a list of devices created from that device template. Choose an individual device to see the device details page for that device:
+The operator is responsible for the health of the application and its devices. The [troubleshooting guide](troubleshoot-connection.md) helps operators diagnose and remediate common issues. An operator can use the **Devices** page to block devices that appear to be malfunctioning until the problem is resolved.
- ![Screenshot of Device Details Page](./media/overview-iot-central-operator/device-list.png)
+## Add and remove devices
-## Add a device
+An operator can add and remove devices in the IoT Central application, either individually or in bulk. To learn more, see [Manage devices in your Azure IoT Central application](howto-manage-devices.md).
-To add a device to your Azure IoT Central application:
+## Personalize
-1. Choose **Devices** on the left pane.
-
-1. Choose the device template from which you want to create a device.
-
-1. Choose + **New**.
-
-1. Turn the **Simulated** toggle to **On** or **Off**. A real device is for a physical device that you connect to your Azure IoT Central application. A simulated device has sample data generated for you by Azure IoT Central.
-
-1. Select **Create**.
-
-1. This device now appears in your device list for this template. Select the device to see the device details page that contains all views for the device.
-
-## Import devices
-
-To connect large number of devices to your application, you can bulk import devices from a CSV file. The CSV file should have the following columns and headers:
-
-* **IOTC_DeviceID** - the device ID can contain letters, numbers, and the `-` character.
-* **IOTC_DeviceName** - this column is optional.
-
-To bulk-register devices in your application:
-
-1. Choose **Devices** on the left pane.
-
-1. On the left panel, choose the device template for which you want to bulk create the devices.
-
- > [!NOTE]
- > If you don't have a device template yet then you can import devices under **All devices** and register them without a template. After devices have been imported, you can then migrate them to a template.
-
-1. Select **Import**.
-
- ![Screenshot of Import Action](./media/overview-iot-central-operator/bulk-import-1-a.png)
--
-1. Select the CSV file that has the list of Device IDs to be imported.
-
-1. Device import starts once the file has been uploaded. You can track the import status in the Device Operations panel. This panel appears automatically after the import starts or you can access it through the bell icon in the top right-hand corner.
-
-1. Once the import completes, a success message is shown in the Device Operations panel.
-
- ![Screenshot of Import Success](./media/overview-iot-central-operator/bulk-import-3-a.png)
-
-If the device import operation fails, you see an error message on the Device Operations panel. A log file capturing all the errors is generated that you can download.
-
-## Migrate devices to a template
-
-If you register devices by starting the import under **All devices**, then the devices are created without any device template association. Devices must be associated with a template to explore the data and other details about the device. Follow these steps to associate devices with a template:
-
-1. Choose **Devices** on the left pane.
-
-1. On the left panel, choose **All devices**:
-
- ![Screenshot of Unassociated Devices](./media/overview-iot-central-operator/unassociated-devices-2-a.png)
-
-1. Use the filter on the grid to determine if the value in the **Device Template** column is **Unassociated** for any of your devices.
-
-1. Select the devices you want to associate with a template:
-
-1. Select **Migrate**:
-
- ![Screenshot of Associate Devices](./media/overview-iot-central-operator/unassociated-devices-1-a.png)
-
-1. Choose the template from the list of available templates and select **Migrate**.
-
-1. The selected devices are associated with the device template you chose.
-
-## Export devices
-
-To connect a real device to IoT Central, you need its connection string. You can export device details in bulk to get the information you need to create device connection strings. The export process creates a CSV file with the device identity, device name, and keys for all the selected devices.
-
-To bulk export devices from your application:
-
-1. Choose **Devices** on the left pane.
-
-1. On the left pane, choose the device template from which you want to export the devices.
-
-1. Select the devices that you want to export and then select the **Export** action.
-
- ![Screenshot of Export](./media/overview-iot-central-operator/export-1-a.png)
-
-1. The export process starts. You can track the status using the Device Operations panel.
-
-1. When the export completes, a success message is shown along with a link to download the generated file.
-
-1. Select the **Download File** link to download the file to a local folder on the disk.
-
- ![Screenshot of Export Success](./media/overview-iot-central-operator/export-2-a.png)
-
-1. The exported CSV file contains the following columns: device ID, device name, device keys, and X509 certificate thumbprints:
-
- * IOTC_DEVICEID
- * IOTC_DEVICENAME
- * IOTC_SASKEY_PRIMARY
- * IOTC_SASKEY_SECONDARY
- * IOTC_X509THUMBPRINT_PRIMARY
- * IOTC_X509THUMBPRINT_SECONDARY
-
-For more information about connection strings and connecting real devices to your IoT Central application, see [Device connectivity in Azure IoT Central](concepts-get-connected.md).
-
-## Delete a device
-
-To delete either a real or simulated device from your Azure IoT Central application:
-
-1. Choose **Devices** on the left pane.
-
-1. Choose the device template of the device you want to delete.
-
-1. Use the filter tools to filter and search for your devices. Check the box next to the devices to delete.
-
-1. Choose **Delete**. You can track the status of this deletion in your Device Operations panel.
-
-## Change a property
-
-Cloud properties are the device metadata associated with the device, such as city and serial number. Cloud properties only exist in the IoT Central application and aren't synchronized to your devices. Writeable properties control the behavior of a device and let you set the state of a device remotely, for example by setting the target temperature of a thermostat device. Device properties are set by the device and are read-only within IoT Central. You can view and update properties on the **Device Details** views for your device.
-
-1. Choose **Devices** on the left pane.
-
-1. Choose the device template of the device whose properties you want to change and select the target device.
-
-1. Choose the view that contains properties for your device, this view enables you to input values and select **Save** at the top of the page. Here you see the properties your device has and their current values. Cloud properties and writeable properties have editable fields, while device properties are read-only. For writeable properties, you can see their sync status at the bottom of the field.
-
-1. Modify the properties to the values you need. You can modify multiple properties at a time and update them all at the same time.
-
-1. Choose **Save**. If you saved writeable properties, the values are sent to your device. When the device confirms the change for the writeable property, the status returns back to **synced**. If you saved a cloud property, the value is updated.
+Operators can create personalized dashboards in an IoT Central application that contain links to the resources they use most often. To learn more, see [Manage dashboards](howto-create-personal-dashboards.md#manage-dashboards).
## Next steps
-Now that you've learned how to manage devices in your Azure IoT Central application, the suggested next step is to learn how to [Configure rules](howto-configure-rules.md) for your devices.
+If you want to learn more about using IoT Central, the suggested next steps are to try the quickstarts, beginning with [Create an Azure IoT Central application](./quick-deploy-iot-central.md).
iot-edge How To Update Iot Edge https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-update-iot-edge.md
Check the version of the IoT Edge agent and IoT Edge hub modules currently on your device.
The IoT Edge agent and IoT Edge hub images are tagged with the IoT Edge version that they are associated with. There are two different ways to use tags with the runtime images:
-* **Rolling tags** - Use only the first two values of the version number to get the latest image that matches those digits. For example, 1.0 is updated whenever there's a new release to point to the latest 1.0.x version. If the container runtime on your IoT Edge device pulls the image again, the runtime modules are updated to the latest version. This approach is suggested for development purposes. Deployments from the Azure portal default to rolling tags.
+* **Rolling tags** - Use only the first two values of the version number to get the latest image that matches those digits. For example, 1.1 is updated whenever there's a new release to point to the latest 1.1.x version. If the container runtime on your IoT Edge device pulls the image again, the runtime modules are updated to the latest version. Deployments from the Azure portal default to rolling tags. *This approach is suggested for development purposes.*
-* **Specific tags** - Use all three values of the version number to explicitly set the image version. For example, 1.0.7 won't change after its initial release. You can declare a new version number in the deployment manifest when you're ready to update. This approach is suggested for production purposes.
+* **Specific tags** - Use all three values of the version number to explicitly set the image version. For example, 1.1.0 won't change after its initial release. You can declare a new version number in the deployment manifest when you're ready to update. *This approach is suggested for production purposes.*
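For example, a deployment manifest fragment might pin the IoT Edge hub image either way; only the `image` value differs. This is an illustrative snippet, not a complete manifest:

```json
"edgeHub": {
  "type": "docker",
  "settings": {
    "image": "mcr.microsoft.com/azureiotedge-hub:1.1",
    "createOptions": "{}"
  }
}
```

A production deployment would instead pin a specific version such as `mcr.microsoft.com/azureiotedge-hub:1.1.0`.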
### Update a rolling tag image
iot-edge Production Checklist https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/production-checklist.md
A tag is a docker concept that you can use to distinguish between versions of do
Tags also help you to enforce updates on your IoT Edge devices. When you push an updated version of a module to your container registry, increment the tag. Then, push a new deployment to your devices with the tag incremented. The container engine will recognize the incremented tag as a new version and will pull the latest module version down to your device.
-For an example of a tag convention, see [Update the IoT Edge runtime](how-to-update-iot-edge.md#understand-iot-edge-tags) to learn how IoT Edge uses rolling tags and specific tags to track versions.
+#### Tags for the IoT Edge runtime
+
+The IoT Edge agent and IoT Edge hub images are tagged with the IoT Edge version that they are associated with. There are two different ways to use tags with the runtime images:
+
+* **Rolling tags** - Use only the first two values of the version number to get the latest image that matches those digits. For example, 1.1 is updated whenever there's a new release to point to the latest 1.1.x version. If the container runtime on your IoT Edge device pulls the image again, the runtime modules are updated to the latest version. Deployments from the Azure portal default to rolling tags. *This approach is suggested for development purposes.*
+
+* **Specific tags** - Use all three values of the version number to explicitly set the image version. For example, 1.1.0 won't change after its initial release. You can declare a new version number in the deployment manifest when you're ready to update. *This approach is suggested for production purposes.*
### Store runtime containers in your private registry
iot-fundamentals Iot Glossary https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-fundamentals/iot-glossary.md
Last updated 03/08/2021
# Glossary of IoT terms
-This article lists some of the common terms used in the IoT articles.
+This article lists some of the common terms used in the IoT articles.
## A
[Advanced Message Queueing Protocol (AMQP)](https://www.amqp.org/) is one of the messaging protocols that [IoT Hub](#iot-hub) supports for communicating with devices. For more information about the messaging protocols that IoT Hub supports, see [Send and receive messages with IoT Hub](../iot-hub/iot-hub-devguide-messaging.md).
+### Allocation policy
+
+In the [Device Provisioning Service](#device-provisioning-service), the allocation policy determines how the service assigns devices to [Linked IoT hubs](#linked-iot-hub).
+
+### Attestation mechanism
+
+In the [Device Provisioning Service](#device-provisioning-service), the attestation mechanism is the method used for confirming a device's identity. The attestation mechanism is configured on an [enrollment](#enrollment).
+
+Attestation mechanisms include X.509 certificates, Trusted Platform Modules, and symmetric keys.
+
+### Automatic deployment
+
+In IoT Edge, an automatic deployment configures a target set of IoT Edge devices to run a set of IoT Edge modules. Each deployment continuously ensures that all devices that match its target condition are running the specified set of modules, even when new devices are created or are modified to match the target condition. Each IoT Edge device only receives the highest priority deployment whose target condition it meets. Learn more about [IoT Edge automatic deployment](../iot-edge/module-deployment-monitoring.md).
+
+### Automatic device configuration
+
+Your solution back end can use [automatic device configurations](../iot-hub/iot-hub-automatic-device-management.md) to assign desired properties to a set of [device twins](#device-twin) and report status using system metrics and custom metrics.
+
### Automatic device management

Automatic device management in Azure IoT Hub automates many of the repetitive and complex tasks of managing large device fleets over the entirety of their lifecycles. With automatic device management, you can target a set of devices based on their properties, define a desired configuration, and let IoT Hub update devices whenever they come into scope. It consists of [automatic device configurations](../iot-hub/iot-hub-automatic-device-management.md) and [IoT Edge automatic deployments](../iot-edge/how-to-deploy-at-scale.md).
-### Automatic device configuration
+### Azure Digital Twins
-Your solution back end can use [automatic device configurations](../iot-hub/iot-hub-automatic-device-management.md) to assign desired properties to a set of [device twins](#device-twin) and report status using system metrics and custom metrics.
+Azure Digital Twins is a platform as a service (PaaS) offering for creating digital representations of real-world things, places, business processes, and people. Build knowledge graphs that represent entire environments, and use them to gain insights to drive better products, optimize operations and costs, and create breakthrough customer experiences. To learn more, see [Azure Digital Twins](../digital-twins/index.yml).
+
+### Azure Digital Twins instance
+
+A single instance of the Azure Digital Twins service in a customer's subscription. While [Azure Digital Twins](#azure-digital-twins) refers to the Azure service as a whole, your Azure Digital Twins **instance** is your individual Azure Digital Twins resource.
### Azure IoT device SDKs
In the context of [IoT Hub](#iot-hub), a back-end app is an app that connects to
### Built-in endpoints
-Every IoT hub includes a built-in [endpoint](../iot-hub/iot-hub-devguide-endpoints.md) that is Event Hub-compatible. You can use any mechanism that works with Event Hubs to read device-to-cloud messages from this endpoint.
+A type of [endpoint](#endpoint) that is built into IoT Hub. Every IoT hub includes a built-in [endpoint](../iot-hub/iot-hub-devguide-endpoints.md) that is Event Hub-compatible. You can use any mechanism that works with Event Hubs to read device-to-cloud messages from this endpoint.
## C
In IoT Plug and Play, commands defined in an [interface](#interface) represent m
### Component
-In IoT Plug and Play, components let you build a model [interface](#interface) as an assembly of other interfaces. A [device model](#device-model) can combine multiple interfaces as components. For example, a model might include a switch component and thermostat component. Multiple components in a model can also use the same interface type. For example, a model might include two thermostat components.
+In IoT Plug and Play and Azure Digital Twins, components let you build a model [interface](#interface) as an assembly of other interfaces. A [device model](#device-model) can combine multiple interfaces as components. For example, a model might include a switch component and thermostat component. Multiple components in a model can also use the same interface type. For example, a model might include two thermostat components.
### Configuration
A data-point message is a [device-to-cloud](#device-to-cloud) message that conta
In IoT Plug and Play, all [device models](#device-model) have a default component. A simple device model only has a default component - such a model is also known as a no component device. A more complex model has multiple components nested underneath the default component.
-### Device certification
-
-The IoT Plug and Play device certification program verifies that a device meets the IoT Plug and Play certification requirements. You can add a certified device to the public [Certified for Azure IoT device catalog](https://aka.ms/devicecatalog).
+### Deployment manifest
-### Device model
-
-A device model uses the [Digital Twins Definition Language](#digital-twins-definition-language) to describe the capabilities of an IoT Plug and Play device. A simple device model uses a single interface to describe the device capabilities. A more complex device model includes multiple components, each of which describe a set of capabilities. To learn more, see [IoT Plug and Play components in models](../iot-pnp/concepts-components.md).
-
-### Device builder
-
-A device builder uses a [device model](#device-model) and [interfaces](#interface) when implementing code to run on an [IoT Plug and Play device](#iot-plug-and-play-device). Device builders typically use one of the [Azure IoT device SDKs](#azure-iot-device-sdks) to implement the device client.
-
-### Device modeling
-
-A [device builder](#device-builder) or [module builder](#module-builder)uses the [Digital Twins Definition Language](#digital-twins-definition-language) to model the capabilities of an [IoT Plug and Play device](#iot-plug-and-play-device). A [solution builder](#solution-builder) can configure an IoT solution from the model.
+In [IoT Edge](#iot-edge), the deployment manifest is a JSON document containing the information to be copied in one or more IoT Edge devices' module twin(s) to deploy a set of modules, routes, and associated module desired properties.
### Desired configuration
In the context of a [device twin](../iot-hub/iot-hub-devguide-device-twins.md),
In the context of a [device twin](../iot-hub/iot-hub-devguide-device-twins.md), desired properties is a subsection of the device twin that is used with [reported properties](#reported-properties) to synchronize device configuration or condition. Desired properties can only be set by a [back-end app](#back-end-app) and are observed by the [device app](#device-app).
-### Device-to-cloud
-
-Refers to messages sent from a connected device to [IoT Hub](#iot-hub). These messages may be [data-point](#data-point-message) or [interactive](#interactive-message) messages. For more information, see [Send and receive messages with IoT Hub](../iot-hub/iot-hub-devguide-messaging.md).
-
### Device

In the context of IoT, a device is typically a small-scale, standalone computing device that may collect data or control other devices. For example, a device might be an environmental monitoring device, or a controller for the watering and ventilation systems in a greenhouse. The [device catalog](https://catalog.azureiotsolutions.com/) provides a list of hardware devices certified to work with [IoT Hub](#iot-hub).
In the context of IoT, a device is typically a small-scale, standalone computing
A device app runs on your [device](#device) and handles the communication with your [IoT hub](#iot-hub). Typically, you use one of the [Azure IoT device SDKs](#azure-iot-device-sdks) when you implement a device app. In many of the IoT tutorials, you use a [simulated device](#simulated-device) for convenience.
+### Device builder
+
+A device builder uses a [device model](#device-model) and [interfaces](#interface) when implementing code to run on an [IoT Plug and Play device](#iot-plug-and-play-device). Device builders typically use one of the [Azure IoT device SDKs](#azure-iot-device-sdks) to implement the device client.
+
+### Device certification
+
+The IoT Plug and Play device certification program verifies that a device meets the IoT Plug and Play certification requirements. You can add a certified device to the public [Certified for Azure IoT device catalog](https://aka.ms/devicecatalog).
+
### Device condition

Refers to device state information, such as the connectivity method currently in use, as reported by a [device app](#device-app). [Device apps](#device-app) can also report their capabilities. You can query for condition and capability information using device twins.
Device data refers to the per-device data stored in the IoT Hub [identity registry](#identity-registry).
### Device identity
-The device identity is the unique identifier assigned to every device registered in the [identity registry](#identity-registry).
+The device identity (or device ID) is the unique identifier assigned to every device registered in the IoT Hub [identity registry](#identity-registry).
### Device management
Device management encompasses the full lifecycle associated with managing the de
[IoT hub](#iot-hub) enables common device management patterns including rebooting, performing factory resets, and performing firmware updates on your devices.
-### Device REST API
+### Device model
-You can use the [Device REST API](/rest/api/iothub/device) from a device to send device-to-cloud messages to an IoT hub, and receive [cloud-to-device](#cloud-to-device) messages from an IoT hub. Typically, you should use one of the higher-level [device SDKs](#azure-iot-device-sdks) as shown in the IoT Hub tutorials.
+A device model is a type of [model](#model) that uses the [Digital Twins Definition Language](#digital-twins-definition-language-dtdl) to describe the capabilities of an IoT Plug and Play device. A simple device model uses a single interface to describe the device capabilities. A more complex device model includes multiple components, each of which describe a set of capabilities. To learn more, see [IoT Plug and Play components in models](../iot-pnp/concepts-components.md).
+
+### Device modeling
+
+A [device builder](#device-builder) or [module builder](#module-builder) uses the [Digital Twins Definition Language](#digital-twins-definition-language-dtdl) to model the capabilities of an [IoT Plug and Play device](#iot-plug-and-play-device). A [solution builder](#solution-builder) can configure an IoT solution from the model.
### Device provisioning

Device provisioning is the process of adding the initial [device data](#device-data) to the stores in your solution. To enable a new device to connect to your hub, you must add a device ID and keys to the IoT Hub [identity registry](#identity-registry). As part of the provisioning process, you might need to initialize device-specific data in other solution stores.
+### Device Provisioning Service
+
+IoT Hub Device Provisioning Service (DPS) is a helper service for [IoT Hub](#iot-hub) that you use to configure zero-touch device provisioning to a specified IoT hub. With the DPS, you can provision millions of devices in a secure and scalable manner.
+
+### Device REST API
+
+You can use the [Device REST API](/rest/api/iothub/device) from a device to send device-to-cloud messages to an IoT hub, and receive [cloud-to-device](#cloud-to-device) messages from an IoT hub. Typically, you should use one of the higher-level [device SDKs](#azure-iot-device-sdks) as shown in the IoT Hub tutorials.
### Device twin

A device twin is a JSON document that stores device state information such as metadata, configurations, and conditions. IoT Hub persists a device twin for each device that you provision in your IoT hub. Device twins enable you to synchronize device conditions and configurations between the device and the solution back end. You can query device twins to locate specific devices and for the status of long-running operations.
-### Direct method
+### Device-to-cloud
-A [direct method](../iot-hub/iot-hub-devguide-direct-methods.md) is a way for you to trigger a method to execute on a device by invoking an API on your IoT hub.
+Refers to messages sent from a connected device to [IoT Hub](#iot-hub). These messages may be [data-point](#data-point-message) or [interactive](#interactive-message) messages. For more information, see [Send and receive messages with IoT Hub](../iot-hub/iot-hub-devguide-messaging.md).
### Digital twin
-A digital twin is a collection of digital data that represents a physical object. Changes in the physical object are reflected in the digital twin. In some scenarios, you can use the digital twin to manipulate the physical object. The [Azure Digital Twins service](../digital-twins/index.yml) uses models expressed in the [Digital Twins Definition Language](#digital-twins-definition-language) to enable a wide range of cloud-based solutions that use digital twins. An [IoT Plug and Play](../iot-pnp/index.yml) device has a digital twin, described by a DTDL [device model](#device-model).
+A digital twin is a collection of digital data that represents a physical object. Changes in the physical object are reflected in the digital twin. In some scenarios, you can use the digital twin to manipulate the physical object. The [Azure Digital Twins service](../digital-twins/index.yml) uses [models](#model) expressed in the [Digital Twins Definition Language (DTDL)](#digital-twins-definition-language-dtdl) to represent digital twins of physical devices or higher-level abstract business concepts, enabling a wide range of cloud-based digital twin solutions. An [IoT Plug and Play](../iot-pnp/index.yml) device has a digital twin, described by a DTDL [device model](#device-model).
### Digital twin change events

When an [IoT Plug and Play device](#iot-plug-and-play-device) is connected to an IoT hub, the hub can use its routing capability to send notifications of digital twin changes. For example, whenever a [property](#properties) value changes on a device, IoT Hub can send a notification to an endpoint such as an Event hub.
-### Digital Twins Definition Language
+### Digital twin route
+
+A route set up in an IoT hub to deliver [digital twin change events](#digital-twin-change-events) to an endpoint such as an Event hub.
-A language for describing models and interfaces for [IoT Plug and Play devices](#iot-plug-and-play-device). Use the [Digital Twins Definition Language version 2](https://github.com/Azure/opendigitaltwins-dtdl) to describe a [digital twin's](#digital-twin) capabilities and enable the IoT platform and IoT solutions to use the semantics of the entity.
+### Digital Twins Definition Language (DTDL)
-### Digital twin route
+A JSON-LD language for describing [models](#model) and [interfaces](#interface) for [IoT Plug and Play devices](#iot-plug-and-play-device) and [Azure Digital Twins](../digital-twins/index.yml) entities. Use the [Digital Twins Definition Language version 2](https://github.com/Azure/opendigitaltwins-dtdl) to describe a [digital twin's](#digital-twin) capabilities and enable the IoT platform and IoT solutions to use the semantics of the entity. Digital Twins Definition Language is often abbreviated as DTDL.
+
+### Direct method
+
+A [direct method](../iot-hub/iot-hub-devguide-direct-methods.md) is a way for you to trigger a method to execute on a device by invoking an API on your IoT hub.
-A route set up in an IoT hub to deliver [digital twin change events](#digital-twin-change-events) to and endpoint such as an Event hub.
+### Downstream services
+
+A relative term describing services that receive data from the current context. For instance, if you're thinking in the context of Azure Digital Twins, [Time Series Insights](../time-series-insights/index.yml) would be considered a downstream service if you set up your data to flow from Azure Digital Twins into Time Series Insights.
## E

### Endpoint
+A named representation of a data routing service that can receive data from other services.
+ An IoT hub exposes multiple [endpoints](../iot-hub/iot-hub-devguide-endpoints.md) that enable your apps to connect to the IoT hub. There are device-facing endpoints that enable devices to perform operations such as sending [device-to-cloud](#device-to-cloud) messages and receiving [cloud-to-device](#cloud-to-device) messages. There are service-facing management endpoints that enable [back-end apps](#back-end-app) to perform operations such as [device identity](#device-identity) management and device twin management. There are service-facing [built-in endpoints](#built-in-endpoints) for reading device-to-cloud messages. You can create [custom endpoints](#custom-endpoints) to receive device-to-cloud messages dispatched by a [routing rule](#routing-rules).
+### Enrollment
+
+In the [Device Provisioning Service](#device-provisioning-service), an enrollment is the record of individual devices or groups of devices that may register with a [Linked IoT hub](#linked-iot-hub) through autoprovisioning.
+
+### Enrollment group
+
+In the [Device Provisioning Service](#device-provisioning-service), an enrollment group identifies a group of devices that share an X.509 or symmetric key [attestation mechanism](#attestation-mechanism).
+
+### Event handlers
+
+An event handler is any process that is triggered by the arrival of an event and performs some processing action. One way to create event handlers is to add event processing code to an Azure function, and send data through it using [endpoints](#endpoint) and [event routing](#event-routing).
### Event Hub-compatible endpoint

To read [device-to-cloud](#device-to-cloud) messages sent to your IoT hub, you can connect to an endpoint on your hub and use any Event Hub-compatible method to read those messages. Event Hub-compatible methods include using the [Event Hubs SDKs](../event-hubs/event-hubs-programming-guide.md) and [Azure Stream Analytics](../stream-analytics/stream-analytics-introduction.md).
+### Event routing
+
+The process of sending events and their data from one device or service to the [endpoint](#endpoint) of another.
+
+In IoT Hub, you can define [routing rules](#routing-rules) to describe how messages should be sent. In Azure Digital Twins, event routes are entities that are created for this purpose. Azure Digital Twins event routes can contain filters to limit what types of events are sent to each endpoint.
## F

### Field gateway
A field gateway enables connectivity for devices that cannot connect directly to [IoT Hub](#iot-hub).
A gateway enables connectivity for devices that cannot connect directly to [IoT Hub](#iot-hub). See also [Field Gateway](#field-gateway), [Cloud Gateway](#cloud-gateway), and [Custom Gateway](#custom-gateway).
+### Gateway device
+
+A gateway device is an example of a [field gateway](#field-gateway). A gateway device could be a standard IoT [device](#device) or an [IoT Edge device](#iot-edge-device).
+
+A gateway device enables connectivity for downstream devices that cannot connect directly to [IoT Hub](#iot-hub).
+
+## H
+
+### Hardware security module
+
+A hardware security module (HSM) is used for secure, hardware-based storage of device secrets. It is the most secure form of secret storage for a device. Both X.509 certificates and symmetric keys can be stored in an HSM. In the [Device Provisioning Service](#device-provisioning-service), an [attestation mechanism](#attestation-mechanism) can use an HSM.
+ ## I
+### ID scope
+
+The ID scope is a unique value assigned to a [Device Provisioning Service (DPS)](#device-provisioning-service) instance when it's created.
+
+IoT Central applications make use of DPS instances and make the ID Scope available through the IoT Central UI.
### Identity registry

The [identity registry](../iot-hub/iot-hub-devguide-identity-registry.md) is the built-in component of an IoT hub that stores information about the individual devices permitted to connect to an IoT hub.
+### Individual enrollment
+
+In the [Device Provisioning Service](#device-provisioning-service), an individual enrollment identifies a single device that uses an X.509 leaf certificate or symmetric key as an [attestation mechanism](#attestation-mechanism).
### Interactive message

An interactive message is a [cloud-to-device](#cloud-to-device) message that triggers an immediate action in the solution back end. For example, a device might send an alarm about a failure that should be automatically logged in to a CRM system.
An interactive message is a [cloud-to-device](#cloud-to-device) message that tri
In IoT Plug and Play, an interface describes related capabilities that are implemented by a [IoT Plug and Play device](#iot-plug-and-play-device) or [digital twin](#digital-twin). You can reuse interfaces across different [device models](#device-model). When an interface is used in a device model, it defines a [component](#component) of the device. A simple device only contains a default interface.
+In Azure Digital Twins, *interface* may be used to refer to the top-level code item in a [DTDL](#digital-twins-definition-language-dtdl) model definition.
### IoT Edge

Azure IoT Edge enables cloud-driven deployment of Azure services and solution-specific code to on-premises devices. [IoT Edge devices](#iot-edge-device) can aggregate data from other devices to perform computing and analytics before sending the data to the cloud. To learn more, see [Azure IoT Edge](../iot-edge/index.yml).
The part of the IoT Edge runtime responsible for deploying and monitoring modules.
### IoT Edge device
-An IoT Edge device uses containerized [IoT Edge modules](#iot-edge-module) to run Azure services, third-party services, or your own code. On the IoT Edge device, the [IoT Edge runtime](#iot-edge-runtime) manages the modules. You can remotely monitor and manage an IoT Edge device from the cloud.
-
-### IoT Edge automatic deployment
-
-An IoT Edge automatic deployment configures a target set of IoT Edge devices to run a set of IoT Edge modules. Each deployment continuously ensures that all devices that match its target condition are running the specified set of modules, even when new devices are created or are modified to match the target condition. Each IoT Edge device only receives the highest priority deployment whose target condition it meets. Learn more about [IoT Edge automatic deployment](../iot-edge/module-deployment-monitoring.md).
-
-### IoT Edge deployment manifest
-
-A Json document containing the information to be copied in one or more IoT Edge devices' module twin(s) to deploy a set of modules, routes, and associated module desired properties.
-
-### IoT Edge gateway device
-
-An IoT Edge device with downstream device. The downstream device can be either IoT Edge or not IoT Edge device.
+An IoT Edge device uses containerized IoT Edge [modules](#modules) to run Azure services, third-party services, or your own code. On the IoT Edge device, the [IoT Edge runtime](#iot-edge-runtime) manages the modules. You can remotely monitor and manage an IoT Edge device from the cloud.
### IoT Edge hub The part of the IoT Edge runtime responsible for module to module communications, upstream (toward IoT Hub) and downstream (away from IoT Hub) communications.
-### IoT Edge leaf device
-
-An IoT Edge device with no downstream device.
-
-### IoT Edge module
-
-An IoT Edge module is a Docker container that you can deploy to IoT Edge devices. It performs a specific task, such as ingesting a message from a device, transforming a message, or sending a message to an IoT hub. It communicates with other modules and sends data to the IoT Edge runtime. [Understand the requirements and tools for developing IoT Edge modules](../iot-edge/module-development.md).
-
-### IoT Edge module identity
-
-A record in the IoT Hub module identity registry detailing the existence and security credentials to be used by a module to authenticate with an edge hub or IoT Hub.
-
-### IoT Edge module image
-
-The docker image that is used by the IoT Edge runtime to instantiate module instances.
-
-### IoT Edge module twin
-
-A Json document persisted in the IoT Hub that stores the state information for a module instance.
-
-### IoT Edge priority
-
-When two IoT Edge deployments target the same device, the deployment with higher priority gets applied. If two deployments have the same priority, the deployment with the later creation date gets applied. Learn more about [priority](../iot-edge/module-deployment-monitoring.md#priority).
- ### IoT Edge runtime The IoT Edge runtime includes everything that Microsoft distributes to be installed on an IoT Edge device. It includes the IoT Edge agent, the IoT Edge hub, and the IoT Edge security daemon.
-### IoT Edge set modules to a single device
-
-An operation that copies the content of an IoT Edge manifest on one device module twin. The underlying API is a generic apply configuration, which simply takes an IoT Edge manifest as an input.
-
-### IoT Edge target condition
+### IoT extension for Azure CLI
-In an IoT Edge deployment, target condition is any Boolean condition on device twins' tags to select the target devices of the deployment, for example **tag.environment = prod**. The target condition is continuously evaluated to include any new devices that meet the requirements or remove devices that no longer do. Learn more about [target condition](../iot-edge/module-deployment-monitoring.md#target-condition)
+[The IoT extension for Azure CLI](https://github.com/Azure/azure-iot-cli-extension) is a cross-platform, command-line tool. The tool enables you to manage your devices in the [identity registry](#identity-registry), send and receive messages and files from your devices, and monitor your IoT hub operations.
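For example (the hub and device names here are hypothetical), after installing the extension you can register a device identity and watch the events the hub receives:

```azurecli
# Install the IoT extension, register a device, and monitor its messages.
az extension add --name azure-iot
az iot hub device-identity create --hub-name MyHub --device-id device-001
az iot hub monitor-events --hub-name MyHub --device-id device-001
```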
### IoT Hub
The [IoT Hub query language](../iot-hub/iot-hub-devguide-query-language.md) is a
You can use the [IoT Hub Resource REST API](/rest/api/iothub/iothubresource) to manage the IoT hubs in your Azure subscription performing operations such as creating, updating, and deleting hubs.
-### IoT solution accelerators
-
-Azure IoT solution accelerators package together multiple Azure services into solutions. These solutions enable you to get started quickly with end-to-end implementations of common IoT scenarios. For more information, see [What are Azure IoT solution accelerators?](../iot-accelerators/about-iot-accelerators.md)
-
-### The IoT extension for Azure CLI
-
-[The IoT extension for Azure CLI](https://github.com/Azure/azure-iot-cli-extension) is a cross-platform, command-line tool. The tool enables you to manage your devices in the [identity registry](#identity-registry), send and receive messages and files from your devices, and monitor your IoT hub operations.
- ### IoT Plug and Play bridge IoT Plug and Play bridge is an open-source application that enables existing sensors and peripherals attached to Windows or Linux gateways to connect as [IoT Plug and Play devices](#iot-plug-and-play-device).
+### IoT Plug and Play conventions
+
+IoT Plug and Play [devices](#iot-plug-and-play-device) are expected to follow a set of conventions when they exchange data with a solution.
+ ### IoT Plug and Play device An IoT Plug and Play device is typically a small-scale, standalone computing device that collects data or controls other devices, and that runs software or firmware that implements a [device model](#device-model). For example, an IoT Plug and Play device might be an environmental monitoring device, or a controller for a smart-agriculture irrigation system. An IoT Plug and Play device might be implemented directly or as an IoT Edge module. You can write a cloud-hosted IoT solution to command, control, and receive data from IoT Plug and Play devices.
-### IoT Plug and Play conventions
+### IoT solution accelerators
-IoT Plug and Play [devices](#iot-plug-and-play-device) are expected to follow a set of conventions when they exchange data with a solution.
+Azure IoT solution accelerators package together multiple Azure services into solutions. These solutions enable you to get started quickly with end-to-end implementations of common IoT scenarios. For more information, see [What are Azure IoT solution accelerators?](../iot-accelerators/about-iot-accelerators.md)
## J
IoT Plug and Play [devices](#iot-plug-and-play-device) are expected to follow a
Your solution back end can use [jobs](../iot-hub/iot-hub-devguide-jobs.md) to schedule and track activities on a set of devices registered with your IoT hub. Activities include updating device twin [desired properties](#desired-properties), updating device twin [tags](#tags), and invoking [direct methods](#direct-method). [IoT Hub](#iot-hub) also uses jobs to [import to and export](../iot-hub/iot-hub-devguide-identity-registry.md#import-and-export-device-identities) from the [identity registry](#identity-registry).
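As a sketch of scheduling such a job with the IoT extension for Azure CLI (the hub name, job ID, query condition, and twin patch are hypothetical):

```azurecli
# Schedule a twin update job across every device tagged for production.
az iot hub job create \
    --hub-name MyHub \
    --job-id reset-telemetry-interval \
    --job-type scheduleUpdateTwin \
    --query-condition "tags.environment='prod'" \
    --twin-patch '{"properties": {"desired": {"telemetryInterval": 30}}}'
```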
+## L
+
+### Leaf device
+
+In [IoT Edge](#iot-edge), a leaf device is a device with no downstream device.
+
+### Lifecycle event
+
+In Azure Digital Twins, this type of event is fired when a data item, such as a digital twin, a relationship, or an event handler, is created or deleted from your Azure Digital Twins instance.
+
+### Linked IoT hub
+
+The [Device Provisioning Service (DPS)](#device-provisioning-service) can provision devices to IoT hubs that have been linked to it. Linking an IoT hub to a DPS instance lets the service register a device ID and set the initial configuration in the device twin.
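For example (the DPS and hub names, key, and location are hypothetical), an existing hub can be linked to a DPS instance with the Azure CLI:

```azurecli
# Link an existing IoT hub to a Device Provisioning Service instance.
az iot dps linked-hub create \
    --dps-name MyDps \
    --resource-group MyResourceGroup \
    --connection-string "HostName=MyHub.azure-devices.net;SharedAccessKeyName=iothubowner;SharedAccessKey=<key>" \
    --location westus
```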
+ ## M
+### Model
+
+A model defines a type of entity in your physical environment, including its properties, telemetries, components, and sometimes other information. Models are used to create [digital twins](#digital-twin) that represent specific physical objects of this type. Models are written in the [Digital Twins Definition Language](#digital-twins-definition-language-dtdl).
+
+In the [Azure Digital Twins service](../digital-twins/index.yml), models can define devices or higher-level abstract business concepts. In [IoT Plug and Play](../iot-pnp/index.yml), [device models](#device-model) are used to describe devices specifically.
+ ### Model ID
-When an IoT Plug and Play device connects to an IoT Hub, it sends the **Model ID** of the [DTDL](#digital-twins-definition-language) model it implements. This ID enables the solution to find the device model.
+When an IoT Plug and Play device connects to an IoT Hub, it sends the **Model ID** of the [DTDL](#digital-twins-definition-language-dtdl) model it implements. This ID enables the solution to find the device model.
### Model repository
An API for managing and interacting with the model repository. For example, you
A module builder uses a [device model](#device-model) and [interfaces](#interface) when implementing code to run on an [IoT Plug and Play device](#iot-plug-and-play-device). Module builders implement the code as a module or an IoT Edge module to deploy to the IoT Edge runtime on a device.
-### Modules
+### Module identity
-On the device side, the IoT Hub device SDKs enable you to create [modules](../iot-hub/iot-hub-devguide-module-twins.md) where each one opens an independent connection to IoT Hub. This functionality enables you to use separate namespaces for different components on your device.
+The module identity is the unique identifier assigned to every module that belongs to a device. Module identity is also registered in the [identity registry](#identity-registry).
-Module identity and module twin provide the same capabilities as [device identity](#device-identity) and [device twin](#device-twin) but at a finer granularity. This finer granularity enables capable devices, such as operating system-based devices or firmware devices managing multiple components, to isolate configuration and conditions for each of those components.
+The module identity details the security credentials the module uses to authenticate with the [IoT Hub](#iot-hub) or, in the case of an IoT Edge module, with the [IoT Edge hub](#iot-edge-hub).
-### Module identity
+### Module image
-The module identity is the unique identifier assigned to every module that belongs to a device. Module identity is also registered in the [identity registry](#identity-registry).
+The Docker image that the [IoT Edge runtime](#iot-edge-runtime) uses to instantiate module instances.
### Module twin Similar to device twin, a module twin is a JSON document that stores module state information such as metadata, configurations, and conditions. IoT Hub persists a module twin for each module identity that you provision under a device identity in your IoT hub. Module twins enable you to synchronize module conditions and configurations between the module and the solution back end. You can query module twins to locate specific modules and query the status of long-running operations.
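For instance (the hub, device, and module names are hypothetical), a module twin can be inspected and its desired properties patched with the IoT extension for Azure CLI:

```azurecli
# Show a module twin, then set a desired property on it.
az iot hub module-twin show --hub-name MyHub --device-id device-001 --module-id module-a
az iot hub module-twin update --hub-name MyHub --device-id device-001 --module-id module-a \
    --set properties.desired.telemetryInterval=30
```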
+### Modules
+
+On the device side, the IoT Hub device SDKs enable you to create [modules](../iot-hub/iot-hub-devguide-module-twins.md) where each one opens an independent connection to IoT Hub. This functionality enables you to use separate namespaces for different components on your device.
+
+Module identity and module twin provide the same capabilities as [device identity](#device-identity) and [device twin](#device-twin) but at a finer granularity. This finer granularity enables capable devices, such as operating system-based devices or firmware devices managing multiple components, to isolate configuration and conditions for each of those components.
+
+In [IoT Edge](#iot-edge), a module is a Docker container that you can deploy to IoT Edge devices. It performs a specific task, such as ingesting a message from a device, transforming a message, or sending a message to an IoT hub. It communicates with other modules and sends data to the [IoT Edge runtime](#iot-edge-runtime).
+ ### MQTT [MQTT](https://mqtt.org/) is one of the messaging protocols that [IoT Hub](#iot-hub) supports for communicating with devices. For more information about the messaging protocols that IoT Hub supports, see [Send and receive messages with IoT Hub](../iot-hub/iot-hub-devguide-messaging.md).
When you connect to a device-facing or service-facing endpoint on an IoT hub, yo
### Properties
-Properties are data fields defined in an [interface](#interface) that represent some state of a digital twin. You can declare properties as read-only or writable. Read-only properties, such as serial number, are set by code running on the [IoT Plug and Play device](#iot-plug-and-play-device) itself. Writable properties, such as an alarm threshold, are typically set from the cloud-based IoT solution.
+Properties are data fields defined in an [interface](#interface) that represent some persistent state of a [digital twin](#digital-twin). You can declare properties as read-only or writable. Read-only properties, such as serial number, are set by code running on the [IoT Plug and Play device](#iot-plug-and-play-device) itself. Writable properties, such as an alarm threshold, are typically set from the cloud-based IoT solution.
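As a sketch (the hub, device, and property names are hypothetical), a writable property is typically set from the solution side by patching the desired properties of the twin:

```azurecli
# Set a writable property (here an alarm threshold) from the cloud side.
az iot hub device-twin update --hub-name MyHub --device-id device-001 \
    --set properties.desired.alarmThreshold=75
```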
+
+### Property change event
+
+An event that results from a property change in a [digital twin](#digital-twin).
### Protocol gateway
A protocol gateway is typically deployed in the cloud and provides protocol tran
## R
+### Registration
+
+A registration is the record of a device in the IoT Hub [Identity registry](#identity-registry). You can register a device directly, or use the [Device Provisioning Service](#device-provisioning-service) to automate device registration.
+
+### Registration ID
+
+The registration ID is used to uniquely identify a device [registration](#registration) with the [Device Provisioning Service](#device-provisioning-service). The registration ID may be the same value as the [Device identity](#device-identity).
+
+### Relationship
+
+In the [Azure Digital Twins](../digital-twins/index.yml) service, relationships are used to connect [digital twins](#digital-twin) into knowledge graphs that digitally represent your entire physical environment. The types of relationships that your twins can have are defined as part of the twins' [model](#model) definitions: the [DTDL](#digital-twins-definition-language-dtdl) model for a certain type of twin includes information about what relationships it can have to other twins.
+ ### Reported configuration In the context of a [device twin](../iot-hub/iot-hub-devguide-device-twins.md), reported configuration refers to the complete set of properties and metadata in the device twin that should be reported to the solution back end.
You configure [routing rules](../iot-hub/iot-hub-devguide-messages-read-custom.m
SASL PLAIN is a protocol that the AMQP protocol uses to transfer security tokens.
-### Service REST API
+### Service operations endpoint
-You can use the [Service REST API](/rest/api/iothub/service/configuration) from the solution back end to manage your devices. The API enables you to retrieve and update [device twin](#device-twin) properties, invoke [direct methods](#direct-method), and schedule [jobs](#job). Typically, you should use one of the higher-level [service SDKs](#azure-iot-service-sdks) as shown in the IoT Hub tutorials.
+An [endpoint](#endpoint) for managing service settings used by a service administrator. For example, in the [Device Provisioning Service](#device-provisioning-service) you use the service endpoint to manage enrollments.
-### Shared access signature
+### Service REST API
-Shared Access Signatures (SAS) are an authentication mechanism based on SHA-256 secure hashes or URIs. SAS authentication has two components: a _Shared Access Policy_ and a _Shared Access Signature_ (often called a token). A device uses SAS to authenticate with an IoT hub. [Back-end apps](#back-end-app) also use SAS to authenticate with the service-facing endpoints on an IoT hub. Typically, you include the SAS token in the [connection string](#connection-string) that an app uses to establish a connection to an IoT hub.
+You can use the [Service REST API](/rest/api/iothub/service/configuration) from the solution back end to manage your devices. The API enables you to retrieve and update [device twin](#device-twin) properties, invoke [direct methods](#direct-method), and schedule [jobs](#job). Typically, you should use one of the higher-level [service SDKs](#azure-iot-service-sdks) as shown in the IoT Hub tutorials.
### Shared access policy A shared access policy defines the permissions granted to anyone who has a valid [primary or secondary key](#primary-and-secondary-keys) associated with that policy. You can manage the shared access policies and keys for your hub in the portal.
+### Shared access signature
+
+Shared Access Signatures (SAS) are an authentication mechanism based on SHA-256 secure hashes or URIs. SAS authentication has two components: a _Shared Access Policy_ and a _Shared Access Signature_ (often called a token). A device uses SAS to authenticate with an IoT hub. [Back-end apps](#back-end-app) also use SAS to authenticate with the service-facing endpoints on an IoT hub. Typically, you include the SAS token in the [connection string](#connection-string) that an app uses to establish a connection to an IoT hub.
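For illustration (the hub and device names are hypothetical), a device-scoped SAS token can be generated with the IoT extension for Azure CLI:

```azurecli
# Generate a SAS token scoped to a single device, valid for one hour.
az iot hub generate-sas-token --hub-name MyHub --device-id device-001 --duration 3600
```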
+ ### Simulated device For convenience, many of the IoT Hub tutorials use simulated devices to enable you to run samples on your local machine. In contrast, a [physical device](#physical-device) is a real device such as a Raspberry Pi that connects to an IoT hub.
In the context of a [device twin](../iot-hub/iot-hub-devguide-device-twins.md),
In the context of a [device twin](../iot-hub/iot-hub-devguide-device-twins.md), tags are device metadata stored and retrieved by the solution back end in the form of a JSON document. Tags are not visible to apps on a device.
+### Target condition
+
+In an IoT Edge deployment, the target condition selects the target devices of the deployment, for example **tag.environment = prod**. The target condition is continuously evaluated to include any new devices that meet the requirements or remove devices that no longer do.
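As a sketch (the hub name, deployment ID, and manifest path are hypothetical), the target condition is supplied when an IoT Edge deployment is created:

```azurecli
# Create an IoT Edge deployment that targets production-tagged devices.
az iot edge deployment create \
    --hub-name MyHub \
    --deployment-id prod-rollout \
    --content ./deployment.json \
    --target-condition "tags.environment='prod'" \
    --priority 10
```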
+ ### Telemetry Devices collect telemetry data, such as wind speed or temperature, and use data-point messages to send the telemetry to an IoT hub.
-In IoT Plug and Play, telemetry fields defined in an [interface](#interface) represent measurements. These measurements are typically values such as sensor readings that are sent by the [IoT Plug and Play device](#iot-plug-and-play-device) as a stream of data.
+In IoT Plug and Play and Azure Digital Twins, telemetry fields defined in an [interface](#interface) represent measurements. These measurements are typically values such as sensor readings that are sent by devices, like [IoT Plug and Play devices](#iot-plug-and-play-device), as a stream of data.
+
+Unlike [properties](#properties), telemetry is not stored on a [digital twin](#digital-twin); it is a stream of time-bound data events that need to be handled as they occur.
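For illustration (the names and payload are hypothetical), the IoT extension for Azure CLI can simulate a device sending a telemetry data point:

```azurecli
# Send a single simulated device-to-cloud telemetry message.
az iot device send-d2c-message --hub-name MyHub --device-id device-001 \
    --data '{"temperature": 21.5, "windSpeed": 12.3}'
```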
+
+### Telemetry event
+
+An event that indicates the arrival of telemetry data.
### Token service You can use a token service to implement an authentication mechanism for your devices. It uses an IoT Hub [shared access policy](#shared-access-policy) with **DeviceConnect** permissions to create *device-scoped* tokens. These tokens enable a device to connect to your IoT hub. A device uses a custom authentication mechanism to authenticate with the token service. If the device authenticates successfully, the token service issues a SAS token for the device to use to access your IoT hub.
+### Twin graph (or digital twin graph)
+
+In the [Azure Digital Twins](../digital-twins/index.yml) service, you can connect [digital twins](#digital-twin) with [relationships](#relationship) to create knowledge graphs that digitally represent your entire physical environment. A single [Azure Digital Twins instance](#azure-digital-twins-instance) can host many disconnected graphs, or one single interconnected graph.
+ ### Twin queries [Device and module twin queries](../iot-hub/iot-hub-devguide-query-language.md) use the SQL-like IoT Hub query language to retrieve information from your device twins or module twins. You can use the same IoT Hub query language to retrieve information about a [Job](#job) running in your IoT hub.
You can use a token service to implement an authentication mechanism for your de
Twin synchronization uses the [desired properties](#desired-properties) in your device twins or module twins to configure your devices or modules and retrieve [reported properties](#reported-properties) from them to store in the twin.
+## U
+
+### Upstream services
+
+A relative term describing services that feed data into the current context. For instance, if you're thinking in the context of Azure Digital Twins, IoT Hub is considered an upstream service because data flows from IoT Hub into Azure Digital Twins.
+ ## X ### X.509 client certificate
iot-hub Iot Hub Devguide Quotas Throttling https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-devguide-quotas-throttling.md
For example, you use a simulated device to send 200 device-to-cloud messages per
Device identity registry operations are intended for run-time use in device management and provisioning scenarios. Reading or updating a large number of device identities is supported through [import and export jobs](iot-hub-devguide-identity-registry.md#import-and-export-device-identities).
-When initiating identity operations through [bulk device operations](iot-hub-bulk-identity-mgmt.md), the same throttle limits apply. For example, if you want to submit bulk operation to create 50 devices, and you have a S1 IoT Hub with 1 unit, only two of these bulk requests are accepted per minute. This because the identity operation throttle for for an S1 IoT Hub with 1 unit is 100/min/unit. Also in this case, a third request (and beyond) in the same minute would be rejected because the limit had already been reached.
+When initiating identity operations through [bulk registry update operations](https://docs.microsoft.com/rest/api/iothub/service/bulkregistry/updateregistry) (*not* bulk import and export jobs), the same throttle limits apply. For example, if you want to submit a bulk operation to create 50 devices, and you have an S1 IoT Hub with 1 unit, only two of these bulk requests are accepted per minute. This is because the identity operation throttle for an S1 IoT Hub with 1 unit is 100/min/unit. In this case, a third request (and beyond) in the same minute would be rejected because the limit had already been reached.
### Device connections throttle
kinect-dk Body Sdk Download https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/kinect-dk/body-sdk-download.md
description: Understand how to download each version of the Azure Kinect Sensor
ms.prod: kinect-dk Previously updated : 06/26/2019 Last updated : 03/18/2021 keywords: azure, kinect, sdk, download update, latest, available, install, body, tracking
This document provides links to install each version of the Azure Kinect Body Tr
Version | Download --|-
+1.1.0 | [msi](https://www.microsoft.com/en-us/download/details.aspx?id=102901)
1.0.1 | [msi](https://www.microsoft.com/en-us/download/details.aspx?id=100942) [nuget](https://www.nuget.org/packages/Microsoft.Azure.Kinect.BodyTracking/1.0.1) 1.0.0 | [msi](https://www.microsoft.com/en-us/download/details.aspx?id=100848) [nuget](https://www.nuget.org/packages/Microsoft.Azure.Kinect.BodyTracking/1.0.0)
-0.9.5 | [msi](https://www.microsoft.com/en-us/download/details.aspx?id=100636) [nuget](https://www.nuget.org/packages/Microsoft.Azure.Kinect.BodyTracking/0.9.5)
-0.9.4 | [msi](https://www.microsoft.com/en-us/download/details.aspx?id=100415) [nuget](https://www.nuget.org/packages/Microsoft.Azure.Kinect.BodyTracking/0.9.4)
-0.9.3 | [msi](https://www.microsoft.com/en-us/download/details.aspx?id=100307) [nuget](https://www.nuget.org/packages/Microsoft.Azure.Kinect.BodyTracking/0.9.3)
-0.9.2 | [msi](https://www.microsoft.com/en-us/download/details.aspx?id=100128) [nuget](https://www.nuget.org/packages/Microsoft.Azure.Kinect.BodyTracking/0.9.2)
-0.9.1 | [msi](https://www.microsoft.com/en-us/download/details.aspx?id=100063) [nuget](https://www.nuget.org/packages/Microsoft.Azure.Kinect.BodyTracking/0.9.1)
-0.9.0 | [msi](https://www.microsoft.com/en-us/download/details.aspx?id=58402) [nuget](https://www.nuget.org/packages/Microsoft.Azure.Kinect.BodyTracking/0.9.0)
## Linux installation instructions
-Currently, the only supported distribution is Ubuntu 18.04. To request support for other distributions, see [this page](https://aka.ms/azurekinectfeedback).
+Currently, the only supported distributions are Ubuntu 18.04 and 20.04. To request support for other distributions, see [this page](https://aka.ms/azurekinectfeedback).
First, you'll need to configure [Microsoft's Package Repository](https://packages.microsoft.com/), following the instructions [here](/windows-server/administration/linux-package-repository-for-microsoft-software).
The `libk4abt<major>.<minor>` package contains the shared objects needed to run
The basic tutorials require the `libk4abt<major>.<minor>-dev` package. To install it, run
-`sudo apt install libk4abt1.0-dev`
+`sudo apt install libk4abt<major>.<minor>-dev`
If the command succeeds, the SDK is ready for use.
If the command succeeds, the SDK is ready for use.
## Change log
+### v1.1.0
+* [Feature] Add support for DirectML (Windows only) and TensorRT execution of pose estimation model. See FAQ on new execution environments.
+* [Feature] Add `model_path` to `k4abt_tracker_configuration_t` struct. Allows users to specify the pathname for pose estimation model. Defaults to `dnn_model_2_0_op11.onnx` standard pose estimation model located in the current directory.
+* [Feature] Include `dnn_model_2_0_lite_op11.onnx` lite pose estimation model. This model trades ~2x performance increase for ~5% accuracy decrease.
+* [Feature] Verified that samples compile with Visual Studio 2019 and updated samples to use this release.
+* [Breaking Change] Update to ONNX Runtime 1.6 with support for CPU, CUDA 11.1, DirectML (Windows only), and TensorRT 7.2.1 execution environments. Requires NVIDIA driver update to R455 or better.
+* [Breaking Change] No NuGet install.
+* [Bug Fix] Add support for NVIDIA RTX 30xx series GPUs [Link](https://github.com/microsoft/Azure-Kinect-Sensor-SDK/issues/1481)
+* [Bug Fix] Add support for AMD and Intel integrated GPUs (Windows only) [Link](https://github.com/microsoft/Azure-Kinect-Sensor-SDK/issues/1481)
+* [Bug Fix] Update to CUDA 11.1 [Link](https://github.com/microsoft/Azure-Kinect-Sensor-SDK/issues/1125)
+* [Bug Fix] Update to Sensor SDK 1.4.1 [Link](https://github.com/microsoft/Azure-Kinect-Sensor-SDK/issues/1248)
+ ### v1.0.1 * [Bug Fix] Fix issue that the SDK crashes if loading onnxruntime.dll from path on Windows build 19025 or later: [Link](https://github.com/microsoft/Azure-Kinect-Sensor-SDK/issues/932)
If the command succeeds, the SDK is ready for use.
* [Bug Fix] Fix issue that the CPU usage goes up to 100% on Linux machine: [Link](https://github.com/microsoft/Azure-Kinect-Sensor-SDK/issues/1007) * [Samples] Add two samples to the sample repo. Sample 1 demonstrates how to transform body tracking results from the depth space to color space [Link](https://github.com/microsoft/Azure-Kinect-Samples/tree/master/body-tracking-samples/camera_space_transform_sample); sample 2 demonstrates how to detect floor plane [Link](https://github.com/microsoft/Azure-Kinect-Samples/tree/master/body-tracking-samples/floor_detector_sample)
-### v0.9.5
-* [Feature] C# support. C# wrapper is packed in the nuget package.
-* [Feature] Multi-tracker support. Creating multiple trackers is allowed. Now user can create multiple trackers to track bodies from different Azure Kinect devices.
-* [Feature] Multi-thread processing support for CPU mode. When running on CPU mode, all cores will be used to maximize the speed.
-* [Feature] Add `gpu_device_id` to `k4abt_tracker_configuration_t` struct. Allow users to specify GPU device that is other than the default one to run the body tracking algorithm.
-* [Bug Fix/Breaking Change] Fix typo in a joint name. Change joint name from `K4ABT_JOINT_SPINE_NAVAL` to `K4ABT_JOINT_SPINE_NAVEL`.
-
-### v0.9.4
-* [Feature] Add hand joints support. The SDK will provide information for three additional joints for each hand: HAND, HANDTIP, THUMB.
-* [Feature] Add prediction confidence level for each detected joints.
-* [Feature] Add CPU mode support. By changing the `cpu_only_mode` value in `k4abt_tracker_configuration_t`, now the SDK can run on CPU mode which doesn't require the user to have a powerful graphics card.
-
-### v0.9.3
-* [Feature] Publish a new DNN model dnn_model_2_0.onnx, which largely improves the robustness of the body tracking.
-* [Feature] Disable the temporal smoothing by default. The tracked joints will be more responsive.
-* [Feature] Improve the accuracy of the body index map.
-* [Bug Fix] Fix bug that the sensor orientation setting is not effective.
-* [Bug Fix] Change the body_index_map type from K4A_IMAGE_FORMAT_CUSTOM to K4A_IMAGE_FORMAT_CUSTOM8.
-* [Known Issue] Two close bodies may merge to single instance segment.
-
-### v0.9.2
-* [Breaking Change] Update to depend on the latest Azure Kinect Sensor SDK 1.2.0.
-* [API Change] `k4abt_tracker_create` function will start to take a `k4abt_tracker_configuration_t` input.
-* [API Change] Change `k4abt_frame_get_timestamp_usec` API to `k4abt_frame_get_device_timestamp_usec` to be more specific and consistent with the Sensor SDK 1.2.0.
-* [Feature] Allow users to specify the sensor mounting orientation when creating the tracker to achieve more accurate body tracking results when mounting at different angles.
-* [Feature] Provide new API `k4abt_tracker_set_temporal_smoothing` to change the amount of temporal smoothing that the user wants to apply.
-* [Feature] Add C++ wrapper k4abt.hpp.
-* [Feature] Add version definition header k4abtversion.h.
-* [Bug Fix] Fix bug that caused extremely high CPU usage.
-* [Bug Fix] Fix logger crashing bug.
-
-### v0.9.1
-* [Bug Fix] Fix memory leak when destroying tracker
-* [Bug Fix] Better error messages for missing dependencies
-* [Bug Fix] Fail without crashing when creating a second tracker instance
-* [Bug Fix] Logger environmental variables now work correctly
-* Linux support
-
-### v0.9.0
-
-* [Breaking Change] Downgraded the SDK dependency to CUDA 10.0 (from CUDA 10.1). ONNX runtime officially only supports up to CUDA 10.0.
-* [Breaking Change] Switched to ONNX runtime instead of Tensorflow runtime. Reduces the first frame launching time and memory usage. It also reduces the SDK binary size.
-* [API Change] Renamed `k4abt_tracker_queue_capture()` to `k4abt_tracker_enqueue_capture()`
-* [API Change] Broke `k4abt_frame_get_body()` into two separate functions: `k4abt_frame_get_body_skeleton()` and `k4abt_frame_get_body_id()`. Now you can query the body ID without always copying the whole skeleton structure.
-* [API Change] Added `k4abt_frame_get_timestamp_usec()` function to simplify the steps for the users to query body frame timestamp.
-* Further improved the body tracking algorithm accuracy and tracking reliability
- ## Next steps - [Azure Kinect DK overview](about-azure-kinect-dk.md)
kinect-dk Hardware Specification https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/kinect-dk/hardware-specification.md
Title: Azure Kinect DK hardware specifications description: Understand the components, specifications, and capabilities of the Azure Kinect DK.---++ ms.prod: kinect-dk Previously updated : 02/14/2020 Last updated : 03/18/2021 keywords: azure, kinect, specs, hardware, DK, capabilities, depth, color, RGB, IMU, microphone, array, depth
The Azure Kinect device consists of the following size and weight dimensions.
![Azure Kinect DK dimensions](./media/resources/hardware-specs-media/dimensions.png)
+A STEP file for the Azure Kinect device is available [here](https://github.com/microsoft/Azure-Kinect-Sensor-SDK/blob/develop/assets).
+ ## Operating environment Azure Kinect DK is intended for developers and commercial businesses operating under the following ambient conditions:
kinect-dk System Requirements https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/kinect-dk/system-requirements.md
Title: Azure Kinect Sensor SDK system requirements description: Understand the system requirements for the Azure Kinect Sensor SDK on Windows and Linux.--++ - CI 115266 - CSSTroubleshooting ms.prod: kinect-dk Previously updated : 03/12/2020 Last updated : 03/05/2021 keywords: azure, kinect, system requirements, CPU, GPU, USB, set up, setup, minimum, requirements
The body tracking PC host requirement is more stringent than the general PC host
- Seventh Gen Intel&reg; CoreTM i5 Processor (Quad Core 2.4 GHz or faster) - 4 GB Memory-- NVIDIA GEFORCE GTX 1070 or better
+- NVIDIA GEFORCE GTX 1050 or equivalent
- Dedicated USB3 port The recommended minimum configuration assumes K4A_DEPTH_MODE_NFOV_UNBINNED depth mode at 30fps tracking 5 people. Lower end or older CPUs and NVIDIA GPUs may also work depending on your use-case.
kinect-dk Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/kinect-dk/troubleshooting.md
Title: Azure Kinect known issues and troubleshooting description: Learn about some of the known issues and troubleshooting tips when using the Sensor SDK with Azure Kinect DK.--++ ms.prod: kinect-dk Previously updated : 06/26/2019 Last updated : 03/05/2021 keywords: troubleshooting, update, bug, kinect, feedback, recovery, logging, tips
The Azure Kinect depth engine on Linux uses OpenGL. OpenGL requires a window ins
1. Enable automatic login for the user account you plan to use. Refer to [this](https://vitux.com/how-to-enable-disable-automatic-login-in-ubuntu-18-04-lts/) article for instructions on enabling automatic login. 2. Power down the system, disconnect the monitor and power up the system. Automatic login forces the creation of an x-server session.
-2. Connect via ssh and set the DISPLAY env variable `export DISPLAY=:0`
-3. Start your Azure Kinect application.
+3. Connect via ssh and set the DISPLAY env variable `export DISPLAY=:0`
+4. Start your Azure Kinect application.
The [xtrlock](http://manpages.ubuntu.com/manpages/xenial/man1/xtrlock.1x.html) utility may be used to immediately lock the screen after automatic login. Add the following command to the startup application or systemd service:
-`bash -c ΓÇ£xtrlock -bΓÇ¥`
+`bash -c "xtrlock -b"`
## Missing C# documentation
The Sensor SDK C# documentation is located [here](https://microsoft.github.io/Az
The Body Tracking SDK C# documentation is located [here](https://microsoft.github.io/Azure-Kinect-Body-Tracking/release/1.x.x/namespace_microsoft_1_1_azure_1_1_kinect_1_1_body_tracking.html).
+## Specifying ONNX Runtime execution environment
+
+The Body Tracking SDK supports CPU, CUDA, DirectML (Windows only), and TensorRT execution environments for inferencing the pose estimation model. The `K4ABT_TRACKER_PROCESSING_MODE_GPU` mode defaults to CUDA execution on Linux and DirectML execution on Windows. Three additional modes have been added to select specific execution environments: `K4ABT_TRACKER_PROCESSING_MODE_GPU_CUDA`, `K4ABT_TRACKER_PROCESSING_MODE_GPU_DIRECTML`, and `K4ABT_TRACKER_PROCESSING_MODE_GPU_TENSORRT`.
+
+ONNX Runtime includes environment variables to control TensorRT model caching. The recommended values are:
+- ORT_TENSORRT_ENGINE_CACHE_ENABLE=1
+- ORT_TENSORRT_ENGINE_CACHE_PATH="pathname"
+
+The folder must be created prior to starting body tracking.
+
+The TensorRT execution environment supports both FP32 (default) and FP16. FP16 trades ~2x performance increase for minimal accuracy decrease. To specify FP16:
+- ORT_TENSORRT_FP16_ENABLE=1
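As a sketch for a Linux shell (the cache folder path is hypothetical; on Windows, set the same variables with `set` or `setx` instead of `export`):

```bash
# Enable the TensorRT engine cache, point it at a pre-created folder,
# and opt in to FP16 execution before starting body tracking.
mkdir -p /opt/k4abt/trt-cache
export ORT_TENSORRT_ENGINE_CACHE_ENABLE=1
export ORT_TENSORRT_ENGINE_CACHE_PATH="/opt/k4abt/trt-cache"
export ORT_TENSORRT_FP16_ENABLE=1
```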
+
+## Required DLLs for ONNX Runtime execution environments
+
+|Mode | CUDA 11.1 | CUDNN 8.0.5 | TensorRT 7.2.1 |
+|-|-|-|-|
+| CPU | cudart64_110 | cudnn64_8 | - |
+| | cufft64_10 | | |
+| | cublas64_11 | | |
+| | cublasLt64_11 | | |
+| CUDA | cudart64_110 | cudnn64_8 | - |
+| | cufft64_10 | cudnn_ops_infer64_8 | |
+| | cublas64_11 | cudnn_cnn_infer64_8 | |
+| | cublasLt64_11 | | |
+| DirectML | cudart64_110 | cudnn64_8 | - |
+| | cufft64_10 | | |
+| | cublas64_11 | | |
+| | cublasLt64_11 | | |
+| TensorRT | cudart64_110 | cudnn64_8 | nvinfer |
+| | cufft64_10 | cudnn_ops_infer64_8 | nvinfer_plugin |
+| | cublas64_11 | cudnn_cnn_infer64_8 | myelin64_1 |
+| | cublasLt64_11 | | |
+| | nvrtc64_111_0 | | |
+| | nvrtc-builtins64_111 | | |
+ ## Next steps [More support information](support.md)
load-balancer Backend Pool Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/backend-pool-management.md
az vm create \
--generate-ssh-keys ```
-### REST API
-Create the backend pool:
-
-```
-PUT https://management.azure.com/subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.Network/loadBalancers/{load-balancer-name}/backendAddressPools/{backend-pool-name}?api-version=2020-05-01
-```
-
-Create a network interface and add it to the backend pool you've created via the IP configurations property of the network interface:
-
-```
-PUT https://management.azure.com/subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.Network/networkInterfaces/{nic-name}?api-version=2020-05-01
-```
-
-JSON request body:
-```json
-{
- "properties": {
- "enableAcceleratedNetworking": true,
- "ipConfigurations": [
- {
- "name": "ipconfig1",
- "properties": {
- "subnet": {
- "id": "/subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.Network/virtualNetworks/{vnet-name}/subnets/{subnet-name}"
- },
- "loadBalancerBackendAddressPools": [
- {
- "id": "/subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.Network/loadBalancers/{load-balancer-name}/backendAddressPools/{backend-pool-name}"
- }
- ]
- }
- }
- ]
- },
- "location": "eastus"
-}
-```
-
-Retrieve the backend pool information for the load balancer to confirm that this network interface is added to the backend pool:
-
-```
-GET https://management.azure.com/subscriptions/{subscription-id}/resourceGroups/{resource-group-name/providers/Microsoft.Network/loadBalancers/{load-balancer-name/backendAddressPools/{backend-pool-name}?api-version=2020-05-01
-```
-
-Create a VM and attach the NIC referencing the backend pool:
-
-```
-PUT https://management.azure.com/subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.Compute/virtualMachines/{vm-name}?api-version=2019-12-01
-```
-
-JSON request body:
-```JSON
-{
- "location": "easttus",
- "properties": {
- "hardwareProfile": {
- "vmSize": "Standard_D1_v2"
- },
- "storageProfile": {
- "imageReference": {
- "sku": "2016-Datacenter",
- "publisher": "MicrosoftWindowsServer",
- "version": "latest",
- "offer": "WindowsServer"
- },
- "osDisk": {
- "caching": "ReadWrite",
- "managedDisk": {
- "storageAccountType": "Standard_LRS"
- },
- "name": "myVMosdisk",
- "createOption": "FromImage"
- }
- },
- "networkProfile": {
- "networkInterfaces": [
- {
- "id": "/subscriptions/{subscription-id}/resourceGroups/myResourceGroup/providers/Microsoft.Network/networkInterfaces/{nic-name}",
- "properties": {
- "primary": true
- }
- }
- ]
- },
- "osProfile": {
- "adminUsername": "{your-username}",
- "computerName": "myVM",
- "adminPassword": "{your-password}"
- }
- }
-}
-```
- ### Resource Manager Template Follow this [quickstart Resource Manager template](https://github.com/Azure/azure-quickstart-templates/tree/master/101-load-balancer-standard-create/) to deploy a load balancer and virtual machines and add the virtual machines to the backend pool via network interface.
In scenarios with pre-populated backend pools, use IP and virtual network.
All backend pool management is done directly on the backend pool object as highlighted in the examples below.
-### Limitations
-A Backend Pool configured by IP address has the following limitations:
- * Can only be used for Standard load balancers
- * Limit of 100 IP addresses in the backend pool
- * The backend resources must be in the same virtual network as the load balancer
- * A Load Balancer with IP-based Backend Pool cannot function as a Private Link service
- * This feature is not currently supported in the Azure portal
- * ACI containers are not currently supported by this feature
- * Load balancers or services fronted by load balancers cannot be placed in the backend pool of the load balancer
- * Inbound NAT Rules cannot be specified by IP address
- ### PowerShell Create new backend pool:
az vm create \
--admin-username azureuser \ --generate-ssh-keys ```
+
+### Limitations
+A Backend Pool configured by IP address has the following limitations:
+ * Can only be used for Standard load balancers
+ * Limit of 100 IP addresses in the backend pool
+ * The backend resources must be in the same virtual network as the load balancer
+ * A Load Balancer with IP-based Backend Pool cannot function as a Private Link service
+ * This feature is not currently supported in the Azure portal
+ * ACI containers are not currently supported by this feature
+ * Load balancers or services such as Application Gateway cannot be placed in the backend pool of the load balancer
+ * Inbound NAT Rules cannot be specified by IP address
-### REST API
-
-Create the backend pool and define the backend addresses via a PUT backend pool request.
-Configure the backend addresses in the JSON body of the PUT request by:
-
-* Address name
-* IP address
-* Virtual network ID
-
-```
-PUT https://management.azure.com/subscriptions/subid/resourceGroups/testrg/providers/Microsoft.Network/loadBalancers/lb/backendAddressPools/backend?api-version=2020-05-01
-```
-
-JSON Request Body:
-```JSON
-{
- "properties": {
- "loadBalancerBackendAddresses": [
- {
- "name": "address1",
- "properties": {
- "virtualNetwork": {
- "id": "/subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.Network/virtualNetworks/{vnet-name}"
- },
- "ipAddress": "10.0.0.4"
- }
- },
- {
- "name": "address2",
- "properties": {
- "virtualNetwork": {
- "id": "/subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.Network/virtualNetworks/{vnet-name}"
- },
- "ipAddress": "10.0.0.5"
- }
- }
- ]
- }
-}
-```
-
-Retrieve the backend pool information for the load balancer to confirm that the backend addresses are added to the backend pool:
-```
-GET https://management.azure.com/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.Network/loadBalancers/{load-balancer-name}/backendAddressPools/{backend-pool-name}?api-version=2020-05-01
-```
-
-Create a network interface and add it to the backend pool. Set the IP address to one of the backend addresses:
-```
-PUT https://management.azure.com/subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.Network/networkInterfaces/{nic-name}?api-version=2020-05-01
-```
-
-JSON Request Body:
-```JSON
-{
- "properties": {
- "enableAcceleratedNetworking": true,
- "ipConfigurations": [
- {
- "name": "ipconfig1",
- "properties": {
- "privateIPAddress": "10.0.0.4",
- "subnet": {
- "id": "/subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.Network/virtualNetworks/{vnet-name}/subnets/{subnet-name}"
- }
- }
- }
- ]
- },
- "location": "eastus"
-}
-```
-
-Create a VM and attach the NIC with an IP address in the backend pool:
-
-```
-PUT https://management.azure.com/subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.Compute/virtualMachines/{vm-name}?api-version=2019-12-01
-```
-
-JSON Request Body:
-```JSON
-{
- "location": "eastus",
- "properties": {
- "hardwareProfile": {
- "vmSize": "Standard_D1_v2"
- },
- "storageProfile": {
- "imageReference": {
- "sku": "2016-Datacenter",
- "publisher": "MicrosoftWindowsServer",
- "version": "latest",
- "offer": "WindowsServer"
- },
- "osDisk": {
- "caching": "ReadWrite",
- "managedDisk": {
- "storageAccountType": "Standard_LRS"
- },
- "name": "myVMosdisk",
- "createOption": "FromImage"
- }
- },
- "networkProfile": {
- "networkInterfaces": [
- {
- "id": "/subscriptions/{subscription-id}/resourceGroups/myResourceGroup/providers/Microsoft.Network/networkInterfaces/{nic-name}",
- "properties": {
- "primary": true
- }
- }
- ]
- },
- "osProfile": {
- "adminUsername": "{your-username}",
- "computerName": "myVM",
- "adminPassword": "{your-password}"
- }
- }
-}
-```
-
## Next steps In this article, you learned about Azure Load Balancer backend pool management and how to configure a backend pool by IP address and virtual network. Learn more about [Azure Load Balancer](load-balancer-overview.md).
+Review the [REST API](https://docs.microsoft.com/rest/api/load-balancer/loadbalancerbackendaddresspools/createorupdate) for IP-based backend pool management.
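As a CLI-based sketch of the same IP-based management (the resource names and address below are hypothetical), a backend address can be added with:

```azurecli
# Add an IP address from a virtual network to an IP-based backend pool.
az network lb address-pool address add \
    --resource-group MyResourceGroup \
    --lb-name MyLoadBalancer \
    --pool-name MyBackendPool \
    --name address1 \
    --vnet MyVnet \
    --ip-address 10.0.0.4
```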
machine-learning Export Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/algorithm-module-reference/export-data.md
Previously updated : 07/28/2020 Last updated : 03/19/2021 # Export Data module
Before exporting your data, you need to first register a datastore in your Azure
If it is selected, the system will execute the module again to regenerate output.
-1. Define the path in the datastore where the data is. The path is a relative path. The empty paths or a URL paths are not allowed.
+1. Define the path in the datastore where the data is. The path is a relative path. For example, `data/testoutput` means the input data of **Export Data** will be exported to `data/testoutput` in the datastore you set in the **Output settings** of the module.
+
+ > [!NOTE]
+ > Empty paths and **URL paths** are not allowed.
1. For **File format**, select the format in which data should be stored.
machine-learning Train Clustering Model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/algorithm-module-reference/train-clustering-model.md
Previously updated : 11/19/2019 Last updated : 03/17/2021 # Train Clustering Model
After training has completed:
+ To generate scores from the model, use [Assign Data to Clusters](assign-data-to-clusters.md).
+> [!NOTE]
+> If you need to deploy the trained model in the designer, make sure that [Assign Data to Clusters](assign-data-to-clusters.md) instead of **Score Model** is connected to the input of the [Web Service Output module](web-service-input-output.md) in the inference pipeline.
+ ## Next steps See the [set of modules available](module-reference.md) to Azure Machine Learning.
machine-learning Train Svd Recommender https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/algorithm-module-reference/train-svd-recommender.md
Previously updated : 02/22/2020 Last updated : 03/17/2021 # Train SVD Recommender This article describes how to use the Train SVD Recommender module in Azure Machine Learning designer. Use this module to train a recommendation model based on the Single Value Decomposition (SVD) algorithm.
-The Train SVD Recommender module reads a dataset of user-item-rating triples. It returns a trained SVD recommender. You can then use the trained model to predict ratings or generate recommendations, by using the [Score SVD Recommender](score-svd-recommender.md) module.
+The Train SVD Recommender module reads a dataset of user-item-rating triples. It returns a trained SVD recommender. You can then use the trained model to predict ratings or generate recommendations, by connecting the [Score SVD Recommender](score-svd-recommender.md) module.
From this sample, you can see that a single user has rated several movies.
5. Submit the pipeline.
+## Results
+
+After the pipeline run is completed, to use the model for scoring, connect [Train SVD Recommender](train-svd-recommender.md) to [Score SVD Recommender](score-svd-recommender.md) to predict values for new input examples.
## Next steps
machine-learning Train Vowpal Wabbit Model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/algorithm-module-reference/train-vowpal-wabbit-model.md
Vowpal Wabbit supports incremental training by adding new data to an existing mo
6. Submit the pipeline. 7. Select the module and select **Register dataset** under **Outputs+logs** tab in the right pane, to preserve the updated model in your Azure Machine Learning workspace. If you don't specify a new name, the updated model overwrites the existing saved model.
+## Results
+++ To generate scores from the model, use [Score Vowpal Wabbit Model](score-vowpal-wabbit-model.md).+
+> [!NOTE]
+> If you need to deploy the trained model in the designer, make sure that [Score Vowpal Wabbit Model](score-vowpal-wabbit-model.md) instead of **Score Model** is connected to the input of the [Web Service Output module](web-service-input-output.md) in the inference pipeline.
+ ## Technical notes This section contains implementation details, tips, and answers to frequently asked questions.
machine-learning Train Wide And Deep Recommender https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/algorithm-module-reference/train-wide-and-deep-recommender.md
For an example, a typical set of item features might look like this:
17. Run the pipeline.
+## Results
+
+After the pipeline run is completed, to use the model for scoring, connect [Train Wide and Deep Recommender](train-wide-and-deep-recommender.md) to [Score Wide and Deep Recommender](score-wide-and-deep-recommender.md) to predict values for new input examples.
## Technical notes The Wide & Deep jointly trains wide linear models and deep neural networks to combine the strengths of memorization and generalization. The wide component accepts a set of raw features and feature transformations to memorize feature interactions. And with less feature engineering, the deep component generalizes to unseen feature combinations through low-dimensional dense feature embeddings.
media-services Connect To Azure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/video-indexer/connect-to-azure.md
Automating the creation of the account is a two-step process:
- An Azure subscription in [Azure Government](../../azure-government/index.yml). - An Azure AD account in Azure Government.-- All pre-requirements of permissions and resources as described above in [Prerequisites for connecting to Azure](#prerequisites-for-connecting-to-azure).
+- All prerequisite permissions and resources as described above in [Prerequisites for connecting to Azure](#prerequisites-for-connecting-to-azure). Make sure to check [Additional prerequisites for automatic flow](#additional-prerequisites-for-automatic-flow) and [Additional prerequisites for manual flow](#additional-prerequisites-for-manual-flow).
### Create new account via the Azure Government portal
The account will be permanently deleted in 90 days.
You can programmatically interact with your trial account and/or with your Video Indexer accounts that are connected to Azure by following the instructions in: [Use APIs](video-indexer-use-apis.md).
-You should use the same Azure AD user you used when connecting to Azure.
+You should use the same Azure AD user you used when connecting to Azure.
migrate Tutorial App Containerization Aspnet Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/tutorial-app-containerization-aspnet-kubernetes.md
+
+ Title: Azure App Containerization ASP.NET; Containerization and migration of ASP.NET applications to Azure Kubernetes.
+description: Tutorial: Containerize & migrate ASP.NET applications to Azure Kubernetes Service.
+Last updated: 3/2/2021
+# Containerize ASP.NET applications and migrate to Azure Kubernetes Service
+
+In this article, you'll learn how to containerize ASP.NET applications and migrate them to [Azure Kubernetes Service (AKS)](https://azure.microsoft.com/services/kubernetes-service/) using the Azure Migrate: App Containerization tool. The containerization process doesn't require access to your codebase and provides an easy way to containerize existing applications. The tool works by using the running state of the applications on a server to determine the application components and helps you package them in a container image. The containerized application can then be deployed on Azure Kubernetes Service (AKS).
+
+The Azure Migrate: App Containerization tool currently supports:
+
+- Containerizing ASP.NET apps and deploying them on Windows containers on AKS.
+- Containerizing Java Web Apps on Apache Tomcat (on Linux servers) and deploying them on Linux containers on AKS. [Learn more](./tutorial-containerize-java-kubernetes.md)
++
+The Azure Migrate: App Containerization tool helps you to:
+
+- **Discover your application**: The tool remotely connects to the application servers running your ASP.NET application and discovers the application components. The tool creates a Dockerfile that can be used to create a container image for the application.
+- **Build the container image**: You can inspect and further customize the Dockerfile as per your application requirements and use that to build your application container image. The application container image is pushed to an Azure Container Registry you specify.
+- **Deploy to Azure Kubernetes Service**: The tool then generates the Kubernetes resource definition YAML files needed to deploy the containerized application to your Azure Kubernetes Service cluster. You can customize the YAML files and use them to deploy the application on AKS.
+
+> [!NOTE]
+> The Azure Migrate: App Containerization tool helps you discover specific application types (ASP.NET and Java web apps on Apache Tomcat) and their components on an application server. To discover servers and the inventory of apps, roles, and features running on on-premises machines, use the Azure Migrate: Discovery and assessment capability. [Learn more](./tutorial-discover-vmware.md)
+
+While all applications won't benefit from a straight shift to containers without significant rearchitecting, some of the benefits of moving existing apps to containers without rewriting include:
+
+- **Improved infrastructure utilization:** With containers, multiple applications can share resources and be hosted on the same infrastructure. This can help you consolidate infrastructure and improve utilization.
+- **Simplified management:** By hosting your applications on a modern managed infrastructure platform like AKS, you can simplify your management practices while still retaining control over your infrastructure. You can achieve this by retiring or reducing the infrastructure maintenance and management processes that you'd traditionally perform with owned infrastructure.
+- **Application portability:** With increased adoption and standardization of container specification formats and orchestration platforms, application portability is no longer a concern.
+- **Adopt modern management with DevOps:** Helps you adopt and standardize on modern practices for management and security with Infrastructure as Code and transition to DevOps.
++
+In this tutorial, you'll learn how to:
+
+> [!div class="checklist"]
+> * Set up an Azure account.
+> * Install the Azure Migrate: App Containerization tool.
+> * Discover your ASP.NET application.
+> * Build the container image.
+> * Deploy the containerized application on AKS.
+
+> [!NOTE]
+> Tutorials show you the simplest deployment path for a scenario so that you can quickly set up a proof-of-concept. Tutorials use default options where possible, and don't show all possible settings and paths.
+
+## Prerequisites
+
+Before you begin this tutorial, you should:
+
+**Requirement** | **Details**
+--- | ---
+**Identify a machine to install the tool** | A Windows machine to install and run the Azure Migrate: App Containerization tool. The Windows machine could be a server (Windows Server 2016 or later) or client (Windows 10) operating system, meaning that the tool can run on your desktop as well. <br/><br/> The Windows machine running the tool should have network connectivity to the servers/virtual machines hosting the ASP.NET applications to be containerized.<br/><br/> Ensure that 6 GB of space is available on the Windows machine running the Azure Migrate: App Containerization tool for storing application artifacts. <br/><br/> The Windows machine should have internet access, directly or via a proxy. <br/> <br/>Install the Microsoft Web Deploy tool on the machine running the App Containerization helper tool and application server if not already installed. You can download the tool from [here](https://aka.ms/webdeploy3.6).
+**Application servers** | Enable PowerShell remoting on the application servers: sign in to the application server and follow [these](https://docs.microsoft.com/powershell/module/microsoft.powershell.core/enable-psremoting) instructions to turn on PowerShell remoting. <br/><br/> If the application server is running Windows Server 2008 R2, ensure that PowerShell 5.1 is installed on the application server. Follow the instructions [here](https://docs.microsoft.com/powershell/scripting/windows-powershell/wmf/setup/install-configure) to download and install PowerShell 5.1 on the application server. <br/><br/> Install the Microsoft Web Deploy tool on the machine running the App Containerization helper tool and application server if not already installed. You can download the tool from [here](https://aka.ms/webdeploy3.6).
+**ASP.NET application** | The tool currently supports: <br/><br/> - ASP.NET applications using Microsoft .NET Framework 3.5 or later.<br/> - Application servers running Windows Server 2008 R2 or later (application servers should be running PowerShell version 5.1). <br/> - Applications running on Internet Information Services (IIS) 7.5 or later. <br/><br/> The tool currently doesn't support: <br/><br/> - Applications requiring Windows authentication (AKS doesn't support gMSA currently). <br/> - Applications that depend on other Windows services hosted outside IIS.
++
+## Prepare an Azure user account
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/pricing/free-trial/) before you begin.
+
+Once your subscription is set up, you'll need an Azure user account with:
+- Owner permissions on the Azure subscription
+- Permissions to register Azure Active Directory apps
+
+If you just created a free Azure account, you're the owner of your subscription. If you're not the subscription owner, work with the owner to assign the permissions as follows:
+
+1. In the Azure portal, search for "subscriptions", and under **Services**, select **Subscriptions**.
+
+ ![Search box to search for the Azure subscription.](./media/tutorial-discover-vmware/search-subscription.png)
+
+2. In the **Subscriptions** page, select the subscription in which you want to create an Azure Migrate project.
+3. In the subscription, select **Access control (IAM)** > **Check access**.
+4. In **Check access**, search for the relevant user account.
+5. In **Add a role assignment**, click **Add**.
+
+ ![Search for a user account to check access and assign a role.](./media/tutorial-discover-vmware/azure-account-access.png)
+
+6. In **Add role assignment**, select the Owner role, and select the account (azmigrateuser in our example). Then click **Save**.
+
+ ![Opens the Add Role assignment page to assign a role to the account.](./media/tutorial-discover-vmware/assign-role.png)
+
+7. Your Azure account also needs **permissions to register Azure Active Directory apps.**
+8. In the Azure portal, navigate to **Azure Active Directory** > **Users** > **User Settings**.
+9. In **User settings**, verify that Azure AD users can register applications (set to **Yes** by default).
+
+ ![Verify in User Settings that users can register Active Directory apps.](./media/tutorial-discover-vmware/register-apps.png)
+
+10. If the **App registrations** setting is set to **No**, request the tenant/global admin to assign the required permission. Alternatively, the tenant/global admin can assign the **Application Developer** role to an account to allow the registration of Azure Active Directory apps. [Learn more](../active-directory/fundamentals/active-directory-users-assign-role-azure-portal.md).
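+
+If you prefer the command line, a minimal Azure CLI sketch of the Owner assignment might look like this (the user and subscription ID are placeholders, not values from this tutorial):
+
+```azurecli
+# Assign the Owner role to a user at subscription scope (hypothetical names).
+az role assignment create --assignee "azmigrateuser@contoso.com" --role "Owner" --scope "/subscriptions/<subscription-id>"
+```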
++
+## Download and install Azure Migrate: App Containerization tool
+
+1. [Download](https://go.microsoft.com/fwlink/?linkid=2134571) the Azure Migrate: App Containerization installer on a Windows machine.
+2. Launch PowerShell in administrator mode and change the PowerShell directory to the folder containing the installer.
+3. Run the installation script using the following command:
+
+ ```powershell
+ .\AppContainerizationInstaller.ps1
+ ```
+
+## Launch the App Containerization tool
+
+1. Open a browser on any machine that can connect to the Windows machine running the App Containerization tool, and open the tool URL: **https://*machine name or IP address*:44368**.
+
+ Alternately, you can open the app from the desktop by selecting the app shortcut.
+
+2. If you see a warning that says your connection isn't private, click **Advanced** and choose to proceed to the website. This warning appears because the web interface uses a self-signed TLS/SSL certificate.
+3. At the sign-in screen, use the local administrator account on the machine to sign in.
+4. For the application type, select **ASP.NET web apps** as the type of application you want to containerize.
+
+ ![Default load-up for App Containerization tool.](./media/tutorial-containerize-apps-aks/tool-home.png)
++
+### Complete tool pre-requisites
+1. Accept the **license terms**, and read the third-party information.
+2. In the tool web app > **Set up prerequisites**, do the following steps:
+ - **Connectivity**: The tool checks that the Windows machine has internet access. If the machine uses a proxy:
+ - Click on **Set up proxy** to specify the proxy address (in the form IP address or FQDN) and listening port.
+ - Specify credentials if the proxy needs authentication.
+ - Only HTTP proxy is supported.
+ - If you've added proxy details or disabled the proxy and/or authentication, click **Save** to trigger the connectivity check again.
+ - **Install updates**: The tool will automatically check for the latest updates and install them. You can also manually install the latest version of the tool from [here](https://go.microsoft.com/fwlink/?linkid=2134571).
+ - **Install Microsoft Web Deploy tool**: The tool will check that the Microsoft Web Deploy tool is installed on the Windows machine running the Azure Migrate: App Containerization tool.
+ - **Enable PowerShell remoting**: The tool will inform you to ensure that PowerShell remoting is enabled on the application servers running the ASP.NET applications to be containerized.
++
+## Log in to Azure
+
+Click **Login** to log in to your Azure account.
+
+1. You'll need a device code to authenticate with Azure. Clicking on Login will open a modal with the device code.
+2. Click on **Copy code & Login** to copy the device code and open an Azure Login prompt in a new browser tab. If it doesn't appear, make sure you've disabled the pop-up blocker in the browser.
+
+ ![Modal showing device code.](./media/tutorial-containerize-apps-aks/login-modal.png)
+
+3. On the new tab, paste the device code and complete log in using your Azure account credentials. You can close the browser tab after log in is complete and return to the App Containerization tool's web interface.
+4. Select the **Azure tenant** that you want to use.
+5. Specify the **Azure subscription** that you want to use.
+
+## Discover ASP.NET applications
+
+The App Containerization helper tool connects remotely to the application servers using the provided credentials and attempts to discover ASP.NET applications hosted on the application servers.
+
+1. Specify the **IP address/FQDN and the credentials** of the server running the ASP.NET application that should be used to remotely connect to the server for application discovery.
+ - The credentials provided must be for a local administrator (Windows) on the application server.
+ - For domain accounts (the user must be an administrator on the application server), prefix the username with the domain name in the format *<domain\username>*.
+ - You can run application discovery for up to five servers at a time.
+
+2. Click **Validate** to verify that the application server is reachable from the machine running the tool and that the credentials are valid. Upon successful validation, the status column will show the status as **Mapped**.
+
+ ![Screenshot for server IP and credentials.](./media/tutorial-containerize-apps-aks/discovery-credentials.png)
+
+3. Click **Continue** to start application discovery on the selected application servers.
+
+4. Upon successful completion of application discovery, you can select the list of applications to containerize.
+
+ ![Screenshot for discovered ASP.NET application.](./media/tutorial-containerize-apps-aks/discovered-app.png)
++
+5. Use the checkbox to select the applications to containerize.
+6. **Specify container name**: Specify a name for the target container for each selected application. The container name should be specified as <*name:tag*>, where the tag is used for the container image. For example, you can specify the target container name as *appname:v1*.
+
+### Parameterize application configurations
+Parameterizing the configuration makes it available as a deployment time parameter. This allows you to configure this setting while deploying the application as opposed to having it hard-coded to a specific value in the container image. For example, this option is useful for parameters like database connection strings.
+1. Click **app configurations** to review detected configurations.
+2. Select the checkbox to parameterize the detected application configurations.
+3. Click **Apply** after selecting the configurations to parameterize.
+
+ ![Screenshot for app configuration parameterization ASP.NET application.](./media/tutorial-containerize-apps-aks/discovered-app-configs.png)
+
+### Externalize file system dependencies
+
+ You can add other folders that your application uses. Specify if they should be part of the container image or externalized through persistent volumes on an Azure file share. Using persistent volumes works well for stateful applications that store state outside the container or have other static content stored on the file system. [Learn more](https://docs.microsoft.com/azure/aks/concepts-storage)
+
+1. Click **Edit** under App Folders to review the detected application folders. The detected application folders have been identified as mandatory artifacts needed by the application and will be copied into the container image.
+
+2. Click **Add folders** and specify the folder paths to be added.
+3. To add multiple folders to the same volume, provide comma-separated (`,`) values.
+4. Select **Persistent Volume** as the storage option if you want the folders to be stored outside the container on a Persistent Volume.
+5. Click **Save** after reviewing the application folders.
+ ![Screenshot for app volumes storage selection.](./media/tutorial-containerize-apps-aks/discovered-app-volumes.png)
+
+6. Click **Continue** to proceed to the container image build phase.
+
+## Build container image
++
+1. **Select Azure Container Registry**: Use the dropdown to select an [Azure Container Registry](https://docs.microsoft.com/azure/container-registry/) that will be used to build and store the container images for the apps. You can use an existing Azure Container Registry or choose to create a new one using the Create new registry option.
+
+ ![Screenshot for app ACR selection.](./media/tutorial-containerize-apps-aks/build-aspnet-app.png)
++
+2. **Review the Dockerfile**: The Dockerfile needed to build the container image for each selected application is generated at the beginning of the build step. Click **Review** to review the Dockerfile. You can also add any necessary customizations to the Dockerfile in the review step and save the changes before starting the build process.
+
+3. **Trigger build process**: Select the applications to build images for and click **Build**. Clicking build will start the container image build for each application. The tool monitors the build status continuously and will let you proceed to the next step upon successful completion of the build.
+
+4. **Track build status**: You can also monitor progress of the build step by clicking the **Build in Progress** link under the status column. The link takes a couple of minutes to be active after you've triggered the build process.
+
+5. Once the build is completed, click **Continue** to specify deployment settings.
+
+ ![Screenshot for app container image build completion.](./media/tutorial-containerize-apps-aks/build-aspnet-app-completed.png)
+
+## Deploy the containerized app on AKS
+
+Once the container image is built, the next step is to deploy the application as a container on [Azure Kubernetes Service (AKS)](https://azure.microsoft.com/services/kubernetes-service/).
+
+1. **Select the Azure Kubernetes Service Cluster**: Specify the AKS cluster that the application should be deployed to.
+
+ - The selected AKS cluster must have a Windows node pool.
+ - The cluster must be configured to allow pulling of images from the Azure Container Registry that was selected to store the images.
+ - Run the following command in Azure CLI to attach the AKS cluster to the ACR.
+ ```azurecli
+ az aks update -n <cluster-name> -g <cluster-resource-group> --attach-acr <acr-name>
+ ```
+ - If you don't have an AKS cluster or would like to create a new AKS cluster to deploy the application to, you can choose to create one from the tool by clicking **Create new AKS cluster**. A CLI sketch for creating a compatible cluster yourself is shown after this step.
+ - The AKS cluster created using the tool will be created with a Windows node pool. The cluster will be configured to allow it to pull images from the Azure Container Registry that was created earlier (if create new registry option was chosen).
+ - Click **Continue** after selecting the AKS cluster.
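+
+    If you'd rather create a compatible cluster ahead of time, a minimal Azure CLI sketch might look like the following; the resource names are placeholders, and a Windows node pool requires the `azure` network plugin:
+
+    ```azurecli
+    # Create an AKS cluster that can host Windows node pools and pull from your registry (hypothetical names).
+    # You'll be prompted for a Windows administrator password.
+    az aks create -g myresourcegroup -n myakscluster --network-plugin azure --windows-admin-username azureuser --attach-acr myacr
+
+    # Add a Windows node pool to run the containerized ASP.NET application.
+    az aks nodepool add -g myresourcegroup --cluster-name myakscluster --name npwin --os-type Windows --node-count 1
+    ```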
+
+2. **Specify Azure file share**: If you added more folders and selected the Persistent Volume option, specify the Azure file share that should be used by the Azure Migrate: App Containerization tool during the deployment process. The tool will create new directories in this Azure file share to copy over the application folders that are configured for Persistent Volume storage. Once the application deployment is complete, the tool will clean up the Azure file share by deleting the directories it created.
+
+ - If you don't have an Azure file share or would like to create a new one, you can choose to create one from the tool by clicking **Create new Storage Account and file share**. A CLI sketch for creating the share yourself is shown after this step.
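+
+    A minimal Azure CLI sketch for creating the storage account and file share ahead of time might look like this (names are placeholders, not values required by the tool):
+
+    ```azurecli
+    # Create a storage account and an Azure file share for persistent volume storage (hypothetical names).
+    az storage account create -g myresourcegroup -n mystorageacct1234 --sku Standard_LRS
+    az storage share-rm create --storage-account mystorageacct1234 --name myappshare
+    ```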
+
+3. **Application deployment configuration**: Once you've completed the steps above, you'll need to specify the deployment configuration for the application. Click **Configure** to customize the deployment for the application. In the configure step you can provide the following customizations:
+ - **Prefix string**: Specify a prefix string to use in the name for all resources that are created for the containerized application in the AKS cluster.
+ - **SSL certificate**: If your application requires an HTTPS site binding, specify the PFX file that contains the certificate to be used for the binding. The PFX file shouldn't be password protected, and the original site shouldn't have multiple bindings.
+ - **Replica Sets**: Specify the number of application instances (pods) that should run in the cluster.
+ - **Load balancer type**: Select *External* if the containerized application should be reachable from public networks.
+ - **Application Configuration**: For any application configurations that were parameterized, provide the values to use for the current deployment.
+ - **Storage**: For any application folders that were configured for Persistent Volume storage, specify whether the volume should be shared across application instances or should be initialized individually with each instance in the container. By default, all application folders on Persistent Volumes are configured as shared.
+ - Click **Apply** to save the deployment configuration.
+ - Click **Continue** to deploy the application.
+
+ ![Screenshot for deployment app configuration.](./media/tutorial-containerize-apps-aks/deploy-aspnet-app-config.png)
+
+4. **Deploy the application**: Once the deployment configuration for the application is saved, the tool will generate the Kubernetes deployment YAML for the application.
+ - Click **Edit** to review and customize the Kubernetes deployment YAML for the applications.
+ - Select the application to deploy.
+ - Click **Deploy** to start deployments for the selected applications.
+
+ ![Screenshot for app deployment configuration.](./media/tutorial-containerize-apps-aks/deploy-aspnet-app-deploy.png)
+
+ - Once the application is deployed, you can click the *Deployment status* column to track the resources that were deployed for the application.
+
+## Download generated artifacts
+
+All artifacts that are used to build and deploy the application into AKS, including the Dockerfile and Kubernetes YAML specification files, are stored on the machine running the tool. The artifacts are located at *C:\ProgramData\Microsoft Azure Migrate App Containerization*.
+
+A single folder is created for each application server. You can view and download all intermediate artifacts used in the containerization process by navigating to this folder. The folder corresponding to the application server is cleaned up at the start of each run of the tool for that server.
+
+## Troubleshoot issues
+
+To troubleshoot any issues with the tool, look at the log files on the Windows machine running the App Containerization tool. Tool log files are located in the *C:\ProgramData\Microsoft Azure Migrate App Containerization\Logs* folder.
+
+## Next steps
+
+- Containerizing Java Web Apps on Apache Tomcat (on Linux servers) and deploying them on Linux containers on AKS. [Learn more](./tutorial-containerize-java-kubernetes.md)
mysql Concepts Connectivity Architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/concepts-connectivity-architecture.md
The following table lists the gateway IP addresses of the Azure Database for MyS
| West Central US | 13.78.145.25 | | | | West Europe |13.69.105.208, 104.40.169.187 | 40.68.37.158 | 191.237.232.75 | | West US |13.86.216.212, 13.86.217.212 |104.42.238.205 | 23.99.34.75|
-| West US 2 | 13.66.226.202 | | |
+| West US 2 | 13.66.136.192 | 13.66.226.202 | |
|||| ## Connection redirection
mysql Concepts Ssl Connection Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/concepts-ssl-connection-security.md
For example, setting the value of minimum TLS setting version to TLS 1.0 means y
> > Once you enforce a minimum TLS version, you cannot later disable minimum version enforcement.
-To learn how to set the TLS setting for your Azure Database for MySQL, refer to [How to configure TLS setting](howto-tls-configurations.md).
+The minimum TLS version setting doesn't require a restart of the server and can be set while the server is online. To learn how to set the TLS setting for your Azure Database for MySQL, refer to [How to configure TLS setting](howto-tls-configurations.md).
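+
+As a sketch, enforcing TLS 1.2 from the Azure CLI might look like this, assuming the `--minimal-tls-version` parameter available in recent CLI versions (server and resource group names are placeholders):
+
+```azurecli
+az mysql server update --resource-group myresourcegroup --name mydemoserver --minimal-tls-version TLS1_2
+```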
## Cipher support by Azure Database for MySQL Single server
mysql Tutorial Webapp Server Vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/flexible-server/tutorial-webapp-server-vnet.md
ms.devlang: azurecli Previously updated : 9/21/2020 Last updated : 03/18/2021
This tutorial shows you how to create an Azure App Service web app with MySQL Flexible Server (Preview) inside a [virtual network](../../virtual-network/virtual-networks-overview.md).
+In this tutorial you will learn how to:
+>[!div class="checklist"]
+> * Create a MySQL flexible server in a virtual network
+> * Create a subnet to delegate to App Service
+> * Create a web app
+> * Add the web app to the virtual network
+> * Connect to MySQL from the web app
+ ## Prerequisites If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin.
az login
If you have multiple subscriptions, choose the appropriate subscription in which the resource should be billed. Select the specific subscription ID under your account using [az account set](/cli/azure/account) command. Substitute the **subscription ID** property from the **az login** output for your subscription into the subscription ID placeholder. ```azurecli
-az account set --subscription <subscription id>
+az account set --subscription <subscription ID>
``` ## Create an Azure Database for MySQL Flexible Server
Create a private flexible server inside a virtual network (VNET) using the follo
```azurecli az mysql flexible-server create --resource-group myresourcegroup --location westus2 ```
-This command performs the following actions, which may take a few minutes:
+Copy the connection string and the name of the newly created virtual network. This command performs the following actions, which may take a few minutes:
- Create the resource group if it doesn't already exist. - Generates a server name if it is not provided.
This command performs the following actions, which may take a few minutes:
> [!NOTE] > Make a note of the password that will be generated for you if not provided. If you forget the password, you'll have to reset it using the ```az mysql flexible-server update``` command
+## Create a subnet for the App Service endpoint
+We now need a subnet that is delegated to the App Service web app endpoint. Run the following command to create a new subnet in the same virtual network where the database server was created.
+
+```azurecli
+az network vnet subnet create -g myresourcegroup --vnet-name VNETName --name webappsubnetName --address-prefixes 10.0.1.0/24 --delegations Microsoft.Web/serverFarms --service-endpoints Microsoft.Web
+```
+Make a note of the virtual network name and subnet name after this command, as you'll need them to add a VNET integration rule for the web app after it's created.
+ ## Create a web app In this section, you create app host in App Service app and connect this app to the MySQL database. Make sure you're in the repository root of your application code in the terminal.
In this section, you create app host in App Service app and connect this app to
Create an App Service app (the host process) with the az webapp up command ```azurecli
-az webapp up --resource-group myresourcegroup --location westus2 --plan testappserviceplan --sku B1 --name mywebapp
+az webapp up --resource-group myresourcegroup --location westus2 --plan testappserviceplan --sku P2V2 --name mywebapp
``` > [!NOTE] > - For the --location argument, use the same location as you did for the database in the previous section. > - Replace _&lt;app-name>_ with a unique name across all Azure (the server endpoint is https://\<app-name>.azurewebsites.net). Allowed characters for <app-name> are A-Z, 0-9, and -. A good pattern is to use a combination of your company name and an app identifier.
+> - App Service Basic tier does not support VNET integration. Please use Standard or Premium.
This command performs the following actions, which may take a few minutes:
This command performs the following actions, which may take a few minutes:
Use **az webapp vnet-integration** command to add a regional virtual network integration to a webapp. Replace _&lt;vnet-name>_ and _&lt;subnet-name_ with the virtual network and subnet name that the flexible server is using. ```azurecli
-az webapp vnet-integration add -g myresourcegroup -n mywebapp --vnet <vnet-name> --subnet <subnet-name>
+az webapp vnet-integration add -g myresourcegroup -n mywebapp --vnet VNETName --subnet webappsubnetName
``` ## Configure environment variables to connect the database
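
As a sketch, the connection settings can be supplied to the web app as app settings from the Azure CLI; the setting names here are hypothetical and depend on what your application code reads:

```azurecli
az webapp config appsettings set -g myresourcegroup -n mywebapp --settings DATABASE_HOST="<server-name>.mysql.database.azure.com" DATABASE_USER="<admin-user>" DATABASE_PASSWORD="<password>"
```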
az group delete -n myresourcegroup
## Next steps > [!div class="nextstepaction"]
-> [Map an existing custom DNS name to Azure App Service](../../app-service/app-service-web-tutorial-custom-domain.md)
+> [Map an existing custom DNS name to Azure App Service](../../app-service/app-service-web-tutorial-custom-domain.md)
mysql Howto Tls Configurations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/howto-tls-configurations.md
Follow these steps to set MySQL server minimum TLS version:
:::image type="content" source="./media/howto-tls-configurations/setting-tls-value.png" alt-text="Azure Database for MySQL TLS configuration":::
-1. Click **Save** to save the changes.
+1. Click **Save** to save the changes.
-1. A notification will confirm that connection security setting was successfully enabled.
+1. A notification will confirm that the connection security setting was successfully enabled and is in effect immediately. There is **no restart** of the server required or performed. After the changes are saved, all new connections to the server are accepted only if the TLS version is greater than or equal to the minimum TLS version set on the portal.
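+
+   To confirm the value in effect, a quick check from the Azure CLI might look like this (placeholder names; the property name is an assumption based on the server resource's JSON):
+
+   ```azurecli
+   az mysql server show --resource-group myresourcegroup --name mydemoserver --query minimalTlsVersion
+   ```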
:::image type="content" source="./media/howto-tls-configurations/setting-tls-value-success.png" alt-text="Azure Database for MySQL TLS configuration success"::: ## Next steps -- Learn about [how to create alerts on metrics](howto-alert-on-metric.md)
+- Learn about [how to create alerts on metrics](howto-alert-on-metric.md)
network-watcher Network Watcher Nsg Flow Logging Azure Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/network-watcher/network-watcher-nsg-flow-logging-azure-resource-manager.md
na
Last updated 01/07/2021 +
You can save any of the above example templates locally as `azuredeploy.json`. U
To deploy the template, run the following command in PowerShell. ```azurepowershell
-$context = Get-AzSubscription -SubscriptionId 56acfbd6-vc72-43e9-831f-bcdb6f2c5505
+$context = Get-AzSubscription -SubscriptionId <SubscriptionId>
Set-AzContext $context New-AzResourceGroupDeployment -Name EnableFlowLog -ResourceGroupName NetworkWatcherRG ` -TemplateFile "C:\MyTemplates\azuredeploy.json"
Azure enables resource deletion through the "Complete" deployment mode. To delet
Learn how to visualize your NSG Flow data using: * [Microsoft Power BI](network-watcher-visualize-nsg-flow-logs-power-bi.md) * [Open source tools](network-watcher-visualize-nsg-flow-logs-open-source-tools.md)
-* [Azure Traffic Analytics](./traffic-analytics.md)
+* [Azure Traffic Analytics](./traffic-analytics.md)
networking Networking Partners Msp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/networking/networking-partners-msp.md
Use the links in this section for more information about managed cloud networkin
|[Colt](https://www.colt.net/why-colt/strategic-alliances/microsoft-partnership/msp/)|[Network optimisation on Azure: 2-hr Assessment](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/colttechnologyservices.azure_networking)||||| |[Equinix](https://www.equinix.com/)|[Cloud Optimized WAN Workshop](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/equinix.cloudoptimizedwan?tab=Overview)|[ExpressRoute Connectivity Strategy Workshop](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/equinix.expressroutestrategy?tab=Overview); [Equinix Cloud Exchange Fabric](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/equinix.equinix_ecx_fabric?tab=Overview)|||| |[Federated Wireless](https://www.federatedwireless.com/caas/)||||[Federated Wireless Connectivity-as-a-Service](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/federatedwireless1580839623708.fw_caas?tab=Overview)|
-|[HCL](https://www.hcltech.com/)|||[HCL Azure Virtual WAN Services - 1 Day Assessment](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/hcl-technologies.hclmanagedazurevitualwan?search=vWAN&page=1)|[HCL Azure Private LTE offering - 1 Day Assessment](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/hcl-technologies.hclazureprivatelteoffering)|
+|[HCL](https://www.hcltech.com/)|[HCL Cloud Network Transformation- 1 Day Assessment](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/hcl-technologies.clo?tab=Overview)|[1-Hour Briefing of HCL Azure ExpressRoute Service](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/hcl-technologies.hclmanagedazureexpressroute?tab=Overview)|[HCL Azure Virtual WAN Services - 1 Day Assessment](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/hcl-technologies.hclmanagedazurevitualwan?search=vWAN&page=1)|[HCL Azure Private LTE offering - 1 Day Assessment](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/hcl-technologies.hclazureprivatelteoffering)|
|[IIJ](https://www.iij.ad.jp/biz/cloudex/)|[ExpressRoute implementation: 1-Hr Briefing](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/internet_initiative_japan_inc.iij_cxm_consulting)|[ExpressRoute: 2-Wk Implementation](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/internet_initiative_japan_inc.iij_cxmer_consulting)|||| |[Infosys](https://www.infosys.com/services/microsoft-cloud-business/pages/index.aspx)|[Infosys Integrate+ for Azure](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/infosysltd.infosys-integrate-for-azure?tab=Overview)||||| |[Interxion](https://www.interxion.com/products/interconnection/cloud-connect/support-your-cloud-strategy/)|[Azure Networking Assessment - 5 Days](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/interxionhq.inxn_azure_networking_assessment)|||||
postgresql Tutorial Webapp Server Vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/flexible-server/tutorial-webapp-server-vnet.md
ms.devlang: azurecli Previously updated : 09/22/2020 Last updated : 03/18/2021
This tutorial shows you how to create an Azure App Service web app with Azure Database for PostgreSQL - Flexible Server (Preview) inside a [virtual network](../../virtual-network/virtual-networks-overview.md).
-In this tutorial you will
+In this tutorial you will learn how to:
>[!div class="checklist"] > * Create a PostgreSQL flexible server in a virtual network
+> * Create a subnet to delegate to App Service
> * Create a web app > * Add the web app to the virtual network > * Connect to Postgres from the web app
az login
If you have multiple subscriptions, choose the appropriate subscription in which the resource should be billed. Select the specific subscription ID under your account using [az account set](/cli/azure/account) command. Substitute the **subscription ID** property from the **az login** output for your subscription into the subscription ID placeholder. ```azurecli
-az account set --subscription <subscription id>
+az account set --subscription <subscription ID>
``` ## Create a PostgreSQL Flexible Server in a new virtual network
This command performs the following actions, which may take a few minutes:
> az postgres flexible-server firewall-rule list --resource-group myresourcegroup --server-name mydemoserver --start-ip-address 0.0.0.0 --end-ip-address 0.0.0.0 > ```
+## Create a subnet for the App Service endpoint
+We now need a subnet that is delegated to the App Service web app endpoint. Run the following command to create a new subnet in the same virtual network where the database server was created.
+
+```azurecli
+az network vnet subnet create -g myresourcegroup --vnet-name VNETName --name webappsubnetName --address-prefixes 10.0.1.0/24 --delegations Microsoft.Web/serverFarms --service-endpoints Microsoft.Web
+```
+Make a note of the virtual network name and subnet name after this command, as you'll need them to add a VNET integration rule for the web app after it's created.
## Create a Web App
-In this section, you create app host in App Service app, connect this app to the Postgres database, then deploy your code to that host. Make sure you're in the repository root of your application code in the terminal.
+In this section, you create the app host in an App Service app, connect this app to the Postgres database, and then deploy your code to that host. Make sure you're in the repository root of your application code in the terminal. Note that the Basic plan does not support VNET integration; use Standard or Premium.
Create an App Service app (the host process) with the az webapp up command ```azurecli
-az webapp up --resource-group myresourcegroup --location westus2 --plan testappserviceplan --sku B1 --name mywebapp
+az webapp up --resource-group myresourcegroup --location westus2 --plan testappserviceplan --sku P2V2 --name mywebapp
``` > [!NOTE]
az webapp up --resource-group myresourcegroup --location westus2 --plan testapps
This command performs the following actions, which may take a few minutes: - Create the resource group if it doesn't already exist. (In this command you use the same resource group in which you created the database earlier.)-- Create the App Service plan ```testappserviceplan``` in the Basic pricing tier (B1), if it doesn't exist. --plan and --sku are optional. - Create the App Service app if it doesn't exist. - Enable default logging for the app, if not already enabled. - Upload the repository using ZIP deployment with build automation enabled.
This command performs the following actions, which may take a few minutes:
Use **az webapp vnet-integration** command to add a regional virtual network integration to a webapp. Replace <vnet-name> and <subnet-name> with the virtual network and subnet name that the flexible server is using. ```azurecli
-az webapp vnet-integration add -g myresourcegroup -n mywebapp --vnet <vnet-name> --subnet <subnet-name>
+az webapp vnet-integration add -g myresourcegroup -n mywebapp --vnet VNETName --subnet webappsubnetName
``` ## Configure environment variables to connect the database
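
As a sketch, the connection settings can be supplied to the web app as app settings from the Azure CLI; the setting names here are hypothetical and depend on what your application code reads:

```azurecli
az webapp config appsettings set -g myresourcegroup -n mywebapp --settings DBHOST="<server-name>.postgres.database.azure.com" DBNAME="postgres" DBUSER="<admin-user>" DBPASS="<password>"
```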
az group delete -n myresourcegroup
## Next steps > [!div class="nextstepaction"]
-> [Map an existing custom DNS name to Azure App Service](../../app-service/app-service-web-tutorial-custom-domain.md)
+> [Map an existing custom DNS name to Azure App Service](../../app-service/app-service-web-tutorial-custom-domain.md)
purview Catalog Private Link https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/catalog-private-link.md
You can use private endpoints for your Purview accounts to allow clients and use
1. Fill basic information, and set connectivity method to Private endpoint in **Networking** tab. Set up your ingestion private endpoints by providing details of **Subscription, Vnet and Subnet** that you want to pair with your private endpoint.
+ > [!NOTE]
+ > Create an ingestion private endpoint only if you intend to enable network isolation for end-to-end scan scenarios, for both your Azure and on-premises sources. We currently do not support ingestion private endpoints working with your AWS sources.
+ :::image type="content" source="media/catalog-private-link/create-pe-azure-portal.png" alt-text="Create a Private Endpoint in the Azure portal"::: 1. You can also optionally choose to set up a **Private DNS zone** for each ingestion private endpoint. 1. Click Add to add a private endpoint for your Purview account.
-1. In the Create private endpoint page, set Purview sub-resource to **account**, choose your virtual network and subnet, and select the Private DNS Zone where the DNS will be registered (you can also utilize your won DNS servers or create DNS records using host files on your virtual machines).
+1. In the Create private endpoint page, set Purview sub-resource to **account**, choose your virtual network and subnet, and select the Private DNS Zone where the DNS will be registered (you can also utilize your own DNS servers or create DNS records using host files on your virtual machines).
:::image type="content" source="media/catalog-private-link/create-pe-account.png" alt-text="Private Endpoint creation selections":::
The instructions below are for accessing Purview securely from an Azure VM. Simi
6. Once the new rule is created, navigate back to the VM and try logging in using your AAD credentials again. If the login succeeds, then Purview portal is ready to use. But in some cases, AAD will redirect to other domains to login based on customer's account type. For e.g. for a live.com account, AAD will redirect to live.com to login, then those requests would be blocked again. For Microsoft employee accounts, AAD will access msft.sts.microsoft.com for login information. Check the networking requests in browser networking tab to see which domain's requests are getting blocked, redo the previous step to get its IP and add outbound port rules in network security group to allow requests for that IP (if possible, add the url and IP to VM's host file to fix the DNS resolution). If you know the exact login domain's IP ranges, you can also directly add them into networking rules. 7. Now login to AAD should be successful. Purview Portal will load successfully but listing all Purview accounts won't work since it can only access a specific Purview account. Enter *web.purview.azure.com/resource/{PurviewAccountName}* to directly visit the Purview account that you successfully set up a private endpoint for.
+
+## Ingestion private endpoints and scanning sources in private networks, Vnets and behind private endpoints
+
+If you want to ensure network isolation for the metadata flowing from the source being scanned to the Purview Data Map, follow these steps:
+1. Enable an **ingestion private endpoint** by following steps in [this](#creating-an-ingestion-private-endpoint) section
+1. Scan the source using a **self-hosted IR**.
+
+ 1. All on-premises source types like SQL Server, Oracle, SAP, and others are currently supported only via self-hosted IR-based scans. The self-hosted IR must run within your private network and then be peered with your VNet in Azure. Your Azure VNet must then be enabled on your ingestion private endpoint by following the steps [below](#creating-an-ingestion-private-endpoint).
+ 1. For all **Azure** source types like Azure Blob storage, Azure SQL Database, and others, you must explicitly choose to run the scan using a self-hosted IR to ensure network isolation. Follow the steps [here](manage-integration-runtimes.md) to set up a self-hosted IR. Then set up your scan on the Azure source by choosing that self-hosted IR in the **connect via integration runtime** dropdown to ensure network isolation.
+
+ :::image type="content" source="media/catalog-private-link/shir-for-azure.png" alt-text="Running Azure scan using self-hosted IR":::
+
+> [!NOTE]
+> We currently do not support the MSI credential method when you scan your Azure sources using a self-hosted IR. You must use one of the other supported credential methods for that Azure source.
## Enable private endpoint on existing Purview accounts
There are 2 ways you can add Purview private endpoints after creating your Purvi
1. Navigate to the Purview account from the Azure portal, select the Private endpoint connections under the **networking** section of **Settings**.
+ :::image type="content" source="media/catalog-private-link/pe-portal.png" alt-text="Create account private endpoint":::
1. Click +Private endpoint to create a new private endpoint.
There are 2 ways you can add Purview private endpoints after creating your Purvi
> [!NOTE] > You will need to follow the same steps as above for the target sub-resource selected as **Portal** as well.
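
For the account and portal private endpoints, a minimal Azure CLI sketch might look like the following; this is an assumption-laden example in which the resource names and IDs are placeholders and the `account` group ID matches the sub-resource described above:

```azurecli
# Create a private endpoint for the Purview "account" sub-resource (hypothetical names).
az network private-endpoint create -g myresourcegroup -n purview-account-pe \
  --vnet-name myvnet --subnet mysubnet \
  --private-connection-resource-id "/subscriptions/<subscription-id>/resourceGroups/myresourcegroup/providers/Microsoft.Purview/accounts/<account-name>" \
  --group-id account --connection-name purview-account-connection
```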
+#### Creating an ingestion private endpoint
+
+1. Navigate to the Purview account from the Azure portal, select the Private endpoint connections under the **networking** section of **Settings**.
+1. Navigate to the **Ingestion private endpoint connections** tab and click **+New** to create a new ingestion private endpoint.
+
+1. Fill in basic information and Vnet details.
+
+ :::image type="content" source="media/catalog-private-link/ingestion-pe-fill-details.png" alt-text="Fill private endpoint details":::
+
+1. Click **Create** to finish set up.
+
+> [!NOTE]
+> Ingestion private endpoints can be created only via the Purview Azure portal experience described above. They cannot be created from the Private Link center.
+ ### Using the Private link center 1. Navigate to the [Azure portal](https://portal.azure.com).
There are 2 ways you can add Purview private endpoints after creating your Purvi
> [!NOTE] > You will need to follow the same steps as above for the target sub-resource selected as **Portal** as well.
+## Firewalls to restrict public access
+
+To cut off access to the Purview account completely from the public internet, follow the steps below. This setting applies to both private endpoint and ingestion private endpoint connections.
+
+1. Navigate to the Purview account from the Azure portal, select the Private endpoint connections under the **networking** section of **Settings**.
+1. Navigate to the firewall tab and ensure that the toggle is set to **Deny**.
+
+ :::image type="content" source="media/catalog-private-link/private-endpoint-firewall.png" alt-text="Private endpoint firewall settings":::
+ ## Next steps - [Browse the Azure Purview Data Catalog](how-to-browse-catalog.md)
purview Register Scan Azure Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-azure-sql-database.md
On the **Register sources (Azure SQL Database)** screen, do the following:
[!INCLUDE [create and manage scans](includes/manage-scans.md)] > [!NOTE]
-> Deleting your scan does not delete your assets from previous Azure SQL Database scans.
-> The asset will no longer be updated with schema changes if your source table be changed and rescan the source table after editing the description in the schema tab of Purview.
+> * Deleting your scan does not delete your assets from previous Azure SQL Database scans.
+> * If your source table has changed and you rescan it after editing the asset's description in the schema tab of Purview, the asset will no longer be updated with schema changes.
## Next steps
route-server Quickstart Configure Route Server Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/route-server/quickstart-configure-route-server-powershell.md
This article helps you configure Azure Route Server to peer with a Network Virtu
Before you can create an Azure Route Server, you'll need a virtual network to host the deployment. Use the follow command to create a resource group and virtual network. If you already have a virtual network, you can skip to the next section. ```azurepowershell-interactive
-New-AzResourceGroup ΓÇôName ΓÇ£RouteServerRGΓÇ¥ -Location ΓÇ£West USΓÇ¥
-New-AzVirtualNetwork ΓÇôResourceGroupName ΓÇ£RouteServerRG -Location ΓÇ£West USΓÇ¥ -Name myVirtualNetwork ΓÇôAddressPrefix 10.0.0.0/16
+New-AzResourceGroup -Name "RouteServerRG" -Location "West US"
+New-AzVirtualNetwork -ResourceGroupName "RouteServerRG" -Location "West US" -Name myVirtualNetwork -AddressPrefix 10.0.0.0/16
``` ### Add a subnet
New-AzVirtualNetwork ΓÇôResourceGroupName ΓÇ£RouteServerRG -Location ΓÇ£West US
1. Add a subnet named *RouteServerSubnet* to deploy the Azure Route Server into. This subnet is a dedicated subnet only for Azure Route Server. The RouteServerSubnet must be /27 or a shorter prefix (such as /26, /25), or you'll receive an error message when you add the Azure Route Server. ```azurepowershell-interactive
- $vnet = Get-AzVirtualNetwork ΓÇôName ΓÇ£myVirtualNetworkΓÇ¥ - ResourceGroupName ΓÇ£RouteServerRGΓÇ¥
- Add-AzVirtualNetworkSubnetConfig ΓÇôName ΓÇ£RouteServerSubnetΓÇ¥ -AddressPrefix 10.0.0.0/24 -VirtualNetwork $vnet
+ $vnet = Get-AzVirtualNetwork -Name "myVirtualNetwork" -ResourceGroupName "RouteServerRG"
+ Add-AzVirtualNetworkSubnetConfig -Name "RouteServerSubnet" -AddressPrefix 10.0.0.0/24 -VirtualNetwork $vnet
$vnet | Set-AzVirtualNetwork ``` 1. Obtain the RouteServerSubnet ID. To see the resource ID of all subnets in the virtual network, use this command: ```azurepowershell-interactive
- $vnet = Get-AzVirtualNetwork ΓÇôName ΓÇ£vnet_nameΓÇ¥ -ResourceGroupName ΓÇ£
+ $vnet = Get-AzVirtualNetwork -Name "vnet_name" -ResourceGroupName "RouteServerRG"
$vnet.Subnets ```
The RouteServerSubnet ID looks like the following one:
Create the Route Server with this command: ```azurepowershell-interactive
-New-AzRouteServer -RouteServerName myRouteServer -ResourceGroupName RouteServerRG -Location "West USΓÇ¥ -HostedSubnet ΓÇ£RouteServerSubnet_IDΓÇ¥
+New-AzRouteServer -RouteServerName myRouteServer -ResourceGroupName RouteServerRG -Location "West US" -HostedSubnet "RouteServerSubnet_ID"
``` The location needs to match the location of your virtual network. The HostedSubnet is the RouteServerSubnet ID you obtained in the previous section.
If you no longer need the Azure Route Server, use these commands to remove the B
1. Remove the BGP peering between Azure Route Server and an NVA with this command: ```azurepowershell-interactive
-Remove-AzRouteServerPeer -PeerName ΓÇ£nva_nameΓÇ¥ -RouteServerName myRouteServer -ResourceGroupName RouteServerRG
+Remove-AzRouteServerPeer -PeerName "nva_name" -RouteServerName myRouteServer -ResourceGroupName RouteServerRG
``` 2. Remove Azure Route Server with this command:
search Search Howto Connecting Azure Sql Iaas To Azure Search Using Indexers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-howto-connecting-azure-sql-iaas-to-azure-search-using-indexers.md
Title: Azure SQL VM connection for search indexing
description: Enable encrypted connections and configure the firewall to allow connections to SQL Server on an Azure virtual machine (VM) from an indexer on Azure Cognitive Search. ---++ Previously updated : 07/12/2020 Last updated : 03/19/2021 # Configure a connection from an Azure Cognitive Search indexer to SQL Server on an Azure VM
-As noted in [Connecting Azure SQL Database to Azure Cognitive Search using indexers](search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md#faq), creating indexers against **SQL Server on Azure VMs** (or **SQL Azure VMs** for short) is supported by Azure Cognitive Search, but there are a few security-related prerequisites to take care of first.
+When configuring an [Azure SQL indexer](search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md#faq) to extract content from a database on an Azure virtual machine, additional steps are required for secure connections.
-Connections from Azure Cognitive Search to SQL Server on a VM is a public internet connection. All of the security measures you would normally follow for these connections apply here as well:
+A connection from Azure Cognitive Search to SQL Server on a virtual machine is a public internet connection. In order for secure connections to succeed, complete the following steps:
-+ Obtain a certificate from a [Certificate Authority provider](https://en.wikipedia.org/wiki/Certificate_authority#Providers) for the fully qualified domain name of the SQL Server instance on the Azure VM.
-+ Install the certificate on the VM, and then enable and configure encrypted connections on the VM using the instructions in this article.
++ Obtain a certificate from a [Certificate Authority provider](https://en.wikipedia.org/wiki/Certificate_authority#Providers) for the fully qualified domain name of the SQL Server instance on the virtual machine.
++ Install the certificate on the virtual machine, and then enable and configure encrypted connections on the VM using the instructions in this article.

## Enable encrypted connections

Azure Cognitive Search requires an encrypted channel for all indexer requests over a public internet connection. This section lists the steps to make this work.

1. Check the properties of the certificate to verify the subject name is the fully qualified domain name (FQDN) of the Azure VM. You can use a tool like CertUtils or the Certificates snap-in to view the properties. You can get the FQDN from the VM service blade's Essentials section, in the **Public IP address/DNS name label** field, in the [Azure portal](https://portal.azure.com/).
-
- * For VMs created using the newer **Resource Manager** template, the FQDN is formatted as `<your-VM-name>.<region>.cloudapp.azure.com`
- * For older VMs created as a **Classic** VM, the FQDN is formatted as `<your-cloud-service-name.cloudapp.net>`.
-
-2. Configure SQL Server to use the certificate using the Registry Editor (regedit).
-
- Although SQL Server Configuration Manager is often used for this task, you can't use it for this scenario. It won't find the imported certificate because the FQDN of the VM on Azure doesn't match the FQDN as determined by the VM (it identifies the domain as either the local computer or the network domain to which it is joined). When names don't match, use regedit to specify the certificate.
-
- * In regedit, browse to this registry key: `HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\[MSSQL13.MSSQLSERVER]\MSSQLServer\SuperSocketNetLib\Certificate`.
-
+
+ + For VMs created using the newer **Resource Manager** template, the FQDN is formatted as `<your-VM-name>.<region>.cloudapp.azure.com`
+
+ + For older VMs created as a **Classic** VM, the FQDN is formatted as `<your-cloud-service-name.cloudapp.net>`.
+
+1. Configure SQL Server to use the certificate using the Registry Editor (regedit).
+
+ Although SQL Server Configuration Manager is often used for this task, you can't use it for this scenario. It won't find the imported certificate because the FQDN of the VM on Azure doesn't match the FQDN as determined by the VM (it identifies the domain as either the local computer or the network domain to which it is joined). When names don't match, use regedit to specify the certificate.
+
+ + In regedit, browse to this registry key: `HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\[MSSQL13.MSSQLSERVER]\MSSQLServer\SuperSocketNetLib\Certificate`.
+ The `[MSSQL13.MSSQLSERVER]` part varies based on version and instance name.
- * Set the value of the **Certificate** key to the **thumbprint** of the TLS/SSL certificate you imported to the VM.
-
+
+ + Set the value of the **Certificate** key to the **thumbprint** of the TLS/SSL certificate you imported to the VM.
+ There are several ways to get the thumbprint, some better than others. If you copy it from the **Certificates** snap-in in MMC, you will probably pick up an invisible leading character [as described in this support article](https://support.microsoft.com/kb/2023869/), which results in an error when you attempt a connection. Several workarounds exist for correcting this problem. The easiest is to backspace over and then retype the first character of the thumbprint to remove the leading character in the key value field in regedit. Alternatively, you can use a different tool to copy the thumbprint.
-3. Grant permissions to the service account.
-
+1. Grant permissions to the service account.
+ Make sure the SQL Server service account is granted appropriate permission on the private key of the TLS/SSL certificate. If you overlook this step, SQL Server will not start. You can use the **Certificates** snap-in or **CertUtils** for this task.
-
-4. Restart the SQL Server service.
+
+1. Restart the SQL Server service.
## Configure SQL Server connectivity in the VM
-After you set up the encrypted connection required by Azure Cognitive Search, there are additional configuration steps intrinsic to SQL Server on Azure VMs. If you haven't done so already , the next step is to finish configuration using either one of these articles:
-* For a **Resource Manager** VM, see [Connect to a SQL Server Virtual Machine on Azure using Resource Manager](../azure-sql/virtual-machines/windows/ways-to-connect-to-sql.md).
-* For a **Classic** VM, see [Connect to a SQL Server Virtual Machine on Azure Classic](/previous-versions/azure/virtual-machines/windows/sqlclassic/virtual-machines-windows-classic-sql-connect).
+After you set up the encrypted connection required by Azure Cognitive Search, there are additional configuration steps intrinsic to SQL Server on Azure VMs. If you haven't done so already, the next step is to finish configuration using either one of these articles:
++ For a **Resource Manager** VM, see [Connect to a SQL Server Virtual Machine on Azure using Resource Manager](../azure-sql/virtual-machines/windows/ways-to-connect-to-sql.md).
++ For a **Classic** VM, see [Connect to a SQL Server Virtual Machine on Azure Classic](/previous-versions/azure/virtual-machines/windows/sqlclassic/virtual-machines-windows-classic-sql-connect).

In particular, review the section in each article for "connecting over the internet".

## Configure the Network Security Group (NSG)

It is not unusual to configure the NSG and corresponding Azure endpoint or Access Control List (ACL) to make your Azure VM accessible to other parties. Chances are you've done this before to allow your own application logic to connect to your SQL Azure VM. It's no different for an Azure Cognitive Search connection to your SQL Azure VM. The links below provide instructions on NSG configuration for VM deployments. Use these instructions to ACL an Azure Cognitive Search endpoint based on its IP address.

> [!NOTE]
> For background, see [What is a Network Security Group?](../virtual-network/network-security-groups-overview.md)
->
->
-* For a **Resource Manager** VM, see [How to create NSGs for ARM deployments](../virtual-network/tutorial-filter-network-traffic.md).
-* For a **Classic** VM, see [How to create NSGs for Classic deployments](/previous-versions/azure/virtual-network/virtual-networks-create-nsg-classic-ps).
++ For a **Resource Manager** VM, see [How to create NSGs for ARM deployments](../virtual-network/tutorial-filter-network-traffic.md).
++ For a **Classic** VM, see [How to create NSGs for Classic deployments](/previous-versions/azure/virtual-network/virtual-networks-create-nsg-classic-ps).

IP addressing can pose a few challenges that are easily overcome if you are aware of the issue and potential workarounds. The remaining sections provide recommendations for handling issues related to IP addresses in the ACL.
-#### Restrict access to the Azure Cognitive Search
+### Restrict access to the Azure Cognitive Search
+ We strongly recommend that you restrict the access to the IP address of your search service and the IP address range of `AzureCognitiveSearch` [service tag](../virtual-network/service-tags-overview.md#available-service-tags) in the ACL instead of making your SQL Azure VMs open to all connection requests. You can find out the IP address by pinging the FQDN (for example, `<your-search-service-name>.search.windows.net`) of your search service. Although it is possible for the search service IP address to change, it's unlikely that it will change. The IP address tends to be static for the lifetime of the service. You can find out the IP address range of `AzureCognitiveSearch` [service tag](../virtual-network/service-tags-overview.md#available-service-tags) by either using [Downloadable JSON files](../virtual-network/service-tags-overview.md#discover-service-tags-by-using-downloadable-json-files) or via the [Service Tag Discovery API](../virtual-network/service-tags-overview.md#use-the-service-tag-discovery-api-public-preview). The IP address range is updated weekly.
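+
+As an illustrative sketch, an inbound NSG rule scoped to the service tag might look like this with the Azure CLI (the resource names, priority, and SQL port are placeholders for your environment):
+
+```azurecli
+az network nsg rule create -g myresourcegroup --nsg-name sqlvm-nsg -n Allow-AzureCognitiveSearch \
+  --priority 300 --direction Inbound --access Allow --protocol Tcp \
+  --source-address-prefixes AzureCognitiveSearch --destination-port-ranges 1433
+```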
-#### Include the Azure Cognitive Search portal IP addresses
-If you are using the Azure portal to create an indexer, Azure Cognitive Search portal logic also needs access to your SQL Azure VM during creation time. Azure Cognitive Search portal IP addresses can be found by pinging `stamp2.search.ext.azure.com`.
+### Include the Azure Cognitive Search portal IP addresses
+
+If you are using the Azure portal to create an indexer, Azure Cognitive Search portal logic also needs access to your SQL Azure VM during creation time. Azure Cognitive Search portal IP addresses can be found by pinging `stamp2.search.ext.azure.com`, which is the domain of the traffic manager.
+
+Clusters in different regions connect to this traffic manager. The ping might return the IP address and domain of `stamp2.search.ext.azure.com`, but if your service is in a different region, the IP and domain name will be different. The IP address returned from the ping is the correct one for Azure portal in your region.
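+
+For example, a quick lookup from a machine in your network returns the regional address to allow (standard DNS tooling, not an Azure-specific command):
+
+```bash
+nslookup stamp2.search.ext.azure.com
+```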
## Next steps
-With configuration out of the way, you can now specify a SQL Server on Azure VM as the data source for an Azure Cognitive Search indexer. See [Connecting Azure SQL Database to Azure Cognitive Search using indexers](search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md) for more information.
+
+With configuration out of the way, you can now specify a SQL Server on Azure VM as the data source for an Azure Cognitive Search indexer. For more information, see [Connecting Azure SQL Database to Azure Cognitive Search using indexers](search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md).
search Semantic Answers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/semantic-answers.md
Last updated 03/12/2021
# Return a semantic answer in Azure Cognitive Search > [!IMPORTANT]
-> Semantic search features are in public preview, available through the preview REST API only. Preview features are offered as-is, under [Supplemental Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/), and are not guaranteed to have the same implementation at general availability. For more information, see [Availability and pricing](semantic-search-overview.md#availability-and-pricing).
+> Semantic search is in public preview, available through the preview REST API only. Preview features are offered as-is, under [Supplemental Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/), and are not guaranteed to have the same implementation at general availability. These features are billable. For more information, see [Availability and pricing](semantic-search-overview.md#availability-and-pricing).
When formulating a [semantic query](semantic-how-to-query-request.md), you can optionally extract content from the top-matching documents that "answers" the query directly. One or more answers can be included in the response, which you can then render on a search page to improve the user experience of your app.
In this article, learn how to request a semantic answer, unpack the response, an
All prerequisites that apply to [semantic queries](semantic-how-to-query-request.md) also apply to answers, including service tier and region.
-+ Queries formulated using the semantic query parameters, and include the "answers" parameter. Required parameters are discussed in this article.
++ Query logic must include the semantic query parameters, plus the "answers" parameter. Required parameters are discussed in this article.
-+ Query strings must be formulated in language having the characteristics of a question (what, where, when, how).
++ Query strings entered by the user must be formulated in language having the characteristics of a question (what, where, when, how).
-+ Search documents must contain text having the characteristics of an answer, and that text must exist in one of the fields listed in "searchFields".
++ Search documents must contain text having the characteristics of an answer, and that text must exist in one of the fields listed in "searchFields". For example, given a query "what is a hash table", if none of the searchFields contain passages that include "A hash table is ...", then an answer is unlikely to be returned.

## What is a semantic answer?
-A semantic answer is an artifact of a [semantic query](semantic-how-to-query-request.md). It consists of one or more verbatim passages from a search document, formulated as an answer to a query that looks like a question. For an answer to be returned, phrases or sentences must exist in a search document that have the language characteristics of an answer, and the query itself must be posed as a question.
+A semantic answer is a substructure of a [semantic query response](semantic-how-to-query-request.md). It consists of one or more verbatim passages from a search document, formulated as an answer to a query that looks like a question. For an answer to be returned, phrases or sentences must exist in a search document that have the language characteristics of an answer, and the query itself must be posed as a question.
-Cognitive Search uses a machine reading comprehension model to formulate answers. The model produces a set of potential answers from the available documents, and when it reaches a high enough confidence level, it will propose an answer.
+Cognitive Search uses a machine reading comprehension model to pick the best answer. The model produces a set of potential answers from the available content, and when it reaches a high enough confidence level, it will propose an answer.
-Answers are returned as an independent, top-level object in the query response payload that you can choose to render on search pages, along side search results. Structurally, it's an array element of a response that includes text, a document key, and a confidence score.
+Answers are returned as an independent, top-level object in the query response payload that you can choose to render on search pages, alongside search results. Structurally, it's an array element within the response consisting of text, a document key, and a confidence score.
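+
+As an illustrative sketch of that shape (the field values here are invented), the `@search.answers` array in a response looks like this:
+
+```json
+"@search.answers": [
+  {
+    "key": "0123456789",
+    "text": "A hash table is a data structure that maps keys to values...",
+    "highlights": "A <em>hash table</em> is a data structure that maps keys to values...",
+    "score": 0.87
+  }
+]
+```
+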
<a name="query-params"></a>

## How to request semantic answers in a query
-To return a semantic answer, the query must have the semantic query type, language, search fields, and the "answers" parameter. Specifying the "answers" parameter does not guarantee that you will get an answer, but the request must include this parameter if answer processing is to be invoked at all.
+To return a semantic answer, the query must have the semantic "queryType", "queryLanguage", "searchFields", and the "answers" parameter. Specifying the "answers" parameter does not guarantee that you will get an answer, but the request must include this parameter if answer processing is to be invoked at all.
-The "searchFields" parameter is critical to returning a high quality answer, both in terms of content and order.
+The "searchFields" parameter is crucial to returning a high quality answer, both in terms of content and order (see below).
```json {
The "searchFields" parameter is critical to returning a high quality answer, bot
+ A query string must not be null and should be formulated as a question. In this preview, the "queryType" and "queryLanguage" must be set exactly as shown in the example.
-+ The "searchFields" parameter determines which fields provide tokens to the extraction model. Be sure to set this parameter. You must have at least one string field, but include any string field that you think is useful in providing an answer. Collectively across all fields in searchFields, only about 8,000 tokens per document are passed into the model. Start the field list with concise fields, and then progress to text-rich fields. For precise guidance on how to set this field, see [Set searchFields](semantic-how-to-query-request.md#searchfields).
++ The "searchFields" parameter determines which string fields provide tokens to the extraction model. The same fields that produce captions also produce answers. For precise guidance on how to set this field so that it works for both captions and answers, see [Set searchFields](semantic-how-to-query-request.md#searchfields).
-+ For "answers", the basic parameter construction is `"answers": "extractive"`, where the default number of answers returned is one. You can increase the number of answers by adding a count, up to a maximum of five. Whether you need more than one answer depends on the user experience of your app, and how you want to render results.
++ For "answers", parameter construction is `"answers": "extractive"`, where the default number of answers returned is one. You can increase the number of answers by adding a count as shown in the above example, up to a maximum of five. Whether you need more than one answer depends on the user experience of your app, and how you want to render results.

## Deconstruct an answer from the response
Given the query "how do clouds form", the following answer is returned in the re
For best results, return semantic answers on a document corpus having the following characteristics:
-+ "searchFields" must provide fields that offer sufficient text in which an answer is likely to be found. Only verbatim text from a document can be appear as an answer.
++ "searchFields" must provide fields that offer sufficient text in which an answer is likely to be found. Only verbatim text from a document can appear as an answer.

+ query strings must not be null (search=`*`) and the string should have the characteristics of a question, as opposed to a keyword search (a sequential list of arbitrary terms or phrases). If the query string does not appear to be a question, answer processing is skipped, even if the request specifies "answers" as a query parameter.
search Semantic How To Query Request https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/semantic-how-to-query-request.md
Previously updated : 03/12/2021 Last updated : 03/18/2021
-# Create a semantic query in Cognitive Search
+# Create a query for semantic captions in Cognitive Search
> [!IMPORTANT]
-> Semantic query type is in public preview, available through the preview REST API and Azure portal. Preview features are offered as-is, under [Supplemental Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). For more information, see [Availability and pricing](semantic-search-overview.md#availability-and-pricing).
+> Semantic search is in public preview, available through the preview REST API and Azure portal. Preview features are offered as-is, under [Supplemental Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). These features are billable. For more information, see [Availability and pricing](semantic-search-overview.md#availability-and-pricing).
-In this article, learn how to formulate a search request that uses semantic ranking. The request will return semantic captions, and optionally [semantic answers](semantic-answers.md), with highlights over the most relevant terms and phrases.
+In this article, learn how to formulate a search request that uses semantic ranking and returns semantic captions (and optionally [semantic answers](semantic-answers.md)), with highlights over the most relevant terms and phrases. Both captions and answers are returned in queries formulated using the "semantic" query type.
-Both captions and answers are extracted verbatim from text in the search document. The semantic subsystem determines what content has the characteristics of a caption or answer, but it does not compose new sentences or phrases. For this reason, content that includes explanations or definitions work best for semantic search.
+Captions and answers are extracted verbatim from text in the search document. The semantic subsystem determines what part of your content has the characteristics of a caption or answer, but it does not compose new sentences or phrases. For this reason, content that includes explanations or definitions works best for semantic search.
## Prerequisites
Both captions and answers are extracted verbatim from text in the search documen
+ Access to semantic search preview: [sign up](https://aka.ms/SemanticSearchPreviewSignup)
-+ An existing search index, containing English content
++ An existing search index containing English content

+ A search client for sending queries
- The search client must support preview REST APIs on the query request. You can use [Postman](search-get-started-rest.md), [Visual Studio Code](search-get-started-vs-code.md), or code that you've modified to make REST calls to the preview APIs. You can also use [Search explorer](search-explorer.md) in Azure portal to submit a semantic query.
+ The search client must support preview REST APIs on the query request. You can use [Postman](search-get-started-rest.md), [Visual Studio Code](search-get-started-vs-code.md), or code that makes REST calls to the preview APIs. You can also use [Search explorer](search-explorer.md) in Azure portal to submit a semantic query.
+ A [query request](/rest/api/searchservice/preview-api/search-documents) must include the semantic option and other parameters described in this article.
Only the top 50 matches from the initial results can be semantically ranked, and
## Query with Search explorer
-[Search explorer](search-explorer.md) has been updated to include options for semantic queries. These options become visible in the portal after you get access to the preview. Query options can enable semantic queries, searchFields, and spell correction.
+[Search explorer](search-explorer.md) has been updated to include options for semantic queries. These options become visible in the portal after completing the following steps:
-You can also paste the required query parameters into the query string.
+1. [Sign up](https://aka.ms/SemanticSearchPreviewSignup) and wait for your search service to be admitted into the preview program
+
+1. Open the portal with this syntax: `https://portal.azure.com/?feature.semanticSearch=true`
+
+Query options include switches to enable semantic queries, searchFields, and spell correction. You can also paste the required query parameters into the query string.
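+
+For example, a semantic query pasted into the query string might look like the following sketch (the field names are illustrative):
+
+```
+search=what is a hash table&queryType=semantic&queryLanguage=en-us&searchFields=title,content&answers=extractive|count-3
+```
+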
:::image type="content" source="./media/semantic-search-overview/search-explorer-semantic-query-options.png" alt-text="Query options in Search explorer" border="true":::
The following table summarizes the query parameters used in a semantic query so
|--|--|--|
| queryType | String | Valid values include simple, full, and semantic. A value of "semantic" is required for semantic queries. |
| queryLanguage | String | Required for semantic queries. Currently, only "en-us" is implemented. |
-| searchFields | String | A comma-delimited list of searchable fields. Optional but recommended. Specifies the fields over which semantic ranking occurs. </br></br>In contrast with simple and full query types, the order in which fields are listed determines precedence. For more usage instructions, see [Step 2: Set searchFields](#searchfields). |
+| searchFields | String | A comma-delimited list of searchable fields. Specifies the fields over which semantic ranking occurs, from which captions and answers are extracted. </br></br>In contrast with simple and full query types, the order in which fields are listed determines precedence. For more usage instructions, see [Step 2: Set searchFields](#searchfields). |
| speller | String | Optional parameter, not specific to semantic queries, that corrects misspelled terms before they reach the search engine. For more information, see [Add spell correction to queries](speller-how-to-add.md). |
| answers | String | Optional parameter that specifies whether semantic answers are included in the result. Currently, only "extractive" is implemented. Answers can be configured to return a maximum of five. The default is one. This example shows a count of three answers: `extractive\|count-3`. For more information, see [Return semantic answers](semantic-answers.md). |
While content in a search index can be composed in multiple languages, the query
#### Step 2: Set searchFields
-This parameter is optional in that there is no error if you leave it out, but providing an ordered list of fields is strongly recommended for both captions and answers.
- The searchFields parameter is used to identify passages to be evaluated for "semantic similarity" to the query. For the preview, we do not recommend leaving searchFields blank as the model requires a hint as to what fields are the most important to process.
-The order of the searchFields is critical. If you already use searchFields in existing simple or full Lucene queries, be sure that you revisit this parameter to check for field order when switching to a semantic query type.
+The order of the searchFields is critical. If you already use searchFields in existing code for simple or full Lucene queries, revisit this parameter to check for field order when switching to a semantic query type.
-Follow these guidelines to ensure optimum results when two or more searchFields are specified:
+For two or more searchFields:
+ Include only string fields and top-level string fields in collections. If you happen to include non-string fields or lower-level fields in a collection, there is no error, but those fields won't be used in semantic ranking.
Follow these guidelines to ensure optimum results when two or more searchFields
+ Follow those fields by descriptive fields where the answer to semantic queries may be found, such as the main content of a document.
-If only one field specified, use a descriptive field where the answer to semantic queries may be found, such as the main content of a document. Choose a field that provides sufficient content. To ensure timely processing, only about 8,000 tokens of the aggregate contents of searchFields undergo semantic evaluation and ranking.
+If only one field is specified, use a descriptive field where the answer to semantic queries may be found, such as the main content of a document.
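+
+As a hypothetical example of that ordering (these field names are invented), concise fields come first and the text-rich field comes last:
+
+```json
+"searchFields": "title, subtitle, description"
+```
+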
#### Step 3: Remove orderBy clauses
search Semantic Ranking https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/semantic-ranking.md
Title: Semantic ranking
-description: Describes the semantic ranking algorithm in Cognitive Search.
+description: Learn how the semantic ranking algorithm works in Azure Cognitive Search.
Previously updated : 03/12/2021 Last updated : 03/18/2021

# Semantic ranking in Azure Cognitive Search

> [!IMPORTANT]
-> Semantic search features are in public preview, available through the preview REST API only. Preview features are offered as-is, under [Supplemental Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/), and are not guaranteed to have the same implementation at general availability. For more information, see [Availability and pricing](semantic-search-overview.md#availability-and-pricing).
+> Semantic search features are in public preview, available through the preview REST API only. Preview features are offered as-is, under [Supplemental Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/), and are not guaranteed to have the same implementation at general availability. These features are billable. For more information, see [Availability and pricing](semantic-search-overview.md#availability-and-pricing).
-Semantic ranking is an extension of the query execution pipeline that improves the precision and recall by reranking the top matches of an initial result set. Semantic ranking is backed by state-of-the-art deep machine reading comprehension models, trained for queries expressed in natural language as opposed to linguistic matching on keywords. In contrast with the [default similarity ranking algorithm](index-ranking-similarity.md), the semantic ranker uses the context and meaning of words to determine relevance.
+Semantic ranking is an extension of the query execution pipeline that improves the precision and recall by reranking the top matches of an initial result set. Semantic ranking is backed by state-of-the-art machine reading comprehension models, trained for queries expressed in natural language as opposed to linguistic matching on keywords. In contrast with the [default similarity ranking algorithm](index-ranking-similarity.md), the semantic ranker uses the context and meaning of words to determine relevance.
-## How semantic ranking works
+Semantic ranking is both resource and time intensive. In order to complete processing within the expected latency of a query operation, inputs are consolidated and simplified so that summarization and analysis can be completed as quickly as possible.
-The semantic ranking is both resource and time intensive. In order to complete processing within the expected latency of a query operation, the model takes as an input just the top 50 documents returned from the default [similarity ranking algorithm](index-ranking-similarity.md). Results from the initial ranking can include more than 50 matches, but only the first 50 will be reranked semantically.
+## Preparation for semantic ranking
-For semantic ranking, the model uses both machine reading comprehension and transfer learning to re-score the documents based on how well each one matches the intent of the query.
+Before scoring for relevance, content must be reduced to a quantity of parameters that can be handled efficiently by the semantic ranker. Content reduction includes the following sequence of steps.
-### Preparation (passage extraction) phase
+1. Content reduction starts by using the initial results returned by the default [similarity ranking algorithm](index-ranking-similarity.md) used for keyword search. Search results can include up to 1,000 matches, but semantic ranking will only process the top 50.
-For each document in the initial results, there is a passage extraction exercise that identifies key passages. This is a downsizing exercise that reduces content to an amount that can be processed swiftly.
+ Given the query, initial results could be far fewer than 50, depending on how many matches were found. Whatever the document count, the initial result set is the document corpus for semantic ranking.
-1. For each of the 50 documents, each field in the searchFields parameter is evaluated in consecutive order. Contents from each field are consolidated into one long string.
+1. Across the document corpus, the contents of each field in "searchFields" are extracted and combined into a long string.
-1. The long string is then trimmed to ensure the overall length is not more than 8,000 tokens. For this reason, it's recommended that you position concise fields first so that they are included in the string. If you have very large documents with text-heavy fields, anything after the token limit is ignored.
+1. Any strings that are excessively long are trimmed to ensure the overall length meets the input requirements of the summarization model. This trimming exercise is why it's important to position concise fields first in "searchFields", to ensure they are included in the string. If you have very large documents with text-heavy fields, anything after the maximum limit is ignored.
-1. Each document is now represented by a single long string that is up to 8,000 tokens. These strings are sent to the summarization model, which will reduce the string further. The summarization model evaluates the long string for key sentences or passages that best summarize the document or answer the question.
+Each document is now represented by a single long string.
-1. The output of this phase is a caption (and optionally, an answer). The caption is at most 128 tokens per document, and it is considered the most representative of the document.
+> [!NOTE]
+> Parameter inputs to the models are tokens, not characters or words. Tokenization is determined in part by the analyzer assignment on searchable fields. For insights into how strings are tokenized, you can review the token output of an analyzer by using the [Test Analyzer REST API](/rest/api/searchservice/test-analyzer).
+>
+> Currently in this preview, long strings can be a maximum of 8,000 tokens in size. If search fails to deliver an expected answer from deep within a document, knowing about content trimming helps you understand why.
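+
+To see how a given string tokenizes, you can call the Test Analyzer API directly. A minimal sketch, with the service name, index name, and admin key as placeholders:
+
+```http
+POST https://[service-name].search.windows.net/indexes/[index-name]/analyze?api-version=2020-06-30
+Content-Type: application/json
+api-key: [admin-key]
+
+{
+  "text": "How do clouds form?",
+  "analyzer": "standard.lucene"
+}
+```
+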
-### Scoring and ranking phases
+## Summarization
-In this phase, all 50 captions are evaluated to assess relevance.
+After string reduction, it's now possible to pass the parameters through machine reading comprehension and language representation models to determine which sentences and phrases best summarize the document, relative to the query.
+
+The input to summarization is the long string from the preparation phase. From that input, the summarization model evaluates the content to find the passages that are most representative.
+
+Output is a [semantic caption](semantic-how-to-query-request.md), in plain text and with highlights. The caption is smaller than the long string, usually fewer than 200 words per document, and it's considered the most representative of the document.
+
+A [semantic answer](semantic-answers.md) will also be returned if you specified the "answers" parameter, if the query was posed as a question, and if a passage can be found in the long string that looks like a plausible answer to the question.
+
+## Scoring and ranking
+
+At this point, you now have captions for each document. The captions are evaluated for relevance to the query.
1. Scoring is determined by evaluating each caption for conceptual and semantic relevance, relative to the query provided.
In this phase, all 50 captions are evaluated to assess relevance.
:::image type="content" source="media/semantic-search-overview/semantic-vector-representation.png" alt-text="Vector representation for context" border="true":::
-1. The output of this phase is an @search.rerankerScore assigned to each document. Once all documents are scored, they are listed in descending order and included in the query response payload.
+1. The output of this phase is a @search.rerankerScore assigned to each document. Once all documents are scored, they are listed in descending order and included in the query response payload. The payload includes answers, plain text and highlighted captions, and any fields that you marked as retrievable or specified in a select clause.
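+
+As an illustrative sketch (the scores and fields here are invented), each reranked document in the payload carries both scores plus its captions:
+
+```json
+{
+  "@search.score": 5.19,
+  "@search.rerankerScore": 1.99,
+  "@search.captions": [
+    {
+      "text": "Clouds form when water vapor condenses...",
+      "highlights": "<em>Clouds form</em> when water vapor condenses..."
+    }
+  ],
+  "id": "42",
+  "title": "How weather works"
+}
+```
+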
## Next steps
-Semantic ranking is offered on Standard tiers, in specific regions. For more information and to sign up, see [Availability and pricing](semantic-search-overview.md#availability-and-pricing). A new query type enables the relevance ranking and response structures of semantic search. To get started, [Create a semantic query](semantic-how-to-query-request.md).
+Semantic ranking is offered on Standard tiers, in specific regions. For more information about availability and to sign up, see [Availability and pricing](semantic-search-overview.md#availability-and-pricing). A new query type enables the relevance ranking and response structures of semantic search. To get started, [Create a semantic query](semantic-how-to-query-request.md).
-Alternatively, review either of the following articles for related information.
+Alternatively, review the following articles about default ranking. Semantic ranking depends on the similarity ranker to return the initial results. Knowing about query execution and ranking will give you a broad understanding of how the entire process works.
-+ [Semantic search overview](semantic-search-overview.md)
-+ [Return a semantic answer](semantic-answers.md)
++ [Full text search in Azure Cognitive Search](search-lucene-query-architecture.md)
++ [Similarity and scoring in Azure Cognitive Search](index-similarity-and-scoring.md)
++ [Analyzers for text processing in Azure Cognitive Search](search-analyzers.md)
search Semantic Search Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/semantic-search-overview.md
Components of semantic search extend the existing query execution pipeline in bo
Query execution proceeds as usual, with term parsing, analysis, and scans over the inverted indexes. The engine retrieves documents using token matching, and scores the results using the [default similarity scoring algorithm](index-similarity-and-scoring.md#similarity-ranking-algorithms). Scores are calculated based on the degree of linguistic similarity between query terms and matching terms in the index. If you defined them, scoring profiles are also applied at this stage. Results are then passed to the semantic search subsystem.
-In the preparation step, the document corpus returned from the initial result set is analyzed at the sentence and paragraph level to find passages that summarize each document. In contrast with keyword search, this step uses machine reading and comprehension to evaluate the content. As part of result composition, a semantic query returns captions and answers. To formulate them, semantic search uses language representation to extract and highlight key passages that best summarize a result. If the search query is a question - and answers are requested - the response will also include a text passage that best answers the question, as expressed by the search query. For both captions and answers, existing text is used in the formulation. The semantic models do not compose new sentences or phrases from the available content, nor does it apply logic to arrive at new conclusions. In short, the system will never return content that doesn't already exist.
+In the preparation step, the document corpus returned from the initial result set is analyzed at the sentence and paragraph level to find passages that summarize each document. In contrast with keyword search, this step uses machine reading and comprehension to evaluate the content. Through this stage of content processing, a semantic query returns [captions](semantic-how-to-query-request.md) and [answers](semantic-answers.md). To formulate them, semantic search uses language representation to extract and highlight key passages that best summarize a result. If the search query is a question - and answers are requested - the response will also include a text passage that best answers the question, as expressed by the search query.
+
+For both captions and answers, existing text is used in the formulation. The semantic models do not compose new sentences or phrases from the available content, nor do they apply logic to arrive at new conclusions. In short, the system will never return content that doesn't already exist.
Results are then re-scored based on the [conceptual similarity](semantic-ranking.md) of query terms.
search Tutorial Javascript Create Load Index https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/tutorial-javascript-create-load-index.md
+
+ Title: "JavaScript tutorial: Add search to web apps"
+
+description: Create index and import CSV data into Search index with JavaScript using the npm SDK @azure/search-documents.
+++++ Last updated : 03/18/2021+
+ms.devlang: javascript
++
+# 2 - Create and load Search Index with JavaScript
+
+Continue to build your Search-enabled website by:
+* Creating a Search resource with the VS Code extension
+* Creating a new index and importing data with JavaScript using the sample script and Azure SDK [@azure/search-documents](https://www.npmjs.com/package/@azure/search-documents).
+
+## Create an Azure Search resource
+
+Create a new Search resource with the [Azure Cognitive Search](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurecognitivesearch) extension for Visual Studio Code.
+
+1. In Visual Studio Code, open the [Activity bar](https://code.visualstudio.com/docs/getstarted/userinterface), and select the Azure icon.
+
+1. In the Side bar, **right-click on your Azure subscription** under the `Azure: Cognitive Search` area and select **Create new search service**.
+
+ :::image type="content" source="./media/tutorial-javascript-create-load-index/visual-studio-code-create-search-resource.png" alt-text="In the Side bar, right-click on your Azure subscription under the **Azure: Cognitive Search** area and select **Create new search service**.":::
+
+1. Follow the prompts to provide the following information:
+
+ |Prompt|Enter|
+ |--|--|
+ |Enter a globally unique name for the new Search Service.|**Remember this name**. This resource name becomes part of your resource endpoint.|
+ |Select a resource group for new resources|Use the resource group you created for this tutorial.|
+ |Select the SKU for your Search service.|Select **Free** for this tutorial. You can't change a SKU pricing tier after the service is created.|
+ |Select a location for new resources.|Select a region close to you.|
+
+1. After you complete the prompts, your new Search resource is created.
+
+## Get your Search resource admin key
+
+Get your Search resource admin key with the Visual Studio Code extension.
+
+1. In Visual Studio Code, in the Side bar, right-click on your Search resource and select **Copy Admin Key**.
+
+ :::image type="content" source="./media/tutorial-javascript-create-load-index/visual-studio-code-copy-admin-key.png" alt-text="In the Side bar, right-click on your Search resource and select **Copy Admin Key**.":::
+
+1. Keep this admin key; you'll need it in [a later section](#prepare-the-bulk-import-script-for-search).
+
+## Prepare the bulk import script for Search
+
+The script uses the Azure SDK for Cognitive Search:
+
+* [npm package @azure/search-documents](https://www.npmjs.com/package/@azure/search-documents)
+* [Reference Documentation](/javascript/api/overview/azure/search-documents-readme)
+
+1. In Visual Studio Code, open the `bulk_insert_books.js` file in the `search-website/bulk-insert` subdirectory, and replace the following variables with your own values to authenticate with the Azure Search SDK:
+
+ * YOUR-SEARCH-RESOURCE-NAME
+ * YOUR-SEARCH-ADMIN-KEY
+
+ :::code language="javascript" source="~/azure-search-javascript-samples/search-website/bulk-insert/bulk_insert_books.js" highlight="16,17" :::
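+
+As a rough sketch (not the sample's verbatim code; the variable names here are illustrative), those two values plug into the SDK like this:
+
+```javascript
+// Illustrative only; the script's actual variable names may differ.
+const { SearchClient, AzureKeyCredential } = require("@azure/search-documents");
+
+const client = new SearchClient(
+  "https://YOUR-SEARCH-RESOURCE-NAME.search.windows.net", // resource name from the creation step
+  "good-books",                                           // index that the script creates
+  new AzureKeyCredential("YOUR-SEARCH-ADMIN-KEY")         // admin key you copied earlier
+);
+```
+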
+
+1. Open an integrated terminal in Visual Studio Code for the project's `search-website/bulk-insert` subdirectory, and run the following command to install the dependencies.
+
+ ```bash
+ npm install
+ ```
+
+## Run the bulk import script for Search
+
+1. Continue using the integrated terminal in Visual Studio Code in the `search-website/bulk-insert` subdirectory, and run the following command to execute the `bulk_insert_books.js` script:
+
+ ```bash
+ npm start
+ ```
+
+1. As the code runs, the console displays progress.
+1. When the upload is complete, the last statement printed to the console is "done".
+
+## Review the new Search Index
+
+Once the upload completes, the Search Index is ready to use. Review your new Index.
+
+1. In Visual Studio Code, open the Azure Cognitive Search extension and select your Search resource.
+
+ :::image type="content" source="media/tutorial-javascript-create-load-index/visual-studio-code-search-extension-view-resource.png" alt-text="In Visual Studio Code, open the Azure Cognitive Search extension and open your Search resource.":::
+
+1. Expand Indexes, then Documents, then `good-books`, then select a doc to see all the document-specific data.
+
+ :::image type="content" source="media/tutorial-javascript-create-load-index/visual-studio-code-search-extension-view-docs.png" lightbox="media/tutorial-javascript-create-load-index/visual-studio-code-search-extension-view-docs.png" alt-text="Expand Indexes, then `good-books`, then select a doc.":::
+
+## Copy your Search resource name
+
+Note your **Search resource name**. You will need this to connect the Azure Function app to your Search resource.
+
+> [!CAUTION]
+> While you might be tempted to use your Search admin key in the Azure Function, doing so wouldn't follow the principle of least privilege. The Azure Function will use the query key to conform to least privilege.
+
+## Next steps
+
+[Deploy your Static Web App](tutorial-javascript-deploy-static-web-app.md)
search Tutorial Javascript Deploy Static Web App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/tutorial-javascript-deploy-static-web-app.md
+
+ Title: "JavaScript tutorial: Deploy search-enabled website"
+
+description: Deploy search-enabled website to Azure Static web app.
+++++ Last updated : 03/18/2021+
+ms.devlang: javascript
++
+# 3 - Deploy the search-enabled website
+
+Deploy the search-enabled website as an Azure Static web app. This deployment includes both the React app and the Function app.
+
+The Static Web app pulls the information and files for deployment from GitHub using your fork of the samples repository.
+
+## Create a Static Web App in Visual Studio Code
+
+1. Select **Azure** from the Activity Bar, then select **Static Web Apps** from the Side bar.
+1. Right-click on the subscription name then select **Create Static Web App (Advanced)**.
+
+ :::image type="content" source="media/tutorial-javascript-create-load-index/visual-studio-code-create-static-web-app-resource-advanced.png" alt-text="Right-click on the subscription name then select **Create Static Web App (Advanced)**.":::
+
+1. Follow the prompts to provide the following information:
+
+ |Prompt|Enter|
+ |--|--|
+ |How do you want to create a Static Web App?|Use existing GitHub repository|
+ |Choose organization|Select your _own_ GitHub alias as the organization.|
+ |Choose repository|Select **azure-search-javascript-samples** from the list. |
+ |Choose branch of repository|Select **master** from the list. |
+ |Enter the name for the new Static Web App.|Create a unique name for your resource. For example, you can prepend your name to the repository name such as, `joansmith-azure-search-javascript-samples`. |
+ |Select a resource group for new resources.|Use the resource group you created for this tutorial.|
+ |Choose build preset to configure default project structure.|Select **Custom**|
+ |Select the location of your application code|`search-website`|
+ |Select the location of your Azure Function code|`search-website/api`|
+ |Enter the path of your build output...|build|
+ |Select a location for new resources.|Select a region close to you.|
+
+1. After the resource is created, select **Open Actions in GitHub** from the notifications. This opens a browser window pointed to your forked repo.
+
+ The list of actions indicates your web app, both client and functions, were successfully pushed to your Azure Static Web App.
+
+ Wait until the build and deployment complete before continuing. This may take a minute or two to finish.
+
+## Get Cognitive Search query key in Visual Studio Code
+
+1. In Visual Studio Code, open the [Activity bar](https://code.visualstudio.com/docs/getstarted/userinterface), and select the Azure icon.
+
+1. In the Side bar, select your Azure subscription under the **Azure: Cognitive Search** area, then right-click on your Search resource and select **Copy Query Key**.
+
+ :::image type="content" source="./media/tutorial-javascript-create-load-index/visual-studio-code-copy-query-key.png" alt-text="In the Side bar, select your Azure subscription under the **Azure: Cognitive Search** area, then right-click on your Search resource and select **Copy Query Key**.":::
+
+1. Keep this query key; you'll need it in the next section. The query key can be used to query your index.
+
+## Add configuration settings in Visual Studio Code
+
+The Azure Function app won't return Search data until the Search secrets are in settings.
+
+1. Select **Azure** from the Activity Bar, then select **Static Web Apps** from the Side bar.
+1. Expand your new Static Web App until the **Application Settings** display.
+1. Right-click on **Application Settings**, then select **Add New Setting**.
+
+ :::image type="content" source="media/tutorial-javascript-create-load-index/visual-studio-code-static-web-app-configure-settings.png" alt-text="Right-click on **Application Settings**, then select **Add New Setting**.":::
+
+1. Add the following settings:
+
+ |Setting|Your Search resource value|
+ |--|--|
+ |SearchApiKey|Your Search query key|
+ |SearchServiceName|Your Search resource name|
+ |SearchIndexName|`good-books`|
+ |SearchFacets|`authors*,language_code`|
+
+## Use search in your Static web app
+
+1. In Visual Studio Code, open the [Activity bar](https://code.visualstudio.com/docs/getstarted/userinterface), and select the Azure icon.
+1. In the Side bar, **right-click on your Azure subscription** under the `Static web apps` area and find the Static web app you created for this tutorial.
+1. Right-click the Static Web App name and select **Browse site**.
+
+ :::image type="content" source="media/tutorial-javascript-create-load-index/visual-studio-code-browse-static-web-app.png" alt-text="Right-click the Static Web App name and select **Browse site**.":::
+
+1. Select **Open** in the pop-up dialog.
+1. In the website search bar, enter a search query such as `code` _slowly_, so that the suggest feature can suggest book titles. Select a suggestion or continue entering your own query. Press Enter when you've completed your search query.
+1. Review the results then select one of the books to see more details.
+
+## Clean up resources
+
+To clean up the resources created in this tutorial, delete the resource group.
+
+1. In Visual Studio Code, open the [Activity bar](https://code.visualstudio.com/docs/getstarted/userinterface), and select the Azure icon.
+
+1. In the Side bar, **right-click on your Azure subscription** under the `Resource Groups` area and find the resource group you created for this tutorial.
+1. Right-click the resource group name then select **Delete**.
+ This deletes both the Search and Static web app resources.
+1. If you no longer want the GitHub fork of the sample, remember to delete that on GitHub. Go to your fork's **Settings** then delete the fork.
++
+## Next steps
+
+* [Understand Search integration for the search-enabled website](tutorial-javascript-search-query-integration.md)
search Tutorial Javascript Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/tutorial-javascript-overview.md
+
+ Title: "JavaScript tutorial: Search integration overview"
+
+description: Technical overview and setup for adding search to a website and deploying to Azure Static Web App.
+++++ Last updated : 03/18/2021+
+ms.devlang: javascript
++
+# 1 - Overview of adding search to a website
+
+This tutorial builds a website for searching through a catalog of books and then deploys the website to an Azure Static Web App.
+
+The application is available:
+* [Sample](https://github.com/Azure-Samples/azure-search-javascript-samples/tree/master/search-website)
+* [Demo website - aka.ms/azs-good-books](https://aka.ms/azs-good-books)
+
+## What does the sample do?
+
+This sample website provides access to a catalog of 10,000 books. A user can search the catalog by entering text in the search bar. While the user enters text, the website uses the Search Index's suggest feature to complete the text. Once the query finishes, the list of books is displayed with a portion of the details. A user can select a book to see all of its details, as stored in the Search Index.
++
+The search experience includes:
+
+* Search – provides search functionality for the application.
+* Suggest – provides suggestions as the user is typing in the search bar.
+* Document Lookup – looks up a document by ID to retrieve all of its contents for the details page.
+
+## How is the sample organized?
+
+The [sample](https://github.com/Azure-Samples/azure-search-javascript-samples/tree/master/search-website) includes the following:
+
+|App|Purpose|GitHub<br>Repository<br>Location|
+|--|--|--|
+|Client|React app (presentation layer) to display books, with search. It calls the Azure Function app. |[/search-website/src](https://github.com/Azure-Samples/azure-search-javascript-samples/tree/master/search-website/src)|
+|Server|Azure Function app (business layer) - calls the Azure Cognitive Search API using the JavaScript SDK |[/search-website/api](https://github.com/Azure-Samples/azure-search-javascript-samples/tree/master/search-website/api)|
+|Bulk insert|JavaScript file to create the index and add documents to it.|[/search-website/bulk-insert](https://github.com/Azure-Samples/azure-search-javascript-samples/tree/master/search-website/bulk-insert)|
+
+## Set up your development environment
+
+Install the following for your local development environment.
+
+- [Node.js 12 or 14](https://nodejs.org/en/download)
+ - If you have a different version of Node.js installed on your local computer, consider using [Node Version Manager](https://github.com/nvm-sh/nvm) (nvm) or a Docker container.
+- [Git](https://git-scm.com/downloads)
+- [Visual Studio Code](https://code.visualstudio.com/) and the following extensions
+ - [Azure Resources](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azureresourcegroups)
+ - [Azure Cognitive Search 0.2.0+](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurecognitivesearch)
+ - [Azure Static Web App](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurestaticwebapps)
+- Optional:
+ - This tutorial doesn't run the Azure Function API locally but if you intend to run it locally, you need to install [azure-functions-core-tools](/azure/azure-functions/functions-run-local?tabs=linux%2Ccsharp%2Cbash) globally with the following bash command:
+
+ ```bash
+ npm install -g azure-functions-core-tools
+ ```
+
+## Fork and clone the search sample with git
+
+Forking the sample repository is critical for deploying the Static Web App. The Static Web App determines the build actions and deployment content based on your own GitHub fork location. Code execution in the Static Web App is remote, with Azure Static Web Apps reading from the code in your forked sample.
+
+1. On GitHub, fork the [sample repository](https://github.com/Azure-Samples/azure-search-javascript-samples).
+
+ Complete the fork process in your web browser with your GitHub account. This tutorial uses your fork as part of the deployment to an Azure Static Web App.
+
+1. At a bash terminal, download the sample application to your local computer.
+
+ Replace `YOUR-GITHUB-ALIAS` with your GitHub alias.
+
+ ```bash
+ git clone https://github.com/YOUR-GITHUB-ALIAS/azure-search-javascript-samples
+ ```
+
+1. In Visual Studio Code, open your local folder of the cloned repository. The remaining tasks are accomplished from Visual Studio Code, unless specified.
+
+## Create a resource group for your Azure resources
+
+1. In Visual Studio Code, open the [Activity bar](https://code.visualstudio.com/docs/getstarted/userinterface), and select the Azure icon.
+1. In the Side bar, **right-click on your Azure subscription** under the `Resource Groups` area and select **Create resource group**.
+
+ :::image type="content" source="./media/tutorial-javascript-overview/visual-studio-code-create-resource-group.png" alt-text="In the Side bar, **right-click on your Azure subscription** under the `Resource Groups` area and select **Create resource group**.":::
+1. Enter a resource group name, such as `cognitive-search-website-tutorial`.
+1. Select a location close to you.
+1. When you create the Cognitive Search and Static Web App resources, later in the tutorial, use this resource group.
+
+ Creating a resource group gives you a logical unit to manage the resources, including deleting them when you are finished using them.
+
+## Next steps
+
+* [Create a Search Index and load with documents](tutorial-javascript-create-load-index.md)
+* [Deploy your Static Web App](tutorial-javascript-deploy-static-web-app.md)
search Tutorial Javascript Search Query Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/tutorial-javascript-search-query-integration.md
+
+ Title: "JavaScript tutorial: Search integration highlights"
+
+description: Understand the JavaScript SDK Search queries used in the Search-enabled website
+++++ Last updated : 03/09/2021+
+ms.devlang: javascript
++
+# 4 - Search integration highlights
+
+In the previous lessons, you added search to a Static Web App. This lesson highlights the essential steps that establish integration. If you are looking for a cheat sheet on how to integrate search into your JavaScript app, this article explains what you need to know.
+
+## Azure SDK @azure/search-documents
+
+The Function app uses the Azure SDK for Cognitive Search:
+
+* NPM: [@azure/search-documents](https://www.npmjs.com/package/@azure/search-documents)
+* Reference Documentation: [Client Library](/javascript/api/overview/azure/search-documents-readme)
+
+The Function app authenticates through the SDK to the cloud-based Cognitive Search API using your resource name, resource key, and index name. The secrets are stored in the Static Web App settings and pulled into the Function as environment variables.
+
+## Configure secrets in a configuration file
++
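+The configuration code itself is elided here, but as a sketch (the setting names come from the Static Web App settings table in the previous lesson; the sample's file layout may differ), a Function can read those settings as environment variables:
+
+```javascript
+// Sketch: read the Static Web App settings as environment variables.
+module.exports = {
+  serviceName: process.env["SearchServiceName"],
+  apiKey: process.env["SearchApiKey"],        // query key, not the admin key
+  indexName: process.env["SearchIndexName"]
+};
+```
+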
+## Azure Function: Search the catalog
+
+The `Search` [API](https://github.com/Azure-Samples/azure-search-javascript-samples/blob/master/search-website/api/Search/index.js) takes a search term and searches across the documents in the Search Index, returning a list of matches.
+
+Routing for the Search API is contained in the [function.json](https://github.com/Azure-Samples/azure-search-javascript-samples/blob/master/search-website/api/Search/function.json) bindings.
+
+The Azure Function pulls in the Search configuration information, and fulfills the query.
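+
+As a minimal sketch of that pattern (not the sample's exact code; the `q`, `top`, and `skip` parameter names are illustrative), the Function builds a `SearchClient` from the configuration and runs the query:
+
+```javascript
+// Sketch of a Search-style Azure Function using @azure/search-documents.
+const { SearchClient, AzureKeyCredential } = require("@azure/search-documents");
+
+module.exports = async function (context, req) {
+  const client = new SearchClient(
+    `https://${process.env["SearchServiceName"]}.search.windows.net`,
+    process.env["SearchIndexName"],
+    new AzureKeyCredential(process.env["SearchApiKey"])
+  );
+
+  const { q = "*", top = 10, skip = 0 } = req.body || {};
+  const searchResults = await client.search(q, {
+    top: Number(top),
+    skip: Number(skip),
+    includeTotalCount: true
+  });
+
+  const results = [];
+  for await (const result of searchResults.results) {
+    results.push(result.document); // each result wraps the matching document
+  }
+
+  context.res = { body: { count: searchResults.count, results } };
+};
+```
+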
++
+## Client: Search from the catalog
+
+Call the Azure Function in the React client with the following code.
++
+## Azure Function: Suggestions from the catalog
+
+The `Suggest` [API](https://github.com/Azure-Samples/azure-search-javascript-samples/blob/master/search-website/api/Suggest/index.js) takes a search term while a user is typing and suggests search terms such as book titles and authors across the documents in the search index, returning a small list of matches.
+
+The search suggester, `sg`, is defined in the [schema file](https://github.com/Azure-Samples/azure-search-javascript-samples/blob/master/search-website/bulk-insert/good-books-index.json) used during bulk upload.
+
+Routing for the Suggest API is contained in the [function.json](https://github.com/Azure-Samples/azure-search-javascript-samples/blob/master/search-website/api/Suggest/function.json) bindings.
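+
+A minimal sketch of the suggest call (not the sample's exact code; only the suggester name `sg` comes from the schema above):
+
+```javascript
+// Sketch of a Suggest-style Azure Function using the "sg" suggester.
+const { SearchClient, AzureKeyCredential } = require("@azure/search-documents");
+
+const client = new SearchClient(
+  `https://${process.env["SearchServiceName"]}.search.windows.net`,
+  process.env["SearchIndexName"],
+  new AzureKeyCredential(process.env["SearchApiKey"])
+);
+
+module.exports = async function (context, req) {
+  const { q } = req.body || {};
+
+  // "sg" is the suggester name defined in the index schema during bulk upload.
+  const { results } = await client.suggest(q, "sg", { top: 5 });
+
+  context.res = { body: { suggestions: results.map(r => r.text) } };
+};
+```
+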
++
+## Client: Suggestions from the catalog
+
+The Suggest function API is called in the React app at `\src\components\SearchBar\SearchBar.js` as part of component initialization:
++
+## Azure Function: Get specific document
+
+The `Lookup` [API](https://github.com/Azure-Samples/azure-search-javascript-samples/blob/master/search-website/api/Lookup/index.js) takes an ID and returns the document object from the Search Index.
+
+Routing for the Lookup API is contained in the [function.json](https://github.com/Azure-Samples/azure-search-javascript-samples/blob/master/search-website/api/Lookup/function.json) bindings.
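+
+A minimal sketch of the lookup (the `id` parameter name is an assumption, not taken from the sample):
+
+```javascript
+// Sketch of a Lookup-style Azure Function: fetch one document by key.
+const { SearchClient, AzureKeyCredential } = require("@azure/search-documents");
+
+const client = new SearchClient(
+  `https://${process.env["SearchServiceName"]}.search.windows.net`,
+  process.env["SearchIndexName"],
+  new AzureKeyCredential(process.env["SearchApiKey"])
+);
+
+module.exports = async function (context, req) {
+  const id = req.query.id || (req.body && req.body.id); // hypothetical parameter name
+  const document = await client.getDocument(id);
+  context.res = { body: { document } };
+};
+```
+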
++
+## Client: Get specific document
+
+This function API is called in the React app at `\src\pages\Details\Detail.js` as part of component initialization:
++
+## Next steps
+
+* [Index Azure SQL data](search-indexer-tutorial.md)
spatial-anchors Coarse Reloc https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/spatial-anchors/concepts/coarse-reloc.md
Title: Coarse Relocalization
-description: Learn about using Coarse relocalization to find anchors near you.
+ Title: Coarse relocalization
+description: Learn how and when to use coarse relocalization. Coarse relocalization helps you find anchors that are near you.
# Coarse relocalization
-Coarse relocalization is a feature that enables large-scale localization by providing an approximate but fast answer to the question: *Where is my device now / What content should I be observing?* The response isn't precise, but instead is in the form: *You're close to these anchors; try locating one of them*.
+Coarse relocalization is a feature that enables large-scale localization by providing an approximate but fast answer to these questions:
+- *Where is my device now?*
+- *What content should I be observing?*
+
+The response isn't precise. It's in this form: *You're close to these anchors. Try to locate one of them*.
-Coarse relocalization works by tagging anchors with various on-device sensor readings that are later used for fast querying. For outdoor scenarios, the sensor data is typically the GPS (Global Positioning System) position of the device. When GPS is not available or unreliable (such as indoors), the sensor data consists of the WiFi access points and Bluetooth beacons in range. The collected sensor data contributes to maintaining a spatial index used by Azure Spatial Anchors to quickly determine which anchors are in proximity of your device.
+Coarse relocalization works by tagging anchors with various on-device sensor readings that are later used for fast querying. For outdoor scenarios, the sensor data is typically the GPS (Global Positioning System) position of the device. When GPS is unavailable or unreliable, like when you're indoors, the sensor data consists of the Wi-Fi access points and Bluetooth beacons in range. The collected sensor data contributes to maintaining a spatial index used by Azure Spatial Anchors to quickly determine which anchors are close to your device.
## When to use coarse relocalization
-If you are planning to handle more than 35 spatial anchors in a space larger than a tennis court, you will likely benefit from coarse relocalization spatial indexing.
+If you're planning to handle more than 35 spatial anchors in a space larger than a tennis court, you'll probably benefit from coarse relocalization spatial indexing.
-The fast look-up of anchors enabled by coarse relocalization is designed to simplify the development of applications backed by world-scale collections of (say, millions of geo-distributed) anchors. The complexity of spatial indexing is all hidden away, allowing you to focus on your application logic. All the anchor heavy-lifting is done for you behind the scenes by Azure Spatial Anchors.
+The fast lookup of anchors enabled by coarse relocalization is designed to simplify the development of applications backed by world-scale collections of, say, millions of geo-distributed anchors. The complexity of spatial indexing is all hidden, so you can focus on your application logic. All the difficult work is done behind the scenes by Azure Spatial Anchors.
## Using coarse relocalization
-The typical workflow to create and query Azure Spatial Anchors with coarse relocalization is:
-1. Create and configure a sensor fingerprint provider to collect sensor data of your choice.
-2. Start an Azure Spatial Anchor Session and create anchors. Because sensor fingerprinting is enabled, the anchors are spatially indexed by coarse relocalization.
-3. Query surrounding anchors using coarse relocalization, using the dedicated search criteria in the Azure Spatial Anchor session.
+Here's the typical workflow to create and query Azure Spatial Anchors with coarse relocalization:
+1. Create and configure a sensor fingerprint provider to collect the sensor data that you want.
+2. Start an Azure Spatial Anchors session and create the anchors. Because sensor fingerprinting is enabled, the anchors are spatially indexed by coarse relocalization.
+3. Query surrounding anchors by using coarse relocalization via the dedicated search criteria in the Spatial Anchors session.
-You can refer to the corresponding following tutorial to set up coarse relocalization in your application:
-* [Coarse Relocalization in Unity](../how-tos/set-up-coarse-reloc-unity.md)
-* [Coarse Relocalization in Objective-C](../how-tos/set-up-coarse-reloc-objc.md)
-* [Coarse Relocalization in Swift](../how-tos/set-up-coarse-reloc-swift.md)
-* [Coarse Relocalization in Java](../how-tos/set-up-coarse-reloc-java.md)
-* [Coarse Relocalization in C++/NDK](../how-tos/set-up-coarse-reloc-cpp-ndk.md)
-* [Coarse Relocalization in C++/WinRT](../how-tos/set-up-coarse-reloc-cpp-winrt.md)
+You can refer to one of these tutorials to set up coarse relocalization in your application:
+* [Coarse relocalization in Unity](../how-tos/set-up-coarse-reloc-unity.md)
+* [Coarse relocalization in Objective-C](../how-tos/set-up-coarse-reloc-objc.md)
+* [Coarse relocalization in Swift](../how-tos/set-up-coarse-reloc-swift.md)
+* [Coarse relocalization in Java](../how-tos/set-up-coarse-reloc-java.md)
+* [Coarse relocalization in C++/NDK](../how-tos/set-up-coarse-reloc-cpp-ndk.md)
+* [Coarse relocalization in C++/WinRT](../how-tos/set-up-coarse-reloc-cpp-winrt.md)
## Sensors and platforms

### Platform availability
-The types of sensor data that you can send to the anchor service are:
+You can send these types of sensor data to the anchor service:
-* GPS position: latitude, longitude, altitude.
-* Signal strength of WiFi access points in range.
-* Signal strength of Bluetooth beacons in range.
+* GPS position: latitude, longitude, altitude
+* Signal strength of Wi-Fi access points in range
+* Signal strength of Bluetooth beacons in range
-The table below summarizes the availability of the sensor data on supported platforms, along with any platform-specific caveats:
+This table summarizes the availability of the sensor data on supported platforms and provides information that you should be aware of:
| | HoloLens | Android | iOS |
|--|--|--|--|
-| **GPS** | NO<sup>1</sup> | YES<sup>2</sup> | YES<sup>3</sup> |
-| **WiFi** | YES<sup>4</sup> | YES<sup>5</sup> | NO |
-| **BLE beacons** | YES<sup>6</sup> | YES<sup>6</sup> | YES<sup>6</sup>|
+| **GPS** | No<sup>1</sup> | Yes<sup>2</sup> | Yes<sup>3</sup> |
+| **Wi-Fi** | Yes<sup>4</sup> | Yes<sup>5</sup> | No |
+| **BLE beacons** | Yes<sup>6</sup> | Yes<sup>6</sup> | Yes<sup>6</sup>|
-<sup>1</sup> An external GPS device can be associated with HoloLens. Contact [our support](../spatial-anchor-support.md) if you would be willing to use HoloLens with a GPS tracker.<br/>
-<sup>2</sup> Supported through [LocationManager][3] APIs (both GPS and NETWORK)<br/>
-<sup>3</sup> Supported through [CLLocationManager][4] APIs<br/>
-<sup>4</sup> Supported at a rate of approximately one scan every 3 seconds <br/>
-<sup>5</sup> Starting with API level 28, WiFi scans are throttled to 4 calls every 2 minutes. From Android 10, the throttling can be disabled from the Developer settings menu. For more information, see the [Android documentation][5].<br/>
-<sup>6</sup> Limited to [Eddystone][1] and [iBeacon][2]
+<sup>1</sup> An external GPS device can be associated with HoloLens. Contact [our support](../spatial-anchor-support.md) if you'd be willing to use HoloLens with a GPS tracker.<br/>
+<sup>2</sup> Supported through [LocationManager][3] APIs (both GPS and NETWORK).<br/>
+<sup>3</sup> Supported through [CLLocationManager][4] APIs.<br/>
+<sup>4</sup> Supported at a rate of approximately one scan every 3 seconds. <br/>
+<sup>5</sup> Starting with API level 28, Wi-Fi scans are throttled to four calls every 2 minutes. Starting with Android 10, you can disable this throttling from the **Developer settings** menu. For more information, see the [Android documentation][5].<br/>
+<sup>6</sup> Limited to [Eddystone][1] and [iBeacon][2].
### Which sensor to enable
-The choice of sensor is specific to the application you are developing and the platform.
-The following diagram provides a starting point on which combination of sensors can be enabled depending on the localization scenario:
+The choice of sensor depends on the application you're developing and the platform.
+This diagram provides a starting point for determining which combination of sensors you can enable, depending on the localization scenario:
-![Diagram of enabled sensors selection](media/coarse-relocalization-enabling-sensors.png)
+![Diagram that shows enabled sensors for various scenarios.](media/coarse-relocalization-enabling-sensors.png)
-The following sections give more insights on the advantages and limitations for each sensor type.
+The following sections provide more insight on the advantages and limitations of each sensor type.
### GPS

GPS is the go-to option for outdoor scenarios.
-When using GPS in your application, keep in mind that the readings provided by the hardware are typically:
+When you use GPS in your application, keep in mind that the readings provided by the hardware are typically:
-* asynchronous and low frequency (less than 1 Hz).
-* unreliable / noisy (on average 7-m standard deviation).
+* Asynchronous and low frequency (less than 1 Hz).
+* Unreliable/noisy (on average, 7-m standard deviation).
-In general, both the device OS and Azure Spatial Anchors will do some filtering and extrapolation on the raw GPS signal in an attempt to mitigate these issues. This extra-processing requires time for convergence, so for best results you should try to:
+In general, both the device OS and Spatial Anchors will do some filtering and extrapolation of the raw GPS signal in an attempt to mitigate these problems. This extra processing requires time for convergence, so, for best results, you should try to:
-* create one sensor fingerprint provider as early as possible in your application
-* keep the sensor fingerprint provider alive between multiple sessions
-* share the sensor fingerprint provider between multiple sessions
+* Create one sensor fingerprint provider as early as possible in your application.
+* Keep the sensor fingerprint provider alive between multiple sessions.
+* Share the sensor fingerprint provider between multiple sessions.
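To make the convergence behavior concrete, here's a minimal sketch (plain JavaScript, not Spatial Anchors SDK code) of the kind of exponential smoothing a client might apply to noisy GPS fixes. The reading shape and the `alpha` weight are illustrative assumptions, not the SDK's actual filter:

```javascript
// Minimal sketch: exponential smoothing of noisy GPS fixes.
// Not part of the Spatial Anchors SDK; shapes and weights are illustrative.
function createGpsSmoother(alpha = 0.3) {
  let estimate = null; // { latitude, longitude } in degrees
  return function update(reading) {
    if (estimate === null) {
      estimate = { ...reading };
    } else {
      estimate.latitude += alpha * (reading.latitude - estimate.latitude);
      estimate.longitude += alpha * (reading.longitude - estimate.longitude);
    }
    return estimate;
  };
}

// Usage: feed raw fixes as they arrive; the estimate converges over time,
// which is why creating the provider early and keeping it alive helps.
const smooth = createGpsSmoother();
smooth({ latitude: 47.6421, longitude: -122.1368 });
```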
-Consumer-grade GPS devices are typically imprecise. A study by [Zandenbergen and Barbeau (2011)][6] reports the median accuracy of mobile phones with assisted GPS (A-GPS) to be around 7 meters - quite a large value to be ignored! To account for these measurement errors, the service treats the anchors as probability distributions in GPS space. As such, an anchor is now the region of space that most likely (that is, with more than 95% confidence) contains its true, unknown GPS position.
+Consumer-grade GPS devices are typically imprecise. A study by [Zandenbergen and Barbeau (2011)][6] reports that the median accuracy of mobile phones that have assisted GPS (A-GPS) is about 7 meters. That's quite a large value to ignore! To account for these measurement errors, the service treats anchors as probability distributions in GPS space. So an anchor is the region of space that most likely (with more than 95% confidence) contains its true, unknown GPS position.
-The same reasoning is applied when querying with GPS. The device is represented as another spatial confidence region around its true, unknown GPS position. Discovering nearby anchors translates into simply finding the anchors with confidence regions *close enough* to the device's confidence region, as illustrated in the image below:
+The same reasoning applies when you query by using GPS. The device is represented as another spatial confidence region around its true, unknown GPS position. Discovering nearby anchors translates to finding the anchors with confidence regions *close enough* to the device's confidence region, as illustrated here:
-![Selection of anchor candidates with GPS](media/coarse-reloc-gps-separation-distance.png)
+![Diagram that illustrates finding anchor candidates by using GPS.](media/coarse-reloc-gps-separation-distance.png)
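As a rough illustration of this "close enough" test (a sketch only; the service's actual distributions and thresholds aren't public), the check reduces to comparing the great-circle distance between the two centers against the sum of the two confidence radii:

```javascript
// Minimal sketch: decide whether an anchor's GPS confidence region is
// "close enough" to the device's. The radii are illustrative, not the
// service's actual parameters.
const EARTH_RADIUS_M = 6371000;

function haversineMeters(a, b) {
  const toRad = (deg) => (deg * Math.PI) / 180;
  const dLat = toRad(b.latitude - a.latitude);
  const dLon = toRad(b.longitude - a.longitude);
  const s =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(a.latitude)) * Math.cos(toRad(b.latitude)) * Math.sin(dLon / 2) ** 2;
  return 2 * EARTH_RADIUS_M * Math.asin(Math.sqrt(s));
}

// Two circular confidence regions overlap when the distance between their
// centers is at most the sum of their radii.
function confidenceRegionsOverlap(device, anchor) {
  return haversineMeters(device, anchor) <= device.radiusM + anchor.radiusM;
}

confidenceRegionsOverlap(
  { latitude: 47.642, longitude: -122.137, radiusM: 14 },
  { latitude: 47.6421, longitude: -122.1368, radiusM: 14 }
); // -> true
```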
-### WiFi
+### Wi-Fi
-On HoloLens and Android, WiFi signal strength can be a good option to enable indoor coarse relocalization.
-Its advantage is the potential immediate availability of WiFi access points (common in, e.g., office spaces or shopping malls) with no extra set-up needed.
+On HoloLens and Android, Wi-Fi signal strength can be a good way to enable indoor coarse relocalization.
+The advantage is the potential immediate availability of Wi-Fi access points (common in office spaces and shopping malls, for example) with no extra setup needed.
> [!NOTE]
-> iOS does not provide any API to read WiFi signal strength, and as such cannot be used for WiFi-enabled coarse relocalization.
+> iOS doesn't provide an API for reading Wi-Fi signal strength, so it can't be used for coarse relocalization enabled via Wi-Fi.
-When using WiFi in your application, keep in mind that the readings provided by the hardware are typically:
+When you use Wi-Fi in your application, keep in mind that the readings provided by the hardware are typically:
-* asynchronous and low frequency (less than 0.1 Hz).
-* potentially throttled at the OS level.
-* unreliable / noisy (on average 3-dBm standard deviation).
+* Asynchronous and low frequency (less than 0.1 Hz).
+* Potentially throttled at the OS level.
+* Unreliable/noisy (on average, 3-dBm standard deviation).
-Azure Spatial Anchors will attempt to build a filtered WiFi signal strength map during a session in an attempt to mitigate these issues. For best results you should try to:
+Spatial Anchors will try to build a filtered map of Wi-Fi signal strength during a session in an attempt to mitigate these issues. For best results, try to:
-* create the session well before placing the first anchor.
-* keep the session alive for as long as possible (that is, create all anchors and query in one session).
+* Create the session well before you place the first anchor.
+* Keep the session alive for as long as possible. (That is, create all anchors and query in one session.)
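For intuition, here's a minimal sketch of a filtered signal-strength map like the one described above, keyed by access-point BSSID. It's plain JavaScript with an assumed scan format; the SDK's internal filtering differs:

```javascript
// Minimal sketch: a filtered Wi-Fi signal-strength map keyed by BSSID.
// Illustrative only; not the Spatial Anchors implementation.
function createRssiMap(alpha = 0.2) {
  const map = new Map(); // BSSID -> smoothed RSSI (dBm)
  return {
    update(scan) {
      for (const { bssid, rssi } of scan) {
        const prev = map.get(bssid);
        map.set(bssid, prev === undefined ? rssi : prev + alpha * (rssi - prev));
      }
    },
    get: (bssid) => map.get(bssid),
  };
}

// Usage: feed each (infrequent, noisy) scan into the same map, which is
// why a long-lived session produces better results.
const rssiMap = createRssiMap();
rssiMap.update([{ bssid: "aa:bb:cc:dd:ee:ff", rssi: -52 }]);
```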
### Bluetooth beacons

<a name="beaconsDetails"></a>
-Carefully deploying bluetooth beacons is a good solution for large scale indoor coarse relocalization scenarios, where GPS is absent or inaccurate. It is also the only indoor method that is supported on all three platforms.
+Careful deployment of Bluetooth beacons is a good solution for large-scale indoor coarse relocalization scenarios, where GPS is absent or inaccurate. It's also the only indoor method that's supported on all three platforms.
-Beacons are typically versatile devices, where everything - including UUIDs and MAC addresses - can be configured. Azure Spatial Anchors expects beacons to be uniquely identified by their UUIDs. Failing to ensure this uniqueness will most likely cause incorrect results. For best results you should:
+Beacons are typically versatile devices on which everything can be configured, including UUIDs and MAC addresses. Azure Spatial Anchors expects beacons to be uniquely identified by their UUIDs. If you don't ensure this uniqueness, you'll probably get incorrect results. For best results:
-* assign unique UUIDs to your beacons.
-* deploy them in a way that covers your space uniformly, and so that at least 3 beacons are reachable from any point in space.
-* pass the list of unique beacon UUIDs to the sensor fingerprint provider
+* Assign unique UUIDs to your beacons.
+* Deploy beacons in a way that covers your space uniformly and so that at least three beacons are reachable from any point in space.
+* Pass the list of unique beacon UUIDs to the sensor fingerprint provider.
-Radio signals such as bluetooth are affected by obstacles and can interfere with other radio signals. For these reasons it can be difficult to guess whether your space is uniformly covered. To guarantee a better customer experience we recommend that you manually test the coverage of your beacons. This can be done by walking around your space with candidate devices and an application showing bluetooth in range. While testing the coverage, make sure that you can reach at least 3 beacons from any strategic position of your space. Setting up too many beacons can result in more interference between them and will not necessarily improve coarse relocalization accuracy.
+Radio signals like those of Bluetooth are affected by obstacles and can interfere with other radio signals. So it can be hard to guess whether your space is uniformly covered. To guarantee a better customer experience, we recommend that you manually test the coverage of your beacons. You can conduct a test by walking around your space with candidate devices and an application that shows Bluetooth in range. While you test the coverage, make sure you can reach at least three beacons from any strategic position in your space. Having too many beacons can result in more interference between them and won't necessarily improve the accuracy of coarse relocalization.
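If you script your coverage walk-through, a check like the following sketch can flag under-covered positions. The scan format and the -90 dBm reachability threshold are assumptions for illustration:

```javascript
// Minimal sketch of a manual coverage check: at each strategic position,
// count distinct allowlisted beacons whose signal clears a threshold.
// The -90 dBm cutoff is an illustrative assumption.
function positionIsCovered(scan, allowedUuids, minBeacons = 3, minRssiDbm = -90) {
  const reachable = new Set(
    scan
      .filter((b) => allowedUuids.has(b.uuid) && b.rssi >= minRssiDbm)
      .map((b) => b.uuid)
  );
  return reachable.size >= minBeacons;
}

positionIsCovered(
  [
    { uuid: "uuid-1", rssi: -60 },
    { uuid: "uuid-2", rssi: -72 },
    { uuid: "uuid-3", rssi: -85 },
  ],
  new Set(["uuid-1", "uuid-2", "uuid-3"])
); // -> true
```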
-Bluetooth beacons typically have a coverage of 80 meters if no obstacles are present in the space.
-This means that for a space that has no big obstacles, one could deploy beacons on a grid pattern every 40 meters.
+Bluetooth beacons typically cover 80 meters if no obstacles are present in the space.
+So, for a space that has no large obstacles, you could deploy beacons in a grid pattern every 40 meters.
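Applied literally, that rule of thumb gives a simple way to estimate candidate beacon positions for a rectangular space. This sketch just halves the nominal coverage to get the grid spacing; treat it as a starting point, not a substitute for the manual coverage test described earlier:

```javascript
// Minimal sketch: candidate beacon positions on a grid, spacing set to
// half the nominal coverage (80 m coverage -> 40 m spacing).
function beaconGridPositions(widthM, depthM, coverageM = 80) {
  const spacing = coverageM / 2;
  const positions = [];
  for (let x = 0; x <= widthM; x += spacing) {
    for (let y = 0; y <= depthM; y += spacing) {
      positions.push({ x, y });
    }
  }
  return positions;
}

beaconGridPositions(120, 80).length; // 4 x 3 = 12 candidate positions
```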
-A beacon running out of battery will affect the results negatively, so make sure you monitor your deployment periodically for low or dead batteries.
+A beacon that's running out of battery will affect the results, so be sure to monitor your deployment periodically for low or uncharged batteries.
-Azure Spatial Anchors will only track Bluetooth beacons that are in the known beacon proximity UUIDs list. Malicious beacons programmed to have allow-listed UUIDs can negatively impact the quality of the service though. For that reason, you will obtain best results in curated spaces where you can control their deployment.
+Azure Spatial Anchors will track only Bluetooth beacons that are in the known-beacon proximity UUIDs list. But malicious beacons programmed to have allowlisted UUIDs can negatively affect the quality of the service. So you'll get the best results in curated spaces where you can control beacon deployment.
-### Sensors accuracy
+### Sensor accuracy
-The accuracy of the GPS signal, both on anchor creation as well as during queries, has a large influence over the set of returned anchors. In contrast, queries based on WiFi / beacons will consider all anchors that have at least one access point / beacon in common with the query. In that sense, the result of a query based on WiFi / beacons is mostly determined by the physical range of the access points / beacons, and environmental obstructions.
-The table below estimates the expected search space for each sensor type:
+The accuracy of the GPS signal, both during anchor creation and during queries, has a significant influence on the set of returned anchors. In contrast, queries based on Wi-Fi/beacons will consider all anchors that have at least one access point or beacon in common with the query. In that sense, the result of a query that's based on Wi-Fi/beacons is determined mostly by the physical range of the access points or beacons and environmental obstructions.
+This table estimates the expected search space for each sensor type:
-| Sensor | Search space radius (approx.) | Details |
+| Sensor | Search-space radius (approximate) | Details |
|-|:-:|-|
-| GPS | 20 m - 30 m | Determined by the GPS uncertainty among other factors. The reported numbers are estimated for the median GPS accuracy of mobile phones with A-GPS, that is 7 meters. |
-| WiFi | 50 m - 100 m | Determined by the range of the wireless access points. Depends on the frequency, transmitter strength, physical obstructions, interference, and so on. |
-| BLE beacons | 70 m | Determined by the range of the beacon. Depends on the frequency, transmission strength, physical obstructions, interference, and so on. |
+| **GPS** | 20 m to 30 m | Determined by the GPS uncertainty, among other factors. The reported numbers are estimated for the median GPS accuracy of mobile phones with A-GPS: 7 meters. |
+| **Wi-Fi** | 50 m to 100 m | Determined by the range of the wireless access points. Depends on the frequency, transmitter strength, physical obstructions, interference, and so on. |
+| **BLE beacons** | 70 m | Determined by the range of the beacon. Depends on the frequency, transmission strength, physical obstructions, interference, and so on. |
<!-- Reference links in article -->
[1]: https://developers.google.com/beacons/eddystone
spatial-anchors Get Started Unity Android https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/spatial-anchors/quickstarts/get-started-unity-android.md
Previously updated : 11/20/2020 Last updated : 03/18/2021
You'll learn how to:
To complete this quickstart, make sure you have: -- A Windows or macOS machine with <a href="https://unity3d.com/get-unity/download" target="_blank">Unity 2019.4 (LTS)</a>, including the **Android Build Support** with **Android SDK & NDK Tools** and **OpenJDK** modules.
+- A Windows or macOS machine with <a href="https://unity3d.com/get-unity/download" target="_blank">Unity (LTS)</a>, including the **Android Build Support** with **Android SDK & NDK Tools** and **OpenJDK** modules. Use **Unity 2020 LTS** with ASA SDK version 2.9 or later (which uses the [Unity XR Plug-in Framework](https://docs.unity3d.com/Manual/XRPluginArchitecture.html)) or **Unity 2019 LTS** with ASA SDK version 2.8 or earlier.
- If running on Windows, you'll also need <a href="https://git-scm.com/download/win" target="_blank">Git for Windows</a> and <a href="https://git-lfs.github.com/">Git LFS</a>. - If running on macOS, install Git via Homebrew. Enter the following command on a single line in Terminal: `/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"`. Then run `brew install git` and `brew install git-lfs`. - A <a href="https://developer.android.com/studio/debug/dev-options" target="_blank">developer-enabled</a> and <a href="https://developers.google.com/ar/discover/supported-devices" target="_blank">ARCore-capable</a> Android device.
spatial-anchors Get Started Unity Hololens https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/spatial-anchors/quickstarts/get-started-unity-hololens.md
Previously updated : 11/20/2020 Last updated : 03/18/2021
You'll learn how to:
To complete this quickstart: -- You need a Windows computer with <a href="https://unity3d.com/get-unity/download" target="_blank">Unity 2019.4 (LTS)</a> and <a href="https://www.visualstudio.com/downloads/" target="_blank">Visual Studio 2019</a> or later are installed. Your Visual Studio installation must include the **Universal Windows Platform development** workload and the **Windows 10 SDK (10.0.18362.0 or newer)** component. You must also install <a href="https://git-scm.com/download/win" target="_blank">Git for Windows</a> and <a href="https://git-lfs.github.com/">Git LFS</a>.
+- You need a Windows computer with <a href="https://unity3d.com/get-unity/download" target="_blank">Unity (LTS)</a> and <a href="https://www.visualstudio.com/downloads/" target="_blank">Visual Studio 2019</a> or later installed. Use **Unity 2020 LTS** with ASA SDK version 2.9 or later (which uses the [Unity XR Plug-in Framework](https://docs.unity3d.com/Manual/XRPluginArchitecture.html)) or **Unity 2019 LTS** with ASA SDK version 2.8 or earlier. Your Visual Studio installation must include the **Universal Windows Platform development** workload and the **Windows 10 SDK (10.0.18362.0 or newer)** component. You must also install <a href="https://git-scm.com/download/win" target="_blank">Git for Windows</a> and <a href="https://git-lfs.github.com/">Git LFS</a>.
- You need a HoloLens device on which [developer mode](/windows/mixed-reality/using-visual-studio) is enabled. [Windows 10 May 2020 Update](/windows/mixed-reality/whats-new/release-notes-may-2020) must be installed on the device. To update to the latest release on HoloLens, open the **Settings** app, go to **Update & Security**, and then select **Check for updates**. - On your app, you need to enable the **SpatialPerception** capability. This setting is in **Build Settings** > **Player Settings** > **Publishing Settings** > **Capabilities**. - On your app, you need to enable **Virtual Reality Supported** with **Windows Mixed Reality SDK**. This setting is in **Build Settings** > **Player Settings** > **XR Settings**.
spatial-anchors Get Started Unity Ios https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/spatial-anchors/quickstarts/get-started-unity-ios.md
Previously updated : 11/20/2020 Last updated : 03/18/2021
You'll learn how to:
To complete this quickstart, make sure you have: -- A macOS machine with <a href="https://unity3d.com/get-unity/download" target="_blank">Unity 2019.4 (LTS)</a>, the latest version of <a href="https://geo.itunes.apple.com/us/app/xcode/id497799835?mt=12" target="_blank">Xcode</a> installed.
+- A macOS machine with the latest version of <a href="https://geo.itunes.apple.com/us/app/xcode/id497799835?mt=12" target="_blank">Xcode</a> and <a href="https://unity3d.com/get-unity/download" target="_blank">Unity (LTS)</a> installed. Use **Unity 2020 LTS** with ASA SDK version 2.9 or later (which uses the [Unity XR Plug-in Framework](https://docs.unity3d.com/Manual/XRPluginArchitecture.html)) or **Unity 2019 LTS** with ASA SDK version 2.8 or earlier.
- Git installed via Homebrew. Enter the following command on a single line in Terminal: `/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"`. Then run `brew install git` and `brew install git-lfs`. - A developer-enabled, <a href="https://developer.apple.com/documentation/arkit/verifying_device_support_and_user_permission" target="_blank">ARKit-compatible</a> iOS device.
spatial-anchors Unity Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/spatial-anchors/unity-overview.md
description: Learn how Azure Spatial Anchors can be used within Unity Apps. Revi
- Last updated 2/4/2021
storage Data Lake Storage Acl Javascript https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/data-lake-storage-acl-javascript.md
Title: Use JavaScript to set ACLs in Azure Data Lake Storage Gen2
+ Title: Use JavaScript (Node.js) to set ACLs in Azure Data Lake Storage Gen2
description: Use the Azure Storage Data Lake client library for JavaScript to manage access control lists (ACLs) in storage accounts that have a hierarchical namespace (HNS) enabled. Previously updated : 02/17/2021 Last updated : 03/19/2021
-# Use JavaScript to manage ACLs in Azure Data Lake Storage Gen2
+# Use JavaScript SDK in Node.js to manage ACLs in Azure Data Lake Storage Gen2
-This article shows you how to use JavaScript to get, set, and update the access control lists of directories and files.
+This article shows you how to use Node.js to get, set, and update the access control lists of directories and files.
[Package (Node Package Manager)](https://www.npmjs.com/package/@azure/storage-file-datalake) | [Samples](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/storage/storage-file-datalake/samples) | [Give Feedback](https://github.com/Azure/azure-sdk-for-java/issues)
npm install @azure/storage-file-datalake
Import the `storage-file-datalake` package by placing this statement at the top of your code file.

```javascript
-const AzureStorageDataLake = require("@azure/storage-file-datalake");
+const {
+  AzureStorageDataLake,
+  DataLakeServiceClient,
+  StorageSharedKeyCredential
+} = require("@azure/storage-file-datalake");
```

## Connect to the account
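For example, a minimal sketch of connecting with a shared key, using the imports above, might look like the following. The account name and key are placeholders, and error handling is omitted:

```javascript
// Minimal sketch: connect to a Data Lake Storage Gen2 account with a
// shared key. Placeholders are assumptions; in production, prefer
// Azure AD credentials over account keys.
const accountName = "<storage-account-name>";
const accountKey = "<storage-account-key>";

const sharedKeyCredential = new StorageSharedKeyCredential(accountName, accountKey);
const serviceClient = new DataLakeServiceClient(
  `https://${accountName}.dfs.core.windows.net`,
  sharedKeyCredential
);
```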
storage Storage Blob Storage Tiers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/storage-blob-storage-tiers.md
description: Read about hot, cool, and archive access tiers for Azure Blob Stora
Previously updated : 01/11/2021 Last updated : 03/18/2021
Only hot and cool access tiers can be set as the default account access tier. Ar
Blob-level tiering allows you to upload data to the access tier of your choice using the [Put Blob](/rest/api/storageservices/put-blob) or [Put Block List](/rest/api/storageservices/put-block-list) operations and change the tier of your data at the object level using the [Set Blob Tier](/rest/api/storageservices/set-blob-tier) operation or [lifecycle management](#blob-lifecycle-management) feature. You can upload data to your required access tier then easily change the blob access tier among the hot, cool, or archive tiers as usage patterns change, without having to move data between accounts. All tier change requests happen immediately and tier changes between hot and cool are instantaneous. Rehydrating a blob from the archive