Updates from: 03/23/2021 04:09:23
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Add Password Change Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/add-password-change-policy.md
Previously updated : 12/17/2020 Last updated : 03/22/2021 zone_pivot_groups: b2c-policy-type
zone_pivot_groups: b2c-policy-type
[!INCLUDE [active-directory-b2c-choose-user-flow-or-custom-policy](../../includes/active-directory-b2c-choose-user-flow-or-custom-policy.md)]
+In Azure Active Directory B2C (Azure AD B2C), you can enable users who are signed in with a local account to change their password without having to prove their identity through email verification. The password change flow involves the following steps:
+1. The user signs in to their local account. If the session is still active, Azure AD B2C authorizes the user and skips to the next step.
+1. The user verifies the **Old password**, and then creates and confirms the **New password**.
+![Password change flow](./media/add-password-change-policy/password-change-flow.png)
+> [!TIP]
+> The password change flow applies only when a user knows their password and wants to change it. We recommend that you also enable [self-service password reset](add-password-reset-policy.md) to support cases where the user forgets their password.
-In Azure Active Directory B2C (Azure AD B2C), you can enable users who are signed in with a local account to change their password without having to prove their authenticity by email verification. The password change flow involves following steps:
-1. Sign-in with a local account. If the session is still active, Azure AD B2C authorizes the user, and skips to the next step.
-1. Users must verify the **old password**, create, and confirm the **new password**.
-![Password change flow](./media/add-password-change-policy/password-change-flow.png)
## Prerequisites
active-directory-b2c Add Password Reset Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/add-password-reset-policy.md
Previously updated : 03/08/2021 Last updated : 03/22/2021
The [sign-up and sign-in journey](add-sign-up-and-sign-in-policy.md) allows user
The password reset flow applies to local accounts in Azure AD B2C that use an [email address](identity-provider-local.md#email-sign-in) or [username](identity-provider-local.md#username-sign-in) with a password for sign-in.
+> [!TIP]
+> The self-service password reset flow lets users reset their password when they've forgotten it. Consider also configuring a [password change flow](add-password-change-policy.md) to support cases where a user knows their password and wants to change it.
+ A common practice after migrating users to Azure AD B2C with random passwords is to have users verify their email addresses and reset their passwords during their first sign-in. It's also common to force users to reset their password after an administrator changes it; see [force password reset](force-password-reset.md) to enable this feature. ## Prerequisites
active-directory-b2c Identity Provider Apple Id https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-apple-id.md
Previously updated : 03/15/2021 Last updated : 03/22/2021
You can define an Apple ID as a claims provider by adding it to the **ClaimsProv
<OutputClaim ClaimTypeReferenceId="issuerUserId" PartnerClaimType="sub" /> <OutputClaim ClaimTypeReferenceId="identityProvider" DefaultValue="https://appleid.apple.com" AlwaysUseDefaultValue="true" /> <OutputClaim ClaimTypeReferenceId="authenticationSource" DefaultValue="socialIdpAuthentication" AlwaysUseDefaultValue="true" />
- <OutputClaim ClaimTypeReferenceId="givenName" PartnerClaimType="user.firstName"/>
- <OutputClaim ClaimTypeReferenceId="surname" PartnerClaimType="user.lastName"/>
+ <OutputClaim ClaimTypeReferenceId="givenName" PartnerClaimType="user.name.firstName"/>
+ <OutputClaim ClaimTypeReferenceId="surname" PartnerClaimType="user.name.lastName"/>
<OutputClaim ClaimTypeReferenceId="email" PartnerClaimType="user.email"/> </OutputClaims> <OutputClaimsTransformations>
active-directory-b2c Identity Provider Generic Saml Options https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-generic-saml-options.md
Previously updated : 03/15/2021 Last updated : 03/22/2021
The following is an example of an Azure AD metadata single sign-on service with
</IDPSSODescriptor> ```
-SAML responses are transmitted to Azure AD B2C via HTTP POST binding. Azure AD B2C policy metadata sets the `AssertionConsumerService` binding to `urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST`.
+### Assertion consumer service
-The following is an example of an Azure AD B2C policy metadata assertion consumer service element.
+The assertion consumer service (ACS) is the endpoint where Azure AD B2C receives SAML responses from the identity provider. SAML responses are transmitted to Azure AD B2C via HTTP POST binding. The ACS location points to your relying party's base policy. For example, if the relying party policy is *B2C_1A_signup_signin*, the ACS is the base policy of *B2C_1A_signup_signin*, such as *B2C_1A_TrustFrameworkBase*.
+
+The following is an example of an Azure AD B2C policy metadata assertion consumer service element.
```xml <SPSSODescriptor AuthnRequestsSigned="true" protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
active-directory-b2c Javascript And Page Layout https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/javascript-and-page-layout.md
Previously updated : 12/10/2020 Last updated : 03/22/2021
Follow these guidelines when you customize the interface of your application usi
- Don't use JavaScript directly to call Azure AD B2C endpoints. - You can embed your JavaScript or you can link to external JavaScript files. When using an external JavaScript file, make sure to use the absolute URL and not a relative URL. - JavaScript frameworks:
- - Azure AD B2C uses a specific version of jQuery. Don't include another version of jQuery. Using more than one version on the same page causes issues.
+ - Azure AD B2C uses a [specific version of jQuery](page-layout.md#jquery-version). Don't include another version of jQuery. Using more than one version on the same page causes issues.
- Using RequireJS isn't supported. - Most JavaScript frameworks are not supported by Azure AD B2C. - Azure AD B2C settings can be read by calling the `window.SETTINGS` and `window.CONTENT` objects, for example, the current UI language. Don't change the value of these objects.
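
Where custom JavaScript is allowed, a minimal read-only sketch of inspecting these objects follows; the `locale` property name is an assumption, so inspect `window.SETTINGS` in your own page layout to confirm which properties are available:

```javascript
// Read-only: never assign to window.SETTINGS or window.CONTENT.
if (window.SETTINGS) {
    var uiLanguage = window.SETTINGS.locale; // illustrative property: current UI language
    console.log("Current UI language: " + uiLanguage);
}
```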
active-directory-b2c Microsoft Graph Operations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/microsoft-graph-operations.md
For more information about accessing Azure AD B2C audit logs, see [Accessing Azu
## Conditional Access -- [List all of the Conditional Access policies](/graph/api/resources/conditionalaccessroot-list-policies)
+- [List all of the Conditional Access policies](/graph/api/conditionalaccessroot-list-policies?view=graph-rest-beta&tabs=http)
- [Read properties and relationships of a Conditional Access policy](/graph/api/conditionalaccesspolicy-get) - [Create a new Conditional Access policy](/graph/api/resources/application) - [Update a Conditional Access policy](/graph/api/conditionalaccesspolicy-update)
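
As a hedged illustration of calling one of these endpoints from code, the sketch below lists Conditional Access policies with an HTTP GET against the Microsoft Graph beta endpoint that the links above reference; it assumes you already hold an access token with the `Policy.Read.All` permission:

```javascript
const axios = require('axios');

// Hedged sketch: list Conditional Access policies via Microsoft Graph (beta).
// Assumes accessToken was acquired with the Policy.Read.All permission.
async function listConditionalAccessPolicies(accessToken) {
    const response = await axios.get(
        'https://graph.microsoft.com/beta/identity/conditionalAccess/policies',
        { headers: { Authorization: `Bearer ${accessToken}` } }
    );
    return response.data.value; // array of conditionalAccessPolicy objects
}
```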
active-directory-b2c Page Layout https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/page-layout.md
Previously updated : 03/09/2021 Last updated : 03/22/2021
Page layout packages are periodically updated to include fixes and improvements in their page elements. The following change log specifies the changes introduced in each version.
+## jQuery version
+
+Azure AD B2C page layout uses the following version of the [jQuery library](https://jquery.com/):
+
+|From page layout version |jQuery version |
+|--|--|
+|2.1.4 | 3.5.1 |
+|1.2.0 | 3.4.1 |
+|1.1.0 | 1.10.2 |
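+
+To confirm which jQuery version a page layout actually loaded, you can log it from your custom JavaScript; `jQuery.fn.jquery` is the standard jQuery version property:
+
+```javascript
+// Logs the jQuery version bundled with the page layout, for example "3.5.1".
+if (window.jQuery) {
+    console.log("jQuery version: " + jQuery.fn.jquery);
+}
+```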
+ ## Self-asserted page (selfasserted) **2.1.2**
active-directory-b2c Tutorial Create User Flows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/tutorial-create-user-flows.md
Previously updated : 12/16/2020 Last updated : 03/22/2021
In this article, you learn how to:
> [!div class="checklist"] > * Create a sign-up and sign-in user flow
+> * Enable self-service password reset
> * Create a profile editing user flow
-> * Create a password reset user flow
+ This tutorial shows you how to create some recommended user flows by using the Azure portal. If you're looking for information about how to set up a resource owner password credentials (ROPC) flow in your application, see [Configure the resource owner password credentials flow in Azure AD B2C](add-ropc-policy.md).
The sign-up and sign-in user flow handles both sign-up and sign-in experiences w
> [!NOTE] > The "Run user flow" experience is not currently compatible with the SPA reply URL type using authorization code flow. To use the "Run user flow" experience with these kinds of apps, register a reply URL of type "Web" and enable the implicit flow as described [here](tutorial-register-spa.md).
+## Enable self-service password reset
+
+To enable [self-service password reset](add-password-reset-policy.md) for the sign-up or sign-in user flow:
+
+1. Select the sign-up or sign-in user flow you created.
+1. Under **Settings** in the left menu, select **Properties**.
+1. Under **Password complexity**, select **Self-service password reset**.
+1. Select **Save**.
+
+### Test the user flow
+
+1. Select the user flow you created to open its overview page, then select **Run user flow**.
+1. For **Application**, select the web application named *webapp1* that you previously registered. The **Reply URL** should show `https://jwt.ms`.
+1. Select **Run user flow**.
+1. From the sign-up or sign-in page, select **Forgot your password?**.
+1. Verify the email address of the account that you previously created, and then select **Continue**.
+1. You now have the opportunity to change the password for the user. Change the password and select **Continue**. The token is returned to `https://jwt.ms` and should be displayed to you.
+ ## Create a profile editing user flow If you want to enable users to edit their profile in your application, you use a profile editing user flow.
If you want to enable users to edit their profile in your application, you use a
1. Click **Run user flow**, and then sign in with the account that you previously created. 1. You now have the opportunity to change the display name and job title for the user. Click **Continue**. The token is returned to `https://jwt.ms` and should be displayed to you.
-## Create a password reset user flow
-
-To enable users of your application to reset their password, you use a password reset user flow.
-
-1. In the Azure AD B2C tenant overview menu, select **User flows**, and then select **New user flow**.
-1. On the **Create a user flow** page, select the **Password reset** user flow.
-1. Under **Select a version**, select **Recommended**, and then select **Create**.
-1. Enter a **Name** for the user flow. For example, *passwordreset1*.
-1. For **Identity providers**, enable **Reset password using email address**.
-2. Under Application claims, click **Show more** and choose the claims that you want returned in the authorization tokens sent back to your application. For example, select **User's Object ID**.
-3. Click **OK**.
-4. Click **Create** to add the user flow. A prefix of *B2C_1* is automatically appended to the name.
-
-### Test the user flow
-
-1. Select the user flow you created to open its overview page, then select **Run user flow**.
-1. For **Application**, select the web application named *webapp1* that you previously registered. The **Reply URL** should show `https://jwt.ms`.
-1. Click **Run user flow**, verify the email address of the account that you previously created, and select **Continue**.
-1. You now have the opportunity to change the password for the user. Change the password and select **Continue**. The token is returned to `https://jwt.ms` and should be displayed to you.
- ## Next steps In this article, you learned how to:
active-directory-domain-services Tutorial Create Replica Set https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-domain-services/tutorial-create-replica-set.md
Previously updated : 02/26/2021 Last updated : 03/22/2021 #Customer intent: As an identity administrator, I want to create and use replica sets in Azure Active Directory Domain Services to provide resiliency or geographically distributed managed domain data.
active-directory Tutorial Pilot Aadc Aadccp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/cloud-sync/tutorial-pilot-aadc-aadccp.md
Previously updated : 05/19/2020 Last updated : 03/22/2021
In case the pilot does not work as expected, you can go back to the Azure AD Con
1. Disable provisioning configuration in the Azure portal. 2. Disable all the custom sync rules created for Cloud Provisioning using the Sync Rule Editor tool. Disabling should cause full sync on all the connectors.
-## Configure Azure AD Connect sync to exclude the pilot OU
-Once you have verified that users from the pilot OU are successfully managed by cloud sync, you can re-configure Azure AD Connect to exclude the pilot OU that was created above. The cloud provisioning agent will handle synchronization for these users going forward. Use the following steps to scope Azure AD Connect.
-
- 1. On the server that is running Azure AD Connect, double-click on the Azure AD Connect icon.
- 2. Click **Configure**
- 3. Select **Customize synchronization options** and click next.
- 4. Sign-in to Azure AD and click **Next**.
- 5. On the **Connect your directories** screen click **Next**.
- 6. On the **Domain and OU filtering** screen, select **Sync selected domains and OUs**.
- 7. Expand your domain and **de-select** the **CPUsers** OU. Click **Next**.
-![scope](media/tutorial-existing-forest/scope-1.png)</br>
- 9. On the **Optional features** screen, click **Next**.
- 10. On the **Ready to configure** screen click **Configure**.
- 11. Once that has completed, click **Exit**.
+ ## Next steps
active-directory Authentication National Cloud https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/authentication-national-cloud.md
The following table lists the base URLs for the Azure AD endpoints used to acqui
|-|-| | Azure AD for US Government | `https://login.microsoftonline.us` | | Azure AD Germany| `https://login.microsoftonline.de` |
-| Azure AD China operated by 21Vianet | `https://login.chinacloudapi.cn` |
+| Azure AD China operated by 21Vianet | `https://login.partner.microsoftonline.cn/common` |
| Azure AD (global service)| `https://login.microsoftonline.com` | You can form requests to the Azure AD authorization or token endpoints by using the appropriate region-specific base URL. For example, for Azure Germany:
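
As an illustrative sketch (the `common` tenant value is an assumption; substitute your tenant ID or verified domain), an authorization request URL combines the region-specific base URL from the table with a tenant and the OAuth 2.0 path:

```javascript
// Illustrative only: compose a region-specific authorize endpoint from the base URLs above.
const baseUrl = 'https://login.microsoftonline.de'; // Azure AD Germany
const tenant = 'common'; // assumption: replace with your tenant ID or verified domain
const authorizeEndpoint = `${baseUrl}/${tenant}/oauth2/v2.0/authorize`;
console.log(authorizeEndpoint); // https://login.microsoftonline.de/common/oauth2/v2.0/authorize
```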
active-directory Msal Net Initializing Client Applications https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-net-initializing-client-applications.md
# Initialize client applications using MSAL.NET
-This article describes initializing public client and confidential client applications using the Microsoft Authentication Library for .NET (MSAL.NET). To learn more about the client application types and application configuration options, read the [overview](msal-client-applications.md).
+This article describes initializing public client and confidential client applications using the Microsoft Authentication Library for .NET (MSAL.NET). To learn more about the client application types, see [Public client and confidential client applications](msal-client-applications.md).
With MSAL.NET 3.x, the recommended way to instantiate an application is by using the application builders: `PublicClientApplicationBuilder` and `ConfidentialClientApplicationBuilder`. They offer a powerful mechanism to configure the application either from the code, or from a configuration file, or even by mixing both approaches.
+[API reference documentation](/dotnet/api/microsoft.identity.client) | [Package on NuGet](https://www.nuget.org/packages/Microsoft.Identity.Client/) | [Library source code](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet) | [Code samples](sample-v2-code.md)
+ ## Prerequisites Before initializing an application, you first need to [register it](quickstart-register-app.md) so that your app can be integrated with the Microsoft identity platform. After registration, you may need the following information (which can be found in the Azure portal):
active-directory Scenario Desktop Acquire Token https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/scenario-desktop-acquire-token.md
return result;
```
-# [Python](#tab/python)
-
-```Python
-result = None
-
-# Firstly, check the cache to see if this end user has signed in before
-accounts = app.get_accounts(username=config["username"])
-if accounts:
- result = app.acquire_token_silent(config["scope"], account=accounts[0])
-
-if not result:
- result = app.acquire_token_by_xxx(scopes=config["scope"])
-```
- # [macOS](#tab/macOS) ### In MSAL for iOS and macOS
application.acquireTokenSilent(with: silentParameters) { (result, error) in
} } ```
+# [Node.js](#tab/nodejs)
+
+In MSAL Node, you acquire tokens via the authorization code flow with Proof Key for Code Exchange (PKCE). MSAL Node uses an in-memory token cache to check for existing user accounts. If an account is found, it can be passed to the `acquireTokenSilent()` method to retrieve a cached access token.
+
+```JavaScript
+
+const msal = require("@azure/msal-node");
+
+const msalConfig = {
+ auth: {
+ clientId: "your_client_id_here",
+ authority: "your_authority_here",
+ }
+};
+
+const pca = new msal.PublicClientApplication(msalConfig);
+const msalTokenCache = pca.getTokenCache();
+
+async function main() {
+    const accounts = await msalTokenCache.getAllAccounts();
+
+    if (accounts.length > 0) {
+        const silentRequest = {
+            account: accounts[0], // Index must match the account that is trying to acquire token silently
+            scopes: ["user.read"],
+        };
+
+        pca.acquireTokenSilent(silentRequest).then((response) => {
+            console.log("\nSuccessful silent token acquisition");
+            console.log("\nResponse: \n:", response);
+        }).catch((error) => console.log(error));
+    } else {
+        const cryptoProvider = new msal.CryptoProvider();
+        const { verifier, challenge } = await cryptoProvider.generatePkceCodes();
+
+        const authCodeUrlParameters = {
+            scopes: ["User.Read"],
+            redirectUri: "your_redirect_uri",
+            codeChallenge: challenge, // PKCE Code Challenge
+            codeChallengeMethod: "S256" // PKCE Code Challenge Method
+        };
+
+        // get url to sign user in and consent to scopes needed for application
+        pca.getAuthCodeUrl(authCodeUrlParameters).then((response) => {
+            console.log(response);
+
+            const tokenRequest = {
+                // the authorization code is returned to your redirectUri after the user signs in
+                code: response["authorization_code"],
+                codeVerifier: verifier, // PKCE Code Verifier
+                redirectUri: "your_redirect_uri",
+                scopes: ["User.Read"],
+            };
+
+            // acquire a token by exchanging the code
+            pca.acquireTokenByCode(tokenRequest).then((response) => {
+                console.log("\nResponse: \n:", response);
+            }).catch((error) => {
+                console.log(error);
+            });
+        }).catch((error) => console.log(JSON.stringify(error)));
+    }
+}
+
+main();
+```
+
+# [Python](#tab/python)
+
+```Python
+result = None
+
+# Firstly, check the cache to see if this end user has signed in before
+accounts = app.get_accounts(username=config["username"])
+if accounts:
+ result = app.acquire_token_silent(config["scope"], account=accounts[0])
+
+if not result:
+ result = app.acquire_token_by_xxx(scopes=config["scope"])
+```
Here are the various ways to acquire tokens in a desktop application.
The following example shows minimal code to get a token interactively for reading the user's profile with Microsoft Graph. # [.NET](#tab/dotnet) ### In MSAL.NET ```csharp
private static IAuthenticationResult acquireTokenInteractive() throws Exception
} ```
-# [Python](#tab/python)
-
-MSAL Python doesn't provide an interactive acquire token method directly. Instead, it requires the application to send an authorization request in its implementation of the user interaction flow to obtain an authorization code. This code can then be passed to the `acquire_token_by_authorization_code` method to get the token.
-
-```Python
-result = None
-
-# Firstly, check the cache to see if this end user has signed in before
-accounts = app.get_accounts(username=config["username"])
-if accounts:
- result = app.acquire_token_silent(config["scope"], account=accounts[0])
-
-if not result:
- result = app.acquire_token_by_authorization_code(
- request.args['code'],
- scopes=config["scope"])
-
-```
- # [macOS](#tab/macOS) ### In MSAL for iOS and macOS
application.acquireToken(with: interactiveParameters, completionBlock: { (result
let accessToken = authResult.accessToken }) ```
+# [Node.js](#tab/nodejs)
+
+In MSAL Node, you acquire tokens via the authorization code flow with Proof Key for Code Exchange (PKCE). The process has two steps: first, the application obtains a URL that can be used to generate an authorization code. The user opens this URL in a browser of their choice, signs in, and is redirected back to the `redirectUri` (registered during app registration) with an authorization code. Second, the application passes the received authorization code to the `acquireTokenByCode()` method, which exchanges it for an access token.
+
+```JavaScript
+const msal = require("@azure/msal-node");
+
+const msalConfig = {
+ auth: {
+ clientId: "your_client_id_here",
+ authority: "your_authority_here",
+ }
+};
+
+const pca = new msal.PublicClientApplication(msalConfig);
+
+const cryptoProvider = new msal.CryptoProvider();
+const { verifier, challenge } = await cryptoProvider.generatePkceCodes(); // requires an async context
+
+const authCodeUrlParameters = {
+ scopes: ["User.Read"],
+ redirectUri: "your_redirect_uri",
+ codeChallenge: challenge, // PKCE Code Challenge
+ codeChallengeMethod: "S256" // PKCE Code Challenge Method
+};
+
+// get url to sign user in and consent to scopes needed for application
+pca.getAuthCodeUrl(authCodeUrlParameters).then((response) => {
+ console.log(response);
+
+ const tokenRequest = {
+ code: response["authorization_code"],
+    codeVerifier: verifier, // PKCE Code Verifier
+ redirectUri: "your_redirect_uri",
+ scopes: ["User.Read"],
+ };
+
+ // acquire a token by exchanging the code
+ pca.acquireTokenByCode(tokenRequest).then((response) => {
+ console.log("\nResponse: \n:", response);
+ }).catch((error) => {
+ console.log(error);
+ });
+}).catch((error) => console.log(JSON.stringify(error)));
+```
+
+# [Python](#tab/python)
+
+MSAL Python doesn't provide an interactive acquire token method directly. Instead, it requires the application to send an authorization request in its implementation of the user interaction flow to obtain an authorization code. This code can then be passed to the `acquire_token_by_authorization_code` method to get the token.
+
+```Python
+result = None
+
+# Firstly, check the cache to see if this end user has signed in before
+accounts = app.get_accounts(username=config["username"])
+if accounts:
+ result = app.acquire_token_silent(config["scope"], account=accounts[0])
+
+if not result:
+ result = app.acquire_token_by_authorization_code(
+ request.args['code'],
+ scopes=config["scope"])
+
+```
## Integrated Windows Authentication
private static IAuthenticationResult acquireTokenIwa() throws Exception {
} ```
-# [Python](#tab/python)
-
-This flow isn't yet supported in MSAL Python.
- # [macOS](#tab/macOS) This flow doesn't apply to macOS.
+# [Node.js](#tab/nodejs)
+
+This flow isn't yet supported in MSAL Node.
+
+# [Python](#tab/python)
+
+This flow isn't yet supported in MSAL Python.
+ ## Username and password
private static IAuthenticationResult acquireTokenUsernamePassword() throws Excep
} ```
+# [macOS](#tab/macOS)
+
+This flow isn't supported on MSAL for macOS.
+
+# [Node.js](#tab/nodejs)
+
+This extract is from the [MSAL Node dev samples](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/samples/msal-node-samples/standalone-samples/username-password). In the code snippet below, the username and password are hardcoded for illustration purposes only. Avoid this in production; instead, prompt the user for their username and password through a basic UI.
+
+```JavaScript
+const msal = require("@azure/msal-node");
+
+const msalConfig = {
+ auth: {
+ clientId: "your_client_id_here",
+ authority: "your_authority_here",
+ }
+};
+
+const pca = new msal.PublicClientApplication(msalConfig);
+
+// For testing, enter your username and password below.
+// In production, replace this with a UI prompt instead.
+const usernamePasswordRequest = {
+ scopes: ["user.read"],
+ username: "", // Add your username here
+ password: "", // Add your password here
+};
+
+pca.acquireTokenByUsernamePassword(usernamePasswordRequest).then((response) => {
+ console.log("acquired token by password grant");
+}).catch((error) => {
+ console.log(error);
+});
+```
+ # [Python](#tab/python) This extract is from the [MSAL Python dev samples](https://github.com/AzureAD/microsoft-authentication-library-for-python/blob/dev/sample/).
if not result:
config["username"], config["password"], scopes=config["scope"]) ```
-# [macOS](#tab/macOS)
-
-This flow isn't supported on MSAL for macOS.
- ## Command-line tool without a web browser
private static IAuthenticationResult acquireTokenDeviceCode() throws Exception {
} ```
+# [macOS](#tab/macOS)
+
+This flow doesn't apply to macOS.
+
+# [Node.js](#tab/nodejs)
+
+This extract is from the [MSAL Node dev samples](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/samples/msal-node-samples/standalone-samples/device-code).
+
+```JavaScript
+const msal = require('@azure/msal-node');
+
+const msalConfig = {
+ auth: {
+ clientId: "your_client_id_here",
+ authority: "your_authority_here",
+ }
+};
+
+const pca = new msal.PublicClientApplication(msalConfig);
+
+const deviceCodeRequest = {
+ deviceCodeCallback: (response) => (console.log(response.message)),
+ scopes: ["user.read"],
+ timeout: 20,
+};
+
+pca.acquireTokenByDeviceCode(deviceCodeRequest).then((response) => {
+ console.log(JSON.stringify(response));
+}).catch((error) => {
+ console.log(JSON.stringify(error));
+});
+```
+ # [Python](#tab/python) This extract is from the [MSAL Python dev samples](https://github.com/AzureAD/microsoft-authentication-library-for-python/blob/dev/sample/).
if not result:
# and then keep calling acquire_token_by_device_flow(flow) in your own customized loop ```
-# [macOS](#tab/macOS)
-
-This flow doesn't apply to macOS.
- ## File-based token cache
namespace CommonCacheMsalV3
## Next steps Move on to the next article in this scenario,
-[Call a web API from the desktop app](scenario-desktop-call-api.md).
+[Call a web API from the desktop app](scenario-desktop-call-api.md).
active-directory Scenario Desktop App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/scenario-desktop-app-configuration.md
PublicClientApplication pca = PublicClientApplication.builder(CLIENT_ID)
.build(); ```
-# [Python](#tab/python)
-
-```Python
-config = json.load(open(sys.argv[1]))
-
-app = msal.PublicClientApplication(
- config["client_id"], authority=config["authority"],
- # token_cache=... # Default cache is in memory only.
- # You can learn how to use SerializableTokenCache from
- # https://msal-python.rtfd.io/en/latest/#msal.SerializableTokenCache
- )
-```
- # [MacOS](#tab/macOS) The following code instantiates a public client application and signs in users in the Microsoft Azure public cloud with a work or school account or a personal Microsoft account.
let authority = try? MSALAADAuthority(cloudInstance: .usGovernmentCloudInstance,
let config = MSALPublicClientApplicationConfig(clientId: "<your-client-id-here>", redirectUri: "<your-redirect-uri-here>", authority: authority) if let application = try? MSALPublicClientApplication(configuration: config) { /* Use application */} ```
+# [Node.js](#tab/nodejs)
+
+Configuration parameters can be loaded from many sources, such as a JSON file or environment variables. The example below uses an *.env* file.
+
+```Text
+# Credentials
+CLIENT_ID=Enter_the_Application_Id_Here
+TENANT_ID=Enter_the_Tenant_Info_Here
+
+# Configuration
+REDIRECT_URI=msal://redirect
+
+# Endpoints
+AAD_ENDPOINT_HOST=Enter_the_Cloud_Instance_Id_Here
+GRAPH_ENDPOINT_HOST=Enter_the_Graph_Endpoint_Here
+
+# RESOURCES
+GRAPH_ME_ENDPOINT=v1.0/me
+GRAPH_MAIL_ENDPOINT=v1.0/me/messages
+
+# SCOPES
+GRAPH_SCOPES=User.Read Mail.Read
+```
+
+Load the *.env* file into environment variables. MSAL Node can be initialized minimally as shown below. See the available [configuration options](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-node/docs/configuration.md).
+
+```JavaScript
+const { PublicClientApplication, LogLevel } = require('@azure/msal-node');
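+
+// Load variables from the .env file into process.env
+// (assumes the dotenv package is installed: npm install dotenv)
+require('dotenv').config();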
+
+const MSAL_CONFIG = {
+ auth: {
+ clientId: process.env.CLIENT_ID,
+ authority: `${process.env.AAD_ENDPOINT_HOST}${process.env.TENANT_ID}`,
+ redirectUri: process.env.REDIRECT_URI,
+ },
+ system: {
+ loggerOptions: {
+ loggerCallback(loglevel, message, containsPii) {
+ console.log(message);
+ },
+ piiLoggingEnabled: false,
+ logLevel: LogLevel.Verbose,
+ }
+ }
+};
+
+const clientApplication = new PublicClientApplication(MSAL_CONFIG);
+```
+
+# [Python](#tab/python)
+
+```Python
+config = json.load(open(sys.argv[1]))
+
+app = msal.PublicClientApplication(
+ config["client_id"], authority=config["authority"],
+ # token_cache=... # Default cache is in memory only.
+ # You can learn how to use SerializableTokenCache from
+ # https://msal-python.rtfd.io/en/latest/#msal.SerializableTokenCache
+ )
+```
+ ## Next steps
active-directory Scenario Desktop App Registration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/scenario-desktop-app-registration.md
Specify the redirect URI for your app by [configuring the platform settings](qui
> As a security best practice, we recommend explicitly setting `https://login.microsoftonline.com/common/oauth2/nativeclient` or `http://localhost` as the redirect URI. Some authentication libraries like MSAL.NET use a default value of `urn:ietf:wg:oauth:2.0:oob` when no other redirect URI is specified, which is not recommended. This default will be updated as a breaking change in the next major release. - If you build a native Objective-C or Swift app for macOS, register the redirect URI based on your application's bundle identifier in the following format: `msauth.<your.app.bundle.id>://auth`. Replace `<your.app.bundle.id>` with your application's bundle identifier.
+- If you build a Node.js Electron app, use a custom file protocol (for instance, `msal://redirect`) instead of a regular web (https://) redirect URI to handle the redirection step of the authorization flow. The custom file protocol name shouldn't be easy to guess and should follow the suggestions in the [OAuth 2.0 specification for native apps](https://tools.ietf.org/html/rfc8252#section-7.1).
- If your app uses only Integrated Windows Authentication or a username and a password, you don't need to register a redirect URI for your application. These flows do a round trip to the Microsoft identity platform v2.0 endpoint. Your application won't be called back on any specific URI. - To distinguish [device code flow](scenario-desktop-acquire-token.md#device-code-flow), [Integrated Windows Authentication](scenario-desktop-acquire-token.md#integrated-windows-authentication), and a [username and a password](scenario-desktop-acquire-token.md#username-and-password) from a confidential client application using a client credential flow used in [daemon applications](scenario-daemon-overview.md), none of which requires a redirect URI, configure it as a public client application. To achieve this configuration:
active-directory Scenario Desktop Call Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/scenario-desktop-call-api.md
Now that you have a token, you can call a protected web API.
[!INCLUDE [Call web API in .NET](../../../includes/active-directory-develop-scenarios-call-apis-dotnet.md)]
-<!--
-More includes will come later for Python and Java
>
-# [Python](#tab/python)
-
-```Python
-endpoint = "url to the API"
-http_headers = {'Authorization': 'Bearer ' + result['access_token'],
- 'Accept': 'application/json',
- 'Content-Type': 'application/json'}
-data = requests.get(endpoint, headers=http_headers, stream=False).json()
-```
- # [Java](#tab/java) ```Java
catch(MsalUiRequiredException ex)
.ExecuteAsync(); } ```
+# [Node.js](#tab/nodejs)
+
+Using an HTTP client like [Axios](https://www.npmjs.com/package/axios), call the API endpoint URI, passing the access token as an *authorization bearer* header.
+
+```javascript
+const axios = require('axios');
+
+async function callEndpointWithToken(endpoint, accessToken) {
+ const options = {
+ headers: {
+ Authorization: `Bearer ${accessToken}`
+ }
+ };
+
+ console.log('Request made at: ' + new Date().toString());
+
+ const response = await axios.default.get(endpoint, options);
+
+ return response.data;
+}
+
+```
+
+<!--
+More includes will come later for Python and Java
+-->
+# [Python](#tab/python)
+
+```Python
+endpoint = "url to the API"
+http_headers = {'Authorization': 'Bearer ' + result['access_token'],
+ 'Accept': 'application/json',
+ 'Content-Type': 'application/json'}
+data = requests.get(endpoint, headers=http_headers, stream=False).json()
+```
+ ## Next steps
active-directory Scenario Desktop Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/scenario-desktop-overview.md
If you haven't already, create your first application by completing a quickstart
- [Quickstart: Acquire a token and call Microsoft Graph API from a Windows desktop app](./quickstart-v2-windows-desktop.md) - [Quickstart: Acquire a token and call Microsoft Graph API from a UWP app](./quickstart-v2-uwp.md) - [Quickstart: Acquire a token and call Microsoft Graph API from a macOS native app](./quickstart-v2-ios.md)
+- [Quickstart: Acquire a token and call Microsoft Graph API from a Node.js & Electron app](./quickstart-v2-nodejs-desktop.md)
## Overview
You write a desktop application, and you want to sign in users to your applicati
- If your desktop application supports graphical controls, for instance, if it's a Windows.Form application, a WPF application, or a macOS native application. - Or, if it's a .NET Core application and you agree to have the authentication interaction with Azure Active Directory (Azure AD) happen in the system browser.
+ - Or, if it's a Node.js Electron application, which runs on a Chromium instance.
- For Windows hosted applications, it's also possible for applications running on computers joined to a Windows domain or Azure AD joined to acquire a token silently by using Integrated Windows Authentication. - Finally, and although it's not recommended, you can use a username and a password in public client applications. It's still needed in some scenarios like DevOps. Using it imposes constraints on your application. For instance, it can't sign in a user who needs to perform [multi-factor authentication](../authentication/concept-mfa-howitworks.md) (conditional access). Also, your application won't benefit from single sign-on (SSO).
active-directory Scenario Web App Sign User App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/scenario-web-app-sign-user-app-configuration.md
In ASP.NET Core, these settings are located in the [appsettings.json](https://gi
// - "https://login.microsoftonline.com/" for Azure public cloud // - "https://login.microsoftonline.us/" for Azure US government // - "https://login.microsoftonline.de/" for Azure AD Germany
- // - "https://login.chinacloudapi.cn/" for Azure AD China operated by 21Vianet
+ // - "https://login.partner.microsoftonline.cn/common" for Azure AD China operated by 21Vianet
"Instance": "https://login.microsoftonline.com/", // Azure AD audience among:
active-directory Single Sign Out Saml Protocol https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/single-sign-out-saml-protocol.md
description: This article describes the Single Sign-Out SAML Protocol in Azure A
- Previously updated : 07/19/2017 Last updated : 03/22/2021
# Single Sign-Out SAML Protocol
-Azure Active Directory (Azure AD) supports the SAML 2.0 web browser single sign-out profile. For single sign-out to work correctly, the **LogoutURL** for the application must be explicitly registered with Azure AD during application registration. Azure AD uses the LogoutURL to redirect users after they're signed out.
+Azure Active Directory (Azure AD) supports the SAML 2.0 web browser single sign-out profile. For single sign-out to work correctly, the **LogoutURL** for the application must be explicitly registered with Azure AD during application registration. If the app is [added to the Azure App Gallery](v2-howto-app-gallery-listing.md), this value can be set by default. Otherwise, the value must be determined and set by the person adding the app to their Azure AD tenant. Azure AD uses the LogoutURL to redirect users after they're signed out.
Azure AD supports redirect binding (HTTP GET), and not HTTP POST binding.
active-directory Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/faq.md
Create a different local account before you use Azure Active Directory join to f
**A:** When your users add their accounts to apps on a domain-joined device, they might be prompted with **Add account to Windows?** If they enter **Yes** on the prompt, the device registers with Azure AD. The trust type is marked as Azure AD registered. After you enable hybrid Azure AD join in your organization, the device also gets hybrid Azure AD joined. Then two device states show up for the same device.
-Hybrid Azure AD join takes precedence over the Azure AD registered state. So your device is considered hybrid Azure AD joined for any authentication and Conditional Access evaluation. You can safely delete the Azure AD registered device record from the Azure AD portal. Learn to [avoid or clean up this dual state on the Windows 10 machine](hybrid-azuread-join-plan.md#review-things-you-should-know).
+In most cases, hybrid Azure AD join takes precedence over the Azure AD registered state, so your device is considered hybrid Azure AD joined for any authentication and Conditional Access evaluation. Sometimes, however, this dual state can result in a non-deterministic evaluation of the device and cause access issues. We strongly recommend upgrading to Windows 10 version 1803 or later, where the Azure AD registered state is automatically cleaned up. Learn how to [avoid or clean up this dual state on the Windows 10 machine](hybrid-azuread-join-plan.md#review-things-you-should-know).
active-directory Howto Device Identity Virtual Desktop Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/howto-device-identity-virtual-desktop-infrastructure.md
Administrators should reference the following articles, based on their identity
- [Configure hybrid Azure Active Directory join for federated environment](hybrid-azuread-join-federated-domains.md) - [Configure hybrid Azure Active Directory join for managed environment](hybrid-azuread-join-managed-domains.md)
+### Non-persistent VDI
+ When deploying non-persistent VDI, Microsoft recommends that IT administrators implement the guidance below. Failure to do so will leave your directory with many stale hybrid Azure AD joined devices that were registered from your non-persistent VDI platform, increasing pressure on your tenant quota and risking service interruption when the tenant quota runs out. - If you are relying on the System Preparation Tool (sysprep.exe) and if you are using a pre-Windows 10 1809 image for installation, make sure that image is not from a device that is already registered with Azure AD as hybrid Azure AD joined.
When deploying non-persistent VDI, Microsoft recommends that IT administrators i
- Define and implement a process for [managing stale devices](manage-stale-devices.md). - Once you have a strategy to identify your non-persistent hybrid Azure AD joined devices (for example, using a computer display name prefix), you should be more aggressive on the clean-up of these devices to ensure your directory does not get consumed with stale devices. - For non-persistent VDI deployments on Windows current and down-level, you should delete devices that have an **ApproximateLastLogonTimestamp** older than 15 days.
+### Persistent VDI
+
+When deploying persistent VDI, Microsoft recommends that IT administrators implement the guidance below. Failure to do so will result in deployment and authentication issues.
+
+- If you are relying on the System Preparation Tool (sysprep.exe) and if you are using a pre-Windows 10 1809 image for installation, make sure that image is not from a device that is already registered with Azure AD as hybrid Azure AD joined.
+- If you are relying on a Virtual Machine (VM) snapshot to create additional VMs, make sure that snapshot is not from a VM that is already registered with Azure AD as Hybrid Azure AD join.
+
+In addition, we recommend that you implement a process for [managing stale devices](manage-stale-devices.md). This ensures your directory does not fill up with stale devices if you periodically reset your VMs.
## Next steps
active-directory Reset Redemption Status https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/reset-redemption-status.md
After a guest user has redeemed your invitation for B2B collaboration, there mig
Previously, to manage these scenarios, you had to manually delete the guest user's account from your directory and reinvite the user. Now you can use PowerShell or the Microsoft Graph invitation API to reset the user's redemption status and reinvite the user while retaining the user's object ID, group memberships, and app assignments. When the user redeems the new invitation, the UPN of the user doesn't change, but the user's sign-in name changes to the new email. The user can subsequently sign in using the new email or an email you've added to the `otherMails` property of the user object.
+## Reset the email address used for sign-in
+
+If a user wants to sign in using a different email:
+
+1. Make sure the new email address is added to the `mail` or `otherMails` property of the user object.
+2. Replace the email address in the `InvitedUserEmailAddress` property with the new email address.
+3. Use one of the methods below to reset the user's redemption status.
+
+> [!NOTE]
+> During public preview, when you reset the user's email address, we recommend setting the `mail` property to the new email address. This way, the user can redeem the invitation by signing in to your directory, in addition to using the redemption link in the invitation.
+>
## Use PowerShell to reset redemption status
-Install the latest AzureADPreview PowerShell module and create a new invitation with `InvitedUserEMailAddress` set to the new email address, and `ResetRedemption` set to `true`.
+Install the latest AzureADPreview PowerShell module and create a new invitation with `InvitedUserEmailAddress` set to the new email address, and `ResetRedemption` set to `true`.
```powershell Uninstall-Module AzureADPreview
New-AzureADMSInvitation -InvitedUserEmailAddress <<external email>> -SendInvitat
## Use Microsoft Graph API to reset redemption status
-Using the [Microsoft Graph invitation API](/graph/api/resources/invitation), set the `resetRedemption` property to `true` and specify the new email address in the `invitedUserEmailAddress` property.
+Using the [Microsoft Graph invitation API](/graph/api/resources/invitation?view=graph-rest-1.0), set the `resetRedemption` property to `true` and specify the new email address in the `invitedUserEmailAddress` property.
```json POST https://graph.microsoft.com/beta/invitations
active-directory Application Management Certs Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-management-certs-faq.md
+
+ Title: Azure Active Directory Application Management certificates frequently asked questions
+description: Learn answers to frequently asked questions (FAQ) about managing certificates for apps using Azure Active Directory as an Identity Provider (IdP).
+Last updated : 03/19/2021
+# Azure Active Directory (Azure AD) Application Management certificates frequently asked questions
+
+This page answers frequently asked questions about managing the certificates for apps using Azure Active Directory (Azure AD) as an Identity Provider (IdP).
+
+## Is there a way to generate a list of expiring SAML signing certificates?
+
+You can use [PowerShell scripts](app-management-powershell-samples.md) to export to a CSV file all app registrations in your directory that have expiring secrets and certificates, along with their owners.
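+
+If you prefer querying Microsoft Graph directly, the hedged sketch below (not part of the linked scripts) lists applications whose certificates expire within 60 days; it assumes an access token with the `Application.Read.All` permission:
+
+```javascript
+const axios = require('axios');
+
+// Hedged sketch: flag app registrations whose certificates (keyCredentials) expire within 60 days.
+async function findExpiringCertificates(accessToken) {
+    const response = await axios.get(
+        'https://graph.microsoft.com/v1.0/applications?$select=displayName,keyCredentials',
+        { headers: { Authorization: `Bearer ${accessToken}` } }
+    );
+
+    const cutoff = new Date(Date.now() + 60 * 24 * 60 * 60 * 1000);
+    return response.data.value
+        .filter(app => (app.keyCredentials || []).some(c => new Date(c.endDateTime) < cutoff))
+        .map(app => app.displayName);
+}
+```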
+
+## Where can I find the renewal steps for certificates that will soon expire?
+
+For the steps, see [Renew a certificate that will soon expire](manage-certificates-for-federated-single-sign-on.md#renew-a-certificate-that-will-soon-expire).
+
+## How can I customize the expiration date for the certificates issued by Azure AD?
+
+By default, Azure AD configures a certificate to expire after three years when it is created automatically during SAML single sign-on configuration. Because you can't change the date of a certificate after you save it, you need to create a new certificate. For the steps, see [Customize the expiration date for your federation certificate and roll it over to a new certificate](manage-certificates-for-federated-single-sign-on.md#customize-the-expiration-date-for-your-federation-certificate-and-roll-it-over-to-a-new-certificate).
+
+## How can I automate certificate expiration notifications?
+
+Azure AD will send an email notification 60, 30, and 7 days before the SAML certificate expires. You may add more than one email address to receive notifications.
+
+> [!NOTE]
+> You can add up to five email addresses to the notification list (including the email address of the admin who added the application). If you need more people to be notified, use a distribution list email address.
+
+To specify the emails you want the notifications to be sent to, see [Add email notification addresses for certificate expiration](manage-certificates-for-federated-single-sign-on.md#add-email-notification-addresses-for-certificate-expiration).
+
+There is no option to edit or customize these email notifications received from `aadnotification@microsoft.com`. However, you can export app registrations with expiring secrets and certificates through [PowerShell scripts](app-management-powershell-samples.md).
+
+## Who can update the certificates?
+
+The application owner, a Global Administrator, or an Application Administrator can update the certificates through the Azure portal UI, PowerShell, or Microsoft Graph.
+
+## Where can I find more details about certificate signing options?
+
+In Azure AD, you can set up certificate signing options and the certificate signing algorithm. To learn more, see [Advanced SAML token certificate signing options for Azure AD apps](certificate-signing-options.md).
+
+## How do I replace the certificate for Azure AD Application Proxy applications?
+
+To replace certificates for Azure AD Application Proxy applications, see [PowerShell sample - Replace certificate in Application Proxy apps](scripts/powershell-get-custom-domain-replace-cert.md).
+
+## How do I manage certificates for custom domains in Azure AD Application Proxy?
+
+To configure an on-premises app to use a custom domain, you need a verified Azure Active Directory custom domain, a PFX certificate for the custom domain, and an on-premises app to configure. To learn more, see [Custom domains in Azure AD Application Proxy](application-proxy-configure-custom-domain.md).
+
+## I need to update the token signing certificate on the application side. Where can I get it on the Azure AD side?
+
+To renew a SAML X.509 certificate, see [SAML signing certificate](configure-saml-single-sign-on.md#saml-signing-certificate).
+
+## What is Azure AD signing key rollover?
+
+For more details, see [Azure AD signing key rollover](../develop/active-directory-signing-key-rollover.md).
+
+## How do I renew the application token encryption certificate?
+
+To renew an application token encryption certificate, see [How to renew a token encryption certificate for an enterprise application](howto-saml-token-encryption.md).
+
+## How do I renew the application token signing certificate?
+
+To renew an application token signing certificate, see [How to renew a token signing certificate for an enterprise application](manage-certificates-for-federated-single-sign-on.md).
+
+## How do I update Azure AD after changing my federation certificates?
+
+To update Azure AD after changing your federation certificates, see [Renew federation certificates for Microsoft 365 and Azure Active Directory](../hybrid/how-to-connect-fed-o365-certs.md).
active-directory Application Proxy Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-proxy-faq.md
This may be due to either the updater service not working correctly or if there
The updater service is healthy if it's running and there are no errors recorded in the event log (Applications and Services logs -> Microsoft -> AadApplicationProxy -> Updater -> Admin). > [!IMPORTANT]
-> Only major versions are released for auto-upgrade. We recommend updating your connector manually on a regular schedule. For more information on new releases, the type of the release (download, auto-upgrade), bug fixes and new features see, [Azure AD Application Proxy: Version release history](application-proxy-release-version-history.md).
+> Only major versions are released for auto-upgrade. We recommend updating your connector manually only when necessary, for example, if you can't wait for a major release because you need a fix for a known problem or want to use a new feature. For more information on new releases, the type of each release (download, auto-upgrade), bug fixes, and new features, see [Azure AD Application Proxy: Version release history](application-proxy-release-version-history.md).
To manually upgrade a connector:
active-directory Pim Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/privileged-identity-management/pim-configure.md
Previously updated : 09/29/2020 Last updated : 03/19/2021
To better understand Privileged Identity Management and its documentation, you s
| activated | State | A user that has an eligible role assignment, performed the actions to activate the role, and is now active. Once activated, the user can use the role for a preconfigured period-of-time before they need to activate again. | | permanent eligible | Duration | A role assignment where a user is always eligible to activate the role. | | permanent active | Duration | A role assignment where a user can always use the role without performing any actions. |
-| expire eligible | Duration | A role assignment where a user is eligible to activate the role within a specified start and end date. |
-| expire active | Duration | A role assignment where a user can use the role without performing any actions within a specified start and end date. |
+| time-bound eligible | Duration | A role assignment where a user is eligible to activate the role only within start and end dates. |
+| time-bound active | Duration | A role assignment where a user can use the role only within start and end dates. |
| just-in-time (JIT) access | | A model in which users receive temporary permissions to perform privileged tasks, which prevents malicious or unauthorized users from gaining access after the permissions have expired. Access is granted only when users need it. | | principle of least privilege access | | A recommended security practice in which every user is provided with only the minimum privileges needed to accomplish the tasks they are authorized to perform. This practice minimizes the number of Global Administrators and instead uses specific administrator roles for certain scenarios. |
active-directory Pim How To Activate Role https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/privileged-identity-management/pim-how-to-activate-role.md
Previously updated : 11/18/2020 Last updated : 03/22/2021
If you do not require activation of a role that requires approval, you can cance
### Permissions are not granted after activating a role
-When you activate a role in Privileged Identity Management, the activation may not instantly propagate to all portals that require the privileged role. Sometimes, even if the change is propagated, web caching in a portal may result in the change not taking effect immediately. If your activation is delayed, here is what you should do.
-
-1. Sign out of the Azure portal and then sign back in.
-
-1. In Privileged Identity Management, verify that you are listed as the member of the role.
+When you activate a role in Privileged Identity Management, the activation may not instantly propagate to all portals that require the privileged role. Sometimes, even if the change is propagated, web caching in a portal may result in the change not taking effect immediately. If your activation is delayed, sign out of the portal where you're trying to perform the action and then sign back in. In the Azure portal, PIM signs you out and back in automatically.
# [Previous version](#tab/previous)
If you do not require activation of a role that requires approval, you can cance
### Permissions are not granted after activating a role
-When you activate a role in Privileged Identity Management, the activation may not instantly propagate to all portals that require the privileged role. Sometimes, even if the change is propagated, web caching in a portal may result in the change not taking effect immediately. If your activation is delayed, here is what you should do.
-
-1. Sign out of the Azure portal and then sign back in.
-
- When you activate an Azure AD role, you will see the stages of your activation. Once all the stages are complete, you will see a **Sign out** link. You can use this link to sign out. This will solve most cases for activation delay.
-
-1. In Privileged Identity Management, verify that you are listed as the member of the role.
+When you activate a role in Privileged Identity Management, your activation might be delayed in admin portals other than the Azure portal, such as the Office 365 portal. If your activation is delayed, sign out of the portal you're in and then sign back in. Then, use Privileged Identity Management to verify that you're listed as a member of the role.
active-directory Amazon Web Service Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/amazon-web-service-tutorial.md
To get started, you need the following items:
* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). * An AWS single sign-on (SSO) enabled subscription.
+> [!Note]
+> Don't manually edit roles in Azure AD when doing role imports.
+ ## Scenario description In this tutorial, you configure and test Azure AD SSO in a test environment.
active-directory Github Enterprise Managed User Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/github-enterprise-managed-user-provisioning-tutorial.md
This tutorial describes the steps you need to perform in both GitHub Enterprise
> * Provision groups and group memberships in GitHub Enterprise Managed User > * Single sign-on to GitHub Enterprise Managed User (recommended)
+> [!NOTE]
+> This provisioning connector is enabled only for Enterprise Managed Users beta participants.
++ ## Prerequisites The scenario outlined in this tutorial assumes that you already have the following prerequisites:
active-directory Moqups Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/moqups-tutorial.md
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
## Configure Moqups SSO
-To configure single sign-on on **Moqups** side, you need to send the **App Federation Metadata Url** to [Moqups support team](mailto:support@moqups.com). They set this setting to have the SAML SSO connection set properly on both sides.
+1. Sign in to the Moqups website as an administrator.
+
+1. Go to the **Account** page and select the **Integration** tab.
+
+1. In the **SAML Authentication** section, paste the **App Federation Metadata Url** value, which you have copied from the Azure portal.
+
+ ![Screenshot of the SAML Authentication section.](./media/moqups-tutorial/saml-authentication.png)
+
+1. Click on the **Configure** button.
### Create Moqups test user
active-directory Ringcentral Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/ringcentral-provisioning-tutorial.md
The scenario outlined in this tutorial assumes that you already have the followi
## Step 2. Configure RingCentral to support provisioning with Azure AD
-1. Sign in to your [RingCentral Admin Console](https://login.ringcentral.com/sw.html). Navigate to **Tools > Directory Integration**.
-
- ![RingCentral Admin Console](media/ringcentral-provisioning-tutorial/admin.png)
-
-2. Choose **SCIM** under **Select Directory Provider**. (In the future there will be an option called Azure Active Directory). Click **Enable SCIM service**.
-
- ![RingCentral Add SCIM](media/ringcentral-provisioning-tutorial/scim.png)
-
-3. Contact RingCentral support team at matthew.hunt@ringcentral.com for a **SCIM Authentication Token**. This value will be entered in the Secret Token field in the Provisioning tab of your RingCentral application in the Azure portal.
+A [RingCentral](https://www.ringcentral.com/office/plansandpricing.html) admin account is required to authorize the connection in the **Admin Credentials** section in Step 5.
> [!NOTE] > To assign licenses to users, refer to the video link [here](https://support.ringcentral.com/s/article/5-10-Adding-Extensions-via-Web?language).
This section guides you through the steps to configure the Azure AD provisioning
![Screenshot of the Provisioning Mode dropdown list with the Automatic option called out.](common/provisioning-automatic.png)
-5. Under the **Admin Credentials** section, input `https://platform.ringcentral.com/scim/v2` in **Tenant URL**. Input the **SCIM Authentication Token** value retrieved earlier in **Secret Token**. Click **Test Connection** to ensure Azure AD can connect to RingCentral. If the connection fails, ensure your RingCentral account has Admin permissions and try again.
+5. Under the **Admin Credentials** section, click **Authorize**. You are redirected to RingCentral's sign-in page. Enter your email or phone number and password, and click **Sign In**. Click **Authorize** on the RingCentral **Access Request** page. Then click **Test Connection** to ensure Azure AD can connect to RingCentral. If the connection fails, ensure your RingCentral account has Admin permissions and try again.
+
+ ![Screenshot of the Admin Credentials section.](./media/ringcentral-provisioning-tutorial/admincredentials.png)
+
+ ![Screenshot of the RingCentral authorization prompt.](./media/ringcentral-provisioning-tutorial/authorize.png)
- ![Screenshot of the Tenant URL and Secret Token text fields with the Test Connection option called out.](./media/ringcentral-provisioning-tutorial/provisioning.png)
+ ![Screenshot of the RingCentral Access Request page.](./media/ringcentral-provisioning-tutorial/accessrequest.png)
6. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
Once you've configured provisioning, use the following resources to monitor your
## Change log * 09/10/2020 - Removed support for "displayName" and "manager" attributes.
+* 03/15/2021 - Updated authorization method from permanent bearer token to OAuth code grant flow.
## Additional resources
active-directory Salesforce Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/salesforce-provisioning-tutorial.md
The scenario outlined in this tutorial assumes that you already have the followi
* An Azure Active directory tenant * A Salesforce.com tenant
+> [!Note]
+> Roles should not be manually edited in Azure Active Directory when doing role imports.
+ > [!IMPORTANT] > If you are using a Salesforce.com trial account, then you will be unable to configure automated user provisioning. Trial accounts do not have the necessary API access enabled until they are purchased. You can get around this limitation by using a free [developer account](https://developer.salesforce.com/signup) to complete this tutorial.
For more information on how to read the Azure AD provisioning logs, see [Reporti
* [Managing user account provisioning for Enterprise Apps](tutorial-list.md) * [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
-* [Configure Single Sign-on](./salesforce-tutorial.md)
+* [Configure Single Sign-on](./salesforce-tutorial.md)
active-directory Samanage Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/samanage-provisioning-tutorial.md
The scenario outlined in this tutorial assumes that you already have the followi
* A [SolarWinds Service Desk tenant](https://www.samanage.com/pricing/) with the Professional package. * A user account in SolarWinds Service Desk with admin permissions.
+> [!Note]
+> Roles should not be manually edited in Azure Active Directory when doing role imports.
+ ## Step 1. Plan your provisioning deployment 1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md). 2. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
If you select the **Sync all users and groups** option and configure a value for
## Next steps
-* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
aks Private Clusters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/private-clusters.md
The following parameters can be leveraged to configure Private DNS Zone.
- "System" is the default value. If the --private-dns-zone argument is omitted, AKS will create a Private DNS Zone in the Node Resource Group. - "None" means AKS will not create a Private DNS Zone. This requires you to Bring Your Own DNS Server and configure the DNS resolution for the Private FQDN. If you don't configure DNS resolution, DNS is only resolvable within the agent nodes and will cause cluster issues after deployment. -- "CUSTOM_PRIVATE_DNS_ZONE_RESOURCE_ID" requires you to create a Private DNS Zone in this format for azure global cloud: `privatelink.<region>.azmk8s.io`. You will need the Resource Id of that Private DNS Zone going forward. Additionally, you will need a user assigned identity or service principal with at least the `private dns zone contributor` role.
+- "CUSTOM_PRIVATE_DNS_ZONE_RESOURCE_ID" requires you to create a Private DNS Zone in this format for azure global cloud: `privatelink.<region>.azmk8s.io`. You will need the Resource Id of that Private DNS Zone going forward. Additionally, you will need a user assigned identity or service principal with at least the `private dns zone contributor` and `vnet contributor` roles.
- "fqdn-subdomain" can be utilized with "CUSTOM_PRIVATE_DNS_ZONE_RESOURCE_ID" only to provide subdomain capabilities to `privatelink.<region>.azmk8s.io` ### Prerequisites
api-management Validation Policies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/validation-policies.md
This policy can be used in the following policy [sections](./api-management-howt
The `validate-parameters` policy validates the header, query, or path parameters in requests against the API schema. > [!IMPORTANT]
-> If you imported an API using a management API version prior to `2021-01-01-preview`, the `validate-parameters` policy might not work. You may need to reimport your API using management API version `2021-01-01-preview` or later.
+> If you imported an API using a management API version prior to `2021-01-01-preview`, the `validate-parameters` policy might not work. You may need to [reimport your API](/rest/api/apimanagement/2021-01-01-preview/apis/createorupdate) using management API version `2021-01-01-preview` or later.
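As a hedged sketch of one way to do that reimport, the generic `az rest` command can call the `createorupdate` endpoint with an explicit API version; the service name, API ID, path, and OpenAPI URL below are placeholders:

```azurecli-interactive
az rest --method put \
  --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ApiManagement/service/<service-name>/apis/<api-id>?api-version=2021-01-01-preview" \
  --body '{
    "properties": {
      "format": "openapi-link",
      "value": "https://example.com/openapi.json",
      "path": "myapi"
    }
  }'
```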
### Policy statement
In this example, all query and path parameters are validated in the prevention m
<parameter name="User-Agent" action="ignore" /> <parameter name="Host" action="ignore" /> <parameter name="Referrer" action="ignore" />
+ </headers>
</validate-parameters> ```
app-service Deploy Staging Slots https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/deploy-staging-slots.md
You can also customize the warm-up behavior with one or both of the following [a
- `WEBSITE_SWAP_WARMUP_PING_PATH`: The path to ping to warm up your site. Add this app setting by specifying a custom path that begins with a slash as the value. An example is `/statuscheck`. The default value is `/`. - `WEBSITE_SWAP_WARMUP_PING_STATUSES`: Valid HTTP response codes for the warm-up operation. Add this app setting with a comma-separated list of HTTP codes. An example is `200,202` . If the returned status code isn't in the list, the warmup and swap operations are stopped. By default, all response codes are valid.
+- `WEBSITE_WARMUP_PATH`: A relative path on the site that should be pinged whenever the site restarts (not only during slot swaps). Example values include `/statuscheck` or the root path, `/`.
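These app settings can also be applied from the command line. A short sketch, assuming a hypothetical app named `myApp` with a `staging` slot:

```azurecli-interactive
az webapp config appsettings set \
  --resource-group myResourceGroup \
  --name myApp \
  --slot staging \
  --settings WEBSITE_SWAP_WARMUP_PING_PATH=/statuscheck WEBSITE_SWAP_WARMUP_PING_STATUSES=200,202
```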
> [!NOTE] > The `<applicationInitialization>` configuration element is part of each app start-up, whereas the two warm-up behavior app settings apply only to slot swaps.
Here are some common swap errors:
- After slot swaps, the app may experience unexpected restarts. This is because after a swap, the hostname binding configuration goes out of sync, which by itself doesn't cause restarts. However, certain underlying storage events (such as storage volume failovers) may detect these discrepancies and force all worker processes to restart. To minimize these types of restarts, set the [`WEBSITE_ADD_SITENAME_BINDINGS_IN_APPHOST_CONFIG=1` app setting](https://github.com/projectkudu/kudu/wiki/Configurable-settings#disable-the-generation-of-bindings-in-applicationhostconfig) on *all slots*. However, this app setting does *not* work with Windows Communication Foundation (WCF) apps. ## Next steps
-[Block access to non-production slots](app-service-ip-restrictions.md)
+[Block access to non-production slots](app-service-ip-restrictions.md)
app-service Troubleshoot Intermittent Outbound Connection Errors https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/troubleshoot-intermittent-outbound-connection-errors.md
JDBC Connection Pooling.
HTTP Connection Pooling
-* [Apache Connection Management](https://hc.apache.org/httpcomponents-client-ga/tutorial/html/connmgmt.html)
-* [Class PoolingHttpClientConnectionManager](http://hc.apache.org/httpcomponents-client-ga/httpclient/apidocs/org/apache/http/impl/conn/PoolingHttpClientConnectionManager.html)
+* [Apache Connection Management](https://hc.apache.org/httpcomponents-client-5.0.x/)
+* [Class PoolingHttpClientConnectionManager](https://hc.apache.org/httpcomponents-client-5.0.x/)
#### PHP
application-gateway Key Vault Certs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/key-vault-certs.md
Application Gateway integration with Key Vault requires a three-step configurati
1. **Configure your key vault**
- You then either import an existing certificate or create a new one in your key vault. The certificate will be used by applications that run through the application gateway. In this step, you can also use a key vault secret that's stored as a password-less, base-64 encoded PFX file. We recommend using a certificate type because of the autorenewal capability that's available with certificate type objects in the key vault. After you've created a certificate or a secret, you define access policies in the key vault to allow the identity to be granted *get* access to the secret.
+ You then either import an existing certificate or create a new one in your key vault. The certificate will be used by applications that run through the application gateway. In this step, you can also use a Key Vault secret, which also allows storing a password-less, base-64 encoded PFX file. We recommend using a "Certificate" type because of the autorenewal capability that's available with this type of object in the Key Vault. After you've created a certificate or a secret, you must define access policies in the Key Vault to allow the identity to be granted *get* access to the secret.
> [!IMPORTANT]
- > Application Gateway currently requires Key Vault to allow access from all networks in order to leverage the integration. It does not support Key Vault integration when Key Vault is set to only allow private endpoints and select networks access. Support for private and select networks is in the works for full integration of Key Vault with Application Gateway.
+ > Starting March 15th, 2021, Key Vault recognizes Azure Application Gateway as one of the Trusted Services, allowing you to build a secure network boundary in Azure. This gives you the ability to deny access to traffic from all networks (including internet traffic) to Key Vault while still keeping it accessible to the Application Gateway resource under your subscription.
+
+ > You can configure your Application Gateway in a restricted network of Key Vault in the following manner (a CLI sketch follows the image below). <br />
+ > a) On Key Vault's **Networking** blade, <br />
+ > b) choose **Private endpoint and selected networks** in the "Firewall and Virtual Networks" tab, <br/>
+ > c) then, under Virtual Networks, add your Application Gateway's virtual network and subnet. During the process, also configure the 'Microsoft.KeyVault' service endpoint by selecting its checkbox. <br/>
+ > d) Finally, select **Yes** to allow Trusted Services to bypass Key Vault's firewall. <br/>
+ >
+ > ![Key Vault Firewall](media/key-vault-certs/key-vault-firewall.png)
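The same restriction can be sketched with the Azure CLI, mirroring steps a through d above (hypothetical vault, virtual network, and subnet names):

```azurecli-interactive
# Enable the Microsoft.KeyVault service endpoint on the gateway's subnet.
az network vnet subnet update \
  --resource-group myResourceGroup \
  --vnet-name myVNet \
  --name appGatewaySubnet \
  --service-endpoints Microsoft.KeyVault

# Allow that subnet through the vault firewall.
az keyvault network-rule add \
  --resource-group myResourceGroup \
  --name myKeyVault \
  --vnet-name myVNet \
  --subnet appGatewaySubnet

# Deny all other networks, but let trusted Azure services bypass the firewall.
az keyvault update \
  --resource-group myResourceGroup \
  --name myKeyVault \
  --default-action Deny \
  --bypass AzureServices
```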
+ > [!NOTE] > If you deploy the application gateway via an ARM template, either by using the Azure CLI or PowerShell, or via an Azure application deployed from the Azure portal, the SSL certificate is stored in the key vault as a base64-encoded PFX file. You must complete the steps in [Use Azure Key Vault to pass secure parameter value during deployment](../azure-resource-manager/templates/key-vault-parameter.md).
automation Automation Secure Asset Encryption https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-secure-asset-encryption.md
PATCH https://management.azure.com/subscriptions/00000000-0000-0000-0000-0000000
Request body: ```json
- {
- "properties": {
- "encryption": {
- "keySource": "Microsoft.Keyvault",
- "keyvaultProperties": {
- "keyName": "sample-vault-key",
- "keyvaultUri": "https://sample-vault-key12.vault.azure.net",
- "keyVersion": "7c73556c521340209371eaf623cc099d"
- }
- }
- }
- }
+{
+ "identity": {
+ "type": "SystemAssigned"
+ },
+ "properties": {
+ "encryption": {
+ "keySource": "Microsoft.Keyvault",
+ "keyvaultProperties": {
+ "keyName": "sample-vault-key",
+ "keyvaultUri": "https://sample-vault-key12.vault.azure.net",
+ "keyVersion": "7c73556c521340209371eaf623cc099d"
+ }
+ }
+ }
+}
``` Sample response
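As a sketch, the PATCH above could be sent with the generic `az rest` command, assuming the request body is saved as `body.json` and the placeholders (including the API version from the full request URL) are filled in:

```azurecli-interactive
az rest --method patch \
  --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Automation/automationAccounts/<account-name>?api-version=<api-version>" \
  --body @body.json
```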
automation Automation Tutorial Troubleshoot Changes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-tutorial-troubleshoot-changes.md
description: This article tells how to troubleshoot changes on an Azure VM.
keywords: change, tracking, change tracking, inventory, automation Previously updated : 12/05/2018 Last updated : 03/21/2021
Viewing changes in the Azure portal can be helpful, but being able to be alerted
14. For **Actions**, enter a name for the action, such as **Email Administrators**.
-15. For **ACTION TYPE**, select **Email/SMS/Push/Voice**.
+15. For **ACTION TYPE**, select **Email/SMS message/Push/Voice**.
16. For **DETAILS**, select **Edit details**.
- ![Add action group](./media/automation-tutorial-troubleshoot-changes/add-action-group.png)
+ :::image type="content" source="./media/automation-tutorial-troubleshoot-changes/add-action-group.png" alt-text="Add action group." lightbox="./media/automation-tutorial-troubleshoot-changes/add-action-group.png":::
-17. In the Email/SMS/Push/Voice pane, enter a name, select the **Email** checkbox, and then enter a valid email address. When finished, click **OK** on the pane, then click **OK** on the Add action group page.
+17. In the **Email/SMS message/Push/Voice** pane, enter a name, select the **Email** checkbox, and then enter a valid email address. When finished, click **OK** on the pane, then click **OK** on the **Add action group** page.
-18. To customize the subject of the alert email, select **Customize Actions**.
+18. To customize the subject of the alert email, select **Customize Actions**.
19. For **Create rule**, select **Email subject**, then choose **Create alert rule**. The alert tells you when an update deployment succeeds, and which machines were part of that update deployment run. The following image is an example email received when the W3SVC service stops.
azure-app-configuration Quickstart Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/quickstart-python.md
In this quickstart, you will use Azure App Configuration to centralize storage a
## Prerequisites - Azure subscription - [create one for free](https://azure.microsoft.com/free/)-- Python 2.7, or 3.5 or later - For information on setting up Python on Windows, see the [Python on Windows documentation]( https://docs.microsoft.com/windows/python/)
+- Python 2.7, or 3.6 or later. For information on setting up Python on Windows, see the [Python on Windows documentation](https://docs.microsoft.com/windows/python/)
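To confirm the Python prerequisite and pull down the App Configuration client library used by this quickstart, a quick sketch (assuming `pip` is available on your path):

```bash
# Verify you're running Python 2.7, or 3.6 or later.
python --version

# Install the Azure App Configuration client library.
pip install azure-appconfiguration
```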
## Create an App Configuration store
In this quickstart, you created a new App Configuration store and learnt how to
For additional code samples, visit: > [!div class="nextstepaction"]
-> [Azure App Configuration client library samples](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/appconfiguration/azure-appconfiguration/samples)
+> [Azure App Configuration client library samples](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/appconfiguration/azure-appconfiguration/samples)
azure-arc Manage Vm Extensions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/manage-vm-extensions.md
Title: VM extension management with Azure Arc enabled servers description: Azure Arc enabled servers can manage deployment of virtual machine extensions that provide post-deployment configuration and automation tasks with non-Azure VMs. Previously updated : 03/01/2021 Last updated : 03/22/2021
In this release, we support the following VM extensions on Windows and Linux mac
To learn about the Azure Connected Machine agent package and details about the Extension agent component, see [Agent overview](agent-overview.md#agent-component-details).
+> [!NOTE]
+> Recently, support for the DSC VM extension was removed for Arc enabled servers. Instead, we recommend using the Custom Script Extension to manage the post-deployment configuration of your server or machine.
+ ### Windows extensions |Extension |Publisher |Type |Additional information |
azure-functions Quickstart Python Vscode https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/durable/quickstart-python-vscode.md
After you've verified that the function runs correctly on your local computer, i
## Test your function in Azure
-1. Copy the URL of the HTTP trigger from the **Output** panel. The URL that calls your HTTP-triggered function should be in this format: `http://<functionappname>.azurewebsites.net/orchestrators/HelloOrchestrator`
+1. Copy the URL of the HTTP trigger from the **Output** panel. The URL that calls your HTTP-triggered function should be in this format: `http://<functionappname>.azurewebsites.net/api/orchestrators/HelloOrchestrator`
2. Paste this new URL for the HTTP request into your browser's address bar. You should get the same status response as before when using the published app.
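To exercise the deployed endpoint from a shell instead of the browser, a quick sketch (substitute your function app name; the call should return a 202 response whose JSON body includes status-query URLs you can poll):

```bash
curl -i "https://<functionappname>.azurewebsites.net/api/orchestrators/HelloOrchestrator"
```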
azure-monitor Itsmc Connections Cherwell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/itsmc-connections-cherwell.md
Use the following procedure to create a Cherwell connection:
1. In Azure portal, go to **All Resources** and look for **ServiceDesk(YourWorkspaceName)** 2. Under **WORKSPACE DATA SOURCES** click **ITSM Connections**.
- ![New connection](/media/itsmc-overview/add-new-itsm-connection.png)
+ ![New connection](/azure/azure-monitor/alerts/media/itsmc-connections-scsm/add-new-itsm-connection.png)
3. At the top of the right pane, click **Add**.
To generate the client ID/key for Cherwell, use the following procedure:
* [ITSM Connector Overview](itsmc-overview.md) * [Create ITSM work items from Azure alerts](./itsmc-definition.md#create-itsm-work-items-from-azure-alerts)
-* [Troubleshooting problems in ITSM Connector](./itsmc-resync-servicenow.md)
+* [Troubleshooting problems in ITSM Connector](./itsmc-resync-servicenow.md)
azure-monitor Snapshot Debugger https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/snapshot-debugger.md
Debug snapshots are stored for 15 days. This retention policy is set on a per-ap
## Enable Application Insights Snapshot Debugger for your application Snapshot collection is available for: * .NET Framework and ASP.NET applications running .NET Framework 4.5 or later.
-* .NET Core 2.0 and ASP.NET Core 2.0 applications running on Windows.
+* .NET Core and ASP.NET Core applications running .NET Core 2.1 (LTS) or 3.1 (LTS) on Windows.
+* .NET 5.0 applications on Windows.
+
+We don't recommend using .NET Core 2.0, 2.2, or 3.0 because they are out of support.
The following environments are supported:
azure-monitor Sql Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/insights/sql-insights-overview.md
See [Enable SQL insights](sql-insights-enable.md) for the detailed procedure to
## Data collected by SQL insights
-In the public preview, SQL insights only supports the remote method of monitoring. The Telegraf agent is not installed on the SQL Server. It uses the SQL Server input plugin for Telegraf and use the three groups of queries for the different types of SQL it monitors: Azure SQL DB, Azure SQL Managed Instance, SQL server running on an Azure VM.
+In the public preview, SQL insights only supports the remote method of monitoring. The [Telegraf agent](https://www.influxdata.com/time-series-platform/telegraf/) is not installed on the SQL Server. SQL insights uses the [SQL Server input plugin for Telegraf](https://www.influxdata.com/integration/microsoft-sql-server/) and three groups of queries for the different types of SQL it monitors: Azure SQL DB, Azure SQL Managed Instance, and SQL Server running on an Azure VM.
The following tables summarize the following:
azure-monitor Log Analytics Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/log-analytics-tutorial.md
A **where** statement is added to the query with the value you selected. The res
## Time range All tables in a Log Analytics workspace have a column called **TimeGenerated** which is the time that the record was created. All queries have a time range that limits the results to records with a **TimeGenerated** value within that range. The time range can either be set in the query or with the selector at the top of the screen.
-By default, the query will return records form the last 24 hours. Select the **Time range** dropdown and change it to **7 days**. Click **Run** again to return the results. You can see that results are returned, but we have a message here that we're not seeing all of the results. This is because Log Analytics can return a maximum of 10,000 records, and our query returned more records than that.
+By default, the query will return records from the last 24 hours. Select the **Time range** dropdown and change it to **7 days**. Click **Run** again to return the results. You can see that results are returned, but we have a message here that we're not seeing all of the results. This is because Log Analytics can return a maximum of 30,000 records, and our query returned more records than that. You can also set the time range in the query itself, as sketched after the image below.
[![Time range](media/log-analytics-tutorial/query-results-max.png)](media/log-analytics-tutorial/query-results-max.png#lightbox)
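As referenced above, you can pin the time range inside the query rather than with the picker. A sketch using the Azure CLI (this assumes the `log-analytics` CLI extension; the `Usage` table exists in every workspace):

```azurecli-interactive
# az extension add --name log-analytics
az monitor log-analytics query \
  --workspace <workspace-id> \
  --analytics-query "Usage | where TimeGenerated > ago(7d) | count"
```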
Try selecting **Results** to view the output of the query as a table.
Now that you know how to use Log Analytics, complete the tutorial on using log queries. > [!div class="nextstepaction"]
-> [Write Azure Monitor log queries](get-started-queries.md)
+> [Write Azure Monitor log queries](get-started-queries.md)
azure-monitor Manage Cost Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/manage-cost-storage.md
Usage
| where TimeGenerated > ago(32d) | where StartTime >= startofday(ago(31d)) and EndTime < startofday(now()) | where IsBillable == true
-| summarize BillableDataGB = sum(Quantity) by Solution, DataType
+| summarize BillableDataGB = sum(Quantity) / 1000 by Solution, DataType
| sort by Solution asc, DataType asc ```
There are some additional Log Analytics limits, some of which depend on the Log
- Change [performance counter configuration](../agents/data-sources-performance-counters.md). - To modify your event collection settings, review [event log configuration](../agents/data-sources-windows-events.md). - To modify your syslog collection settings, review [syslog configuration](../agents/data-sources-syslog.md).-- To modify your syslog collection settings, review [syslog configuration](../agents/data-sources-syslog.md).
+- To modify your syslog collection settings, review [syslog configuration](../agents/data-sources-syslog.md).
azure-monitor Vminsights Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/vm/vminsights-workbooks.md
The following table summarizes the workbooks that VM insights includes to get yo
| Workbook | Description | Scope | |-|-|-|
-| Performance | Provides a customizable version of our Top N List and Charts view in a single workbook that leverages all of the Log Analytics performance counters that you have enabled.| At scale |
-| Performance counters | A Top N chart view across a wide set of performance counters. | At scale |
-| Connections | Connections provides an in-depth view of the inbound and outbound connections from your monitored VMs. | At scale |
-| Active Ports | Provides a list of the processes that have bound to the ports on the monitored VMs and their activity in the chosen timeframe. | At scale |
-| Open Ports | Provides the number of ports open on your monitored VMs and the details on those open ports. | At scale |
-| Failed Connections | Display the count of failed connections on your monitored VMs, the failure trend, and if the percentage of failures is increasing over time. | At scale |
-| Security and Audit | An analysis of your TCP/IP traffic that reports on overall connections, malicious connections, where the IP endpoints reside globally. To enable all features, you will need to enable Security Detection. | At scale |
-| TCP Traffic | A ranked report for your monitored VMs and their sent, received, and total network traffic in a grid and displayed as a trend line. | At scale |
-| Traffic Comparison | This workbooks lets you compare network traffic trends for a single machine or a group of machines. | At scale |
+| Performance | Provides a customizable version of our Top N List and Charts view in a single workbook that leverages all of the Log Analytics performance counters that you have enabled.| Multiple VMs |
+| Performance counters | A Top N chart view across a wide set of performance counters. | Multiple VMs |
+| Connections | Connections provides an in-depth view of the inbound and outbound connections from your monitored VMs. | Multiple VMs |
+| Active Ports | Provides a list of the processes that have bound to the ports on the monitored VMs and their activity in the chosen timeframe. | Multiple VMs |
+| Open Ports | Provides the number of ports open on your monitored VMs and the details on those open ports. | Multiple VMs |
+| Failed Connections | Display the count of failed connections on your monitored VMs, the failure trend, and if the percentage of failures is increasing over time. | Multiple VMs |
+| Security and Audit | An analysis of your TCP/IP traffic that reports on overall connections, malicious connections, where the IP endpoints reside globally. To enable all features, you will need to enable Security Detection. | Multiple VMs |
+| TCP Traffic | A ranked report for your monitored VMs and their sent, received, and total network traffic in a grid and displayed as a trend line. | Multiple VMs |
+| Traffic Comparison | This workbooks lets you compare network traffic trends for a single machine or a group of machines. | Multiple VMs |
| Performance | Provides a customizable version of our Performance view that leverages all of the Log Analytics performance counters that you have enabled. | Single VM | | Connections | Connections provides an in-depth view of the inbound and outbound connections from your VM. | Single VM |
To pin a link to a workbook to an Azure Dashboard:
- To identify limitations and overall VM performance, see [View Azure VM Performance](vminsights-performance.md). -- To learn about discovered application dependencies, see [View VM insights Map](vminsights-maps.md).
+- To learn about discovered application dependencies, see [View VM insights Map](vminsights-maps.md).
azure-netapp-files Azure Netapp Files Create Volumes Smb https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azure-netapp-files-create-volumes-smb.md
na ms.devlang: na Previously updated : 02/16/2021 Last updated : 03/01/2021 # Create an SMB volume for Azure NetApp Files
Before creating an SMB volume, you need to create an Active Directory connection
* Select **SMB** as the protocol type for the volume. * Select your **Active Directory** connection from the drop-down list. * Specify the name of the shared volume in **Share name**.
+ * If you want to enable Continuous Availability for the SMB volume, select **Enable Continuous Availability**.
- ![Specify SMB protocol](../media/azure-netapp-files/azure-netapp-files-protocol-smb.png)
+ > [!IMPORTANT]
+ > The SMB Continuous Availability feature is currently in public preview. You need to submit a waitlist request for accessing the feature through the **[Azure NetApp Files SMB Continuous Availability Shares Public Preview waitlist submission page](https://aka.ms/anfsmbcasharespreviewsignup)**. Wait for an official confirmation email from the Azure NetApp Files team before using the Continuous Availability feature.
+ >
+ > You should enable Continuous Availability only for SQL workloads. Using SMB Continuous Availability shares for workloads other than SQL Server is *not* supported. This feature is currently supported on Windows SQL Server. Linux SQL Server is not currently supported. If you are using a non-administrator (domain) account to install SQL Server, ensure that the account has the required security privilege assigned. If the domain account does not have the required security privilege (`SeSecurityPrivilege`), and the privilege cannot be set at the domain level, you can grant the privilege to the account by using the **Security privilege users** field of Active Directory connections. See [Create an Active Directory connection](create-active-directory-connections.md#create-an-active-directory-connection).
+
+ <!-- [1/13/21] Commenting out command-based steps below, because the plan is to use form-based (URL) registration, similar to CRR feature registration -->
+ <!--
+ ```azurepowershell-interactive
+ Register-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFSMBCAShare
+ ```
+
+ Check the status of the feature registration:
+
+ > [!NOTE]
+ > The **RegistrationState** may be in the `Registering` state for up to 60 minutes before changing to`Registered`. Wait until the status is `Registered` before continuing.
+
+ ```azurepowershell-interactive
+ Get-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFSMBCAShare
+ ```
+
+ You can also use [Azure CLI commands](/cli/azure/feature?preserve-view=true&view=azure-cli-latest) `az feature register` and `az feature show` to register the feature and display the registration status.
+ -->
+
+ ![Screenshot that describes the Protocol tab of creating an SMB volume.](../media/azure-netapp-files/azure-netapp-files-protocol-smb.png)
5. Click **Review + Create** to review the volume details. Then click **Create** to create the SMB volume.
azure-netapp-files Azure Netapp Files Faqs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azure-netapp-files-faqs.md
The volume size reported by the SMB client is the maximum size the Azure NetApp
As a best practice, set the maximum tolerance for computer clock synchronization to five minutes. For more information, see [Maximum tolerance for computer clock synchronization](/previous-versions/windows/it-pro/windows-server-2012-r2-and-2012/jj852172(v=ws.11)).
-<!--
-### Does Azure NetApp Files support LDAP signing?
-
-Yes, Azure NetApp Files supports LDAP signing by default. This functionality enables secure LDAP lookups between the Azure NetApp Files service and the user-specified [Active Directory Domain Services domain controllers](/windows/win32/ad/active-directory-domain-services). For more information, see [ADV190023 | Microsoft Guidance for Enabling LDAP Channel Binding and LDAP Signing](https://portal.msrc.microsoft.com/en-us/security-guidance/advisory/ADV190023).
> - ## Capacity management FAQs ### How do I monitor usage for capacity pool and volume of Azure NetApp Files?
azure-netapp-files Azure Netapp Files Solution Architectures https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azure-netapp-files-solution-architectures.md
This section provides references for Windows applications and SQL Server solutio
### SQL Server * [Deploy SQL Server Over SMB with Azure NetApp Files](https://www.youtube.com/watch?v=x7udfcYbibs)
-<!-- * [Deploy SQL Server Always-On Failover Cluster over SMB with Azure NetApp Files](https://www.youtube.com/watch?v=zuNJ5E07e8Q) -->
-<!-- * [Deploy Always-On Availability Groups with Azure NetApp Files](https://www.youtube.com/watch?v=y3VQmzzeyvc) -->
+* [Deploy SQL Server Always-On Failover Cluster over SMB with Azure NetApp Files](https://www.youtube.com/watch?v=zuNJ5E07e8Q)
+* [Deploy Always-On Availability Groups with Azure NetApp Files](https://www.youtube.com/watch?v=y3VQmzzeyvc)
+* [Benefits of using Azure NetApp Files for SQL Server deployment](solutions-benefits-azure-netapp-files-sql-server.md)
## SAP on Azure solutions
azure-netapp-files Create Active Directory Connections https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/create-active-directory-connections.md
na ms.devlang: na Previously updated : 02/16/2021 Last updated : 03/01/2021 # Create and manage Active Directory connections for Azure NetApp Files
This setting is configured in the **Active Directory Connections** under **NetAp
You can also use [Azure CLI commands](/cli/azure/feature) `az feature register` and `az feature show` to register the feature and display the registration status.
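A sketch of those CLI calls, with the feature name left as a placeholder since it isn't shown in this excerpt:

```azurecli-interactive
az feature register --namespace Microsoft.NetApp --name <feature-name>
az feature show --namespace Microsoft.NetApp --name <feature-name> \
  --query properties.state
```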
+ * **Security privilege users** <!-- SMB CA share feature -->
+ You can grant security privilege (`SeSecurityPrivilege`) to users that require elevated privilege to access the Azure NetApp Files volumes. The specified user accounts will be allowed to perform certain actions on Azure NetApp Files SMB shares that require security privilege not assigned by default to domain users.
+
+ For example, user accounts used for installing SQL Server in certain scenarios must be granted elevated security privilege. If you are using a non-administrator (domain) account to install SQL Server and the account does not have the security privilege assigned, you should add security privilege to the account.
+
+ > [!IMPORTANT]
+ > The domain account used for installing SQL Server must already exist before you add it to the **Security privilege users** field. When you add the SQL Server installer's account to **Security privilege users**, the Azure NetApp Files service might validate the account by contacting the domain controller. The command might fail if it cannot contact the domain controller.
+
+ For more information about `SeSecurityPrivilege` and SQL Server, see [SQL Server installation fails if the Setup account doesn't have certain user rights](/troubleshoot/sql/install/installation-fails-if-remove-user-right).
+
+ ![Screenshot showing the Security privilege users box of Active Directory connections window.](../media/azure-netapp-files/security-privilege-users.png)
+ * **Backup policy users** You can include additional accounts that require elevated privileges to the computer account created for use with Azure NetApp Files. The specified accounts will be allowed to change the NTFS permissions at the file or folder level. For example, you can specify a non-privileged service account used for migrating data to an SMB file share in Azure NetApp Files.
azure-netapp-files Solutions Benefits Azure Netapp Files Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/solutions-benefits-azure-netapp-files-sql-server.md
+
+ Title: Benefits of using Azure NetApp Files for SQL Server deployment | Microsoft Docs
+description: Shows a detailed cost analysis and the performance benefits of using Azure NetApp Files for SQL Server deployment.
+
+documentationcenter: ''
++
+editor: ''
+
+ms.assetid:
++
+ na
+ms.devlang: na
+ Last updated : 02/08/2021++
+# Benefits of using Azure NetApp Files for SQL Server deployment
+
+Azure NetApp Files reduces SQL Server total cost of ownership (TCO) as compared to block storage solutions. With block storage, virtual machines have imposed limits on I/O and bandwidth for disk operations; with Azure NetApp Files, only network bandwidth limits apply. In other words, no VM-level I/O limits are applied to Azure NetApp Files. Without these I/O limits, SQL Server running on smaller virtual machines connected to Azure NetApp Files can perform as well as SQL Server running on much larger virtual machines. Sizing instances down as such reduces the compute cost to 25% of the former price tag. *You can reduce compute costs with Azure NetApp Files.*
+
+Compute costs, however, are small compared to SQL Server license costs. Microsoft SQL Server [licensing](https://download.microsoft.com/download/B/C/0/BC0B2EA7-D99D-42FB-9439-2C56880CAFF4/SQL_Server_2017_Licensing_Datasheet.pdf) is tied to physical core count. As such, decreasing instance size introduces an even larger cost saving for software licensing. *You can reduce software license costs with Azure NetApp Files.*
+
+The cost of the storage itself is variable, depending on the actual size of the database. Regardless of the storage selected, capacity has cost, whether it is a managed disk or a file share. As database sizes increase, the cost of storage grows and contributes more to the TCO, affecting the overall cost. As such, the assertion is adjusted as follows: *You can reduce SQL Server deployment costs with Azure NetApp Files.*
+
+This article shows a detailed cost analysis and the performance benefits of using Azure NetApp Files for SQL Server deployment. Not only do smaller instances have sufficient CPU to do the database work that is only possible with block storage on larger instances, *in many cases, the smaller instances are even more performant than their larger, disk-based counterparts because of Azure NetApp Files.*
+
+## Detailed cost analysis
+
+The two sets of graphics in this section show the TCO example. The number and type of managed disks, the Azure NetApp Files service level, and the capacity for each scenario have been selected to achieve the best price-capacity-performance. Each graphic is made up of grouped machines (for example, the D16 with Azure NetApp Files compared to the D64 with managed disk), and prices are broken down for each machine type.
+
+The first set of graphics shows the overall cost of the solution using a 1-TiB database size, comparing the D16s_v3 to the D64, the D8 to the D32, and the D4 to the D16. The projected IOPS for each configuration are indicated by a green or yellow line and correspond to the right-hand Y axis.
+
+[ ![Graphic that shows overall cost of the solution using a 1-TiB database size.](../media/azure-netapp-files/solution-sql-server-cost-1-tib.png) ](../media/azure-netapp-files/solution-sql-server-cost-1-tib.png#lightbox)
++
+The second set of graphics shows the overall cost using a 50-TiB database. The comparisons are otherwise the same; for example, the D16 with Azure NetApp Files versus the D64 with block storage.
+
+[ ![Graphic that shows overall cost using a 50-TiB database size.](../media/azure-netapp-files/solution-sql-server-cost-50-tib.png) ](../media/azure-netapp-files/solution-sql-server-cost-50-tib.png#lightbox)
+
+## Performance, and lots of it
+
+To deliver on the significant cost reduction assertion requires lots of performance; for example, the largest instances in the general Azure inventory support 80,000 disk IOPS. A single Azure NetApp Files volume can achieve 80,000 database IOPS, and instances such as the D16 are able to consume the same. The D16, normally capable of 25,600 disk IOPS, is 25% the size of the D64. The D64s_v3 is capable of 80,000 disk IOPS, and as such, presents an excellent upper-level comparison point.
+
+The D16s_v3 can drive an Azure NetApp Files volume to 80,000 database IOPS. As proven by the SQL Storage Benchmark (SSB) benchmarking tool, the D16 instance achieved a workload 125% greater than what the D64 instance could achieve against disk. See the [SSB testing tool](#ssb-testing-tool) section for details about the tool.
+
+Using a 1-TiB working set size and an 80% read, 20% update SQL Server workload, the performance capabilities of most of the instances in the D instance class were measured; most, not all, as the D2 and D64 instances themselves were excluded from testing. The former was left out because it doesn't support accelerated networking, and the latter because it's the comparison point. See the following graph to understand the limits of the D4s_v3, D8s_v3, D16s_v3, and D32s_v3, respectively. Managed disk storage tests are not shown in the graph. Comparison values are drawn directly from the [Azure Virtual Machine limits table](../virtual-machines/dv3-dsv3-series.md) for the D class instance type.
+
+With Azure NetApp Files, each of the instances in the D class can meet or exceed the disk performance capabilities of instances two times larger. *You can reduce software license costs significantly with Azure NetApp Files.*
+
+* The D4 at 75% CPU utilization matched the disk capabilities of the D16.
+ * The D16 is rate limited at 25,600 disk IOPS.
+* The D8 at 75% CPU utilization matched the disk capabilities of the D32.
+ * The D32 is rate limited at 51,200 disk IOPS.
+* The D16 at 55% CPU utilization matched the disk capabilities of the D64.
+ * The D64 is rate limited at 80,000 disk IOPS.
+* The D32 at 15% CPU utilization matched the disk capabilities of the D64 as well.
+ * The D64 as stated above is rate limited at 80,000 disk IOPS.
+
+### SSB CPU limits test: Performance versus processing power
+
+The following diagram summarizes the SSB CPU limits test:
+
+![Diagram that shows average CPU percentage for single-instance SQL Server over Azure NetApp Files.](../media/azure-netapp-files/solution-sql-server-single-instance-average-cpu.png)
+
+Scalability is only part of the story. The other part is latency. It's one thing for smaller virtual machines to have the ability to drive much higher I/O rates; it's another thing to do so with low single-digit latencies, as shown below.
+
+* The D4 drove 26,000 IOPS against Azure NetApp Files at 2.3-ms latency.
+* The D8 drove 51,000 IOPS against Azure NetApp Files at 2.0-ms latency.
+* The D16 drove 88,000 IOPS against Azure NetApp Files at 2.8-ms latency.
+* The D32 drove 80,000 IOPS against Azure NetApp Files at 2.4-ms latency.
+
+### SSB per-instance-type latency results
+
+The following diagram shows the latency for single-instance SQL Server over Azure NetApp Files:
+
+![Diagram that shows latency for single-instance SQL Server over Azure NetApp Files.](../media/azure-netapp-files/solution-sql-server-single-instance-latency.png)
+
+## SSB testing tool
+
+The [TPC-E](http://www.tpc.org/tpce/) benchmarking tool, by design, stresses *compute* rather than *storage*. The test results shown in this section are based on a stress testing tool named SQL Storage Benchmark (SSB). The SQL Server Storage Benchmark can drive massive-scale SQL execution against a SQL Server database to simulate an OLTP workload, similar to the [SLOB2 Oracle benchmarking tool](https://kevinclosson.net/slob/).
+
+The SSB tool generates a SELECT and UPDATE driven workload, issuing the statements directly to the SQL Server database running within the Azure virtual machine. For this project, the SSB workloads ramped from 1 to 100 SQL Server users, with 10 or 12 intermediate points at 15 minutes per user count. All performance metrics from these runs were from the point of view of perfmon; for repeatability, SSB ran three times per scenario.
+
+The tests themselves were configured as 80% SELECT and 20% UPDATE statements, thus 90% random read. The database itself, which SSB created, was 1,000 GB in size. It comprises 15 user tables, with 9,000,000 rows per user table and 8,192 bytes per row.
+
+The SSB benchmark is an open-source tool. It's freely available at the [SQL Storage Benchmark GitHub page](https://github.com/NetApp/SQL_Storage_Benchmark.git).
++
+## In summary
+
+With Azure NetApp Files, you can increase SQL Server performance while reducing your total cost of ownership significantly.
+
+## Next steps
+
+* [Create an SMB volume for Azure NetApp Files](azure-netapp-files-create-volumes-smb.md)
+* [Solution architectures using Azure NetApp Files ΓÇô SQL Server](azure-netapp-files-solution-architectures.md#sql-server)
+
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/whats-new.md
# What's new in Azure NetApp Files
-Azure NetApp Files is updated on a regular basis. This article provides a summary about the latest new features and enhancements.
+Azure NetApp Files is updated regularly. This article provides a summary about the latest new features and enhancements.
-## March 2021
+## March 2021
+
+* SMB Continuous Availability (CA) shares (Preview)
+
+ SMB Transparent Failover enables maintenance operations on the Azure NetApp Files service without interrupting connectivity to server applications storing and accessing data on SMB volumes. To support SMB Transparent Failover, Azure NetApp Files now supports the SMB Continuous Availability shares option for use with SQL Server applications over SMB running on Azure VMs. This feature is currently supported on Windows SQL Server. Linux SQL Server is not currently supported. Enabling this feature provides significant SQL Server performance improvements and scale and cost benefits for [Single Instance, Always-On Failover Cluster Instance and Always-On Availability Group deployments](azure-netapp-files-solution-architectures.md#sql-server). See [Benefits of using Azure NetApp Files for SQL Server deployment](solutions-benefits-azure-netapp-files-sql-server.md).
* [Automatic resizing of a cross-region replication destination volume](azure-netapp-files-resize-capacity-pools-or-volumes.md#resize-a-cross-region-replication-destination-volume)
Azure NetApp Files is updated on a regular basis. This article provides a summar
## December 2020
-* [Azure Application Consistent Snapshot Tool](azacsnap-introduction.md) (Public Preview)
+* [Azure Application Consistent Snapshot Tool](azacsnap-introduction.md) (Preview)
Azure Application Consistent Snapshot Tool (AzAcSnap) is a command-line tool that enables you to simplify data protection for third-party databases (SAP HANA) in Linux environments (for example, SUSE and RHEL).
Azure NetApp Files is updated on a regular basis. This article provides a summar
## September 2020
-* [Azure NetApp Files cross-region replication](cross-region-replication-introduction.md) (Public Preview)
+* [Azure NetApp Files cross-region replication](cross-region-replication-introduction.md) (Preview)
Azure NetApp Files now supports cross-region replication. With this new disaster recovery capability, you can replicate your Azure NetApp Files volumes from one Azure region to another in a fast and cost-effective way, protecting your data from unforeseeable regional failures. Azure NetApp Files cross region replication leverages NetApp SnapMirror® technology; only changed blocks are sent over the network in a compressed, efficient format. This proprietary technology minimizes the amount of data required to replicate across the regions, therefore saving data transfer costs. It also shortens the replication time, so you can achieve a smaller Restore Point Objective (RPO).
azure-sql Resource Limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/resource-limits.md
SQL Managed Instance has two service tiers: [General Purpose](../database/servic
| Max number of database files per instance | Up to 280, unless the instance storage size or [Azure Premium Disk storage allocation space](../database/doc-changes-updates-release-notes.md#exceeding-storage-space-with-small-database-files) limit has been reached. | 32,767 files per database, unless the instance storage size limit has been reached. | | Max data file size | Limited to currently available instance storage size (max 2 TB - 8 TB) and [Azure Premium Disk storage allocation space](../database/doc-changes-updates-release-notes.md#exceeding-storage-space-with-small-database-files). | Limited to currently available instance storage size (up to 1 TB - 4 TB). | | Max log file size | Limited to 2 TB and currently available instance storage size. | Limited to 2 TB and currently available instance storage size. |
-| Data/Log IOPS (approximate) | Up to 30-40 K IOPS per instance*, 500 - 7500 per file<br/>\*[Increase file size to get more IOPS](#file-io-characteristics-in-general-purpose-tier)| 10 K - 200 K (4000 IOPS/vCore)<br/>Add more vCores to get better IO performance. |
+| Data/Log IOPS (approximate) | Up to 30-40 K IOPS per instance*, 500 - 7500 per file<br/>\*[Increase file size to get more IOPS](#file-io-characteristics-in-general-purpose-tier)| 16 K - 320 K (4000 IOPS/vCore)<br/>Add more vCores to get better IO performance. |
| Log write throughput limit (per instance) | 3 MB/s per vCore<br/>Max 120 MB/s per instance<br/>22 - 65 MB/s per DB<br/>\*[Increase the file size to get better IO performance](#file-io-characteristics-in-general-purpose-tier) | 4 MB/s per vCore<br/>Max 96 MB/s | | Data throughput (approximate) | 100 - 250 MB/s per file<br/>\*[Increase the file size to get better IO performance](#file-io-characteristics-in-general-purpose-tier) | Not limited. | | Storage IO latency (approximate) | 5-10 ms | 1-2 ms |
azure-vmware Concepts Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/concepts-identity.md
Title: Concepts - Identity and access description: Learn about the identity and access concepts of Azure VMware Solution Previously updated : 03/18/2021 Last updated : 03/22/2021 # Azure VMware Solution identity concepts
-Azure VMware Solution private clouds are provisioned with a vCenter server and NSX-T Manager. You use vCenter to manage virtual machine (VM) workloads. You use the NSX-T Manager to manage and extend the private cloud network.
+Azure VMware Solution private clouds are provisioned with a vCenter Server and NSX-T Manager. You use vCenter to manage virtual machine (VM) workloads and NSX-T Manager to manage and extend the private cloud. Access and identity management use the CloudAdmin role for vCenter and restricted administrator rights for NSX-T Manager.
-The vCenter Access and identity management uses the buildin CloudAdmin group privileges. The NSX-T Manager uses restricted administrator permissions. This is by nature of the managed service and ensures that your private cloud platform upgrades with the newest features and patches as to be expected. For more information, see [private cloud upgrades concepts article][concepts-upgrades].
+For more information, see [private cloud upgrades concepts article][concepts-upgrades].
## vCenter access and identity
-The vCenter CloudAdmin group defines and provides the privileges in vCenter. Another option is to provide access and identity through the integration of vCenter LDAP single sign-on with Azure Active Directory. You enable that integration after you deploy your private cloud.
-
-The table shows **CloudAdmin** and **CloudGlobalAdmin** privileges.
-
-| Privilege Set | CloudAdmin | CloudGlobalAdmin | Comment |
-| : | :: | :: | :--: |
-| Alarms | A CloudAdmin user has all Alarms privileges for alarms in the Compute-ResourcePool and VMs. | -- | -- |
-| Auto Deploy | -- | -- | Microsoft does host management. |
-| Certificates | -- | -- | Microsoft does certificate management. |
-| Content Library | A CloudAdmin user has privileges to create and use files in a Content Library. | Enabled with SSO. | Microsoft will distribute files in the Content Library to ESXi hosts. |
-| Datacenter | -- | -- | Microsoft does all data center operations. |
-| Datastore | Datastore.AllocateSpace, Datastore.Browse, Datastore.Config, Datastore.DeleteFile, Datastore.FileManagement, Datastore.UpdateVirtualMachineMetadata | -- | -- |
-| ESX Agent Manager | -- | -- | Microsoft does all operations. |
-| Folder | A CloudAdmin user has all Folder privileges. | -- | -- |
-| Global | Global.CancelTask, Global.GlobalTag, Global.Health, Global.LogEvent, Global.ManageCustomFields, Global.ServiceManagers, Global.SetCustomField, Global.SystemTag | | |
-| Host | Host.Hbr.HbrManagement | -- | Microsoft does all other Host operations. |
-| InventoryService | InventoryService.Tagging | -- | -- |
-| Network | Network.Assign | | Microsoft does all other Network operations. |
-| Permissions | -- | -- | Microsoft does all Permissions operations. |
-| Profile-driven Storage | -- | -- | Microsoft does all Profile operations. |
-| Resource | A CloudAdmin user has all Resource privileges. | -- | -- |
-| Scheduled Task | A CloudAdmin user has all ScheduleTask privileges. | -- | -- |
-| Sessions | Sessions.GlobalMessage, Sessions.ValidateSession | -- | Microsoft does all other Sessions operations. |
-| Storage Views | StorageViews.View | -- | Microsoft does all other Storage View operations (Configure Service). |
-| Tasks | -- | -- | Microsoft manages extensions that manage tasks. |
-| vApp | A CloudAdmin user has all vApp privileges. | -- | -- |
-| Virtual Machine | A CloudAdmin user has all VirtualMachine privileges. | -- | -- |
-| vService | A CloudAdmin user has all vService privileges. | -- | -- |
+In Azure VMware Solution, vCenter has a built-in local user called *cloudadmin* that is assigned to the CloudAdmin role. The local cloudadmin user is used to set up users in Active Directory (AD). In general, the CloudAdmin role creates and manages workloads in your private cloud. But in Azure VMware Solution, the CloudAdmin role has vCenter privileges that differ from those in other VMware cloud solutions.
+
+- In a vCenter and ESXi on-premises deployment, the administrator has access to the vCenter administrator\@vsphere.local account. They can also have more AD users and groups assigned.
+
+- In an Azure VMware Solution deployment, the administrator doesn't have access to the administrator user account. They can, however, assign AD users and groups to the CloudAdmin role on vCenter.
+
+The private cloud user doesn't have access to, and can't configure, specific management components that Microsoft supports and manages, such as clusters, hosts, datastores, and distributed virtual switches.
+
+> [!IMPORTANT]
+> Azure VMware Solution offers custom roles on vCenter but currently doesn't offer them on the Azure VMware Solution portal. For more information, see the [Create custom roles on vCenter](#create-custom-roles-on-vcenter) section later in this article.
+
+### View the vCenter privileges
+
+You can view the privileges granted to the Azure VMware Solution CloudAdmin role on your Azure VMware Solution private cloud vCenter.
+
+1. Sign in to the SDDC vSphere Client and go to **Menu** > **Administration**.
+1. Under **Access Control**, select **Roles**.
+1. From the list of roles, select **CloudAdmin** and then select **Privileges**.
+
+ :::image type="content" source="media/role-based-access-control-cloudadmin-privileges.png" alt-text="How to view the CloudAdmin role privileges in vSphere Client":::
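+
+You can also read the same privilege list programmatically. The following is a minimal VMware PowerCLI sketch; it assumes the PowerCLI module is installed, and the server FQDN shown is a placeholder for your own private cloud vCenter.
+
+```powershell
+# Connect to the private cloud vCenter (placeholder FQDN; you're prompted for the password).
+Connect-VIServer -Server "vcenter.contoso.avs.azure.com" -User "cloudadmin@vsphere.local"
+
+# List every privilege granted to the CloudAdmin role.
+Get-VIRole -Name "CloudAdmin" | Select-Object -ExpandProperty PrivilegeList
+```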
+
+The CloudAdmin role in Azure VMware Solution has the following privileges on vCenter. For more details, see the [VMware product documentation](https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.security.doc/GUID-ED56F3C4-77D0-49E3-88B6-B99B8B437B62.html).
+
+| Privilege | Description |
+| | -- |
+| **Alarms** | Acknowledge alarm<br />Create alarm<br />Disable alarm action<br />Modify alarm<br />Remove alarm<br />Set alarm status |
+| **Content Library** | Add library item<br />Create a subscription for a published library<br />Create local library<br />Create subscribed library<br />Delete library item<br />Delete local library<br />Delete subscribed library<br />Delete subscription of a published library<br />Download files<br />Evict library items<br />Evict subscribed library<br />Import storage<br />Probe subscription information<br />Publish a library item to its subscribers<br />Publish a library to its subscribers<br />Read storage<br />Sync library item<br />Sync subscribed library<br />Type introspection<br />Update configuration settings<br />Update files<br />Update library<br />Update library item<br />Update local library<br />Update subscribed library<br />Update subscription of a published library<br />View configuration settings |
+| **Cryptographic operations** | Direct access |
+| **Datastore** | Allocate space<br />Browse datastore<br />Configure datastore<br />Low-level file operations<br />Remove files<br />Update virtual machine metadata |
+| **Folder** | Create folder<br />Delete folder<br />Move folder<br />Rename folder |
+| **Global** | Cancel task<br />Global tag<br />Health<br />Log event<br />Manage custom attributes<br />Service managers<br />Set custom attribute<br />System tag |
+| **Host** | vSphere Replication<br />&#160;&#160;&#160;&#160;Manage replication |
+| **Network** | Assign network |
+| **Permissions** | Modify permissions<br />Modify role |
+| **Profile** | Profile driven storage view |
+| **Resource** | Apply recommendation<br />Assign vApp to resource pool<br />Assign virtual machine to resource pool<br />Create resource pool<br />Migrate powered off virtual machine<br />Migrate powered on virtual machine<br />Modify resource pool<br />Move resource pool<br />Query vMotion<br />Remove resource pool<br />Rename resource pool |
+| **Scheduled task** | Create task<br />Modify task<br />Remove task<br />Run task |
+| **Sessions** | Message<br />Validate session |
+| **Storage view** | View |
+| **vApp** | Add virtual machine<br />Assign resource pool<br />Assign vApp<br />Clone<br />Create<br />Delete<br />Export<br />Import<br />Move<br />Power off<br />Power on<br />Rename<br />Suspend<br />Unregister<br />View OVF environment<br />vApp application configuration<br />vApp instance configuration<br />vApp managedBy configuration<br />vApp resource configuration |
+| **Virtual machine** | Change Configuration<br />&#160;&#160;&#160;&#160;Acquire disk lease<br />&#160;&#160;&#160;&#160;Add existing disk<br />&#160;&#160;&#160;&#160;Add new disk<br />&#160;&#160;&#160;&#160;Add or remove device<br />&#160;&#160;&#160;&#160;Advanced configuration<br />&#160;&#160;&#160;&#160;Change CPU count<br />&#160;&#160;&#160;&#160;Change memory<br />&#160;&#160;&#160;&#160;Change settings<br />&#160;&#160;&#160;&#160;Change swapfile placement<br />&#160;&#160;&#160;&#160;Change resource<br />&#160;&#160;&#160;&#160;Configure host USB device<br />&#160;&#160;&#160;&#160;Configure raw device<br />&#160;&#160;&#160;&#160;Configure managedBy<br />&#160;&#160;&#160;&#160;Display connection settings<br />&#160;&#160;&#160;&#160;Extend virtual disk<br />&#160;&#160;&#160;&#160;Modify device settings<br />&#160;&#160;&#160;&#160;Query fault tolerance compatibility<br />&#160;&#160;&#160;&#160;Query unowned files<br />&#160;&#160;&#160;&#160;Reload from paths<br />&#160;&#160;&#160;&#160;Remove disk<br />&#160;&#160;&#160;&#160;Rename<br />&#160;&#160;&#160;&#160;Reset guest information<br />&#160;&#160;&#160;&#160;Set annotation<br />&#160;&#160;&#160;&#160;Toggle disk change tracking<br />&#160;&#160;&#160;&#160;Toggle fork parent<br />&#160;&#160;&#160;&#160;Upgrade virtual machine compatibility<br />Edit inventory<br />&#160;&#160;&#160;&#160;Create from existing<br />&#160;&#160;&#160;&#160;Create new<br />&#160;&#160;&#160;&#160;Move<br />&#160;&#160;&#160;&#160;Register<br />&#160;&#160;&#160;&#160;Remove<br />&#160;&#160;&#160;&#160;Unregister<br />Guest operations<br />&#160;&#160;&#160;&#160;Guest operation alias modification<br />&#160;&#160;&#160;&#160;Guest operation alias query<br />&#160;&#160;&#160;&#160;Guest operation modifications<br />&#160;&#160;&#160;&#160;Guest operation program execution<br />&#160;&#160;&#160;&#160;Guest operation queries<br />Interaction<br />&#160;&#160;&#160;&#160;Answer question<br />&#160;&#160;&#160;&#160;Back up operation on virtual machine<br />&#160;&#160;&#160;&#160;Configure CD media<br />&#160;&#160;&#160;&#160;Configure floppy media<br />&#160;&#160;&#160;&#160;Connect devices<br />&#160;&#160;&#160;&#160;Console interaction<br />&#160;&#160;&#160;&#160;Create screenshot<br />&#160;&#160;&#160;&#160;Defragment all disks<br />&#160;&#160;&#160;&#160;Drag and drop<br />&#160;&#160;&#160;&#160;Guest operating system management by VIX API<br />&#160;&#160;&#160;&#160;Inject USB HID scan codes<br />&#160;&#160;&#160;&#160;Install VMware tools<br />&#160;&#160;&#160;&#160;Pause or Unpause<br />&#160;&#160;&#160;&#160;Wipe or shrink operations<br />&#160;&#160;&#160;&#160;Power off<br />&#160;&#160;&#160;&#160;Power on<br />&#160;&#160;&#160;&#160;Record session on virtual machine<br />&#160;&#160;&#160;&#160;Replay session on virtual machine<br />&#160;&#160;&#160;&#160;Suspend<br />&#160;&#160;&#160;&#160;Suspend fault tolerance<br />&#160;&#160;&#160;&#160;Test failover<br />&#160;&#160;&#160;&#160;Test restart secondary VM<br />&#160;&#160;&#160;&#160;Turn off fault tolerance<br />&#160;&#160;&#160;&#160;Turn on fault tolerance<br />Provisioning<br />&#160;&#160;&#160;&#160;Allow disk access<br />&#160;&#160;&#160;&#160;Allow file access<br />&#160;&#160;&#160;&#160;Allow read-only disk access<br />&#160;&#160;&#160;&#160;Allow virtual machine download<br />&#160;&#160;&#160;&#160;Clone template<br />&#160;&#160;&#160;&#160;Clone virtual machine<br />&#160;&#160;&#160;&#160;Create template from virtual machine<br />&#160;&#160;&#160;&#160;Customize guest<br />&#160;&#160;&#160;&#160;Deploy template<br />&#160;&#160;&#160;&#160;Mark as template<br />&#160;&#160;&#160;&#160;Modify customization specification<br />&#160;&#160;&#160;&#160;Promote disks<br />&#160;&#160;&#160;&#160;Read customization specifications<br />Service configuration<br />&#160;&#160;&#160;&#160;Allow notifications<br />&#160;&#160;&#160;&#160;Allow polling of global event notifications<br />&#160;&#160;&#160;&#160;Manage service configuration<br />&#160;&#160;&#160;&#160;Modify service configuration<br />&#160;&#160;&#160;&#160;Query service configurations<br />&#160;&#160;&#160;&#160;Read service configuration<br />Snapshot management<br />&#160;&#160;&#160;&#160;Create snapshot<br />&#160;&#160;&#160;&#160;Remove snapshot<br />&#160;&#160;&#160;&#160;Rename snapshot<br />&#160;&#160;&#160;&#160;Revert snapshot<br />vSphere Replication<br />&#160;&#160;&#160;&#160;Configure replication<br />&#160;&#160;&#160;&#160;Manage replication<br />&#160;&#160;&#160;&#160;Monitor replication |
+| **vService** | Create dependency<br />Destroy dependency<br />Reconfigure dependency configuration<br />Update dependency |
+| **vSphere tagging** | Assign and unassign vSphere tag<br />Create vSphere tag<br />Create vSphere tag category<br />Delete vSphere tag<br />Delete vSphere tag category<br />Edit vSphere tag<br />Edit vSphere tag category<br />Modify UsedBy field for category<br />Modify UsedBy field for tag |
+
+### Create custom roles on vCenter
+
+Azure VMware Solution supports the use of custom roles with privileges equal to or less than those of the CloudAdmin role.
+
+The CloudAdmin role can create, modify, or delete custom roles that have privileges less than or equal to its current role. You might be able to create roles that have privileges greater than CloudAdmin, but you won't be able to assign the role to any users or groups, or to delete the role.
+
+To prevent the creation of roles that can't be assigned or deleted, we recommend cloning the CloudAdmin role as the basis for creating new custom roles.
+
+#### Create a custom role
+1. Sign in to vCenter with cloudadmin\@vsphere.local or a user with the CloudAdmin role.
+2. Navigate to the **Roles** configuration section and select **Menu** > **Administration** > **Access Control** > **Roles**.
+3. Select the **CloudAdmin** role and select the **Clone role action** icon.
+
+ > [!NOTE]
+ > Do not clone the **Administrator** role. This role cannot be used and the custom role created cannot be deleted by cloudadmin\@vsphere.local.
+
+4. Provide the name you want for the cloned role.
+5. Add or remove privileges for the role and select **OK**. The cloned role should now be visible in the **Roles** list.
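+
+The same clone operation can be scripted. Below is a minimal VMware PowerCLI sketch; it assumes an active `Connect-VIServer` session, and the custom role name is an example.
+
+```powershell
+# Clone the CloudAdmin role into a new custom role by copying its privilege set.
+$cloudAdminPrivileges = Get-VIPrivilege -Role (Get-VIRole -Name "CloudAdmin")
+New-VIRole -Name "MyCustomRole" -Privilege $cloudAdminPrivileges
+
+# Remove any privileges the custom role shouldn't have (example privilege ID).
+Set-VIRole -Role (Get-VIRole -Name "MyCustomRole") -RemovePrivilege (Get-VIPrivilege -Id "Datastore.FileManagement")
+```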
++
+#### Use a custom role
+
+1. Navigate to the object that requires the added permission. For example, to apply the permission to a folder, navigate to **Menu** > **VMs and Templates** > **Folder Name**.
+1. Right-click the object and select **Add Permission**.
+1. In the **Add Permission** window, select the identity source where the group or user can be found from the **User** drop-down.
+1. Search for the user or group under the **User** section.
+1. Select the role that will be applied for the user or group.
+1. Select the **Propagate to children** check box if needed, and select **OK**.
+ The added permission displays in the **Permissions** section for the object.
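+
+Assigning the role can also be scripted with PowerCLI. The sketch below assumes an active session; the folder, group, and role names are placeholders.
+
+```powershell
+# Grant a custom role to an AD group on a VM folder, propagating to child objects.
+$folder = Get-Folder -Name "Workload-VMs"
+New-VIPermission -Entity $folder -Principal "CONTOSO\vm-operators" -Role "MyCustomRole" -Propagate:$true
+```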
## NSX-T Manager access and identity
-Use the *administrator* account to access NSX-T Manager. It has full privileges and lets you create and manage Tier-1 (T1) Gateways, segments (logical switches) and all services. This account also provides access to the NSX-T Tier-0 (T0) Gateway. Be mindfull on makeing such changes, since that could result in degraded network performance or no private cloud access. Open a support request in the Azure portal to request any changes to your NSX-T T0 Gateway.
-
+Use the *administrator* account to access NSX-T Manager. It has full privileges and lets you create and manage Tier-1 (T1) Gateways, segments (logical switches), and all services. The privileges give you access to the NSX-T Tier-0 (T0) Gateway. A change to the T0 Gateway could result in degraded network performance or no private cloud access. Open a support request in the Azure portal to request any changes to your NSX-T T0 Gateway.
+
+
## Next steps Now that you've covered Azure VMware Solution access and identity concepts, you may want to learn about: - [Private cloud upgrade concepts](concepts-upgrades.md).-- [vSphere role-based access control for Azure VMware Solution](concepts-role-based-access-control.md). - [How to enable Azure VMware Solution resource](enable-azure-vmware-solution.md).
+- [Details of each privilege](https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.security.doc/GUID-ED56F3C4-77D0-49E3-88B6-B99B8B437B62.html).
+- [How Azure VMware Solution monitors and repairs private clouds](concepts-monitor-repair-private-cloud.md).
+- [How to enable Azure VMware Solution resource](enable-azure-vmware-solution.md).
+
-<!-- LINKS - external -->
+<!-- LINKS - external-->
+[VMware product documentation]: https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.security.doc/GUID-ED56F3C4-77D0-49E3-88B6-B99B8B437B62.html
<!-- LINKS - internal --> [concepts-upgrades]: ./concepts-upgrades.md
azure-vmware Concepts Private Clouds Clusters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/concepts-private-clouds-clusters.md
Hosts used to build or scale clusters come from an isolated pool of hosts. Those
[!INCLUDE [vmware-software-versions](includes/vmware-software-versions.md)]
+## Update frequency
+ ## Host maintenance and lifecycle management
azure-vmware Concepts Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/concepts-role-based-access-control.md
- Title: Concepts - vSphere role-based access control (vSphere RBAC)
-description: Learn about the key capabilities of vSphere role-based access control for Azure VMware Solution
- Previously updated : 03/18/2021--
-# vSphere role-based access control (vSphere RBAC) for Azure VMware Solution
-
-In Azure VMware Solution, vCenter has a built-in local user called cloudadmin and assigned to the built-in CloudAdmin role. The local cloudadmin user is used to set up users in AD. In general, the CloudAdmin role creates and manages workloads in your private cloud. In Azure VMware Solution, the CloudAdmin role has vCenter privileges that differ from other VMware cloud solutions.
-
-> [!NOTE]
-> Azure VMware Solution offers custom roles on vCenter does not offer them on the Azure VMware Solution portal. For more information, see the [Create custom roles on vCenter](#create-custom-roles-on-vcenter) section later in this article.
-
-In a vCenter and ESXi on-premises deployment, the administrator has access to the vCenter administrator@vsphere.local account. They can also have more Active Directory (AD) users/groups assigned.
-
-In an Azure VMware Solution deployment, the administrator doesn't have access to the administrator user account. But they can assign AD users and groups to the CloudAdmin role on vCenter.
-
-The private cloud user doesn't have access and can not configure specific management components supported and managed by Microsoft. For example clusters, hosts, datastores, and distributed virtual switches.
-
-## Azure VMware Solution CloudAdmin role on vCenter
-
-You can view the privileges granted to the Azure VMware Solution CloudAdmin role on your Azure VMware Solution private cloud vCenter.
-
-1. Log into vCenter and go to **Menu** > **Administration**.
-1. Under **Access Control**, select **Roles**.
-1. From the list of roles, select **CloudAdmin** and then select **Privileges**.
-
- :::image type="content" source="media/role-based-access-control-cloudadmin-privileges.png" alt-text="How to view the CloudAdmin role privileges in vSphere Client":::
-
-The CloudAdmin role in Azure VMware Solution has the following privileges on vCenter. Refer to the [VMware product documentation](https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.security.doc/GUID-ED56F3C4-77D0-49E3-88B6-B99B8B437B62.html) for a detailed explanation of each privilege.
-
-| Privilege | Description |
-| | -- |
-| **Alarms** | Acknowledge alarm<br />Create alarm<br />Disable alarm action<br />Modify alarm<br />Remove alarm<br />Set alarm status |
-| **Permissions** | Modify permissions<br />Modify role |
-| **Content Library** | Add library item<br />Create a subscription for a published library<br />Create local library<br />Create subscribed library<br />Delete library item<br />Delete local library<br />Delete subscribed library<br />Delete subscription of a published library<br />Download files<br />Evict library items<br />Evict subscribed library<br />Import storage<br />Probe subscription information<br />Publish a library item to its subscribers<br />Publish a library to its subscribers<br />Read storage<br />Sync library item<br />Sync subscribed library<br />Type introspection<br />Update configuration settings<br />Update files<br />Update library<br />Update library item<br />Update local library<br />Update subscribed library<br />Update subscription of a published library<br />View configuration settings |
-| **Cryptographic operations** | Direct access |
-| **Datastore** | Allocate space<br />Browse datastore<br />Configure datastore<br />Low-level file operations<br />Remove files<br />Update virtual machine metadata |
-| **Folder** | Create folder<br />Delete folder<br />Move folder<br />Rename folder |
-| **Global** | Cancel task<br />Global tag<br />Health<br />Log event<br />Manage custom attributes<br />Service managers<br />Set custom attribute<br />System tag |
-| **Host** | vSphere Replication<br />&#160;&#160;&#160;&#160;Manage replication |
-| **vSphere tagging** | Assign and unassign vSphere tag<br />Create vSphere tag<br />Create vSphere tag category<br />Delete vSphere tag<br />Delete vSphere tag category<br />Edit vSphere tag<br />Edit vSphere tag category<br />Modify UsedBy field for category<br />Modify UsedBy field for tag |
-| **Network** | Assign network |
-| **Resource** | Apply recommendation<br />Assign vApp to resource pool<br />Assign virtual machine to resource pool<br />Create resource pool<br />Migrate powered off virtual machine<br />Migrate powered on virtual machine<br />Modify resource pool<br />Move resource pool<br />Query vMotion<br />Remove resource pool<br />Rename resource pool |
-| **Scheduled task** | Create task<br />Modify task<br />Remove task<br />Run task |
-| **Sessions** | Message<br />Validate session |
-| **Profile** | Profile driven storage view |
-| **Storage view** | View |
-| **vApp** | Add virtual machine<br />Assign resource pool<br />Assign vApp<br />Clone<br />Create<br />Delete<br />Export<br />Import<br />Move<br />Power off<br />Power on<br />Rename<br />Suspend<br />Unregister<br />View OVF environment<br />vApp application configuration<br />vApp instance configuration<br />vApp managedBy configuration<br />vApp resource configuration |
-| **Virtual machine** | Change Configuration<br />&#160;&#160;&#160;&#160;Acquire disk lease<br />&#160;&#160;&#160;&#160;Add existing disk<br />&#160;&#160;&#160;&#160;Add new disk<br />&#160;&#160;&#160;&#160;Add or remove device<br />&#160;&#160;&#160;&#160;Advanced configuration<br />&#160;&#160;&#160;&#160;Change CPU count<br />&#160;&#160;&#160;&#160;Change memory<br />&#160;&#160;&#160;&#160;Change settings<br />&#160;&#160;&#160;&#160;Change swapfile placement<br />&#160;&#160;&#160;&#160;Change resource<br />&#160;&#160;&#160;&#160;Configure host USB device<br />&#160;&#160;&#160;&#160;Configure raw device<br />&#160;&#160;&#160;&#160;Configure managedBy<br />&#160;&#160;&#160;&#160;Display connection settings<br />&#160;&#160;&#160;&#160;Extend virtual disk<br />&#160;&#160;&#160;&#160;Modify device settings<br />&#160;&#160;&#160;&#160;Query fault tolerance compatibility<br />&#160;&#160;&#160;&#160;Query unowned files<br />&#160;&#160;&#160;&#160;Reload from paths<br />&#160;&#160;&#160;&#160;Remove disk<br />&#160;&#160;&#160;&#160;Rename<br />&#160;&#160;&#160;&#160;Reset guest information<br />&#160;&#160;&#160;&#160;Set annotation<br />&#160;&#160;&#160;&#160;Toggle disk change tracking<br />&#160;&#160;&#160;&#160;Toggle fork parent<br />&#160;&#160;&#160;&#160;Upgrade virtual machine compatibility<br />Edit inventory<br />&#160;&#160;&#160;&#160;Create from existing<br />&#160;&#160;&#160;&#160;Create new<br />&#160;&#160;&#160;&#160;Move<br />&#160;&#160;&#160;&#160;Register<br />&#160;&#160;&#160;&#160;Remove<br />&#160;&#160;&#160;&#160;Unregister<br />Guest operations<br />&#160;&#160;&#160;&#160;Guest operation alias modification<br />&#160;&#160;&#160;&#160;Guest operation alias query<br />&#160;&#160;&#160;&#160;Guest operation modifications<br />&#160;&#160;&#160;&#160;Guest operation program execution<br />&#160;&#160;&#160;&#160;Guest operation queries<br />Interaction<br />&#160;&#160;&#160;&#160;Answer question<br />&#160;&#160;&#160;&#160;Back up operation on virtual machine<br />&#160;&#160;&#160;&#160;Configure CD media<br />&#160;&#160;&#160;&#160;Configure floppy media<br />&#160;&#160;&#160;&#160;Connect devices<br />&#160;&#160;&#160;&#160;Console interaction<br />&#160;&#160;&#160;&#160;Create screenshot<br />&#160;&#160;&#160;&#160;Defragment all disks<br />&#160;&#160;&#160;&#160;Drag and drop<br />&#160;&#160;&#160;&#160;Guest operating system management by VIX API<br />&#160;&#160;&#160;&#160;Inject USB HID scan codes<br />&#160;&#160;&#160;&#160;Install VMware tools<br />&#160;&#160;&#160;&#160;Pause or Unpause<br />&#160;&#160;&#160;&#160;Wipe or shrink operations<br />&#160;&#160;&#160;&#160;Power off<br />&#160;&#160;&#160;&#160;Power on<br />&#160;&#160;&#160;&#160;Record session on virtual machine<br />&#160;&#160;&#160;&#160;Replay session on virtual machine<br />&#160;&#160;&#160;&#160;Suspend<br />&#160;&#160;&#160;&#160;Suspend fault tolerance<br />&#160;&#160;&#160;&#160;Test failover<br />&#160;&#160;&#160;&#160;Test restart secondary VM<br />&#160;&#160;&#160;&#160;Turn off fault tolerance<br />&#160;&#160;&#160;&#160;Turn on fault tolerance<br />Provisioning<br />&#160;&#160;&#160;&#160;Allow disk access<br />&#160;&#160;&#160;&#160;Allow file access<br />&#160;&#160;&#160;&#160;Allow read-only disk access<br />&#160;&#160;&#160;&#160;Allow virtual machine download<br />&#160;&#160;&#160;&#160;Clone template<br />&#160;&#160;&#160;&#160;Clone virtual machine<br />&#160;&#160;&#160;&#160;Create template from virtual machine<br 
/>&#160;&#160;&#160;&#160;Customize guest<br />&#160;&#160;&#160;&#160;Deploy template<br />&#160;&#160;&#160;&#160;Mark as template<br />&#160;&#160;&#160;&#160;Modify customization specification<br />&#160;&#160;&#160;&#160;Promote disks<br />&#160;&#160;&#160;&#160;Read customization specifications<br />Service configuration<br />&#160;&#160;&#160;&#160;Allow notifications<br />&#160;&#160;&#160;&#160;Allow polling of global event notifications<br />&#160;&#160;&#160;&#160;Manage service configuration<br />&#160;&#160;&#160;&#160;Modify service configuration<br />&#160;&#160;&#160;&#160;Query service configurations<br />&#160;&#160;&#160;&#160;Read service configuration<br />Snapshot management<br />&#160;&#160;&#160;&#160;Create snapshot<br />&#160;&#160;&#160;&#160;Remove snapshot<br />&#160;&#160;&#160;&#160;Rename snapshot<br />&#160;&#160;&#160;&#160;Revert snapshot<br />vSphere Replication<br />&#160;&#160;&#160;&#160;Configure replication<br />&#160;&#160;&#160;&#160;Manage replication<br />&#160;&#160;&#160;&#160;Monitor replication |
-| **vService** | Create dependency<br />Destroy dependency<br />Reconfigure dependency configuration<br />Update dependency |
-
-## Create custom roles on vCenter
-
-Azure VMware Solution supports the use of custom roles with equal or lesser privileges than the CloudAdmin role.
-
-The CloudAdmin role can create, modify, or delete custom roles that have privileges lesser than or equal to their current role. You may be able to create roles that have privileges greater than CloudAdmin but you will not be able to assign the role to any users or groups or delete the role.
-
-To prevent the creation of roles that can't be assigned or deleted it is recommends to clone the CloudAdmin role as the basis for creating new custom roles.
-
-### Create a custom role
-1. Sign into vCenter with cloudadmin\@vsphere.local or a user with the CloudAdmin role.
-2. Navigate to the **Roles** configuration section and select **Menu** > **Administration** > **Access Control** > **Roles**.
-3. Select the **CloudAdmin** role and select the **Clone role action** icon.
-
- > [!NOTE]
- > Do not clone the **Administrator** role. This role cannot be used and the custom role created cannot be deleted by cloudadmin\@vsphere.local.
-
-4. Provide the name you want for the cloned role.
-5. Add or remove privileges for the role and select **OK**. The cloned role should now be visible in the **Roles** list.
--
-### Use a custom role
-
-1. Navigate to the object that requires the added permission. For example, to apply the permission to a folder, navigate to **Menu** > **VMs and Templates** > **Folder Name**
-1. Right-click the object and select **Add Permission**.
-1. In the **Add Permission** window, select the Identity Source in the **User** drop-down where the group or user can be found.
-1. Search for the user or group after selecting the Identity Source under the **User** section.
-1. Select the role that will be applied for the user or group.
-1. Check the **Propagate to children** if needed, and select **OK**.
- The added permission displays in the **Permissions** section for the object.
-
-## Next steps
-
-Now that you've covered the basics of vSphere role-based access control for Azure VMware Solution, you may want to learn about:
--- The details of each privilege in the [VMware product documentation](https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.security.doc/GUID-ED56F3C4-77D0-49E3-88B6-B99B8B437B62.html).-- [How Azure VMware Solution monitors and repairs private clouds](concepts-monitor-repair-private-cloud.md).-- [How to enable Azure VMware Solution resource](enable-azure-vmware-solution.md).-
-<!-- LINKS - internal -->
-
-<!-- LINKS - external-->
-[VMware product documentation]: https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.security.doc/GUID-ED56F3C4-77D0-49E3-88B6-B99B8B437B62.html
-
azure-vmware Concepts Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/concepts-storage.md
You can use Azure storage services in workloads running in your private cloud. T
Now that you've covered Azure VMware Solution storage concepts, you may want to learn about: - [Private cloud identity concepts](concepts-identity.md).-- [vSphere role-based access control for Azure VMware Solution](concepts-role-based-access-control.md).
+- [vSphere role-based access control for Azure VMware Solution](concepts-identity.md).
- [How to enable Azure VMware Solution resource](enable-azure-vmware-solution.md). - [Azure NetApp Files with Azure VMware Solution](netapp-files-with-azure-vmware-solution.md)
azure-vmware Ecosystem Migration Vms https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/ecosystem-migration-vms.md
+
+ Title: Migration solutions for Azure VMware Solution virtual machines
+description: Learn about leading migration solutions for your Azure VMware Solution virtual machines.
+ Last updated : 03/22/2021++
+# Migration solutions for Azure VMware Solution virtual machines (VMs)
+
+One of the most common use cases for Azure VMware Solution is data center evacuation. It lets you continue to maximize your VMware investments, because Azure VMware Solution will always be up to date. Additionally, you can enhance your workloads with the full range of native Azure services. An initial key step in this process is migrating your legacy VMware-based environment onto Azure VMware Solution.
+
+Our migration partners have industry-leading migration solutions in VMware-based environments. Customers around the world have used these solutions for their migrations to both Azure and Azure VMware Solution.
+
+You aren't required to use VMware HCX as a migration tool, which means you can also migrate physical workloads into Azure VMware Solution. Additionally, migrations to your Azure VMware Solution environment don't need an ExpressRoute connection if it's not available within your source environment. Migrations can be done to multiple locations if you decide to host those workloads in multiple Azure regions.
+
+For more information on these solutions, see [RiverMeadow](https://www.rivermeadow.com/migrating-to-vmware-on-azure).
azure-vmware Tutorial Deploy Vmware Hcx https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/tutorial-deploy-vmware-hcx.md
For more information on using HCX, go to the VMware technical documentation:
* [VMware HCX Documentation](https://docs.vmware.com/en/VMware-HCX/https://docsupdatetracker.net/index.html) * [Migrating Virtual Machines with VMware HCX](https://docs.vmware.com/en/VMware-HCX/services/user-guide/GUID-D0CD0CC6-3802-42C9-9718-6DA5FEC246C6.html?hWord=N4IghgNiBcIBIGEAaACAtgSwOYCcwBcMB7AOxAF8g) * [HCX required ports](https://ports.vmware.com/home/VMware-HCX)
+* [Set up an HCX proxy server before you approve the license key](https://docs.vmware.com/en/VMware-HCX/services/user-guide/GUID-920242B3-71A3-4B24-9ACF-B20345244AB2.html?hWord=N4IghgNiBcIA4CcD2APAngAgBIGEAaIAvkA)
azure-vmware Tutorial Expressroute Global Reach Private Cloud https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/tutorial-expressroute-global-reach-private-cloud.md
Before you enable connectivity between two ExpressRoute circuits using ExpressRo
>[!IMPORTANT] >In the context of these prerequisites, your on-premises ExpressRoute circuit is _circuit 1_, and your private cloud ExpressRoute circuit is in a different subscription and labeled _circuit 2_.
-## Create an ExpressRoute authorization key in the on-premises circuit
+## Create an ExpressRoute authorization key in the private cloud ExpressRoute circuit
[!INCLUDE [request-authorization-key](includes/request-authorization-key.md)]
azure-vmware Vrealize Operations For Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/vrealize-operations-for-azure-vmware-solution.md
Once the instance has been deployed, you can configure vRealize Operations to co
## Known limitations -- The **cloudadmin\@vsphere.local** user in Azure VMware Solution has [limited privileges](concepts-role-based-access-control.md). Virtual machines (VMs) on Azure VMware Solution doesn't support in-guest memory collection using VMware tools. Active and consumed memory utilization continues to work in this case.
+- The **cloudadmin\@vsphere.local** user in Azure VMware Solution has [limited privileges](concepts-identity.md). Virtual machines (VMs) on Azure VMware Solution doesn't support in-guest memory collection using VMware tools. Active and consumed memory utilization continues to work in this case.
- Workload optimization for host-based business intent doesn't work because Azure VMware Solutions manage cluster configurations, including DRS settings. - Workload optimization for the cross-cluster placement within the SDDC using the cluster-based business intent is fully supported with vRealize Operations Manager 8.0 and onwards. However, workload optimization isn't aware of resource pools and places the VMs at the cluster level. A user can manually correct it in the Azure VMware Solution vCenter Server interface. - You can't sign in to vRealize Operations Manager using your Azure VMware Solution vCenter Server credentials.
bastion Bastion Vm Copy Paste https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/bastion/bastion-vm-copy-paste.md
Previously updated : 05/04/2020 Last updated : 03/22/2021 # Customer intent: I want to copy and paste to and from VMs using Azure Bastion.
For browsers that support the advanced Clipboard API access, you can copy and pa
![Allow clipboard](./media/bastion-vm-manage/allow.png)
-Only text copy/paste is supported. For direct copy and paste, your browser may prompt you for clipboard access when the Bastion session is being initialized. **Allow** the web page to access the clipboard.
+Only text copy/paste is supported. For direct copy and paste, your browser may prompt you for clipboard access when the Bastion session is being initialized. **Allow** the web page to access the clipboard. If you are working from a Mac, the keyboard shortcut to paste is **SHIFT-CTRL-V**.
## <a name="to"></a>Copy to a remote session
batch Batch Pool Vm Sizes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-pool-vm-sizes.md
Title: Choose VM sizes and images for pools description: How to choose from the available VM sizes and OS versions for compute nodes in Azure Batch pools Previously updated : 03/08/2021 Last updated : 11/24/2020
Batch pools in the Virtual Machine configuration support almost all [VM sizes](.
| DC | Not supported | | Dv2, DSv2 | All sizes | | Dv3, Dsv3 | All sizes |
-| Dav4, Dasv4 | All sizes |
+| Dav4 | All sizes |
+| Dasv4 | All sizes |
| Ddv4, Ddsv4 | All sizes | | Dv4, Dsv4 | Not supported | | Ev3, Esv3 | All sizes, except for E64is_v3 |
-| Eav4, Easv4 | All sizes |
+| Eav4 | All sizes |
+| Easv4 | All sizes |
| Edv4, Edsv4 | All sizes | | Ev4, Esv4 | Not supported | | F, Fs | All sizes |
Batch pools in the Virtual Machine configuration support almost all [VM sizes](.
| NC | All sizes | | NCv2 | All sizes | | NCv3 | All sizes |
-| NCasT4_v3 | All sizes |
+| NCasT4_v3 | None - not yet available |
| ND | All sizes | | NDv2 | None - not yet available | | NV | All sizes |
Use one of the following APIs to return a list of Windows and Linux VM images cu
- PowerShell: [Get-AzBatchSupportedImage](/powershell/module/az.batch/get-azbatchsupportedimage) - Azure CLI: [az batch pool supported-images](/cli/azure/batch/pool/supported-images)
+We strongly recommend avoiding images with impending Batch support end-of-life (EOL) dates. You can discover these dates via the [`ListSupportedImages` API](https://docs.microsoft.com/rest/api/batchservice/account/listsupportedimages), [PowerShell](https://docs.microsoft.com/powershell/module/az.batch/get-azbatchsupportedimage), or the [Azure CLI](https://docs.microsoft.com/cli/azure/batch/pool/supported-images). For more information about Batch pool VM image selection, see the [Batch best practices guide](best-practices.md).
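+
+As an illustration, the following PowerShell sketch surfaces impending EOL dates. The account and resource group names are placeholders, and the property names assume the image information model returned by the service.
+
+```powershell
+# Build a Batch context from the account keys (placeholder names).
+$context = Get-AzBatchAccountKey -AccountName "mybatchaccount" -ResourceGroupName "myresourcegroup"
+
+# Show images that have a Batch support end-of-life date, soonest first.
+Get-AzBatchSupportedImage -BatchContext $context |
+    Where-Object { $_.BatchSupportEndOfLife } |
+    Sort-Object BatchSupportEndOfLife |
+    Format-Table NodeAgentSkuId, OsType, BatchSupportEndOfLife
+```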
+ ## Next steps - Learn about the [Batch service workflow and primary resources](batch-service-workflow-features.md) such as pools, nodes, jobs, and tasks.
batch Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/best-practices.md
This article discusses a collection of best practices and useful tips for using
- **Pools should have more than one compute node:** Individual nodes are not guaranteed to always be available. While uncommon, hardware failures, operating system updates, and a host of other issues can cause individual nodes to be offline. If your Batch workload requires deterministic, guaranteed progress, you should allocate pools with multiple nodes. -- **Do not reuse resource names:** Batch resources (jobs, pools, etc.) often come and go over time. For example, you may create a pool on Monday, delete it on Tuesday, and then create another pool on Thursday. Each new resource you create should be given a unique name that you haven't used before. This can be done by using a GUID (either as the entire resource name, or as a part of it) or embedding the time the resource was created in the resource name. Batch supports [DisplayName](/dotnet/api/microsoft.azure.batch.jobspecification.displayname), which can be used to give a resource a human readable name even if the actual resource ID is something that isn't that human friendly. Using unique names makes it easier for you to differentiate which particular resource did something in logs and metrics. It also removes ambiguity if you ever have to file a support case for a resource.
+- **Do not use images with impending end-of-life (EOL) dates.**
+  We strongly recommend avoiding images with impending Batch support end-of-life (EOL) dates. You can discover these dates via the [`ListSupportedImages` API](https://docs.microsoft.com/rest/api/batchservice/account/listsupportedimages), [PowerShell](https://docs.microsoft.com/powershell/module/az.batch/get-azbatchsupportedimage), or the [Azure CLI](https://docs.microsoft.com/cli/azure/batch/pool/supported-images). It's your responsibility to periodically refresh your view of the EOL dates pertinent to your pools and to migrate your workloads before an EOL date occurs. If you use a custom image with a specified node agent, follow the Batch support end-of-life dates for the image from which your custom image is derived or with which it's aligned.
+
+- **Do not reuse resource names.**
+ Batch resources (jobs, pools, etc.) often come and go over time. For example, you may create a pool on Monday, delete it on Tuesday, and then create another pool on Thursday. Each new resource you create should be given a unique name that you haven't used before. This can be done by using a GUID (either as the entire resource name, or as a part of it) or embedding the time the resource was created in the resource name. Batch supports [DisplayName](/dotnet/api/microsoft.azure.batch.jobspecification.displayname), which can be used to give a resource a human readable name even if the actual resource ID is something that isn't that human friendly. Using unique names makes it easier for you to differentiate which particular resource did something in logs and metrics. It also removes ambiguity if you ever have to file a support case for a resource.
+ - **Continuity during pool maintenance and failure:** It's best to have your jobs use pools dynamically. If your jobs use the same pool for everything, there's a chance that your jobs won't run if something goes wrong with the pool. This is especially important for time-sensitive workloads. To fix this, select or create a pool dynamically when you schedule each job, or have a way to override the pool name so that you can bypass an unhealthy pool.
The automated cleanup for the working directory will be blocked if you run a ser
- Learn about the [Batch service workflow and primary resources](batch-service-workflow-features.md) such as pools, nodes, jobs, and tasks. - Learn about [default Azure Batch quotas, limits, and constraints, and how to request quota increases](batch-quota-limit.md).-- Learn how to to [detect and avoid failures in pool and node background operations ](batch-pool-node-error-checking.md).
+- Learn how to [detect and avoid failures in pool and node background operations](batch-pool-node-error-checking.md).
cloud-services-extended-support Deploy Prerequisite https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/deploy-prerequisite.md
Deployments that utilized the old diagnostics plugins need the settings removed
## Key Vault creation
-Key Vault is used to store certificates that are associated to Cloud Services (extended support). Add the certificates to Key Vault, then reference the certificate thumbprints in Service Configuration file. You also need to enable Key Vault for appropriate permissions so that Cloud Services (extended support) resource can retrieve certificate stored as secrets from Key Vault. You can create a key vault in the [Azure portal](../key-vault/general/quick-create-portal.md) or by using [PowerShell](../key-vault/general/quick-create-powershell.md). The key vault must be created in the same region and subscription as the cloud service. For more information, see [Use certificates with Azure Cloud Services (extended support)](certificates-and-key-vault.md).
+Key Vault is used to store certificates that are associated with Cloud Services (extended support). Add the certificates to Key Vault, then reference the certificate thumbprints in the Service Configuration file. You also need to enable the Key Vault 'Access policies' (in the portal) for 'Azure Virtual Machines for deployment' and 'Azure Resource Manager for template deployment' so that the Cloud Services (extended support) resource can retrieve certificates stored as secrets from Key Vault. You can create a key vault in the [Azure portal](../key-vault/general/quick-create-portal.md) or by using [PowerShell](../key-vault/general/quick-create-powershell.md). The key vault must be created in the same region and subscription as the cloud service. For more information, see [Use certificates with Azure Cloud Services (extended support)](certificates-and-key-vault.md).
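+
+For illustration, a minimal PowerShell sketch of creating such a key vault follows. The vault name and region are placeholders, and the two switches enable the access policies described above.
+
+```powershell
+# Create a key vault that Cloud Services (extended support) can read certificates from.
+# -EnabledForDeployment maps to 'Azure Virtual Machines for deployment';
+# -EnabledForTemplateDeployment maps to 'Azure Resource Manager for template deployment'.
+New-AzKeyVault -Name "ContosoKeyVault" -ResourceGroupName "ContosOrg" -Location "East US" -EnabledForDeployment -EnabledForTemplateDeployment
+```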
## Next steps - Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services (extended support).
cloud-services-extended-support Enable Wad https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/enable-wad.md
Windows Azure Diagnostics extension can be enabled for Cloud Services (extended
```powershell # Create WAD extension object $storageAccountKey = Get-AzStorageAccountKey -ResourceGroupName "ContosOrg" -Name "contosostorageaccount"
-$configFile = "<WAD public configuration file path>"
-$wadExtension = New-AzCloudServiceDiagnosticsExtension -Name "WADExtension" -ResourceGroupName "ContosOrg" -CloudServiceName "ContosoCS" -StorageAccountName "contosostorageaccount" -StorageAccountKey $storageAccountKey[0].Value -DiagnosticsConfigurationPath $configFile -TypeHandlerVersion "1.5" -AutoUpgradeMinorVersion $true
+$configFilePath = "<Insert WAD public configuration file path>"
+$wadExtension = New-AzCloudServiceDiagnosticsExtension -Name "WADExtension" -ResourceGroupName "ContosOrg" -CloudServiceName "ContosoCS" -StorageAccountName "contosostorageaccount" -StorageAccountKey $storageAccountKey[0].Value -DiagnosticsConfigurationPath $configFilePath -TypeHandlerVersion "1.5" -AutoUpgradeMinorVersion $true
+
+# Add <privateConfig> settings
+$wadExtension.ProtectedSetting = "<Insert WAD Private Configuration as raw string here>"
# Get existing Cloud Service $cloudService = Get-AzCloudService -ResourceGroup "ContosOrg" -CloudServiceName "ContosoCS"
$cloudService.ExtensionProfile.Extension = $cloudService.ExtensionProfile.Extens
# Update Cloud Service $cloudService | Update-AzCloudService ```
+Download the public configuration file schema definition by executing the following PowerShell command:
+
+```powershell
+(Get-AzureServiceAvailableExtension -ExtensionName 'PaaSDiagnostics' -ProviderNamespace 'Microsoft.Azure.Diagnostics').PublicConfigurationSchema | Out-File -Encoding utf8 -FilePath 'PublicWadConfig.xsd'
+```
+Here is an example of the public configuration XML file:
+```xml
+<?xml version="1.0" encoding="utf-8"?>
+<PublicConfig xmlns="http://schemas.microsoft.com/ServiceHosting/2010/10/DiagnosticsConfiguration">
+ <WadCfg>
+ <DiagnosticMonitorConfiguration overallQuotaInMB="25000">
+ <PerformanceCounters scheduledTransferPeriod="PT1M">
+ <PerformanceCounterConfiguration counterSpecifier="\Processor(_Total)\% Processor Time" sampleRate="PT1M" unit="percent" />
+ <PerformanceCounterConfiguration counterSpecifier="\Memory\Committed Bytes" sampleRate="PT1M" unit="bytes"/>
+ </PerformanceCounters>
+ <EtwProviders>
+ <EtwEventSourceProviderConfiguration provider="SampleEventSourceWriter" scheduledTransferPeriod="PT5M">
+ <Event id="1" eventDestination="EnumsTable"/>
+ <DefaultEvents eventDestination="DefaultTable" />
+ </EtwEventSourceProviderConfiguration>
+ </EtwProviders>
+ </DiagnosticMonitorConfiguration>
+ </WadCfg>
+</PublicConfig>
+```
+Download the private configuration file schema definition by executing the following PowerShell command:
+
+```powershell
+(Get-AzureServiceAvailableExtension -ExtensionName 'PaaSDiagnostics' -ProviderNamespace 'Microsoft.Azure.Diagnostics').PrivateConfigurationSchema | Out-File -Encoding utf8 -FilePath 'PrivateWadConfig.xsd'
+```
+Here is an example of the private configuration XML file:
+
+```xml
+<?xml version="1.0" encoding="utf-8"?>
+<PrivateConfig xmlns="http://schemas.microsoft.com/ServiceHosting/2010/10/DiagnosticsConfiguration">
+ <StorageAccount name="string" key="string" />
+ <AzureMonitorAccount>
+ <ServicePrincipalMeta>
+ <PrincipalId>string</PrincipalId>
+ <Secret>string</Secret>
+ </ServicePrincipalMeta>
+ </AzureMonitorAccount>
+ <SecondaryStorageAccounts>
+ <StorageAccount name="string" />
+ </SecondaryStorageAccounts>
+ <SecondaryEventHubs>
+ <EventHub Url="string" SharedAccessKeyName="string" SharedAccessKey="string" />
+ </SecondaryEventHubs>
+</PrivateConfig>
+```
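+
+The `$wadExtension.ProtectedSetting` value shown earlier expects this private configuration XML as a single raw string. One way to supply it is to read the file from disk; the file path below is a placeholder.
+
+```powershell
+# Read the private configuration XML as one raw string and attach it to the extension object.
+$wadExtension.ProtectedSetting = Get-Content -Path "<Insert WAD private configuration file path>" -Raw
+```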
## Apply Windows Azure Diagnostics extension using ARM template ```json
$cloudService | Update-AzCloudService
``` + ## Next steps - Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services (extended support). - Review [frequently asked questions](faq.md) for Cloud Services (extended support).
cloud-shell Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-shell/features.md
Cloud Shell includes pre-configured authentication for open-source tools such as
|Java |1.8 | |Node.js |8.16.0 | |PowerShell |[7.0.0](https://github.com/PowerShell/powershell/releases) |
-|Python |2.7 and 3.5 (default)|
+|Python |2.7 and 3.7 (default)|
## Next steps [Bash in Cloud Shell Quickstart](quickstart.md) <br> [PowerShell in Cloud Shell Quickstart](quickstart-powershell.md) <br> [Learn about Azure CLI](/cli/azure/) <br>
-[Learn about Azure PowerShell](/powershell/azure/) <br>
+[Learn about Azure PowerShell](/powershell/azure/) <br>
cognitive-services Luis Language Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/LUIS/luis-language-support.md
LUIS understands utterances in the following languages:
| Korean |`ko-KR` |Γ£ö|-|-|Key phrase only| | Marathi | `mr-IN`|-|-|-|-| | Portuguese (Brazil) |`pt-BR` |Γ£ö| Γ£ö |Γ£ö |not all sub-cultures|
-| Spanish (Mexico)|`es-MX` |-|-|Γ£ö|Γ£ö|
+| Spanish (Mexico)|`es-MX` |-|Γ£ö|Γ£ö|Γ£ö|
| Spanish (Spain) |`es-ES` |Γ£ö| Γ£ö |Γ£ö|Γ£ö| | Tamil | `ta-IN`|-|-|-|-| | Telugu | `te-IN`|-|-|-|-|
Tokenizer JSON for version 1.0.1. Notice the property value for `tokenizerVersi
Tokenization happens at the app level. There is no support for version-level tokenization.
-[Import the file as a new app](luis-how-to-start-new-app.md), instead of a version. This action means the new app has a different app ID but uses the tokenizer version specified in the file.
+[Import the file as a new app](luis-how-to-start-new-app.md), instead of a version. This action means the new app has a different app ID but uses the tokenizer version specified in the file.
cognitive-services Luis Reference Prebuilt Entities https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/LUIS/luis-reference-prebuilt-entities.md
Unless otherwise noted, prebuilt entities are available in all LUIS application
|Korean|[ko-KR](#korean-entity-support)|| |Portuguese|[pt-BR (Brazil)](#portuguese-brazil-entity-support)|| |Spanish|[es-ES (Spain)](#spanish-spain-entity-support), [es-MX (Mexico)](#spanish-mexico-entity-support)||
-|Turkish|[turkish](#turkish-entity-support)|No prebuilt entities supported in Turkish|
+|Turkish|[turkish](#turkish-entity-support)||
## Prediction endpoint runtime
The following entities are supported:
[Temperature](luis-reference-prebuilt-temperature.md):<br>fahrenheit<br>kelvin<br>rankine<br>delisle<br>celsius | V2, V3 | [URL](luis-reference-prebuilt-url.md) | V2, V3 |
+KeyPhrase is not available in all subcultures of Portuguese (Brazil) - ```pt-BR```.
+ ## Spanish (Spain) entity support The following entities are supported:
The following entities are supported:
See notes on [Deprecated prebuilt entities](luis-reference-prebuilt-deprecated.md)
-KeyPhrase is not available in all subcultures of Portuguese (Brazil) - ```pt-BR```.
- ## Turkish entity support
-**There are no prebuilt entities supported in Turkish.**
-
-<!--
- | Prebuilt entity | tr-tr | | | :: | [Age](luis-reference-prebuilt-age.md):<br>year<br>month<br>week<br>day | - |
KeyPhrase is not available in all subcultures of Portuguese (Brazil) - ```pt-BR`
[DatetimeV2](luis-reference-prebuilt-datetimev2.md):<br>date<br>daterange<br>time<br>timerange | - | [Dimension](luis-reference-prebuilt-dimension.md):<br>volume<br>area<br>weight<br>information (ex: bit/byte)<br>length (ex: meter)<br>speed (ex: mile per hour) | - | [Email](luis-reference-prebuilt-email.md) | - |
-[GeographyV2](luis-reference-prebuilt-geographyV2.md) | - |
[KeyPhrase](luis-reference-prebuilt-keyphrase.md) | - | [Number](luis-reference-prebuilt-number.md) | - | [Ordinal](luis-reference-prebuilt-ordinal.md) | - | [Percentage](luis-reference-prebuilt-percentage.md) | - |
-[PersonName](luis-reference-prebuilt-person.md) | - |
[Phonenumber](luis-reference-prebuilt-phonenumber.md) | - | [Temperature](luis-reference-prebuilt-temperature.md):<br>fahrenheit<br>kelvin<br>rankine<br>delisle<br>celsius | - | [URL](luis-reference-prebuilt-url.md) | - |
+<!--
See notes on [Deprecated prebuilt entities](luis-reference-prebuilt-deprecated.md)-- KeyPhrase is not available. -->
cognitive-services Anomaly Detection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/big-data/recipes/anomaly-detection.md
assert (location is not None)
Next, let's read the IoTSignals file into a DataFrame. Open a new notebook in your Synapse workspace and create a DataFrame from the file. ```python
-df_device_info = spark.read.csv("wasbs://publicwasb@mmlspark.blob.core.windows.net/iot/IoTSignals.csv", header=True, inferSchema=True)
+df_signals = spark.read.csv("wasbs://publicwasb@mmlspark.blob.core.windows.net/iot/IoTSignals.csv", header=True, inferSchema=True)
``` ### Run anomaly detection using Cognitive Services on Spark
If successful, your output will look like this:
## Next steps
-Learn how to do predictive maintenance at scale with Azure Cognitive Services, Azure Synapse Analytics, and Azure CosmosDB. For more information, see the full sample on [GitHub](https://github.com/Azure-Samples/cosmosdb-synapse-link-samples).
+Learn how to do predictive maintenance at scale with Azure Cognitive Services, Azure Synapse Analytics, and Azure CosmosDB. For more information, see the full sample on [GitHub](https://github.com/Azure-Samples/cosmosdb-synapse-link-samples).
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/immersive-reader/overview.md
keywords: readers, language learners, display pictures, improve reading, read co
This documentation contains the following types of articles:
-* **[Quickstarts](quickstarts/client-libraries.md)** are step-by-step instructions that enable you to make calls to the service and get results.
-* **[How-to guides](how-to-create-immersive-reader.md)** contain instructions for using the service in more specific or customized ways.
+* **[Quickstarts](quickstarts/client-libraries.md)** are getting-started instructions to guide you through making requests to the service.
+* **[How-to guides](how-to-create-immersive-reader.md)** contain instructions for using the service in more specific or customized ways.
## Use Immersive Reader to improve reading accessibility
communication-services Teams Embed https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/ui-framework/teams-embed.md
+
+ Title: UI Framework Teams Embed
+
+description: In this document, you'll learn how the Azure Communication Services UI Framework Teams Embed capability can be used to build turnkey calling experiences.
++ Last updated : 11/16/2020+++++
+# Teams Embed
+++
+Teams Embed is an Azure Communication Services capability focused on common business-to-consumer and business-to-business calling interactions. The core of the Teams Embed system is [video and voice calling](../voice-video-calling/calling-sdk-features.md), but the Teams Embed system builds on Azure's calling primitives to deliver a complete user experience based on Microsoft Teams meetings.
+
+Teams Embed client libraries are closed-source and make these capabilities available to you in a turnkey, composite format. You drop Teams Embed into your app's canvas, and the client library generates a complete user experience. Because this user experience is very similar to Microsoft Teams meetings, you can take advantage of:
+
+- Reduced development time and engineering complexity
+- End-user familiarity with Teams
+- Ability to re-use [Teams end-user training content](https://support.microsoft.com/office/meetings-in-teams-e0b0ae21-53ee-4462-a50d-ca9b9e217b67)
+
+Teams Embed provides most of the features supported in Teams meetings, including:
+
+- Pre-meeting experience where a user configures their audio and video devices
+- In-meeting experience for configuring audio and video devices
+- [Video Backgrounds](https://support.microsoft.com/office/change-your-background-for-a-teams-meeting-f77a2381-443a-499d-825e-509a140f4780): allowing participants to blur or replace their backgrounds
+- [Multiple options for the video gallery](https://support.microsoft.com/office/using-video-in-microsoft-teams-3647fc29-7b92-4c26-8c2d-8a596904cdae): large gallery, together mode, focus, pinning, and spotlight
+- [Content Sharing](https://support.microsoft.com/office/share-content-in-a-meeting-in-teams-fcc2bf59-aecd-4481-8f99-ce55dd836ce8#ID0EABAAA=Mobile): allowing participants to share their screen
+
+For more information about this UI compared to other Azure Communication SDKs, see the [UI SDK concept introduction](ui-sdk-overview.md).
communication-services Getting Started With Teams Embed https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/meeting/getting-started-with-teams-embed.md
+
+ Title: Quickstart - Add joining a teams meeting to your app
+
+description: In this quickstart, you'll learn how to add join teams meeting capabilities to your app using Azure Communication Services.
++ Last updated : 01/25/2021+++
+zone_pivot_groups: acs-plat-ios-android
++
+# Quickstart: Add joining a teams meeting to your app
++
+Get started with Azure Communication Services by using the Communication Services Teams Embed client library to add teams meetings to your app.
++++
+## Clean up resources
+
+If you want to clean up and remove a Communication Services resource, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. Learn more about [cleaning up resources](../create-communication-resource.md#clean-up-resources).
communication-services Samples For Teams Embed https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/meeting/samples-for-teams-embed.md
+
+ Title: Using the Azure Communication Services Teams Embed Library
+description: Learn about the Communication Services Teams Embed library capabilities.
++ Last updated : 02/24/2021+++
+zone_pivot_groups: acs-plat-ios-android
+++
+# Use the Communication Services Teams Embed library
++
+Get started with Azure Communication Services by using the Communication Services Teams Embed library to add Teams meetings to your app.
+++
+## Clean up resources
+
+If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. Learn more about [cleaning up resources](../create-communication-resource.md#clean-up-resources).
+
+## Next steps
+
+For more information, see the following articles:
+
+- Check out our [Getting started with Teams Embed samples](./getting-started-with-teams-embed.md)
communication-services Calling Hero Sample https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/samples/calling-hero-sample.md
Last updated 03/10/2021
+zone_pivot_groups: acs-web-ios-android
# Get started with the group calling hero sample [!INCLUDE [Web Calling Hero Sample](./includes/web-calling-hero.md)]++
container-instances Container Instances Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-instances/container-instances-overview.md
Title: Serverless containers in Azure description: The Azure Container Instances service offers the fastest and simplest way to run isolated containers in Azure, without having to manage virtual machines and without having to adopt a higher-level orchestrator. Previously updated : 08/10/2020 Last updated : 03/22/2021
container-registry Container Registry Transfer Images https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-transfer-images.md
IMPORT_RUN_RES_ID=$(az deployment group show \
  --name importPipelineRun \
  --query 'properties.outputResources[0].id' \
  --output tsv)
+```
When deployment completes successfully, verify artifact import by listing the repositories in the target container registry. For example, run [az acr repository list][az-acr-repository-list]:
container-registry Push Multi Architecture Images https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/push-multi-architecture-images.md
steps:
## Next steps
-* Use [Azure Pipelines](/azure/devops/pipelines/get-started/what-is-azure-pipelines.md) to build container images for different architectures.
+* Use [Azure Pipelines](/azure/devops/pipelines/get-started/what-is-azure-pipelines) to build container images for different architectures.
* Learn about building multi-platform images using the experimental Docker [buildx](https://docs.docker.com/buildx/working-with-buildx/) plug-in. <!-- LINKS - external -->
cosmos-db Cassandra Migrate Cosmos Db Databricks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/cassandra-migrate-cosmos-db-databricks.md
Title: Migrate data from Apache Cassandra to Azure Cosmos DB Cassandra API using Databricks (Spark)
-description: Learn how to migrate data from Apache Cassandra database to Azure Cosmos DB Cassandra API using Azure Databricks and Spark.
+ Title: Migrate data from Apache Cassandra to the Azure Cosmos DB Cassandra API by using Databricks (Spark)
+description: Learn how to migrate data from an Apache Cassandra database to the Azure Cosmos DB Cassandra API by using Azure Databricks and Spark.
-# Migrate data from Cassandra to Azure Cosmos DB Cassandra API account using Azure Databricks
+# Migrate data from Cassandra to an Azure Cosmos DB Cassandra API account by using Azure Databricks
[!INCLUDE[appliesto-cassandra-api](includes/appliesto-cassandra-api.md)]
-Cassandra API in Azure Cosmos DB has become a great choice for enterprise workloads running on Apache Cassandra for a variety of reasons such as:
+Cassandra API in Azure Cosmos DB has become a great choice for enterprise workloads running on Apache Cassandra for several reasons:
-* **No overhead of managing and monitoring:** It eliminates the overhead of managing and monitoring a myriad of settings across OS, JVM, and yaml files and their interactions.
+* **No overhead of managing and monitoring:** It eliminates the overhead of managing and monitoring settings across OS, JVM, and YAML files and their interactions.
-* **Significant cost savings:** You can save cost with Azure Cosmos DB, which includes the cost of VMΓÇÖs, bandwidth, and any applicable licenses. Additionally, you donΓÇÖt have to manage the data centers, servers, SSD storage, networking, and electricity costs.
+* **Significant cost savings:** You can save costs with Azure Cosmos DB, which includes the cost of VMs, bandwidth, and any applicable licenses. You don't have to manage datacenters, servers, SSD storage, networking, and electricity costs.
-* **Ability to use existing code and tools:** Azure Cosmos DB provides wire protocol level compatibility with existing Cassandra SDKs and tools. This compatibility ensures you can use your existing codebase with Azure Cosmos DB Cassandra API with trivial changes.
+* **Ability to use existing code and tools:** Azure Cosmos DB provides wire protocol-level compatibility with existing Cassandra SDKs and tools. This compatibility ensures that you can use your existing codebase with the Azure Cosmos DB Cassandra API with trivial changes.
-There are various ways to migrate database workloads from one platform to another. [Azure Databricks](https://azure.microsoft.com/services/databricks/) is a platform as a service offering for [Apache Spark](https://spark.apache.org/) that offers a way to perform offline migrations at large scale. This article describes the steps required to migrate data from native Apache Cassandra keyspaces/tables to Azure Cosmos DB Cassandra API using Azure Databricks.
+There are many ways to migrate database workloads from one platform to another. [Azure Databricks](https://azure.microsoft.com/services/databricks/) is a platform as a service (PaaS) offering for [Apache Spark](https://spark.apache.org/) that offers a way to perform offline migrations on a large scale. This article describes the steps required to migrate data from native Apache Cassandra keyspaces and tables into the Azure Cosmos DB Cassandra API by using Azure Databricks.
## Prerequisites
-* [Provision an Azure Cosmos DB Cassandra API account](create-cassandra-dotnet.md#create-a-database-account)
+* [Provision an Azure Cosmos DB Cassandra API account](create-cassandra-dotnet.md#create-a-database-account).
-* [Review the basics of connecting to Azure Cosmos DB Cassandra API](cassandra-spark-generic.md)
+* [Review the basics of connecting to an Azure Cosmos DB Cassandra API](cassandra-spark-generic.md).
-* Review the [supported features in Azure Cosmos DB Cassandra API](cassandra-support.md) to ensure compatibility.
+* Review the [supported features in the Azure Cosmos DB Cassandra API](cassandra-support.md) to ensure compatibility.
-* Ensure you have already created empty keyspace and tables in your target Azure Cosmos DB Cassandra API account
+* Ensure you've already created empty keyspaces and tables in your target Azure Cosmos DB Cassandra API account.
-* [Use cqlsh or hosted shell for validation](cassandra-support.md#hosted-cql-shell-preview)
+* [Use cqlsh or hosted shell for validation](cassandra-support.md#hosted-cql-shell-preview).
## Provision an Azure Databricks cluster
-You can follow instructions to [Provision an Azure Databricks cluster](/azure/databricks/scenarios/quickstart-create-databricks-workspace-portal). We recommend selecting Databricks runtime version 7.5, which supports Spark 3.0:
-
+You can follow instructions to [provision an Azure Databricks cluster](/azure/databricks/scenarios/quickstart-create-databricks-workspace-portal). We recommend selecting Databricks runtime version 7.5, which supports Spark 3.0.
## Add dependencies
-You will need to add the Apache Spark Cassandra connector library to your cluster in order to connect to both native and Cosmos DB Cassandra endpoints. In your cluster select libraries -> install new -> maven. add `com.datastax.spark:spark-cassandra-connector-assembly_2.12:3.0.0` in maven coordinates:
+You need to add the Apache Spark Cassandra Connector library to your cluster to connect to both native and Azure Cosmos DB Cassandra endpoints. In your cluster, select **Libraries** > **Install New** > **Maven**, and then add `com.datastax.spark:spark-cassandra-connector-assembly_2.12:3.0.0` in Maven coordinates.
-Select install, and ensure you restart the cluster when installation is complete.
+Select **Install**, and then restart the cluster when installation is complete.
> [!NOTE]
-> Ensure that you restart the Databricks cluster after the Cassandra Connector library has been installed.
+> Make sure that you restart the Databricks cluster after the Cassandra Connector library has been installed.
## Create Scala Notebook for migration
-Create a Scala Notebook in Databricks with the following code. Replace your source and target cassandra configurations with corresponding credentials, and source/target keyspaces and tables, then run:
+Create a Scala Notebook in Databricks. Replace your source and target Cassandra configurations with the corresponding credentials, and source and target keyspaces and tables. Then run the following code:
```scala
import com.datastax.spark.connector._
DFfromNativeCassandra
```
> [!NOTE]
-> The values for `spark.cassandra.output.batch.size.rows` and `spark.cassandra.output.concurrent.writes`, as well as the number of workers in your Spark cluster, are important configurations to tune in order to avoid [rate limiting](/samples/azure-samples/azure-cosmos-cassandra-java-retry-sample/azure-cosmos-db-cassandra-java-retry-sample/), which happens when requests to Azure Cosmos DB exceed provisioned throughput/([request units](./request-units.md)). You may need to adjust these settings depending on the number of executors in the Spark cluster, and potentially the size (and therefore RU cost) of each record being written to the target tables.
+> The `spark.cassandra.output.batch.size.rows` and `spark.cassandra.output.concurrent.writes` values and the number of workers in your Spark cluster are important configurations to tune in order to avoid [rate limiting](/samples/azure-samples/azure-cosmos-cassandra-java-retry-sample/azure-cosmos-db-cassandra-java-retry-sample/). Rate limiting happens when requests to Azure Cosmos DB exceed provisioned throughput or [request units](./request-units.md) (RUs). You might need to adjust these settings, depending on the number of executors in the Spark cluster and potentially the size (and therefore RU cost) of each record being written to the target tables.
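+
+The article's migration notebook is Scala, but for illustration only, the same two settings could be surfaced from PySpark roughly as follows. The values shown are placeholder starting points, not tuned recommendations; adjust them against your provisioned throughput.
+
+```python
+# A minimal PySpark sketch, assuming the same Spark Cassandra Connector is
+# installed on the cluster; the config values are illustrative only.
+from pyspark.sql import SparkSession
+
+spark = (
+    SparkSession.builder
+    .appName("cassandra-to-cosmos-migration")
+    # Lower these two settings first if you see 429 (rate limiting) errors:
+    .config("spark.cassandra.output.batch.size.rows", "1")
+    .config("spark.cassandra.output.concurrent.writes", "100")
+    .getOrCreate()
+)
+```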
-## Troubleshooting
+## Troubleshoot
### Rate limiting (429 error)
-You may see an error code of 429 or `request rate is large` error text, despite reducing the above settings to their minimum values. The following are some such scenarios:
-- **Throughput allocated to the table is less than 6000 [request units](./request-units.md)**. Even at minimum settings, Spark will be able to execute writes at a rate of around 6000 request units or more. If you have provisioned a table in a keyspace with shared throughput provisioned, it is possible that this table has less than 6000 RUs available at runtime. Ensure the table you are migrating to has at least 6000 RUs available to it when running the migration, and if necessary allocate dedicated request units to that table.
-- **Excessive data skew with large data volume**. If you have a large amount of data (that is table rows) to migrate into a given table but have a significant skew in the data (i.e. a large number of records being written for the same partition key value), then you may still experience rate-limiting even if you have a large amount of [request units](./request-units.md) provisioned in your table. This is because request units are divided equally among physical partitions, and heavy data skew can result in a bottleneck of requests to a single partition, causing rate limiting. In this scenario, it is advised to reduce to minimal throughput settings in Spark to avoid rate limiting and force the migration to run slowly. This scenario can be more common when migrating reference or control tables, where access is less frequent but skew can be high. However, if a significant skew is present in any other type of table, it may also be advisable to review your data model to avoid hot partition issues for your workload during steady-state operations.
+You might see a 429 error code or "request rate is large" error text even if you reduced settings to their minimum values. The following scenarios can cause rate limiting:
+
+* **Throughput allocated to the table is less than 6,000 [request units](./request-units.md)**. Even at minimum settings, Spark can write at a rate of around 6,000 request units or more. If you have provisioned a table in a keyspace with shared throughput, it's possible that this table has fewer than 6,000 RUs available at runtime.
+
+ Ensure that the table you are migrating to has at least 6,000 RUs available when you run the migration. If necessary, allocate dedicated request units to that table.
+* **Excessive data skew with large data volume**. If you have a large amount of data to migrate into a given table but have a significant skew in the data (that is, a large number of records being written for the same partition key value), then you might still experience rate limiting even if you have several [request units](./request-units.md) provisioned in your table. Request units are divided equally among physical partitions, and heavy data skew can cause a bottleneck of requests to a single partition.
+ In this scenario, reduce to minimal throughput settings in Spark and force the migration to run slowly. This scenario can be more common when you're migrating reference or control tables, where access is less frequent and skew can be high. However, if a significant skew is present in any other type of table, you might want to review your data model to avoid hot partition issues for your workload during steady-state operations.
## Next steps
-* [Provision throughput on containers and databases](set-throughput.md)
+* [Provision throughput on containers and databases](set-throughput.md)
* [Partition key best practices](partitioning-overview.md#choose-partitionkey)
-* [Estimate RU/s using the Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md) articles
+* [Estimate RU/s using the Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
* [Elastic Scale in Azure Cosmos DB Cassandra API](manage-scale-cassandra.md)
cosmos-db Concepts Limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/concepts-limits.md
Previously updated : 01/19/2021 Last updated : 03/22/2021 # Azure Cosmos DB service quotas
Depending on which API you use, an Azure Cosmos container can represent either a
| | |
| Maximum length of database or container name | 255 |
| Maximum stored procedures per container | 100 <sup>*</sup>|
-| Maximum UDFs per container | 25 <sup>*</sup>|
+| Maximum UDFs per container | 50 <sup>*</sup>|
| Maximum number of paths in indexing policy| 100 <sup>*</sup>|
| Maximum number of unique keys per container|10 <sup>*</sup>|
| Maximum number of paths per unique key constraint|16 <sup>*</sup>|
cosmos-db Configure Synapse Link https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/configure-synapse-link.md
You can build a serverless SQL pool database and views over Synapse Link for Azu
The [Azure Resource Manager template](./manage-with-templates.md#azure-cosmos-account-with-analytical-store) creates a Synapse Link enabled Azure Cosmos DB account for SQL API. This template creates a Core (SQL) API account in one region with a container configured with analytical TTL enabled, and an option to use manual or autoscale throughput. To deploy this template, click on **Deploy to Azure** on the readme page.
-## <a id="cosmosdb-synapse-link-samples"></a> Getting started with Azure Synpase Link - Samples
+## <a id="cosmosdb-synapse-link-samples"></a> Getting started with Azure Synapse Link - Samples
You can find samples to get started with Azure Synapse Link on [GitHub](https://aka.ms/cosmosdb-synapselink-samples). These showcase end-to-end solutions with IoT and retail scenarios. You can also find the samples corresponding to Azure Cosmos DB API for MongoDB in the same repo under the [MongoDB](https://github.com/Azure-Samples/Synapse/tree/main/Notebooks/PySpark/Synapse%20Link%20for%20Cosmos%20DB%20samples) folder.
cosmos-db Consistency Levels https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/consistency-levels.md
Previously updated : 12/09/2020 Last updated : 03/22/2021 # Consistency levels in Azure Cosmos DB [!INCLUDE[appliesto-all-apis](includes/appliesto-all-apis.md)]
You can configure the default consistency level on your Azure Cosmos account at
Azure Cosmos DB guarantees that 100 percent of read requests meet the consistency guarantee for the consistency level chosen. The precise definitions of the five consistency levels in Azure Cosmos DB using the TLA+ specification language are provided in the [azure-cosmos-tla](https://github.com/Azure/azure-cosmos-tla) GitHub repo.
-The semantics of the five consistency levels are described here:
+The semantics of the five consistency levels are described in the following sections.
-- **Strong**: Strong consistency offers a linearizability guarantee. Linearizability refers to serving requests concurrently. The reads are guaranteed to return the most recent committed version of an item. A client never sees an uncommitted or partial write. Users are always guaranteed to read the latest committed write.
+### Strong consistency
+
+Strong consistency offers a linearizability guarantee. Linearizability refers to serving requests concurrently. The reads are guaranteed to return the most recent committed version of an item. A client never sees an uncommitted or partial write. Users are always guaranteed to read the latest committed write.
The following graphic illustrates the strong consistency with musical notes. After the data is written to the "West US 2" region, when you read the data from other regions, you get the most recent value:
:::image type="content" source="media/consistency-levels/strong-consistency.gif" alt-text="Illustration of strong consistency level":::
-- **Bounded staleness**: The reads are guaranteed to honor the consistent-prefix guarantee. The reads might lag behind writes by at most *"K"* versions (that is, "updates") of an item or by *"T"* time interval, whichever is reached first. In other words, when you choose bounded staleness, the "staleness" can be configured in two ways:
+### Bounded staleness consistency
+
+In bounded staleness consistency, the reads are guaranteed to honor the consistent-prefix guarantee. The reads might lag behind writes by at most *"K"* versions (that is, "updates") of an item or by *"T"* time interval, whichever is reached first. In other words, when you choose bounded staleness, the "staleness" can be configured in two ways:
- The number of versions (*K*) of the item
- The time interval (*T*) reads might lag behind the writes
Inside the staleness window, Bounded Staleness provides the following consistenc
:::image type="content" source="media/consistency-levels/bounded-staleness-consistency.gif" alt-text="Illustration of bounded staleness consistency level":::
-- **Session**: Within a single client session reads are guaranteed to honor the consistent-prefix, monotonic reads, monotonic writes, read-your-writes, and write-follows-reads guarantees. This assumes a single "writer" session or sharing the session token for multiple writers.
+### Session consistency
+
+In session consistency, within a single client session reads are guaranteed to honor the consistent-prefix, monotonic reads, monotonic writes, read-your-writes, and write-follows-reads guarantees. This assumes a single "writer" session or sharing the session token for multiple writers.
Clients outside of the session performing writes will see the following guarantees:
Clients outside of the session performing writes will see the following guarante
:::image type="content" source="media/consistency-levels/session-consistency.gif" alt-text="Illustration of session consistency level":::
-- **Consistent prefix**: Updates that are returned contain some prefix of all the updates, with no gaps. Consistent prefix consistency level guarantees that reads never see out-of-order writes.
+### Consistent prefix consistency
+
+In the consistent prefix option, updates that are returned contain some prefix of all the updates, with no gaps. The consistent prefix consistency level guarantees that reads never see out-of-order writes.
If writes were performed in the order `A, B, C`, then a client sees either `A`, `A,B`, or `A,B,C`, but never out-of-order permutations like `A,C` or `B,A,C`. Consistent Prefix provides write latencies, availability, and read throughput comparable to that of eventual consistency, but also provides the order guarantees that suit the needs of scenarios where order is important.
The following graphic illustrates the consistency prefix consistency with musica
:::image type="content" source="media/consistency-levels/consistent-prefix.gif" alt-text="Illustration of consistent prefix":::
-- **Eventual**: There's no ordering guarantee for reads. In the absence of any further writes, the replicas eventually converge.
+### Eventual consistency
+
+In eventual consistency, there's no ordering guarantee for reads. In the absence of any further writes, the replicas eventually converge.
Eventual consistency is the weakest form of consistency because a client may read values that are older than the ones it read before. Eventual consistency is ideal where the application does not require any ordering guarantees. Examples include counts of Retweets, Likes, or non-threaded comments. The following graphic illustrates eventual consistency with musical notes.
:::image type="content" source="media/consistency-levels/eventual-consistency.gif" alt-text="Illustration of eventual consistency":::
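
For illustration, a client can request one of these consistency levels when it connects. The following sketch uses the Python SDK (`azure-cosmos` package); the endpoint and key are placeholders, and the choice of `"Session"` is an illustrative assumption.

```python
# A minimal sketch of requesting a consistency level from the Python SDK;
# the account URL, key, and chosen level are placeholders.
from azure.cosmos import CosmosClient

client = CosmosClient(
    url="https://<your-account>.documents.azure.com:443/",
    credential="<your-account-key>",
    # Must be equal to or weaker than the account's default consistency level.
    consistency_level="Session",
)
```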
cosmos-db Create Graph Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/create-graph-python.md
In this quickstart, you create and manage an Azure Cosmos DB Gremlin (graph) API
## Prerequisites
- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). Or [try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/) without an Azure subscription.
-- [Python 3.5+](https://www.python.org/downloads/) including [pip](https://pip.pypa.io/en/stable/installing/) package installer.
+- [Python 3.6+](https://www.python.org/downloads/) including [pip](https://pip.pypa.io/en/stable/installing/) package installer.
- [Python Driver for Gremlin](https://github.com/apache/tinkerpop/tree/master/gremlin-python).
- [Git](https://git-scm.com/downloads).
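
Once these prerequisites are in place, a first connection with the Gremlin Python driver could look like the following sketch. The host, database, graph, and key values are placeholders, not taken from the article.

```python
# A hedged sketch of connecting to a Gremlin API account from Python;
# all account-specific values below are placeholders.
from gremlin_python.driver import client, serializer

gremlin_client = client.Client(
    "wss://<your-account>.gremlin.cosmos.azure.com:443/",
    "g",
    username="/dbs/<database>/colls/<graph>",
    password="<your-primary-key>",
    message_serializer=serializer.GraphSONSerializersV2d0(),
)

# Run a simple traversal to verify connectivity.
result = gremlin_client.submitAsync("g.V().count()").result()
print(result.one())
```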
cosmos-db Create Sql Api Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/create-sql-api-dotnet.md
az group delete -g "myResourceGroup"
## Next steps
-In this quickstart, you learned how to create an Azure Cosmos account, create a database and a container using a .NET Core app. You can now import additional data to your Azure Cosmos account with the instructions int the following article.
+In this quickstart, you learned how to create an Azure Cosmos account, and how to create a database and a container by using a .NET Core app. You can now import additional data to your Azure Cosmos account with the instructions in the following article.
> [!div class="nextstepaction"]
-> [Import data into Azure Cosmos DB](import-data.md)
+> [Import data into Azure Cosmos DB](import-data.md)
cosmos-db Create Sql Api Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/create-sql-api-python.md
In this quickstart, you create and manage an Azure Cosmos DB SQL API account fro
* Without an Azure active subscription:
  * [Try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/), a test environment that lasts for 30 days.
  * [Azure Cosmos DB Emulator](https://aka.ms/cosmosdb-emulator)
-- [Python 2.7 or 3.5.3+](https://www.python.org/downloads/), with the `python` executable in your `PATH`.
+- [Python 2.7 or 3.6+](https://www.python.org/downloads/), with the `python` executable in your `PATH`.
- [Visual Studio Code](https://code.visualstudio.com/).
- The [Python extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-python.python#overview).
- [Git](https://www.git-scm.com/downloads).
The following snippets are all taken from the *cosmos_get_started.py* file.
In this quickstart, you've learned how to create an Azure Cosmos DB account, create a container using the Data Explorer, and run a Python app in Visual Studio Code. You can now import additional data to your Azure Cosmos DB account.
> [!div class="nextstepaction"]
-> [Import data into Azure Cosmos DB for the SQL API](import-data.md)
+> [Import data into Azure Cosmos DB for the SQL API](import-data.md)
cosmos-db Find Request Unit Charge Mongodb https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/find-request-unit-charge-mongodb.md
Previously updated : 10/14/2020 Last updated : 03/19/2021
The RU charge is exposed by a custom [database command](https://docs.mongodb.com
1. Go to the **Data Explorer** pane, and then select the container you want to work on.
-1. Select **New Query**.
+1. Select the **...** next to the container name and select **New Query**.
1. Enter a valid query, and then select **Execute Query**.
-1. Select **Query Stats** to display the actual request charge for the request you executed.
+1. Select **Query Stats** to display the actual request charge for the request you executed. This query editor allows you to run and view request unit charges only for query predicates. You can't use it for data manipulation commands such as insert statements.
+ :::image type="content" source="./media/find-request-unit-charge/portal-mongodb-query.png" alt-text="Screenshot of a MongoDB query request charge in the Azure portal":::
+
+1. To get request charges for data manipulation commands, run the `getLastRequestStatistics` command from a shell-based UI such as the Mongo shell, [Robo 3T](mongodb-robomongo.md), [MongoDB Compass](mongodb-compass.md), or a VS Code extension with shell scripting.
+
+ `db.runCommand({getLastRequestStatistics: 1})`
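+
+   For example, from Python the equivalent call could look like the following sketch; the connection string, database, and collection names are placeholders.
+
+   ```python
+   # A minimal pymongo sketch of running getLastRequestStatistics; all
+   # account-specific values below are placeholders.
+   from pymongo import MongoClient
+
+   client = MongoClient("<your-cosmos-mongodb-connection-string>")
+   db = client["<your-database>"]
+
+   db["<your-collection>"].insert_one({"name": "example"})
+
+   # Retrieve the RU charge of the command that just ran.
+   stats = db.command({"getLastRequestStatistics": 1})
+   print(stats.get("RequestCharge"))
+   ```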
## Use the MongoDB .NET driver
cosmos-db Graph Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/graph-introduction.md
Previously updated : 11/25/2020 Last updated : 03/22/2021 # Introduction to Gremlin API in Azure Cosmos DB
This article provides an overview of the Azure Cosmos DB Gremlin API and explain
Azure Cosmos DB's Gremlin API combines the power of graph database algorithms with highly scalable, managed infrastructure to provide a unique, flexible solution to most common data problems associated with lack of flexibility and relational approaches. > [!NOTE]
-> The [serverless capacity mode](serverless.md) is now available on Azure Cosmos DB's Gremlin API.
+> The Azure Cosmos DB graph engine closely follows the Apache TinkerPop specification, but some implementation details are specific to Azure Cosmos DB. Some features supported by Apache TinkerPop are not available in Azure Cosmos DB. To learn more about the unsupported features, see the [compatibility with Apache TinkerPop](gremlin-support.md) article.
## Features of Azure Cosmos DB's Gremlin API
-
+ Azure Cosmos DB is a fully managed graph database that offers global distribution, elastic scaling of storage and throughput, automatic indexing and query, tunable consistency levels, and support for the TinkerPop standard.
+> [!NOTE]
+> The [serverless capacity mode](serverless.md) is now available on Azure Cosmos DB's Gremlin API.
+ The following are the differentiated features that Azure Cosmos DB Gremlin API offers:
* **Elastically scalable throughput and storage**
cosmos-db Sql Api Python Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql-api-python-samples.md
Sample solutions that do CRUD operations and other common operations on Azure Co
* Without an Azure active subscription:
  * [Try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/), a test environment that lasts for 30 days.
  * [Azure Cosmos DB Emulator](https://aka.ms/cosmosdb-emulator)
-- [Python 2.7 or 3.5.3+](https://www.python.org/downloads/), with the `python` executable in your `PATH`.
+- [Python 2.7 or 3.6+](https://www.python.org/downloads/), with the `python` executable in your `PATH`.
- [Visual Studio Code](https://code.visualstudio.com/).
- The [Python extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-python.python#overview).
- [Git](https://www.git-scm.com/downloads).
cosmos-db Sql Api Sdk Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql-api-sdk-python.md
|**API documentation**|[Python API reference documentation](https://docs.microsoft.com/python/api/azure-cosmos/azure.cosmos?view=azure-python&preserve-view=true)|
|**SDK installation instructions**|[Python SDK installation instructions](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/cosmos/azure-cosmos)|
|**Get started**|[Get started with the Python SDK](create-sql-api-python.md)|
-|**Current supported platform**|[Python 2.7](https://www.python.org/downloads/) and [Python 3.5.3+](https://www.python.org/downloads/)|
+|**Current supported platform**|[Python 2.7](https://www.python.org/downloads/) and [Python 3.6+](https://www.python.org/downloads/)|
## Release history
cosmos-db Table Sdk Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/table-sdk-python.md
|**API documentation**|[Python API reference documentation](/python/api/overview/azure/cosmosdb)|
|**SDK installation instructions**|[Python SDK installation instructions](https://github.com/Azure/azure-cosmosdb-python/tree/master/azure-cosmosdb-table)|
|**Contribute to SDK**|[GitHub](https://github.com/Azure/azure-cosmosdb-python/tree/master/azure-cosmosdb-table)|
-|**Current supported platform**|[Python 2.7](https://www.python.org/downloads/) or [Python 3.3, 3.4, 3.5, or 3.6](https://www.python.org/downloads/)|
+|**Current supported platform**|[Python 2.7](https://www.python.org/downloads/) or [Python 3.6+](https://www.python.org/downloads/)|
> [!IMPORTANT]
> If you created a Table API account during the preview, please create a [new Table API account](create-table-dotnet.md#create-a-database-account) to work with the generally available Table API SDKs.
New features and functionality and optimizations are only added to the current S
[!INCLUDE [cosmos-db-sdk-faq](../../includes/cosmos-db-sdk-faq.md)]
## See also
-To learn more about Cosmos DB, see [Microsoft Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/) service page.
+To learn more about Cosmos DB, see [Microsoft Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/) service page.
cosmos-db Table Storage How To Use Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/table-storage-how-to-use-python.md
While working through the scenarios in this sample, you may want to refer to the
You need the following to complete this sample successfully:
-* [Python](https://www.python.org/downloads/) 2.7, 3.3, 3.4, 3.5, or 3.6
+* [Python](https://www.python.org/downloads/) 2.7 or 3.6+.
* [Azure Cosmos DB Table SDK for Python](https://pypi.python.org/pypi/azure-cosmosdb-table/). This SDK connects with both Azure Table storage and the Azure Cosmos DB Table API.
-* [Azure Storage account](../storage/common/storage-account-create.md) or [Azure Cosmos DB account](https://azure.microsoft.com/try/cosmosdb/)
+* [Azure Storage account](../storage/common/storage-account-create.md) or [Azure Cosmos DB account](https://azure.microsoft.com/try/cosmosdb/).
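+
+As an illustration, connecting with this SDK could look like the following sketch; the connection string, table name, and entity values are placeholders.
+
+```python
+# A minimal sketch using the Azure Cosmos DB Table SDK for Python; the
+# connection string works against either Table storage or the Table API.
+from azure.cosmosdb.table.tableservice import TableService
+
+table_service = TableService(connection_string="<your-connection-string>")
+
+table_service.create_table("tasktable")
+table_service.insert_entity("tasktable", {
+    "PartitionKey": "tasksSeattle",
+    "RowKey": "001",
+    "description": "Take out the trash",
+})
+```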
## Create an Azure service account
table_service.delete_table('tasktable')
[py_update_entity]: /python/api/azure-cosmosdb-table/azure.cosmosdb.table.tableservice.tableservice
[py_delete_table]: /python/api/azure-cosmosdb-table/azure.cosmosdb.table.tableservice.tableservice
[py_TableService]: /python/api/azure-cosmosdb-table/azure.cosmosdb.table.tableservice.tableservice
-[py_TableBatch]: https://docs.microsoft.com/python/api/azure-cosmosdb-table/azure.cosmosdb.table.tableservice.tableservice
+[py_TableBatch]: https://docs.microsoft.com/python/api/azure-cosmosdb-table/azure.cosmosdb.table.tableservice.tableservice
cosmos-db Tutorial Develop Table Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/tutorial-develop-table-dotnet.md
Now you can sign into the Azure portal and verify that the data exists in the ta
You can now proceed to the next tutorial and learn how to migrate data to Azure Cosmos DB Table API account. > [!div class="nextstepaction"]
->[Migrate data to Azure Comsos DB Table API](../cosmos-db/table-import.md)
+>[Migrate data to Azure Cosmos DB Table API](../cosmos-db/table-import.md)
cosmos-db Use Cases https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/use-cases.md
IoT use cases commonly share some patterns in how they ingest, process, and stor
:::image type="content" source="./media/use-cases/iot.png" alt-text="Azure Cosmos DB IoT reference architecture" border="false":::
-Bursts of data can be ingested by Azure Event Hubs as it offers high throughput data ingestion with low latency. Data ingested that needs to be processed for real-time insight can be funneled to Azure Stream Analytics for real-time analytics. Data can be loaded into Azure Cosmos DB for adhoc querying. Once the data is loaded into Azure Cosmos DB, the data is ready to be queried. In addition, new data and changes to existing data can be read on change feed. Change feed is a persistent, append only log that stores changes to Cosmos containers in sequential order. The all data or just changes to data in Azure Cosmos DB can be used as reference data as part of real-time analytics. In addition, data can further be refined and processed by connecting Azure Cosmos DB data to HDInsight for Pig, Hive, or Map/Reduce jobs. Refined data is then loaded back to Azure Cosmos DB for reporting.
+Bursts of data can be ingested by Azure Event Hubs as it offers high-throughput data ingestion with low latency. Ingested data that needs to be processed for real-time insight can be funneled to Azure Stream Analytics for real-time analytics. Data can be loaded into Azure Cosmos DB for ad hoc querying. Once the data is loaded into Azure Cosmos DB, it's ready to be queried. In addition, new data and changes to existing data can be read on the change feed. Change feed is a persistent, append-only log that stores changes to Cosmos containers in sequential order. Then all data or just changes to data in Azure Cosmos DB can be used as reference data as part of real-time analytics. In addition, data can be further refined and processed by connecting Azure Cosmos DB data to HDInsight for Pig, Hive, or Map/Reduce jobs. Refined data is then loaded back to Azure Cosmos DB for reporting.
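+
+As an aside, reading that change feed from the Python SDK could look roughly like this sketch; the account URL, key, database, and container names are placeholders.
+
+```python
+# A minimal sketch of reading the change feed with the azure-cosmos Python SDK;
+# all account-specific values below are placeholders.
+from azure.cosmos import CosmosClient
+
+client = CosmosClient("https://<your-account>.documents.azure.com:443/", "<your-key>")
+container = client.get_database_client("iot-db").get_container_client("telemetry")
+
+# Read all changes from the beginning of the container's change feed.
+for change in container.query_items_change_feed(is_start_from_beginning=True):
+    print(change["id"])
+```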
For a sample IoT solution using Azure Cosmos DB, EventHubs and Storm, see the [hdinsight-storm-examples repository on GitHub](https://github.com/hdinsight/hdinsight-storm-examples/).
JSON, a format supported by Cosmos DB, is an effective format to represent UI la
* To get started with Azure Cosmos DB, follow our [quick starts](create-sql-api-dotnet.md), which walk you through creating an account and getting started with Cosmos DB.
-* If you'd like to read more about customers using Azure Cosmos DB, see the [customer case studies](https://azure.microsoft.com/case-studies/?service=cosmos-db) page.
+* If you'd like to read more about customers using Azure Cosmos DB, see the [customer case studies](https://azure.microsoft.com/case-studies/?service=cosmos-db) page.
cost-management-billing Ea Portal Agreements https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/ea-portal-agreements.md
Any quota described above is not a Service Prepayment. For purposes of determini
## Requesting a quota increase
-You can request a quota increase at any time by submitting an [online request](https://g.microsoftonline.com/0WAEP00en/6). To process your request, provide the following information:
+You can request a quota increase at any time by submitting an [online request](https://ms.portal.azure.com/). To process your request, provide the following information:
- The Microsoft account or work or school account associated with the account owner of your subscription. This is the email address utilized to sign in to the Microsoft Azure portal to manage your subscription(s). Please also identify that this account is associated with an EA enrollment.
- The resource(s) and amount for which you desire a quota increase.
data-factory Ci Cd Github Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/ci-cd-github-troubleshoot-guide.md
Title: Troubleshoot CI-CD, Azure DevOps, and GitHub issues in ADF
+ Title: Troubleshoot CI-CD, Azure DevOps and GitHub issues in ADF
description: Use different methods to troubleshoot CI-CD issues in ADF.
Last updated 03/12/2021
-# Troubleshoot CI-CD, Azure DevOps, and GitHub issues in ADF
+# Troubleshoot CI-CD, Azure DevOps and GitHub issues in ADF
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
When trying to publish changes to a Data Factory, you get following error messag
"details": null } `-
-#### Symptom
+### Cause
You have detached the Git configuration and set it up again with the "Import resources" flag selected, which sets the Data Factory as "in sync". This means no changes to publish.
You have created a customer role as the user and it did not have the necessary p
In order to resolve the issue, you need to add the following permission to your role: *Microsoft.DataFactory/factories/queryFeaturesValue/action*. This permission should be included by default in the "Data Factory Contributor" role.
-### Automatic publishing for CI/CD without clicking Publish button
-
-#### Issue
-
-Manual publishing with button click in ADF portal does not enable automatic CI/CD operation.
+### Cannot automate publishing for CI/CD
#### Cause
Azure Resource Manager restricts template size to be 4mb. Limit the size of your
For small to medium solutions, a single template is easier to understand and maintain. You can see all the resources and values in a single file. For advanced scenarios, linked templates enable you to break down the solution into targeted components. Please follow best practice at [Using Linked and Nested Templates](../azure-resource-manager/templates/linked-templates.md?tabs=azure-powershell).
-### Cannot connect to GIT Enterprise Cloud
+### Cannot connect to GIT Enterprise
##### Issue
-You cannot connect to GIT Enterprise Cloud because of permission issues. You can see error like **422 - Unprocessable Entity.**
+You cannot connect to GIT Enterprise because of permission issues. You can see an error like **422 - Unprocessable Entity.**
#### Cause
-* You are using Git Enterprise on prem server.
* You have not configured Oauth for ADF. * Your URL is misconfigured.
You cannot connect to GIT Enterprise Cloud because of permission issues. You can
You grant Oauth access to ADF at first. Then, you have to use the correct URL to connect to GIT Enterprise. The configuration must be set to the customer organization(s). For example, ADF will try *https://hostname/api/v3/search/repositories?q=user%3<customer credential>....* at first and fail. Then, it will try *https://hostname/api/v3/orgs/<org>/<repo>...*, and succeed.
-### Recover from a deleted data factory
+### Cannot recover from a deleted data factory
#### Issue Customer deleted Data factory or the resource group containing the Data Factory. He would like to know how to restore a deleted data factory.
To recover the Deleted Data Factory which has Source Control refer the steps bel
* Create a new Azure Data Factory.
- * Reconfigure Git with the same settings, but make sure Import existing Data Factory resources to the selected repository, and choose New branch.
+ * Reconfigure Git with the same settings, but make sure to import existing Data Factory resources to the selected repository, and choose New branch.
* Create a pull request to merge the changes to the collaboration branch and publish.
data-factory Concepts Datasets Linked Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/concepts-datasets-linked-services.md
Title: Datasets
description: 'Learn about datasets in Data Factory. Datasets represent input/output data.' -+
data-factory Concepts Linked Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/concepts-linked-services.md
Title: Linked services in Azure Data Factory
description: 'Learn about linked services in Data Factory. Linked services link compute/data stores to data factory.' -+ Last updated 08/21/2020
data-factory Concepts Pipeline Execution Triggers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/concepts-pipeline-execution-triggers.md
Title: Pipeline execution and triggers in Azure Data Factory
description: This article provides information about how to execute a pipeline in Azure Data Factory, either on-demand or by creating a trigger. -+ Last updated 07/05/2018
data-factory Continuous Integration Deployment Improvements https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/continuous-integration-deployment-improvements.md
description: Learn how to publish for continuous integration and delivery automa
-+ Last updated 02/02/2021
data-factory Continuous Integration Deployment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/continuous-integration-deployment.md
description: Learn how to use continuous integration and delivery to move Data F
-+ Last updated 03/11/2021
data-factory Control Flow Append Variable Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-append-variable-activity.md
-+ Last updated 10/09/2018
data-factory Control Flow Azure Function Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-azure-function-activity.md
Title: Azure Function Activity in Azure Data Factory
description: Learn how to use the Azure Function activity to run an Azure Function in a Data Factory pipeline -+ Last updated 01/09/2019
data-factory Control Flow Execute Pipeline Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-execute-pipeline-activity.md
Title: Execute Pipeline Activity in Azure Data Factory
description: Learn how you can use the Execute Pipeline Activity to invoke one Data Factory pipeline from another Data Factory pipeline. -+ Last updated 01/10/2018
data-factory Control Flow Expression Language Functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-expression-language-functions.md
Title: Expression and functions in Azure Data Factory
description: This article provides information about expressions and functions that you can use in creating data factory entities. -+ Last updated 11/25/2019
data-factory Control Flow Filter Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-filter-activity.md
Title: Filter activity in Azure Data Factory
description: The Filter activity filters the inputs. -+ Last updated 05/04/2018
data-factory Control Flow For Each Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-for-each-activity.md
Title: ForEach activity in Azure Data Factory
description: The For Each Activity defines a repeating control flow in your pipeline. It is used for iterating over a collection and execute specified activities. -+ Last updated 01/23/2019
data-factory Control Flow If Condition Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-if-condition-activity.md
Title: If Condition activity in Azure Data Factory
description: The If Condition activity allows you to control the processing flow based on a condition. -+ Last updated 01/10/2018
data-factory Control Flow Set Variable Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-set-variable-activity.md
Last updated 04/07/2020 -+ # Set Variable Activity in Azure Data Factory [!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
data-factory Control Flow Switch Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-switch-activity.md
Title: Switch activity in Azure Data Factory
description: The Switch activity allows you to control the processing flow based on a condition. -+ Last updated 10/08/2019
data-factory Control Flow System Variables https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-system-variables.md
Title: System variables in Azure Data Factory
description: This article describes system variables supported by Azure Data Factory. You can use these variables in expressions when defining Data Factory entities. -+ Last updated 06/12/2018
data-factory Control Flow Until Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-until-activity.md
Title: Until activity in Azure Data Factory
description: The Until activity executes a set of activities in a loop until the condition associated with the activity evaluates to true or it times out. -+ Last updated 01/10/2018
data-factory Control Flow Validation Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-validation-activity.md
Title: Validation activity in Azure Data Factory
description: The Validation activity does not continue execution of the pipeline until it validates the attached dataset with certain criteria the user specifies. -+ Last updated 03/25/2019
data-factory Control Flow Webhook Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-webhook-activity.md
Title: Webhook activity in Azure Data Factory
description: The webhook activity doesn't continue execution of the pipeline until it validates the attached dataset with certain criteria the user specifies. -+ Last updated 03/25/2019
data-factory Copy Activity Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/copy-activity-monitoring.md
description: Learn about how to monitor the copy activity execution in Azure Dat
Previously updated : 08/06/2020 Last updated : 03/22/2021 # Monitor copy activity
Copy activity execution details and performance characteristics are also returne
| rowsCopied | Number of rows copied to sink. This metric does not apply when copying files as-is without parsing them, for example, when source and sink datasets are binary format type, or other format type with identical settings. | Int64 value (no unit) |
| rowsSkipped | Number of incompatible rows that were skipped. You can enable incompatible rows to be skipped by setting `enableSkipIncompatibleRow` to true. | Int64 value (no unit) |
| copyDuration | Duration of the copy run. | Int32 value, in seconds |
-| throughput | Rate of data transfer. | Floating point number, in KBps |
+| throughput | Rate of data transfer, calculated by `dataRead` divided by `copyDuration`. | Floating point number, in KBps |
| sourcePeakConnections | Peak number of concurrent connections established to the source data store during the Copy activity run. | Int32 value (no unit) |
| sinkPeakConnections| Peak number of concurrent connections established to the sink data store during the Copy activity run.| Int32 value (no unit) |
| sqlDwPolyBase | Whether PolyBase is used when data is copied into Azure Synapse Analytics. | Boolean |
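
As a quick worked example of the throughput relationship above, here is a sketch with made-up numbers that mirrors the `dataRead` and `copyDuration` fields from the copy activity output.

```python
# A sketch with illustrative numbers showing how throughput is derived from
# the dataRead and copyDuration fields of a copy activity run.
output = {
    "dataRead": 6_198_358,  # bytes read from source (illustrative)
    "copyDuration": 4,      # seconds (illustrative)
}

# throughput (KBps) = dataRead (KB) / copyDuration (s)
throughput_kbps = (output["dataRead"] / 1024) / output["copyDuration"]
print(f"throughput: {throughput_kbps:.2f} KBps")
```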
data-factory Copy Clone Data Factory https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/copy-clone-data-factory.md
description: Learn how to copy or clone a data factory in Azure Data Factory
-+ Last updated 06/30/2020
data-factory How To Create Custom Event Trigger https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/how-to-create-custom-event-trigger.md
description: Learn how to create a custom trigger in Azure Data Factory that run
-+ Last updated 03/11/2021
data-factory How To Create Event Trigger https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/how-to-create-event-trigger.md
description: Learn how to create a trigger in Azure Data Factory that runs a pip
-+ Last updated 03/11/2021
data-factory How To Create Schedule Trigger https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/how-to-create-schedule-trigger.md
Title: Create schedule triggers in Azure Data Factory
description: Learn how to create a trigger in Azure Data Factory that runs a pipeline on a schedule. -+ Last updated 10/30/2020
data-factory How To Create Tumbling Window Trigger https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/how-to-create-tumbling-window-trigger.md
Title: Create tumbling window triggers in Azure Data Factory
description: Learn how to create a trigger in Azure Data Factory that runs a pipeline on a tumbling window. -+ Last updated 10/25/2020
data-factory How To Expression Language Functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/how-to-expression-language-functions.md
Title: How to use parameters and expressions in Azure Data Factory
description: This How To article provides information about expressions and functions that you can use in creating data factory entities. -+ Last updated 03/08/2020
data-factory Monitor Using Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/monitor-using-azure-monitor.md
Title: Monitor data factories using Azure Monitor
description: Learn how to use Azure Monitor to monitor /Azure Data Factory pipelines by enabling diagnostic logs with information from Data Factory. -+ Last updated 07/13/2020
data-factory Monitor Visually https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/monitor-visually.md
Title: Visually monitor Azure Data Factory
description: Learn how to visually monitor Azure data factories -+ Last updated 06/30/2020
data-factory Naming Rules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/naming-rules.md
Title: Rules for naming Azure Data Factory entities
description: Describes naming rules for Data Factory entities. -+ Last updated 10/15/2020
data-factory Pricing Concepts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/pricing-concepts.md
Title: Understanding Azure Data Factory pricing through examples
description: This article explains and demonstrates the Azure Data Factory pricing model with detailed examples -+ Last updated 09/14/2020
data-factory Quickstart Create Data Factory Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/quickstart-create-data-factory-python.md
Title: 'Quickstart: Create an Azure Data Factory using Python'
description: Use a data factory to copy data from one location in Azure Blob storage to another location. -+ ms.devlang: python
Pipelines can ingest data from disparate data stores. Pipelines process or trans
pip install azure-mgmt-datafactory
```
- The [Python SDK for Data Factory](https://github.com/Azure/azure-sdk-for-python) supports Python 2.7, 3.3, 3.4, 3.5, 3.6 and 3.7.
+ The [Python SDK for Data Factory](https://github.com/Azure/azure-sdk-for-python) supports Python 2.7 and 3.6+.
4. To install the Python package for Azure Identity authentication, run the following command:
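
   The command itself is truncated here, but once both packages are installed, constructing the management client could look like this sketch; the subscription ID is a placeholder and the use of `DefaultAzureCredential` is an assumption.

   ```python
   # A hedged sketch of creating the Data Factory management client; the
   # subscription ID below is a placeholder.
   from azure.identity import DefaultAzureCredential
   from azure.mgmt.datafactory import DataFactoryManagementClient

   credential = DefaultAzureCredential()
   adf_client = DataFactoryManagementClient(credential, "<your-subscription-id>")

   # List the data factories visible to this credential as a quick sanity check.
   for factory in adf_client.factories.list():
       print(factory.name)
   ```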
data-factory Quickstart Create Data Factory Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/quickstart-create-data-factory-resource-manager-template.md
tags: azure-resource-manager -+ Last updated 07/16/2020
data-factory Samples Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/samples-powershell.md
description: Azure PowerShell Samples - Scripts to help you create and manage da
-+ Last updated 03/16/2021
data-factory Transform Data Spark Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/scripts/transform-data-spark-powershell.md
Title: Transform data in cloud using PowerShell
description: "This PowerShell script transforms data in the cloud by running Spark program on an Azure HDInsight Spark cluster." -+
data-factory Ssis Azure Connect With Windows Auth https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/ssis-azure-connect-with-windows-auth.md
ms.technology: integration-services -+ # Access data stores and file shares with Windows authentication from SSIS packages in Azure
data-factory Ssis Azure Files File Shares https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/ssis-azure-files-file-shares.md
ms.prod: sql
ms.technology: integration-services -+ # Open and save files on premises and in Azure with SSIS packages deployed in Azure
data-factory Tutorial Control Flow Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/tutorial-control-flow-portal.md
Title: Branching and chaining activities in a pipeline using Azure portal
description: Learn how to control flow of data in Azure Data Factory pipeline by using the Azure portal. -+
data-factory Tutorial Control Flow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/tutorial-control-flow.md
Title: Branching in Azure Data Factory pipeline
description: Learn how to control flow of data in Azure Data Factory by branching and chaining activities. -+
data-factory Tutorial Incremental Copy Multiple Tables Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/tutorial-incremental-copy-multiple-tables-powershell.md
Title: Incrementally copy multiple tables using PowerShell
description: In this tutorial, you create an Azure Data Factory with a pipeline that loads delta data from multiple tables in a SQL Server database to Azure SQL Database. -+
data-factory Update Machine Learning Models https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/update-machine-learning-models.md
Title: Update Azure Machine Learning Studio (classic) models using Azure Data Fa
description: Describes how to create predictive pipelines using Azure Data Factory and Azure Machine Learning Studio (classic) -+ Last updated 07/16/2020
data-factory Data Factory Api Change Log https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/v1/data-factory-api-change-log.md
Title: Data Factory - .NET API Change Log
description: Describes breaking changes, feature additions, bug fixes, and so on, in a specific version of .NET API for the Azure Data Factory. -+
data-factory Data Factory Azure Ml Batch Execution Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/v1/data-factory-azure-ml-batch-execution-activity.md
Title: Create predictive data pipelines using Azure Data Factory
description: Describes how to create create predictive pipelines using Azure Data Factory and Azure Machine Learning Studio (classic) -+ Last updated 01/22/2018
data-factory Data Factory Azure Ml Update Resource Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/v1/data-factory-azure-ml-update-resource-activity.md
Title: Update Machine Learning models using Azure Data Factory
description: Describes how to create predictive pipelines using Azure Data Factory v1 and Azure Machine Learning Studio (classic) -+ Last updated 01/22/2018
data-factory Data Factory Build Your First Pipeline Using Arm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/v1/data-factory-build-your-first-pipeline-using-arm.md
Title: Build your first data factory (Resource Manager template)
description: In this tutorial, you create a sample Azure Data Factory pipeline using an Azure Resource Manager template. -+ Last updated 01/22/2018
data-factory Data Factory Build Your First Pipeline Using Editor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/v1/data-factory-build-your-first-pipeline-using-editor.md
Title: Build your first data factory (Azure portal)
description: In this tutorial, you create a sample Azure Data Factory pipeline by using the Data Factory Editor in the Azure portal. -+ Last updated 01/22/2018
data-factory Data Factory Build Your First Pipeline Using Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/v1/data-factory-build-your-first-pipeline-using-powershell.md
Title: Build your first data factory (PowerShell)
description: In this tutorial, you create a sample Azure Data Factory pipeline using Azure PowerShell. -+ Last updated 01/22/2018
data-factory Data Factory Build Your First Pipeline Using Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/v1/data-factory-build-your-first-pipeline-using-rest-api.md
Title: Build your first data factory (REST)
description: In this tutorial, you create a sample Azure Data Factory pipeline using Data Factory REST API. -+ Last updated 11/01/2017
data-factory Data Factory Build Your First Pipeline Using Vs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/v1/data-factory-build-your-first-pipeline-using-vs.md
Title: Build your first data factory (Visual Studio)
description: In this tutorial, you create a sample Azure Data Factory pipeline using Visual Studio. -+
data-factory Data Factory Build Your First Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/v1/data-factory-build-your-first-pipeline.md
Title: 'Data Factory tutorial: First data pipeline '
description: This Azure Data Factory tutorial shows you how to create and schedule a data factory that processes data using Hive script on a Hadoop cluster. -+ Last updated 01/22/2018
data-factory Data Factory Compute Linked Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/v1/data-factory-compute-linked-services.md
Title: Compute environments supported by Azure Data Factory version 1
description: Learn about compute environments that you can use in Azure Data Factory pipelines (such as Azure HDInsight) to transform or process data. -+ Last updated 01/10/2018
data-factory Data Factory Create Data Factories Programmatically https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/v1/data-factory-create-data-factories-programmatically.md
Title: Create data pipelines by using Azure .NET SDK
description: Learn how to programmatically create, monitor, and manage Azure data factories by using Data Factory SDK. -+ Last updated 01/22/2018
data-factory Data Factory Create Datasets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/v1/data-factory-create-datasets.md
Title: Create datasets in Azure Data Factory
description: Learn how to create datasets in Azure Data Factory, with examples that use properties such as offset and anchorDateTime. -+ Last updated 01/10/2018
data-factory Data Factory Create Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/v1/data-factory-create-pipelines.md
Title: Create/Schedule Pipelines, Chain Activities in Data Factory
description: Learn to create a data pipeline in Azure Data Factory to move and transform data. Create a data-driven workflow to produce ready-to-use information. -+ Last updated 01/10/2018
data-factory Data Factory Customer Case Studies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/v1/data-factory-customer-case-studies.md
Title: Azure Data Factory - Customer case studies
description: Learn about how some of our customers have been using Azure Data Factory. -+ Last updated 01/10/2018
data-factory Data Factory Customer Profiling Usecase https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/v1/data-factory-customer-profiling-usecase.md
Title: Use Case - Customer Profiling
description: Learn how Azure Data Factory is used to create a data-driven workflow (pipeline) to profile gaming customers. -+ Last updated 01/10/2018
data-factory Data Factory Data Processing Using Batch https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/v1/data-factory-data-processing-using-batch.md
Title: Process large-scale datasets by using Data Factory and Batch
description: Describes how to process huge amounts of data in an Azure Data Factory pipeline by using the parallel processing capability of Azure Batch. -+ Last updated 01/10/2018
data-factory Data Factory Data Transformation Activities https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/v1/data-factory-data-transformation-activities.md
Title: 'Data Transformation: Process & transform data '
description: Learn how to transform data or process data in Azure Data Factory using Hadoop, Azure Machine Learning Studio (classic), or Azure Data Lake Analytics. -+ Last updated 01/10/2018
data-factory Data Factory Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/v1/data-factory-faq.md
Title: Azure Data Factory - Frequently Asked Questions
description: Frequently asked questions about Azure Data Factory. -+ Last updated 01/10/2018
data-factory Data Factory Functions Variables https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/v1/data-factory-functions-variables.md
Title: Data Factory Functions and System Variables
description: Provides a list of Azure Data Factory functions and system variables -+ Last updated 01/10/2018
data-factory Data Factory Hadoop Streaming Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/v1/data-factory-hadoop-streaming-activity.md
Title: Transform data using Hadoop Streaming Activity - Azure
description: Learn how you can use the Hadoop Streaming Activity in an Azure data factory to transform data by running Hadoop Streaming programs on an on-demand/your own HDInsight cluster. -+ Last updated 01/10/2018
data-factory Data Factory Hive Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/v1/data-factory-hive-activity.md
Title: Transform data using Hive Activity - Azure
description: Learn how you can use the Hive Activity in Azure Data Factory v1 to run Hive queries on an on-demand/your own HDInsight cluster. -+ Last updated 01/10/2018
data-factory Data Factory How To Use Resource Manager Templates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/v1/data-factory-how-to-use-resource-manager-templates.md
Title: Use Resource Manager templates in Data Factory
description: Learn how to create and use Azure Resource Manager templates to create Data Factory entities. -+ Last updated 01/10/2018
data-factory Data Factory Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/v1/data-factory-introduction.md
Title: Introduction to Data Factory, a data integration service
description: 'Learn what Azure Data Factory is: A cloud data integration service that orchestrates and automates movement and transformation of data.' -+ Last updated 01/22/2018
data-factory Data Factory Json Scripting Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/v1/data-factory-json-scripting-reference.md
Title: Azure Data Factory - JSON Scripting Reference
description: Provides JSON schemas for Data Factory entities. -+ Last updated 01/10/2018
data-factory Data Factory Map Reduce https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/v1/data-factory-map-reduce.md
Title: Invoke MapReduce Program from Azure Data Factory
description: Learn how to process data by running MapReduce programs on an Azure HDInsight cluster from an Azure data factory. -+ Last updated 01/10/2018
data-factory Data Factory Monitor Manage App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/v1/data-factory-monitor-manage-app.md
Title: Monitor and manage data pipelines - Azure
description: Learn how to use the Monitoring and Management app to monitor and manage Azure data factories and pipelines. -+ Last updated 01/10/2018
data-factory Data Factory Monitor Manage Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/v1/data-factory-monitor-manage-pipelines.md
Title: Monitor and manage pipelines by using the Azure portal and PowerShell
description: Learn how to use the Azure portal and Azure PowerShell to monitor and manage the Azure data factories and pipelines that you have created. -+ Last updated 04/30/2018
data-factory Data Factory Naming Rules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/v1/data-factory-naming-rules.md
Title: Rules for naming Azure Data Factory entities - version 1
description: Describes naming rules for Data Factory v1 entities. -+ Last updated 01/10/2018
data-factory Data Factory Pig Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/v1/data-factory-pig-activity.md
Title: Transform data using Pig Activity in Azure Data Factory
description: Learn how you can use the Pig Activity in Azure Data Factory v1 to run Pig scripts on an on-demand/your own HDInsight cluster. -+ Last updated 01/10/2018
data-factory Data Factory Product Reco Usecase https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/v1/data-factory-product-reco-usecase.md
Title: Data Factory Use Case - Product Recommendations
description: Learn about a use case implemented by using Azure Data Factory along with other services. -+ Last updated 01/10/2018
data-factory Data Factory Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/v1/data-factory-samples.md
Title: Azure Data Factory - Samples
description: Provides details about samples that ship with the Azure Data Factory service. -+ Last updated 01/10/2018
data-factory Data Factory Scheduling And Execution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/v1/data-factory-scheduling-and-execution.md
Title: Scheduling and Execution with Data Factory
description: Learn scheduling and execution aspects of Azure Data Factory application model. -+ Last updated 01/10/2018
data-factory Data Factory Spark https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/v1/data-factory-spark.md
Title: Invoke Spark programs from Azure Data Factory
description: Learn how to invoke Spark programs from an Azure data factory by using the MapReduce activity. -+ Last updated 01/10/2018
data-factory Data Factory Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/v1/data-factory-troubleshoot.md
Last updated 01/10/2018 -+ # Troubleshoot Data Factory issues
defender-for-iot Agent Based Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/agent-based-recommendations.md
Title: Agent based recommendations- description: Learn about the concept of security recommendations and how they are used for Defender for IoT devices.----- - Last updated 02/16/2021- # Security recommendations for IoT devices
Operational recommendations provide insights and suggestions to improve security
| Severity | Name | Data Source | Description |
|--|--|--|--|
-| Low | Agent sends unutilized messages | Classic Defender-IoT-micro-agent| 10% or more of security messages were smaller than 4 KB during the last 24 hours. |
-| Low | Security twin configuration not optimal | Classic Defender-IoT-micro-agent| Security twin configuration is not optimal. |
-| Low | Security twin configuration conflict | Classic Defender-IoT-micro-agent| Conflicts were identified in the security twin configuration. | |
+| Low | Agent sends unutilized messages | Classic Defender-IoT-micro-agent | 10% or more of security messages were smaller than 4 KB during the last 24 hours. |
+| Low | Security twin configuration not optimal | Classic Defender-IoT-micro-agent | Security twin configuration is not optimal. |
+| Low | Security twin configuration conflict | Classic Defender-IoT-micro-agent | Conflicts were identified in the security twin configuration. |
## Next steps
defender-for-iot Agent Based Security Alerts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/agent-based-security-alerts.md
Title: Agent based security alerts- description: Learn about security alerts and recommended remediation using Defender for IoT device's features and service.----- - Last updated 2/16/2021- # Defender for IoT devices security alerts
defender-for-iot Agent Based Security Custom Alerts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/agent-based-security-custom-alerts.md
Title: Agent based security custom alerts- description: Learn about customizable security alerts and recommended remediation using Defender for IoT device's features and service.----- - Last updated 2/16/2021-
defender-for-iot Alert Engine Messages https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/alert-engine-messages.md
+
+ Title: Alert types and descriptions
+description: Review Defender for IoT Alert descriptions.
+++ Last updated : 03/22/2021++++
+# Defender for IoT Engine alerts
+
+This article describes alerts that may be generated by the Defender for IoT engines. Alerts appear in the Alerts window, where you can manage the alert event.
+
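+Alerts can also be consumed programmatically. The minimal sketch below pulls the alert list from a sensor over its REST API and prints a summary. The endpoint path, authorization header, and JSON field names are assumptions for illustration; verify them against your sensor's API reference.
+
+```python
+# Minimal sketch: list alerts from a Defender for IoT sensor.
+# The endpoint path ("/api/v1/alerts"), the Authorization header,
+# and the "title"/"severity" field names are assumptions for
+# illustration only; check the sensor API reference.
+import requests
+
+SENSOR_URL = "https://my-sensor.example.com"   # hypothetical sensor address
+ACCESS_TOKEN = "<access-token>"                # generated in the sensor console
+
+response = requests.get(
+    f"{SENSOR_URL}/api/v1/alerts",
+    headers={"Authorization": ACCESS_TOKEN},
+    timeout=30,
+)
+response.raise_for_status()
+
+for alert in response.json():
+    print(f'{alert.get("severity", "?")}: {alert.get("title", "?")}')
+```
+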
+## Policy engine alerts
+
+Policy engine alerts describe deviations from learned baseline network behavior.
+
+| Title | Description | Severity |
+|--|--|--|
+| Abnormal usage of MAC Addresses | A new source device was detected on the network but has not been authorized. | Minor |
+| Beckhoff Software Changed | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major |
+| Database Login Failed | A failed login attempt was detected from a source device to a destination server. This might be the result of human error, but could also indicate a malicious attempt to compromise the server or data on it. | Major |
+| Emerson ROC Firmware Version Changed | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major |
+| External address within the network communicated with Internet | A source device defined as part of your network is communicating with Internet addresses. The source is not authorized to communicate with Internet addresses. | Critical |
+| Field Device Discovered Unexpectedly | A new source device was detected on the network but has not been authorized. | Major |
+| Firmware Change Detected | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major |
+| Firmware Version Changed | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major |
+| Foxboro I/A Unauthorized Operation | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| FTP Login Failed | A failed login attempt was detected from a source device to a destination server. This might be the result of human error, but could also indicate a malicious attempt to compromise the server or data on it. | Major |
+| Function Code Raised Unauthorized Exception | A source device (slave) returned an exception to a destination device (master). | Major |
+| GOOSE Message Type Settings | Message (identified by protocol ID) settings were changed on a source device. | Warning |
+| Honeywell Firmware Version Changed | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major |
+| Illegal HTTP Communication | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| Internet Access Detected | A source device defined as part of your network is communicating with Internet addresses. The source is not authorized to communicate with Internet addresses. | Major |
+| Mitsubishi Firmware Version Changed | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major |
+| Modbus Address Range Violation | A master device requested access to a new slave memory address. | Major |
+| Modbus Firmware Version Changed | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major |
+| New Activity Detected - CIP Class | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| New Activity Detected - CIP Class Service | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| New Activity Detected - CIP PCCC Command | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| New Activity Detected - CIP Symbol | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| New Activity Detected - EtherNet/IP I/O Connection | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| New Activity Detected - EtherNet/IP Protocol Command | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| New Activity Detected - GSM Message Code | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| New Activity Detected - LonTalk Command Codes | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| New Activity Detected - LonTalk Network Variable | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| New Activity Detected - Ovation Data Request | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| New Activity Detected - Read/Write Command (AMS Index Group) | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| New Activity Detected - Read/Write Command (AMS Index Offset) | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| New Activity Detected - Unauthorized DeltaV Message Type | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| New Activity Detected - Unauthorized DeltaV ROC Operation | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| New Activity Detected - Unauthorized RPC Message Type | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| New Activity Detected - Unauthorized RPC Procedure Invocation | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| New Activity Detected - Using AMS Protocol Command | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| New Activity Detected - Using Siemens SICAM Command | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| New Activity Detected - Using Suitelink Protocol command | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| New Activity Detected - Using Suitelink Protocol sessions | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| New Activity Detected - Using Yokogawa VNetIP Command | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| New Asset Detected | A new source device was detected on the network but has not been authorized. | Major |
+| New LLDP Device Configuration | A new source device was detected on the network but has not been authorized. | Major |
+| New Port Discovery | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Warning |
+| Omron FINS Unauthorized Command | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| S7 Plus PLC Firmware Changed | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major |
+| Sampled Values Message Type Settings | Message (identified by protocol ID) settings were changed on a source device. | Warning |
+| Suspicion of Illegal Integrity Scan | A scan was detected on a DNP3 source device (outstation). This scan was not authorized as learned traffic on your network. | Major |
+| Toshiba Computer Link Unauthorized Command | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Minor |
+| Unauthorized ABB Totalflow File Operation | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| Unauthorized ABB Totalflow Register Operation | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| Unauthorized Access to Siemens S7 Data Block | A source device attempted to access a resource on another device. An access attempt to this resource between these two devices has not been authorized as learned traffic on your network. | Warning |
+| Unauthorized Access to Siemens S7 Plus Object | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| Unauthorized Access to Wonderware Tag | A source device attempted to access a resource on another device. An access attempt to this resource between these two devices has not been authorized as learned traffic on your network. | Major |
+| Unauthorized BACNet Object Access | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| Unauthorized BACNet Route | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| Unauthorized Database Login | A login attempt between a source client and destination server was detected. Communication between these devices has not been authorized as learned traffic on your network. | Major |
+| Unauthorized Database Operation | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| Unauthorized Emerson ROC Operation | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| Unauthorized GE SRTP File Access | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| Unauthorized GE SRTP Protocol Command | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| Unauthorized GE SRTP System Memory Operation | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| Unauthorized HTTP Activity | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| Unauthorized HTTP Server | An unauthorized application was detected on a source device. The application has not been authorized as a learned application on your network. | Major |
+| Unauthorized HTTP SOAP Action | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| Unauthorized HTTP User Agent | An unauthorized application was detected on a source device. The application has not been authorized as a learned application on your network. | Major |
+| Unauthorized Internet Connectivity Detected | A source device defined as part of your network is communicating with Internet addresses. The source is not authorized to communicate with Internet addresses. | Critical |
+| Unauthorized Mitsubishi MELSEC Command | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| Unauthorized MMS Program Access | A source device attempted to access a resource on another device. An access attempt to this resource between these two devices has not been authorized as learned traffic on your network. | Major |
+| Unauthorized MMS Service | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| Unauthorized Multicast/Broadcast Connection | A Multicast/Broadcast connection was detected between a source device and other devices. Multicast/Broadcast communication is not authorized. | Critical |
+| Unauthorized Name Query | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| Unauthorized OPC UA Activity | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| Unauthorized OPC UA Request/Response | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| Unauthorized Operation was detected by a User Defined Rule | Traffic was detected between two devices. This activity is unauthorized based on a Custom Alert Rule defined by a user. | Major |
+| Unauthorized PLC Configuration Read | The source device is not defined as a programming device but performed a read/write operation on a destination controller. Programming changes should only be performed by programming devices. A programming application may have been installed on this device. | Warning |
+| Unauthorized PLC Configuration Write | The source device sent a command to read/write the program of a destination controller. This activity was not previously seen. | Major |
+| Unauthorized PLC Program Upload | The source device sent a command to read/write the program of a destination controller. This activity was not previously seen. | Major |
+| Unauthorized PLC Programming | The source device is not defined as a programming device but performed a read/write operation on a destination controller. Programming changes should only be performed by programming devices. A programming application may have been installed on this device. | Critical |
+| Unauthorized Profinet Frame Type | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| Unauthorized SAIA S-Bus Command | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| Unauthorized Siemens S7 Execution of Control Function | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| Unauthorized Siemens S7 Execution of User Defined Function | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| Unauthorized Siemens S7 Plus Block Access | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| Unauthorized Siemens S7 Plus Operation | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| Unauthorized SMB Login | A login attempt between a source client and destination server was detected. Communication between these devices has not been authorized as learned traffic on your network. | Major |
+| Unauthorized SNMP Operation | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| Unauthorized SSH Access | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| Unauthorized Windows Process | An unauthorized application was detected on a source device. The application has not been authorized as a learned application on your network. | Major |
+| Unauthorized Windows Service | An unauthorized application was detected on a source device. The application has not been authorized as a learned application on your network. | Major |
+| Unauthorized Operation was detected by a User Defined Rule | New traffic parameters were detected. This parameter combination violates a user-defined rule. | Major |
+| Unpermitted Modbus Schneider Electric Extension | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| Unpermitted Usage of ASDU Types | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| Unpermitted Usage of DNP3 Function Code | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| Unpermitted Usage of Internal Indication (IIN) | A DNP3 source device (outstation) reported an internal indication (IIN) that has not been authorized as learned traffic on your network. | Major |
+| Unpermitted Usage of Modbus Function Code | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
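+
+Most of the policy alerts above follow the same pattern: a parameter combination that was not observed during the learning period is flagged the first time it appears. The toy sketch below illustrates that idea only; it is not Defender for IoT's actual engine, and all names in it are hypothetical.
+
+```python
+# Toy illustration of learned-baseline (policy) detection: flows seen
+# during the learning period are authorized; anything new afterwards
+# raises an alert. All names here are illustrative only.
+from typing import NamedTuple, Optional
+
+class Flow(NamedTuple):
+    source: str
+    destination: str
+    protocol: str
+
+learned_baseline: set = set()
+
+def learn(flow: Flow) -> None:
+    """Record a flow as authorized during the learning period."""
+    learned_baseline.add(flow)
+
+def check(flow: Flow) -> Optional[str]:
+    """In detection mode, return an alert title for unseen flows."""
+    if flow not in learned_baseline:
+        return f"New Activity Detected - {flow.protocol}"
+    return None
+
+learn(Flow("10.0.0.5", "10.0.0.9", "Modbus"))           # learning period
+print(check(Flow("10.0.0.5", "10.0.0.9", "Modbus")))    # None: authorized
+print(check(Flow("10.0.0.7", "10.0.0.9", "S7")))        # alert: new flow
+```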
+
+## Anomaly engine alerts
+
+| Title | Description | Severity |
+|--|--|--|
+| Abnormal Exception Pattern in Slave | An excessive number of errors were detected on a source device. This may be the result of an operational issue. | Minor |
+| Abnormal HTTP Header Length | The source device sent an abnormal message. This may indicate an attempt to attack the destination device. | Critical |
+| Abnormal Number of Parameters in HTTP Header | The source device sent an abnormal message. This may indicate an attempt to attack the destination device. | Critical |
+| Abnormal Periodic Behavior In Communication Channel | A change in the frequency of communication between the source and destination devices was detected. | Minor |
+| Abnormal Termination of Applications | An excessive number of stop commands were detected on a source device. This may be the result of an operational issue or an attempt to manipulate the device. | Major |
+| Abnormal Traffic Bandwidth | Abnormal bandwidth was detected on a channel. Bandwidth appears to be significantly lower/higher than previously detected. For details, work with the Total Bandwidth widget. | Warning |
+| Abnormal Traffic Bandwidth Between Devices | Abnormal bandwidth was detected on a channel. Bandwidth appears to be significantly lower/higher than previously detected. For details, work with the Total Bandwidth widget. | Warning |
+| Address Scan Detected | A source device was detected scanning network devices. This device has not been authorized as a network scanning device. | Critical |
+| ARP Address Scan Detected | A source device was detected scanning network devices using Address Resolution Protocol (ARP). This device address has not been authorized as valid ARP scanning address. | Critical |
+| ARP Spoofing | An abnormal quantity of packets was detected in the network. This could indicate an attack, for example, an ARP spoofing or ICMP flooding attack. | Warning |
+| Excessive Login Attempts | A source device was seen performing excessive login attempts to a destination server. This may be a brute force attack. The server may be compromised by a malicious actor. | Critical |
+| Excessive Number of Sessions | A source device was seen performing excessive login attempts to a destination server. This may be a brute force attack. The server may be compromised by a malicious actor. | Critical |
+| Excessive Restart Rate of an Outstation | An excessive number of restart commands were detected on a source device. This may be the result of an operational issue or an attempt to manipulate the device. | Major |
+| Excessive SMB login attempts | A source device was seen performing excessive login attempts to a destination server. This may be a brute force attack. The server may be compromised by a malicious actor. | Critical |
+| ICMP Flooding | An abnormal quantity of packets was detected in the network. This could indicate an attack, for example, an ARP spoofing or ICMP flooding attack. | Warning |
+| Illegal HTTP Header Content | The source device initiated an invalid request. | Critical |
+| Inactive Communication Channel | A communication channel between two devices was inactive during a period in which activity is usually seen. This might indicate that the program generating this traffic was changed, or the program might be unavailable. It is recommended to review the configuration of the installed program and verify that it is configured properly. | Warning |
+| Long Duration Address Scan Detected | A source device was detected scanning network devices. This device has not been authorized as a network scanning device. | Critical |
+| Password Guessing Attempt Detected | A source device was seen performing excessive login attempts to a destination server. This may be a brute force attack. The server may be compromised by a malicious actor. | Critical |
+| PLC Scan Detected | A source device was detected scanning network devices. This device has not been authorized as a network scanning device. | Critical |
+| Port Scan Detected | A source device was detected scanning network devices. This device has not been authorized as a network scanning device. | Critical |
+| Unexpected message length | The source device sent an abnormal message. This may indicate an attempt to attack the destination device. | Critical |
+| Unexpected Traffic for Standard Port | Traffic was detected on a device using a port reserved for another protocol. | Major |
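+
+Several of these anomaly alerts ("Excessive Login Attempts", "Excessive SMB login attempts", "Password Guessing Attempt Detected") describe rate-based detections. Below is a toy sliding-window counter of the kind such a detection might use; the threshold and window are arbitrary illustrations, not the product's values.
+
+```python
+# Toy sliding-window rate detector: flag a source that produces more
+# than MAX_FAILURES failed logins within WINDOW_SECONDS. Both values
+# are arbitrary illustrations, not Defender for IoT's thresholds.
+import time
+from collections import defaultdict, deque
+from typing import Optional
+
+WINDOW_SECONDS = 60
+MAX_FAILURES = 10
+
+failed_logins = defaultdict(deque)  # source IP -> failure timestamps
+
+def record_failed_login(source_ip: str, now: Optional[float] = None) -> bool:
+    """Record one failure; return True if the source looks excessive."""
+    now = time.time() if now is None else now
+    window = failed_logins[source_ip]
+    window.append(now)
+    # Drop timestamps that fell out of the sliding window.
+    while window and now - window[0] > WINDOW_SECONDS:
+        window.popleft()
+    return len(window) > MAX_FAILURES
+```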
+
+## Protocol violation engine alerts
+
+| Title | Description | Severity |
+|--|--|--|
+| Excessive Malformed Packets In a Single Session | An abnormal number of malformed packets was sent from the source device to the destination device. This might indicate erroneous communications, or an attempt to manipulate the targeted device. | Major |
+| Firmware Update | A source device sent a command to update firmware on a destination device. Verify that recent programming, configuration and firmware upgrades made to the destination device are valid. | Warning |
+| Function Code Not Supported by Outstation | The destination device received an invalid request. | Major |
+| Illegal BACNet message | The source device initiated an invalid request. | Major |
+| Illegal Connection Attempt on Port 0 | A source device attempted to connect to a destination device on port number zero (0). For TCP, port 0 is reserved and cannot be used. For UDP, the port is optional and a value of 0 means no port. There is usually no service on a system that listens on port 0. This event may indicate an attempt to attack the destination device, or indicate that an application was programmed incorrectly. | Minor |
+| Illegal DNP3 Operation | The source device initiated an invalid request. | Major |
+| Illegal MODBUS Operation (Exception Raised by Master) | The source device initiated an invalid request. | Major |
+| Illegal MODBUS Operation (Function Code Zero) | The source device initiated an invalid request. | Major |
+| Illegal Protocol Version | The source device initiated an invalid request. | Major |
+| Incorrect Parameter Sent to Outstation | The destination device received an invalid request. | Major |
+| Initiation of an Obsolete Function Code (Initialize Data) | The source device initiated an invalid request. | Minor |
+| Initiation of an Obsolete Function Code (Save Config) | The source device initiated an invalid request. | Minor |
+| Master Requested an Application Layer Confirmation | The source device initiated an invalid request. | Warning |
+| Modbus Exception | A source device (slave) returned an exception to a destination device (master). | Major |
+| Slave Device Received Illegal ASDU Type | The destination device received an invalid request. | Major |
+| Slave Device Received Illegal Command Cause of Transmission | The destination device received an invalid request. | Major |
+| Slave Device Received Illegal Common Address | The destination device received an invalid request. | Major |
+| Slave Device Received Illegal Data Address Parameter | The destination device received an invalid request. | Major |
+| Slave Device Received Illegal Data Value Parameter | The destination device received an invalid request. | Major |
+| Slave Device Received Illegal Function Code | The destination device received an invalid request. | Major |
+| Slave Device Received Illegal Information Object Address | The destination device received an invalid request. | Major |
+| Unknown Object Sent to Outstation | The destination device received an invalid request. | Major |
+| Usage of a Reserved Function Code | The source device initiated an invalid request. | Major |
+| Usage of Improper Formatting by Outstation | The source device initiated an invalid request. | Warning |
+| Usage of Reserved Status Flags (IIN) | A DNP3 source device (outstation) used the reserved Internal Indicator 2.6. It is recommended to check the device's configuration. | Warning |
+
+## Malware engine alerts
+
+| Title | Description| Severity |
+|--|--|--|
+| Connection Attempt to Known Malicious IP | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major |
+| Invalid SMB Message (DoublePulsar Backdoor Implant) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical |
+| Malicious Domain Name Request | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major |
+| Malware Test File Detected - EICAR AV Success | An EICAR AV test file was detected in traffic between two devices. The file is not malware. It is used to confirm that antivirus software is installed correctly, to demonstrate what happens when a virus is found, and to check internal procedures and reactions when a virus is found. Antivirus software should detect EICAR as if it were a real virus. | Major |
+| Suspicion of Conficker Malware | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major |
+| Suspicion of Denial Of Service Attack | A source device attempted to initiate an excessive number of new connections to a destination device. This may be a denial-of-service (DoS) attack against the destination device, and might interrupt device functionality, impact performance and service availability, or cause unrecoverable errors. | Critical |
+| Suspicion of Malicious Activity | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major |
+| Suspicion of Malicious Activity (BlackEnergy) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical |
+| Suspicion of Malicious Activity (DarkComet) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical |
+| Suspicion of Malicious Activity (Duqu) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical |
+| Suspicion of Malicious Activity (Flame) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical |
+| Suspicion of Malicious Activity (Havex) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical |
+| Suspicion of Malicious Activity (Karagany) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical |
+| Suspicion of Malicious Activity (LightsOut) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical |
+| Suspicion of Malicious Activity (Name Queries) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major |
+| Suspicion of Malicious Activity (Poison Ivy) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical |
+| Suspicion of Malicious Activity (Regin) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical |
+| Suspicion of Malicious Activity (Stuxnet) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical |
+| Suspicion of Malicious Activity (WannaCry) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major |
+| Suspicion of NotPetya Malware - Illegal SMB Parameters Detected | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical |
+| Suspicion of NotPetya Malware - Illegal SMB Transaction Detected | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical |
+| Suspicion of Remote Code Execution with PsExec | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major |
+| Suspicion of Remote Windows Service Management | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major |
+| Suspicious Executable File Detected on Endpoint | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major |
+| Suspicious Traffic Detected | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical |
+
+## Operational engine alerts
+
+| Title | Description | Severity |
+|--|--|--|
+| An S7 Stop PLC Command was Sent | The source device sent a stop command to a destination controller. The controller will stop operating until a start command is sent. | Warning |
+| BACNet Operation Failed | A server returned an error code. This indicates a server error or an invalid request by a client. | Major |
+| Bad MMS Device State | An MMS Virtual Manufacturing Device (VMD) sent a status message. The message indicates that the server may not be configured correctly, may be only partially operational, or may not be operational at all. | Major |
+| Change of Device Configuration | A configuration change was detected on a source device. | Minor |
+| Continuous Event Buffer Overflow at Outstation | A buffer overflow event was detected on a source device. The event may cause data corruption, program crashes, or execution of malicious code. | Major |
+| Controller Reset | A source device sent a reset command to a destination controller. The controller stopped operating temporarily and started again automatically. | Warning |
+| Controller Stop | The source device sent a stop command to a destination controller. The controller will stop operating until a start command is sent. | Warning |
+| Device Failed to Receive a Dynamic IP Address | The source device is configured to receive a dynamic IP address from a DHCP server but did not receive an address. This indicates a configuration error on the device, or an operational error in the DHCP server. It is recommended to notify the network administrator of the incident. | Major |
+| Device is Suspected to be Disconnected (Unresponsive) | A source device did not respond to a command sent to it. It may have been disconnected when the command was sent. | Major |
+| EtherNet/IP CIP Service Request Failed | A server returned an error code. This indicates a server error or an invalid request by a client. | Major |
+| EtherNet/IP Encapsulation Protocol Command Failed | A server returned an error code. This indicates a server error or an invalid request by a client. | Major |
+| Event Buffer Overflow in Outstation | A buffer overflow event was detected on a source device. The event may cause data corruption, program crashes, or execution of malicious code. | Major |
+| Expected Backup Operation Did Not Occur | Expected backup/file transfer activity did not occur between two devices. This may indicate errors in the backup/file transfer process. | Major |
+| GE SRTP Command Failure | A server returned an error code. This indicates a server error or an invalid request by a client. | Major |
+| GE SRTP Stop PLC Command was Sent | The source device sent a stop command to a destination controller. The controller will stop operating until a start command is sent. | Warning |
+| GOOSE Control Block Requires Further Configuration | A source device sent a GOOSE message indicating that the device needs commissioning. This means the GOOSE control block requires further configuration and GOOSE messages are partially or completely non-operational. | Major |
+| GOOSE Dataset Configuration was Changed | A message (identified by protocol ID) dataset was changed on a source device. This means the device will report a different dataset for this message. | Warning |
+| Honeywell Controller Unexpected Status | A Honeywell Controller sent an unexpected diagnostic message indicating a status change. | Warning |
+| HTTP Client Error | The source device initiated an invalid request. | Warning |
+| Illegal IP Address | Traffic was detected between a source device and an invalid IP address. This may indicate incorrect configuration or an attempt to generate illegal traffic. | Minor |
+| Master-Slave Authentication Error | The authentication process between a DNP3 source device (master) and a destination device (outstation) failed. | Minor |
+| MMS Service Request Failed | A server returned an error code. This indicates a server error or an invalid request by a client. | Major |
+| No Traffic Detected on Sensor Interface | A sensor stopped detecting network traffic on a network interface. | Critical |
+| OPC UA Server Raised an Event That Requires User's Attention | An OPC UA server sent an event notification to a client. This type of event requires user attention. | Major |
+| OPC UA Service Request Failed | A server returned an error code. This indicates a server error or an invalid request by a client. | Major |
+| Outstation Restarted | A cold restart was detected on a source device. This means the device was physically turned off and back on again. | Warning |
+| Outstation Restarts Frequently | An excessive number of cold restarts were detected on a source device. This means the device was physically turned off and back on again an excessive number of times. | Minor |
+| Outstation's Configuration Changed | A configuration change was detected on a source device. | Major |
+| Outstation's Corrupted Configuration Detected | This DNP3 source device (outstation) reported a corrupted configuration. | Major |
+| Profinet DCP Command Failed | A server returned an error code. This indicates a server error or an invalid request by a client. | Major |
+| Profinet Device Factory Reset | A source device sent a factory reset command to a Profinet destination device. The reset command clears Profinet device configurations and stops its operation. | Warning |
+| RPC Operation Failed | A server returned an error code. This indicates a server error or an invalid request by a client. | Major |
+| Sampled Values Message Dataset Configuration was Changed | A message (identified by protocol ID) dataset was changed on a source device. This means the device will report a different dataset for this message. | Warning |
+| Slave Device Unrecoverable Failure | An unrecoverable condition error was detected on a source device. This kind of error usually indicates a hardware failure or failure to perform a specific command. | Major |
+| Suspicion of Hardware Problems in Outstation | An unrecoverable condition error was detected on a source device. This kind of error usually indicates a hardware failure or failure to perform a specific command. | Major |
+| Suspicion of Unresponsive MODBUS Device | A source device did not respond to a command sent to it. It may have been disconnected when the command was sent. | Minor |
+| Traffic Detected on Sensor Interface | A sensor resumed detecting network traffic on a network interface. | Warning |
+
+## Next steps
+
+You can [Manage alert events](how-to-manage-the-alert-event.md).
+Learn how to [Forward alert information](how-to-forward-alert-information-to-partners.md).
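+
+As a complement to the built-in forwarding rules (which are configured in the console, not in code), the sketch below pushes alert details to a partner endpoint over a webhook. The URL and payload shape are assumptions for illustration only.
+
+```python
+# Minimal sketch: forward an alert to a partner system over HTTP.
+# The webhook URL and payload fields are hypothetical; Defender for
+# IoT's own forwarding rules are configured in the console.
+import requests
+
+PARTNER_WEBHOOK = "https://siem.example.com/ingest"  # hypothetical endpoint
+
+def forward_alert(alert: dict) -> None:
+    payload = {
+        "source": "defender-for-iot",
+        "title": alert.get("title"),
+        "severity": alert.get("severity"),
+        "raw": alert,
+    }
+    response = requests.post(PARTNER_WEBHOOK, json=payload, timeout=10)
+    response.raise_for_status()
+```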
defender-for-iot Architecture Agent Based https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/architecture-agent-based.md
Title: Agent-based solution architecture description: Learn about Azure Defender for IoT agent-based architecture and information flow.----- - Last updated 1/25/2021- # Agent-based solution for device builders
defender-for-iot Architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/architecture.md
Title: Agentless solution architecture description: Learn about Azure Defender for IoT agentless architecture and information flow.----- - Last updated 1/25/2021
defender-for-iot Azure Iot Security Local Configuration C https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/azure-iot-security-local-configuration-c.md
Title: Security agent local configuration (C) description: Learn about Defender for IoT agent local configurations for C.----- - Last updated 10/08/2020- # Understanding the LocalConfiguration.json file - C agent
defender-for-iot Azure Iot Security Local Configuration Csharp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/azure-iot-security-local-configuration-csharp.md
Title: Defender for IoT security agent local configuration (C#) description: Learn more about the Defender for IoT security service, security agent local configuration file for C#.----- - Last updated 10/08/2020- # Understanding the local configuration file (C# agent)
defender-for-iot Azure Rtos Security Module Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/azure-rtos-security-module-api.md
Title: Defender-IoT-micro-agent for Azure RTOS API description: Reference API for the Defender-IoT-micro-agent for Azure RTOS.------ - Last updated 09/07/2020
defender-for-iot Concept Agent Portfolio Overview Os Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/concept-agent-portfolio-overview-os-support.md
Title: Agent portfolio overview and OS support (Preview) description: Azure Defender for IoT provides a large portfolio of agents based on the device type. --- Last updated 1/20/2021 - # Agent portfolio overview and OS support (Preview)
defender-for-iot Concept Baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/concept-baseline.md
Title: Baseline and custom checks description: Learn about the concept of Azure Defender for IoT baseline.----- - Last updated 10/07/2019- # Azure Defender for IoT baseline and custom checks
defender-for-iot Concept Customizable Security Alerts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/concept-customizable-security-alerts.md
Title: Custom security alerts for IoT Hub description: Learn about customizable security alerts and recommended remediation using Defender for IoT Hub's features and service.----- - Last updated 2/16/2021- # Defender for IoT Hub custom security alerts
defender-for-iot Concept Event Aggregation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/concept-event-aggregation.md
Title: Event aggregation (Preview)- description: Defender for IoT security agents collect data and system events from your local device, and send the data to the Azure cloud for processing and analytics.--- Last updated 1/20/2021 - # Event aggregation (Preview)
defender-for-iot Concept Key Concepts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/concept-key-concepts.md
Title: Key advantages description: Learn about basic Defender for IoT concepts.--- Last updated 12/13/2020 - # Basic concepts
defender-for-iot Concept Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/concept-recommendations.md
Title: Security recommendations for IoT Hub description: Learn about the concept of security recommendations and how they are used in the Defender for IoT Hub.----- - Last updated 02/16/2021- # Security recommendations for IoT Hub
defender-for-iot Concept Rtos Security Alerts Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/concept-rtos-security-alerts-recommendations.md
Title: Defender-IoT-micro-agent for Azure RTOS built-in & customizable alerts and recommendations description: Learn about security alerts and recommended remediation using the Defender-IoT-micro-agent for Azure RTOS.------ - Last updated 09/07/2020- # Defender-IoT-micro-agent for Azure RTOS security alerts and recommendations (preview)
defender-for-iot Concept Rtos Security Module https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/concept-rtos-security-module.md
Title: Conceptual explanation of the basics of the Defender-IoT-micro-agent for Azure RTOS description: Learn the basics about the Defender-IoT-micro-agent for Azure RTOS concepts and workflow.------ - Last updated 09/09/2020- # Defender-IoT-micro-agent for Azure RTOS (preview)
defender-for-iot Concept Security Agent Authentication Methods https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/concept-security-agent-authentication-methods.md
Title: Security agent authentication methods description: Learn about the different authentication methods available when using the Defender for IoT service.----- - Last updated 01/24/2021- # Security agent authentication methods
defender-for-iot Concept Security Agent Authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/concept-security-agent-authentication.md
Title: Security agent authentication (Preview)- description: Perform micro agent authentication with two possible methods.--- Last updated 1/20/2021 - # Micro agent authentication methods (Preview)
defender-for-iot Concept Security Alerts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/concept-security-alerts.md
Title: Built-in & custom alerts list description: Learn about security alerts and recommended remediation using Defender for IoT Hub's features and service.----- - Last updated 2/16/2021- # Defender for IoT Hub security alerts
defender-for-iot Concept Security Module https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/concept-security-module.md
Title: Defender-IoT-micro-agent and device twins description: Learn about the concept of Defender-IoT-micro-agent twins and how they are used in Defender for IoT.----- - Last updated 07/24/2019- # Defender-IoT-micro-agent
defender-for-iot Concept Security Posture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/concept-security-posture.md
Title: Security posture - CIS benchmark- description: Improve your security compliance and posture by using Defender for IoT micro agent.--- Last updated 1/20/2021 - # Security posture – CIS benchmark
defender-for-iot Concept Standalone Micro Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/concept-standalone-micro-agent-overview.md
Title: Standalone micro agent overview (Preview)- description: The Azure Defender for IoT security agents allow you to build security directly into your new IoT devices and Azure IoT projects.--- Last updated 1/19/2021 - # Standalone micro agent overview (Preview)
defender-for-iot Edge Security Module Deprecation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/edge-security-module-deprecation.md
Title: Feature support and retirement- description: Defender for IoT will continue to support C, C#, and Edge until March 1, 2022. --- Last updated 1/21/2021-
defender-for-iot Event Aggregation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/event-aggregation.md
Title: Defender-IoT-micro-agent classic event aggregation description: Learn about Defender for IoT event aggregation.----- - Previously updated : 1/20/2021- Last updated : 3/23/2021 # Defender-IoT-micro-agent classic event aggregation
defender-for-iot Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/getting-started.md
Title: "Quickstart: Getting started" description: In this quickstart you will learn how to get started with understanding the basic workflow for Defender for IoT deployment.---- - Last updated 2/18/2021- # Quickstart: Get started with Defender for IoT
defender-for-iot How To Accelerate Alert Incident Response https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-accelerate-alert-incident-response.md
Title: Accelerate alert workflows description: Improve alert and incident workflows.--- Last updated 12/02/2020-
These fields should be configured in the partner solution to display the alert groups.
### Default alert groups The following alert groups are automatically defined:
-| | | |
-|--|--|--|
-| Abnormal communication behavior | Custom alerts | Remote access |
-| Abnormal HTTP communication behavior | Discovery | Restart and stop commands |
-| Authentication | Firmware change | Scan |
-| Unauthorized communication behavior | Illegal commands | Sensor traffic |
-| Bandwidth anomalies | Internet access | Suspicion of malware |
-| Buffer overflow | Operation failures | Suspicion of malicious activity |
-| Command failures | Operational issues | |
-| Configuration changes | Programming | |
+
+- Abnormal communication behavior
+- Custom alerts
+- Remote access
+- Abnormal HTTP communication behavior
+- Discovery
+- Restart and stop commands
+- Authentication
+- Firmware change
+- Scan
+- Unauthorized communication behavior
+- Illegal commands
+- Sensor traffic
+- Bandwidth anomalies
+- Internet access
+- Suspicion of malware
+- Buffer overflow
+- Operation failures
+- Suspicion of malicious activity
+- Command failures
+- Operational issues
+- Configuration changes
+- Programming
Alert groups are predefined. For details about alerts associated with alert groups, and about creating custom alert groups, contact [Microsoft Support](https://support.microsoft.com/supportforbusiness/productselection?sapId=82c8f35-1b8e-f274-ec11-c6efdd6dd099).
defender-for-iot How To Activate And Set Up Your On Premises Management Console https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-activate-and-set-up-your-on-premises-management-console.md
Title: Activate and set up your on-premises management console description: Activating the management console ensures that sensors are registered with Azure and send information to the on-premises management console, and that the on-premises management console carries out management tasks on connected sensors.--- Last updated 3/18/2021 - # Activate and set up your on-premises management console
defender-for-iot How To Activate And Set Up Your Sensor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-activate-and-set-up-your-sensor.md
Title: Activate and set up your sensor description: This article describes how to sign in and activate a sensor console.--- Last updated 1/12/2021 - # Activate and set up your sensor
defender-for-iot How To Agent Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-agent-configuration.md
Title: Configure security agents description: Learn how to configure Defender for IoT security agents for use with the Defender for IoT security service.----- - Last updated 09/09/2020- # Tutorial: Configure security agents
defender-for-iot How To Azure Rtos Security Module https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-azure-rtos-security-module.md
Title: Configure and customize Defender-IoT-micro-agent for Azure RTOS description: Learn about how to configure and customize your Defender-IoT-micro-agent for Azure RTOS.----- - Last updated 03/07/2021- # Configure and customize Defender-IoT-micro-agent for Azure RTOS (preview)
defender-for-iot How To Configure Agent Based Solution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-configure-agent-based-solution.md
Title: Configure Azure Defender for IoT agent-based solution description: Learn how to configure data collection in Azure Defender for IoT agent-based solution--- Last updated 1/21/2021 - # Configure Azure Defender for IoT agent-based solution
defender-for-iot How To Configure With Sentinel https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-configure-with-sentinel.md
Title: Configure Azure Sentinel for Defender for IoT description: Explains how to configure Azure Sentinel to receive data from your Defender for IoT solution.---- - Last updated 12/28/2020- # Connect your data from Defender for IoT to Azure Sentinel
defender-for-iot How To Control What Traffic Is Monitored https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-control-what-traffic-is-monitored.md
Title: Control what traffic is monitored description: Sensors automatically perform deep packet detection for IT and OT traffic and resolve information about network devices, such as device attributes and network behavior. Several tools are available to control the type of traffic that each sensor detects. --- Last updated 12/07/2020 - # Control what traffic is monitored
defender-for-iot How To Create And Manage Users https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-create-and-manage-users.md
Title: Create and manage users description: Create and manage users of sensors and the on-premises management console. Users can be assigned the role of administrator, security analyst, or read-only user.--- Last updated 03/03/2021 - # About Defender for IoT console users
defender-for-iot How To Create Attack Vector Reports https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-create-attack-vector-reports.md
Title: Create attack vector reports description: Attack vector reports provide a graphical representation of a vulnerability chain of exploitable devices.--- Last updated 12/17/2020 - # Attack vector reporting
defender-for-iot How To Create Data Mining Queries https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-create-data-mining-queries.md
Title: Create data mining reports description: Generate comprehensive and granular information about your network devices at various layers, such as protocols, firmware versions, or programming commands.--- Last updated 01/20/2021 - # Sensor data mining queries
defender-for-iot How To Create Risk Assessment Reports https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-create-risk-assessment-reports.md
Title: Create risk assessment reports description: Gain insight into network risks detected by individual sensors or an aggregate view of risks detected by all sensors.--- Last updated 12/17/2020 - # Risk assessment reporting
defender-for-iot How To Create Trends And Statistics Reports https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-create-trends-and-statistics-reports.md
Title: Generate trends and statistics reports description: Gain insight into network activity, statistics, and trends by using Defender for IoT Trends and Statistics widgets.--- Last updated 2/21/2021 - # Sensor trends and statistics reports
defender-for-iot How To Define Global User Access Control https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-define-global-user-access-control.md
Title: Define global user access control description: In large organizations, user permissions can be complex and might be determined by a global organizational structure, in addition to the standard site and zone structure.--- Last updated 12/08/2020 - # Define global access control
defender-for-iot How To Deploy Agent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-deploy-agent.md
Title: Select and deploy security agents description: Learn how to select and deploy Defender for IoT security agents on IoT devices.----- - Last updated 07/23/2019- # Select and deploy a security agent on your IoT device
defender-for-iot How To Deploy Edge https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-deploy-edge.md
Title: Deploy IoT Edge Defender-IoT-micro-agent description: Learn about how to deploy a Defender for IoT security agent on IoT Edge.----- - Last updated 1/30/2020- # Deploy a Defender-IoT-micro-agent on your IoT Edge device
defender-for-iot How To Deploy Linux C https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-deploy-linux-c.md
Title: Install & deploy Linux C agent description: Learn how to install and deploy the Defender for IoT C-based security agent on Linux----- - Last updated 07/23/2019- # Deploy Defender for IoT C based security agent for Linux
defender-for-iot How To Deploy Linux Cs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-deploy-linux-cs.md
Title: Install & deploy Linux C# agent description: Learn how to install and deploy the Defender for IoT C#-based security agent on Linux------ - Last updated 09/09/2020- # Deploy Defender for IoT C# based security agent for Linux
defender-for-iot How To Deploy Windows Cs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-deploy-windows-cs.md
Title: Install C# agent on Windows device description: Learn about how to install Defender for IoT agent on 32-bit or 64-bit Windows devices.------ - Last updated 09/09/2020- # Deploy a Defender for IoT C#-based security agent for Windows
defender-for-iot How To Enhance Port And Vlan Name Resolution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-enhance-port-and-vlan-name-resolution.md
Title: Enhance port and VLAN name resolution description: Customize port and VLAN names on your sensors to enrich device resolution.--- Last updated 12/13/2020 -
-# Enhance port and VLAN name resolution
+# Enhance port, VLAN, and OS resolution
You can customize port and VLAN names on your sensors to enrich device resolution.
VLAN names can contain up to 50 ASCII characters.
> VLAN names are not synchronized between the sensor and the management console. You need to define the name on the management console as well. For Cisco switches, add the following line to the span configuration: `monitor session 1 destination interface XX/XX encapsulation dot1q`. In that command, *XX/XX* is the name and number of the port.
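For reference, a minimal sketch of a complete SPAN session on a Cisco switch follows. The interface names are placeholders for your own topology: `Gi1/1` stands in for the monitored port, and `Gi1/2` for the port connected to the sensor's monitoring interface.

```
! Mirror transmit and receive traffic from the monitored port to the
! sensor port, preserving VLAN tags with dot1q encapsulation.
monitor session 1 source interface Gi1/1 both
monitor session 1 destination interface Gi1/2 encapsulation dot1q
```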
-To configure VLANs:
+To configure VLAN names:
1. On the side menu, select **System Settings**.
To configure VLANs:
3. Add a unique name next to each VLAN ID.
+## Improve device operating system classification: data enhancement
+
+Sensors continuously autodiscover new devices, as well as changes to previously discovered devices, including operating system types.
+
+Under certain circumstances, conflicts might be detected in discovered operating systems. This can happen, for example, if an operating system version refers to either desktop or server systems. If this happens, you'll receive a notification with optional operating system classifications.
++
+Investigate the recommendations to enrich operating system classification. This classification appears in the device inventory, data-mining reports, and other displays. Keeping this information up to date can improve the accuracy of alerts, threats, and risk analysis reports.
+
+To access operating system recommendations:
+
+1. Select **System Settings**.
+1. Select **Data Enhancement**.
+ ## Next steps View enriched device information in various reports:
defender-for-iot How To Forward Alert Information To Partners https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-forward-alert-information-to-partners.md
Title: Forward alert information description: You can send alert information to partner systems by working with forwarding rules.--- Last updated 12/02/2020 - # Forward alert information
defender-for-iot How To Gain Insight Into Global Regional And Local Threats https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-gain-insight-into-global-regional-and-local-threats.md
Title: Gain insight into global, regional, and local threats description: Gain insight into global, regional, and local threats by using the site map in the on-premises management console.--- Last updated 12/07/2020 - # Gain insight into global, regional, and local threats
defender-for-iot How To Identify Required Appliances https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-identify-required-appliances.md
Title: Identify required appliances description: Learn about hardware and virtual appliances for certified Defender for IoT sensors and the on-premises management console. --- Last updated 01/13/2021 - # Identify required appliances
After you purchase the appliance, go to **Defender for IoT** > **Network Sensors
:::image type="content" source="media/how-to-prepare-your-network/azure-defender-for-iot-sensor-download-software-screen.png" alt-text="Network sensors ISO.":::
-## Enterprise deployment: Dell PowerEdge R340 XL
-
-| Component | Technical specifications |
-|--|--|
-| Chassis | 1U rack server
-| Dimensions | 42.8 x 434.0 x 596 (mm) /1.67" x 17.09" x 23.5" (in) |
-| Weight | Max 29.98 lb/13.6 kg |
-| Processor | Intel Xeon E-2144G 3.6 GHz, 8M cache, 4C/8T, turbo (71 W) |
-| Chipset | Intel C246 |
-| Memory | 32 GB = 2 x 16-GB 2666MT/s DDR4 ECC UDIMM |
-| Storage | 3 x 2-TB 7.2 K RPM SATA 6-Gbps 512n 3.5-in Hot-Plug Hard Drive - RAID 5 |
-| Network controller | On-board: 2 x 1-Gb Broadcom BCM5720<br>On-board LOM: iDRAC Port Card 1-Gb Broadcom BCM5720 <br><br>External: 1 x Intel Ethernet i350 QP 1-Gb Server Adapter, Low Profile |
-| Management | iDRAC9 Enterprise |
-| Device access | Two rear USB 3.0 <br> One front USB 3.0 |
-| Power | Dual Hot Plug Power Supplies 350 W |
-| Rack support | ReadyRails II sliding rails for tool-less mounting in 4-post racks with square or unthreaded round holes or tooled mounting in 4-post threaded hole racks, with support for optional tool-less cable management arm. |
-
-## Dell R340 BOM
-- ## Next steps [About Azure Defender for IoT installation](how-to-install-software.md)
defender-for-iot How To Import Device Information https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-import-device-information.md
Title: Import device information description: Defender for IoT sensors monitor and analyze mirrored traffic. In these cases, you might want to import data to enrich information on devices already detected.--- Last updated 12/06/2020 - # Import device information to a sensor
defender-for-iot How To Install Software https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-install-software.md
Title: Defender for IoT installation description: Learn how to install a sensor and the on-premises management console for Azure Defender for IoT.--- Last updated 12/2/2020 - # Defender for IoT installation
defender-for-iot How To Investigate All Enterprise Sensor Detections In A Device Inventory https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-investigate-all-enterprise-sensor-detections-in-a-device-inventory.md
Title: Learn about devices discovered by all enterprise sensors description: Use the device inventory in the on-premises management console to get a comprehensive view of device information from connected sensors. Use import, export, and filtering tools to manage this information. --- Last updated 12/02/2020 - # Investigate all enterprise sensor detections in the device inventory
To filter the device inventory:
5. To change the filter definitions, edit them and select **Save Changes**.
-## View device information per zone
-
-You can learn the following information about devices in a zone.
-
-### View a device map
-
-To view a sensor device map for a selected zone:
--- In the **Site Management** window, select **View Zone Map** from the bar that contains the zone name.-
- :::image type="content" source="media/how-to-work-with-asset-inventory-information/default-region-to-default-business-unit-v2.png" alt-text="Default region to default business unit.":::
-
-The **Device Map** window appears. It shows all the network elements related to the selected zone, including the sensors, the devices connected to them, and other information.
--
-The following tools are available for viewing devices and device information from the map. For details about each of these features, see the *Defender for IoT platform user guide*.
--- **Map zoom views**: Simplified View, Connections View, and Detailed View. The displayed map view varies depending on the map's zoom level. You switch between map views by adjusting the zoom levels.-
- :::image type="icon" source="media/how-to-work-with-asset-inventory-information/zoom-icon.png" border="false":::
--- **Map search and layout tools**: Tools used to display varied network segments, devices, device groups, or layers.-
- :::image type="content" source="media/how-to-work-with-asset-inventory-information/search-and-layout-tools.png" alt-text="Screenshot of the Search and Layout Tools view.":::
--- **Labels and indicators on devices:** For example, the number of devices grouped in a subnet in an IT network. In this example, it's 8.-
- :::image type="content" source="media/how-to-work-with-asset-inventory-information/labels-and-indicators.png" alt-text="Screenshot of labels and indicators.":::
--- **View device properties**: For example, the sensor that's monitoring the device and basic device properties. Right-click the device to view the device properties.-
- :::image type="content" source="media/how-to-work-with-asset-inventory-information/asset-properties-v2.png" alt-text="Screenshot of the Device Properties view.":::
--- **Alert associated with a device:** Right-click the device to view related alerts.-
- :::image type="content" source="media/how-to-work-with-asset-inventory-information/show-alerts.png" alt-text="Screenshot of the Show Alerts view.":::
-
-### View alerts associated with a zone
-
-To view alerts associated with a specific zone:
--- Select the alert icon form the **Zone** window. -
- :::image type="content" source="media/how-to-work-with-asset-inventory-information/business-unit-view-v2.png" alt-text="The default Business Unit view with examples.":::
-
-For more information, see [Overview: Working with alerts](how-to-work-with-alerts-on-premises-management-console.md).
-
-### View the device inventory of a zone
-
-To view the device inventory associated with a specific zone:
--- Select **View Device Inventory** from the **Zone** window.-
- :::image type="content" source="media/how-to-work-with-asset-inventory-information/default-business-unit.png" alt-text="The device inventory screen will appear.":::
-
-For more information, see [Investigate all enterprise sensor detections in a device inventory](how-to-investigate-all-enterprise-sensor-detections-in-a-device-inventory.md).
-
-### View additional zone information
-
-The following additional zone information is available:
--- **Zone details**: View the number of devices, alerts, and sensors associated with the zone.--- **Sensor details**: View the name, IP address, and version of each sensor assigned to the zone.--- **Connectivity status**: If a sensor is disconnected, connect from the sensor. See [Connect sensors to the on-premises management console](how-to-activate-and-set-up-your-on-premises-management-console.md#connect-sensors-to-the-on-premises-management-console). --- **Update progress**: If the connected sensor is being upgraded, upgrade statuses will appear. During upgrade, the on-premises management console does not receive device information from the sensor.-
-## See also
+## Next steps
[Investigate sensor detections in a device inventory](how-to-investigate-sensor-detections-in-a-device-inventory.md)
defender-for-iot How To Investigate Cis Benchmark https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-investigate-cis-benchmark.md
Title: Investigate CIS benchmark recommendation- description: Perform basic and advanced investigations based on OS baseline recommendations.--- Last updated 1/21/2021 - # Investigate OS baseline (based on CIS benchmark) recommendation
defender-for-iot How To Investigate Device https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-investigate-device.md
Title: Investigate a suspicious device description: This how to guide explains how to use Defender for IoT to investigate a suspicious IoT device using Log Analytics.----- - Last updated 09/04/2020- # Investigate a suspicious IoT device
defender-for-iot How To Investigate Sensor Detections In A Device Inventory https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-investigate-sensor-detections-in-a-device-inventory.md
Title: Gain insight into devices discovered by a specific sensor description: The device inventory displays an extensive range of device attributes that a sensor detects. --- Last updated 12/06/2020 - # Investigate sensor detections in a device inventory
defender-for-iot How To Manage Individual Sensors https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-manage-individual-sensors.md
Title: Manage individual sensors description: Learn how to manage individual sensors, including managing activation files, performing backups, and updating a standalone sensor. --- Last updated 02/02/2021 - # Manage individual sensors
defender-for-iot How To Manage Sensors From The On Premises Management Console https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-manage-sensors-from-the-on-premises-management-console.md
Title: Manage sensors from the on-premises management console description: Learn how to manage sensors from the management console, including updating sensor versions, pushing system settings to sensors, and enabling and disabling engines on sensors.--- Last updated 12/07/2020 - # Manage sensors from the management console
defender-for-iot How To Manage Sensors On The Cloud https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-manage-sensors-on-the-cloud.md
Title: Onboard and manage sensors and subscriptions in the Defender for IoT portal description: Learn how to onboard, view, and manage sensors in the Defender for IoT portal.--- Last updated 2/18/2021 - # Onboard and manage sensors and subscriptions in the Defender for IoT portal
defender-for-iot How To Manage The Alert Event https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-manage-the-alert-event.md
Title: Manage alert events description: Manage alert events detected in your network. --- Last updated 12/07/2020-
defender-for-iot How To Manage The On Premises Management Console https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-manage-the-on-premises-management-console.md
Title: Manage the on-premises management console description: Learn about on-premises management console options like backup and restore, defining the host name, and setting up a proxy to sensors.--- Last updated 1/12/2021 - # Manage the on-premises management console
defender-for-iot How To Security Data Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-security-data-access.md
Title: Access security & recommendation data description: Learn about how to access your security alert and recommendation data when using Defender for IoT.----- - Last updated 09/04/2020- # Access your security data
defender-for-iot How To Send Security Messages https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-send-security-messages.md
Title: Send Defender for IoT device security messages description: Learn how to send your security messages using Defender for IoT.----- - Last updated 2/8/2021- - # Send security messages SDK
defender-for-iot How To Set Up High Availability https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-set-up-high-availability.md
Title: Set up high availability description: Increase the resiliency of your Defender for IoT deployment by installing an on-premises management console high availability appliance. High availability deployments ensure your managed sensors continuously report to an active on-premises management console.--- Last updated 12/07/2020 - # About high availability
defender-for-iot How To Set Up Snmp Mib Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-set-up-snmp-mib-monitoring.md
Title: Set up SNMP MIB monitoring description: You can perform sensor health monitoring by using SNMP. The sensor responds to SNMP queries sent from an authorized monitoring server.--- Last updated 12/14/2020 - # Set up SNMP MIB monitoring
defender-for-iot How To Set Up Your Network https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-set-up-your-network.md
Title: Set up your network description: Learn about solution architecture, network preparation, prerequisites, and other information needed to ensure that you successfully set up your network to work with Azure Defender for IoT appliances.--- Last updated 02/18/2021 - # About Azure Defender for IoT network setup
defender-for-iot How To Track Sensor Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-track-sensor-activity.md
Title: Track sensor activity description: The event timeline presents a timeline of activity detected on your network, including alerts and alert management actions, network events, and user operations such as user sign-in and user deletion.--- Last updated 12/10/2020 - # Track sensor activity
defender-for-iot How To Troubleshoot The Sensor And On Premises Management Console https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-troubleshoot-the-sensor-and-on-premises-management-console.md
Title: Troubleshoot the sensor and on-premises management console description: Troubleshoot your sensor and on-premises management console to eliminate any problems you might be having.--- Last updated 03/14/2021 - # Troubleshoot the sensor and on-premises management console
defender-for-iot How To View Alerts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-view-alerts.md
Title: View alerts
+ Title: Filter and manage alerts from the Alerts page
description: View alerts according to various categories, and use search features to help you find alerts of interest. Previously updated : 12/02/2020 Last updated : 3/22/2021 -
-# View alerts
+# Filter and manage alerts from the Alerts page
This article describes how to view alerts triggered by your sensor and manage them with alert tools.
defender-for-iot How To View Information Per Zone https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-view-information-per-zone.md
+
+ Title: Learn about devices on specific zones
+description: Use the on-premises management console to get a comprehensive view of information for a specific zone
+++ Last updated : 03/21/2021+++++
+# View information per zone
++
+## View a device map for a zone
+
+View a device map for a selected zone on a sensor. This view displays all network elements related to the selected zone, including the sensors, the devices connected to them, and other information.
+++
+- In the **Site Management** window, select **View Zone Map** from the bar that contains the zone name.
+
+ :::image type="content" source="media/how-to-work-with-asset-inventory-information/default-region-to-default-business-unit-v2.png" alt-text="Default region to default business unit.":::
+
+The **Device Map** window appears.
+The following tools are available for viewing devices and device information from the map. For details about each of these features, see the *Defender for IoT platform user guide*.
+
+- **Map zoom views**: Simplified View, Connections View, and Detailed View. The displayed map view varies depending on the map's zoom level. You switch between map views by adjusting the zoom levels.
+
+ :::image type="icon" source="media/how-to-work-with-asset-inventory-information/zoom-icon.png" border="false":::
+
+- **Map search and layout tools**: Tools used to display varied network segments, devices, device groups, or layers.
+
+ :::image type="content" source="media/how-to-work-with-asset-inventory-information/search-and-layout-tools.png" alt-text="Screenshot of the Search and Layout Tools view.":::
+
+- **Labels and indicators on devices:** For example, the number of devices grouped in a subnet in an IT network. In this example, it's 8.
+
+ :::image type="content" source="media/how-to-work-with-asset-inventory-information/labels-and-indicators.png" alt-text="Screenshot of labels and indicators.":::
+
+- **View device properties**: For example, the sensor that's monitoring the device and basic device properties. Right-click the device to view the device properties.
+
+ :::image type="content" source="media/how-to-work-with-asset-inventory-information/asset-properties-v2.png" alt-text="Screenshot of the Device Properties view.":::
+
+- **Alert associated with a device:** Right-click the device to view related alerts.
+
+ :::image type="content" source="media/how-to-work-with-asset-inventory-information/show-alerts.png" alt-text="Screenshot of the Show Alerts view.":::
+
+## View alerts associated with a zone
+
+To view alerts associated with a specific zone:
+
+- Select the alert icon from the **Zone** window.
+
+ :::image type="content" source="media/how-to-work-with-asset-inventory-information/business-unit-view-v2.png" alt-text="The default Business Unit view with examples.":::
+
+For more information, see [Overview: Working with alerts](how-to-work-with-alerts-on-premises-management-console.md).
+
+## View the device inventory of a zone
+
+To view the device inventory associated with a specific zone:
+
+- Select **View Device Inventory** from the **Zone** window.
+
+ :::image type="content" source="media/how-to-work-with-asset-inventory-information/default-business-unit.png" alt-text="The device inventory screen will appear.":::
+
+For more information, see [Investigate all enterprise sensor detections in a device inventory](how-to-investigate-all-enterprise-sensor-detections-in-a-device-inventory.md).
+
+## View additional zone information
+
+The following additional zone information is available:
+
+- **Zone details**: View the number of devices, alerts, and sensors associated with the zone.
+
+- **Sensor details**: View the name, IP address, and version of each sensor assigned to the zone.
+
+- **Connectivity status**: If a sensor is disconnected, reconnect it from the sensor side. See [Connect sensors to the on-premises management console](how-to-activate-and-set-up-your-on-premises-management-console.md#connect-sensors-to-the-on-premises-management-console).
+
+- **Update progress**: If the connected sensor is being upgraded, upgrade statuses will appear. During upgrade, the on-premises management console does not receive device information from the sensor.
+
+## Next steps
+
+[Gain insight into global, regional, and local threats](how-to-gain-insight-into-global-regional-and-local-threats.md)
defender-for-iot How To View Information Provided In Alerts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-view-information-provided-in-alerts.md
Title: View information in alerts
+ Title: About alert messages
description: Select an alert from the Alerts window to review details.--- Previously updated : 12/03/2020 Last updated : 3/21/2021 -
-# View information in alerts
+# About alert messages
Select an alert from the **Alerts** window to review alert details. Details include the following information:
defender-for-iot How To Work With The Sensor Console Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-work with-the-sensor-console-dashboard.md
Title: Work with the sensor console dashboard description: The dashboard allows you to quickly view the security status of your network. It provides a high-level overview of threats to your whole system on a timeline along with information about related devices.--- Last updated 11/03/2020 - # The dashboard
defender-for-iot How To Work With Alerts On Premises Management Console https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-work-with-alerts-on-premises-management-console.md
Title: Work with alerts on the on-premises management console description: Use the on-premises management console to get an enterprise view of recent threats in your network and better understand how sensor users are handling them.--- Last updated 12/06/2020-
defender-for-iot How To Work With Alerts On Your Sensor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-work-with-alerts-on-your-sensor.md
Title: Work with alerts on your sensor
+ Title: About sensor alerts
description: Work with alerts to help you enhance the security and operation of your network.--- Last updated 11/30/2020 -
-# Work with alerts on your sensor
+# About sensor alerts
-Work with alerts to help you enhance the security and operation of your network. Alerts provide you with information about:
+Alerts help you enhance the security and operation of your network. They provide information about:
- Deviations from authorized network activity
defender-for-iot How To Work With Device Notifications https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-work-with-device-notifications.md
Title: Work with device notifications description: Notifications provide information about network activity that might require your attention, along with recommendations for handling this activity.--- Last updated 12/12/2020 - # Work with device notifications
Notifications provide information about network activity that might require your attention, along with recommendations for handling this activity.
Responding to notifications improves the information provided in the device map, device inventory, and data-mining queries and reports. It also provides insight into legitimate network changes and potential network misconfigurations.
-To access notifications:
--- Select **System Settings** and then select **Data Enhancement**.-
-## Notifications vs. alerts
+**Notifications vs. alerts**
In addition to receiving notifications on network activity, you might receive *alerts*. Notifications provide information about network changes or unresolved device properties that don't present a threat. Alerts provide information about network deviations and changes that might present a threat to the network.
To display and handle notifications:
**New IPs** and **No Subnets** configured events can't be handled simultaneously. They require manual confirmation.
-## Improve device OS classification: data enhancement
-
-The sensor continuously autodiscovers new OT devices. It also autodiscovers changes to previously discovered devices, including operating system types.
-
-Under certain circumstances, conflicts might be detected in discovered operating systems. This can happen because you have an OS version that refers to either desktop or server systems. If it happens, you'll receive a notification with optional OS classifications.
--
-Investigate the recommendations in order to enrich OS classification. This information appears in the device inventory, data-mining reports, and other displays. It can also improve the accuracy of alerts, threats, and risk analysis.
-
-When you accept a recommendation, the OS type information will be updated in the sensor.
- ## See also [View alerts](how-to-view-alerts.md)
defender-for-iot How To Work With The Sensor Device Map https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-work-with-the-sensor-device-map.md
Title: Work with the sensor device map description: The Device Map provides a graphical representation of network devices detected. Use the map to analyze and manage device information and network slices, and to generate reports.--- Last updated 1/7/2021 - # Investigate sensor detections in the Device Map
defender-for-iot How To Work With Threat Intelligence Packages https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-work-with-threat-intelligence-packages.md
Title: Update threat intelligence data description: The threat intelligence data package is provided with each new Defender for IoT version, or if needed between releases.--- Last updated 12/14/2020-
defender-for-iot Integration Cisco Ise Pxgrid https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/integration-cisco-ise-pxgrid.md
Title: About the Cisco ISE pxGrid integration- description: Bridging the capabilities of Defender for IoT with Cisco ISE pxGrid provides security teams with unprecedented device visibility into the enterprise ecosystem.--- Last updated 12/28/2020 - # About the Cisco ISE pxGrid integration
defender-for-iot Integration Forescout https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/integration-forescout.md
Title: About the Forescout integration- description: The Azure Defender for IoT integration with the Forescout platform provides a new level of centralized visibility, monitoring, and control for the IoT and OT landscape.--- Last updated 1/17/2021 - # About the Forescout integration
defender-for-iot Integration Fortinet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/integration-fortinet.md
Title: About the Fortinet integration- description: Defender for IoT and Fortinet have established a technology partnership to detect and stop attacks on IoT and ICS networks.--- Last updated 1/17/2021 - # Defender for IoT and Fortinet IIoT and ICS threat detection & prevention
defender-for-iot Integration Palo Alto https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/integration-palo-alto.md
Title: Palo Alto integration- description: Defender for IoT has integrated its continuous ICS threat monitoring platform with Palo Alto's next-generation firewalls to enable blocking of critical threats faster and more efficiently.--- Last updated 1/17/2021 - # About the Palo Alto integration
defender-for-iot Integration Servicenow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/integration-servicenow.md
Title: About the ServiceNow integration- description: The Defender for IoT ICS Management application for ServiceNow provides SOC analysts with multidimensional visibility into the specialized OT protocols and IoT devices deployed in industrial environments, along with ICS-aware behavioral analytics to rapidly detect suspicious or anomalous behavior.--- Last updated 1/17/2021 - # The Defender for IoT ICS management application for ServiceNow
defender-for-iot Integration Splunk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/integration-splunk.md
Title: About the Splunk integration- description: To address a lack of visibility into the security and resiliency of OT networks, Defender for IoT developed the Defender for IoT, IIoT, and ICS threat monitoring application for Splunk, a native integration between Defender for IoT and Splunk that enables a unified approach to IT and OT security.--- Last updated 1/4/2021 - # Defender for IoT and ICS threat monitoring application for Splunk
To create a forwarding rule:
| **Select Severity** | The minimal security level incident to forward. For example, if Minor is selected, minor alerts and any alert above this severity level will be forwarded. | | **Protocols** | By default, all the protocols are selected. To select a specific protocol, select **Specific** and select the protocol for which this rule is applied. | | **Engines** | By default, all the security engines are involved. To select a specific security engine for which this rule is applied, select **Specific** and select the engine. |
- | **System Notifications** | Forward sensor online/offline status. This option is only available if you have logged into the Central Manager. | |
+ | **System Notifications** | Forward sensor online/offline status. This option is only available if you have logged into the Central Manager. |
1. To instruct Defender for IoT to send asset information to Splunk, select **Action**, and then select **Send to Splunk Server**.
defender-for-iot Iot Security Azure Rtos https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/iot-security-azure-rtos.md
Title: Defender-IoT-micro-agent for Azure RTOS overview description: Learn more about the Defender-IoT-micro-agent for Azure RTOS support and implementation as part of Azure Defender for IoT.------ - Last updated 01/14/2021- # Overview: Defender for IoT Defender-IoT-micro-agent for Azure RTOS (preview)
defender-for-iot Overview Security Agents https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/overview-security-agents.md
Title: Security agents description: Get started with understanding, configuring, deploying, and using Azure Defender for IoT security service agents on your IoT devices.------ - Last updated 1/24/2021- # Get started with Azure Defender for IoT device micro agents
defender-for-iot Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/overview.md
Title: Service overview description: Learn more about Defender for IoT features and services, and understand how Defender for IoT provides comprehensive IoT security.----- - Last updated 12/09/2020- # Welcome to Azure Defender for IoT
defender-for-iot Quickstart Azure Rtos Security Module https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/quickstart-azure-rtos-security-module.md
Title: "Quickstart: Configure and enable the Defender-IoT-micro-agent for Azure RTOS" description: Learn how to onboard and enable the Defender-IoT-micro-agent for Azure RTOS service in your Azure IoT Hub. ---- - Last updated 01/24/2021-
defender-for-iot Quickstart Building The Defender Micro Agent From Source https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/quickstart-building-the-defender-micro-agent-from-source.md
Title: Build the Defender micro agent from source code (Preview)- description: The micro agent includes an infrastructure that can be used to customize your distribution.--- Last updated 1/18/2021 - # Build the Defender micro agent from source code (Preview)
defender-for-iot Quickstart Configure Your Solution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/quickstart-configure-your-solution.md
Title: "Quickstart: Add Azure resources to your IoT solution" description: In this quickstart, learn how to configure your end-to-end IoT solution using Azure Defender for IoT.----- - Last updated 01/25/2021- # Quickstart: Configure your Azure Defender for IoT solution
defender-for-iot Quickstart Create Custom Alerts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/quickstart-create-custom-alerts.md
Title: Create custom alerts description: Understand, create, and assign custom device alerts for the Azure Defender for IoT security service.----- - Last updated 09/04/2020- # Create custom alerts
defender-for-iot Quickstart Create Micro Agent Module Twin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/quickstart-create-micro-agent-module-twin.md
Title: Create a Defender IoT micro agent module twin (Preview)- description: Learn how to create individual DefenderIotMicroAgent module twins for new devices.--- Last updated 1/20/2021 - # Create a Defender IoT micro agent module twin (Preview)
defender-for-iot Quickstart Create Security Twin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/quickstart-create-security-twin.md
Title: "Quickstart: Create a security module twin" description: In this quickstart, learn how to create a Defender for IoT module twin for use with Azure Defender for IoT.------ - Last updated 1/21/2021- # Quickstart: Create an azureiotsecurity module twin
defender-for-iot Quickstart Investigate Security Alerts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/quickstart-investigate-security-alerts.md
Title: "Quickstart: Investigate security alerts" description: Understand, drill down, and investigate Defender for IoT security alerts on your IoT devices.----- - Last updated 07/30/2020- # Quickstart: Investigate security alerts
defender-for-iot Quickstart Investigate Security Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/quickstart-investigate-security-recommendations.md
Title: Investigate security recommendations" description: Investigate security recommendations with the Defender for IoT security service.------ - Last updated 09/09/2020- # Quickstart: Investigate security recommendations
defender-for-iot Quickstart Onboard Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/quickstart-onboard-iot-hub.md
Title: "Quickstart: Onboard Defender for IoT to an agent-based solution" description: In this quickstart you will learn how to onboard and enable the Defender for IoT security service in your Azure IoT Hub.------ - Last updated 1/20/2021- # Quickstart: Onboard Defender for IoT to an agent-based solution
defender-for-iot Quickstart Standalone Agent Binary Installation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/quickstart-standalone-agent-binary-installation.md
Title: Install Defender for IoT micro agent (Preview)- description: Learn how to install and authenticate the Defender micro agent.--- Last updated 3/9/2021 - # Install Defender for IoT micro agent (Preview)
defender-for-iot Quickstart System Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/quickstart-system-prerequisites.md
Title: System prerequisites description: Get system prerequisites needed to run Azure Defender for IoT.--- Last updated 11/30/2020 - # System prerequisites
defender-for-iot References Defender For Iot Glossary https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/references-defender-for-iot-glossary.md
Title: Defender for IoT glossary description: This glossary provides a brief description of important Defender for IoT platform terms and concepts.--- Last updated 12/09/2020 - # Defender for IoT glossary
defender-for-iot References Horizon Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/references-horizon-api.md
Title: Horizon API description: This guide describes commonly used Horizon methods.--- Last updated 1/5/2021 - # Horizon API
defender-for-iot References Horizon Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/references-horizon-sdk.md
Title: Horizon SDK - description: The Horizon SDK lets Azure Defender for IoT developers design dissector plugins that decode network traffic so it can be processed by automated Defender for IoT network analysis programs.--- Last updated 1/13/2021 - # Horizon proprietary protocol dissector
defender-for-iot References Work With Defender For Iot Apis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/references-work-with-defender-for-iot-apis.md
Title: Work with Defender for IoT APIs description: Use an external REST API to access the data discovered by sensors and management consoles and perform actions with that data.--- Last updated 12/14/2020 - # Defender for IoT sensor and management console APIs
defender-for-iot References Work With Defender For Iot Cli Commands https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/references-work-with-defender-for-iot-cli-commands.md
Title: Work with Defender for IoT CLI commands description: This article describes Defender for IoT CLI commands for sensors and on-premises management consoles. --- Last updated 12/12/2020 - # Work with Defender for IoT CLI commands
The command supports the following input flags:
| --key | The \*.key file. Key length should be a minimum of 2,048 bits. | | --chain | Path to the certificate chain file (optional). | | --pass | Passphrase used to encrypt the certificate (optional). |
-| --passphrase-set | The default is **False**, **unused**. <br />Set to **True** to use the previous passphrase supplied with the previous certificate (optional). | |
+| --passphrase-set | The default is **False**, **unused**. <br />Set to **True** to use the previous passphrase supplied with the previous certificate (optional). |
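As a usage sketch, the flags above combine into a single invocation like the following. The tool name `<certificate-import-tool>` is a placeholder rather than the real command; substitute the certificate tool documented for your sensor version. The paths and passphrase shown are illustrative only.

```bash
# Hypothetical invocation -- replace <certificate-import-tool> with the
# actual command. Supplies a key of at least 2,048 bits, an optional
# chain file, and a passphrase; --passphrase-set keeps its default.
<certificate-import-tool> --key /path/to/sensor.key \
                          --chain /path/to/chain.pem \
                          --pass "MyPassphrase" \
                          --passphrase-set False
```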
When you're using the tool:
defender-for-iot Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/release-notes.md
Title: What's new in Azure Defender for IoT description: This article lets you know what's new in the latest release of Defender for IoT.
-
---- - Last updated 03/14/2021- # What's new in Azure Defender for IoT?
defender-for-iot Resources Agent Frequently Asked Questions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/resources-agent-frequently-asked-questions.md
Title: Azure Defender for IoT agent frequently asked questions description: Find answers to the most frequently asked questions about Azure Defender for IoT agent.------ - Last updated 10/07/2020- # Azure Defender for IoT agent frequently asked questions
defender-for-iot Resources Frequently Asked Questions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/resources-frequently-asked-questions.md
Title: Defender for IoT frequently asked questions description: Find answers to the most frequently asked questions about Azure Defender for IoT features and service.------ - Last updated 03/02/2021- # Azure Defender for IoT frequently asked questions
defender-for-iot Resources Manage Proprietary Protocols https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/resources-manage-proprietary-protocols.md
Title: Manage proprietary protocols (Horizon) description: Defender for IoT Horizon delivers an Open Development Environment (ODE) used to secure IoT and ICS devices running proprietary protocols.--- Last updated 12/12/2020 - # Defender for IoT Horizon
defender-for-iot Security Agent Architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/security-agent-architecture.md
Title: "Quickstart: Security agents overview" description: In this quickstart you will learn how to understand security agent architecture for the agents used in the Azure Defender for IoT service.------ - Last updated 01/24/2021- # Quickstart: Security agent reference architecture
defender-for-iot Security Edge Architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/security-edge-architecture.md
Title: Defender for IoT Defender-IoT-micro-agent for IoT Edge description: Understand the architecture and capabilities of Azure Defender for IoT Defender-IoT-micro-agent for IoT Edge.------ - Last updated 09/09/2020- # Azure Defender for IoT Edge Defender-IoT-micro-agent
defender-for-iot Support Policies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/support-policies.md
Title: Support policies for Azure Defender for IoT- description: This article describes the support and breaking-change policies for Defender for IoT, and the versions of Azure Defender for IoT that are currently available. --- Last updated 2/8/2021 - # Versioning and support for Azure Defender for IoT
defender-for-iot Troubleshoot Agent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/troubleshoot-agent.md
Title: Troubleshoot security agent start-up (Linux) description: Troubleshoot working with Azure Defender for IoT security agents for Linux.------ - Last updated 09/09/2020- # Security agent troubleshoot guide (Linux)
Defender for IoT agent encountered an error! Error in: {Error Code}, reason: {Er
``` | Error Code | Error sub code | Error details | Remediate C | Remediate C# |
-|:--|:|:--|:|:|
-| Local Configuration | Missing configuration | A configuration is missing in the local configuration file. The error message should state which key is missing. | Add the missing key to the /var/LocalConfiguration.json file, see the [cs-localconfig-reference](azure-iot-security-local-configuration-c.md) for details.| Add the missing key to the General.config file, see the [c#-localconfig-reference](azure-iot-security-local-configuration-csharp.md) for details. |
-| Local Configuration | Cant Parse Configuration | A configuration value can't be parsed. The error message should state which key can't be parsed. A configuration value cannot be parsed either because the value is not in the expected type, or the value is out of range. | Fix the value of the key in /var/LocalConfiguration.json file so that it matches the LocalConfiguration schema, see the [c#-localconfig-reference](azure-iot-security-local-configuration-csharp.md) for details. | Fix the value of the key in General.config file so that it matches the schema, see the [cs-localconfig-reference](azure-iot-security-local-configuration-c.md) for details.|
-| Local Configuration | File Format | Failed to parse configuration file. | The configuration file is corrupted, download the agent and re-install. | |
-| Remote Configuration | Timeout | The agent could not fetch the azureiotsecurity module twin within the timeout period. | Make sure authentication configuration is correct and try again. | The agent could not fetch the azureiotsecurity module twin within timeout period. | Make sure authentication configuration is correct and try again. |
-| Authentication | File Not Exist | The file in the given path does not exist. | Make sure the file exists in the given path or go to the **LocalConfiguration.json** file and change the **FilePath** configuration. | Make sure the file exists in the given path or go to the **Authentication.config** file and change the **filePath** configuration.|
+|--|--|--|--|--|
+|--|--|--|--|--|
+| Local Configuration | Missing configuration | A configuration is missing in the local configuration file. The error message should state which key is missing. | Add the missing key to the /var/LocalConfiguration.json file; see the [cs-localconfig-reference](azure-iot-security-local-configuration-c.md) for details. | Add the missing key to the General.config file; see the [c#-localconfig-reference](azure-iot-security-local-configuration-csharp.md) for details. |
+| Local Configuration | Cant Parse Configuration | A configuration value can't be parsed. The error message should state which key can't be parsed. A configuration value can't be parsed either because the value is not of the expected type, or because it is out of range. | Fix the value of the key in the /var/LocalConfiguration.json file so that it matches the LocalConfiguration schema; see the [cs-localconfig-reference](azure-iot-security-local-configuration-c.md) for details. | Fix the value of the key in the General.config file so that it matches the schema; see the [c#-localconfig-reference](azure-iot-security-local-configuration-csharp.md) for details. |
+| Local Configuration | File Format | Failed to parse the configuration file. | The configuration file is corrupted; download the agent and reinstall. | - |
+| Remote Configuration | Timeout | The agent could not fetch the azureiotsecurity module twin within the timeout period. | Make sure the authentication configuration is correct and try again. | Make sure the authentication configuration is correct and try again. |
+| Authentication | File Not Exist | The file in the given path does not exist. | Make sure the file exists in the given path, or go to the **LocalConfiguration.json** file and change the **FilePath** configuration. | Make sure the file exists in the given path, or go to the **Authentication.config** file and change the **filePath** configuration. |
| Authentication | File Permission | The agent does not have sufficient permissions to open the file. | Give the **asciotagent** user read permissions on the file in the given path. | Make sure the file is accessible. |
| Authentication | File Format | The given file is not in the correct format. | Make sure the file is in the correct format. The supported file types are .pfx and .pem. | Make sure the file is a valid certificate file. |
-| Authentication | Unauthorized | The agent was not able to authenticate against IoT Hub with the given credentials. | Validate authentication configuration in LocalConfiguration file, go through the authentication configuration and make sure all the details are correct, validate that the secret in the file matches the authenticated identity. | Validate authentication configuration in Authentication.config, go through the authentication configuration and make sure all the details are correct, then validate that the secret in the file matches the authenticated identity.
-| Authentication | Not Found | The device / module was found. | Validate authentication configuration - make sure the hostname is correct, the device exists in IoT Hub and has an azureiotsecurity twin module. | Validate authentication configuration - make sure the hostname is correct, the device exists in IoT Hub and has an azureiotsecurity twin module. |
-| Authentication | Missing Configuration | A configuration is missing in the *Authentication.config* file. The error message should state which key is missing. | Add the missing key to the *LocalConfiguration.json* file.| Add the missing key to the *Authentication.config* file, see the [c#-localconfig-reference](azure-iot-security-local-configuration-csharp.md) for details. |
-| Authentication | Cant Parse Configuration | A configuration value can't be parsed. The error message should state which key can't be parsed. A configuration value can not be parsed because either the value is not of the expected type, or the value is out of range. |Fix the value of the key in the **LocalConfiguration.json** file. |Fix the value of the key in **Authentication.config** file to match the schema, see the [cs-localconfig-reference](azure-iot-security-local-configuration-c.md) for details.|
-|
+| Authentication | Unauthorized | The agent was not able to authenticate against IoT Hub with the given credentials. | Validate the authentication configuration in the LocalConfiguration file: go through the authentication configuration, make sure all the details are correct, and validate that the secret in the file matches the authenticated identity. | Validate the authentication configuration in Authentication.config: go through the authentication configuration, make sure all the details are correct, and validate that the secret in the file matches the authenticated identity. |
+| Authentication | Not Found | The device or module was not found. | Validate the authentication configuration - make sure the hostname is correct, the device exists in IoT Hub, and the device has an azureiotsecurity twin module. | Validate the authentication configuration - make sure the hostname is correct, the device exists in IoT Hub, and the device has an azureiotsecurity twin module. |
+| Authentication | Missing Configuration | A configuration is missing in the *Authentication.config* file. The error message should state which key is missing. | Add the missing key to the *LocalConfiguration.json* file. | Add the missing key to the *Authentication.config* file; see the [c#-localconfig-reference](azure-iot-security-local-configuration-csharp.md) for details. |
+| Authentication | Cant Parse Configuration | A configuration value can't be parsed. The error message should state which key can't be parsed. A configuration value cannot be parsed because either the value is not of the expected type, or the value is out of range. | Fix the value of the key in the **LocalConfiguration.json** file. | Fix the value of the key in the **Authentication.config** file to match the schema; see the [c#-localconfig-reference](azure-iot-security-local-configuration-csharp.md) for details. |
## Next steps
defender-for-iot Troubleshoot Defender Micro Agent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/troubleshoot-defender-micro-agent.md
Title: Defender IoT micro agent troubleshooting (Preview)- description: Learn how to handle unexpected or unexplained errors.--- Last updated 1/24/2021 - # Defender IoT micro agent troubleshooting (Preview)
digital-twins How To Manage Routes Apis Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-manage-routes-apis-cli.md
Without filtering, endpoints receive a variety of events from Azure Digital Twin
You can restrict the events being sent by adding a **filter** for an endpoint to your event route.
+>[!NOTE]
+> Filters are **case-sensitive** and need to match on the payload case (which may not necessarily match the model case).
+ To add a filter, you can use a PUT request to *https://{Your-azure-digital-twins-hostname}/eventRoutes/{event-route-name}?api-version=2020-10-31* with the following body: :::code language="json" source="~/digital-twins-docs-samples/api-requests/filter.json":::
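+As a rough illustration (not from the original article), the same filter update can be sent from the command line. The instance hostname, route name, endpoint name, and filter expression below are hypothetical placeholders; the filter text assumes the event-type grammar used by Azure Digital Twins routes:
+
+```bash
+# Placeholder values throughout; substitute your own instance hostname, route, endpoint, and filter.
+TOKEN=$(az account get-access-token --resource 0b07f429-9f4b-4714-9392-cc5e8e80c8b0 --query accessToken --output tsv)
+
+curl -X PUT "https://my-instance.api.wus2.digitaltwins.azure.net/eventRoutes/my-route?api-version=2020-10-31" \
+  -H "Authorization: Bearer $TOKEN" \
+  -H "Content-Type: application/json" \
+  -d '{"endpointName": "my-endpoint", "filter": "type = '\''Microsoft.DigitalTwins.Twin.Update'\''"}'
+```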
digital-twins How To Manage Routes Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-manage-routes-portal.md
To add an event filter while you are creating an event route, use the _Add an ev
You can either select from some basic common filter options, or use the advanced filter options to write your own custom filters.
+>[!NOTE]
+> Filters are **case-sensitive** and need to match on the payload case (which may not necessarily match the model case).
+ #### Use the basic filters To use the basic filters, expand the _Event types_ option and select the checkboxes corresponding to the events you'd like to send to your endpoint.
digital-twins How To Use Postman https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-use-postman.md
This article describes how to configure the [Postman REST client](https://www.ge
1. [**Create**](#create-your-own-collection) your own collection from scratch. 1. [**Add requests**](#add-an-individual-request) to your configured collection and send them to the Azure Digital Twins APIs.
+Azure Digital Twins has two sets of APIs that you can work with: **data plane** and **control plane**. For more about the difference between these API sets, see [*How-to: Use the Azure Digital Twins APIs and SDKs*](how-to-use-apis-sdks.md). This article contains information for both API sets.
+ ## Prerequisites To proceed with using Postman to access the Azure Digital Twins APIs, you need to set up an Azure Digital Twins instance and download Postman. The rest of this section walks you through these steps.
Otherwise, you can open an [Azure Cloud Shell](https://shell.azure.com) window i
az login ```
-1. Next, use the [az account get-access-token](/cli/azure/account#az_account_get_access_token) command to get a bearer token with access to the Azure Digital Twins service. In this command, you'll pass in the resource ID for the Azure Digital Twins service endpoint (a static value of `0b07f429-9f4b-4714-9392-cc5e8e80c8b0`), in order to get an access token that can access Azure Digital Twins resources.
+2. Next, use the [az account get-access-token](/cli/azure/account#az_account_get_access_token) command to get a bearer token with access to the Azure Digital Twins service. In this command, you'll pass in the resource ID for the Azure Digital Twins service endpoint, in order to get an access token that can access Azure Digital Twins resources.
+
+ The required context for the token depends on which set of APIs you're using, so use the tabs below to select between [data plane](how-to-use-apis-sdks.md#overview-data-plane-apis) and [control plane](how-to-use-apis-sdks.md#overview-control-plane-apis) APIs.
+ # [Data plane](#tab/data-plane)
+
+ To get a token to use with the **data plane** APIs, use the following static value for the token context: `0b07f429-9f4b-4714-9392-cc5e8e80c8b0`. This is the resource ID for the Azure Digital Twins service endpoint.
+
```azurecli-interactive az account get-access-token --resource 0b07f429-9f4b-4714-9392-cc5e8e80c8b0 ```
+
+ # [Control plane](#tab/control-plane)
+
+ To get a token to use with the **control plane** APIs, use the following value for the token context: `https://management.azure.com/`.
+
+ ```azurecli-interactive
+ az account get-access-token --resource https://management.azure.com/
+ ```
+
+
-1. Copy the value of `accessToken` in the result, and save it to use in the next section. This is your **token value** that you will provide to Postman to authorize your requests.
+3. Copy the value of `accessToken` in the result, and save it to use in the next section. This is your **token value** that you will provide to Postman to authorize your requests.
- :::image type="content" source="media/how-to-use-postman/console-access-token.png" alt-text="Screenshot of a local console window showing the result of the az account get-access-token command. One of the fields in the result is called accessToken and its sample value--beginning with ey--is highlighted.":::
+ :::image type="content" source="media/how-to-use-postman/console-access-token.png" alt-text="Screenshot of console showing the result of the az account get-access-token command. The accessToken field and its sample value is highlighted.":::
>[!TIP] >This token is valid for at least five minutes and a maximum of 60 minutes. If you run out of time allotted for the current token, you can repeat the steps in this section to get a new one.
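If you only need the token value itself, the CLI's built-in JMESPath `--query` option can extract it directly, which avoids copying it out of the JSON result by hand. A minimal sketch using the data plane resource ID shown above:

```azurecli-interactive
az account get-access-token --resource 0b07f429-9f4b-4714-9392-cc5e8e80c8b0 --query accessToken --output tsv
```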
A quick way to get started with Azure Digital Twins in Postman is to import a pr
### Download the collection file
-The first step in importing the API set is to download a collection.
+The first step in importing the API set is to download a collection. Choose the tab below for your choice of data plane or control plane to see the pre-built collection options.
-There are currently two Azure Digital Twins collections available for you to choose from:
+# [Data plane](#tab/data-plane)
+
+There are currently two Azure Digital Twins data plane collections available for you to choose from:
* [**Azure Digital Twins Postman Collection**](https://github.com/microsoft/azure-digital-twins-postman-samples): This collection provides a simple getting started experience for Azure Digital Twins in Postman. The requests include sample data, so you can run them with minimal edits required. Choose this collection if you want a digestible set of key API requests containing sample information. - To find the collection, navigate to the repo link and open the file named *postman_collection.json*.
-* [**Azure Digital Twins data plane Swagger**](https://github.com/Azure/azure-rest-api-specs/tree/master/specification/digitaltwins/data-plane/Microsoft.DigitalTwins): This repo contains the complete Swagger file for the Azure Digital Twins API set, which can be downloaded and imported to Postman as a collection. This will provide a comprehensive set of every API request, but with empty data bodies rather than sample data. Choose this collection if you want to have access to every API call and fill in all the data yourself.
+* **[Azure Digital Twins data plane Swagger](https://github.com/Azure/azure-rest-api-specs/tree/master/specification/digitaltwins/data-plane/Microsoft.DigitalTwins)**: This repo contains the complete Swagger file for the Azure Digital Twins API set, which can be downloaded and imported to Postman as a collection. This will provide a comprehensive set of every API request, but with empty data bodies rather than sample data. Choose this collection if you want to have access to every API call and fill in all the data yourself.
- To find the collection, navigate to the repo link and choose the folder for the latest spec version. From here, open the file called *digitaltwins.json*.
+# [Control plane](#tab/control-plane)
+
+The collection currently available for control plane is the [**Azure Digital Twins control plane Swagger**](https://github.com/Azure/azure-rest-api-specs/tree/master/specification/digitaltwins/resource-manager/Microsoft.DigitalTwins). This repo contains the complete Swagger file for the Azure Digital Twins API set, which can be downloaded and imported to Postman as a collection. This will provide a comprehensive set of every API request.
+
+To find the collection, navigate to the repo link and choose the folder for the latest spec version. From here, open the file called *digitaltwins.json*.
+++ Here's how to download your chosen collection to your machine so that you can import it into Postman. 1. Use the links above to open the collection file in GitHub in your browser. 1. Select the **Raw** button to open the raw text of the file.
Next, import the collection into Postman.
1. In the Import window that follows, select **Upload Files** and navigate to the collection file on your machine that you created earlier. Select Open. 1. Select the **Import** button to confirm.
- :::image type="content" source="media/how-to-use-postman/postman-import-collection-2.png" alt-text="Screenshot of Postman's 'Import' window. The Azure Digital Twins API file is showing as a file to import as a collection. The 'Import' button is highlighted.":::
+ :::image type="content" source="media/how-to-use-postman/postman-import-collection-2.png" alt-text="Screenshot of Postman's 'Import' window, showing the file to import as a collection and the Import button.":::
The newly imported collection can now be seen from your main Postman view, in the Collections tab.
Next, continue on to the next section to add a bearer token to the collection fo
Next, edit the collection you've created to configure some access details. Highlight the collection you've created and select the **View more actions** icon to pull up a menu. Select **Edit**. Follow these steps to add a bearer token to the collection for authorization. This is where you'll use the **token value** you gathered in the [Get bearer token](#get-bearer-token) section in order to use it for all API requests in your collection.
Follow these steps to add a bearer token to the collection for authorization. Th
1. Set the Type to **OAuth 2.0**, paste your access token into the Access Token box, and select **Save**.
- :::image type="content" source="media/how-to-use-postman/postman-paste-token-imported.png" alt-text="Screenshot of the imported collection's edit dialog in Postman, showing the 'Authorization' tab. A Type of 'OAuth 2.0' is selected, and Access Token box where the access token value can be pasted is highlighted." lightbox="media/how-to-use-postman/postman-paste-token-imported.png":::
+ :::image type="content" source="media/how-to-use-postman/postman-paste-token-imported.png" alt-text="Screenshot of Postman edit dialog for the imported collection, on the 'Authorization' tab. Type is 'OAuth 2.0', and Access Token box is highlighted." lightbox="media/how-to-use-postman/postman-paste-token-imported.png":::
+
+### Additional configuration
-### Configure collection variables
+# [Data plane](#tab/data-plane)
-Next, help the collection connect easily to your Azure Digital Twins resources by setting some collection-level **variables**. When many requests in a collection require the same value (like the host name of your Azure Digital Twins instance), you can store the value in a variable that applies to every request in the collection. Both of the downloadable collections for Azure Digital Twins come with pre-created variables that you can set at the collection level.
+If you're making a [data plane](how-to-use-apis-sdks.md#overview-data-plane-apis) collection, help the collection connect easily to your Azure Digital Twins resources by setting some **variables** provided with the collections. When many requests in a collection require the same value (like the host name of your Azure Digital Twins instance), you can store the value in a variable that applies to every request in the collection. Both of the downloadable collections for Azure Digital Twins come with pre-created variables that you can set at the collection level.
1. Still in the edit dialog for your collection, move to the **Variables** tab.
Next, help the collection connect easily to your Azure Digital Twins resources b
:::image type="content" source="media/how-to-use-postman/postman-variables-imported.png" alt-text="Screenshot of the imported collection's edit dialog in Postman, showing the 'Variables' tab. The 'CURRENT VALUE' field is highlighted." lightbox="media/how-to-use-postman/postman-variables-imported.png":::
-1. If your collection has additional variables or if you'd like to add your own, fill and save those values as well.
+1. If your collection has additional variables, fill and save those values as well.
When you're finished with the above steps, you're done configuring the collection. You can close the editing tab for the collection if you want.
+# [Control plane](#tab/control-plane)
+
+If you're making a [control plane](how-to-use-apis-sdks.md#overview-control-plane-apis) collection, you've done everything that you need to configure the collection. You can close the editing tab for the collection if you want, and proceed to the next section.
+
+
+ ### Explore requests Next, explore the requests inside the Azure Digital Twins API collection. You can expand the collection to view the pre-created requests (sorted by category of operation).
You can edit the details of a request in the Postman collection using these step
1. Fill in values for the variables listed in the **Params** tab under **Path Variables**.
- :::image type="content" source="media/how-to-use-postman/postman-request-details-imported.png" alt-text="Screenshot of the main Postman window. The Azure Digital Twins API collection is expanded to the 'Digital Twins Get Relationship By Id' request. Details of the request are shown in the center of the page, where the 'Path Variables' section is highlighted." lightbox="media/how-to-use-postman/postman-request-details-imported.png":::
+ :::image type="content" source="media/how-to-use-postman/postman-request-details-imported.png" alt-text="Screenshot of Postman. The collection is expanded to show a request. The 'Path Variables' section is highlighted in the request details." lightbox="media/how-to-use-postman/postman-request-details-imported.png":::
1. Provide any necessary **Headers** or **Body** details in the respective tabs.
Follow these steps to add a bearer token to the collection for authorization. Th
1. Set the Type to **OAuth 2.0**, paste your access token into the Access Token box, and select **Save**.
- :::image type="content" source="media/how-to-use-postman/postman-paste-token-custom.png" alt-text="Screenshot of the new collection's edit dialog in Postman, showing the 'Authorization' tab. A Type of 'OAuth 2.0' is selected, and Access Token box where the access token value can be pasted is highlighted." lightbox="media/how-to-use-postman/postman-paste-token-custom.png":::
+ :::image type="content" source="media/how-to-use-postman/postman-paste-token-custom.png" alt-text="Screenshot of the Postman edit dialog for the new collection, on the 'Authorization' tab. Type is 'OAuth 2.0', and Access Token box is highlighted." lightbox="media/how-to-use-postman/postman-paste-token-custom.png":::
When you're finished with the above steps, you're done configuring the collection. You can close the edit tab for the new collection if you want.
Now that your collection is set up, you can add your own requests to the Azure D
:::row::: :::column:::
- :::image type="content" source="media/how-to-use-postman/postman-save-request.png" alt-text="Screenshot of the 'Save request' window in Postman, where you can fill out the fields described. The 'Save to Azure Digital Twins collection' button is highlighted.":::
+ :::image type="content" source="media/how-to-use-postman/postman-save-request.png" alt-text="Screenshot of 'Save request' window in Postman showing the fields described. The 'Save to Azure Digital Twins collection' button is highlighted.":::
:::column-end::: :::column::: :::column-end:::
Now that your collection is set up, you can add your own requests to the Azure D
You can now view your request under the collection, and select it to pull up its editable details. ### Set request details
To proceed with an example query, this article will use the Query API (and its [
1. Check that the headers shown for the request in the **Headers** tab match those described in the reference documentation. For this request, several headers have been automatically filled. For the Query API, none of the header options are required, so this step is done. 1. Check that the body shown for the request in the **Body** tab matches the needs described in the reference documentation. For the Query API, a JSON body is required to provide the query text. Here is an example body for this request that queries for all the digital twins in the instance:
- :::image type="content" source="media/how-to-use-postman/postman-request-body.png" alt-text="Screenshot of the new request's details in Postman. The Body tab is shown, and it contains a raw JSON body with a query of 'SELECT * FROM DIGITALTWINS'." lightbox="media/how-to-use-postman/postman-request-body.png":::
+ :::image type="content" source="media/how-to-use-postman/postman-request-body.png" alt-text="Screenshot of the new request's details in Postman, on the Body tab. It contains a raw JSON body with a query of 'SELECT * FROM DIGITALTWINS'." lightbox="media/how-to-use-postman/postman-request-body.png":::
For more information about crafting Azure Digital Twins queries, see [*How-to: Query the twin graph*](how-to-query-graph.md).
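To sanity-check the same query outside Postman, a rough curl equivalent is sketched below. The instance hostname is a hypothetical placeholder, and `$TOKEN` is the bearer token gathered earlier; the Query API wraps the query text in a `query` property of the JSON body:

```bash
# Hostname is a placeholder; $TOKEN holds the bearer token from the earlier section.
curl -X POST "https://my-instance.api.wus2.digitaltwins.azure.net/query?api-version=2020-10-31" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"query": "SELECT * FROM DIGITALTWINS"}'
```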
To proceed with an example query, this article will use the Query API (and its [
After sending the request, the response details will appear in the Postman window below the request. You can view the response's status code and any body text. You can also compare the response to the expected response data given in the reference documentation, to verify the result or learn more about any errors that arise.
dns Private Dns Autoregistration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dns/private-dns-autoregistration.md
# What is the autoregistration feature of Azure DNS private zones
-The Azure DNS private zones auto registration feature takes the pain out of DNS record management for virtual machines deployed in a virtual network. When you [link an virtual network](./private-dns-virtual-network-links.md) with a private DNS zone and enable auto registration for all the virtual machines, the DNS records for the virtual machines deployed in the virtual network are automatically created in the private DNS zone. In addition to forward look records (A records), reverse lookup records (PTR records) are also automatically created for the virtual machines.
+The Azure DNS private zones auto registration feature takes the pain out of DNS record management for virtual machines deployed in a virtual network. When you [link a virtual network](./private-dns-virtual-network-links.md) with a private DNS zone and enable auto registration, the DNS records for the virtual machines deployed in the virtual network are automatically created in the private DNS zone. In addition to forward lookup records (A records), reverse lookup records (PTR records) are also automatically created for the virtual machines.
If you add more virtual machines to the virtual network, DNS records for these virtual machines are also automatically created in the linked private DNS zone. When you delete a virtual machine, the DNS records for the virtual machine are automatically deleted from the private DNS zone.
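As a sketch of how such a link might be created from the CLI (all resource names here are hypothetical, and the private zone and virtual network are assumed to already exist):

```azurecli-interactive
az network private-dns link vnet create \
  --resource-group MyResourceGroup \
  --zone-name private.contoso.com \
  --name MyVnetLink \
  --virtual-network MyVnet \
  --registration-enabled true
```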
You can enable autoregistration by selecting "Enable auto registration" option w
* Read about some common [private zone scenarios](./private-dns-scenarios.md) that can be realized with private zones in Azure DNS.
-* For common questions and answers about private zones in Azure DNS, including specific behavior you can expect for certain kinds of operations, see [Private DNS FAQ](./dns-faq-private.md).
+* For common questions and answers about private zones in Azure DNS, including specific behavior you can expect for certain kinds of operations, see [Private DNS FAQ](./dns-faq-private.md).
event-hubs Event Hubs Python Get Started Send https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-python-get-started-send.md
If you're new to Azure Event Hubs, see [Event Hubs overview](event-hubs-about.md
To complete this quickstart, you need the following prerequisites: - **Microsoft Azure subscription**. To use Azure services, including Azure Event Hubs, you need a subscription. If you don't have an existing Azure account, you can sign up for a [free trial](https://azure.microsoft.com/free/) or use your MSDN subscriber benefits when you [create an account](https://azure.microsoft.com).-- Python 2.7 or 3.5 or later, with PIP installed and updated.
+- Python 2.7 or 3.6 or later, with PIP installed and updated.
- The Python package for Event Hubs. To install the package, run this command in a command prompt that has Python in its path:
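  The command itself isn't reproduced in this change summary; assuming the current v5 SDK, it is presumably the standard pip install of the `azure-eventhub` package:

  ```shell
  pip install azure-eventhub
  ```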
expressroute Expressroute Routing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-routing.md
Make sure that your IP address and AS number are registered to you in one of the
If your prefixes and AS number are not assigned to you in the preceding registries, you need to open a support case for manual validation of your prefixes and ASN. Support requires documentation, such as a Letter of Authorization, that proves you are allowed to use the resources.
-A Private AS Number is allowed with Microsoft Peering, but will also require manual validation. In addition, we remove private AS numbers in the AS PATH for the received prefixes. As a result, you can't append private AS numbers in the AS PATH to [influence routing for Microsoft Peering](expressroute-optimize-routing.md).
+A private AS number is allowed with Microsoft Peering, but it also requires manual validation. In addition, we remove private AS numbers in the AS PATH for the received prefixes. As a result, you can't append private AS numbers in the AS PATH to [influence routing for Microsoft Peering](expressroute-optimize-routing.md). Additionally, AS numbers 64496 - 64511, which are reserved by IANA for documentation purposes, are not allowed in the path.
> [!IMPORTANT] > Do not advertise the same public IP route to the public Internet and over ExpressRoute. To reduce the risk of incorrect configuration causing asymmetric routing, we strongly recommend that the [NAT IP addresses](expressroute-nat.md) advertised to Microsoft over ExpressRoute be from a range that is not advertised to the internet at all. If this is not possible to achieve, it is essential to ensure you advertise a more specific range over ExpressRoute than the one on the Internet connection. Besides the public route for NAT, you can also advertise over ExpressRoute the Public IP addresses used by the servers in your on-premises network that communicate with Microsoft 365 endpoints within Microsoft.
frontdoor Concept Private Link https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/standard-premium/concept-private-link.md
> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. > For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-Azure Front Door Premium SKU can connect to your origin behind Web App and Storage Account using the Private Link service, removing the need for your origin to be publically accessible.
+Azure Front Door Premium SKU can connect to your origin by using the Azure Private Link service. Your applications can be hosted in your private VNet or behind a PaaS service such as a Web App or a Storage Account, removing the need for your origin to be publicly accessible.
:::image type="content" source="../media/concept-private-link/front-door-private-endpoint-architecture.png" alt-text="Front Door Private Endpoints architecture":::
frontdoor How To Enable Private Link Internal Load Balancer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/standard-premium/how-to-enable-private-link-internal-load-balancer.md
+
+ Title: 'Connect Azure Front Door Premium to an internal load balancer origin with Private Link'
+
+description: Learn how to connect your Azure Front Door Premium to an internal load balancer.
++++ Last updated : 03/16/2021+++
+# Connect Azure Front Door Premium to an internal load balancer origin with Private Link
+
+This article will guide you through how to configure Azure Front Door Premium SKU to connect to your internal load balancer origin using the Azure Private Link service.
+
+## Prerequisites
+
+Create a [private link service](../../private-link/create-private-link-service-portal.md).
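+If you prefer the CLI, a minimal sketch of creating the Private Link service in front of an existing internal load balancer might look like the following (all resource names are hypothetical, and the load balancer, virtual network, and subnet are assumed to exist already):
+
+```azurecli-interactive
+az network private-link-service create \
+  --resource-group MyResourceGroup \
+  --name MyPrivateLinkService \
+  --vnet-name MyVnet \
+  --subnet MySubnet \
+  --lb-name MyInternalLoadBalancer \
+  --lb-frontend-ip-configs MyFrontendConfig
+```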
+
+## Sign in to Azure
+
+Sign in to the [Azure portal](https://portal.azure.com).
+
+## Enable Private Link to an internal load balancer
+
+In this section, you'll map the Private Link service to a private endpoint created in Azure Front Door's private network.
+
+1. Within your Azure Front Door Premium profile, under *Settings*, select **Origin groups**.
+
+1. Select the origin group where you want to enable Private Link to the internal load balancer.
+
+1. Select **+ Add an origin** to add an internal load balancer origin.
+
+ :::image type="content" source="../media/how-to-enable-private-link-internal-load-balancer/private-endpoint-internal-load-balancer.png" alt-text="Screenshot of enabling private link to an internal load balancer.":::
+
+1. For **Select an Azure resource**, select **In my directory**. Select or enter the following settings to configure the resource that you want Azure Front Door Premium to connect with privately.
+
+ | Setting | Value |
+ | - | -- |
+ | Region | Select the region that is the same or closest to your origin. |
+ | Resource type | Select **Microsoft.Network/privateLinkServices**. |
+ | Resource | Select the Private Link service tied to your internal load balancer. |
+ | Target sub resource | Leave blank. |
+ | Request message | Customize message or choose the default. |
+
+1. Then select **Add** and then **Update** to save your configuration.
+
+## Approve the private endpoint connection from the Private Link service
+
+1. Go to the Private Link Center and select **Private link services**. Then select the name of your Private Link service.
+
+ :::image type="content" source="../media/how-to-enable-private-link-internal-load-balancer/list.png" alt-text="Screenshot of private link list.":::
+
+1. Select **Private endpoint connections** under *Settings*.
+
+ :::image type="content" source="../media/how-to-enable-private-link-internal-load-balancer/overview.png" alt-text="Screenshot of private link overview page.":::
+
+1. Select the *pending* private endpoint request from Azure Front Door Premium then select **Approve**.
+
+ :::image type="content" source="../media/how-to-enable-private-link-internal-load-balancer/private-endpoint-pending-approval.png" alt-text="Screenshot of pending approval for private link.":::
+
+1. Once approved, it should look like the screenshot below. It will take a few minutes for the connection to fully establish. You can now access your internal load balancer from Azure Front Door Premium.
+
+ :::image type="content" source="../media/how-to-enable-private-link-storage-account/private-endpoint-approved.png" alt-text="Screenshot of approved private link request.":::
+
+## Next steps
+
+Learn about [Private Link service](../../private-link/private-link-service-overview.md).
governance Index https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/index.md
quality and ready to deploy today to assist you in meeting your various complian
| [ISO 27001 Shared Services](./iso27001-shared/index.md) | Provides a set of compliant infrastructure patterns and policy guard-rails that help towards ISO 27001 attestation. | | [ISO 27001 App Service Environment/SQL Database workload](./iso27001-ase-sql-workload/index.md) | Provides more infrastructure to the [ISO 27001 Shared Services](./iso27001-shared/index.md) blueprint sample. | | [Media](./medi) | Provides a set of policies to help comply with Media MPAA. |
+| [New Zealand ISM Restricted](./new-zealand-ism.md) | Assigns policies to address specific New Zealand Information Security Manual controls. |
| [NIST SP 800-53 R4](./nist-sp-800-53-r4.md) | Provides guardrails for compliance with NIST SP 800-53 R4. | | [NIST SP 800-171 R2](./nist-sp-800-171-r2.md) | Provides guardrails for compliance with NIST SP 800-171 R2. | | [PCI-DSS v3.2.1](./pci-dss-3.2.1/index.md) | Provides a set of policies to aide in PCI-DSS v3.2.1 compliance. |
governance New Zealand Ism https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/new-zealand-ism.md
+
+ Title: New Zealand ISM Restricted blueprint sample
+description: Overview of the New Zealand ISM Restricted blueprint sample. This blueprint sample helps customers assess specific controls.
Last updated : 03/22/2021++
+# New Zealand ISM Restricted blueprint sample
+
+The New Zealand ISM Restricted blueprint sample provides governance guard-rails using
+[Azure Policy](../../policy/overview.md) that help you assess specific
+[New Zealand Information Security Manual](https://www.nzism.gcsb.govt.nz/) controls. This blueprint
+helps customers deploy a core set of policies for any Azure-deployed architecture that must
+implement controls for New Zealand ISM Restricted.
+
+## Control mapping
+
+The [Azure Policy control mapping](../../policy/samples/new-zealand-ism.md) provides details
+on policy definitions included within this blueprint and how these policy definitions map to the
+**controls** in the New Zealand Information Security Manual. When assigned to an architecture,
+resources are evaluated by Azure Policy for non-compliance with assigned policy definitions. For
+more information, see [Azure Policy](../../policy/overview.md).
+
+## Deploy
+
+To deploy the Azure Blueprints New Zealand ISM Restricted blueprint sample,
+the following steps must be taken:
+
+> [!div class="checklist"]
+> - Create a new blueprint from the sample
+> - Mark your copy of the sample as **Published**
+> - Assign your copy of the blueprint to an existing subscription
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free)
+before you begin.
+
+### Create blueprint from sample
+
+First, implement the blueprint sample by creating a new blueprint in your environment using the
+sample as a starter.
+
+1. Select **All services** in the left pane. Search for and select **Blueprints**.
+
+1. From the **Getting started** page on the left, select the **Create** button under _Create a
+ blueprint_.
+
+1. Find the **New Zealand ISM Restricted** blueprint sample under _Other
+ Samples_ and select **Use this sample**.
+
+1. Enter the _Basics_ of the blueprint sample:
+
+ - **Blueprint name**: Provide a name for your copy of the New Zealand ISM Restricted blueprint
+ sample.
+ - **Definition location**: Use the ellipsis and select the management group to save your copy of
+ the sample to.
+
+1. Select the _Artifacts_ tab at the top of the page or **Next: Artifacts** at the bottom of the
+ page.
+
+1. Review the list of artifacts that are included in the blueprint sample. Many of the artifacts
+ have parameters that we'll define later. Select **Save Draft** when you've finished reviewing the
+ blueprint sample.
+
+### Publish the sample copy
+
+Your copy of the blueprint sample has now been created in your environment. It's created in
+**Draft** mode and must be **Published** before it can be assigned and deployed. The copy of the
+blueprint sample can be customized to your environment and needs, but that modification may move it
+away from alignment with New Zealand ISM Restricted controls.
+
+1. Select **All services** in the left pane. Search for and select **Blueprints**.
+
+1. Select the **Blueprint definitions** page on the left. Use the filters to find your copy of the
+ blueprint sample and then select it.
+
+1. Select **Publish blueprint** at the top of the page. In the new page on the right, provide a
+ **Version** for your copy of the blueprint sample. This property is useful if you make a
+ modification later. Provide **Change notes** such as "First version published from the New
+ Zealand ISM Restricted blueprint sample." Then select **Publish** at the bottom of the page.
+
+### Assign the sample copy
+
+Once the copy of the blueprint sample has been successfully **Published**, it can be assigned to a
+subscription within the management group it was saved to. This step is where parameters are provided
+to make each deployment of the copy of the blueprint sample unique.
+
+1. Select **All services** in the left pane. Search for and select **Blueprints**.
+
+1. Select the **Blueprint definitions** page on the left. Use the filters to find your copy of the
+ blueprint sample and then select it.
+
+1. Select **Assign blueprint** at the top of the blueprint definition page.
+
+1. Provide the parameter values for the blueprint assignment:
+
+ - Basics
+
+ - **Subscriptions**: Select one or more of the subscriptions that are in the management group
+ you saved your copy of the blueprint sample to. If you select more than one subscription, an
+ assignment will be created for each using the parameters entered.
+ - **Assignment name**: The name is pre-populated for you based on the name of the blueprint.
+ Change as needed or leave as is.
+ - **Location**: Select a region for the managed identity to be created in. Azure Blueprint uses
+ this managed identity to deploy all artifacts in the assigned blueprint. To learn more, see
+ [managed identities for Azure resources](../../../active-directory/managed-identities-azure-resources/overview.md).
+ - **Blueprint definition version**: Pick a **Published** version of your copy of the blueprint
+ sample.
+
+ - Lock Assignment
+
+ Select the blueprint lock setting for your environment. For more information, see
+ [blueprints resource locking](../concepts/resource-locking.md).
+
+ - Managed Identity
+
+ Leave the default _system assigned_ managed identity option.
+
+ - Artifact parameters
+
+ The parameters defined in this section apply to the artifact under which it's defined. These
+ parameters are [dynamic parameters](../concepts/parameters.md#dynamic-parameters) since
+ they're defined during the assignment of the blueprint. For a full list or artifact parameters
+ and their descriptions, see [Artifact parameters table](#artifact-parameters-table).
+
+1. Once all parameters have been entered, select **Assign** at the bottom of the page. The blueprint
+ assignment is created and artifact deployment begins. Deployment takes roughly an hour. To check
+ on the status of deployment, open the blueprint assignment.
+
+> [!WARNING]
+> The Azure Blueprints service and the built-in blueprint samples are **free of cost**. Azure
+> resources are [priced by product](https://azure.microsoft.com/pricing/). Use the
+> [pricing calculator](https://azure.microsoft.com/pricing/calculator/) to estimate the cost of
+> running resources deployed by this blueprint sample.
+
+### Artifact parameters table
+
+The following table provides a list of the blueprint artifact parameters:
+
+|Artifact name|Artifact type|Parameter name|Description|
+|-|-|-|-|
+|New Zealand ISM Restricted|Policy Assignment|List of users that must be included in Windows VM Administrators group|A semicolon-separated list of users that should be included in the Administrators local group; Ex: Administrator; myUser1; myUser2|
+|New Zealand ISM Restricted|Policy Assignment|List of users that must be excluded from Windows VM Administrators group|A semicolon-separated list of users that should be excluded from the Administrators local group; Ex: Administrator; myUser1; myUser2|
+|New Zealand ISM Restricted|Policy Assignment|List of users that Windows VM Administrators group must only include|A semicolon-separated list of all the expected members of the Administrators local group; Ex: Administrator; myUser1; myUser2|
+|New Zealand ISM Restricted|Policy Assignment|Log Analytics workspace ID for VM agent reporting|ID (GUID) of the Log Analytics workspace where VM agents should report|
+|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Web Application Firewall (WAF) should be enabled for Azure Front Door Service|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects)|
+|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Vulnerability Assessment settings for SQL server should contain an email address to receive scan reports|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects)|
+|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Adaptive network hardening recommendations should be applied on internet facing virtual machines|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects)|
+|New Zealand ISM Restricted|Policy Assignment|Effect for policy: There should be more than one owner assigned to your subscription|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects)|
+|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Disk encryption should be applied on virtual machines|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects)|
+|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Remote debugging should be turned off for Function Apps|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects)|
+|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Web Application Firewall (WAF) should use the specified mode for Application Gateway|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects)|
+|New Zealand ISM Restricted|Policy Assignment|WAF mode requirement for Application Gateway|The Prevention or Detection mode must be enabled on the Application Gateway service|
+|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Transparent Data Encryption on SQL databases should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects)|
+|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Vulnerability assessment should be enabled on SQL Managed Instance|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects)|
+|New Zealand ISM Restricted|Policy Assignment|Optional: List of custom VM images that have supported Windows OS to add to scope additional to the images in the gallery for policy: Deploy - Configure Dependency agent to be enabled on Windows virtual machines|For more information on Guest Configuration, visit [https://aka.ms/gcpol](https://aka.ms/gcpol)|
+|New Zealand ISM Restricted|Policy Assignment|Effect for policy: An Azure Active Directory administrator should be provisioned for SQL servers|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects)|
+|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Only secure connections to your Azure Cache for Redis should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects)|
+|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Endpoint protection solution should be installed on virtual machine scale sets|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects)|
+|New Zealand ISM Restricted|Policy Assignment|Include Arc-connected servers when evaluating policy: Audit Windows machines missing any of specified members in the Administrators group|By selecting 'true,' you agree to be charged monthly per Arc connected machine|
+|New Zealand ISM Restricted|Policy Assignment|Optional: List of custom VM images that have supported Windows OS to add to scope additional to the images in the gallery for policy: [Preview]: Log Analytics Agent should be enabled for listed virtual machine images|For more information on Guest Configuration, visit [https://aka.ms/gcpol](https://aka.ms/gcpol)|
+|New Zealand ISM Restricted|Policy Assignment|Optional: List of custom VM images that have supported Linux OS to add to scope additional to the images in the gallery for policy: [Preview]: Log Analytics Agent should be enabled for listed virtual machine images|For more information on Guest Configuration, visit [https://aka.ms/gcpol](https://aka.ms/gcpol)|
+|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Storage accounts should restrict network access|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects)|
+|New Zealand ISM Restricted|Policy Assignment|Optional: List of custom VM images that have supported Windows OS to add to scope additional to the images in the gallery for policy: Deploy - Configure Dependency agent to be enabled on Windows virtual machine scale sets|For more information on Guest Configuration, visit [https://aka.ms/gcpol](https://aka.ms/gcpol)|
+|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Vulnerabilities in security configuration on your virtual machine scale sets should be remediated|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects)|
+|New Zealand ISM Restricted|Policy Assignment|Include Arc-connected servers when evaluating policy: Audit Windows machines that have extra accounts in the Administrators group|By selecting 'true,' you agree to be charged monthly per Arc connected machine|
+|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Secure transfer to storage accounts should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects)|
+|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Web Application Firewall (WAF) should use the specified mode for Azure Front Door Service|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects)|
+|New Zealand ISM Restricted|Policy Assignment|WAF mode requirement for Azure Front Door Service|The Prevention or Detection mode must be enabled on the Azure Front Door service|
+|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Adaptive application controls for defining safe applications should be enabled on your machines|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects)|
+|New Zealand ISM Restricted|Policy Assignment|Effect for policy: A maximum of 3 owners should be designated for your subscription|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects)|
+|New Zealand ISM Restricted|Policy Assignment|Effect for policy: [Preview]: Storage account public access should be disallowed|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects)|
+|New Zealand ISM Restricted|Policy Assignment|Effect for policy: A vulnerability assessment solution should be enabled on your virtual machines|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects)|
+|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Web Application Firewall (WAF) should be enabled for Application Gateway|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects)|
+|New Zealand ISM Restricted|Policy Assignment|Effect for policy: CORS should not allow every resource to access your Web Applications|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects)|
+|New Zealand ISM Restricted|Policy Assignment|Include Arc-connected servers when evaluating policy: Audit Windows web servers that are not using secure communication protocols|By selecting 'true,' you agree to be charged monthly per Arc connected machine|
+|New Zealand ISM Restricted|Policy Assignment|Minimum TLS version for Windows web servers|Windows web servers with lower TLS versions will be assessed as non-compliant|
+|New Zealand ISM Restricted|Policy Assignment|Optional: List of custom VM images that have supported Linux OS to add to scope additional to the images in the gallery for policy: Log Analytics agent should be enabled in virtual machine scale sets for listed virtual machine images|For more information on Guest Configuration, visit [https://aka.ms/gcpol](https://aka.ms/gcpol)|
+|New Zealand ISM Restricted|Policy Assignment|Optional: List of custom VM images that have supported Windows OS to add to scope additional to the images in the gallery for policy: Log Analytics agent should be enabled in virtual machine scale sets for listed virtual machine images|For more information on Guest Configuration, visit [https://aka.ms/gcpol](https://aka.ms/gcpol)|
+|New Zealand ISM Restricted|Policy Assignment|Effect for policy: External accounts with write permissions should be removed from your subscription|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects)|
+|New Zealand ISM Restricted|Policy Assignment|Include Arc-connected servers when evaluating policy: Audit Windows machines that have the specified members in the Administrators group|By selecting 'true,' you agree to be charged monthly per Arc connected machine|
+|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Deprecated accounts should be removed from your subscription|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects)|
+|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Function App should only be accessible over HTTPS|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects)|
+|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Azure subscriptions should have a log profile for Activity Log|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects)|
+|New Zealand ISM Restricted|Policy Assignment|List of resource types that should have diagnostic logs enabled||
+|New Zealand ISM Restricted|Policy Assignment|Effect for policy: System updates should be installed on your machines|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects)|
+|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Latest TLS version should be used in your API App|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects)|
+|New Zealand ISM Restricted|Policy Assignment|Effect for policy: MFA should be enabled on accounts with write permissions on your subscription|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects)|
+|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Microsoft IaaSAntimalware extension should be deployed on Windows servers|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects)|
+|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Web Application should only be accessible over HTTPS|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects)|
+|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Azure DDoS Protection Standard should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects)|
+|New Zealand ISM Restricted|Policy Assignment|Effect for policy: MFA should be enabled on accounts with owner permissions on your subscription|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects)|
+|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Advanced data security should be enabled on your SQL servers|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects)|
+|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Advanced data security should be enabled on SQL Managed Instance|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects)|
+|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Monitor missing Endpoint Protection in Azure Security Center|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects)|
+|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Activity log should be retained for at least one year|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects)|
+|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Management ports of virtual machines should be protected with just-in-time network access control|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects)|
+|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Service Fabric clusters should only use Azure Active Directory for client authentication|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects)|
+|New Zealand ISM Restricted|Policy Assignment|Effect for policy: API App should only be accessible over HTTPS|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects)|
+|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Audit Windows machines on which Windows Defender Exploit Guard is not enabled|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects)|
+|New Zealand ISM Restricted|Policy Assignment|Include Arc-connected servers when evaluating policy: Audit Windows machines on which Windows Defender Exploit Guard is not enabled|By selecting 'true,' you agree to be charged monthly per Arc connected machine|
+|New Zealand ISM Restricted|Policy Assignment|Compliance state to report for Windows machines on which Windows Defender Exploit Guard is not available|Windows Defender Exploit Guard is only available starting with Windows 10/Windows Server with update 1709. Setting this value to 'Non-Compliant' shows machines with older versions on which Windows Defender Exploit Guard is not available (such as Windows Server 2012 R2) as non-compliant. Setting this value to 'Compliant' shows these machines as compliant.|
+|New Zealand ISM Restricted|Policy Assignment|Effect for policy: System updates on virtual machine scale sets should be installed|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects)|
+|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Remote debugging should be turned off for Web Applications|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects)|
+|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Vulnerabilities in security configuration on your machines should be remediated|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects)|
+|New Zealand ISM Restricted|Policy Assignment|Effect for policy: MFA should be enabled on accounts with read permissions on your subscription|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects)|
+|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Vulnerabilities in container security configurations should be remediated|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects)|
+|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Remote debugging should be turned off for API Apps|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects)|
+|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Audit Linux machines that allow remote connections from accounts without passwords|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects)|
+|New Zealand ISM Restricted|Policy Assignment|Include Arc-connected servers when evaluating policy: Audit Linux machines that allow remote connections from accounts without passwords|By selecting 'true,' you agree to be charged monthly per Arc connected machine|
+|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Deprecated accounts with owner permissions should be removed from your subscription|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects)|
+|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Vulnerability assessment should be enabled on your SQL servers|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects)|
+|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Latest TLS version should be used in your Web App|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects)|
+|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Windows machines should meet requirements for 'Security Settings - Account Policies'|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects)|
+|New Zealand ISM Restricted|Policy Assignment|Enforce password history for Windows VM local accounts|Specifies limits on password reuse - how many times a new password must be created for a user account before the password can be repeated|
+|New Zealand ISM Restricted|Policy Assignment|Include Arc-connected servers when evaluating policy: Windows machines should meet requirements for 'Security Settings - Account Policies'|By selecting 'true,' you agree to be charged monthly per Arc connected machine|
+|New Zealand ISM Restricted|Policy Assignment|Maximum password age for Windows VM local accounts|Specifies the maximum number of days that may elapse before a user account password must be changed; the format of the value is two integers separated by a comma, denoting an inclusive range|
+|New Zealand ISM Restricted|Policy Assignment|Minimum password age for Windows VM local accounts|Specifies the minimum number of days that must elapse before a user account password can be changed|
+|New Zealand ISM Restricted|Policy Assignment|Minimum password length for Windows VM local accounts|Specifies the minimum number of characters that a user account password may contain|
+|New Zealand ISM Restricted|Policy Assignment|Password must meet complexity requirements for Windows VM local accounts|Specifies whether a user account password must be complex; if required, a complex password must not contain part of the user's account name or full name; be at least 6 characters long; contain a mix of uppercase, lowercase, number, and non-alphabetic characters|
+|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Internet-facing virtual machines should be protected with network security groups|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects)|
+|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Audit Linux machines that have accounts without passwords|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects)|
+|New Zealand ISM Restricted|Policy Assignment|Include Arc-connected servers when evaluating policy: Audit Linux machines that have accounts without passwords|By selecting 'true,' you agree to be charged monthly per Arc connected machine|
+|New Zealand ISM Restricted|Policy Assignment|Effect for policy: External accounts with owner permissions should be removed from your subscription|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects)|
+|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Latest TLS version should be used in your Function App|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects)|
+|New Zealand ISM Restricted|Policy Assignment|Effect for policy: [Preview]: All Internet traffic should be routed via your deployed Azure Firewall|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects)|
+|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Vulnerabilities on your SQL databases should be remediated|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects)|
+
+## Next steps
+
+Additional articles about blueprints and how to use them:
+
+- Learn about the [blueprint lifecycle](../concepts/lifecycle.md).
+- Understand how to use [static and dynamic parameters](../concepts/parameters.md).
+- Learn to customize the [blueprint sequencing order](../concepts/sequencing-order.md).
+- Find out how to make use of [blueprint resource locking](../concepts/resource-locking.md).
+- Learn how to [update existing assignments](../how-to/update-existing-assignments.md).
governance Guest Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/concepts/guest-configuration.md
built-in content, Guest Configuration handles loading these tools automatically.
### Validation frequency
-The Guest Configuration client checks for new content every 5 minutes. Once a guest assignment is
+The Guest Configuration client checks for new or changed guest assignments every 5 minutes. Once a guest assignment is
received, the settings for that configuration are rechecked on a 15-minute interval. Results are sent to the Guest Configuration resource provider when the audit completes. When a policy [evaluation trigger](../how-to/get-compliance-data.md#evaluation-triggers) occurs, the state of the machine is written to the Guest Configuration resource provider. This update causes Azure Policy to evaluate the Azure Resource Manager properties. An on-demand Azure Policy evaluation retrieves the latest value from the Guest Configuration resource provider. However, it doesn't trigger a new audit
-of the configuration within the machine.
+of the configuration within the machine. The status is simultaneously written to Azure Resource Graph.
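+If you need refreshed compliance results sooner than the next evaluation cycle, you can request an on-demand evaluation yourself. A minimal sketch using the Azure CLI (the resource group name is a placeholder):
+```
+az policy state trigger-scan --resource-group myMachinesRg
+```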
## Supported client types
hdinsight Hdinsight Apps Use Edge Node https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hdinsight-apps-use-edge-node.md
After you've created an edge node, you can connect to the edge node using SSH, a
> [!WARNING] > Custom components that are installed on the edge node receive commercially reasonable support from Microsoft. This might result in resolving problems you encounter. Or, you may be referred to community resources for further assistance. The following are some of the most active sites for getting help from the community: >
-> * [Microsoft Q&A question page for HDInsight](/answers/topics/azure-hdinsight.html
+> * [Microsoft Q&A question page for HDInsight](/answers/topics/azure-hdinsight.html)
> * [https://stackoverflow.com](https://stackoverflow.com). > > If you are using an Apache technology, you may be able to find assistance through the Apache project sites on [https://apache.org](https://apache.org), such as the [Apache Hadoop](https://hadoop.apache.org/) site.
In this article, you've learned how to add an edge node and how to access the ed
* [Publish HDInsight applications](hdinsight-apps-publish-applications.md): Learn how to publish your custom HDInsight applications to Azure Marketplace. * [MSDN: Install an HDInsight application](/rest/api/hdinsight/hdinsight-application): Learn how to define HDInsight applications. * [Customize Linux-based HDInsight clusters using Script Action](hdinsight-hadoop-customize-cluster-linux.md): learn how to use Script Action to install additional applications.
-* [Create Linux-based Apache Hadoop clusters in HDInsight using Resource Manager templates](hdinsight-hadoop-create-linux-clusters-arm-templates.md): learn how to call Resource Manager templates to create HDInsight clusters.
+* [Create Linux-based Apache Hadoop clusters in HDInsight using Resource Manager templates](hdinsight-hadoop-create-linux-clusters-arm-templates.md): learn how to call Resource Manager templates to create HDInsight clusters.
hpc-cache Add Namespace Paths https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hpc-cache/add-namespace-paths.md
You can sort the table columns to better understand your cache's aggregated name
You must create at least one namespace path before clients can access the storage target. (Read [Mount the Azure HPC Cache](hpc-cache-mount.md) for more about client access.)
+If you recently added a storage target or customized an access policy, it might take a minute or two before you can create a namespace path.
+
### Blob namespace paths
An Azure Blob storage target can have only one namespace path.
industrial-iot Overview What Is Industrial Iot Platform https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/industrial-iot/overview-what-is-industrial-iot-platform.md
+
+ Title: Azure Industrial IoT Platform
+description: This article provides an overview of the Industrial IoT Platform and its components.
++++ Last updated : 3/22/2021++
+# What is the Industrial IoT (IIoT) Platform?
+
+The Azure Industrial IoT Platform is a Microsoft suite of modules and services that are deployed on Azure and fully embrace openness. Specifically, the platform builds on Azure's managed Platform as a Service (PaaS) offerings, open-source software licensed under the MIT license, open international standards for communication (OPC UA, AMQP, MQTT, HTTP) and interfaces (Open API), and open industrial data models (OPC UA) on the edge and in the cloud.
+
+## Enabling shopfloor connectivity
+
+The Azure Industrial IoT Platform covers industrial connectivity of shopfloor assets (including discovery of OPC UA-enabled assets), normalizes their data into OPC UA format, and transmits asset telemetry data to Azure in OPC UA PubSub format, where it stores the telemetry data in a cloud database. In addition, the platform enables secure access to the shopfloor assets via OPC UA from the cloud. Device management capabilities (including security configuration) are also built in. The OPC UA functionality has been built using Docker container technology for easy deployment and management. For non-OPC UA-enabled assets, we have partnered with the leading industrial connectivity providers and helped them port their OPC UA adapter software to Azure IoT Edge. These adapters are available in the Azure Marketplace.
+
+## Industrial IoT components: IoT Edge modules and cloud microservices
+
+The edge services are implemented as Azure IoT Edge modules and run on-premises. The cloud microservices are implemented as ASP.NET microservices with a REST interface and run on managed Azure Kubernetes Service or stand-alone on Azure App Service. For both edge and cloud services, we provide pre-built Docker containers in the Microsoft Container Registry (MCR), removing this step for the customer. The edge and cloud services rely on each other and must be used together. We also provide easy-to-use deployment scripts that let you deploy the entire platform with a single command, as shown below.
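+
+A sketch of that single-command deployment, using the commands from the deployment tutorial:
+```
+git clone https://github.com/Azure/Industrial-IoT
+cd Industrial-IoT
+./deploy.sh
+```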
+
+## Next steps
+
+Now that you have learned what the Azure Industrial IoT Platform is, you can learn about the OPC Publisher:
+
+> [!div class="nextstepaction"]
+> [What is the OPC Publisher?](overview-what-is-opc-publisher.md)
industrial-iot Overview What Is Industrial Iot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/industrial-iot/overview-what-is-industrial-iot.md
+
+ Title: Azure Industrial IoT Overview
+description: This article provides an overview of Industrial IoT. It explains the shop floor connectivity and security components in IIoT.
++++ Last updated : 3/22/2021++
+# What is Industrial IoT (IIoT)?
+
+IIoT (Industrial Internet of Things) enhances industrial efficiencies through the application of IoT in the manufacturing industry.
+
+![Industrial IoT](media/overview-what-is-Industrial-IoT/icon-255-px.png)
+
+## Improve industrial efficiencies
+Enhance your operational productivity and profitability with Azure Industrial IoT. Reduce the time-consuming process of accessing the assets on-site. Connect and monitor your industrial equipment and devices in the cloud - including your machines already operating on the factory floor. Analyze your IoT data for insights that help you increase the performance of the entire site.
+
+## Industrial IoT Components
+
+**IoT Edge devices**
+An IoT Edge device is composed of the Edge runtime and Edge modules.
+- *Edge modules* are Docker containers, the smallest units of computation, such as OPC Publisher and OPC Twin.
+- An *Edge device* is used to deploy such modules, which act as mediators between the OPC UA servers and IoT Hub in the cloud. More information about IoT Edge is [here](https://azure.microsoft.com/services/iot-edge/).
+
+**IoT Hub**
+The IoT Hub acts as a central message hub for bi-directional communication between an IoT application and the devices it manages. It's an open and flexible cloud platform as a service that supports open-source SDKs and multiple protocols. Read more about IoT Hub [here](https://azure.microsoft.com/services/iot-hub/).
+
+**Industrial Edge Modules**
+- *OPC Publisher*: The OPC Publisher runs inside IoT Edge. It connects to OPC UA servers and publishes JSON-encoded telemetry data from these servers in OPC UA "Pub/Sub" format to Azure IoT Hub. All transport protocols supported by the Azure IoT Hub client SDK can be used, that is, HTTPS, AMQP, and MQTT.
+- *OPC Twin*: The OPC Twin consists of microservices that use Azure IoT Edge and IoT Hub to connect the cloud and the factory network. OPC Twin provides discovery, registration, and remote control of industrial devices through REST APIs. OPC Twin doesn't require an OPC Unified Architecture (OPC UA) SDK. It's programming language agnostic, and can be included in a serverless workflow.
+- *Discovery*: The discovery module, represented by the discoverer identity, provides discovery services on the edge, which include OPC UA server discovery. If discovery is configured and enabled, the module will send the results of a scan probe via the IoT Edge and IoT Hub telemetry path to the Onboarding service. The service processes the results and updates all related Identities in the Registry.
++
+**Discover, register, and manage your Industrial Assets with Azure**
+Azure Industrial IoT allows plant operators to discover OPC UA-enabled servers in a factory network and register them in Azure IoT Hub. Operations personnel can subscribe and react to events on the factory floor from anywhere in the world. The microservices' REST APIs mirror the OPC UA services edge-side. They are secured using OAuth authentication and authorization backed by Azure Active Directory (AAD). This enables your cloud applications to browse server address spaces, read/write variables, and execute methods using HTTPS and simple OPC UA JSON payloads.
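+
+To illustrate the call pattern only (the host, route, and request body below are hypothetical placeholders, not the documented API surface), a browse request is an ordinary HTTPS call with a small OPC UA JSON payload:
+```
+# Hypothetical example: browse an OPC UA server's address space through the Twin REST API
+curl -X POST "https://<your-platform-host>/twin/v2/browse/<endpointId>" \
+  -H "Authorization: Bearer <AAD access token>" \
+  -H "Content-Type: application/json" \
+  -d '{ "nodeId": "i=85" }'
+```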
+
+## Next steps
+Now that you have learned what Industrial IoT is, you can read about the Industrial IoT Platform and the OPC Publisher:
+
+> [!div class="nextstepaction"]
+> [What is the Industrial IoT Platform?](overview-what-is-industrial-iot-platform.md)
+
+> [!div class="nextstepaction"]
+> [What is the OPC Publisher?](overview-what-is-opc-publisher.md)
industrial-iot Overview What Is Opc Publisher https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/industrial-iot/overview-what-is-opc-publisher.md
+
+ Title: Microsoft OPC Publisher
+description: This article provides an overview of the OPC Publisher Edge module.
++++ Last updated : 3/22/2021++
+# What is the OPC Publisher?
+
+OPC Publisher is a fully supported Microsoft product, developed in the open, that bridges the gap between industrial assets and the Microsoft Azure cloud. It does so by connecting to OPC UA-enabled assets or industrial connectivity software and publishing telemetry data to Azure IoT Hub in various formats, including the IEC 62541 OPC UA PubSub standard format (from version 2.6 onwards).
+
+It runs on Azure IoT Edge as a Module or on plain Docker as a container. Since it leverages the .NET cross-platform runtime, it also runs natively on Linux and Windows 10.
+
+OPC Publisher is a reference implementation that demonstrates how to:
+
+- Connect to existing OPC UA servers.
+- Publish JSON encoded telemetry data from OPC UA servers in OPC UA Pub/Sub format, using a JSON payload, to Azure IoT Hub.
+
+You can use any of the transport protocols that the Azure IoT Hub client SDK supports: HTTPS, AMQP, and MQTT.
+
+The reference implementation includes:
+
+- An OPC UA *client* for connecting to existing OPC UA servers you have on your network.
+- An OPC UA *server*, listening on port 62222, that you can use to manage what's published; IoT Hub direct methods are also offered for the same purpose.
+
+You can download the [OPC Publisher reference implementation](https://github.com/Azure/iot-edge-opc-publisher) from GitHub.
+
+The application is implemented using .NET Core technology and can run on any platform supported by .NET Core.
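+
+The nodes to publish are typically configured through a publishednodes.json file (the default configuration file name used by OPC Publisher). A minimal sketch, assuming the standard publishednodes.json schema; the server address and node ID are placeholders:
+```
+[
+  {
+    "EndpointUrl": "opc.tcp://<your_opcua_server>:<port>/<path>",
+    "UseSecurity": false,
+    "OpcNodes": [
+      {
+        "Id": "ns=2;s=MyTelemetryNode",
+        "OpcSamplingInterval": 1000,
+        "OpcPublishingInterval": 5000
+      }
+    ]
+  }
+]
+```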
+
+## What does the OPC Publisher do?
+
+OPC Publisher implements retry logic to establish connections to endpoints that don't respond to a certain number of keep alive requests, for example when an OPC UA server stops responding because of a power outage.
+
+For each distinct publishing interval to an OPC UA server, the application creates a separate subscription over which all nodes with this publishing interval are updated.
+
+OPC Publisher supports batching of the data sent to IoT Hub to reduce network load. This batching sends a packet to IoT Hub only if the configured packet size is reached.
+
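+A hedged sketch combining these batching options (option names as documented in the command-line reference; values illustrative):
+```
+./opcpublisher --bs=50 --si=10 --ms=262144
+```
+With these settings, up to 50 data-change messages are cached, a send is triggered at least every 10 seconds, and a single IoT Hub message is capped at 262,144 bytes.
+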
+This application uses the OPC Foundation OPC UA reference stack as NuGet packages. See [https://opcfoundation.org/license/redistributables/1.3/](https://opcfoundation.org/license/redistributables/1.3/) for the licensing terms.
+
+## Next steps
+Now that you have learned what the OPC Publisher is, you can get started by deploying it:
+
+> [!div class="nextstepaction"]
+> [Deploy OPC Publisher in standalone mode](tutorial-publisher-deploy-opc-publisher-standalone.md)
industrial-iot Reference Command Line Arguments https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/industrial-iot/reference-command-line-arguments.md
+
+ Title: Microsoft OPC Publisher Command-line Arguments
+description: This article provides an overview of the OPC Publisher Command-line Arguments
++++ Last updated : 3/22/2021++
+# Command-line Arguments
+
+The following command-line arguments can be used to set global settings for OPC Publisher.
+
+## OPC Publisher Command-line Arguments for Version 2.5 and below
+
+* Usage: opcpublisher.exe \<applicationname> [\<iothubconnectionstring>] [\<options>]
+
+* applicationname: the OPC UA application name to use, required
+ The application name is also used to register the publisher under this name in the
+ IoT Hub device registry.
+
+* iothubconnectionstring: the IoT Hub owner connectionstring, optional. Typically you specify the IoTHub owner connectionstring only on the first start of the application. The connection string is encrypted and stored in the platform's certificate store.
+On subsequent calls, it's read from there and reused. If you specify the connectionstring on each start, the device that is created for the application in the IoT Hub device registry is removed and recreated each time.
+
+There are a couple of environment variables that can be used to control the application:
+```
+ _HUB_CS: sets the IoTHub owner connectionstring
+ _GW_LOGP: sets the filename of the log file to use
+ _TPC_SP: sets the path to store certificates of trusted stations
+ _GW_PNFP: sets the filename of the publishing configuration file
+```
+
+> [!NOTE]
+> Command-line arguments take precedence over environment variable settings.
+
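+As a hedged illustration, the connection string can also be supplied through `_HUB_CS` when running the version 2.5 container (the image tag and connection string are placeholders):
+```
+docker run -e _HUB_CS="HostName=<your-hub>.azure-devices.net;SharedAccessKeyName=iothubowner;SharedAccessKey=<key>" \
+  mcr.microsoft.com/iotedge/opc-publisher:2.5 <applicationname>
+```
+
+The full set of command-line options is: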
+```
+ --pf, --publishfile=VALUE
+ the filename to configure the nodes to publish.
+ Default: '/appdata/publishednodes.json'
+ --tc, --telemetryconfigfile=VALUE
+ the filename to configure the ingested telemetry
+ Default: ''
+ -s, --site=VALUE
+ the site OPC Publisher is working in. If specified, this domain is appended (delimited by a ':') to the 'ApplicationURI' property when telemetry is sent to IoTHub.
+ The value must follow the syntactical rules of a
+ DNS hostname.
+ Default: not set
+ --ic, --iotcentral
+ OPC Publisher sends OPC UA data in IoTCentral
+ compatible format (DisplayName of a node is used
+ as key; this key is the Field name in IoTCentral).
+ You need to ensure that all DisplayNames are
+ unique. (Auto enables fetch display name)
+ Default: False
+ --sw, --sessionconnectwait=VALUE
+ specify the wait time in seconds publisher is
+ trying to connect to disconnected endpoints and
+ starts monitoring unmonitored items
+ Min: 10
+ Default: 10
+
+ --mq, --monitoreditemqueuecapacity=VALUE
+ specify how many notifications of monitored items
+ can be stored in the internal queue, if the data
+ can not be sent quick enough to IoTHub
+ Min: 1024
+ Default: 8192
+ --di, --diagnosticsinterval=VALUE
+ shows publisher diagnostic info at the specified
+ interval in seconds (need log level info).
+ -1 disables remote diagnostic log and diagnostic
+ output
+ 0 disables diagnostic output
+ Default: 0
+ --ns, --noshutdown=VALUE
+ same as runforever.
+ Default: False
+ --rf, --runforever
+ OPC Publisher can not be stopped by pressing a key on
+ the console, but runs forever.
+ Default: False
+ --lf, --logfile=VALUE
+ the filename of the logfile to use.
+ Default: './<hostname>-publisher.log'
+ --lt, --logflushtimespan=VALUE
+ the timespan in seconds when the logfile should be
+ flushed.
+ Default: 00:00:30 sec
+ --ll, --loglevel=VALUE
+ the loglevel to use (allowed: fatal, error, warn,
+ info, debug, verbose).
+ Default: info
+ --ih, --iothubprotocol=VALUE
+ the protocol to use for communication with IoTHub (allowed values: Amqp, Http1, Amqp_WebSocket_Only,
+ Amqp_Tcp_Only, Mqtt, Mqtt_WebSocket_Only, Mqtt_
+ Tcp_Only) or IoT EdgeHub (allowed values: Mqtt_
+ Tcp_Only, Amqp_Tcp_Only).
+ Default for IoTHub: Mqtt_WebSocket_Only
+ Default for IoT EdgeHub: Amqp_Tcp_Only
+ --ms, --iothubmessagesize=VALUE
+ the max size of a message which can be sent to
+ IoTHub. When telemetry of this size is available
+ it is sent.
+ 0 enforces immediate send when telemetry is
+ available
+ Min: 0
+ Max: 262144
+ Default: 262144
+ --si, --iothubsendinterval=VALUE
+ the interval in seconds when telemetry should be
+ sent to IoTHub. If 0, then only the
+ iothubmessagesize parameter controls when
+ telemetry is sent.
+ Default: '10'
+ --dc, --deviceconnectionstring=VALUE
+ if publisher is not able to register itself with
+ IoTHub, you can create a device with name <
+ applicationname> manually and pass in the
+ connectionstring of this device.
+ Default: none
+ -c, --connectionstring=VALUE
+ the IoTHub owner connectionstring.
+ Default: none
+ --hb, --heartbeatinterval=VALUE
+ the publisher is using this as default value in
+ seconds for the heartbeat interval setting of
+ nodes without
+ a heartbeat interval setting.
+ Default: 0
+ --sf, --skipfirstevent=VALUE
+ the publisher is using this as default value for
+ the skip first event setting of nodes without
+ a skip first event setting.
+ Default: False
+ --pn, --portnum=VALUE
+ the server port of the publisher OPC server
+ endpoint.
+ Default: 62222
+ --pa, --path=VALUE
+ the endpoint URL path part of the publisher OPC
+ server endpoint.
+ Default: '/UA/Publisher'
+ --lr, --ldsreginterval=VALUE
+ the LDS(-ME) registration interval in ms. If 0,
+ then the registration is disabled.
+ Default: 0
+ --ol, --opcmaxstringlen=VALUE
+ the max length of a string opc can transmit/
+ receive.
+ Default: 131072
+ --ot, --operationtimeout=VALUE
+ the operation timeout of the publisher OPC UA
+ client in ms.
+ Default: 120000
+ --oi, --opcsamplinginterval=VALUE
+ the publisher is using this as default value in
+ milliseconds to request the servers to sample
+ the nodes with this interval
+ this value might be revised by the OPC UA
+ servers to a supported sampling interval.
+ please check the OPC UA specification for
+ details how this is handled by the OPC UA stack.
+ a negative value sets the sampling interval
+ to the publishing interval of the subscription
+ this node is on.
+ 0 configures the OPC UA server to sample in
+ the highest possible resolution and should be
+ taken with care.
+ Default: 1000
+ --op, --opcpublishinginterval=VALUE
+ the publisher is using this as default value in
+ milliseconds for the publishing interval setting
+ of the subscriptions established to the OPC UA
+ servers.
+ please check the OPC UA specification for
+ details how this is handled by the OPC UA stack.
+ a value less than or equal zero lets the
+ server revise the publishing interval.
+ Default: 0
+ --ct, --createsessiontimeout=VALUE
+ specify the timeout in seconds used when creating
+ a session to an endpoint. On unsuccessful
+ connection attempts a backoff up to 5 times the
+ specified timeout value is used.
+ Min: 1
+ Default: 10
+ --ki, --keepaliveinterval=VALUE
+ specify the interval in seconds the publisher is
+ sending keep alive messages to the OPC servers
+ on the endpoints it is connected to.
+ Min: 2
+ Default: 2
+ --kt, --keepalivethreshold=VALUE
+ specify the number of keep alive packets a server
+ can miss, before the session is disconnected
+ Min: 1
+ Default: 5
+ --aa, --autoaccept
+ the OPC Publisher trusts all servers it is
+ establishing a connection to.
+ Default: False
+ --tm, --trustmyself=VALUE
+ same as trustowncert.
+ Default: False
+ --to, --trustowncert
+ the OPC Publisher certificate is put into the trusted
+ certificate store automatically.
+ Default: False
+ --fd, --fetchdisplayname=VALUE
+ same as fetchname.
+ Default: False
+ --fn, --fetchname
+ enable to read the display name of a published
+ node from the server. this increases the
+ runtime.
+ Default: False
+ --ss, --suppressedopcstatuscodes=VALUE
+ specifies the OPC UA status codes for which no
+ events should be generated.
+ Default: BadNoCommunication,
+ BadWaitingForInitialData
+ --at, --appcertstoretype=VALUE
+ the own application cert store type.
+ (allowed values: Directory, X509Store)
+ Default: 'Directory'
+ --ap, --appcertstorepath=VALUE
+ the path where the own application cert should be
+ stored
+ Default (depends on store type):
+ X509Store: 'CurrentUser\UA_MachineDefault'
+ Directory: 'pki/own'
+ --tp, --trustedcertstorepath=VALUE
+ the path of the trusted cert store
+ Default: 'pki/trusted'
+ --rp, --rejectedcertstorepath=VALUE
+ the path of the rejected cert store
+ Default 'pki/rejected'
+ --ip, --issuercertstorepath=VALUE
+ the path of the trusted issuer cert store
+ Default 'pki/issuer'
+ --csr
+ show data to create a certificate signing request
+ Default 'False'
+ --ab, --applicationcertbase64=VALUE
+ update/set this applications certificate with the
+ certificate passed in as base64 string
+ --af, --applicationcertfile=VALUE
+ update/set this applications certificate with the
+ certificate file specified
+ --pb, --privatekeybase64=VALUE
+ initial provisioning of the application
+ certificate (with a PEM or PFX format) requires a
+ private key passed in as base64 string
+ --pk, --privatekeyfile=VALUE
+ initial provisioning of the application
+ certificate (with a PEM or PFX format) requires a
+ private key passed in as file
+ --cp, --certpassword=VALUE
+ the optional password for the PEM or PFX or the
+ installed application certificate
+ --tb, --addtrustedcertbase64=VALUE
+ adds the certificate to the application's trusted
+ cert store passed in as base64 string (multiple
+ comma-separated strings supported)
+ --tf, --addtrustedcertfile=VALUE
+ adds the certificate file(s) to the application's
+ trusted cert store
+ (multiple comma-separated filenames supported)
+ --ib, --addissuercertbase64=VALUE
+ adds the specified issuer certificate to the
+ application's trusted issuer cert store passed in
+ as base64 string (multiple comma-separated strings supported)
+ --if, --addissuercertfile=VALUE
+ adds the specified issuer certificate file(s) to
+ the application's trusted issuer cert store
+ (multiple comma-separated filenames supported)
+ --rb, --updatecrlbase64=VALUE
+ update the CRL passed in as base64 string to the
+ corresponding cert store (trusted or trusted
+ issuer)
+ --uc, --updatecrlfile=VALUE
+ update the CRL passed in as file to the
+ corresponding cert store (trusted or trusted
+ issuer)
+ --rc, --removecert=VALUE
+ remove cert(s) with the given thumbprint(s) (
+ multiple comma-separated thumbprints supported)
+ --dt, --devicecertstoretype=VALUE
+ the iothub device cert store type.
+ (allowed values: Directory, X509Store)
+ Default: X509Store
+ --dp, --devicecertstorepath=VALUE
+ the path of the iot device cert store
+ Default (depends on store type):
+ X509Store: 'My'
+ Directory: 'CertificateStores/IoTHub'
+ -i, --install
+ register OPC Publisher with IoTHub and then exits.
+ Default: False
+ -h, --help
+ show this message and exit
+ --st, --opcstacktracemask=VALUE
+ ignored.
+ --sd, --shopfloordomain=VALUE
+ same as site option
+ The value must follow the syntactical rules of a
+ DNS hostname.
+ Default: not set
+ --vc, --verboseconsole=VALUE
+ ignored.
+ --as, --autotrustservercerts=VALUE
+ same as autoaccept
+ Default: False
+ --tt, --trustedcertstoretype=VALUE
+ ignored.
+ the trusted cert store always resides in a
+ directory.
+ --rt, --rejectedcertstoretype=VALUE
+ ignored.
+ the rejected cert store always resides in a
+ directory.
+ --it, --issuercertstoretype=VALUE
+ ignored.
+ the trusted issuer cert store always
+ resides in a directory.
+```
++
+## OPC Publisher Command-line Arguments for Version 2.6 and above
+```
+ --pf, --publishfile=VALUE
+ the filename to configure the nodes to publish.
+ If this option is specified, it puts OPC Publisher into standalone mode.
+ --lf, --logfile=VALUE
+ the filename of the logfile to use.
+ --ll, --loglevel=VALUE
+ the log level to use (allowed: fatal, error,
+ warn, info, debug, verbose).
+ --me, --messageencoding=VALUE
+ the messaging encoding for outgoing messages
+ allowed values: Json, Uadp
+ --mm, --messagingmode=VALUE
+ the messaging mode for outgoing messages
+ allowed values: PubSub, Samples
+ --fm, --fullfeaturedmessage=VALUE
+ the full featured mode for messages (all fields filled in).
+ Default is 'true', for legacy compatibility use 'false'
+ --aa, --autoaccept
+ the publisher trusts all servers it is establishing a connection to
+ --bs, --batchsize=VALUE
+ the number of OPC UA data-change messages to be cached for batching.
+ --si, --iothubsendinterval=VALUE
+ the trigger batching interval in seconds.
+ --ms, --iothubmessagesize=VALUE
+ the maximum size of the (IoT D2C) message.
+ --om, --maxoutgressmessages=VALUE
+ the maximum size of the (IoT D2C) message egress buffer.
+ --di, --diagnosticsinterval=VALUE
+ shows publisher diagnostic info at the specified interval in seconds
+ (need log level info). -1 disables remote diagnostic log and diagnostic output
+ --lt, --logflushtimespan=VALUE
+ the timespan in seconds when the logfile should be flushed.
+ --ih, --iothubprotocol=VALUE
+ protocol to use for communication with the hub.
+ allowed values: AmqpOverTcp, AmqpOverWebsocket, MqttOverTcp,
+ MqttOverWebsocket, Amqp, Mqtt, Tcp, Websocket, Any
+ --hb, --heartbeatinterval=VALUE
+ the publisher is using this as default value in seconds for the
+ heartbeat interval setting of nodes without a heartbeat interval setting.
+ --ot, --operationtimeout=VALUE
+ the operation timeout of the publisher OPC UA client in ms.
+ --ol, --opcmaxstringlen=VALUE
+ the max length of a string opc can transmit/receive.
+ --oi, --opcsamplinginterval=VALUE
+ default value in milliseconds to request the servers to sample values
+ --op, --opcpublishinginterval=VALUE
+ default value in milliseconds for the publishing interval setting
+ of the subscriptions against the OPC UA server.
+ --ct, --createsessiontimeout=VALUE
+ the timeout in seconds used when creating a session
+ to an endpoint.
+ --kt, --keepalivethreshold=VALUE
+ specify the number of keep alive packets a server can miss,
+ before the session is disconnected.
+ --tm, --trustmyself
+ the publisher certificate is put into the trusted store automatically.
+ --at, --appcertstoretype=VALUE
+ the own application cert store type (allowed: Directory, X509Store).
+```
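+As a hedged example, a standalone-mode invocation composed from the options above (the file path and values are illustrative):
+```
+./opcpublisher --pf=/appdata/publishednodes.json --mm=PubSub --me=Json --di=60 --aa
+```
+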
+## Next steps
+Further resources can be found in the GitHub repositories:
+
+> [!div class="nextstepaction"]
+> [OPC Publisher GitHub repository](https://github.com/Azure/iot-edge-opc-publisher)
+
+> [!div class="nextstepaction"]
+> [IIoT Platform GitHub repository](https://github.com/Azure/Industrial-IoT)
industrial-iot Reference Opc Publisher Telemetry Format https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/industrial-iot/reference-opc-publisher-telemetry-format.md
+
+ Title: Microsoft OPC Publisher Telemetry Format
+description: This article provides an overview of the configuration settings file
++++ Last updated : 3/22/2021+
+# OPC Publisher Telemetry Format
+
+OPC Publisher version 2.6 and above supports the standardized OPC UA PubSub JSON format, as specified in [part 14 of the OPC UA specification](https://opcfoundation.org/developer-tools/specifications-unified-architecture/part-14-pubsub/), which looks like this:
+```
+{
+ "MessageId": "18",
+ "MessageType": "ua-data",
+ "PublisherId": "uat46f9f8f82fd5c1b42a7de31b5dc2c11ef418a62f",
+ "DataSetClassId": "78c4e91c-82cb-444e-a8e0-6bbacc9a946d",
+ "Messages": [
+ {
+ "DataSetWriterId": "uat46f9f8f82fd5c1b42a7de31b5dc2c11ef418a62f",
+ "SequenceNumber": 18,
+ "MetaDataVersion": {
+ "MajorVersion": 1,
+ "MinorVersion": 1
+ },
+ "Timestamp": "2020-03-24T23:30:56.9597112Z",
+ "Status": null,
+ "Payload": {
+ "http://test.org/UA/Data/#i=10845": {
+ "Value": 99,
+ "SourceTimestamp": "2020-03-24T23:30:55.9891469Z",
+ "ServerTimestamp": "2020-03-24T23:30:55.9891469Z"
+ },
+ "http://test.org/UA/Data/#i=10846": {
+ "Value": 251,
+ "SourceTimestamp": "2020-03-24T23:30:55.9891469Z",
+ "ServerTimestamp": "2020-03-24T23:30:55.9891469Z"
+ }
+ }
+ }
+ ]
+}
+```
+
+In addition, all versions of OPC Publisher support a non-standardized, simple JSON telemetry format, which is compatible with [Azure Time Series Insights](https://azure.microsoft.com/services/time-series-insights/) and looks like this:
+```
+[
+ {
+ "EndpointUrl": "opc.tcp://192.168.178.3:49320/",
+ "NodeId": "ns=2;s=Pump\\234754a-c63-b9601",
+ "MonitoredItem": {
+ "ApplicationUri": "urn:myfirstOPCServer"
+ },
+ "Value": {
+ "Value": 973,
+ "SourceTimestamp": "2020-11-30T07:21:31.2604024Z",
+ "StatusCode": 0,
+ "Status": "Good"
+ }
+ },
+ {
+ "EndpointUrl": "opc.tcp://192.168.178.4:49320/",
+ "NodeId": "ns=2;s=Boiler\\234754a-c63-b9601",
+ "MonitoredItem": {
+ "ApplicationUri": "urn:mySecondOPCServer"
+ },
+ "Value": {
+ "Value": 974,
+ "SourceTimestamp": "2020-11-30T07:21:32.2625062Z",
+ "StatusCode": 0,
+ "Status": "Good"
+ }
+ }
+]
+```
+
+## OPC Publisher Telemetry Configuration File Format
+```
+ // The configuration settings file consists of two objects:
+ // 1) The 'Defaults' object, which defines defaults for the telemetry configuration
+ // 2) An array 'EndpointSpecific' of endpoint specific configuration
+ // Both objects are optional and if they are not specified, then OPC Publisher uses
+ // its internal default configuration:
+ // {
+ // "NodeId": "i=2058",
+ // "ApplicationUri": "urn:myopcserver",
+ // "DisplayName": "CurrentTime",
+ // "Value": {
+ // "Value": "10.11.2017 14:03:17",
+ // "SourceTimestamp": "2017-11-10T14:03:17Z"
+ // }
+ // }
+
+ // The 'Defaults' object in the sample below is similar to what publisher
+ // uses as its internal default telemetry configuration.
+ {
+ "Defaults": {
+ // The first two properties ('EndpointUrl' and 'NodeId') configure data
+ // taken from the OpcPublisher node configuration.
+ "EndpointUrl": {
+
+ // The following three properties can be used to configure the 'EndpointUrl'
+ // property in the JSON message sent by publisher to IoTHub.
+
+ // Publish controls if the property should be part of the JSON message at all.
+ "Publish": false,
+
+ // Pattern is a regular expression, which is applied to the actual value of the
+ // property (here 'EndpointUrl').
+ // If this key is omitted (which is the default), then no regex matching is done
+ // at all, which improves performance.
+ // If the key is used, you need to define groups in the regular expression.
+ // Publisher applies the regular expression and then concatenates all groups
+ // found and uses the resulting string as the value in the JSON message
+ // sent to IoTHub.
+ // This example mimics the default behaviour and defines a group,
+ // which matches the complete value:
+ "Pattern": "(.*)",
+ // Here are some more examples for 'Pattern' values and the generated result:
+ // "Pattern": "i=(.*)"
+ // defined for Defaults.NodeId.Pattern, will generate for the above sample
+ // a 'NodeId' value of '2058' to be sent by publisher
+ // "Pattern": "(i)=(.*)"
+ // defined for Defaults.NodeId.Pattern, will generate for the above sample
+ // a 'NodeId' value of 'i2058' to be sent by publisher
+
+ // Name allows you to use a shorter string as property name in the JSON message
+ // sent by publisher. By default the property name is unchanged and will be
+ // here 'EndpointUrl'.
+ // The 'Name' property can only be set in the 'Defaults' object to ensure
+ // all messages from publisher sent to IoTHub have a similar layout.
+ "Name": "EndpointUrl"
+
+ },
+ "NodeId": {
+ "Publish": true,
+
+ // If you set Defaults.NodeId.Name to "ni", then the "NodeId" key/value pair
+ // (from the above example) will change to:
+ // "ni": "i=2058",
+ "Name": "NodeId"
+ },
+
+ // The MonitoredItem object is configuring the data taken from the MonitoredItem
+ // OPC UA object for published nodes.
+ "MonitoredItem": {
+
+ // If you set the Defaults.MonitoredItem.Flat to 'false', then a
+ // 'MonitoredItem' object will appear, which contains 'ApplicationUri'
+ // and 'DisplayName' properties:
+ // "NodeId": "i=2058",
+ // "MonitoredItem": {
+ // "ApplicationUri": "urn:myopcserver",
+ // "DisplayName": "CurrentTime",
+ // }
+ // The 'Flat' property can only be used in the 'MonitoredItem' and
+ // 'Value' objects of the 'Defaults' object and will be used
+ // for all JSON messages sent by publisher.
+ "Flat": true,
+
+ "ApplicationUri": {
+ "Publish": true,
+ "Name": "ApplicationUri"
+ },
+ "DisplayName": {
+ "Publish": true,
+ "Name": "DisplayName"
+ }
+ },
+ // The Value object is configuring the properties taken from the event object
+ // the OPC UA stack provided in the value change notification event.
+ "Value": {
+ // If you set the Defaults.Value.Flat to 'true', then the 'Value'
+ // object will disappear completely and the 'Value' and 'SourceTimestamp'
+ // members won't be nested:
+ // "DisplayName": "CurrentTime",
+ // "Value": "10.11.2017 14:03:17",
+ // "SourceTimestamp": "2017-11-10T14:03:17Z"
+ // The 'Flat' property can only be used for the 'MonitoredItem' and 'Value'
+ // objects of the 'Defaults' object and will be used for all
+ // messages sent by publisher.
+ "Flat": false,
+
+ "Value": {
+ "Publish": true,
+ "Name": "Value"
+ },
+ "SourceTimestamp": {
+ "Publish": true,
+ "Name": "SourceTimestamp"
+ },
+ // 'StatusCode' is the 32 bit OPC UA status code
+ "StatusCode": {
+ "Publish": false,
+ "Name": "StatusCode"
+ // 'Pattern' is ignored for the 'StatusCode' value
+ },
+ // 'Status' is the symbolic name of 'StatusCode'
+ "Status": {
+ "Publish": false,
+ "Name": "Status"
+ }
+ }
+ },
+
+ // The next object allows you to configure 'Publish' and 'Pattern' for specific
+ // endpoint URLs. Those will overwrite the ones specified in the 'Defaults' object
+ // or the defaults used by publisher.
+ // It is not allowed to specify 'Name' and 'Flat' properties in this object.
+ "EndpointSpecific": [
+ // The following shows what an endpoint-specific configuration can look like:
+ {
+ // 'ForEndpointUrl' configures which OPC UA server this
+ // object applies to and is a required property for all objects in the
+ // 'EndpointSpecific' array.
+ // The value of 'ForEndpointUrl' must be an 'EndpointUrl' configured in
+ // the publishednodes.json configuration file.
+ "ForEndpointUrl": "opc.tcp://<your_opcua_server>:<your_opcua_server_port>/<your_opcua_server_path>",
+ "EndpointUrl": {
+ // We overwrite the default behaviour and publish the
+ // endpoint URL in this case.
+ "Publish": true,
+ // We are only interested in the URL part following the 'opc.tcp://' prefix
+ // and define a group matching this.
+ "Pattern": "opc.tcp://(.*)"
+ },
+ "NodeId": {
+ // We are not interested in the configured 'NodeId' value,
+ // so we do not publish it.
+ "Publish": false
+ // No 'Pattern' key is specified here, so the 'NodeId' value will be
+ // taken as specified in the publishednodes configuration file.
+ },
+ "MonitoredItem": {
+ "ApplicationUri": {
+ // We already publish the endpoint URL, so we do not want
+ // the ApplicationUri of the MonitoredItem to be published.
+ "Publish": false
+ },
+ "DisplayName": {
+ "Publish": true
+ }
+ },
+ "Value": {
+ "Value": {
+ // The value of the node is important for us; everything else we
+ // are not interested in, to keep the data ingest as small as possible.
+ "Publish": true
+ },
+ "SourceTimestamp": {
+ "Publish": false
+ },
+ "StatusCode": {
+ "Publish": false
+ },
+ "Status": {
+ "Publish": false
+ }
+ }
+ }
+ ]
+ }
+```
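+
+A telemetry configuration file like the one above is passed to OPC Publisher version 2.5 through the `--tc, --telemetryconfigfile` option from the command-line reference; a hedged example (the file path is a placeholder):
+```
+./opcpublisher <applicationname> --tc=/appdata/telemetryconfig.json
+```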
+
+## Next steps
+Further resources can be found in the GitHub repositories:
+
+> [!div class="nextstepaction"]
+> [OPC Publisher GitHub repository](https://github.com/Azure/iot-edge-opc-publisher)
+
+> [!div class="nextstepaction"]
+> [IIoT Platform GitHub repository](https://github.com/Azure/Industrial-IoT)
industrial-iot Tutorial Configure Industrial Iot Components https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/industrial-iot/tutorial-configure-industrial-iot-components.md
+
+ Title: Configure the Azure Industrial IoT components
+description: In this tutorial, you learn how to change the default values of the configuration.
++++ Last updated : 3/22/2021++
+# Tutorial: Configure the Industrial IoT components
+
+The deployment script automatically configures all components to work with each other using default values. However, the default values can be changed to meet your requirements.
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Customize the configuration of the components
++
+Here are some of the more relevant customization settings for the components:
+* IoT Hub
+ * Networking → Public access: Configure Internet access, for example, IP filters
+ * Networking → Private endpoint connections: Create an endpoint that's not accessible
+ through the Internet and can be consumed internally by other Azure services or on-premises devices (for example, through a VPN connection)
+ * IoT Edge: Manage the configuration of the edge devices that are connected to the OPC
+UA servers
+* Cosmos DB
+ * Replicate data globally: Configure data-redundancy
+ * Firewall and virtual networks: Configure Internet and VNET access, and IP filters
+ * Private endpoint connections: Create an endpoint that is not accessible through the
+Internet
+* Key Vault
+ * Secrets: Manage platform settings
+ * Access policies: Manage which applications and users may access the data in the Key Vault and which operations (for example, read, write, list, delete) they are allowed to perform
+ * Networking: Firewall, VNET, and private endpoints
+* Azure Active Directory (AAD) → App registrations
+ * <APP_NAME>-web → Authentication: Manage reply URIs, which is the list of URIs that
+can be used as landing pages after authentication succeeds. The deployment script may be unable to configure this automatically under certain scenarios, such as lack of AAD admin rights. You may want to add or modify URIs when changing the hostname of the Web app, for example, the port number used by the localhost for debugging
+* App Service
+ * Configuration: Manage the environment variables that control the services or UI
+* Virtual machine
+ * Networking: Configure supported networks and firewall rules
 * Serial console: SSH access to get insights or for debugging; get the credentials from the
output of the deployment script or reset the password
+* IoT Hub → IoT Edge
+ * Manage the identities of the IoT Edge devices that may access the hub, configure which modules are installed and which configuration they use, for example, encoding parameters for the OPC Publisher
+* IoT Hub → IoT Edge → \<DEVICE> → Set Modules → OpcPublisher (for standalone OPC Publisher operation only)
++
+## Configuration options
+
+|Configuration Option (shorthand/full name) | Description |
+|--|--|
+pf/publishfile |The filename to configure the nodes to publish. If this option is specified, it puts OPC Publisher into standalone mode.
+lf/logfile |The filename of the logfile to use.
+ll/loglevel |The log level to use (allowed: fatal, error, warn, info, debug, verbose).
+me/messageencoding |The messaging encoding for outgoing messages allowed values: Json, Uadp
+mm/messagingmode |The messaging mode for outgoing messages allowed values: PubSub, Samples
+fm/fullfeaturedmessage |The full featured mode for messages (all fields filled in). Default is 'true', for legacy compatibility use 'false'
+aa/autoaccept |The publisher trusts all servers it establishes a connection to
+bs/batchsize |The number of OPC UA data-change messages to be cached for batching.
+si/iothubsendinterval |The trigger batching interval in seconds.
+ms/iothubmessagesize |The maximum size of the (IoT D2C) message.
+om/maxoutgressmessages |The maximum size of the (IoT D2C) message egress buffer.
+di/diagnosticsinterval |Shows publisher diagnostic info at the specified interval in seconds (need log level info). -1 disables remote diagnostic log and diagnostic output
+lt/logflushtimespan |The timespan in seconds when the logfile should be flushed.
+ih/iothubprotocol |Protocol to use for communication with the hub. Allowed values: AmqpOverTcp, AmqpOverWebsocket, MqttOverTcp, MqttOverWebsocket, Amqp, Mqtt, Tcp, Websocket, Any
+hb/heartbeatinterval |The publisher is using this as default value in seconds for the heartbeat interval setting of nodes without a heartbeat interval setting.
+ot/operationtimeout |The operation timeout of the publisher OPC UA client in ms.
+ol/opcmaxstringlen |The max length of a string opc can transmit/receive.
+oi/opcsamplinginterval |Default value in milliseconds to request the servers to sample values
+op/opcpublishinginterval |Default value in milliseconds for the publishing interval setting of the subscriptions against the OPC UA server.
+ct/createsessiontimeout |The timeout in seconds used when creating a session to an endpoint.
+kt/keepalivethreshold |Specify the number of keep alive packets a server can miss, before the session is disconnected.
+tm/trustmyself |The publisher certificate is put into the trusted store automatically.
+at/appcertstoretype |The own application cert store type (allowed: Directory, X509Store).
++
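+For standalone OPC Publisher operation, these options are typically supplied through the module's container create options in IoT Hub → IoT Edge → \<DEVICE> → Set Modules → OpcPublisher. A minimal sketch, assuming a hypothetical host bind mount for the configuration file:
+```
+{
+  "Cmd": [
+    "--pf=/appdata/publishednodes.json",
+    "--aa"
+  ],
+  "HostConfig": {
+    "Binds": [
+      "/iiotedge:/appdata"
+    ]
+  }
+}
+```
+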
+## Next steps
+Now that you have learned how to change the default values of the configuration, you can
+
+> [!div class="nextstepaction"]
+> [Pull IIoT data into ADX](tutorial-industrial-iot-azure-data-explorer.md)
+
+> [!div class="nextstepaction"]
+> [Visualize and analyze the data using Time Series Insights](tutorial-visualize-data-time-series-insights.md)
industrial-iot Tutorial Deploy Industrial Iot Platform https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/industrial-iot/tutorial-deploy-industrial-iot-platform.md
+
+ Title: Deploy the Azure Industrial IoT Platform
+description: In this tutorial, you learn how to deploy the IIoT Platform.
++++ Last updated : 3/22/2021++
+# Tutorial: Deploy the Azure Industrial IoT Platform
+
+In this tutorial, you learn:
+
+> [!div class="checklist"]
+> * About the main components of the IIoT Platform
+> * About the different installation types
+> * How to deploy the Industrial IoT Platform
+
+## Prerequisites
+
+- An Azure subscription must be created
+- Download [Git](https://git-scm.com/downloads)
+- The Azure Active Directory (AAD) app registrations used for authentication require Global Administrator, Application
+Administrator, or Cloud Application Administrator rights to provide tenant-wide admin consent (see below for further options)
+- The supported operating systems for deployment are Windows, Linux and Mac
+- IoT Edge supports Windows 10 IoT Enterprise LTSC and Ubuntu Linux 16.04/18.04 LTS
+
+## Main Components
+
+- Minimum dependencies: IoT Hub, Cosmos DB, Service Bus, Event Hub, Key Vault, Storage
+- Standard dependencies: Minimum + SignalR Service, AAD app
+registrations, Device Provisioning Service, Time Series Insights, Workbook, Log Analytics,
+Application Insights
+- Microservices: App Service Plan, App Service
+- UI (Web app): App Service Plan (shared with microservices), App Service
+- Simulation: Virtual machine, Virtual network, IoT Edge
+- Azure Kubernetes Service
+
+## Installation types
+
+- Minimum: Minimum dependencies
+- Local: Minimum and the standard dependencies
+- Services: Local and the microservices
+- Simulation: Minimum dependencies and the simulation components
+- App: Services and the UI
+- All (default): App and the simulation
+
+## Deployment
+
+1. To get started with the deployment of the IIoT Platform, clone the repository from the command prompt or terminal.
+
+   ```
+   git clone https://github.com/Azure/Industrial-IoT
+   cd Industrial-IoT
+   ```
+
+2. Start the guided deployment. The script will collect the required information, such as Azure account, subscription, target resource group, and application name.
+
+On Windows:
+ ```
+ .\deploy
+ ```
+
+On Linux or Mac:
+ ```
+ ./deploy.sh
+ ```
+
+3. The microservices and the UI are web applications that require authentication, which in turn requires three app registrations in the AAD. If the required rights are missing, there are two possible solutions:
+
+- Ask the AAD admin to grant tenant-wide admin consent for the application
+- An AAD admin can create the AAD applications. The deploy/scripts folder contains the aad-register.ps1 script to perform the AAD registration separately from the deployment. The output of the script is a file containing the relevant information to be used as part of deployment and must be passed to the deploy.ps1 script in the same folder using the -aadConfig argument.
+ ```bash
+ cd deploy/scripts
+ ./aad-register.ps1 -Name <application-name> -Output aad.json
+ ./deploy.ps1 -aadConfig aad.json
+ ```
+
+For production deployments that require staging, rollback, scaling, and resilience, the platform can be deployed into [Azure Kubernetes Service (AKS)](https://github.com/Azure/Industrial-IoT/blob/master/docs/deploy/howto-deploy-aks.md).
+
+References:
+- [Deploying Azure Industrial IoT Platform](https://github.com/Azure/Industrial-IoT/tree/master/docs/deploy)
+- [How to deploy all-in-one](https://github.com/Azure/Industrial-IoT/blob/master/docs/deploy/howto-deploy-all-in-one.md)
+- [How to deploy platform into AKS](https://github.com/Azure/Industrial-IoT/blob/master/docs/deploy/howto-deploy-aks.md)
++
+## Next steps
+Now that you have deployed the IIoT Platform, you can learn how to customize configuration of the components:
+
+> [!div class="nextstepaction"]
+> [Customize the configuration of the components](tutorial-configure-industrial-iot-components.md)
industrial-iot Tutorial Industrial Iot Azure Data Explorer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/industrial-iot/tutorial-industrial-iot-azure-data-explorer.md
+
+ Title: Pull Azure Industrial IoT data into ADX
+description: In this tutorial, you learn how to pull IIoT data into ADX.
++++ Last updated : 3/22/2021++
+# Tutorial: Pull Azure Industrial IoT data into ADX
+
+The Azure Industrial IoT (IIoT) Platform combines edge modules and cloud microservices with a number of Azure PaaS services to provide capabilities for industrial asset discovery and to collect data from these assets using OPC UA. [Azure Data Explorer (ADX)](https://docs.microsoft.com/azure/data-explorer) is a natural destination for IIoT data, with data analytics features that enable running flexible queries on the data ingested from the OPC UA servers connected to IoT Hub through OPC Publisher. Although an ADX cluster can ingest data directly from IoT Hub, the IIoT platform processes the data further to make it more useful before putting it on the Event Hub that is provided when a full deployment of the microservices is used (refer to the IIoT platform architecture).
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Create a table in ADX
+> * Connect the Event Hub to the ADX Cluster
+> * Analyze the data in ADX
+
+## How to make the data available in the ADX cluster to query it effectively
+
+If we look at the message format from the Event Hub (as defined by the class Microsoft.Azure.IIoT.OpcUa.Subscriber.Models.MonitoredItemMessageModel), we can see a hint of the structure that we need for the ADX table schema.
+
+![Structure](media/tutorial-iiot-data-adx/industrial-iot-in-azure-data-explorer-pic-1.png)
+
+Below are the steps that we'll need to make the data available in the ADX cluster and to query the data effectively.
+1. Create an ADX cluster. If you don't have an ADX cluster provisioned with the IIoT platform already, or if you would like to use a different cluster, follow the steps [here](https://docs.microsoft.com/azure/data-explorer/create-cluster-database-portal#create-a-cluster).
+2. Enable streaming ingestion on the ADX cluster as explained [here](https://docs.microsoft.com/azure/data-explorer/ingest-data-streaming#enable-streaming-ingestion-on-your-cluster).
+3. Create an ADX database by following the steps [here](https://docs.microsoft.com/azure/data-explorer/create-cluster-database-portal#create-a-database).
+
+For the following step, we'll use the [ADX web interface](https://docs.microsoft.com/azure/data-explorer/web-query-data) to run the necessary queries. Make sure to add your cluster to the web interface as explained in the link.
+
+4. Create a table in ADX to put the ingested data in. Although the MonitoredItemMessageModel class can be used to define the schema of the ADX table, it's recommended to ingest the data first into a staging table with one column of type [Dynamic](https://docs.microsoft.com/azure/data-explorer/kusto/query/scalar-data-types/dynamic). This gives us more flexibility in handling the data and processing it into other tables (potentially combining it with other data sources) that serve the needs of multiple use cases. The following ADX query creates the staging table 'iiot_stage' with one column 'payload':
+
+```
+.create table ['iiot_stage'] (['payload']:dynamic)
+```
+
+We also need to add a JSON ingestion mapping to instruct the cluster to put the entire JSON message from the Event Hub into the staging table:
+
+```
+.create table ['iiot_stage'] ingestion json mapping 'iiot_stage_mapping' '[{"column":"payload","path":"$","datatype":"dynamic"}]'
+```
+
+5. Our table is now ready to receive data from the Event Hub.
+6. Use the instructions [here](https://docs.microsoft.com/azure/data-explorer/ingest-data-event-hub#connect-to-the-event-hub) to connect the Event Hub to the ADX cluster and start ingesting the data into our staging table. We only need to create the connection as we already have an Event Hub provisioned by the IIoT platform.
+7. Once the connection is verified, data will start flowing and, after a short delay, we can start examining it. Use the following query in the ADX web interface to look at a data sample of 10 rows. We can see here how the data in the payload resembles the MonitoredItemMessageModel class mentioned earlier.
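+
+The query shown in the screenshot is along these lines, a minimal sample that takes 10 arbitrary rows:
+
+```
+iiot_stage
+| take 10
+```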
+
+![Query](media/tutorial-iiot-data-adx/industrial-iot-in-azure-data-explorer-pic-2.png)
+
+8. Let us now run some analytics on this data by parsing the dynamic data in the 'payload' column directly. In this example, we'll compute the average of the telemetry identified by "DisplayName": "PositiveTrendData", over time windows of 10 minutes, on all the records ingested since a specific point in time (defined by the variable min_t):
+
+```
+let min_t = datetime(2020-10-23);
+iiot_stage
+| where todatetime(payload.Timestamp) > min_t
+| where tostring(payload.DisplayName) == 'PositiveTrendData'
+| summarize event_avg = avg(todouble(payload.Value)) by bin(todatetime(payload.Timestamp), 10m)
+```
+
+Since our 'payload' column contains a dynamic data type, we need to carry out data conversion at query time so that our calculations are carried out on the correct data types.
+
+![Payload Timestamp](media/tutorial-iiot-data-adx/industrial-iot-in-azure-data-explorer-pic-3.png)
+
+As we mentioned earlier, ingesting the OPC UA data into a staging table with one 'Dynamic' column gives us flexibility. However, having to run data type conversions at query time can result in delays in executing the queries, particularly if the data volume is large and if there are many concurrent queries. At this stage, we can create another table with the data types already determined, so that we avoid the query-time data type conversions.
+
+9. Create a new table for the parsed data that consists of a limited selection from the content of the dynamic 'payload' in the staging table. Note that we've created a value column for each of the data types expected in our telemetry.
+
+```
+.create table ['iiot_parsed']
+ (['Tag_Timestamp']: datetime ,
+ ['Tag_PublisherId']:string ,
+ ['Tag']:string ,
+ ['Tag_Datatype']:string ,
+ ['Tag_NodeId']:string,
+ ['Tag_value_long']:long ,
+ ['Tag_value_double']:double,
+ ['Tag_value_boolean']:bool)
+```
+
+10. Create a function (at the database level) to project the required data from the staging table. Here we select the 'Timestamp', 'PublisherId', 'DisplayName', 'DataType', and 'NodeId' items from the 'payload' column and project these as 'Tag_Timestamp', 'Tag_PublisherId', 'Tag', 'Tag_Datatype', and 'Tag_NodeId'. The 'Value' item is projected into three different columns based on the 'DataType'.
+
+```
+.create-or-alter function fn_InflightParseIIoTEvent()
+{
+iiot_stage
+| extend Tag_Timestamp = todatetime(payload.Timestamp)
+| extend Tag_PublisherId = tostring(payload.PublisherId)
+| extend Tag = tostring(payload.DisplayName)
+| extend Tag_Datatype = tostring(payload.DataType)
+| extend Tag_NodeId = tostring(payload.NodeId)
+| extend Tag_value_long = case(Tag_Datatype == "Int64", tolong(payload.Value), long(null))
+| extend Tag_value_double = case(Tag_Datatype == "Double", todouble(payload.Value), double(null))
+| extend Tag_value_boolean = case(Tag_Datatype == "Boolean", tobool(payload.Value), bool(null))
+| project Tag_Timestamp, Tag_PublisherId, Tag, Tag_Datatype, Tag_NodeId, Tag_value_long, Tag_value_double, Tag_value_boolean
+}
+```
+
+For more information on mapping data types in ADX, see [here](https://docs.microsoft.com/azure/data-explorer/kusto/query/scalar-data-types/dynamic), and for functions in ADX you can start [here](https://docs.microsoft.com/azure/data-explorer/kusto/query/schema-entities/stored-functions).
+
+11. Apply the function from the previous step to the parsed table using an update policy. An update [policy](https://docs.microsoft.com/azure/data-explorer/kusto/management/updatepolicy) instructs ADX to automatically append data to a target table whenever new data is inserted into the source table, based on a transformation query that runs on the data inserted into the source table. We can use the following query to assign the parsed table as the destination and the staging table as the source for the update policy, using the function we created in the previous step.
+
+```
+.alter table iiot_parsed policy update
+@'[{"IsEnabled": true, "Source": "iiot_stage", "Query": "fn_InflightParseIIoTEvent()", "IsTransactional": true, "PropagateIngestionProperties": true}]'
+```
+
+As soon as the above query is executed, data will start flowing and populating the destination table 'iiot_parsed'. We can look at the data in 'iiot_parsed' as follows.
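+
+The screenshot below reflects a simple sample query along these lines:
+
+```
+iiot_parsed
+| take 10
+```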
+
+![Parsed Table](media/tutorial-iiot-data-adx/industrial-iot-in-azure-data-explorer-pic-4.png)
+
+12. Let us now look at how we can repeat the analytics that we did in a previous step: compute the average of the telemetry identified by "DisplayName": "PositiveTrendData", over time windows of 10 minutes, on all the records ingested since a specific point in time (defined by the variable min_t). As we now have the values of the 'PositiveTrendData' tag stored in a column of double data type, we expect an improvement in the query performance.
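+
+A sketch of the equivalent query against the parsed columns, mirroring the staging-table query from step 8:
+
+```
+let min_t = datetime(2020-10-23);
+iiot_parsed
+| where Tag_Timestamp > min_t
+| where Tag == 'PositiveTrendData'
+| summarize event_avg = avg(Tag_value_double) by bin(Tag_Timestamp, 10m)
+```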
+
+![Repeat analytics](media/tutorial-iiot-data-adx/industrial-iot-in-azure-data-explorer-pic-5.png)
+
+13. Let us finally compare the query performance in both cases. We can find the time taken to execute the query using the 'Stats' in the ADX UI (which can be located above the query results).
+
+![Query performance 1](media/tutorial-iiot-data-adx/industrial-iot-in-azure-data-explorer-pic-6.png)
+
+![Query performance 2](media/tutorial-iiot-data-adx/industrial-iot-in-azure-data-explorer-pic-7.png)
+
+We can see that the query that uses the parsed table is roughly twice as fast as the one for the staging table. In this example, we have a small dataset and there are no concurrent queries running, so the effect on the query execution time isn't great; however, for a realistic workload there would be a large impact on performance. This is why it's important to consider separating the different data types into different columns.
+
+> [!NOTE]
+> The Update Policy only works on the data that is ingested into the staging table after the policy was set up and doesn't apply to any pre-existing data. This needs to be taken into consideration when, for example, we need to change the update policy. Full details can be found in the ADX documentation.
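+
+One possible way to backfill the pre-existing staged data is to ingest the output of the parsing function with a set-or-append command; this is only a sketch, so verify it against the ADX documentation and test it on a small dataset first to avoid duplicating rows:
+
+```
+.set-or-append iiot_parsed <| fn_InflightParseIIoTEvent()
+```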
+
+## Next steps
+Now that you have learned how to pull Azure Industrial IoT data into Azure Data Explorer, you can
+
+> [!div class="nextstepaction"]
+> [Configure Industrial IoT components](tutorial-configure-industrial-iot-components.md)
+
+> [!div class="nextstepaction"]
+> [Visualize and analyze the data using Time Series Insights](tutorial-visualize-data-time-series-insights.md)
industrial-iot Tutorial Publisher Configure Opc Publisher https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/industrial-iot/tutorial-publisher-configure-opc-publisher.md
+
+ Title: Configure the Microsoft OPC Publisher
+description: In this tutorial, you learn how to configure the OPC Publisher in standalone mode.
+ Last updated : 3/22/2021
+# Tutorial: Configure the OPC Publisher
+
+This tutorial contains information on the configuration of the OPC Publisher. Several interfaces can be used to configure it.
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Configure the OPC Publisher via Configuration File
+> * Configure the OPC Publisher via Command-line Arguments
+> * Configure the OPC Publisher via IoT Hub Direct Methods
+> * Configure the OPC Publisher via cloud-based, companion REST microservice
+
+## Configuring Security
+
+IoT Edge provides OPC Publisher with its security configuration for accessing IoT Hub automatically. OPC Publisher can also run as a standalone Docker container by specifying a device connection string for accessing IoT Hub via the `dc` command-line parameter. A device for IoT Hub can be created and its connection string retrieved through the Azure portal.
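+
+As a sketch, running OPC Publisher standalone with a device connection string might look like the following; the connection string values are placeholders for those of the IoT Hub device you created:
+
+```
+docker run mcr.microsoft.com/iotedge/opc-publisher:latest <applicationname> --dc="HostName=<hub-name>.azure-devices.net;DeviceId=<device-id>;SharedAccessKey=<key>"
+```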
+
+For accessing OPC UA-enabled assets, X.509 certificates and their associated private keys are used by OPC UA. This is called OPC UA application authentication, and it is used in addition to OPC UA user authentication. OPC Publisher uses a file system-based certificate store to manage all application certificates. During startup, OPC Publisher checks whether there is a certificate it can use in this certificate store, and creates a new self-signed certificate and new associated private key if there is none. Self-signed certificates provide weak authentication, since they are not signed by a trusted Certificate Authority, but at least the communication to the OPC UA-enabled asset can be encrypted this way.
+
+Security is enabled in the configuration file via the `"UseSecurity": true,` flag. When security is enabled, the most secure endpoint available on the OPC UA servers that OPC Publisher is supposed to connect to is automatically selected.
+By default, OPC Publisher uses anonymous user authentication (in addition to the application authentication described above). However, OPC Publisher also supports user authentication using username and password. These can be specified via the REST API configuration interface (described below) or in the configuration file as follows:
+```
+"OpcAuthenticationMode": "UsernamePassword",
+"OpcAuthenticationUsername": "usr",
+"OpcAuthenticationPassword": "pwd",
+```
+In addition, OPC Publisher version 2.5 and below encrypts the username and password in the configuration file. Version 2.6 and above only supports the username and password in plaintext. This will be improved in the next version of OPC Publisher.
+
+To persist the security configuration of OPC Publisher across restarts, the certificate and private key located in the certificate store directory must be mapped to the IoT Edge host OS filesystem. See the section "Specifying Container Create Options in the Azure portal" in the deployment tutorial.
+
+## Configuration via Configuration File
+
+The simplest way to configure OPC Publisher is via a configuration file. An example configuration file as well as documentation regarding its format is provided via the file [`publishednodes.json`](https://raw.githubusercontent.com/Azure/iot-edge-opc-publisher/master/opcpublisher/publishednodes.json) in this repository.
+The configuration file syntax has changed over time. OPC Publisher can still read old formats, but it converts them into the latest format when persisting the configuration, which is done regularly in an automated fashion.
+
+A basic configuration file looks like this:
+```
+[
+ {
+ "EndpointUrl": "opc.tcp://testserver:62541/Quickstarts/ReferenceServer",
+ "UseSecurity": true,
+ "OpcNodes": [
+ {
+ "Id": "i=2258",
+ "OpcSamplingInterval": 2000,
+ "OpcPublishingInterval": 5000,
+ "DisplayName": "Current time"
+ }
+ ]
+ }
+]
+```
+
+OPC UA assets optimize network bandwidth by only sending data changes to OPC Publisher when the data has changed. If data changes need to be published more often or at regular intervals, OPC Publisher supports a "heartbeat" for every configured data item that can be enabled by additionally specifying the HeartbeatInterval key in the data item's configuration. The interval is specified in seconds:
+```
+ "HeartbeatInterval": 3600,
+```
+
+An OPC UA asset always sends the current value of a data item when OPC Publisher first connects to it. To prevent publishing this data to IoT Hub, the SkipFirst key can be additionally specified in the data item's configuration:
+```
+ "SkipFirst": true,
+```
+
+Both settings can be enabled globally via command-line options, too.
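+
+For example, the global defaults could be set with command-line flags such as the following (flag spellings follow the command-line reference linked in the next section; the values are illustrative):
+
+```
+--hb=3600 --sf=true
+```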
+
+## Configuration via Command-line Arguments
+
+There are several command-line arguments that can be used to set global settings for OPC Publisher. They are described [here](reference-command-line-arguments.md).
+
+## Configuration via the built-in OPC UA Server Interface
+
+>[!NOTE]
+> This feature is only available in version 2.5 and below of OPC Publisher.
+
+OPC Publisher has a built-in OPC UA server, running on port 62222. It implements three OPC UA methods:
+
+ - PublishNode
+ - UnpublishNode
+ - GetPublishedNodes
+
+This interface can be accessed using an OPC UA client application, for example [UA Expert](https://www.unified-automation.com/products/development-tools/uaexpert.html).
+
+## Configuration via IoT Hub Direct Methods
+
+>[!NOTE]
+> This feature is only available in version 2.5 and below of OPC Publisher.
+
+OPC Publisher implements the following [IoT Hub Direct Methods](https://docs.microsoft.com/azure/iot-hub/iot-hub-devguide-direct-methods), which can be called from an application (from anywhere in the world) leveraging the [IoT Hub Device SDK](https://docs.microsoft.com/azure/iot-hub/iot-hub-devguide-sdks):
+
+ - PublishNodes
+ - UnpublishNodes
+ - UnpublishAllNodes
+ - GetConfiguredEndpoints
+ - GetConfiguredNodesOnEndpoint
+ - GetDiagnosticInfo
+ - GetDiagnosticLog
+ - GetDiagnosticStartupLog
+ - ExitApplication
+ - GetInfo
+
+We have provided a [sample configuration application](https://github.com/Azure-Samples/iot-edge-opc-publisher-nodeconfiguration) as well as an [application for reading diagnostic information](https://github.com/Azure-Samples/iot-edge-opc-publisher-diagnostics) from OPC Publisher open-source, leveraging this interface.
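+
+As a sketch, a direct method such as GetInfo could be invoked from the Azure CLI, assuming OPC Publisher is deployed as an IoT Edge module named "publisher"; the method payload formats are defined in the repository:
+
+```
+az iot hub invoke-module-method --hub-name <hub-name> --device-id <edge-device-id> --module-id publisher --method-name GetInfo --method-payload '{}'
+```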
+
+## Configuration via Cloud-based, Companion REST Microservice
+
+>[!NOTE]
+> This feature is only available in version 2.6 and above of OPC Publisher.
+
+A cloud-based, companion microservice with a REST interface is described and available [here](https://github.com/Azure/Industrial-IoT/blob/master/docs/services/publisher.md). It can be used to configure OPC Publisher via an OpenAPI-compatible interface, for example through Swagger.
+
+## Configuration of the simple JSON telemetry format via Separate Configuration File
+
+>[!NOTE]
+> This feature is only available in version 2.5 and below of OPC Publisher.
+
+OPC Publisher allows filtering the parts of the non-standardized, simple telemetry format via a separate configuration file, which can be specified via the `tc` command-line option. If no configuration file is specified, the full JSON telemetry format is sent to IoT Hub. The format of the separate telemetry configuration file is described [here](reference-opc-publisher-telemetry-format.md#opc-publisher-telemetry-configuration-file-format).
+
+## Next steps
+Now that you have configured the OPC Publisher, the next step is to learn how to tune the performance and memory of the Edge module:
+
+> [!div class="nextstepaction"]
+> [Performance and Memory Tuning](tutorial-publisher-performance-memory-tuning-opc-publisher.md)
industrial-iot Tutorial Publisher Deploy Opc Publisher Standalone https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/industrial-iot/tutorial-publisher-deploy-opc-publisher-standalone.md
+
+ Title: Deploy the Microsoft OPC Publisher
+description: In this tutorial you learn how to deploy the OPC Publisher in standalone mode.
+ Last updated : 3/22/2021
+# Tutorial: Deploy the OPC Publisher
+
+OPC Publisher is a fully supported Microsoft product, developed in the open, that bridges the gap between industrial assets and the Microsoft Azure cloud. It does so by connecting to OPC UA-enabled assets or industrial connectivity software and publishes telemetry data to [Azure IoT Hub](https://azure.microsoft.com/services/iot-hub/) in various formats, including IEC62541 OPC UA PubSub standard format (from version 2.6 onwards).
+
+It runs on [Azure IoT Edge](https://azure.microsoft.com/services/iot-edge/) as a Module or on plain Docker as a container. Since it leverages the [.NET cross-platform runtime](https://docs.microsoft.com/dotnet/core/introduction), it also runs natively on Linux and Windows 10.
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Deploy the OPC Publisher
+> * Run the latest released version of OPC Publisher as a container
+> * Specify Container Create Options in the Azure portal
+
+If you don't have an Azure subscription, create a free trial account.
+
+## Prerequisites
+
+- An IoT Hub must be created
+- An IoT Edge device must be created
+
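+If these prerequisites don't exist yet, they can be created with the Azure CLI, assuming the IoT extension is installed; a minimal sketch with placeholder names:
+
+```
+az iot hub create --name <hub-name> --resource-group <resource-group> --sku S1
+az iot hub device-identity create --hub-name <hub-name> --device-id <edge-device-id> --edge-enabled
+```
+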
+## Deploy the OPC Publisher from the Azure Marketplace
+
+1. Pick the Azure subscription to use. If no Azure subscription is available, one must be created.
+2. Pick the IoT Hub the OPC Publisher is supposed to send data to. If no IoT Hub is available, one must be created.
+3. Pick the IoT Edge device the OPC Publisher is supposed to run on (or enter a name for a new IoT Edge device to be created).
+4. Click Create. The "Set modules on Device" page for the selected IoT Edge device opens.
+5. Click on "OPCPublisher" to open the OPC Publisher's "Update IoT Edge Module" page and then select "Container Create Options".
+6. Specify additional container create options based on your usage of OPC Publisher; see the next section below.
+
+### Accessing the Microsoft Container Registry Docker containers for OPC Publisher manually
+
+The latest released version of OPC Publisher can be run manually via:
+
+```
+docker run mcr.microsoft.com/iotedge/opc-publisher:latest <name>
+```
+
+where `<name>` is the OPC UA application name for this OPC Publisher instance to use.
+
+## Specifying Container Create Options in the Azure portal
+When deploying OPC Publisher through the Azure portal, container create options can be specified in the Update IoT Edge Module page of OPC Publisher. These create options must be in JSON format. The OPC Publisher command line arguments can be specified via the Cmd key, e.g.:
+```
+"Cmd": [
+ "--pf=./pn.json",
+ "--aa"
+],
+```
+
+A typical set of IoT Edge Module Container Create Options for OPC Publisher is:
+```
+{
+ "Hostname": "opcpublisher",
+ "Cmd": [
+ "--pf=./pn.json",
+ "--aa"
+ ],
+ "HostConfig": {
+ "Binds": [
+ "/iiotedge:/appdata"
+ ]
+ }
+}
+```
+
+With these options specified, OPC Publisher will read the configuration file `./pn.json`. The OPC Publisher's working directory is set to
+`/appdata` at startup and thus OPC Publisher will read the file `/appdata/pn.json` inside its Docker container.
+OPC Publisher's log file will be written to `/appdata` and the `CertificateStores` directory (used for OPC UA certificates) will also be created in this directory. To make these files available in the IoT Edge host file system, the container configuration requires a bind mount volume. The `/iiotedge:/appdata` bind will map the directory `/appdata` to the host directory `/iiotedge` (which will be created by the IoT Edge runtime if it doesn't exist).
+**Without this bind mount volume, all OPC Publisher configuration files will be lost when the container is restarted.**
+
+A connection to an OPC UA server using its hostname without a DNS server configured on the network can be achieved by adding an `ExtraHosts` entry to the `HostConfig` section:
+
+```
+"HostConfig": {
+ "ExtraHosts": [
+ "opctestsvr:192.168.178.26"
+ ]
+}
+```
+
+## Next steps
+Now that you have deployed the OPC Publisher Edge module, the next step is to configure it:
+
+> [!div class="nextstepaction"]
+> [Configure the OPC Publisher](tutorial-publisher-configure-opc-publisher.md)
industrial-iot Tutorial Publisher Performance Memory Tuning Opc Publisher https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/industrial-iot/tutorial-publisher-performance-memory-tuning-opc-publisher.md
+
+ Title: Microsoft OPC Publisher Performance and Memory Tuning
+description: In this tutorial, you learn how to tune the performance and memory of the OPC Publisher.
+ Last updated : 3/22/2021
+# Tutorial: Tune the OPC Publisher performance and memory
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Adjust the performance
+> * Adjust the message flow to the memory resources
+
+When running OPC Publisher in production setups, network performance requirements (throughput and latency) and memory resources must be considered. OPC Publisher exposes the following command-line parameters to help meet these requirements:
+
+* Message queue capacity (`mq` for version 2.5 and below, not available in version 2.6, `om` for version 2.7)
+* IoT Hub send interval (`si`)
+* IoT Hub message size (`ms`)
+
+## Adjusting IoT Hub send interval and IoT Hub message size
+
+The `mq/om` parameter controls the upper limit of the capacity of the internal message queue. This queue buffers all messages before they are sent to IoT Hub. The default size of the queue is up to 2 MB for OPC Publisher version 2.5 and below and 4000 IoT Hub messages for version 2.7 (that is, if the setting for the IoT Hub message size is 256 KB, the size of the queue will be up to 1 GB). If OPC Publisher is not able to send messages to IoT Hub fast enough, the number of items in this queue increases. If this happens during test runs, one or both of the following can be done to mitigate:
+
+* decrease the IoT Hub send interval (`si`)
+
+* increase the IoT Hub message size (`ms`, the maximum this can be set to is 256 KB)
+
+If the queue keeps growing even though the `si` and `ms` parameters have been adjusted, eventually the maximum queue capacity will be reached and messages will be lost. This is because both the `si` and `ms` parameters have physical limits and the Internet connection between OPC Publisher and IoT Hub is not fast enough for the number of messages that must be sent in a given scenario. In that case, only setting up several parallel OPC Publishers will help. The `mq/om` parameter also has the biggest impact on the memory consumption of OPC Publisher.
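+
+For example, for version 2.5 the queue capacity and send parameters can be adjusted together on the command line; the flag spellings follow the command-line reference, and the values are illustrative rather than recommendations:
+
+```
+--mq=16384 --si=1 --ms=262144
+```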
+
+The `si` parameter forces OPC Publisher to send messages to IoT Hub at the specified interval. A message is sent either when the maximum IoT Hub message size of 256 KB of data is available (triggering the send interval to reset) or when the specified interval time has passed.
+
+The `ms` parameter enables batching of messages sent to IoT Hub. In most network setups, the latency of sending a single message to IoT Hub is high, compared to the time it takes to transmit the payload. This is mainly due to Quality of Service (QoS) requirements, since messages are acknowledged only once they have been processed by IoT Hub. Therefore, if a delay for the data to arrive at IoT Hub is acceptable, OPC Publisher should be configured to use the maximal message size of 256 KB by setting the `ms` parameter to 0. This is also the most cost-effective way to use OPC Publisher.
+
+The default configuration sends data to IoT Hub every 10 seconds (`si=10`) or when 256 KB of IoT Hub message data is available (`ms=0`). This adds a maximum delay of 10 seconds, but has low probability of losing data because of the large message size. The metric `monitored item notifications enqueue failure` in OPC Publisher version 2.5 and below and `messages lost` in OPC Publisher version 2.7 shows how many messages were lost.
+
+When both the `si` and `ms` parameters are set to 0, OPC Publisher sends a message to IoT Hub as soon as data is available. This results in an average IoT Hub message size of just over 200 bytes. The advantage of this configuration is that OPC Publisher sends the data from the connected asset without delay; however, the number of lost messages will be high for use cases where a large amount of data must be published, so this configuration is not recommended for such scenarios.
+
+To measure the performance of OPC Publisher, the `di` parameter can be used to print performance metrics to the trace log in the interval specified (in seconds).
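+
+For example, diagnostics output every 60 seconds could be enabled as part of the container create options; this is a sketch, and the interval value is illustrative:
+
+```
+"Cmd": [
+    "--pf=./pn.json",
+    "--di=60"
+]
+```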
+
+## Next steps
+Now that you have learned how to tune the performance and memory of the OPC Publisher, you can check out the OPC Publisher GitHub repository for further resources:
+
+> [!div class="nextstepaction"]
+> [OPC Publisher GitHub repository](https://github.com/Azure/Industrial-IoT)
industrial-iot Tutorial Visualize Data Time Series Insights https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/industrial-iot/tutorial-visualize-data-time-series-insights.md
+
+ Title: Visualize OPC UA data in Azure Time Series Insights
+description: In this tutorial, you learn how to visualize data with Time Series Insights.
+ Last updated : 3/22/2021
+# Tutorial: Visualize data with Time Series Insights (TSI)
+
+The OPC Publisher module connects to OPC UA servers and publishes data from these servers to IoT Hub. The Telemetry processor in the Industrial IoT platform processes these events and forwards contextualized samples to TSI and other consumers.
+
+This how-to guide shows you how to visualize and analyze the OPC UA telemetry using the Time Series Insights environment.
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Grant users access to the Time Series Insights data
+> * Visualize telemetry in the Time Series Insights explorer
+> * Define and apply a Time Series Model
+> * Connect Time Series Insights to Power BI
+
+## Prerequisites
+
+* Deploy the IIoT Platform to get a Time Series Insights Environment automatically created
+* Data is being published to IoT Hub
+
+## Time Series Insights explorer
+
+The Time Series Insights explorer is a web app you can use to visualize your telemetry. To retrieve the URL of the application, open the `.env` file saved as a result of the deployment, and open a browser to the URL in the `PCS_TSI_URL` variable.
+
+Before using the Time Series Insights explorer, you must grant the users who are entitled to visualize the data access to the TSI data. Note that on a fresh deployment no data access policies are set by default, so nobody can see the data. The data access policies need to be set in the Azure portal, in the Time Series Insights environment deployed in the IIoT platform's resource group, as follows:
+
+ ![Time Series Insights Explorer 1](media/tutorial-iiot-visualize-data-tsi/tutorial-time-series-insights-data-access-1.png)
+
+Select the Data Access Policies:
+
+ ![Time Series Insights Explorer 2](media/tutorial-iiot-visualize-data-tsi/tutorial-time-series-insights-data-access-2.png)
+
+Assign the required users:
+
+ ![Time Series Insights Explorer 3](media/tutorial-iiot-visualize-data-tsi/tutorial-time-series-insights-data-access-3.png)
+
+In the TSI Explorer, note the unassigned Time Series instances. A TSI instance corresponds to the time/value series for a specific data point originating from a published node in an OPC UA server. The TSI instance, respectively the OPC UA data point, is uniquely identified by the EndpointId, SubscriptionId, and NodeId. The TSI instance models are automatically detected and displayed in the explorer based on the telemetry data ingested from the event hub of the IIoT platform's telemetry processor.
+
+ ![Time Series Insights Explorer 4](media/tutorial-iiot-visualize-data-tsi/tutorial-time-series-insights-step-0.png)
+
+The telemetry data can be visualized in the chart by right-clicking the TSI instance and selecting the value. The time frame used in the chart can be adjusted from the upper-right corner, and the values of multiple instances can be visualized on the same time basis.
+
+For more information, see [Quickstart: Explore the Azure Time Series Insights Preview](https://docs.microsoft.com/azure/time-series-insights/time-series-insights-update-quickstart)
+
+## Define and apply a new Model
+
+Since the telemetry instances are initially just in raw format, they need to be contextualized with the appropriate model information.
+
+For detailed information on TSI models, see [Time Series Model in Azure Time Series Insights Preview](https://docs.microsoft.com/azure/time-series-insights/time-series-insights-update-tsm).
+
+1. Step 1 - In the Model tab of the explorer, define a new hierarchy for the ingested telemetry data. A hierarchy is a logical tree structure that enables the user to add the meta-information required for more intuitive navigation through the TSI instances. A user can create/delete/modify hierarchy templates that can later be instantiated for the various TSI instances.
+
+ ![Step 1](media/tutorial-iiot-visualize-data-tsi/tutorial-time-series-insights-step-1.png)
+
+2. Step 2 - Define a new type for the values. In our example, we only handle numeric data types.
+
+ ![Step 2](media/tutorial-iiot-visualize-data-tsi/tutorial-time-series-insights-step-2.png)
+
+3. Step 3 - Select the new TSI instance that needs to be categorized in the previously defined hierarchy.
+
+ ![Step 3](media/tutorial-iiot-visualize-data-tsi/tutorial-time-series-insights-step-3.png)
+
+4. Step 4 - Fill in the instance's properties - name, description, and data value - as well as the hierarchy fields, in order to match the logical structure.
+
+ ![Step 4](media/tutorial-iiot-visualize-data-tsi/tutorial-time-series-insights-step-4.png)
+
+5. Step 5 - Repeat steps 3 and 4 for all uncategorized TSI instances.
+
+ ![Step 5](media/tutorial-iiot-visualize-data-tsi/tutorial-time-series-insights-step-5.png)
+
+6. Step 6 - Back on the TSI Explorer's main page, walk through the categorized instances hierarchy and select the values of the data points to be analyzed.
+
+ ![Step6](media/tutorial-iiot-visualize-data-tsi/tutorial-time-series-insights-step-6.png)
+
+## Connect Time Series Insights to Power BI
+
+You can also connect the Time Series Insights environment to Power BI. For more information, see [How to connect TSI to Power BI](https://docs.microsoft.com/azure/time-series-insights/how-to-connect-power-bi) and [Visualize data from TSI in Power BI](https://docs.microsoft.com/azure/time-series-insights/concepts-power-bi).
+
+## Next steps
+Now that you have learned how to visualize data in TSI, you can check out the Industrial IoT GitHub repository:
+
+> [!div class="nextstepaction"]
+> [IIoT Platform GitHub repository](https://github.com/Azure/Industrial-IoT)
iot-accelerators Howto Opc Publisher Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-accelerators/howto-opc-publisher-configure.md
- Title: Configure OPC Publisher - Azure | Microsoft Docs
-description: This article describes how to configure OPC Publisher to specify OPC UA node data changes, OPC UA events to publish and also the telemetry format.
- Previously updated : 06/10/2019
-# Configure OPC Publisher
-
-> [!IMPORTANT]
-> While we update this article, see [Azure Industrial IoT](https://azure.github.io/Industrial-IoT/) for the most up to date content.
-
-You can configure OPC Publisher to specify:
-
-- The OPC UA node data changes to publish.
-- The OPC UA events to publish.
-- The telemetry format.
-
-You can configure OPC Publisher using configuration files or using method calls.
-
-## Use configuration files
-
-This section describes two options for configuring OPC UA node publishing with configuration files.
-
-### Use a configuration file to configure publishing data changes
-
-The easiest way to configure the OPC UA nodes to publish is with a configuration file. The configuration file format is documented in [publishednodes.json](https://github.com/Azure/iot-edge-opc-publisher/blob/master/opcpublisher/publishednodes.json) in the repository.
-
-Configuration file syntax has changed over time. OPC Publisher still reads old formats, but converts them into the latest format when it persists the configuration.
-
-The following example shows the format of the configuration file:
-
-```json
-[
- {
- "EndpointUrl": "opc.tcp://testserver:62541/Quickstarts/ReferenceServer",
- "UseSecurity": true,
- "OpcNodes": [
- {
- "Id": "i=2258",
- "OpcSamplingInterval": 2000,
- "OpcPublishingInterval": 5000,
- "DisplayName": "Current time"
- }
- ]
- }
-]
-```
-
-### Use a configuration file to configure publishing events
-
-To publish OPC UA events, you use the same configuration file as for data changes.
-
-The following example shows how to configure publishing for events generated by the [SimpleEvents server](https://github.com/OPCFoundation/UA-.NETStandard-Samples/tree/master/Workshop/SimpleEvents/Server), which can be found in the [OPC Foundation repository](https://github.com/OPCFoundation/UA-.NETStandard-Samples):
-
-```json
-[
- {
- "EndpointUrl": "opc.tcp://testserver:62563/Quickstarts/SimpleEventsServer",
- "OpcEvents": [
- {
- "Id": "i=2253",
- "DisplayName": "SimpleEventServerEvents",
- "SelectClauses": [
- {
- "TypeId": "i=2041",
- "BrowsePaths": [
- "EventId"
- ]
- },
- {
- "TypeId": "i=2041",
- "BrowsePaths": [
- "Message"
- ]
- },
- {
- "TypeId": "nsu=http://opcfoundation.org/Quickstarts/SimpleEvents;i=235",
- "BrowsePaths": [
- "/2:CycleId"
- ]
- },
- {
- "TypeId": "nsu=http://opcfoundation.org/Quickstarts/SimpleEvents;i=235",
- "BrowsePaths": [
- "/2:CurrentStep"
- ]
- }
- ],
- "WhereClause": [
- {
- "Operator": "OfType",
- "Operands": [
- {
- "Literal": "ns=2;i=235"
- }
- ]
- }
- ]
- }
- ]
- }
-]
-```
-
-## Use method calls
-
-This section describes the method calls you can use to configure OPC Publisher.
-
-### Configure using OPC UA method calls
-
-OPC Publisher includes an OPC UA Server, which can be accessed on port 62222. If the hostname is **publisher**, then the endpoint URI is: `opc.tcp://publisher:62222/UA/Publisher`.
-
-This endpoint exposes the following four methods:
-
-- PublishNode
-- UnpublishNode
-- GetPublishedNodes
-- IoTHubDirectMethod
-
-### Configure using IoT Hub direct method calls
-
-OPC Publisher implements the following IoT Hub direct method calls:
-
-- PublishNodes
-- UnpublishNodes
-- UnpublishAllNodes
-- GetConfiguredEndpoints
-- GetConfiguredNodesOnEndpoint
-- GetDiagnosticInfo
-- GetDiagnosticLog
-- GetDiagnosticStartupLog
-- ExitApplication
-- GetInfo
-
-The format of the JSON payload of the method request and responses are defined in [opcpublisher/HubMethodModel.cs](https://github.com/Azure/iot-edge-opc-publisher/tree/master/opcpublisher).
-
-If you call an unknown method on the module, it responds with a string that says the method isn't implemented. You can call an unknown method as a way to ping the module.
-
-### Configure username and password for authentication
-
-The authentication mode can be set through an IoT Hub direct method calls. The payload must contain the property **OpcAuthenticationMode** and the username and password:
-
-```csharp
-{
- "EndpointUrl": "<Url of the endpoint to set authentication settings>",
- "OpcAuthenticationMode": "UsernamePassword",
- "Username": "<Username>",
- "Password": "<Password>"
- ...
-}
-```
-
-The password is encrypted by the IoT Hub Workload Client and stored in the publisher's configuration. To change authentication back to anonymous, use the method with the following payload:
-
-```csharp
-{
- "EndpointUrl": "<Url of the endpoint to set authentication settings>",
- "OpcAuthenticationMode": "Anonymous"
- ...
-}
-```
-
-If the **OpcAuthenticationMode** property isn't set in the payload, the authentication settings remain unchanged in the configuration.
-
-## Configure telemetry publishing
-
-When OPC Publisher receives a notification of a value change in a published node, it generates a JSON formatted message that's sent to IoT Hub.
-
-You can configure the content of this JSON formatted message using a configuration file. If no configuration file is specified with the `--tc` option, a default configuration is used that's compatible with the [Connected factory solution accelerator](https://github.com/Azure/azure-iot-connected-factory).
-
-If OPC Publisher is configured to batch messages, then they're sent as a valid JSON array.
-
-The telemetry is derived from the following sources:
-
-- The OPC Publisher node configuration for the node.
-- The **MonitoredItem** object of the OPC UA stack for which OPC Publisher got a notification.
-- The argument passed to this notification, which provides details on the data value change.
-
-The telemetry that's put into the JSON formatted message is a selection of important properties of these objects. If you need more properties, you need to change the OPC Publisher code base.
-
-The syntax of the configuration file is as follows:
-
-```json
-// The configuration settings file consists of two objects:
-// 1) The 'Defaults' object, which defines defaults for the telemetry configuration
-// 2) An array 'EndpointSpecific' of endpoint specific configuration
-// Both objects are optional and if they are not specified, then publisher uses
-// its internal default configuration, which generates telemetry messages compatible
-// with the Microsoft Connected factory Preconfigured Solution (https://github.com/Azure/azure-iot-connected-factory).
-
-// A JSON telemetry message for Connected factory looks like:
-// {
-// "NodeId": "i=2058",
-// "ApplicationUri": "urn:myopcserver",
-// "DisplayName": "CurrentTime",
-// "Value": {
-// "Value": "10.11.2017 14:03:17",
-// "SourceTimestamp": "2017-11-10T14:03:17Z"
-// }
-// }
-
-// The 'Defaults' object in the sample below, are similar to what publisher is
-// using as its internal default telemetry configuration.
-{
- "Defaults": {
- // The first two properties ('EndpointUrl' and 'NodeId' are configuring data
- // taken from the OpcPublisher node configuration.
- "EndpointUrl": {
-
- // The following three properties can be used to configure the 'EndpointUrl'
- // property in the JSON message send by publisher to IoT Hub.
-
- // Publish controls if the property should be part of the JSON message at all.
- "Publish": false,
-
- // Pattern is a regular expression, which is applied to the actual value of the
- // property (here 'EndpointUrl').
- // If this key is omitted (which is the default), then no regex matching is done
- // at all, which improves performance.
- // If the key is used you need to define groups in the regular expression.
- // Publisher applies the regular expression and then concatenates all groups
- // found and use the resulting string as the value in the JSON message to
- //sent to IoT Hub.
- // This example mimics the default behaviour and defines a group,
- // which matches the complete value:
- "Pattern": "(.*)",
- // Here are some more examples for 'Pattern' values and the generated result:
- // "Pattern": "i=(.*)"
- // defined for Defaults.NodeId.Pattern, will generate for the above sample
- // a 'NodeId' value of '2058'to be sent by publisher
- // "Pattern": "(i)=(.*)"
- // defined for Defaults.NodeId.Pattern, will generate for the above sample
- // a 'NodeId' value of 'i2058' to be sent by publisher
-
- // Name allows you to use a shorter string as property name in the JSON message
- // sent by publisher. By default the property name is unchanged and will be
- // here 'EndpointUrl'.
- // The 'Name' property can only be set in the 'Defaults' object to ensure
- // all messages from publisher sent to IoT Hub have a similar layout.
- "Name": "EndpointUrl"
-
- },
- "NodeId": {
- "Publish": true,
-
- // If you set Defaults.NodeId.Name to "ni", then the "NodeId" key/value pair
- // (from the above example) will change to:
- // "ni": "i=2058",
- "Name": "NodeId"
- },
-
- // The MonitoredItem object is configuring the data taken from the MonitoredItem
- // OPC UA object for published nodes.
- "MonitoredItem": {
-
- // If you set the Defaults.MonitoredItem.Flat to 'false', then a
- // 'MonitoredItem' object will appear, which contains 'ApplicationUri'
- // and 'DisplayName' properties:
- // "NodeId": "i=2058",
- // "MonitoredItem": {
- // "ApplicationUri": "urn:myopcserver",
- // "DisplayName": "CurrentTime",
- // }
- // The 'Flat' property can only be used in the 'MonitoredItem' and
- // 'Value' objects of the 'Defaults' object and will be used
- // for all JSON messages sent by publisher.
- "Flat": true,
-
- "ApplicationUri": {
- "Publish": true,
- "Name": "ApplicationUri"
- },
- "DisplayName": {
- "Publish": true,
- "Name": "DisplayName"
- }
- },
- // The Value object is configuring the properties taken from the event object
- // the OPC UA stack provided in the value change notification event.
- "Value": {
- // If you set the Defaults.Value.Flat to 'true', then the 'Value'
- // object will disappear completely and the 'Value' and 'SourceTimestamp'
- // members won't be nested:
- // "DisplayName": "CurrentTime",
- // "Value": "10.11.2017 14:03:17",
- // "SourceTimestamp": "2017-11-10T14:03:17Z"
- // The 'Flat' property can only be used for the 'MonitoredItem' and 'Value'
- // objects of the 'Defaults' object and will be used for all
- // messages sent by publisher.
- "Flat": false,
-
- "Value": {
- "Publish": true,
- "Name": "Value"
- },
- "SourceTimestamp": {
- "Publish": true,
- "Name": "SourceTimestamp"
- },
- // 'StatusCode' is the 32 bit OPC UA status code
- "StatusCode": {
- "Publish": false,
- "Name": "StatusCode"
- // 'Pattern' is ignored for the 'StatusCode' value
- },
- // 'Status' is the symbolic name of 'StatusCode'
- "Status": {
- "Publish": false,
- "Name": "Status"
- }
- }
- },
-
- // The next object allows to configure 'Publish' and 'Pattern' for specific
- // endpoint URLs. Those will overwrite the ones specified in the 'Defaults' object
- // or the defaults used by publisher.
- // It is not allowed to specify 'Name' and 'Flat' properties in this object.
- "EndpointSpecific": [
- // The following shows how a endpoint specific configuration can look like:
- {
- // 'ForEndpointUrl' allows to configure for which OPC UA server this
- // object applies and is a required property for all objects in the
- // 'EndpointSpecific' array.
- // The value of 'ForEndpointUrl' must be an 'EndpointUrl' configured in
- // the publishednodes.json configuration file.
- "ForEndpointUrl": "opc.tcp://<your_opcua_server>:<your_opcua_server_port>/<your_opcua_server_path>",
- "EndpointUrl": {
- // We overwrite the default behaviour and publish the
- // endpoint URL in this case.
- "Publish": true,
- // We are only interested in the URL part following the 'opc.tcp://' prefix
- // and define a group matching this.
- "Pattern": "opc.tcp://(.*)"
- },
- "NodeId": {
- // We are not interested in the configured 'NodeId' value,
- // so we do not publish it.
- "Publish": false
- // No 'Pattern' key is specified here, so the 'NodeId' value will be
- // taken as specified in the publishednodes configuration file.
- },
- "MonitoredItem": {
- "ApplicationUri": {
- // We already publish the endpoint URL, so we do not want
- // the ApplicationUri of the MonitoredItem to be published.
- "Publish": false
- },
- "DisplayName": {
- "Publish": true
- }
- },
- "Value": {
- "Value": {
- // The value of the node is important for us, everything else we
- // are not interested in to keep the data ingest as small as possible.
- "Publish": true
- },
- "SourceTimestamp": {
- "Publish": false
- },
- "StatusCode": {
- "Publish": false
- },
- "Status": {
- "Publish": false
- }
- }
- }
- ]
-}
-```
-
-## Next steps
-
-Now you've learned how to configure OPC Publisher, the suggested next step is to learn how to [Run OPC Publisher](howto-opc-publisher-run.md).
iot-accelerators Howto Opc Publisher Run https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-accelerators/howto-opc-publisher-run.md
- Title: Run OPC Publisher - Azure | Microsoft Docs
-description: This article describes how to run and debug OPC Publisher. It also addresses performance and memory considerations.
- Previously updated : 06/10/2019
-# Run OPC Publisher
-
-> [!IMPORTANT]
-> While we update this article, see [Azure Industrial IoT](https://azure.github.io/Industrial-IoT/) for the most up to date content.
-
-This article describes how to run and debug OPC Publisher. It also addresses performance and memory considerations.
-
-## Command-line options
-
-Application usage is shown using the `--help` command-line option as follows:
-
-```sh/cmd
-Current directory is: /appdata
-Log file is: <hostname>-publisher.log
-Log level is: info
-
-OPC Publisher V2.3.0
-Informational version: V2.3.0+Branch.develop_hans_methodlog.Sha.0985e54f01a0b0d7f143b1248936022ea5d749f9
-
-Usage: opcpublisher.exe <applicationname> [<IoT Hubconnectionstring>] [<options>]
-
-OPC Edge Publisher to subscribe to configured OPC UA servers and send telemetry to Azure IoT Hub.
-To exit the application, just press CTRL-C while it is running.
-
-applicationname: the OPC UA application name to use, required
- The application name is also used to register the publisher under this name in the
- IoT Hub device registry.
-
-IoT Hubconnectionstring: the IoT Hub owner connectionstring, optional
-
-There are a couple of environment variables which can be used to control the application:
-_HUB_CS: sets the IoT Hub owner connectionstring
-_GW_LOGP: sets the filename of the log file to use
-_TPC_SP: sets the path to store certificates of trusted stations
-_GW_PNFP: sets the filename of the publishing configuration file
-
-Command line arguments overrule environment variable settings.
-
-Options:
- --pf, --publishfile=VALUE
- the filename to configure the nodes to publish.
- Default: '/appdata/publishednodes.json'
- --tc, --telemetryconfigfile=VALUE
- the filename to configure the ingested telemetry
- Default: ''
- -s, --site=VALUE the site OPC Publisher is working in. if specified
- this domain is appended (delimited by a ':' to
- the 'ApplicationURI' property when telemetry is
- sent to IoT Hub.
- The value must follow the syntactical rules of a
- DNS hostname.
- Default: not set
- --ic, --iotcentral publisher will send OPC UA data in IoTCentral
- compatible format (DisplayName of a node is used
- as key, this key is the Field name in IoTCentral)
- . you need to ensure that all DisplayName's are
- unique. (Auto enables fetch display name)
- Default: False
- --sw, --sessionconnectwait=VALUE
- specify the wait time in seconds publisher is
- trying to connect to disconnected endpoints and
- starts monitoring unmonitored items
- Min: 10
- Default: 10
- --mq, --monitoreditemqueuecapacity=VALUE
- specify how many notifications of monitored items
- can be stored in the internal queue, if the data
- can not be sent quick enough to IoT Hub
- Min: 1024
- Default: 8192
- --di, --diagnosticsinterval=VALUE
- shows publisher diagnostic info at the specified
- interval in seconds (need log level info).
- -1 disables remote diagnostic log and diagnostic
- output
- 0 disables diagnostic output
- Default: 0
- --ns, --noshutdown=VALUE
- same as runforever.
- Default: False
- --rf, --runforever publisher can not be stopped by pressing a key on
- the console, but will run forever.
- Default: False
- --lf, --logfile=VALUE the filename of the logfile to use.
- Default: './<hostname>-publisher.log'
- --lt, --logflushtimespan=VALUE
- the timespan in seconds when the logfile should be
- flushed.
- Default: 00:00:30 sec
- --ll, --loglevel=VALUE the loglevel to use (allowed: fatal, error, warn,
- info, debug, verbose).
- Default: info
- --ih, --IoT Hubprotocol=VALUE
- the protocol to use for communication with IoT Hub (
- allowed values: Amqp, Http1, Amqp_WebSocket_Only,
- Amqp_Tcp_Only, Mqtt, Mqtt_WebSocket_Only, Mqtt_
- Tcp_Only) or IoT EdgeHub (allowed values: Mqtt_
- Tcp_Only, Amqp_Tcp_Only).
- Default for IoT Hub: Mqtt_WebSocket_Only
- Default for IoT EdgeHub: Amqp_Tcp_Only
- --ms, --IoT Hubmessagesize=VALUE
- the max size of a message which can be send to
- IoT Hub. when telemetry of this size is available
- it will be sent.
- 0 will enforce immediate send when telemetry is
- available
- Min: 0
- Max: 262144
- Default: 262144
- --si, --IoT Hubsendinterval=VALUE
- the interval in seconds when telemetry should be
- send to IoT Hub. If 0, then only the
- IoT Hubmessagesize parameter controls when
- telemetry is sent.
- Default: '10'
- --dc, --deviceconnectionstring=VALUE
- if publisher is not able to register itself with
- IoT Hub, you can create a device with name <
- applicationname> manually and pass in the
- connectionstring of this device.
- Default: none
- -c, --connectionstring=VALUE
- the IoT Hub owner connectionstring.
- Default: none
- --hb, --heartbeatinterval=VALUE
- the publisher is using this as default value in
- seconds for the heartbeat interval setting of
- nodes without
- a heartbeat interval setting.
- Default: 0
- --sf, --skipfirstevent=VALUE
- the publisher is using this as default value for
- the skip first event setting of nodes without
- a skip first event setting.
- Default: False
- --pn, --portnum=VALUE the server port of the publisher OPC server
- endpoint.
- Default: 62222
- --pa, --path=VALUE the enpoint URL path part of the publisher OPC
- server endpoint.
- Default: '/UA/Publisher'
- --lr, --ldsreginterval=VALUE
- the LDS(-ME) registration interval in ms. If 0,
- then the registration is disabled.
- Default: 0
- --ol, --opcmaxstringlen=VALUE
- the max length of a string opc can transmit/
- receive.
- Default: 131072
- --ot, --operationtimeout=VALUE
- the operation timeout of the publisher OPC UA
- client in ms.
- Default: 120000
- --oi, --opcsamplinginterval=VALUE
- the publisher is using this as default value in
- milliseconds to request the servers to sample
- the nodes with this interval
- this value might be revised by the OPC UA
- servers to a supported sampling interval.
- please check the OPC UA specification for
- details how this is handled by the OPC UA stack.
- a negative value will set the sampling interval
- to the publishing interval of the subscription
- this node is on.
- 0 will configure the OPC UA server to sample in
- the highest possible resolution and should be
- taken with care.
- Default: 1000
- --op, --opcpublishinginterval=VALUE
- the publisher is using this as default value in
- milliseconds for the publishing interval setting
- of the subscriptions established to the OPC UA
- servers.
- please check the OPC UA specification for
- details how this is handled by the OPC UA stack.
- a value less than or equal zero will let the
- server revise the publishing interval.
- Default: 0
- --ct, --createsessiontimeout=VALUE
- specify the timeout in seconds used when creating
- a session to an endpoint. On unsuccessful
- connection attemps a backoff up to 5 times the
- specified timeout value is used.
- Min: 1
- Default: 10
- --ki, --keepaliveinterval=VALUE
- specify the interval in seconds the publisher is
- sending keep alive messages to the OPC servers
- on the endpoints it is connected to.
- Min: 2
- Default: 2
- --kt, --keepalivethreshold=VALUE
- specify the number of keep alive packets a server
- can miss, before the session is disconneced
- Min: 1
- Default: 5
- --aa, --autoaccept the publisher trusts all servers it is
- establishing a connection to.
- Default: False
- --tm, --trustmyself=VALUE
- same as trustowncert.
- Default: False
- --to, --trustowncert the publisher certificate is put into the trusted
- certificate store automatically.
- Default: False
- --fd, --fetchdisplayname=VALUE
- same as fetchname.
- Default: False
- --fn, --fetchname enable to read the display name of a published
- node from the server. this will increase the
- runtime.
- Default: False
- --ss, --suppressedopcstatuscodes=VALUE
- specifies the OPC UA status codes for which no
- events should be generated.
- Default: BadNoCommunication,
- BadWaitingForInitialData
- --at, --appcertstoretype=VALUE
- the own application cert store type.
- (allowed values: Directory, X509Store)
- Default: 'Directory'
- --ap, --appcertstorepath=VALUE
- the path where the own application cert should be
- stored
- Default (depends on store type):
- X509Store: 'CurrentUser\UA_MachineDefault'
- Directory: 'pki/own'
- --tp, --trustedcertstorepath=VALUE
- the path of the trusted cert store
- Default: 'pki/trusted'
- --rp, --rejectedcertstorepath=VALUE
- the path of the rejected cert store
- Default 'pki/rejected'
- --ip, --issuercertstorepath=VALUE
- the path of the trusted issuer cert store
- Default 'pki/issuer'
- --csr show data to create a certificate signing request
- Default 'False'
- --ab, --applicationcertbase64=VALUE
- update/set this applications certificate with the
- certificate passed in as bas64 string
- --af, --applicationcertfile=VALUE
- update/set this applications certificate with the
- certificate file specified
- --pb, --privatekeybase64=VALUE
- initial provisioning of the application
- certificate (with a PEM or PFX fomat) requires a
- private key passed in as base64 string
- --pk, --privatekeyfile=VALUE
- initial provisioning of the application
- certificate (with a PEM or PFX fomat) requires a
- private key passed in as file
- --cp, --certpassword=VALUE
- the optional password for the PEM or PFX or the
- installed application certificate
- --tb, --addtrustedcertbase64=VALUE
- adds the certificate to the applications trusted
- cert store passed in as base64 string (multiple
- strings supported)
- --tf, --addtrustedcertfile=VALUE
- adds the certificate file(s) to the applications
- trusted cert store passed in as base64 string (
- multiple filenames supported)
- --ib, --addissuercertbase64=VALUE
- adds the specified issuer certificate to the
- applications trusted issuer cert store passed in
- as base64 string (multiple strings supported)
- --if, --addissuercertfile=VALUE
- adds the specified issuer certificate file(s) to
- the applications trusted issuer cert store (
- multiple filenames supported)
- --rb, --updatecrlbase64=VALUE
- update the CRL passed in as base64 string to the
- corresponding cert store (trusted or trusted
- issuer)
- --uc, --updatecrlfile=VALUE
- update the CRL passed in as file to the
- corresponding cert store (trusted or trusted
- issuer)
- --rc, --removecert=VALUE
- remove cert(s) with the given thumbprint(s) (
- multiple thumbprints supported)
- --dt, --devicecertstoretype=VALUE
- the IoT Hub device cert store type.
- (allowed values: Directory, X509Store)
- Default: X509Store
- --dp, --devicecertstorepath=VALUE
- the path of the iot device cert store
- Default Default (depends on store type):
- X509Store: 'My'
- Directory: 'CertificateStores/IoT Hub'
- -i, --install register OPC Publisher with IoT Hub and then exits.
- Default: False
- -h, --help show this message and exit
- --st, --opcstacktracemask=VALUE
- ignored, only supported for backward comaptibility.
- --sd, --shopfloordomain=VALUE
- same as site option, only there for backward
- compatibility
- The value must follow the syntactical rules of a
- DNS hostname.
- Default: not set
- --vc, --verboseconsole=VALUE
-                              ignored, only supported for backward compatibility.
- --as, --autotrustservercerts=VALUE
- same as autoaccept, only supported for backward
-                              compatibility.
- Default: False
- --tt, --trustedcertstoretype=VALUE
- ignored, only supported for backward compatibility.
- the trusted cert store will always reside in a
- directory.
- --rt, --rejectedcertstoretype=VALUE
- ignored, only supported for backward compatibility.
- the rejected cert store will always reside in a
- directory.
- --it, --issuercertstoretype=VALUE
- ignored, only supported for backward compatibility.
- the trusted issuer cert store will always
- reside in a directory.
-```
-
-Typically, you specify the IoT Hub owner connection string only on the first run of the application. The connection string is encrypted and stored in the platform certificate store. On later runs, the application reads the connection string from the certificate store. If you specify the connection string on each run, the device that's created for the application in the IoT Hub device registry is removed and recreated.
-
-## Run natively on Windows
-
-Open the **opcpublisher.sln** project with Visual Studio, build the solution, and publish it. You can start the application in the **Target directory** you published to as follows:
-
-```cmd
-dotnet opcpublisher.dll <applicationname> [<iothubconnectionstring>] [options]
-```
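-
-For example, a first run that provisions the connection string might look like the following sketch; the application name and hub values are placeholders:
-
-```cmd
-REM First run only: pass the IoT Hub owner connection string, which is then encrypted and cached in the certificate store
-dotnet opcpublisher.dll publisher "HostName=<myhub>.azure-devices.net;SharedAccessKeyName=iothubowner;SharedAccessKey=<key>"
-```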
-
-## Use a self-built container
-
-Build your own container and start it as follows:
-
-```sh
-docker run <your-container-name> <applicationname> [<iothubconnectionstring>] [options]
-```
-
-## Use a container from Microsoft Container Registry
-
-There's a prebuilt container available in the Microsoft Container Registry. Start it as follows:
-
-```sh
-docker run mcr.microsoft.com/iotedge/opc-publisher <applicationname> [<iothubconnectionstring>] [options]
-```
-
-Check [Docker Hub](https://hub.docker.com/_/microsoft-iotedge-opc-publisher) to see the supported operating systems and processor architectures. If your OS and CPU architecture are supported, Docker automatically selects the correct container.
-
-## Run as an Azure IoT Edge module
-
-OPC Publisher is ready to be used as an [Azure IoT Edge](../iot-edge/index.yml) module. When you use OPC Publisher as an IoT Edge module, the only supported transport protocols are **Amqp_Tcp_Only** and **Mqtt_Tcp_Only**.
-
-To add OPC Publisher as module to your IoT Edge deployment, go to your IoT Hub settings in the Azure portal and complete the following steps:
-
-1. Go to **IoT Edge** and create or select your IoT Edge device.
-1. Select **Set Modules**.
-1. Select **Add** under **Deployment Modules** and then **IoT Edge Module**.
-1. In the **Name** field, enter **publisher**.
-1. In the **Image URI** field, enter `mcr.microsoft.com/iotedge/opc-publisher:<tag>`. You can find the available tags on [Docker Hub](https://hub.docker.com/_/microsoft-iotedge-opc-publisher).
-1. Paste the following JSON into the **Container Create Options** field:
-
- ```json
- {
- "Hostname": "publisher",
- "Cmd": [
- "--aa"
- ]
- }
- ```
-
-   This configuration tells IoT Edge to start a container named **publisher** using the OPC Publisher image. The container's hostname is set to **publisher**. OPC Publisher is called with the command-line argument `--aa`, which makes it trust the certificates of the OPC UA servers it connects to. You can use any other OPC Publisher command-line options; the only limitation is the size of the **Container Create Options** supported by IoT Edge.
-
-1. Leave the other settings unchanged and select **Save**.
-1. If you want to process the output of the OPC Publisher locally with another IoT Edge module, go back to the **Set Modules** page. Then go to the **Specify Routes** tab, and add a new route that looks like the following JSON:
-
- ```json
- {
- "routes": {
-       "processingModuleToIoTHub": "FROM /messages/modules/processingModule/outputs/* INTO $upstream",
- "opcPublisherToProcessingModule": "FROM /messages/modules/publisher INTO BrokeredEndpoint(\"/modules/processingModule/inputs/input1\")"
- }
- }
- ```
-
-1. Back in the **Set Modules** page, select **Next**, until you reach the last page of the configuration.
-1. Select **Submit** to send your configuration to IoT Edge.
-1. When you've started IoT Edge on your edge device and the docker container **publisher** is running, you can check the log output of OPC Publisher either by using `docker logs -f publisher` or by checking the log file. With the bind mount shown in the next section, the log file is `d:\iiotedge\publisher-publisher.log` on the host. You can also use the [iot-edge-opc-publisher-diagnostics tool](https://github.com/Azure-Samples/iot-edge-opc-publisher-diagnostics).
-
-### Make the configuration files accessible on the host
-
-To make the IoT Edge module configuration files accessible in the host file system, use the following **Container Create Options**. The following example is of a deployment using Linux Containers for Windows:
-
-```json
-{
- "Hostname": "publisher",
- "Cmd": [
- "--pf=./pn.json",
- "--aa"
- ],
- "HostConfig": {
- "Binds": [
- "d:/iiotedge:/appdata"
- ]
- }
-}
-```
-
-With these options, OPC Publisher reads the nodes it should publish from the file `./pn.json` and the container's working directory is set to `/appdata` at startup. With these settings, OPC Publisher reads the file `/appdata/pn.json` from the container to get its configuration. Without the `--pf` option, OPC Publisher tries to read the default configuration file `./publishednodes.json`.
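-
-For illustration, a minimal `pn.json` might look like the following sketch; the endpoint URL and node ID are placeholders, and the exact schema may differ between OPC Publisher versions:
-
-```json
-[
-  {
-    "EndpointUrl": "opc.tcp://myplc:50000",
-    "UseSecurity": true,
-    "OpcNodes": [
-      {
-        "Id": "ns=2;i=1001",
-        "OpcSamplingInterval": 1000,
-        "OpcPublishingInterval": 5000
-      }
-    ]
-  }
-]
-```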
-
-The log file, using the default name `publisher-publisher.log`, is written to `/appdata` and the `CertificateStores` directory is also created in this directory.
-
-To make all these files available in the host file system, the container configuration requires a bind mount volume. The `d:/iiotedge:/appdata` bind maps the directory `/appdata`, which is the current working directory on container startup, to the host directory `d:/iiotedge`. Without this option, no file data is persisted when the container next starts.
-
-If you're running Windows containers, the syntax of the `Binds` parameter is different. At container startup, the working directory is `c:\appdata`. To put the configuration file in the directory `d:\iiotedge` on the host, specify the following mapping in the `HostConfig` section:
-
-```json
-"HostConfig": {
- "Binds": [
- "d:/iiotedge:c:/appdata"
- ]
-}
-```
-
-If you're running Linux containers on Linux, the syntax of the `Binds` parameter is again different. At container startup, the working directory is `/appdata`. To put the configuration file in the directory `/iiotedge` on the host, specify the following mapping in the `HostConfig` section:
-
-```json
-"HostConfig": {
- "Binds": [
- "/iiotedge:/appdata"
- ]
-}
-```
-
-## Considerations when using a container
-
-The following sections list some things to keep in mind when you use a container:
-
-### Access to the OPC Publisher OPC UA server
-
-By default, the OPC Publisher OPC UA server listens on port 62222. To expose this inbound port in a container, use the following command:
-
-```sh
-docker run -p 62222:62222 mcr.microsoft.com/iotedge/opc-publisher <applicationname> [<iothubconnectionstring>] [options]
-```
-
-### Enable intercontainer name resolution
-
-To enable name resolution from within the container to other containers, create a user-defined Docker bridge network, and connect the container to this network using the `--network` option. Also assign the container a name using the `--name` option, as follows:
-
-```sh
-docker network create -d bridge iot_edge
-docker run --network iot_edge --name publisher mcr.microsoft.com/iotedge/opc-publisher <applicationname> [<iothubconnectionstring>] [options]
-```
-
-Other containers on the same network can now reach the container by the name `publisher`.
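-
-To verify name resolution, you can try to reach the publisher from a second container on the same network; a quick check, assuming a standard `alpine` image:
-
-```sh
-# Resolve and ping the publisher container by name from the same bridge network
-docker run --rm --network iot_edge alpine ping -c 1 publisher
-```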
-
-### Access other systems from within the container
-
-Other containers can be reached using the parameters described in the previous section. If the operating system on which Docker is hosted is DNS-enabled, you can access all systems that are known to DNS.
-
-In networks that use NetBIOS name resolution, enable access to other systems by starting your container with the `--add-host` option. This option effectively adds an entry to the container's hosts file:
-
-```sh
-docker run --add-host mydevbox:192.168.178.23 mcr.microsoft.com/iotedge/opc-publisher <applicationname> [<iothubconnectionstring>] [options]
-```
-
-### Assign a hostname
-
-OPC Publisher uses the hostname of the machine it's running on for certificate and endpoint generation. Docker chooses a random hostname if one isn't set by the `-h` option. The following example shows how to set the internal hostname of the container to `publisher`:
-
-```sh
-docker run -h publisher mcr.microsoft.com/iotedge/opc-publisher <applicationname> [<iothubconnectionstring>] [options]
-```
-
-### Use bind mounts (shared filesystem)
-
-Instead of using the container file system, you can use the host file system to store configuration information and log files. To configure this option, use the `-v` option of `docker run` in bind mount mode, as in the sketch that follows.
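-
-For example, the following sketch persists configuration and log files under `/iiotedge` on a Linux host; the host directory is an assumption:
-
-```sh
-# Bind-mount the host directory /iiotedge to the container's working directory /appdata
-docker run -v /iiotedge:/appdata mcr.microsoft.com/iotedge/opc-publisher <applicationname> [<iothubconnectionstring>] [options]
-```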
-
-## OPC UA X.509 certificates
-
-OPC UA uses X.509 certificates to authenticate the OPC UA client and server when they establish a connection, and to encrypt the communication between them. OPC Publisher uses certificate stores maintained by the OPC UA stack to manage all certificates. On startup, OPC Publisher checks if there's a certificate for itself. If there's no certificate in the certificate store and none is passed in on the command line, OPC Publisher creates a self-signed certificate. For more information, see the **InitApplicationSecurityAsync** method in `OpcApplicationConfigurationSecurity.cs`.
-
-Self-signed certificates don't provide any security, as they're not signed by a trusted CA.
-
-OPC Publisher provides command-line options to:
-- Retrieve CSR information of the current application certificate used by OPC Publisher.
-- Provision OPC Publisher with a CA signed certificate.
-- Provision OPC Publisher with a new key pair and matching CA signed certificate.
-- Add certificates to a trusted peer or trusted issuer certificate store.
-- Add a CRL.
-- Remove a certificate from the trusted peer or trusted issuer certificate store.
-
-All these options let you pass in parameters using files or base64 encoded strings.
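-
-For example, provisioning a CA signed certificate and its private key from files might look like the following sketch, using the `--af`, `--pk`, and `--cp` options from the help text above; the file names are placeholders:
-
-```sh
-# Set the application certificate from a file and provide the matching private key and its password
-docker run -v /iiotedge:/appdata mcr.microsoft.com/iotedge/opc-publisher <applicationname> [<iothubconnectionstring>] --af /appdata/publisher-cert.pem --pk /appdata/publisher-key.pem --cp <password>
-```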
-
-The default store type for all certificate stores is the file system, which you can change using command-line options. Because the container doesn't provide persistent storage in its file system, you must choose a different store type. Use the Docker `-v` option to persist the certificate stores in the host file system or on a Docker volume. If you use a Docker volume, you can pass in certificates using base64 encoded strings.
-
-The runtime environment affects how certificates are persisted. Avoid creating new certificate stores each time you run the application:
-- When running natively on Windows, you can't use an application certificate store of type `Directory`, because access to the private key fails. In this case, use the option `--at X509Store`.
-- When running as a Linux Docker container, you can map the certificate stores to the host file system with the docker run option `-v <hostdirectory>:/appdata`. This option makes the certificates persistent across application runs.
-- When running as a Linux Docker container, if you want to use an X509 store for the application certificate, use the docker run option `-v x509certstores:/root/.dotnet/corefx/cryptography/x509stores` and the application option `--at X509Store`, as shown in the sketch after this list.
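-
-A sketch that combines the last option with the prebuilt container; the volume name `x509certstores` follows the list item above:
-
-```sh
-# Persist the X509 application certificate store on a named Docker volume
-docker run -v x509certstores:/root/.dotnet/corefx/cryptography/x509stores mcr.microsoft.com/iotedge/opc-publisher <applicationname> [<iothubconnectionstring>] --at X509Store
-```
-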
-## Performance and memory considerations
-
-This section discusses options for managing memory and performance:
-
-### Command-line parameters to control performance and memory
-
-When you run OPC Publisher, you need to be aware of your performance requirements and the memory resources available on your host.
-
-Memory and performance are interdependent, and both depend on how many nodes you configure to publish. Ensure that the following parameters meet your requirements:
-- IoT Hub send interval: `--si`
-- IoT Hub message size (default `1`): `--ms`
-- Monitored items queue capacity: `--mq`
-
-The `--mq` parameter controls the upper bound of the capacity of the internal queue, which buffers all OPC node value change notifications. If OPC Publisher can't send messages to IoT Hub fast enough, this queue buffers the notifications. The parameter sets the number of notifications that can be buffered. If you see the number of items in this queue increasing in your test runs, then to avoid losing messages you should:
-- Reduce the IoT Hub send interval
-- Increase the IoT Hub message size
-
-The `--si` parameter forces OPC Publisher to send messages to IoT Hub at the specified interval. OPC Publisher sends a message as soon as the message size specified by the `--ms` parameter is reached, or as soon as the interval specified by the `--si` parameter is reached. To disable the message size option, use `--ms 0`. In this case, OPC Publisher uses the largest possible IoT Hub message size of 256 kB to batch data.
-
-The `--ms` parameter lets you batch messages sent to IoT Hub. The protocol you're using determines whether the overhead of sending a message to IoT Hub is high compared to the actual time of sending the payload. If your scenario allows for latency when data is ingested by IoT Hub, configure OPC Publisher to use the largest message size of 256 kB.
-
-Before you use OPC Publisher in production scenarios, test the performance and memory usage under production conditions. You can use the `--di` parameter to specify the interval, in seconds, at which OPC Publisher writes diagnostic information.
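-
-For example, a command line that batches messages up to 256 kB, disables the send interval, and writes diagnostics every 60 seconds might look like the following sketch:
-
-```sh
-# Maximum batching with diagnostics output every 60 seconds
-docker run mcr.microsoft.com/iotedge/opc-publisher <applicationname> [<iothubconnectionstring>] --si 0 --ms 262144 --di 60
-```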
-
-### Test measurements
-
-The following example diagnostics show measurements with different values for `--si` and `--ms` parameters publishing 500 nodes with an OPC publishing interval of 1 second. The test used an OPC Publisher debug build on Windows 10 natively for 120 seconds. The IoT Hub protocol was the default MQTT protocol.
-
-#### Default configuration (--si 10 --ms 262144)
-
-```log
-==========================================================================
-OpcPublisher status @ 26.10.2017 15:33:05 (started @ 26.10.2017 15:31:09)
-
-OPC sessions: 1
-connected OPC sessions: 1
-connected OPC subscriptions: 5
-OPC monitored items: 500
-
-monitored items queue bounded capacity: 8192
-monitored items queue current items: 0
-monitored item notifications enqueued: 54363
-monitored item notifications enqueue failure: 0
-monitored item notifications dequeued: 54363
-
-messages sent to IoT Hub: 109
-last successful msg sent @: 26.10.2017 15:33:04
-bytes sent to IoT Hub: 12709429
-avg msg size: 116600
-msg send failures: 0
-messages too large to sent to IoT Hub: 0
-times we missed send interval: 0
-
-current working set in MB: 90
---si setting: 10
---ms setting: 262144
---ih setting: Mqtt
-==========================================================================
-```
-
-The default configuration sends data to IoT Hub every 10 seconds, or when 256 kB of data is available for IoT Hub to ingest. This configuration adds a moderate latency of about 10 seconds, but has the lowest probability of losing data because of the large message size. The diagnostics output shows there are no lost OPC node updates: `monitored item notifications enqueue failure: 0`.
-
-#### Constant send interval (--si 1 --ms 0)
-
-```log
-==========================================================================
-OpcPublisher status @ 26.10.2017 15:35:59 (started @ 26.10.2017 15:34:03)
-
-OPC sessions: 1
-connected OPC sessions: 1
-connected OPC subscriptions: 5
-OPC monitored items: 500
-
-monitored items queue bounded capacity: 8192
-monitored items queue current items: 0
-monitored item notifications enqueued: 54243
-monitored item notifications enqueue failure: 0
-monitored item notifications dequeued: 54243
-
-messages sent to IoT Hub: 109
-last successful msg sent @: 26.10.2017 15:35:59
-bytes sent to IoT Hub: 12683836
-avg msg size: 116365
-msg send failures: 0
-messages too large to sent to IoT Hub: 0
-times we missed send interval: 0
-
-current working set in MB: 90
---si setting: 1
---ms setting: 0
---ih setting: Mqtt
-==========================================================================
-```
-
-When the message size is set to 0, OPC Publisher internally batches data using the largest supported IoT Hub message size, which is 256 kB. The diagnostic output shows an average message size of 116,365 bytes. In this configuration, OPC Publisher doesn't lose any OPC node value updates, and compared to the default it has lower latency.
-
-#### Send each OPC node value update (--si 0 --ms 0)
-
-```log
-==========================================================================
-OpcPublisher status @ 26.10.2017 15:39:33 (started @ 26.10.2017 15:37:37)
-
-OPC sessions: 1
-connected OPC sessions: 1
-connected OPC subscriptions: 5
-OPC monitored items: 500
-
-monitored items queue bounded capacity: 8192
-monitored items queue current items: 8184
-monitored item notifications enqueued: 54232
-monitored item notifications enqueue failure: 44624
-monitored item notifications dequeued: 1424
-
-messages sent to IoT Hub: 1423
-last successful msg sent @: 26.10.2017 15:39:33
-bytes sent to IoT Hub: 333046
-avg msg size: 234
-msg send failures: 0
-messages too large to sent to IoT Hub: 0
-times we missed send interval: 0
-
-current working set in MB: 96
---si setting: 0
---ms setting: 0
---ih setting: Mqtt
-==========================================================================
-```
-
-This configuration sends a message to IoT Hub for each OPC node value change. The diagnostics show the average message size is 234 bytes, which is small. The advantage of this configuration is that OPC Publisher doesn't add any latency. The number of lost OPC node value updates (`monitored item notifications enqueue failure: 44624`) is high, which makes this configuration unsuitable for scenarios with high volumes of telemetry to publish.
-
-#### Maximum batching (--si 0 --ms 262144)
-
-```log
-==========================================================================
-OpcPublisher status @ 26.10.2017 15:42:55 (started @ 26.10.2017 15:41:00)
-
-OPC sessions: 1
-connected OPC sessions: 1
-connected OPC subscriptions: 5
-OPC monitored items: 500
-
-monitored items queue bounded capacity: 8192
-monitored items queue current items: 0
-monitored item notifications enqueued: 54137
-monitored item notifications enqueue failure: 0
-monitored item notifications dequeued: 54137
-
-messages sent to IoT Hub: 48
-last successful msg sent @: 26.10.2017 15:42:55
-bytes sent to IoT Hub: 12565544
-avg msg size: 261782
-msg send failures: 0
-messages too large to sent to IoT Hub: 0
-times we missed send interval: 0
-
-current working set in MB: 90
---si setting: 0
---ms setting: 262144
---ih setting: Mqtt
-==========================================================================
-```
-
-This configuration batches as many OPC node value updates as possible. The maximum IoT Hub message size is 256 kB, which is configured here. There's no send interval requested, which means the amount of data for IoT Hub to ingest determines the latency. This configuration has the least probability of losing any OPC node values and is suitable for publishing a high number of nodes. When you use this configuration, ensure your scenario doesn't have conditions where high latency is introduced if the message size of 256 kB isn't reached.
-
-## Debug the application
-
-To debug the application, open the **opcpublisher.sln** solution file with Visual Studio and use the Visual Studio debugging tools.
-
-If you need to access the OPC UA server in OPC Publisher, make sure that your firewall allows access to the port the server listens on. The default port is 62222.
-
-## Control the application remotely
-
-You can configure the nodes to publish by using IoT Hub direct methods.
-
-OPC Publisher implements a few additional IoT Hub direct method calls that let you:
-
-- Read general information.
-- Read diagnostic information on OPC sessions, subscriptions, and monitored items.
-- Read diagnostic information on IoT Hub messages and events.
-- Read the startup log.
-- Read the last 100 lines of the log.
-- Shut down the application.
-
-The following GitHub repositories contain tools to [configure the nodes to publish](https://github.com/Azure-Samples/iot-edge-opc-publisher-nodeconfiguration) and [read the diagnostic information](https://github.com/Azure-Samples/iot-edge-opc-publisher-diagnostics). Both tools are also available as containers in Docker Hub.
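-
-As an illustration, you can invoke such a direct method with the Azure CLI; a sketch that assumes the module is deployed as `publisher` and uses `GetDiagnosticInfo` as an example method name (check the repositories above for the exact names):
-
-```azurecli
-az iot hub invoke-module-method --hub-name <hub name> --device-id <device id> --module-id publisher --method-name GetDiagnosticInfo
-```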
-
-## Use a sample OPC UA server
-
-If you don't have a real OPC UA server, you can use the [sample OPC UA PLC](https://github.com/Azure-Samples/iot-edge-opc-plc) to get started. This sample PLC is also available on Docker Hub.
-
-It implements a number of tags that generate random data, as well as tags with anomalies. You can extend the sample if you need to simulate additional tag values.
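-
-To try it out, you might start the sample PLC on the same Docker network as OPC Publisher; a sketch, assuming the image name published on Docker Hub and its default endpoint port:
-
-```sh
-# Start the sample OPC UA PLC on the bridge network created earlier;
-# OPC Publisher on the same network can then connect to opc.tcp://opcplc:50000
-docker run --network iot_edge --name opcplc mcr.microsoft.com/iotedge/opc-plc
-```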
-
-## Next steps
-
-Now that you've learned how to run OPC Publisher, the recommended next steps are to learn about [OPC Twin](overview-opc-twin.md) and [OPC Vault](overview-opc-vault.md).
iot-accelerators Howto Opc Twin Deploy Dependencies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-accelerators/howto-opc-twin-deploy-dependencies.md
- Title: How to deploy OPC Twin cloud dependencies in Azure | Microsoft Docs
-description: This article describes how to deploy the OPC Twin Azure dependencies needed to do local development and debugging.
- Previously updated : 11/26/2018
-# Deploying dependencies for local development
-
-> [!IMPORTANT]
-> While we update this article, see [Azure Industrial IoT](https://azure.github.io/Industrial-IoT/) for the most up-to-date content.
-
-This article explains how to deploy only the Azure Platform Services needed to do local development and debugging. At the end, you will have a resource group deployed that contains everything you need for local development and debugging.
-
-## Deploy Azure platform services
-
-1. Make sure you have PowerShell and [AzureRM PowerShell](/powershell/azure/azurerm/install-azurerm-ps) extensions installed. Open a command prompt or terminal and run:
-
- ```bash
- git clone https://github.com/Azure/azure-iiot-components
- cd azure-iiot-components
- ```
-
-   Then start the deployment:
-
- ```bash
- deploy -type local
- ```
-
-2. Follow the prompts to assign a name to the resource group for your deployment. The script deploys only the dependencies to this resource group in your Azure subscription, but not the microservices. The script also registers an application in Azure AD, which is needed to support OAuth-based authentication. Deployment can take several minutes.
-
-3. Once the script completes, you can choose to save the .env file. The .env environment file is the configuration file of all services and tools you want to run on your development machine.
-
-## Troubleshooting deployment failures
-
-### Resource group name
-
-Ensure you use a short and simple resource group name. The name is also used to name resources; as such, it must comply with resource naming requirements.
-
-### Azure Active Directory (AD) registration
-
-The deployment script tries to register Azure AD applications in Azure AD. Depending on your rights to the selected Azure AD tenant, this might fail. There are three options:
-
-1. If you chose an Azure AD tenant from a list of tenants, restart the script and choose a different one from the list.
-2. Alternatively, deploy a private Azure AD tenant, restart the script and select to use it.
-3. Continue without authentication. Since you're running your microservices locally, this is acceptable, but it doesn't mimic production environments.
-
-## Next steps
-
-Now that you have successfully deployed OPC Twin services to an existing project, here is the suggested next step:
-
-> [!div class="nextstepaction"]
-> [Learn about how to deploy OPC Twin modules](howto-opc-twin-deploy-modules.md)
iot-accelerators Howto Opc Twin Deploy Existing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-accelerators/howto-opc-twin-deploy-existing.md
- Title: How to deploy an OPC Twin module to an existing Azure project | Microsoft Docs
-description: This article describes how to deploy OPC Twin to an existing project. You can also learn how to troubleshoot deployment failures.
- Previously updated : 11/26/2018
-# Deploy OPC Twin to an existing project
-
-> [!IMPORTANT]
-> While we update this article, see [Azure Industrial IoT](https://azure.github.io/Industrial-IoT/) for the most up-to-date content.
-
-The OPC Twin module runs on IoT Edge and provides several edge services to the OPC Twin and Registry services.
-
-The OPC Twin microservice facilitates the communication between factory operators and OPC UA server devices on the factory floor via an OPC Twin IoT Edge module. The microservice exposes OPC UA services (Browse, Read, Write, and Execute) via its REST API.
-
-The OPC UA device registry microservice provides access to registered OPC UA applications and their endpoints. Operators and administrators can register and unregister new OPC UA applications and browse the existing ones, including their endpoints. In addition to application and endpoint management, the registry service also catalogs registered OPC Twin IoT Edge modules. The service API gives you control of edge module functionality, for example, starting or stopping server discovery (scanning services), or activating new endpoint twins that can be accessed using the OPC Twin microservice.
-
-The core of the module is the Supervisor identity. The supervisor manages endpoint twins, which correspond to OPC UA server endpoints that are activated using the corresponding OPC UA registry API. These endpoint twins translate OPC UA JSON received from the OPC Twin microservice running in the cloud into OPC UA binary messages, which are sent over a stateful secure channel to the managed endpoint. The supervisor also provides discovery services that send device discovery events to the OPC UA device onboarding service for processing, where these events result in updates to the OPC UA registry. This article shows you how to deploy the OPC Twin module to an existing project.
-
-> [!NOTE]
-> For more information on deployment details and instructions, see the GitHub [repository](https://github.com/Azure/azure-iiot-opc-twin-module).
-
-## Prerequisites
-
-Make sure you have PowerShell and [AzureRM PowerShell](/powershell/azure/azurerm/install-azurerm-ps) extensions installed. If you've not already done so, clone this GitHub repository. Run the following commands in PowerShell:
-
-```powershell
-git clone --recursive https://github.com/Azure/azure-iiot-components.git
-cd azure-iiot-components
-```
-
-## Deploy industrial IoT services to Azure
-
-1. In your PowerShell session, run:
-
- ```powershell
- set-executionpolicy -ExecutionPolicy Unrestricted -Scope Process
- .\deploy.cmd
- ```
-
-2. Follow the prompts to assign a name to the resource group of the deployment, and a name to the website. The script deploys the microservices and their Azure platform dependencies into the resource group in your Azure subscription. The script also registers an application in your Azure Active Directory (AAD) tenant to support OAuth-based authentication. Deployment takes several minutes. An example of what you'd see once the solution is successfully deployed:
-
- ![Industrial IoT OPC Twin deploy to existing project](media/howto-opc-twin-deploy-existing/opc-twin-deploy-existing1.png)
-
- The output includes the URL of the public endpoint.
-
-3. Once the script completes successfully, select whether you want to save the `.env` file. You need the `.env` environment file if you want to connect to the cloud endpoint using tools such as the Console or deploy modules for development and debugging.
-
-## Troubleshooting deployment failures
-
-### Resource group name
-
-Ensure you use a short and simple resource group name. The name is also used to name resources; as such, it must comply with resource naming requirements.
-
-### Website name already in use
-
-It is possible that the name of the website is already in use. If you run into this error, you need to use a different application name.
-
-### Azure Active Directory (AAD) registration
-
-The deployment script tries to register two AAD applications in Azure Active Directory. Depending on your rights to the selected AAD tenant, the deployment might fail. There are two options:
-
-1. If you chose an AAD tenant from a list of tenants, restart the script and choose a different one from the list.
-2. Alternatively, deploy a private AAD tenant in another subscription, restart the script, and select to use it.
-
-> [!WARNING]
-> NEVER continue without Authentication. If you choose to do so, anyone can access your OPC Twin endpoints from the Internet unauthenticated. You can always choose the ["local" deployment option](howto-opc-twin-deploy-dependencies.md) to kick the tires.
-
-## Deploy an all-in-one industrial IoT services demo
-
-Instead of deploying just the services and dependencies, you can also deploy an all-in-one demo. The all-in-one demo contains three OPC UA servers, the OPC Twin module, all microservices, and a sample web application. It's intended for demonstration purposes.
-
-1. Make sure you have a clone of the repository (see above). Open a PowerShell prompt in the root of the repository and run:
-
- ```powershell
- set-executionpolicy -ExecutionPolicy Unrestricted -Scope Process
- .\deploy -type demo
- ```
-
-2. Follow the prompts to assign a new name to the resource group and a name to the website. Once deployed successfully, the script will display the URL of the web application endpoint.
-
-## Deployment script options
-
-The script takes the following parameters:
-
-```powershell
--type
-```
-
-The type of deployment (vm, local, demo).
-
-```powershell
--resourceGroupName
-```
-
-Can be the name of an existing or a new resource group.
-
-```powershell
--subscriptionId
-```
-
-Optional, the subscription ID where resources will be deployed.
-
-```powershell
--subscriptionName
-```
-
-Or the subscription name.
-
-```powershell
--resourceGroupLocation
-```
-
-Optional, a resource group location. If specified, the script tries to create a new resource group in this location.
-
-```powershell
--aadApplicationName
-```
-
-A name for the AAD application to register under.
-
-```powershell
--tenantId
-```
-
-AAD tenant to use.
-
-```powershell
--credentials
-```
-
-## Next steps
-
-Now that you've learned how to deploy OPC Twin to an existing project, here is the suggested next step:
-
-> [!div class="nextstepaction"]
-> [Secure communication of OPC UA Client and OPC UA PLC](howto-opc-vault-secure.md)
iot-accelerators Howto Opc Twin Deploy Modules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-accelerators/howto-opc-twin-deploy-modules.md
- Title: How to deploy OPC Twin module for Azure from scratch | Microsoft Docs
-description: This article describes how to deploy OPC Twin from scratch using the Azure portal's IoT Edge blade and also using AZ CLI.
- Previously updated : 11/26/2018
-# Deploy OPC Twin module and dependencies from scratch
-
-> [!IMPORTANT]
-> While we update this article, see [Azure Industrial IoT](https://azure.github.io/Industrial-IoT/) for the most up-to-date content.
-
-The OPC Twin module runs on IoT Edge and provides several edge services to the OPC device twin and registry services.
-
-There are several options to deploy modules to your [Azure IoT Edge](https://azure.microsoft.com/services/iot-edge/) Gateway, among them:
-- [Deploying from Azure portal's IoT Edge blade](../iot-edge/how-to-deploy-modules-portal.md)
-- [Deploying using AZ CLI](../iot-edge/how-to-deploy-cli-at-scale.md)
-
-> [!NOTE]
-> For more information on deployment details and instructions, see the GitHub [repository](https://github.com/Azure/azure-iiot-components).
-
-## Deployment manifest
-
-All modules are deployed using a deployment manifest. An example manifest to deploy both [OPC Publisher](https://github.com/Azure/iot-edge-opc-publisher) and [OPC Twin](https://github.com/Azure/azure-iiot-opc-twin-module) is shown below.
-
-```json
-{
- "content": {
- "modulesContent": {
- "$edgeAgent": {
- "properties.desired": {
- "schemaVersion": "1.0",
- "runtime": {
- "type": "docker",
- "settings": {
- "minDockerVersion": "v1.25",
- "loggingOptions": "",
- "registryCredentials": {}
- }
- },
- "systemModules": {
- "edgeAgent": {
- "type": "docker",
- "settings": {
- "image": "mcr.microsoft.com/azureiotedge-agent:1.0",
- "createOptions": ""
- }
- },
- "edgeHub": {
- "type": "docker",
- "status": "running",
- "restartPolicy": "always",
- "settings": {
- "image": "mcr.microsoft.com/azureiotedge-hub:1.0",
- "createOptions": "{\"HostConfig\":{\"PortBindings\":{\"5671/tcp\":[{\"HostPort\":\"5671\"}], \"8883/tcp\":[{\"HostPort\":\"8883\"}],\"443/tcp\":[{\"HostPort\":\"443\"}]}}}"
- }
- }
- },
- "modules": {
- "opctwin": {
- "version": "1.0",
- "type": "docker",
- "status": "running",
- "restartPolicy": "always",
- "settings": {
- "image": "mcr.microsoft.com/iotedge/opc-twin:latest",
- "createOptions": "{\"NetworkingConfig\": {\"EndpointsConfig\": {\"host\": {}}}, \"HostConfig\": {\"NetworkMode\": \"host\" }}"
- }
- },
- "opcpublisher": {
- "version": "2.0",
- "type": "docker",
- "status": "running",
- "restartPolicy": "always",
- "settings": {
- "image": "mcr.microsoft.com/iotedge/opc-publisher:latest",
- "createOptions": "{\"Hostname\":\"publisher\",\"Cmd\":[\"publisher\",\"--pf=./pn.json\",\"--di=60\",\"--tm\",\"--aa\",\"--si=0\",\"--ms=0\"],\"ExposedPorts\":{\"62222/tcp\":{}},\"NetworkingConfig\":{\"EndpointsConfig\":{\"host\":{}}},\"HostConfig\":{\"NetworkMode\":\"host\",\"PortBindings\":{\"62222/tcp\":[{\"HostPort\":\"62222\"}]}}}"
- }
- }
- }
- }
- },
- "$edgeHub": {
- "properties.desired": {
- "schemaVersion": "1.0",
- "routes": {
- "opctwinToIoTHub": "FROM /messages/modules/opctwin/* INTO $upstream",
- "opcpublisherToIoTHub": "FROM /messages/modules/opcpublisher/* INTO $upstream"
- },
- "storeAndForwardConfiguration": {
- "timeToLiveSecs": 7200
- }
- }
- }
- }
- }
-}
-```
-
-## Deploying from Azure portal
-
-The easiest way to deploy the modules to an Azure IoT Edge gateway device is through the Azure portal.
-
-### Prerequisites
-
-1. Deploy the OPC Twin [dependencies](howto-opc-twin-deploy-dependencies.md) and obtain the resulting `.env` file. Note the deployed `hub name` of the `PCS_IOTHUBREACT_HUB_NAME` variable in the resulting `.env` file.
-
-2. Register and start a [Linux](../iot-edge/how-to-install-iot-edge.md) or [Windows](../iot-edge/how-to-install-iot-edge.md) IoT Edge gateway and note its `device id`.
-
-### Deploy to an edge device
-
-1. Sign in to the [Azure portal](https://portal.azure.com/) and navigate to your IoT hub.
-
-2. Select **IoT Edge** from the left-hand menu.
-
-3. Select the ID of the target device from the list of devices.
-
-4. Select **Set Modules**.
-
-5. In the **Deployment modules** section of the page, select **Add** and **IoT Edge Module.**
-
-6. In the **IoT Edge Custom Module** dialog, use `opctwin` as the name for the module, then specify the container *Image URI* as
-
- ```bash
- mcr.microsoft.com/iotedge/opc-twin:latest
- ```
-
- As *Container Create Options*, use the following JSON:
-
- ```json
- {"NetworkingConfig": {"EndpointsConfig": {"host": {}}}, "HostConfig": {"NetworkMode": "host" }}
- ```
-
- Fill out the optional fields if necessary. For more information about container create options, restart policy, and desired status see [EdgeAgent desired properties](../iot-edge/module-edgeagent-edgehub.md#edgeagent-desired-properties). For more information about the module twin see [Define or update desired properties](../iot-edge/module-composition.md#define-or-update-desired-properties).
-
-7. Select **Save** and repeat step **5**.
-
-8. In the IoT Edge Custom Module dialog, use `opcpublisher` as the name for the module, and specify the container *Image URI* as
-
- ```bash
- mcr.microsoft.com/iotedge/opc-publisher:latest
- ```
-
- As *Container Create Options*, use the following JSON:
-
- ```json
- {"Hostname":"publisher","Cmd":["publisher","--pf=./pn.json","--di=60","--tm","--aa","--si=0","--ms=0"],"ExposedPorts":{"62222/tcp":{}},"HostConfig":{"PortBindings":{"62222/tcp":[{"HostPort":"62222"}] }}}
- ```
-
-9. Select **Save** and then **Next** to continue to the routes section.
-
-10. In the routes tab, paste the following:
-
- ```json
- {
- "routes": {
- "opctwinToIoTHub": "FROM /messages/modules/opctwin/* INTO $upstream",
- "opcpublisherToIoTHub": "FROM /messages/modules/opcpublisher/* INTO $upstream"
- }
- }
- ```
-
- and select **Next**
-
-11. Review your deployment information and manifest. It should look like the above deployment manifest. Select **Submit**.
-
-12. Once you've deployed modules to your device, you can view all of them in the **Device details** page of the portal. This page displays the name of each deployed module, as well as useful information like the deployment status and exit code.
-
-## Deploying using Azure CLI
-
-### Prerequisites
-
-1. Install the latest version of the [Azure command line interface (AZ)](/cli/azure/) from [here](/cli/azure/install-azure-cli).
-
-### Quickstart
-
-1. Save the above deployment manifest into a `deployment.json` file.
-
-2. Use the following command to apply the configuration to an IoT Edge device:
-
- ```azurecli
- az iot edge set-modules --device-id [device id] --hub-name [hub name] --content ./deployment.json
- ```
-
- The `device id` parameter is case-sensitive. The content parameter points to the deployment manifest file that you saved.
- ![az IoT Edge set-modules output](/azure/iot-edge/media/how-to-deploy-cli/set-modules.png)
-
-3. Once you've deployed modules to your device, you can view all of them with the following command:
-
- ```azurecli
- az iot hub module-identity list --device-id [device id] --hub-name [hub name]
- ```
-
- The device ID parameter is case-sensitive. ![az iot hub module-identity list output](/azure/iot-edge/media/how-to-deploy-cli/list-modules.png)
-
-## Next steps
-
-Now that you have learned how to deploy OPC Twin from scratch, here is the suggested next step:
-
-> [!div class="nextstepaction"]
-> [Deploy OPC Twin to an existing project](howto-opc-twin-deploy-existing.md)
iot-accelerators Howto Opc Vault Deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-accelerators/howto-opc-vault-deploy.md
- Title: How to deploy the OPC Vault certificate management service - Azure | Microsoft Docs
-description: How to deploy the OPC Vault certificate management service from scratch.
- Previously updated : 08/16/2019
-# Build and deploy the OPC Vault certificate management service
-
-> [!IMPORTANT]
-> While we update this article, see [Azure Industrial IoT](https://azure.github.io/Industrial-IoT/) for the most up-to-date content.
-
-This article explains how to deploy the OPC Vault certificate management service in Azure.
-
-> [!NOTE]
-> For more information, see the GitHub [OPC Vault repository](https://github.com/Azure/azure-iiot-opc-vault-service).
-
-## Prerequisites
-
-### Install required software
-
-Currently, the build and deploy operation is limited to Windows.
-The samples are all written in C# for .NET Standard, which you need to build the service and samples for deployment.
-All the tools you need for .NET Standard come with the .NET Core tools. See [Get started with .NET Core](/dotnet/articles/core/getting-started).
-
-1. [Install .NET Core 2.1+][dotnet-install].
-2. [Install Docker][docker-url] (optional, only if the local Docker build is required).
-3. Install the [Azure command-line tools for PowerShell][powershell-install].
-4. Sign up for an [Azure subscription][azure-free].
-
-### Clone the repository
-
-If you haven't done so yet, clone this GitHub repository. Open a command prompt or terminal, and run the following:
-
-```bash
-git clone https://github.com/Azure/azure-iiot-opc-vault-service
-cd azure-iiot-opc-vault-service
-```
-
-Alternatively, you can clone the repo directly in Visual Studio 2017.
-
-### Build and deploy the Azure service on Windows
-
-A PowerShell script provides an easy way to deploy the OPC Vault microservice and the application.
-
-1. Open a PowerShell window at the repo root.
-2. Go to the deploy folder: `cd deploy`.
-3. Choose a name for `myResourceGroup` that's unlikely to cause a conflict with other deployed webpages. See the "Website name already in use" section later in this article.
-4. Start the deployment with `.\deploy.ps1` for interactive installation, or enter a full command line:
-`.\deploy.ps1 -subscriptionName "MySubscriptionName" -resourceGroupLocation "East US" -tenantId "myTenantId" -resourceGroupName "myResourceGroup"`
-5. If you plan to develop with this deployment, add `-development 1` to enable the Swagger UI, and to deploy debug builds.
-6. Follow the instructions in the script to sign in to your subscription, and to provide additional information.
-7. After a successful build and deploy operation, you should see the following message:
- ```
- To access the web client go to:
- https://myResourceGroup.azurewebsites.net
-
- To access the web service go to:
- https://myResourceGroup-service.azurewebsites.net
-
- To start the local docker GDS server:
- .\myResourceGroup-dockergds.cmd
-
- To start the local dotnet GDS server:
- .\myResourceGroup-gds.cmd
- ```
-
- > [!NOTE]
- > In case of problems, see the "Troubleshooting deployment failures" section later in the article.
-
-8. Open your favorite browser, and open the application page: `https://myResourceGroup.azurewebsites.net`
-9. Give the web app and the OPC Vault microservice a few minutes to warm up after deployment. The web home page might stop responding on first use, for up to a minute, until you get the first responses.
-10. To take a look at the Swagger API, open: `https://myResourceGroup-service.azurewebsites.net`
-11. To start a local GDS server with dotnet, start `.\myResourceGroup-gds.cmd`. With Docker, start `.\myResourceGroup-dockergds.cmd`.
-
-It's possible to redeploy a build with exactly the same settings. Be aware that such an operation renews all application secrets, and might reset some settings in the Azure Active Directory (Azure AD) application registrations.
-
-It's also possible to redeploy just the web app binaries. With the parameter `-onlyBuild 1`, new zip packages of the service and the app are deployed to the web applications.
-
-After successful deployment, you can start using the services. See [Manage the OPC Vault certificate management service](howto-opc-vault-manage.md).
-
-## Delete the services from the subscription
-
-Here's how:
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-2. Go to the resource group in which the service was deployed.
-3. Select **Delete resource group**, and confirm.
-4. After a short while, all deployed service components are deleted.
-5. Go to **Azure Active Directory** > **App registrations**.
-6. There should be three registrations listed for each deployed resource group. The registrations have the following names:
-`resourcegroup-client`, `resourcegroup-module`, `resourcegroup-service`. Delete each registration separately.
-
-Now all deployed components are removed.
-
-## Troubleshooting deployment failures
-
-### Resource group name
-
-Use a short and simple resource group name. The name is also used to name resources and the service URL prefix. As such, it must comply with resource naming requirements.
-
-### Website name already in use
-
-It's possible that the name of the website is already in use. You need to use a different resource group name. The hostnames in use by the deployment script are: https:\//resourcegroupname.azurewebsites.net and https:\//resourcegroupname-service.azurewebsites.net.
-Other names of services are built by the combination of short name hashes, and are unlikely to conflict with other services.
-
-### Azure AD registration
-
-The deployment script tries to register three Azure AD applications in Azure AD. Depending on your permissions in the selected Azure AD tenant, this operation might fail. There are two options:
-- If you chose an Azure AD tenant from a list of tenants, restart the script and choose a different one from the list.
-- Alternatively, deploy a private Azure AD tenant in another subscription. Restart the script, and select to use it.
-
-## Deployment script options
-
-The script takes the following parameters:
--
-```
--resourceGroupName
-```
-
-This can be the name of an existing or a new resource group.
-
-```
--subscriptionId
-```
--
-This is the subscription ID where resources will be deployed. It's optional.
-
-```
--subscriptionName
-```
--
-Alternatively, you can use the subscription name.
-
-```
--resourceGroupLocation
-```
--
-This is a resource group location. If specified, this parameter tries to create a new resource group in this location. This parameter is also optional.
--
-```
--tenantId
-```
--
-This is the Azure AD tenant to use.
-
-```
--development 0|1
-```
-
-This is to deploy for development: it uses a debug build and sets the ASP.NET environment to Development. It also creates `.publishsettings` for import in Visual Studio 2017, to allow it to deploy the app and the service directly. This parameter is also optional.
-
-```
--onlyBuild 0|1
-```
-
-This is to rebuild and to redeploy only the web apps, and to rebuild the Docker containers. This parameter is also optional.
-
-[azure-free]:https://azure.microsoft.com/free/
-[powershell-install]:https://azure.microsoft.com/downloads/#powershell
-[docker-url]: https://www.docker.com/
-[dotnet-install]: https://www.microsoft.com/net/learn/get-started
-
-## Next steps
-
-Now that you have learned how to deploy OPC Vault from scratch, you can:
-
-> [!div class="nextstepaction"]
-> [Manage OPC Vault](howto-opc-vault-manage.md)
iot-accelerators Howto Opc Vault Manage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-accelerators/howto-opc-vault-manage.md
- Title: How to manage the OPC Vault certificate service - Azure | Microsoft Docs
-description: Manage the OPC Vault root CA certificates and user permissions.
- Previously updated : 8/16/2019
-# Manage the OPC Vault certificate service
-
-> [!IMPORTANT]
-> While we update this article, see [Azure Industrial IoT](https://azure.github.io/Industrial-IoT/) for the most up-to-date content.
-
-This article explains the administrative tasks for the OPC Vault certificate management service in Azure. It includes information about how to renew Issuer CA certificates, how to renew the Certificate Revocation List (CRL), and how to grant and revoke user access.
-
-## Create or renew the root CA certificate
-
-After deploying OPC Vault, you must create the root CA certificate. Without a valid Issuer CA certificate, you can't sign or issue application certificates. Refer to [Certificates](howto-opc-vault-secure-ca.md#certificates) to manage your certificates with reasonable, secure lifetimes. Renew an Issuer CA certificate after half of its lifetime. When renewing, also consider that the configured lifetime of a newly signed application certificate shouldn't exceed the lifetime of the Issuer CA certificate.
-> [!IMPORTANT]
-> The Administrator role is required to create or renew the Issuer CA certificate.
-
-1. Open your certificate service at `https://myResourceGroup-app.azurewebsites.net`, and sign in.
-2. Go to **Certificate Groups**.
-3. There is one default certificate group listed. Select **Edit**.
-4. In **Edit Certificate Group Details**, you can modify the subject name and lifetime of your CA and application certificates. The subject and the lifetimes should only be set once before the first CA certificate is issued. Lifetime changes during operations might result in inconsistent lifetimes of issued certificates and CRLs.
-5. Enter a valid subject (for example, `CN=My CA Root, O=MyCompany, OU=MyDepartment`).<br>
- > [!IMPORTANT]
- > If you change the subject, you must renew the Issuer certificate, or the service will fail to sign application certificates. The subject of the configuration is checked against the subject of the active Issuer certificate. If the subjects don't match, certificate signing is refused.
-6. Select **Save**.
-7. If you encounter a "forbidden" error at this point, your user credentials don't have the administrator permission to modify or create a new root certificate. By default, the user who deployed the service has administrator and signing roles with the service. Other users need to be added to the Approver, Writer, or Administrator roles, as appropriate, in the Azure Active Directory (Azure AD) application registration.
-8. Select **Details**. This should show the updated information.
-9. Select **Renew CA Certificate** to issue the first Issuer CA certificate, or to renew the Issuer certificate. Then select **OK**.
-10. After a few seconds, you'll see **Certificate Details**. To download the latest CA certificate and CRL for distribution to your OPC UA applications, select **Issuer** or **Crl**.
-
-Now the OPC UA certificate management service is ready to issue certificates for OPC UA applications.
-
-## Renew the CRL
-
-The renewed CRL is an update that should be distributed to the applications at regular intervals. OPC UA devices that support the CRL Distribution Point X509 extension can directly update the CRL from the microservice endpoint. Other OPC UA devices might require manual updates, or can be updated by using GDS server push extensions to update the trust lists with the certificates and CRLs.
-
-In the following workflow, all certificate requests in the deleted state are revoked in the CRLs that correspond to the Issuer CA certificate for which they were issued. The version number of the CRL is incremented by 1. <br>
-> [!NOTE]
-> All issued CRLs are valid until the expiration of the Issuer CA certificate. This is because the OPC UA specification doesn't require a mandatory, deterministic distribution model for CRL.
-
-> [!IMPORTANT]
-> The Administrator role is required to renew the Issuer CRL.
-
-1. Open your certificate service at `https://myResourceGroup.azurewebsites.net`, and sign in.
-2. Go to the **Certificate Groups** page.
-3. Select **Details**. This should show the current certificate and CRL information.
-4. Select **Update CRL Revocation List (CRL)** to issue an updated CRL for all active Issuer certificates in the OPC Vault storage.
-5. After a few seconds, you'll see **Certificate Details**. To download the latest CA certificate and CRL for distribution to your OPC UA applications, select **Issuer** or **Crl**.
-
-## Manage user roles
-
-You manage user roles for the OPC Vault microservice in the Azure AD Enterprise Application. For a detailed description of the role definitions, see [Roles](howto-opc-vault-secure-ca.md#roles).
-
-By default, an authenticated user in the tenant can sign in to the service as a Reader. Higher-privileged roles require manual management in the Azure portal, or by using PowerShell, as in the sketch that follows.
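-
-For example, with the AzureAD PowerShell module, assigning a role might look like the following sketch; all object IDs are placeholders that you'd look up first, and the cmdlet usage is an assumption about your tenant setup:
-
-```powershell
-# Assign an app role (for example, Approver) on the OPC Vault service principal to a user
-Connect-AzureAD
-New-AzureADUserAppRoleAssignment -ObjectId <userObjectId> -PrincipalId <userObjectId> -ResourceId <servicePrincipalObjectId> -Id <appRoleId>
-```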
-
-### Add user
-
-1. Open the Azure portal.
-2. Go to **Azure Active Directory** > **Enterprise applications**.
-3. Choose the registration of the OPC Vault microservice (by default, your `resourceGroupName-service`).
-4. Go to **Users and Groups**.
-5. Select **Add User**.
-6. Select or invite the user for assignment to a specific role.
-7. Select the role for the users.
-8. Select **Assign**.
-9. For users in the Administrator or Approver role, continue to add Azure Key Vault access policies.
-
-### Remove user
-
-1. Open the Azure portal.
-2. Go to **Azure Active Directory** > **Enterprise applications**.
-3. Choose the registration of the OPC Vault microservice (by default, your `resourceGroupName-service`).
-4. Go to **Users and Groups**.
-5. Select a user with a role to remove, and then select **Remove**.
-6. For removed users in the Administrator or Approver role, also remove them from Azure Key Vault policies.
-
-### Add user access policy to Azure Key Vault
-
-Additional access policies are required for Approvers and Administrators.
-
-By default, the service identity has only limited permissions to access Key Vault, to prevent elevated operations or changes from taking place without user impersonation. The basic service permissions are Get and List, for both secrets and certificates. For secrets, there is only one exception: the service can delete a private key from the secret store after it's accepted by a user. All other operations require user-impersonated permissions.
-
-#### For an Approver role, the following permissions must be added to Key Vault
-
-1. Open the Azure portal.
-2. Go to your OPC Vault `resourceGroupName`, used during deployment.
-3. Go to the Key Vault `resourceGroupName-xxxxx`.
-4. Go to **Access Policies**.
-5. Select **Add new**.
-6. Skip the template. There's no template that matches requirements.
-7. Choose **Select Principal**, and select the user to be added, or invite a new user to the tenant.
-8. Select the following **Key permissions**: **Get**, **List**, and **Sign**.
-9. Select the following **Secret permissions**: **Get**, **List**, **Set**, and **Delete**.
-10. Select the following **Certificate permissions**: **Get** and **List**.
-11. Select **OK**, and select **Save**.
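-
-If you prefer scripting over the portal, the same Approver policy can be granted with the Azure CLI; a sketch, assuming the vault name and the user's principal name:
-
-```azurecli
-az keyvault set-policy --name <resourceGroupName-xxxxx> --upn approver@contoso.com --key-permissions get list sign --secret-permissions get list set delete --certificate-permissions get list
-```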
-
-#### For an Administrator role, the following permissions must be added to Key Vault
-
-1. Open the Azure portal.
-2. Go to your OPC Vault `resourceGroupName`, used during deployment.
-3. Go to the Key Vault `resourceGroupName-xxxxx`.
-4. Go to **Access Policies**.
-5. Select **Add new**.
-6. Skip the template. There's no template that matches requirements.
-7. Choose **Select Principal**, and select the user to be added, or invite a new user to the tenant.
-8. Select the following **Key permissions**: **Get**, **List**, and **Sign**.
-9. Select the following **Secret permissions**: **Get**, **List**, **Set**, and **Delete**.
-10. Select the following **Certificate permissions**: **Get**, **List**, **Update**, **Create**, and **Import**.
-11. Select **OK**, and select **Save**.
-
-### Remove user access policy from Azure Key Vault
-
-1. Open the Azure portal.
-2. Go to your OPC Vault `resourceGroupName`, used during deployment.
-3. Go to the Key Vault `resourceGroupName-xxxxx`.
-4. Go to **Access Policies**.
-5. Find the user to remove, and select **Delete**.
-
-## Next steps
-
-Now that you have learned how to manage OPC Vault certificates and users, you can:
-
-> [!div class="nextstepaction"]
-> [Secure communication of OPC devices](howto-opc-vault-secure.md)
iot-accelerators Howto Opc Vault Secure Ca https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-accelerators/howto-opc-vault-secure-ca.md
- Title: How to run the OPC Vault certificate management service securely - Azure | Microsoft Docs
-description: Describes how to run the OPC Vault certificate management service securely in Azure, and reviews other security guidelines to consider.
- Previously updated : 8/16/2019
-# Run the OPC Vault certificate management service securely
-
-> [!IMPORTANT]
-> While we update this article, see [Azure Industrial IoT](https://azure.github.io/Industrial-IoT/) for the most up-to-date content.
-
-This article explains how to run the OPC Vault certificate management service securely in Azure, and reviews other security guidelines to consider.
-
-## Roles
-
-### Trusted and authorized roles
-
-The OPC Vault microservice allows for distinct roles to access various parts of the service.
-
-> [!IMPORTANT]
-> During deployment, the script only adds the user who runs the deployment script as a user for all roles. For a production deployment, you should review this role assignment, and reconfigure appropriately by following the guidelines below. This task requires manual assignment of roles and services in the Azure Active Directory (Azure AD) Enterprise Applications portal.
-
-### Certificate management service roles
-
-The OPC Vault microservice defines the following roles:
--- **Reader**: By default, any authenticated user in the tenant has read access.
- - Read access to applications and certificate requests. Can list and query for applications and certificate requests. Also device discovery information and public certificates are accessible with read access.
-- **Writer**: The Writer role is assigned to a user to add write permissions for certain tasks.
- - Read/Write access to applications and certificate requests. Can register, update, and unregister applications. Can create certificate requests and obtain approved private keys and certificates. Can also delete private keys.
-- **Approver**: The Approver role is assigned to a user to approve or reject certificate requests. The role doesn't include any other role.
- - In addition to the Approver role to access the OPC Vault microservice API, the user must also have the key signing permission in Azure Key Vault to be able to sign the certificates.
- - The Writer and Approver role should be assigned to different users.
- - The main task of the Approver is to approve or reject certificate requests.
-- **Administrator**: The Administrator role is assigned to a user to manage the certificate groups. The role doesn't support the Approver role, but includes the Writer role.
- - The administrator can manage the certificate groups, change the configuration, and revoke application certificates by issuing a new Certificate Revocation List (CRL).
- - Ideally, the Writer, Approver, and Administrator roles are assigned to different users. For additional security, a user with the Approver or Administrator role also needs key-signing permission in Key Vault, to issue certificates or to renew an Issuer CA certificate.
- - In addition to the microservice administration role, the role includes, but isn't limited to:
- - Responsibility for administering the implementation of the CA's security practices.
- - Management of the generation, revocation, and suspension of certificates.
- - Cryptographic key life-cycle management (for example, the renewal of the Issuer CA keys).
- - Installation, configuration, and maintenance of services that operate the CA.
- - Day-to-day operation of the services.
- - CA and database backup and recovery.
-
-### Other role assignments
-
-Also consider the following roles when you're running the service:
-
-- Business owner of the certificate procurement contract with the external root certification authority (for example, when the owner purchases certificates from an external CA or operates a CA that is subordinate to an external CA).
-- Development and validation of the Certificate Authority.
-- Review of audit records.
-- Personnel that help support the CA or manage the physical and cloud facilities, but aren't directly trusted to perform CA operations, are in the *authorized* role. The set of tasks that persons in the authorized role are allowed to perform must also be documented.
-
-### Review memberships of trusted and authorized roles quarterly
-
-Review membership of trusted and authorized roles at least quarterly. Ensure that the set of people (for manual processes) or service identities (for automated processes) in each role is kept to a minimum.
-
-### Role separation between certificate requester and approver
-
-The certificate issuance process must enforce role separation between the certificate requester and certificate approver roles (persons or automated systems). Certificate issuance must be authorized by a certificate approver role that verifies that the certificate requester is authorized to obtain certificates. Persons who hold the certificate approver role must be formally authorized.
-
-### Restrict assignment of privileged roles
-
-You should restrict assignment of privileged roles, such as authorizing membership of the Administrators and Approvers group, to a limited set of authorized personnel. When a privileged role assignment is removed, revoke the associated access within 24 hours. Finally, review privileged role assignments on a quarterly basis, and remove any unneeded or expired assignments.
-
-### Privileged roles should use two-factor authentication
-
-Use multi-factor authentication (also called two-factor authentication) for interactive sign-ins of Approvers and Administrators to the service.
-
-## Certificate service operation guidelines
-
-### Operational contacts
-
-The certificate service must have an up-to-date security response plan on file, which contains detailed operational incident response contacts.
-
-### Security updates
-
-All systems must be continuously monitored and updated with the latest security updates.
-
-> [!IMPORTANT]
-> The GitHub repository of the OPC Vault service is continuously updated with security patches. Monitor these updates, and apply them to the service at regular intervals.
-
-### Security monitoring
-
-Subscribe to or implement appropriate security monitoring. For example, subscribe to a central monitoring solution (such as Azure Security Center or Microsoft 365 monitoring solution), and configure it appropriately to ensure that security events are transmitted to the monitoring solution.
-
-> [!IMPORTANT]
-> By default, the OPC Vault service is deployed with [Azure Application Insights](../azure-monitor/app/devops.md) as a monitoring solution. Adding a security solution like [Azure Security Center](https://azure.microsoft.com/services/security-center/) is highly recommended.
-
-### Assess the security of open-source software components
-
-All open-source components used within a product or service must be free of moderate or greater security vulnerabilities.
-
-> [!IMPORTANT]
-> During continuous integration builds, the GitHub repository of the OPC Vault service scans all components for vulnerabilities. Monitor these updates on GitHub, and apply them to the service at regular intervals.
-
-### Maintain an inventory
-
-Maintain an asset inventory for all production hosts (including persistent virtual machines), devices, all internal IP address ranges, VIPs, and public DNS domain names. Whenever you add or remove a system, device IP address, VIP, or public DNS domain, you must update the inventory within 30 days.
-
-#### Inventory of the default Azure OPC Vault microservice production deployment
-
-In Azure:
-- **App Service Plan**: App Service plan for service hosts. Default S1.
-- **App Service** for microservice: The OPC Vault service host.
-- **App Service** for sample application: The OPC Vault sample application host.
-- **Key Vault Standard**: To store secrets and Azure Cosmos DB keys for the web services.
-- **Key Vault Premium**: To host the Issuer CA keys, for the signing service, and for vault configuration and storage of application private keys.
-- **Azure Cosmos DB**: Database for application and certificate requests.
-- **Application Insights**: (optional) Monitoring solution for web service and application.
-- **Azure AD Application Registration**: A registration for the sample application, the service, and the edge module.
-
-For the cloud services, all hostnames, resource groups, resource names, subscription IDs, and tenant IDs used to deploy the service should be documented.
-
-In Azure IoT Edge or a local IoT Edge server:
-- **OPC Vault IoT Edge module**: To support a factory network OPC UA Global Discovery Server.
-
-For the IoT Edge devices, the hostnames and IP addresses should be documented.
-
-### Document the Certification Authorities (CAs)
-
-The CA hierarchy documentation must contain all operated CAs. This includes all related
-subordinate CAs, parent CAs, and root CAs, even when they aren't managed by the service.
-Instead of formal documentation, you can provide an exhaustive set of all non-expired CA certificates.
-
-> [!NOTE]
-> The OPC Vault sample application supports the download of all certificates used and produced in the service for documentation.
-
-### Document the issued certificates by all Certification Authorities (CAs)
-
-Provide an exhaustive set of all certificates issued in the past 12 months.
-
-> [!NOTE]
-> The OPC Vault sample application supports the download of all certificates used and produced in the service for documentation.
-
-### Document the standard operating procedure for securely deleting cryptographic keys
-
-During the lifetime of a CA, key deletion might happen only rarely. For this reason, no user has the Key Vault Certificate Delete right assigned, and no APIs are exposed to delete an Issuer CA certificate. The manual standard operating procedure for securely deleting certification authority cryptographic keys is only available by directly accessing Key Vault in the Azure portal. You can also delete the certificate group in Key Vault. To ensure immediate deletion, disable the
-[Key Vault soft delete](../key-vault/general/soft-delete-overview.md) functionality.
-
-## Certificates
-
-### Certificates must comply with minimum certificate profile
-
-The OPC Vault service is an online CA that issues end entity certificates to subscribers. The OPC Vault microservice follows these guidelines in the default implementation. A spot-check sketch follows the list below.
--- All certificates must include the following X.509 fields, as specified below:
- - The content of the version field must be v3.
- - The contents of the serialNumber field must include at least 8 bytes of entropy obtained from a FIPS (Federal Information Processing Standards) 140 approved random number generator.<br>
- > [!IMPORTANT]
- > The OPC Vault serial number is by default 20 bytes, and is obtained from the operating system cryptographic random number generator. The random number generator is FIPS 140 approved on Windows devices, but not on Linux. Consider this when choosing a service deployment that uses Linux VMs or Linux docker containers, on which the underlying technology OpenSSL isn't FIPS 140 approved.
- - The issuerUniqueID and subjectUniqueID fields must not be present.
- - End-entity certificates must be identified with the basic constraints extension, in accordance with IETF RFC 5280.
- - The pathLenConstraint field must be set to 0 for the Issuing CA certificate.
- - The Extended Key Usage extension must be present, and must contain the minimum set of Extended Key Usage object identifiers (OIDs). The anyExtendedKeyUsage OID (2.5.29.37.0) must not be specified.
- - The CRL Distribution Point (CDP) extension must be present in the Issuer CA certificate.<br>
- > [!IMPORTANT]
- > The CDP extension is present in OPC Vault CA certificates. Nevertheless, OPC UA devices use custom methods to distribute CRLs.
- - The Authority Information Access extension must be present in the subscriber certificates.<br>
- > [!IMPORTANT]
- > The Authority Information Access extension is present in OPC Vault subscriber certificates. Nevertheless, OPC UA devices use custom methods to distribute Issuer CA information.
-- Approved asymmetric algorithms, key lengths, hash functions and padding modes must be used.
- - RSA and SHA-2 are the only supported algorithms.
- - RSA can be used for encryption, key exchange, and signature.
- - RSA encryption must use only the OAEP, RSA-KEM, or RSA-PSS padding modes.
- - Key lengths greater than or equal to 2048 bits are required.
- - Use the SHA-2 family of hash algorithms (SHA256, SHA384, and SHA512).
- - RSA Root CA keys with a typical lifetime greater than or equal to 20 years must be 4096 bits or greater.
- - RSA Issuer CA keys must be at least 2048 bits. If the CA certificate expiration date is after 2030, the CA key must be 4096 bits or greater.
-- Certificate lifetime
- - Root CA certificates: The maximum certificate validity period for root CAs must not exceed 25 years.
- - Sub CA or online Issuer CA certificates: The maximum certificate validity period for CAs that are online and issue only subscriber certificates must not exceed 6 years. For these CAs, the related private signature key must not be used longer than 3 years to issue new certificates.<br>
- > [!IMPORTANT]
- > The Issuer certificate, as it is generated in the default OPC Vault microservice without an external Root CA, is treated like an online Sub CA, with the respective requirements and lifetimes. The default lifetime is set to 5 years, with a key length greater than or equal to 2048 bits.
- - All asymmetric keys must have a maximum 5-year lifetime, and a recommended 1-year lifetime.<br>
- > [!IMPORTANT]
- > By default, the lifetimes of application certificates issued with OPC Vault have a lifetime of 2 years, and should be replaced every year.
- - Whenever a certificate is renewed, it's renewed with a new key.
-- OPC UA-specific extensions in application instance certificates
- - The subjectAltName extension includes the application Uri and hostnames. These might also include FQDN, IPv4, and IPv6 addresses.
- - The keyUsage includes digitalSignature, nonRepudiation, keyEncipherment, and dataEncipherment.
- - The extendedKeyUsage includes serverAuth and clientAuth.
- - The authorityKeyIdentifier is specified in signed certificates.
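-
-As referenced above, a hedged spot-check of an issued certificate against this profile, using PowerShell and .NET's X.509 types. This is only an illustration; the file name is a placeholder:
-
-```powershell
-# Load an issued certificate (DER or PEM) and check key size, signature hash, and lifetime.
-$cert = [System.Security.Cryptography.X509Certificates.X509Certificate2]::new("application.der")
-$rsa  = [System.Security.Cryptography.X509Certificates.RSACertificateExtensions]::GetRSAPublicKey($cert)
-
-"Key size:  {0} bits" -f $rsa.KeySize                             # expect >= 2048
-"Signature: {0}" -f $cert.SignatureAlgorithm.FriendlyName         # expect sha256RSA or stronger
-"Lifetime:  {0} days" -f ($cert.NotAfter - $cert.NotBefore).Days  # application certs default to ~2 years
-```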
-
-### CA keys and certificates must meet minimum requirements
-
-- **Private keys**: RSA keys must be at least 2048 bits. If the CA certificate expiration date is after 2030, the CA key must be 4096 bits or greater.
-- **Lifetime**: The maximum certificate validity period for CAs that are online and issue only subscriber certificates must not exceed 6 years. For these CAs, the related private signature key must not be used longer than 3 years to issue new certificates.
-
-### CA keys are protected using Hardware Security Modules
-
-OPC Vault uses Azure Key Vault Premium, and keys are protected by FIPS 140-2 Level 2 validated hardware security modules (HSMs).
-
-The cryptographic modules that Key Vault uses, whether HSM or software, are FIPS validated. Keys created or imported as HSM-protected are processed inside an HSM, validated to FIPS 140-2 Level 2. Keys created or imported as software-protected are processed inside cryptographic modules validated to FIPS 140-2 Level 1.
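-
-For illustration, an HSM-protected key can be created in a Key Vault Premium instance with the Az PowerShell module. A minimal sketch with placeholder names; the OPC Vault deployment manages its own keys, so treat this only as an example of the key type involved:
-
-```powershell
-# Create a 4096-bit, HSM-protected RSA key (requires the Key Vault Premium SKU).
-Add-AzKeyVaultKey -VaultName "myResourceGroup-xxxxx" -Name "ContosoIssuerCaKey" `
-    -Destination "HSM" -Size 4096
-```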
-
-## Operational practices
-
-### Document and maintain standard operational PKI practices for certificate enrollment
-
-Document and maintain standard operational procedures (SOPs) for how CAs issue certificates, including:
-- How the subscriber is identified and authenticated.
-- How the certificate request is processed and validated (if applicable, also include how certificate renewal and rekey requests are processed).
-- How issued certificates are distributed to the subscribers.
-
-The OPC Vault microservice SOP is described in [OPC Vault architecture](overview-opc-vault-architecture.md) and [Manage the OPC Vault certificate service](howto-opc-vault-manage.md). The practices follow "OPC Unified Architecture Specification Part 12: Discovery and Global Services."
--
-### Document and maintain standard operational PKI practices for certificate revocation
-
-The certificate revocation process is described in [OPC Vault architecture](overview-opc-vault-architecture.md) and [Manage the OPC Vault certificate service](howto-opc-vault-manage.md).
-
-### Document CA key generation ceremony
-
-The Issuer CA key generation in the OPC Vault microservice is simplified, due to the secure storage in Azure Key Vault. For more information, see [Manage the OPC Vault certificate service](howto-opc-vault-manage.md).
-
-However, when you're using an external Root certification authority, a CA key generation ceremony must adhere to the following requirements.
-
-The CA key generation ceremony must be performed against a documented script that includes at least the following items:
-- Definition of roles and participant responsibilities.
-- Approval for conduct of the CA key generation ceremony.
-- Cryptographic hardware and activation materials required for the ceremony.
-- Hardware preparation (including asset/configuration information update and sign-off).
-- Operating system installation.
-- Specific steps performed during the CA key generation ceremony, such as:
- - CA application installation and configuration.
- - CA key generation.
- - CA key backup.
- - CA certificate signing.
- - Import of signed keys in the protected HSM of the service.
- - CA system shutdown.
- - Preparation of materials for storage.
--
-## Next steps
-
-Now that you have learned how to securely manage OPC Vault, you can:
-
-> [!div class="nextstepaction"]
-> [Secure OPC UA devices with OPC Vault](howto-opc-vault-secure.md)
iot-accelerators Howto Opc Vault Secure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-accelerators/howto-opc-vault-secure.md
- Title: Secure the communication of OPC UA devices with OPC Vault - Azure | Microsoft Docs
-description: How to register OPC UA applications, and how to issue signed application certificates for your OPC UA devices with OPC Vault.
- Previously updated : 8/16/2018
-# Use the OPC Vault certificate management service
-
-> [!IMPORTANT]
-> While we update this article, see [Azure Industrial IoT](https://azure.github.io/Industrial-IoT/) for the most up to date content.
-
-This article explains how to register applications, and how to issue signed application certificates for your OPC UA devices.
-
-## Prerequisites
-
-### Deploy the certificate management service
-
-First, deploy the service to the Azure cloud. For details, see [Deploy the OPC Vault certificate management service](howto-opc-vault-deploy.md).
-
-### Create the Issuer CA certificate
-
-If you haven't done so yet, create the Issuer CA certificate. For details, see [Create and manage the Issuer certificate for OPC Vault](howto-opc-vault-manage.md).
-
-## Secure OPC UA applications
-
-### Step 1: Register your OPC UA application
-
-> [!IMPORTANT]
-> The Writer role is required to register an application.
-
-1. Open your certificate service at `https://myResourceGroup-app.azurewebsites.net`, and sign in.
-2. Go to **Register New**. For an application registration, a user needs to have at least the Writer role assigned.
-3. The entry form follows naming conventions in OPC UA. For example, in the following screenshot, the settings for the [OPC UA Reference Server](https://github.com/OPCFoundation/UA-.NETStandard/tree/master/Applications/ReferenceServer) sample in the OPC UA .NET Standard stack are shown:
-
- ![Screenshot of UA Reference Server Registration](media/howto-opc-vault-secure/reference-server-registration.png "UA Reference Server Registration")
-
-4. Select **Register** to register the application in the certificate service application database. The workflow directly guides the user to the next step to request a signed certificate for the application.
-
-### Step 2: Secure your application with a CA signed application certificate
-
-Secure your OPC UA application by issuing a signed certificate based on a Certificate Signing
-Request (CSR). Alternatively, you can request a new key pair, which includes a new private key in PFX or PEM format. For information about which method is supported for your application, see the documentation of your OPC UA device. In general, the CSR method is recommended, because it doesn't require a private key to be transferred over a wire.
-
-#### Request a new certificate with a new keypair
-
-1. Go to **Applications**.
-2. Select **New Request** for a listed application.
-
- ![Screenshot of Request New Certificate](media/howto-opc-vault-secure/request-new-certificate.png "Request New Certificate")
-
-3. Select **Request new KeyPair and Certificate** to request a private key and a new signed certificate with the public key for your application.
-
- ![Screenshot of Generate a New KeyPair and Certificate](media/howto-opc-vault-secure/generate-new-key-pair.png "Generate New Key Pair")
-
-4. Fill in the form with a subject and the domain names. For the private key, choose PEM or PFX with password. Select **Generate New KeyPair** to create the certificate request.
-
- ![Screenshot that shows the View Certificate Request Details screen and the Generate New KeyPair button.](media/howto-opc-vault-secure/approve-reject.png "Approve Certificate")
-
-5. Approval requires a user with the Approver role, and with signing permissions in Azure Key Vault. In the typical workflow, the Approver and Requester roles should be assigned to different users. Select **Approve** or **Reject** to start or cancel the actual creation of the key pair and the signing operation. The new key pair is created and stored securely in Azure Key Vault, until downloaded by the certificate requester. The resulting certificate with public key is signed by the CA. These operations can take a few seconds to finish.
-
- ![Screenshot of View Certificate Request Details, with approval message at bottom](media/howto-opc-vault-secure/view-key-pair.png "View Key Pair")
-
-6. The resulting private key (PFX or PEM) and certificate (DER) can be downloaded here as binary files in the selected format. A base64-encoded version is also available, for example, to copy and paste the certificate into a command line or text entry.
-7. After the private key is downloaded and stored securely, you can select **Delete Private Key**. The certificate with the public key remains available for future use.
-8. Because a CA-signed certificate is used, the CA certificate and Certificate Revocation List (CRL) should be downloaded here as well.
-
-How you apply the new key pair depends on the OPC UA device. Typically, the CA certificate and CRL are copied to a `trusted` folder, while the public and private keys of the application certificate are applied to an `own` folder in the certificate store. Some devices might already support server push for certificate updates. Refer to the documentation of your OPC UA device.
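-
-For reference, a typical certificate store layout used by OPC UA .NET Standard applications looks like the following. The exact folder names and locations vary by device and vendor, so treat this only as an illustration:
-
-```
-pki/
-├── own/
-│   ├── certs/      # application certificate (public part)
-│   └── private/    # application private key (PFX or PEM)
-├── trusted/
-│   ├── certs/      # Issuer CA certificate and other trusted peer certificates
-│   └── crl/        # Certificate Revocation Lists
-└── issuer/
-    └── certs/      # intermediate CA certificates needed for chain validation
-```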
-
-#### Request a new certificate with a CSR
-
-1. Go to **Applications**.
-2. Select **New Request** for a listed application.
-
- ![Screenshot of Request New Certificate](media/howto-opc-vault-secure/request-new-certificate.png "Request New Certificate")
-
-3. Select **Request new Certificate with Signing Request** to request a new signed certificate for your application.
-
- ![Screenshot of Generate a new Certificate](media/howto-opc-vault-secure/generate-new-certificate.png "Generate New Certificate")
-
-4. Upload CSR by selecting a local file or by pasting a base64 encoded CSR in the form. Select **Generate New Certificate**.
-
- ![Screenshot of View Certificate Request Details](media/howto-opc-vault-secure/approve-reject-csr.png "Approve CSR")
-
-5. Approval requires a user with the Approver role, and with signing permissions in Azure Key Vault. Select **Approve** or **Reject** to start or cancel the actual signing operation. The resulting certificate with public key is signed by the CA. This operation can take a few seconds to finish.
-
- ![Screenshot that shows the View Certificate Request Details and includes an approval message at bottom.](media/howto-opc-vault-secure/view-cert-csr.png "View Certificate")
-
-6. The resulting certificate (DER) can be downloaded here as a binary file. A base64-encoded version is also available, for example, to copy and paste the certificate into a command line or text entry.
-7. After the certificate is downloaded and stored securely, you can select **Delete Certificate**.
-8. Because a CA-signed certificate is used, the CA certificate and CRL should be downloaded here as well.
-
-How you apply the new certificate depends on the OPC UA device. Typically, the CA certificate and CRL are copied to a `trusted` folder, while the application certificate is applied to an `own` folder in the certificate store. Some devices might already support server push for certificate updates. Refer to the documentation of your OPC UA device.
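-
-If your OPC UA device can't produce a CSR itself, you can generate one externally. A hedged sketch using OpenSSL 1.1.1 or later from PowerShell; the subject and application URI are hypothetical and must match your device's configuration:
-
-```powershell
-# Generate a 2048-bit RSA key and a CSR that carries the OPC UA application
-# URI in the subjectAltName extension (required for application certificates).
-openssl req -new -newkey rsa:2048 -sha256 -nodes `
-    -keyout application.key -out application.csr `
-    -subj "/CN=MyOpcUaServer/O=Contoso" `
-    -addext "subjectAltName=URI:urn:contoso:factory:myopcuaserver"
-```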
-
-### Step 3: Device secured
-
-The OPC UA device is now ready to communicate with other OPC UA devices secured by CA signed certificates, without further configuration.
-
-## Next steps
-
-Now that you have learned how to secure OPC UA devices, you can:
-
-> [!div class="nextstepaction"]
-> [Run a secure certificate management service](howto-opc-vault-secure-ca.md)
iot-accelerators Iot Accelerators Connected Factory Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-accelerators/iot-accelerators-connected-factory-configure.md
- Title: Configure the Connected Factory topology - Azure | Microsoft Docs
-description: This article describes how to configure the Connected Factory solution accelerator including its topology.
- Previously updated : 12/12/2017
-# Configure the Connected Factory solution accelerator
-
-> [!IMPORTANT]
-> While we update this article, see [Azure Industrial IoT](https://azure.github.io/Industrial-IoT/) for the most up to date content.
-
-The Connected Factory solution accelerator shows a simulated dashboard for a fictional company Contoso. This company has factories in numerous locations globally.
-
-This article uses Contoso as an example to describe how to configure the topology of a Connected Factory solution.
-
-## Simulated factories configuration
-
-Each Contoso factory has production lines that consist of three stations each. Each station is a real OPC UA server with a specific role:
-
-* Assembly station
-* Test station
-* Packaging station
-
-These OPC UA servers expose OPC UA nodes, and [OPC Publisher](overview-opc-publisher.md) sends the values of these nodes to Connected Factory. The published data includes:
-
-* Current operational status such as current power consumption.
-* Production information such as the number of products produced.
-
-You can use the dashboard to drill into the Contoso factory topology from a global view down to a station level view. The Connected Factory dashboard enables:
-
-* The visualization of OEE and KPI figures for each layer in the topology.
-* The visualization of current values of OPC UA nodes in the stations.
-* The aggregation of the OEE and KPI figures from the station level to the global level.
-* The visualization of alerts and actions to perform if values reach specific thresholds.
-
-## Connected Factory topology
-
-The topology of factories, production lines, and stations is hierarchical:
-
-* The global level has factory nodes as children.
-* The factories have production line nodes as children.
-* The production lines have station nodes as children.
-* The stations (OPC UA servers) have OPC UA nodes as children.
-
-Every node in the topology has a common set of properties that define:
-
-* A unique identifier for the topology node.
-* A name.
-* A description.
-* An image.
-* The children of the topology node.
-* Minimum, target, and maximum values for OEE and KPI figures and the alert actions to execute.
-
-## Topology configuration file
-
-To configure the properties listed in the previous section, the Connected Factory solution uses a configuration file called [ContosoTopologyDescription.json](https://github.com/Azure/azure-iot-connected-factory/blob/master/WebApp/Contoso/Topology/ContosoTopologyDescription.json).
-
-You can find this file in the solution source code in the `WebApp/Contoso/Topology` folder.
-
-The following snippet shows an outline of the `ContosoTopologyDescription.json` configuration file:
-
-```json
-{
- <global_configuration>,
- "Factories": [
- <factory_configuration>,
- "ProductionLines": [
- <production_line_configuration>,
- "Stations": [
- <station_configuration>,
- <more station_configurations>
- ],
- <more production_line_configurations>
- ]
- <more factory_configurations>
- ]
-}
-```
-
-The common properties of `<global_configuration>`, `<factory_configuration>`, `<production_line_configuration>`, and `<station_configuration>` are:
-
-* **Name** (type string)
-
- Defines a descriptive name (ideally a single word) for the topology node to show in the dashboard.
-
-* **Description** (type string)
-
- Describes the topology node in more detail.
-
-* **Image** (type string)
-
- The path to an image in the WebApp solution to show when information about the topology node is shown in the dashboard.
-
-* **OeeOverall**, **OeePerformance**, **OeeAvailability**, **OeeQuality**, **Kpi1**, **Kpi2** (type `<performance_definition>`)
-
- These properties define minimum, target, and maximum values of the operational figure used to generate alerts. These properties also define the actions to execute if an alert is detected.
-
-The `<factory_configuration>` and `<production_line_configuration>` items have a property:
-
-* **Guid** (type string)
-
- Uniquely identifies the topology node.
-
-`<factory_configuration>` has a property:
-
-* **Location** (type `<location_definition>`)
-
- Specifies where the factory is located.
-
-`<station_configuration>` has properties:
-
-* **OpcUri** (type string)
-
- This property must be set to the OPC UA Application URI of the OPC UA server.
- Because it must be globally unique by OPC UA specification, this property is used to identify the station topology node.
-
-* **OpcNodes**, an array of OPC UA nodes (type `<opc_node_description>`)
-
-`<location_definition>` has properties:
-
-* **City** (type string)
-
- Name of the city closest to the location.
-
-* **Country** (type string)
-
- Country of the location.
-
-* **Latitude** (type double)
-
- Latitude of the location.
-
-* **Longitude** (type double)
-
- Longitude of the location.
-
-`<performance_definition>` has properties:
-
-* **Minimum** (type double)
-
- Lower threshold the value can reach. If the current value is below this threshold, an alert is generated.
-
-* **Target** (type double)
-
- Ideal target value.
-
-* **Maximum** (type double)
-
- Upper threshold the value can reach. If the current value is above this threshold, an alert is generated.
-
-* **MinimumAlertActions** (type `<alert_action>`)
-
- Defines the set of actions that can be taken in response to a minimum alert.
-
-* **MaximumAlertActions** (type `<alert_action>`)
-
- Defines the set of actions that can be taken in response to a maximum alert.
-
-`<alert_action>` has properties:
-
-* **Type** (type string)
-
- Type of the alert action. The following types are known:
-
- * **AcknowledgeAlert**: the status of the alert should change to acknowledged.
- * **CloseAlert**: all older alerts of the same type should no longer be shown in the dashboard.
- * **CallOpcMethod**: an OPC UA method should be called.
- * **OpenWebPage**: a browser window should be opened showing additional contextual information.
-
-* **Description** (type string)
-
- Description of the action shown in the dashboard.
-
-* **Parameter** (type string)
-
- Parameters required to execute the action. The value depends on the action type.
-
- * **AcknowledgeAlert**: no parameter required.
- * **CloseAlert**: no parameter required.
- * **CallOpcMethod**: the node information and parameters of the OPC UA method to call in the format "NodeId of parent node, NodeId of method to call, URI of the OPC UA server."
- * **OpenWebPage**: the URL to show in the browser window.
-
-`<opc_node_description>` contains information about OPC UA nodes in a station (OPC UA server). Nodes that don't correspond to existing OPC UA nodes but are used as storage in the computation logic of Connected Factory are also valid. It has the following properties:
-
-* **NodeId** (type string)
-
- Address of the OPC UA node in the station's (OPC UA server's) address space. Syntax must be as specified in the OPC UA specification for a NodeId.
-
-* **SymbolicName** (type string)
-
- Name to be shown in the dashboard when the value of this OPC UA node is shown.
-
-* **Relevance** (array of type string)
-
- Indicates for which computation of OEE or KPI the OPC UA node value is relevant. Each array element can be one of the following values:
-
- * **OeeAvailability_Running**: the value is relevant for calculation of OEE Availability.
- * **OeeAvailability_Fault**: the value is relevant for calculation of OEE Availability.
- * **OeePerformance_Ideal**: the value is relevant for calculation of OEE Performance and is typically a constant value.
- * **OeePerformance_Actual**: the value is relevant for calculation of OEE Performance.
- * **OeeQuality_Good**: the value is relevant for calculation of OEE Quality.
- * **OeeQuality_Bad**: the value is relevant for calculation of OEE Quality.
- * **Kpi1**: the value is relevant for calculation of KPI1.
- * **Kpi2**: the value is relevant for calculation of KPI2.
-
-* **OpCode** (type string)
-
- Indicates how the value of the OPC UA node is handled in Time Series Insight queries and OEE/KPI calculations. Each Time Series Insight query targets a specific timespan, which is a parameter of the query and delivers a result. The OpCode controls how the result is computed and can be one of the following values:
-
- * **Diff**: difference between the last and the first value in the timespan.
- * **Avg**: the average of all values in the timespan.
- * **Sum**: the sum of all values in the timespan.
- * **Last**: currently not used.
- * **Count**: the number of values in the timespan.
- * **Max**: the maximal value in the timespan.
- * **Min**: the minimal value in the timespan.
- * **Const**: the result is the value specified by property ConstValue.
- * **SubMaxMin**: the difference between the maximal and the minimal value.
- * **Timespan**: the timespan.
-
-* **Units** (type string)
-
- Defines a unit of the value for display in the dashboard.
-
-* **Visible** (type boolean)
-
- Controls if the value should be shown in the dashboard.
-
-* **ConstValue** (type double)
-
- If the **OpCode** is **Const**, then this property is the value of the node.
-
-* **Minimum** (type double)
-
- If the current value falls below this value, then a minimum alert is generated.
-
-* **Maximum** (type double)
-
- If the current value rises above this value, then a maximum alert is generated.
-
-* **MinimumAlertActions** (type `<alert_action>`)
-
- Defines the set of actions that can be taken in response to a minimum alert.
-
-* **MaximumAlertActions** (type `<alert_action>`)
-
- Defines the set of actions that can be taken in response to a maximum alert.
-
-At the station level, you also see **Simulation** objects. These objects are only used to configure the Connected Factory simulation and should not be used to configure a real topology.
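-
-Pulling these properties together, a hedged sketch of a single `<station_configuration>` entry. All names, URIs, and thresholds are illustrative only, and the exact nesting may differ from your solution version:
-
-```json
-{
-  "Name": "Assembly",
-  "Description": "Assembly station of production line 1",
-  "Image": "img/assembly.jpg",
-  "OpcUri": "urn:contoso:munich:productionline1:assemblystation",
-  "Kpi1": {
-    "Minimum": 20,
-    "Target": 50,
-    "Maximum": 100,
-    "MinimumAlertActions": [
-      {
-        "Type": "OpenWebPage",
-        "Description": "Open the troubleshooting guide",
-        "Parameter": "https://example.com/troubleshooting"
-      }
-    ]
-  },
-  "OpcNodes": [
-    {
-      "NodeId": "ns=2;i=385",
-      "SymbolicName": "NumberOfManufacturedProducts",
-      "Relevance": [ "Kpi1", "OeeQuality_Good" ],
-      "OpCode": "SubMaxMin",
-      "Units": "products",
-      "Visible": true
-    }
-  ]
-}
-```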
-
-## How the configuration data is used at runtime
-
-All the properties used in the configuration file can be grouped into different categories depending on how they are used. Those categories are:
-
-### Visual appearance
-
-Properties in this category define the visual appearance of the Connected Factory dashboard. Examples include:
-
-* Name
-* Description
-* Image
-* Location
-* Units
-* Visible
-
-### Internal topology tree addressing
-
-The WebApp maintains an internal data dictionary containing information about all topology nodes. The properties **Guid** and **OpcUri** are used as keys to access this dictionary and need to be unique.
-
-### OEE/KPI computation
-
-The OEE/KPI figures for the Connected Factory simulation are parameterized by:
-
-* The OPC UA node values to be included in the calculation.
-* How the figure is computed from the telemetry values.
-
-Connected Factory uses the OEE formulas published by the [OEE Foundation](http://www.oeefoundation.org).
-
-OPC UA node objects in stations enable tagging for usage in OEE/KPI calculation. The **Relevance** property indicates for which OEE/KPI figure the OPC UA node value should be used. The **OpCode** property defines how the value is included in the computation.
-
-### Alert handling
-
-Connected Factory supports a simple minimum/maximum threshold-based alert generation mechanism. There are a number of predefined actions you can configure in response to those alerts. The following properties control this mechanism:
-
-* Maximum
-* Minimum
-* MaximumAlertActions
-* MinimumAlertActions
-
-## Correlating to telemetry data
-
-For certain operations, such as visualizing the last value or creating Time Series Insight queries, the WebApp needs an addressing scheme for the ingested telemetry data. The telemetry sent to Connected Factory also needs to be stored in internal data structures. The two properties enabling these operations are at station (OPC UA server) and OPC UA node level:
-
-* **OpcUri**
-
- Identifies (globally unique) the OPC UA server the telemetry comes from. In the ingested messages, this property is sent as **ApplicationUri**.
-
-* **NodeId**
-
- Identifies the node value in the OPC UA server. The format of the property must be as specified in the OPC UA specification. In the ingested messages, this property is sent as **NodeId**.
-
-See [What is OPC Publisher](overview-opc-publisher.md) for more information on how the telemetry data is ingested to Connected Factory.
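-
-For illustration, a simplified sketch of a single ingested telemetry message. The exact schema depends on the OPC Publisher version and configuration, so treat the field layout as an assumption:
-
-```json
-{
-  "ApplicationUri": "urn:contoso:munich:productionline1:assemblystation",
-  "NodeId": "ns=2;i=385",
-  "Value": {
-    "Value": 2387,
-    "SourceTimestamp": "2021-03-22T10:15:00Z"
-  }
-}
-```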
-
-## Example: How KPI1 is calculated
-
-The configuration in the `ContosoTopologyDescription.json` file controls how OEE/KPI figures are calculated. The following example shows how properties in this file control the computation of KPI1.
-
-In Connected Factory KPI1 is used to measure the number of successfully manufactured products in the last hour. Each station (OPC UA server) in the Connected Factory simulation provides an OPC UA node (`NodeId: "ns=2;i=385"`), which provides the telemetry to compute this KPI.
-
-The configuration for this OPC UA node looks like the following snippet:
-
-```json
-{
- "NodeId": "ns=2;i=385",
- "SymbolicName": "NumberOfManufacturedProducts",
- "Relevance": [ "Kpi1", "OeeQuality_Good" ],
- "OpCode": "SubMaxMin"
-},
-```
-
-This configuration enables querying of the telemetry values of this node using Time Series Insights. The Time Series Insights query retrieves:
-
-* The number of values.
-* The minimal value.
-* The maximal value.
-* The average of all values.
-* The sum of all values for all unique **OpcUri** (**ApplicationUri**), **NodeId** pairs in a given timespan.
-
-One characteristic of the **NumberOfManufacturedProducts** node value is that it only increases. To calculate the number of products manufactured in the timespan, Connected Factory uses the **OpCode** **SubMaxMin**. The calculation retrieves the minimum value at the start of the timespan and the maximum value at the end of the timespan.
-
-The **OpCode** in the configuration instructs the computation logic to calculate the result as the difference between the maximum and minimum values. Those results are then accumulated bottom up to the root (global) level and shown in the dashboard.
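-
-As a worked example with hypothetical counter values:
-
-```
-min(timespan) = 2345    counter value at the start of the hour (hypothetical)
-max(timespan) = 2387    counter value at the end of the hour (hypothetical)
-SubMaxMin     = 2387 - 2345 = 42 products manufactured in that hour
-```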
-
-## Next steps
-
-A suggested next step is to learn how to [Customize the Connected Factory solution](iot-accelerators-connected-factory-customize.md).
iot-accelerators Iot Accelerators Connected Factory Customize https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-accelerators/iot-accelerators-connected-factory-customize.md
- Title: Customize the Connected Factory solution - Azure | Microsoft Docs
-description: A description of how to customize the behavior of the Connected Factory solution accelerator.
- Previously updated : 12/14/2017
-# Customize how the Connected Factory solution displays data from your OPC UA servers
-
-> [!IMPORTANT]
-> While we update this article, see [Azure Industrial IoT](https://azure.github.io/Industrial-IoT/) for the most up to date content.
-
-The Connected Factory solution aggregates and displays data from the OPC UA servers connected to the solution. You can browse and send commands to the OPC UA servers in your solution. For more information about OPC UA, see the [Connected Factory FAQ](iot-accelerators-faq-cf.md).
-
-Examples of aggregated data in the solution include the Overall Equipment Efficiency (OEE) and Key Performance Indicators (KPIs) that you can view in the dashboard at the factory, line, and station levels. The following screenshot shows the OEE and KPI values for the **Assembly** station, on **Production line 1**, in the **Munich** factory:
-
-![Example of OEE and KPI values in the solution][img-oee-kpi]
-
-The solution enables you to view detailed information from specific data items from the OPC UA servers, called *stations*. The following screenshot shows plots of the number of manufactured items from a specific station:
-
-![Plots of number of manufactured items][img-manufactured-items]
-
-If you click one of the graphs, you can explore the data further using Time Series Insights (TSI):
-
-![Explore data using Time Series Insights][img-tsi]
-
-This article describes:
-
-- How the data is made available to the various views in the solution.
-- How you can customize the way the solution displays the data.
-
-## Data sources
-
-The Connected Factory solution displays data from the OPC UA servers connected to the solution. The default installation includes several OPC UA servers running a factory simulation. You can add your own OPC UA servers that [connect through a gateway][lnk-connect-cf] to your solution.
-
-You can browse the data items that a connected OPC UA server can send to your solution in the dashboard:
-
-1. Choose **Browser** to navigate to the **Select an OPC UA server** view:
-
- ![Navigate to the Select an OPC UA server view][img-select-server]
-
-1. Select a server and click **Connect**. Click **Proceed** when the security warning appears.
-
- > [!NOTE]
- > This warning only appears once for each server and establishes a trust relationship between the solution dashboard and the server.
-
-1. You can now browse the data items that the server can send to the solution. Items that are being sent to the solution have a check mark:
-
- ![Published items][img-published]
-
-1. If you are an *Administrator* in the solution, you can choose to publish a data item to make it available in the Connected Factory solution. As an Administrator, you can also change the value of data items and call methods in the OPC UA server.
-
-## Map the data
-
-The Connected Factory solution maps and aggregates the published data items from the OPC UA server to the various views in the solution. The Connected Factory solution deploys to your Azure account when you provision the solution. A JSON file in the Visual Studio Connected Factory solution stores this mapping information. You can view and modify this JSON configuration file in the Connected Factory Visual Studio solution. You can redeploy the solution after you make a change.
-
-You can use the configuration file to:
-
-- Edit the existing simulated factories, production lines, and stations.
-- Map data from real OPC UA servers that you connect to the solution.
-
-For more information about mapping and aggregating the data to meet your specific requirements, see [How to configure the Connected Factory solution accelerator](iot-accelerators-connected-factory-configure.md).
-
-## Deploy the changes
-
-When you have finished making changes to the **ContosoTopologyDescription.json** file, you must redeploy the Connected Factory solution to your Azure account.
-
-The **azure-iot-connected-factory** repository includes a **build.ps1** PowerShell script you can use to rebuild and deploy the solution.
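-
-A hedged sketch of the redeployment step; run it from the repository root, and see the script's help for the parameters that apply to your deployment:
-
-```powershell
-# Rebuild and redeploy after editing ContosoTopologyDescription.json.
-./build.ps1
-```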
-
-## Next Steps
-
-Learn more about the Connected Factory solution accelerator by reading the following articles:
-
-* [Permissions on the azureiotsolutions.com site][lnk-permissions]
-* [Connected Factory FAQ](iot-accelerators-faq-cf.md)
-* [FAQ][lnk-faq]
--
-[img-oee-kpi]: ./media/iot-accelerators-connected-factory-customize/oeenadkpi.png
-[img-manufactured-items]: ./media/iot-accelerators-connected-factory-customize/manufactured.png
-[img-tsi]: ./media/iot-accelerators-connected-factory-customize/tsi.png
-[img-select-server]: ./media/iot-accelerators-connected-factory-customize/selectserver.png
-[img-published]: ./media/iot-accelerators-connected-factory-customize/published.png
--
-[lnk-permissions]: iot-accelerators-permissions.md
-[lnk-faq]: iot-accelerators-faq.md
iot-accelerators Iot Accelerators Connected Factory Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-accelerators/iot-accelerators-connected-factory-dashboard.md
- Title: Use the Connected Factory dashboard - Azure | Microsoft Docs
-description: This article describes how to use features of the Connected Factory dashboard to monitor and manage your industrial IoT devices.
- Previously updated : 07/10/2018
-# Use features in the Connected Factory solution accelerator dashboard
-
-> [!IMPORTANT]
-> While we update this article, see [Azure Industrial IoT](https://azure.github.io/Industrial-IoT/) for the most up to date content.
-
-The [Deploy a cloud-based solution to manage my industrial IoT devices](quickstart-connected-factory-deploy.md) quickstart showed you how to navigate the dashboard and respond to alarms. This how-to guide shows you some additional dashboard features you can use to monitor and manage your industrial IoT devices.
-
-## Apply filters
-
-You can filter the information displayed on the dashboard either in the **Factory Locations** panel or the **Alarms** panel:
-
-1. Click the **funnel** icon to display a list of available filters in either the factory locations panel or the alarms panel.
-
-1. The filters panel is displayed:
-
- [![Connected Factory solution accelerator filters](./media/iot-accelerators-connected-factory-dashboard/filterpanel-inline.png)](./media/iot-accelerators-connected-factory-dashboard/filterpanel-expanded.png#lightbox)
-
-1. Choose the filter that you require and click **Apply**. It's also possible to type free text into the filter fields.
-
-1. The filter is then applied. The extra funnel icon indicates that a filter is applied:
-
- [![Connected Factory solution accelerator filter applied](./media/iot-accelerators-connected-factory-dashboard/filterapplied-inline.png)](./media/iot-accelerators-connected-factory-dashboard/filterapplied-expanded.png#lightbox)
-
- > [!NOTE]
- > An active filter doesn't affect the displayed OEE and KPI values, it only filters the list contents.
-
-1. To clear a filter, click the funnel and click **Clear** in the filter panel.
-
-## Browse an OPC UA server
-
-When you deploy the solution accelerator, you automatically provision a set of simulated OPC UA servers that you can browse from the dashboard. Simulated servers make it easy for you to experiment with the solution accelerator without the need to deploy real servers.
-
-1. Click the **browser icon** in the dashboard navigation bar:
-
- [![Connected Factory solution accelerator server browser](./media/iot-accelerators-connected-factory-dashboard/browser-inline.png)](./media/iot-accelerators-connected-factory-dashboard/browser-expanded.png#lightbox)
-
-1. Choose one of the servers from the list that shows the servers deployed for you in the solution accelerator:
-
- [![Connected Factory solution accelerator server list](./media/iot-accelerators-connected-factory-dashboard/serverlist-inline.png)](./media/iot-accelerators-connected-factory-dashboard/serverlist-expanded.png#lightbox)
-
-1. Click **Connect**. A security dialog is displayed. For the simulation, it's safe to click **Proceed**.
-
-1. To expand any of the nodes in the server tree, click it. Nodes that are publishing telemetry have a check mark beside them:
-
- [![Connected Factory solution accelerator server tree](./media/iot-accelerators-connected-factory-dashboard/servertree-inline.png)](./media/iot-accelerators-connected-factory-dashboard/servertree-expanded.png#lightbox)
-
-1. Right-click an item to read, write, publish, or call that node. The actions available to you depend on your permissions and the attributes of the node. The read option displays a context panel showing the value of the specific node. The write option displays a context panel where you can enter a new value. The call option displays a context panel where you can enter the parameters for the call.
-
-## Publish a node
-
-When you browse a *simulated OPC UA server*, you can also choose to publish new nodes. You can analyze the telemetry from these nodes in the solution. These *simulated OPC UA servers* make it easy to experiment with the solution accelerator without deploying real devices:
-
-1. Browse to a node in the OPC UA server browser tree that you wish to publish.
-
-1. Right-click the node. Click **Publish**:
-
- [![Connected Factory solution accelerator publish node](./media/iot-accelerators-connected-factory-dashboard/publishnode-inline.png)](./media/iot-accelerators-connected-factory-dashboard/publishnode-expanded.png#lightbox)
-
-1. A context panel appears which tells you that the publish has succeeded. The node appears in the station level view with a check mark beside it:
-
- [![Connected Factory solution accelerator publish success](./media/iot-accelerators-connected-factory-dashboard/publishsuccess-inline.png)](./media/iot-accelerators-connected-factory-dashboard/publishsuccess-expanded.png#lightbox)
-
-## Command and control
-
-The Connected Factory solution allows you to command and control your industrial devices directly from the cloud. You can use this feature to respond to alarms generated by the device. For example, you could send a command to the device to open a pressure release valve. You can find the available commands in the **StationCommands** node in the OPC UA servers browser tree. In this scenario, you open a pressure release valve on the assembly station of a production line in Munich. To use the command and control functionality, you must be in the **Administrator** role for the solution accelerator deployment:
-
-1. Browse to the **StationCommands** node in the OPC UA server browser tree for the Munich, production line 0, assembly station.
-
-1. Choose the command that you wish use. Right-click the **OpenPressureReleaseValve** node. Click **Call**:
-
- [![Connected Factory solution accelerator call command](./media/iot-accelerators-connected-factory-dashboard/callcommand-inline.png)](./media/iot-accelerators-connected-factory-dashboard/callcommand-expanded.png#lightbox)
-
-1. A context panel appears informing you which method you're about to call and any parameter details. Click **Call**:
-
- [![Connected Factory solution accelerator call parameters](./media/iot-accelerators-connected-factory-dashboard/callpanel-inline.png)](./media/iot-accelerators-connected-factory-dashboard/callpanel-expanded.png#lightbox)
-
-1. The context panel is updated to inform you that the method call succeeded. You can verify the call succeeded by reading the value of the pressure node that updated as a result of the call.
-
- [![Connected Factory solution accelerator call success](./media/iot-accelerators-connected-factory-dashboard/callsuccess-inline.png)](./media/iot-accelerators-connected-factory-dashboard/callsuccess-expanded.png#lightbox)
-
-## Behind the scenes
-
-When you deploy a solution accelerator, the deployment process creates multiple resources in the Azure subscription you selected. You can view these resources in the Azure [portal](https://portal.azure.com). The deployment process creates a **resource group** with a name based on the name you choose for your solution accelerator:
-
-[![Connected Factory solution accelerator resource group](./media/iot-accelerators-connected-factory-dashboard/resourcegroup-inline.png)](./media/iot-accelerators-connected-factory-dashboard/resourcegroup-expanded.png#lightbox)
-
-You can view the settings of each resource by selecting it in the list of resources in the resource group.
-
-You can also view the source code for the solution accelerator in the [azure-iot-connected-factory](https://github.com/Azure/azure-iot-connected-factory) GitHub repository.
-
-When you're done, you can delete the solution accelerator from your Azure subscription on the [azureiotsolutions.com](https://www.azureiotsolutions.com/Accelerators#dashboard) site. This site enables you to easily delete all the resources that were provisioned when you created the solution accelerator.
-
-> [!NOTE]
-> To ensure that you delete everything related to the solution accelerator, delete it on the [azureiotsolutions.com](https://www.azureiotsolutions.com/Accelerators#dashboard) site. Do not delete the resource group in the portal.
-
-## Next steps
-
-Now that you've deployed a working solution accelerator, you can continue getting started with IoT solution accelerators by reading the following articles:
-
-* [Configure the Connected Factory solution accelerator](iot-accelerators-connected-factory-configure.md)
-* [Permissions on the azureiotsolutions.com site](iot-accelerators-permissions.md)
iot-accelerators Iot Accelerators Connected Factory Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-accelerators/iot-accelerators-connected-factory-features.md
- Title: Connected Factory solution features - Azure | Microsoft Docs
-description: This article describes an overview of the features of the Connected Factory preconfigured solution, such as cloud dashboard, rules, and alerts.
- Previously updated : 06/10/2019
-# What is Connected Factory IoT solution accelerator?
-
-> [!IMPORTANT]
-> While we update this article, see [Azure Industrial IoT](https://azure.github.io/Industrial-IoT/) for the most up to date content.
-
-Connected Factory is an implementation of Microsoft's Azure Industrial IoT reference architecture, packaged as an open-source solution. You can use it as a starting point for a commercial product. You can deploy a pre-built version of the Connected Factory solution into your Azure subscription from [Azure IoT solution accelerators](https://www.azureiotsolutions.com/#solutions/types/CF).
-
-![Connected Factory solution dashboard](./media/iot-accelerators-connected-factory-features/dashboard.png)
-
-The Connected Factory solution accelerator [code is available on GitHub](https://github.com/Azure/azure-iot-connected-factory).
-
-Connected Factory includes the following features:
-
-## Industrial device interoperability
-
-- Connect to industrial assets with an OPC UA interface.
-- Use the simulated production lines (running OPC UA servers in Docker containers) to see live telemetry from them.
-- Browse the OPC UA information model of the OPC UA servers from a cloud dashboard.
-
-## Remote management
-
-- Configure your OPC UA assets from the cloud dashboard (call methods, read, and write data).
-- Publish and unpublish telemetry data from your OPC UA assets from a cloud dashboard.
-
-## Cloud dashboard
-
-- View telemetry previews directly in a cloud dashboard.
-- View trends in telemetry data and create correlations using the Time Series Insights Explorer dashboard.
-- See calculated Overall Equipment Efficiency (OEE) and Key Performance Indicators (KPIs) from a cloud dashboard.
-- View industrial asset hierarchies in a tree topology as well as on an interactive map.
-- View, acknowledge, and close alerts from a cloud dashboard.
-
-## Azure Time Series Insights
-
-- [Azure Time Series Insights](../time-series-insights/time-series-insights-overview.md) is built for storing, visualizing, and querying large amounts of time-series data. Connected Factory leverages this service.
-- Connected Factory integrates with this service, enabling you to perform deep, real-time analysis of your device data.
-
-## Rules and alerts
-
-[Configure threshold-based rules for alerts](iot-accelerators-connected-factory-configure.md).
-
-## End-to-end security
-
-- Configure security permissions for users using role-based access control (RBAC).
-- End-to-end encryption is implemented using OPC UA authentication (using X.509 certificates) as well as security tokens.
-
-## Customizability
-
-- Customize the solution to meet specific business requirements.
-- The full solution source code is available on GitHub. See the [Connected Factory preconfigured solution](https://github.com/Azure/azure-iot-connected-factory) repository.
-
-## Next steps
-
-To learn more about the Connected Factory solution accelerator, see the Quickstart [Try a cloud-based solution to manage my industrial IoT devices](quickstart-connected-factory-deploy.md).
iot-accelerators Iot Accelerators Faq Cf https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-accelerators/iot-accelerators-faq-cf.md
- Title: Connected Factory solution FAQ - Azure | Microsoft Docs
-description: This article answers the frequently asked questions for the Connected Factory solution accelerator. It includes links to the GitHub repository.
- Previously updated : 12/12/2017
-# Frequently asked questions for Connected Factory solution accelerator
-
-See also, the general [FAQ](iot-accelerators-faq.md) for IoT solution accelerators.
-
-### Where can I find the source code for the solution accelerator?
-
-The source code is stored in the following GitHub repository:
-
-* [Connected Factory solution accelerator](https://github.com/Azure/azure-iot-connected-factory)
-
-### What is OPC UA?
-
-OPC Unified Architecture (UA), released in 2008, is a platform-independent, service-oriented interoperability standard. OPC UA is used by various industrial systems and devices such as industry PCs, PLCs, and sensors. OPC UA integrates the functionality of the OPC Classic specifications into one extensible framework with built-in security. It is a standard that is driven by the OPC Foundation. The [OPC Foundation](https://opcfoundation.org/) is a not-for-profit organization with more than 440 members. The goal of the organization is to use OPC specifications to facilitate multi-vendor, multi-platform, secure and reliable interoperability through:
-
-* Infrastructure
-* Specifications
-* Technology
-* Processes
-
-### Why did Microsoft choose OPC UA for the Connected Factory solution accelerator?
-
-Microsoft chose OPC UA because it is an open, non-proprietary, platform-independent, industry-recognized, and proven standard. It is a requirement for Industrie 4.0 (RAMI4.0) reference architecture solutions, ensuring interoperability between a broad set of manufacturing processes and equipment. Microsoft sees demand from its customers to build Industrie 4.0 solutions. Support for OPC UA helps lower the barrier for customers to achieve their goals and provides immediate business value to them.
-
-### How do I add a public IP address to the simulation VM?
-
-You have two options to add the IP address:
-
-* Use the PowerShell script `Simulation/Factory/Add-SimulationPublicIp.ps1` in the [repository](https://github.com/Azure/azure-iot-connected-factory). Pass in your deployment name as a parameter. For a local deployment, use `<your username>ConnFactoryLocal`. The script prints out the IP address of the VM. For an example invocation, see the sketch after the note below.
-
-* In the Azure portal, locate the resource group of your deployment. Except for a local deployment, the resource group has the name you specified as solution or deployment name. For a local deployment using the build script, the name of the resource group is `<your username>ConnFactoryLocal`. Now add a new **Public IP address** resource to the resource group.
-
-> [!NOTE]
-> In either case, ensure you install the latest patches by following the instructions on the [Ubuntu website](https://wiki.ubuntu.com/Security/Upgrades). Keep the installation up to date for as long as your VM is accessible through a public IP address.
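-
-For example, the script option might look like the following sketch, run from the root of your cloned repository. The `-DeploymentName` parameter name is an assumption for illustration; check `Add-SimulationPublicIp.ps1` for the exact parameter name.
-
-```powershell
-# Sketch: add a public IP address to the simulation VM.
-# The -DeploymentName parameter name is an assumption; check the script for the exact name.
-./Simulation/Factory/Add-SimulationPublicIp.ps1 -DeploymentName "myConnFactory"
-```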
-
-### How do I remove the public IP address from the simulation VM?
-
-You have two options to remove the IP address:
-
-* Use the PowerShell script `Simulation/Factory/Remove-SimulationPublicIp.ps1` in the [repository](https://github.com/Azure/azure-iot-connected-factory). Pass in your deployment name as a parameter. For a local deployment, use `<your username>ConnFactoryLocal`. The script prints out the IP address of the VM.
-
-* In the Azure portal, locate the resource group of your deployment. Except for a local deployment, the resource group has the name you specified as solution or deployment name. For a local deployment using the build script, the name of the resource group is `<your username>ConnFactoryLocal`. Now remove the **Public IP address** resource from the resource group.
-
-### How do I sign in to the simulation VM?
-
-Signing in to the simulation VM is only supported if you have deployed your solution using the PowerShell script `build.ps1` in the [repository](https://github.com/Azure/azure-iot-connected-factory).
-
-If you deployed the solution from www.azureiotsolutions.com, you can't sign in to the VM, because the password is generated randomly and you can't reset it.
-
-1. Add a public IP address to the VM. See [How do I add a public IP address to the simulation VM?](#how-do-i-add-a-public-ip-address-to-the-simulation-vm)
-1. Create an SSH session to your VM using the IP address of the VM (see the sketch after this list).
-1. The username to use is: `docker`.
-1. The password to use depends on the version you used to deploy:
- * For solutions deployed using the `build.ps1` script before 1 June 2017, the password is: `Passw0rd`.
- * For solutions deployed using the `build.ps1` script after 1 June 2017, you can find the password in the `<name of your deployment>.config.user` file. The password is stored in the **VmAdminPassword** setting. The password is generated randomly at deployment time unless you specify it using the `build.ps1` script parameter `-VmAdminPassword`.
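-
-A minimal sketch of the sign-in, assuming the VM has the example public IP address 203.0.113.10:
-
-```powershell
-# Sketch: open an SSH session to the simulation VM (the IP address is an example).
-ssh docker@203.0.113.10
-# When prompted, enter the password from the VmAdminPassword setting in the
-# <name of your deployment>.config.user file.
-```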
-
-### How do I stop and start all docker processes in the simulation VM?
-
-1. Sign in to the simulation VM. See [How do I sign in to the simulation VM?](#how-do-i-sign-in-to-the-simulation-vm)
-1. To check which containers are active, run: `docker ps`.
-1. To stop all simulation containers, run: `./stopsimulation`.
-1. To start all simulation containers:
- * Export a shell variable with the name **IOTHUB_CONNECTIONSTRING**. Use the value of the **IotHubOwnerConnectionString** setting in the `<name of your deployment>.config.user` file. For example:
-
- ```sh
- export IOTHUB_CONNECTIONSTRING="HostName={yourdeployment}.azure-devices.net;SharedAccessKeyName=iothubowner;SharedAccessKey={your key}"
- ```
-
- * Run `./startsimulation`.
-
-### How do I update the simulation in the VM?
-
-If you have made any changes to the simulation, you can use the PowerShell script `build.ps1` in the [repository](https://github.com/Azure/azure-iot-connected-factory) with the `updatesimulation` command. This command builds all the simulation components, stops the simulation in the VM, and then uploads, installs, and starts the updated components.
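-
-For example, a sketch of the invocation from the repository root. Whether the command is passed positionally or as a named parameter is an assumption; check `build.ps1` for its exact syntax.
-
-```powershell
-# Sketch: rebuild the simulation components and redeploy them to the VM.
-./build.ps1 updatesimulation
-```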
-
-### How do I find out the connection string of the IoT hub used by my solution?
-
-If you deployed your solution with the `build.ps1` script in the [repository](https://github.com/Azure/azure-iot-connected-factory), the connection string is the value of **IotHubOwnerConnectionString** in the `<name of your deployment>.config.user` file.
-
-You can also find the connection string using the Azure portal. In the IoT Hub resource in the resource group of your deployment, locate the connection string settings.
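-
-For example, a sketch of both approaches. The deployment name and hub name are placeholders, and the `az iot hub connection-string show` command assumes a recent Azure CLI with the azure-iot extension installed.
-
-```powershell
-# Sketch: find the IoT Hub owner connection string in the deployment config file.
-Select-String -Path "./myConnFactory.config.user" -Pattern "IotHubOwnerConnectionString"
-
-# Or query it from Azure (assumes the azure-iot extension for the Azure CLI).
-az iot hub connection-string show --hub-name <your-iot-hub-name>
-```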
-
-### Which IoT Hub devices does the Connected Factory simulation use?
-
-The simulation self-registers the following devices:
-
-* proxy.beijing.corp.contoso
-* proxy.capetown.corp.contoso
-* proxy.mumbai.corp.contoso
-* proxy.munich0.corp.contoso
-* proxy.rio.corp.contoso
-* proxy.seattle.corp.contoso
-* publisher.beijing.corp.contoso
-* publisher.capetown.corp.contoso
-* publisher.mumbai.corp.contoso
-* publisher.munich0.corp.contoso
-* publisher.rio.corp.contoso
-* publisher.seattle.corp.contoso
-
-Using the [DeviceExplorer](https://github.com/Azure/azure-iot-sdk-csharp/tree/master/tools/) or [the IoT extension for Azure CLI](https://github.com/Azure/azure-iot-cli-extension) tool, you can check which devices are registered with the IoT hub your solution is using. To use DeviceExplorer, you need the connection string for the IoT hub in your deployment. To use the IoT extension for Azure CLI, you need your IoT hub name.
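-
-For example, a sketch using the IoT extension for Azure CLI (the hub name is a placeholder):
-
-```powershell
-# Sketch: list the device identities registered with your IoT hub.
-az iot hub device-identity list --hub-name <your-iot-hub-name> --query "[].deviceId"
-```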
-
-### How can I get log data from the simulation components?
-
-All components in the simulation write log information to log files. These files can be found in the VM in the folder `home/docker/Logs`. To retrieve the logs, you can use the PowerShell script `Simulation/Factory/Get-SimulationLogs.ps1` in the [repository](https://github.com/Azure/azure-iot-connected-factory).
-
-This script needs to sign in to the VM. You may need to provide credentials for the sign-in. See [How do I sign in to the simulation VM?](#how-do-i-sign-in-to-the-simulation-vm) to find the credentials.
-
-The script adds a public IP address to the VM if it doesn't already have one, and removes it again when done. The script puts all log files in an archive and downloads the archive to your development workstation.
-
-Alternatively, sign in to the VM via SSH and inspect the log files at runtime.
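-
-For example, a sketch of retrieving the logs from the repository root. The `-DeploymentName` parameter name is an assumption for illustration; check `Get-SimulationLogs.ps1` for the exact parameter name.
-
-```powershell
-# Sketch: archive the simulation log files and download them to your workstation.
-./Simulation/Factory/Get-SimulationLogs.ps1 -DeploymentName "myConnFactory"
-```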
-
-### How can I check if the simulation is sending data to the cloud?
-
-With the [Azure IoT Explorer](https://github.com/Azure/azure-iot-explorer) or the [Azure IoT CLI Extension monitor-events](/cli/azure/ext/azure-iot/iot/hub#ext-azure-iot-az-iot-hub-monitor-events) command, you can inspect the data sent to IoT Hub from certain devices. To use these tools, you need to know the connection string for the IoT hub in your deployment. See [How do I find out the connection string of the IoT hub used by my solution?](#how-do-i-find-out-the-connection-string-of-the-iot-hub-used-by-my-solution)
-
-Inspect the data sent by one of the publisher devices (see the sketch after this list):
-
-* publisher.beijing.corp.contoso
-* publisher.capetown.corp.contoso
-* publisher.mumbai.corp.contoso
-* publisher.munich0.corp.contoso
-* publisher.rio.corp.contoso
-* publisher.seattle.corp.contoso
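-
-For example, a sketch that monitors one publisher device with the Azure CLI (the hub name is a placeholder; assumes the azure-iot extension):
-
-```powershell
-# Sketch: watch telemetry arriving from one publisher device.
-az iot hub monitor-events --hub-name <your-iot-hub-name> --device-id publisher.munich0.corp.contoso
-```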
-
-If you see no data sent to IoT Hub, there's an issue with the simulation. As a first step, analyze the log files of the simulation components. See [How can I get log data from the simulation components?](#how-can-i-get-log-data-from-the-simulation-components) Next, try stopping and starting the simulation. If there's still no data sent, update the simulation completely. See [How do I update the simulation in the VM?](#how-do-i-update-the-simulation-in-the-vm)
-
-### How do I enable an interactive map in my Connected Factory solution?
-
-To enable an interactive map in your Connected Factory solution, you must have an Azure Maps account.
-
-When deploying from [www.azureiotsolutions.com](https://www.azureiotsolutions.com), the deployment process adds an Azure Maps account to the resource group that contains the solution accelerator services.
-
-When you deploy using the `build.ps1` script in the Connected Factory GitHub repository, set the environment variable `$env:MapApiQueryKey` in the build window to the [key of your Azure Maps account](../azure-maps/how-to-manage-account-keys.md). The interactive map is then enabled automatically.
-
-You can also add an Azure Maps account key to your solution accelerator after deployment. Navigate to the Azure portal and access the App Service resource in your Connected Factory deployment. Under **Application settings**, set **MapApiQueryKey** to the [key of your Azure Maps account](../azure-maps/how-to-manage-account-keys.md). Save the settings, and then navigate to **Overview** and restart the App Service.
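-
-Alternatively, a sketch of the same change with the Azure CLI (the resource names are placeholders):
-
-```powershell
-# Sketch: set the Azure Maps key on the App Service, then restart it.
-az webapp config appsettings set --name <your-app-service> --resource-group <your-resource-group> --settings MapApiQueryKey=<your-azure-maps-key>
-az webapp restart --name <your-app-service> --resource-group <your-resource-group>
-```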
-
-### How do I create an Azure Maps account?
-
-See [How to manage your Azure Maps account and keys](../azure-maps/how-to-manage-account-keys.md).
-
-### How do I obtain my Azure Maps account key?
-
-See [How to manage your Azure Maps account and keys](../azure-maps/how-to-manage-account-keys.md).
-
-### How do I enable the interactive map while debugging locally?
-
-To enable the interactive map while you are debugging locally, set the value of the setting `MapApiQueryKey` in the files `local.user.config` and `<yourdeploymentname>.user.config` in the root of your deployment to the value of the **QueryKey** you copied previously.
-
-### How do I use a different image at the home page of my dashboard?
-
-To change the static image shown on the home page of the dashboard, replace the image `WebApp\Content\img\world.jpg`. Then rebuild and redeploy the WebApp.
-
-### How do I use non-OPC UA devices with Connected Factory?
-
-To send telemetry data from non-OPC UA devices to Connected Factory:
-
-1. [Configure a new station in the Connected Factory topology](iot-accelerators-connected-factory-configure.md) in the `ContosoTopologyDescription.json` file.
-
-1. Ingest the telemetry data in Connected Factory-compatible JSON format (a sketch for sending a test message follows this procedure):
-
- ```json
- [
-   {
-     "ApplicationUri": "<the_value_of_OpcUri_of_your_station>",
-     "DisplayName": "<name_of_the_datapoint>",
-     "NodeId": "<value_of_NodeId_of_your_datapoint_in_the_station>",
-     "Value": {
-       "Value": <datapoint_value>,
-       "SourceTimestamp": "<timestamp>"
-     }
-   }
- ]
- ```
-
-1. The format of `<timestamp>` is `2017-12-08T19:24:51.886753Z`.
-
-1. Restart the Connected Factory App Service.
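-
-As a quick test of the format, the following sketch sends one such message on behalf of a device, using the azure-iot extension for the Azure CLI. All values are illustrative, not taken from a real station.
-
-```powershell
-# Sketch: send one Connected Factory-compatible telemetry message.
-# The ApplicationUri, NodeId, and value are examples; use the values of your own station.
-$payload = '[{"ApplicationUri":"urn:contoso:station1","DisplayName":"Temperature","NodeId":"ns=2;i=1234","Value":{"Value":42.5,"SourceTimestamp":"2017-12-08T19:24:51.886753Z"}}]'
-az iot device send-d2c-message --hub-name <your-iot-hub-name> --device-id <your-device-id> --data $payload
-```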
-
-### Next steps
-
-You can also explore some of the other features and capabilities of the IoT solution accelerators:
-
-* [Deploy Connected Factory solution accelerator](quickstart-connected-factory-deploy.md)
-* [IoT security from the ground up](../iot-fundamentals/iot-security-ground-up.md)
iot-accelerators Overview Iot Industrial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-accelerators/overview-iot-industrial.md
- Title: Overview of Azure industrial IoT | Microsoft Docs
-description: This article provides an overview of industrial IoT. It explains the connected factory, factory floor connectivity and security components in IIoT.
- Previously updated : 11/26/2018
-# What is industrial IoT (IIoT)
-
-> [!IMPORTANT]
-> While we update this article, see [Azure Industrial IoT](https://azure.github.io/Industrial-IoT/) for the most up to date content.
-
-IIoT is the Industrial Internet of Things. IIoT enhances industrial efficiencies through the application of IoT in the manufacturing industry.
-
-## Improve industrial efficiencies
-
-Enhance your operational productivity and profitability with a connected factory solution accelerator. Connect and monitor your industrial equipment and devices in the cloud, including your machines already operating on the factory floor. Analyze your IoT data for insights that help you increase the performance of the entire factory floor.
-
-Reduce the time-consuming process of accessing factory floor machines with OPC Twin, and focus your time on building IIoT solutions. Streamline certificate management and industrial asset integration with OPC Vault, and feel confident that asset connectivity is secured. These microservices provide a REST-like API on top of [Azure Industrial IoT components](https://github.com/Azure/Industrial-IoT). The service API gives you control of edge module functionality.
-
-![Industrial IoT overview](media/overview-iot-industrial/overview.png)
-
-> [!NOTE]
-> For more information about Azure Industrial IoT services, see the GitHub [repository](https://github.com/Azure/Industrial-IoT) and [documentation](https://azure.github.io/Industrial-IoT/).
-> If you're unfamiliar with how Azure IoT Edge modules work, begin with the following articles:
-- [About Azure IoT Edge](../iot-edge/about-iot-edge.md)
-- [Azure IoT Edge modules](../iot-edge/iot-edge-modules.md)
-
-## Connected factory
-
-[Connected Factory](../iot-accelerators/iot-accelerators-connected-factory-features.md) is an implementation of Microsoft's Azure Industrial IoT reference architecture that can be customized to meet specific business requirements. The full solution code is open source and available in the [Connected Factory solution accelerator GitHub repository](https://github.com/Azure/azure-iot-connected-factory). You can use it as a starting point for a commercial product, and deploy a pre-built solution into your Azure subscription in minutes.
-
-## Factory floor connectivity
-
-OPC Twin is an IIoT component that automates device discovery and registration, and offers remote control of industrial devices through REST APIs. OPC Twin uses Azure IoT Edge and IoT Hub to connect the cloud and the factory network. OPC Twin allows IIoT developers to focus on building IIoT applications without worrying about how to securely access the on-premises machines.
-
-## Security
-
-OPC Vault is an implementation of OPC UA Global Discovery Server (GDS) that can configure, register, and manage certificate lifecycle for OPC UA server and client applications in the cloud. OPC Vault simplifies the implementation and maintenance of secure asset connectivity in the industrial space. By automating certificate management, OPC Vault frees factory operators from the manual and complex processes associated with connectivity and certificate management.
-
-## Next steps
-
-Now that you've had an introduction to industrial IoT and its components, here is the suggested next step:
-
-[What is OPC Twin](overview-opc-twin.md)
iot-accelerators Overview Opc Twin Architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-accelerators/overview-opc-twin-architecture.md
- Title: OPC Twin architecture - Azure | Microsoft Docs
-description: This article provides an overview of the OPC Twin architecture. It describes about the discovery, activation, browsing, and monitoring of the server.
- Previously updated : 11/26/2018
-# OPC Twin architecture
-
-> [!IMPORTANT]
-> While we update this article, see [Azure Industrial IoT](https://azure.github.io/Industrial-IoT/) for the most up to date content.
-
-The following diagrams illustrate the OPC Twin architecture.
-
-## Discover and activate
-
-1. The operator enables network scanning on the module or makes a one-time discovery using a discovery URL. The discovered endpoints and application information are sent via telemetry to the onboarding agent for processing. The OPC UA device onboarding agent processes OPC UA server discovery events sent by the OPC Twin IoT Edge module when in discovery or scan mode. The discovery events result in application registration and updates in the OPC UA device registry.
-
- ![Diagram that shows the OPC Twin architecture with the OPC Twin IoT E