Updates from: 11/01/2022 02:09:45
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/whats-new-docs.md
Title: "What's new in Azure Active Directory business-to-customer (B2C)" description: "New and updated documentation for the Azure Active Directory business-to-customer (B2C)." Previously updated : 10/04/2022 Last updated : 10/31/2022
Welcome to what's new in Azure Active Directory B2C documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the B2C service, see [What's new in Azure Active Directory](../active-directory/fundamentals/whats-new.md) and [Azure AD B2C developer release notes](custom-policy-developer-notes.md) +
+## October 2022
+
+### New articles
+
+- [Edit Azure Active Directory B2C Identity Experience Framework (IEF) XML with Grit Visual IEF Editor](partner-grit-editor.md)
+- [Register apps in Azure Active Directory B2C](register-apps.md)
+
+### Updated articles
+
+- [Set up sign-in for a specific Azure Active Directory organization in Azure Active Directory B2C](identity-provider-azure-ad-single-tenant.md)
+- [Set up a password reset flow in Azure Active Directory B2C](add-password-reset-policy.md)
+- [Azure Active Directory B2C documentation landing page](index.yml)
+- [Publish your Azure Active Directory B2C app to the Azure Active Directory app gallery](publish-app-to-azure-ad-app-gallery.md)
+- [JSON claims transformations](json-transformations.md)
+ ## September ### New articles
active-directory Howto Authentication Temporary Access Pass https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-temporary-access-pass.md
To configure the Temporary Access Pass authentication method policy:
||||| | Minimum lifetime | 1 hour | 10 – 43,200 Minutes (30 days) | Minimum number of minutes that the Temporary Access Pass is valid. | | Maximum lifetime | 8 hours | 10 – 43,200 Minutes (30 days) | Maximum number of minutes that the Temporary Access Pass is valid. |
- | Default lifetime | 1 hour | 10 – 43,200 Minutes (30 days) | Default values can be override by the individual passes, within the minimum and maximum lifetime configured by the policy. |
+ | Default lifetime | 1 hour | 10 – 43,200 Minutes (30 days) | Default values can be overridden by the individual passes, within the minimum and maximum lifetime configured by the policy. |
| One-time use | False | True / False | When the policy is set to false, passes in the tenant can be used either once or more than once during its validity (maximum lifetime). By enforcing one-time use in the Temporary Access Pass policy, all passes created in the tenant will be created as one-time use. | | Length | 8 | 8-48 characters | Defines the length of the passcode. |
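These values can also be managed programmatically through the Microsoft Graph authentication methods policy. A minimal sketch using the Azure CLI, assuming a signed-in session with permission to write authentication method policies; the values shown mirror the defaults in the table above:

```bash
# Sketch: set Temporary Access Pass policy values via Microsoft Graph.
# Adjust the lifetimes (in minutes) and length to your own requirements.
az rest --method PATCH \
  --url "https://graph.microsoft.com/v1.0/policies/authenticationMethodsPolicy/authenticationMethodConfigurations/temporaryAccessPass" \
  --headers "Content-Type=application/json" \
  --body '{
    "@odata.type": "#microsoft.graph.temporaryAccessPassAuthenticationMethodConfiguration",
    "minimumLifetimeInMinutes": 60,
    "maximumLifetimeInMinutes": 480,
    "defaultLifetimeInMinutes": 60,
    "isUsableOnce": false,
    "defaultLength": 8
  }'
```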
active-directory Quickstart Register App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-register-app.md
Previously updated : 01/13/2022 Last updated : 10/31/2022 #Customer intent: As developer, I want to know how to register my application with the Microsoft identity platform so that the security token service can issue ID and/or access tokens to client applications that request them.
Client secrets are considered less secure than certificate credentials. Applicat
For application security recommendations, see [Microsoft identity platform best practices and recommendations](identity-platform-integration-checklist.md#security). +
+### Add a federated credential
+
+Federated identity credentials are a type of credential that allows workloads, such as GitHub Actions, workloads running on Kubernetes, or workloads running in compute platforms outside of Azure, to access Azure AD protected resources without needing to manage secrets by using [workload identity federation](workload-identity-federation.md).
+
+To add a federated credential, follow these steps:
+
+1. In the Azure portal, in **App registrations**, select your application.
+1. Select **Certificates & secrets** > **Federated credentials** > **Add a credential**.
+1. In the **Federated credential scenario** drop-down box, select one of the supported scenarios, and follow the corresponding guidance to complete the configuration.
+
+ - **Customer managed keys** to encrypt data in your tenant using Azure Key Vault in another tenant.
+ - **GitHub actions deploying Azure resources** to [configure a GitHub workflow](workload-identity-federation-create-trust.md#github-actions) to get tokens for your application and deploy assets to Azure.
+ - **Kubernetes accessing Azure resources** to configure a [Kubernetes service account](workload-identity-federation-create-trust.md#kubernetes) to get tokens for your application and access Azure resources.
+ - **Other issuer** to configure an identity managed by an external [OpenID Connect provider](workload-identity-federation-create-trust.md#other-identity-providers) to get tokens for your application and access Azure resources.
+
+
+For more information about how to get an access token with a federated credential, see the [Microsoft identity platform and the OAuth 2.0 client credentials flow](v2-oauth2-client-creds-grant-flow.md#third-case-access-token-request-with-a-federated-credential) article.
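For orientation, the token request described in that article swaps the usual client secret for a `client_assertion`. A hedged sketch, where `TENANT_ID`, `CLIENT_ID`, and `EXTERNAL_TOKEN` are hypothetical placeholders and `EXTERNAL_TOKEN` holds a token already issued by the external IdP:

```bash
# Sketch: client credentials grant using a federated credential
# instead of a client secret. Placeholder variables are hypothetical.
curl -X POST "https://login.microsoftonline.com/${TENANT_ID}/oauth2/v2.0/token" \
  -d "client_id=${CLIENT_ID}" \
  -d "scope=https://graph.microsoft.com/.default" \
  -d "grant_type=client_credentials" \
  -d "client_assertion_type=urn:ietf:params:oauth:client-assertion-type:jwt-bearer" \
  --data-urlencode "client_assertion=${EXTERNAL_TOKEN}"
```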
++ ## Next steps Client applications typically need to access resources in a web API. You can protect your client application by using the Microsoft identity platform. You can also use the platform for authorizing scoped, permissions-based access to your web API.
active-directory Workload Identity Federation Create Trust User Assigned Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/workload-identity-federation-create-trust-user-assigned-managed-identity.md
zone_pivot_groups: identity-wif-mi-methods
# Configure a user-assigned managed identity to trust an external identity provider (preview)
-This article describes how to manage a federated identity credential on a user-assigned managed identity in Azure Active Directory (Azure AD). The federated identity credential creates a trust relationship between a user-assigned managed identity and an external identity provider (IdP). Configuring a federated identity credential on a system-assigned managed identity is not supported.
+This article describes how to manage a federated identity credential on a user-assigned managed identity in Azure Active Directory (Azure AD). The federated identity credential creates a trust relationship between a user-assigned managed identity and an external identity provider (IdP). Configuring a federated identity credential on a system-assigned managed identity isn't supported.
After you configure your user-assigned managed identity to trust an external IdP, configure your external software workload to exchange a token from the external IdP for an access token from Microsoft identity platform. The external workload uses the access token to access Azure AD protected resources without needing to manage secrets (in supported scenarios). To learn more about the token exchange workflow, read about [workload identity federation](workload-identity-federation.md).
In the **Federated credential scenario** dropdown box, select your scenario.
### GitHub Actions deploying Azure resources
-For **Entity type**, select **Environment**, **Branch**, **Pull request**, or **Tag** and specify the value. The values must exactly match the configuration in the [GitHub workflow](https://docs.github.com/actions/using-workflows/workflow-syntax-for-github-actions#on). For more info, read the [examples](#entity-type-examples).
+To add a federated identity for GitHub Actions, follow these steps:
-Add a **Name** for the federated credential.
+1. For **Entity type**, select **Environment**, **Branch**, **Pull request**, or **Tag** and specify the value. The values must exactly match the configuration in the [GitHub workflow](https://docs.github.com/actions/using-workflows/workflow-syntax-for-github-actions#on). For more info, read the [examples](#entity-type-examples).
-The **Issuer**, **Audiences**, and **Subject identifier** fields autopopulate based on the values you entered.
+1. Add a **Name** for the federated credential.
-Click **Add** to configure the federated credential.
+1. The **Issuer**, **Audiences**, and **Subject identifier** fields autopopulate based on the values you entered.
+
+1. Select **Add** to configure the federated credential.
+
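If you'd rather script this step than use the portal, the Azure CLI has an equivalent command. A minimal sketch, assuming an existing user-assigned managed identity; the resource and repository names below are hypothetical:

```bash
# Sketch: create a federated credential on a user-assigned managed identity
# for a GitHub Actions environment. Names are hypothetical placeholders.
az identity federated-credential create \
  --name gh-deploy-production \
  --identity-name my-user-assigned-identity \
  --resource-group my-resource-group \
  --issuer "https://token.actions.githubusercontent.com" \
  --subject "repo:my-org/my-repo:environment:production" \
  --audiences "api://AzureADTokenExchange"
```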
+Use the following values from your Azure AD Managed Identity for your GitHub workflow:
+
+- `AZURE_CLIENT_ID`: the managed identity **Client ID**.
+
+- `AZURE_SUBSCRIPTION_ID`: the **Subscription ID**.
+
+ The following screenshot demonstrates how to copy the managed identity ID and subscription ID.
+
+ [![Screenshot that demonstrates how to copy the managed identity ID and subscription ID from Azure portal.](./media/workload-identity-federation-create-trust-user-assigned-managed-identity/copy-managed-identity-id.png)](./media/workload-identity-federation-create-trust-user-assigned-managed-identity/copy-managed-identity-id.png#lightbox)
+
+- `AZURE_TENANT_ID`: the **Directory (tenant) ID**. Learn [how to find your Azure Active Directory tenant ID](../fundamentals/active-directory-how-to-find-tenant.md).
#### Entity type examples
Fill in the **Cluster issuer URL**, **Namespace**, **Service account name**, and
- **Namespace** is the service account namespace. - **Name** is the name of the federated credential, which can't be changed later.
-Click **Add** to configure the federated credential.
+Select **Add** to configure the federated credential.
### Other
Specify the following fields (using a software workload running in Google Cloud
- **Subject identifier**: must match the `sub` claim in the token issued by the external identity provider. In this example using Google Cloud, *subject* is the Unique ID of the service account you plan to use. - **Issuer**: must match the `iss` claim in the token issued by the external identity provider. A URL that complies with the OIDC Discovery spec. Azure AD uses this issuer URL to fetch the keys that are necessary to validate the token. For Google Cloud, the *issuer* is "https://accounts.google.com".
-Click **Add** to configure the federated credential.
+Select **Add** to configure the federated credential.
## List federated identity credentials on a user-assigned managed identity
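With the Azure CLI, the credentials configured on an identity can be enumerated in one call. A sketch with hypothetical resource names:

```bash
# Sketch: list the federated credentials on a user-assigned managed identity.
az identity federated-credential list \
  --identity-name my-user-assigned-identity \
  --resource-group my-resource-group \
  --output table
```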
Federated identity credential and parent user assigned identity can be created o
All of the template parameters are mandatory.
-There is a limit of 3-120 characters for a federated identity credential name length. It must be alphanumeric, dash, underscore. First symbol is alphanumeric only.
+There's a limit of 3-120 characters for a federated identity credential name. The name can contain only alphanumeric characters, dashes, and underscores, and the first character must be alphanumeric.
-You must add exactly 1 audience to a federated identity credential. The audience is verified during token exchange. Use "api://AzureADTokenExchange" as the default value.
+You must add exactly one audience to a federated identity credential. The audience is verified during token exchange. Use "api://AzureADTokenExchange" as the default value.
-List, Get, and Delete operations are not available with template. Refer to Azure CLI for these operations. By default, all child federated identity credentials are created in parallel, which triggers concurrency detection logic and causes the deployment to fail with a 409-conflict HTTP status code. To create them sequentially, specify a chain of dependencies using the *dependsOn* property.
+List, Get, and Delete operations aren't available with templates. Use the Azure CLI for these operations. By default, all child federated identity credentials are created in parallel, which triggers concurrency detection logic and causes the deployment to fail with a 409-conflict HTTP status code. To create them sequentially, specify a chain of dependencies using the *dependsOn* property.
Make sure that any kind of automation creates federated identity credentials under the same parent identity sequentially. Federated identity credentials under different managed identities can be created in parallel without any restrictions.
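In script-based automation, sequencing can be a plain loop. A hedged sketch using the Azure CLI; the identity, repository, and branch names are hypothetical:

```bash
# Sketch: create several federated credentials under the same parent identity
# one at a time, avoiding the 409 concurrency conflict described above.
for branch in main release hotfix; do
  az identity federated-credential create \
    --name "gh-${branch}" \
    --identity-name my-user-assigned-identity \
    --resource-group my-resource-group \
    --issuer "https://token.actions.githubusercontent.com" \
    --subject "repo:my-org/my-repo:ref:refs/heads/${branch}" \
    --audiences "api://AzureADTokenExchange"
done
```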
active-directory Workload Identity Federation Create Trust https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/workload-identity-federation-create-trust.md
Previously updated : 07/27/2022 Last updated : 10/31/2022
Get the *subject* and *issuer* information for your external IdP and software wo
## Configure a federated identity credential on an app ### GitHub Actions
-Find your app registration in the [App Registrations](https://aka.ms/appregistrations) experience of the Azure portal. Select **Certificates & secrets** in the left nav pane, select the **Federated credentials** tab, and select **Add credential**.
-In the **Federated credential scenario** drop-down box, select **GitHub actions deploying Azure resources**.
+To add a federated identity for GitHub Actions, follow these steps:
+
+1. Find your app registration in the [App Registrations](https://aka.ms/appregistrations) experience of the Azure portal. Select **Certificates & secrets** in the left nav pane, select the **Federated credentials** tab, and select **Add credential**.
+
+1. In the **Federated credential scenario** drop-down box, select **GitHub actions deploying Azure resources**.
+
+1. Specify the **Organization** and **Repository** for your GitHub Actions workflow.
+
+1. For **Entity type**, select **Environment**, **Branch**, **Pull request**, or **Tag** and specify the value. The values must exactly match the configuration in the [GitHub workflow](https://docs.github.com/actions/using-workflows/workflow-syntax-for-github-actions#on). Pattern matching isn't supported for branches and tags. Specify an environment if your on-push workflow runs against many branches or tags. For more info, read the [examples](#entity-type-examples).
+
+1. Add a **Name** for the federated credential.
+
+1. The **Issuer**, **Audiences**, and **Subject identifier** fields autopopulate based on the values you entered.
+
+1. Select **Add** to configure the federated credential.
+
+ :::image type="content" source="media/workload-identity-federation-create-trust/add-credential.png" alt-text="Screenshot of the Add a credential window, showing sample values." :::
-Specify the **Organization** and **Repository** for your GitHub Actions workflow.
-For **Entity type**, select **Environment**, **Branch**, **Pull request**, or **Tag** and specify the value. The values must exactly match the configuration in the [GitHub workflow](https://docs.github.com/actions/using-workflows/workflow-syntax-for-github-actions#on). Pattern matching is not supported for branches and tags. Specify an environment if your on-push workflow runs against many branches or tags. For more info, read the [examples](#entity-type-examples).
+Use the following values from your Azure AD application registration for your GitHub workflow:
-Add a **Name** for the federated credential.
+- `AZURE_CLIENT_ID`: the **Application (client) ID**.
-The **Issuer**, **Audiences**, and **Subject identifier** fields autopopulate based on the values you entered.
+- `AZURE_TENANT_ID`: the **Directory (tenant) ID**.
+
+ The following screenshot demonstrates how to copy the application ID and tenant ID.
-Click **Add** to configure the federated credential.
+ ![Screenshot that demonstrates how to copy the application ID and tenant ID from Microsoft Entra portal.](./media/workload-identity-federation-create-trust/copy-client-id.png)
+- `AZURE_SUBSCRIPTION_ID`: your subscription ID. To get the subscription ID, open **Subscriptions** in the Azure portal and find your subscription. Then, copy the **Subscription ID**.
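To show where these values land, here is a hedged sketch of the shell steps a GitHub Actions job could run to trade its OIDC token for an Azure CLI session. Most workflows delegate this to the azure/login action; the sketch assumes `permissions: id-token: write` on the job and that the three variables above are available:

```bash
# Sketch: request the job's OIDC token with the audience Azure AD expects...
ID_TOKEN=$(curl -s -H "Authorization: Bearer ${ACTIONS_ID_TOKEN_REQUEST_TOKEN}" \
  "${ACTIONS_ID_TOKEN_REQUEST_URL}&audience=api://AzureADTokenExchange" | jq -r '.value')

# ...then sign in with the federated token instead of a client secret.
az login --service-principal \
  --username "${AZURE_CLIENT_ID}" \
  --tenant "${AZURE_TENANT_ID}" \
  --federated-token "${ID_TOKEN}"
az account set --subscription "${AZURE_SUBSCRIPTION_ID}"
```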
#### Entity type examples
To delete a federated identity credential, select the **Delete** icon for the cr
Run the [az ad app federated-credential create](/cli/azure/ad/app/federated-credential) command to create a new federated identity credential on your app.
-The *id* parameter specifies the identifier URI, application ID, or object ID of the application. *parameters* specifies the parameters, in JSON format, for creating the federated identity credential.
+The `id` parameter specifies the identifier URI, application ID, or object ID of the application. The `parameters` parameter specifies the parameters, in JSON format, for creating the federated identity credential.
### GitHub Actions example
active-directory Workload Identity Federation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/workload-identity-federation.md
Previously updated : 09/19/2022 Last updated : 10/31/2022
You can use workload identity federation in scenarios such as GitHub Actions, wo
## Why use workload identity federation?
-Typically, a software workload (such as an application, service, script, or container-based application) needs an identity in order to authenticate and access resources or communicate with other services. When these workloads run on Azure, you can use managed identities and the Azure platform manages the credentials for you. For a software workload running outside of Azure, you need to use application credentials (a secret or certificate) to access Azure AD protected resources (such as Azure, Microsoft Graph, Microsoft 365, or third-party resources). These credentials pose a security risk and have to be stored securely and rotated regularly. You also run the risk of service downtime if the credentials expire.
+Typically, a software workload (such as an application, service, script, or container-based application) needs an identity in order to authenticate and access resources or communicate with other services. When these workloads run on Azure, you can use [managed identities](../managed-identities-azure-resources/overview.md) and the Azure platform manages the credentials for you. For a software workload running outside of Azure, you need to use application credentials (a secret or certificate) to access Azure AD protected resources (such as Azure, Microsoft Graph, Microsoft 365, or third-party resources). These credentials pose a security risk and have to be stored securely and rotated regularly. You also run the risk of service downtime if the credentials expire.
-You use workload identity federation to configure an Azure AD app registration or user-assigned managed identity to trust tokens from an external identity provider (IdP), such as GitHub. Once that trust relationship is created, your software workload can exchange trusted tokens from the external IdP for access tokens from Microsoft identity platform. Your software workload then uses that access token to access the Azure AD protected resources to which the workload has been granted access. This eliminates the maintenance burden of manually managing credentials and eliminates the risk of leaking secrets or having certificates expire.
+You use workload identity federation to configure an Azure AD app registration or [user-assigned managed identity](../managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md) to trust tokens from an external identity provider (IdP), such as GitHub. Once that trust relationship is created, your software workload can exchange trusted tokens from the external IdP for access tokens from Microsoft identity platform. Your software workload then uses that access token to access the Azure AD protected resources to which the workload has been granted access. This eliminates the maintenance burden of manually managing credentials and eliminates the risk of leaking secrets or having certificates expire.
## Supported scenarios
The following scenarios are supported for accessing Azure AD protected resources
## How it works
-Create a trust relationship between the external IdP and an app or user-assigned managed identity in Azure AD by configuring a [federated identity credential](/graph/api/resources/federatedidentitycredentials-overview?view=graph-rest-beta&preserve-view=true). The federated identity credential is used to indicate which token from the external IdP should be trusted by your application or managed identity. You configure the federated identity credential on an app registration in the Azure portal or through Microsoft Graph. A federated credential is configured on a user-assigned managed identity through the Azure portal, Azure CLI, Azure PowerShell, Azure SDK, and Azure Resource Manager (ARM) templates. The steps for configuring the trust relationship will differ, depending on the scenario and external IdP.
+Create a trust relationship between the external IdP and an app registration or user-assigned managed identity in Azure AD. The federated identity credential is used to indicate which token from the external IdP should be trusted by your application or managed identity. You configure a federated identity credential either:
+
+- On an Azure AD [App registration](/azure/active-directory/develop/quickstart-register-app) in the Azure portal or through Microsoft Graph. This configuration allows you to get an access token for your application without needing to manage secrets outside Azure. For more information, learn how to [configure an app to trust an external identity provider](workload-identity-federation-create-trust.md).
+- On a user-assigned managed identity through the Azure portal, Azure CLI, Azure PowerShell, Azure SDK, and Azure Resource Manager (ARM) templates. The external workload uses the access token to access Azure AD protected resources without needing to manage secrets (in supported scenarios). The [steps for configuring the trust relationship](workload-identity-federation-create-trust-user-assigned-managed-identity.md) will differ, depending on the scenario and external IdP.
The workflow for exchanging an external token for an access token is the same, however, for all scenarios. The following diagram shows the general workflow of a workload exchanging an external token for an access token and then accessing Azure AD protected resources.
Learn more about how workload identity federation works:
- How to create, delete, get, or update [federated identity credentials](workload-identity-federation-create-trust.md) on an app registration. - How to create, delete, get, or update [federated identity credentials](workload-identity-federation-create-trust-user-assigned-managed-identity.md) on a user-assigned managed identity. - Read the [GitHub Actions documentation](https://docs.github.com/actions/deployment/security-hardening-your-deployments/configuring-openid-connect-in-azure) to learn more about configuring your GitHub Actions workflow to get an access token from Microsoft identity provider and access Azure resources.-- For information about the required format of JWTs created by external identity providers, read about the [assertion format](active-directory-certificate-credentials.md#assertion-format).
+- For information about the required format of JWTs created by external identity providers, read about the [assertion format](active-directory-certificate-credentials.md#assertion-format).
active-directory Multi Tenant User Management Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/multi-tenant-user-management-introduction.md
These terms are used throughout this content:
* **Home tenant**: The Azure AD tenant containing users requiring access to the resources in the resource tenant.
-* **User lifecycle management**: the process of provisioning, managing, and deprovisioning user access to resources.
+* **User lifecycle management**: The process of provisioning, managing, and deprovisioning user access to resources.
* **Unified GAL**: Each user in each tenant can see users from each organization in their Global Address List (GAL).
active-directory Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new-archive.md
The What's new in Azure Active Directory? release notes provide information abou
+## April 2022
++
+### General Availability - Entitlement management separation of duties checks for incompatible access packages
+
+**Type:** Changed feature
+**Service category:** Other
+**Product capability:** Identity Governance
+
+In Azure AD entitlement management, an administrator can now configure the incompatible access packages and groups of an access package in the Azure portal. This prevents a user who already has one of those incompatible access rights from being able to request further access. For more information, see: [Configure separation of duties checks for an access package in Azure AD entitlement management](../governance/entitlement-management-access-package-incompatible.md).
++++
+### General Availability - Microsoft Defender for Endpoint Signal in Identity Protection
+
+**Type:** New feature
+**Service category:** Identity Protection
+**Product capability:** Identity Security & Protection
+
+
+Identity Protection now integrates a signal from Microsoft Defender for Endpoint (MDE) that helps protect against primary refresh token (PRT) theft. To learn more, see: [What is risk? Azure AD Identity Protection | Microsoft Docs](../identity-protection/concept-identity-protection-risks.md).
+
+++
+### General Availability - Entitlement management 3 stages of approval
+
+**Type:** Changed feature
+**Service category:** Other
+**Product capability:** Entitlement Management
+
+
+
+This update extends the Azure AD entitlement management access package policy to allow a third approval stage, which can be configured via the Azure portal or Microsoft Graph. For more information, see: [Change approval and requestor information settings for an access package in Azure AD entitlement management](../governance/entitlement-management-access-package-approval-policy.md).
+
+++
+### General Availability - Improvements to Azure AD Smart Lockout
+
+**Type:** Changed feature
+**Service category:** Identity Protection
+**Product capability:** User Management
+
+
+
+With a recent improvement, Smart Lockout now synchronizes the lockout state across Azure AD data centers, so the total number of failed sign-in attempts allowed before an account is locked out will match the configured lockout threshold. For more information, see: [Protect user accounts from attacks with Azure Active Directory smart lockout](../authentication/howto-password-smart-lockout.md).
+
++++
+### Public Preview - Integration of Microsoft 365 App Certification details into Azure Active Directory UX and Consent Experiences
+
+**Type:** New feature
+**Service category:** User Access Management
+**Product capability:** AuthZ/Access Delegation
++
+Microsoft 365 Certification status for an app is now available in the Azure AD consent UX and custom app consent policies. The status will later be displayed in several other Identity-owned interfaces such as enterprise apps. For more information, see: [Understanding Azure AD application consent experiences](../develop/application-consent-experience.md).
++++
+### Public preview - Use Azure AD access reviews to review access of B2B direct connect users in Teams shared channels
+
+**Type:** New feature
+**Service category:** Access Reviews
+**Product capability:** Identity Governance
+
+Use Azure AD access reviews to review access of B2B direct connect users in Teams shared channels. For more information, see: [Include B2B direct connect users and teams accessing Teams Shared Channels in access reviews (preview)](../governance/create-access-review.md#include-b2b-direct-connect-users-and-teams-accessing-teams-shared-channels-in-access-reviews).
+++
+### Public Preview - New MS Graph APIs to configure federated settings when federated with Azure AD
+
+**Type:** New feature
+**Service category:** MS Graph
+**Product capability:** Identity Security & Protection
++
+We're announcing the public preview of the following MS Graph APIs and PowerShell cmdlets for configuring federated settings when federated with Azure AD:
+
+|Action |MS Graph API |PowerShell cmdlet |
+||||
+|Get federation settings for a federated domain | [Get internalDomainFederation](/graph/api/internaldomainfederation-get?view=graph-rest-beta&preserve-view=true) | [Get-MgDomainFederationConfiguration](/powershell/module/microsoft.graph.identity.directorymanagement/get-mgdomainfederationconfiguration?view=graph-powershell-beta&preserve-view=true) |
+|Create federation settings for a federated domain | [Create internalDomainFederation](/graph/api/domain-post-federationconfiguration?view=graph-rest-beta&preserve-view=true) | [New-MgDomainFederationConfiguration](/powershell/module/microsoft.graph.identity.directorymanagement/new-mgdomainfederationconfiguration?view=graph-powershell-beta&preserve-view=true) |
+|Remove federation settings for a federated domain | [Delete internalDomainFederation](/graph/api/internaldomainfederation-delete?view=graph-rest-beta&preserve-view=true) | [Remove-MgDomainFederationConfiguration](/powershell/module/microsoft.graph.identity.directorymanagement/remove-mgdomainfederationconfiguration?view=graph-powershell-beta&preserve-view=true) |
+|Update federation settings for a federated domain | [Update internalDomainFederation](/graph/api/internaldomainfederation-update?view=graph-rest-beta&preserve-view=true) | [Update-MgDomainFederationConfiguration](/powershell/module/microsoft.graph.identity.directorymanagement/update-mgdomainfederationconfiguration?view=graph-powershell-beta&preserve-view=true) |
++
+If using older MSOnline cmdlets ([Get-MsolDomainFederationSettings](/powershell/module/msonline/get-msoldomainfederationsettings?view=azureadps-1.0&preserve-view=true) and [Set-MsolDomainFederationSettings](/powershell/module/msonline/set-msoldomainfederationsettings?view=azureadps-1.0&preserve-view=true)), we highly recommend transitioning to the latest MS Graph APIs and PowerShell cmdlets.
+
+For more information, see [internalDomainFederation resource type - Microsoft Graph beta | Microsoft Docs](/graph/api/resources/internaldomainfederation?view=graph-rest-beta&preserve-view=true).
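As a quick illustration, the Get operation can be called from a shell through the Azure CLI. A hedged sketch against the beta endpoint, with a hypothetical domain name:

```bash
# Sketch: read federation settings for a federated domain (beta endpoint).
# contoso.com is a hypothetical placeholder for your federated domain.
az rest --method GET \
  --url "https://graph.microsoft.com/beta/domains/contoso.com/federationConfiguration"
```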
+++
+### Public Preview – Ability to force reauthentication on Intune enrollment, risky sign-ins, and risky users
+
+**Type:** New feature
+**Service category:** RBAC role
+**Product capability:** AuthZ/Access Delegation
++
+Added functionality to session controls allowing admins to reauthenticate a user on every sign-in if a user or particular sign-in event is deemed risky, or when enrolling a device in Intune. For more information, see [Configure authentication session management with conditional Access](../conditional-access/howto-conditional-access-session-lifetime.md).
+++
+### Public Preview – Protect against by-passing of cloud Azure AD Multi-Factor Authentication when federated with Azure AD
+
+**Type:** New feature
+**Service category:** MS Graph
+**Product capability:** Identity Security & Protection
++
+We're delighted to announce a new security protection that prevents bypassing of cloud Azure AD Multi-Factor Authentication when federated with Azure AD. When enabled for a federated domain in your Azure AD tenant, it ensures that a compromised federated account can't bypass Azure AD Multi-Factor Authentication by imitating that multifactor authentication has already been performed by the identity provider. The protection can be enabled via the new security setting, [federatedIdpMfaBehavior](/graph/api/resources/internaldomainfederation?view=graph-rest-beta#federatedidpmfabehavior-values&preserve-view=true).
+
+We highly recommend enabling this new protection when using Azure AD Multi-Factor Authentication as your multi factor authentication for your federated users. To learn more about the protection and how to enable it, visit [Enable protection to prevent by-passing of cloud Azure AD Multi-Factor Authentication when federated with Azure AD](/windows-server/identity/ad-fs/deployment/best-practices-securing-ad-fs#enable-protection-to-prevent-by-passing-of-cloud-azure-ad-multi-factor-authentication-when-federated-with-azure-ad).
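As a rough illustration of enabling the setting, the beta Graph endpoint accepts a PATCH. A hedged sketch with hypothetical identifiers; `rejectMfaByFederatedIdp` tells Azure AD to always perform MFA itself rather than accept the federated IdP's MFA claim:

```bash
# Sketch: reject MFA claims from the federated IdP so Azure AD MFA always runs.
# The domain name and FEDERATION_CONFIG_ID are hypothetical placeholders.
az rest --method PATCH \
  --url "https://graph.microsoft.com/beta/domains/contoso.com/federationConfiguration/${FEDERATION_CONFIG_ID}" \
  --headers "Content-Type=application/json" \
  --body '{"federatedIdpMfaBehavior": "rejectMfaByFederatedIdp"}'
```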
+++
+### New Federated Apps available in Azure AD Application gallery - April 2022
+
+**Type:** New feature
+**Service category:** Enterprise Apps
+**Product capability:** Third Party Integration
+
+In April 2022 we added the following 24 new applications in our App gallery with Federation support:
+[X-1FBO](https://www.x1fbo.com/), [select Armor](https://app.clickarmor.c)
+
+You can also find the documentation for all the applications here: https://aka.ms/AppsTutorial.
+
+To list your application in the Azure AD app gallery, read the details here: https://aka.ms/AzureADAppRequest.
+++
+### General Availability - Customer data storage for Japan customers in Japanese data centers
+
+**Type:** New feature
+**Service category:** App Provisioning
+**Product capability:** GoLocal
+
+From April 15, 2022, Microsoft began storing Azure AD's Customer Data for new tenants with a Japan billing address within the Japanese data centers. For more information, see: [Customer data storage for Japan customers in Azure Active Directory](active-directory-data-storage-japan.md).
++++
+### Public Preview - New provisioning connectors in the Azure AD Application Gallery - April 2022
+
+**Type:** New feature
+**Service category:** App Provisioning
+**Product capability:** Third Party Integration
+
+You can now automate creating, updating, and deleting user accounts for these newly integrated apps:
+- [Adobe Identity Management (OIDC)](../saas-apps/adobe-identity-management-provisioning-oidc-tutorial.md)
+- [embed signage](../saas-apps/embed-signage-provisioning-tutorial.md)
+- [KnowBe4 Security Awareness Training](../saas-apps/knowbe4-security-awareness-training-provisioning-tutorial.md)
+- [NordPass](../saas-apps/nordpass-provisioning-tutorial.md)
+
+For more information about how to better secure your organization by using automated user account provisioning, see: [Automate user provisioning to SaaS applications with Azure AD](../app-provisioning/user-provisioning.md)
+++++ ## March 2022
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new.md
Azure AD receives improvements on an ongoing basis. To stay up to date with the
This page is updated monthly, so revisit it regularly. If you're looking for items older than six months, you can find them in [Archive for What's new in Azure Active Directory](whats-new-archive.md).
+## October 2022
+
+### General Availability - Upgrade Azure AD Provisioning agent to the latest version (version number: 1.1.977.0)
+++
+**Type:** Plan for change
+**Service category:** Provisioning
+**Product capability:** AAD Connect Cloud Sync
+
+Microsoft will stop supporting Azure AD provisioning agent versions 1.1.818.0 and below starting February 1, 2023. If you're using Azure AD cloud sync, make sure you have the latest version of the agent. You can find info about the agent release history [here](../app-provisioning/provisioning-agent-release-version-history.md). You can download the latest version [here](https://download.msappproxy.net/Subscription/d3c8b69d-6bf7-42be-a529-3fe9c2e70c90/Connector/provisioningAgentInstaller).
+
+You can find out which version of the agent you are using as follows:
+
+1. Go to the domain server on which the agent is installed.
+1. Right-click the Microsoft Azure AD Connect Provisioning Agent app.
+1. Select the **Details** tab to find the version number.
+
+> [!NOTE]
+> Azure Active Directory (AD) Connect follows the [Modern Lifecycle Policy](/lifecycle/policies/modern). Changes for products and services under the Modern Lifecycle Policy may be more frequent and require customers to be alert for forthcoming modifications to their product or service.
+Products governed by the Modern Policy follow a [continuous support and servicing model](/lifecycle/overview/product-end-of-support-overview). Customers must take the latest update to remain supported. For products and services governed by the Modern Lifecycle Policy, Microsoft's policy is to provide a minimum 30 days' notification when customers are required to take action in order to avoid significant degradation to the normal use of the product or service.
+++
+### General Availability - Add multiple domains to the same SAML/Ws-Fed based identity provider configuration for your external users
+++
+**Type:** New feature
+**Service category:** B2B
+**Product capability:** B2B/B2C
+
+An IT admin can now add multiple domains to a single SAML/WS-Fed identity provider configuration to invite users from multiple domains to authenticate from the same identity provider endpoint. For more information, see: [Federation with SAML/WS-Fed identity providers for guest users](../external-identities/direct-federation.md).
++++
+### General Availability - Limits on the number of configured API permissions for an application registration will be enforced starting in October 2022
+++
+**Type:** Plan for change
+**Service category:** Other
+**Product capability:** Developer Experience
+
+Starting at the end of October 2022, the total number of required permissions for any single application registration must not exceed 400 permissions across all APIs. Applications exceeding the limit won't be able to increase the number of permissions they're configured for. The existing limit on the number of distinct APIs for which permissions are required remains unchanged and may not exceed 50 APIs.
+
+In the Azure portal, the required permissions are listed under API Permissions within specific applications in the application registration menu. When using Microsoft Graph or Microsoft Graph PowerShell, the required permissions are listed in the requiredResourceAccess property of an [application](/graph/api/resources/application) entity. For more information, see: [Validation differences by supported account types (signInAudience)](../develop/supported-accounts-validation.md).
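To check how close an app registration is to the limit, the configured permissions can be counted from that same property. A hedged sketch using the Azure CLI; the app ID is a hypothetical placeholder:

```bash
# Sketch: count configured (required) permissions across all APIs
# for one app registration. The app ID is a hypothetical placeholder.
az ad app show --id 00000000-0000-0000-0000-000000000000 \
  --query "length(requiredResourceAccess[].resourceAccess[])"
```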
++++
+### Public Preview - Conditional access Authentication strengths
+++
+**Type:** New feature
+**Service category:** Conditional Access
+**Product capability:** User Authentication
+
+Announcing the public preview of authentication strength, a Conditional Access control that allows administrators to specify which authentication methods can be used to access a resource. For more information, see: [Conditional Access authentication strength (preview)](../authentication/concept-authentication-strengths.md). You can use custom authentication strengths to restrict access by requiring specific FIDO2 keys using the Authenticator Attestation GUIDs (AAGUIDs), and apply this through Conditional Access policies. For more information, see: [FIDO2 security key advanced options](../authentication/concept-authentication-strengths.md#fido2-security-key-advanced-options).
+++
+### Public Preview - Conditional access authentication strengths for external identities
++
+**Type:** New feature
+**Service category:** B2B
+**Product capability:** B2B/B2C
+
+You can now require your business partner (B2B) guests across all Microsoft clouds to use specific authentication methods to access your resources with **Conditional Access Authentication Strength policies**. For more information, see: [Conditional Access: Require an authentication strength for external users](../conditional-access/howto-conditional-access-policy-authentication-strength-external.md).
++++
+### General Availability - Windows Hello for Business, Cloud Kerberos Trust deployment
+++
+**Type:** New feature
+**Service category:** Authentications (Logins)
+**Product capability:** User Authentication
+
+We're excited to announce the general availability of hybrid cloud Kerberos trust, a new Windows Hello for Business deployment model to enable a passwordless sign-in experience. With this new model, we've made Windows Hello for Business much easier to deploy than the existing key trust and certificate trust deployment models by removing the need for maintaining complicated public key infrastructure (PKI), and Azure Active Directory (AD) Connect synchronization wait times. For more information, see: [Hybrid Cloud Kerberos Trust Deployment](/windows/security/identity-protection/hello-for-business/hello-hybrid-cloud-kerberos-trust).
+++
+### General Availability - Device-based conditional access on Linux Desktops
+++
+**Type:** New feature
+**Service category:** Conditional Access
+**Product capability:** SSO
+
+This feature empowers users on Linux clients to register their devices with Azure AD, enroll into Intune management, and satisfy device-based Conditional Access policies when accessing their corporate resources.
+
+- Users can register their Linux devices with Azure AD
+- Users can enroll in Mobile Device Management (Intune), which can be used to provide compliance decisions based upon policy definitions to allow device based conditional access on Linux Desktops
+- If compliant, users can use the Edge browser to get single sign-on to M365/Azure resources and satisfy device-based Conditional Access policies.
++
+For more information, see:
+
+- [Azure AD registered devices](../devices/concept-azure-ad-register.md)
+- [Plan your Azure Active Directory device deployment](../devices/plan-device-deployment.md)
+++
+### General Availability - Deprecation of Azure Multi-Factor Authentication Server
+++
+**Type:** Deprecated
+**Service category:** MFA
+**Product capability:** Identity Security & Protection
+
+Beginning September 30, 2024, Azure Multi-Factor Authentication Server deployments will no longer service multi-factor authentication (MFA) requests, which could cause authentications to fail for your organization. To ensure uninterrupted authentication services, and to remain in a supported state, organizations should migrate their users' authentication data to the cloud-based Azure AD Multi-Factor Authentication service using the latest Migration Utility included in the most recent Azure AD Multi-Factor Authentication Server update. For more information, see: [Migrate from MFA Server to Azure AD Multi-Factor Authentication](../authentication/how-to-migrate-mfa-server-to-azure-mfa.md).
+++
+### General Availability - Change of Default User Consent Settings
+++
+**Type:** New feature
+**Service category:** Enterprise Apps
+**Product capability:** Developer Experience
+
+Starting September 30, 2022, Microsoft will require all new tenants to follow a new user consent configuration. While this won't impact any existing tenants that were created before September 30, 2022, all new tenants created after September 30, 2022, will have the default setting of "Enable automatic updates (Recommendation)" under User consent settings. This change reduces the risk of malicious applications attempting to trick users into granting them access to your organization's data. For more information, see: [Configure how users consent to applications](../manage-apps/configure-user-consent.md).
+++
+### Public Preview - Lifecycle Workflows is now available
+++
+**Type:** New feature
+**Service category:** Lifecycle Workflows
+**Product capability:** Identity Governance
++
+We're excited to announce the public preview of Lifecycle Workflows, a new Identity Governance capability that extends the user provisioning process and adds enterprise-grade user lifecycle management capabilities in Azure AD to modernize your identity lifecycle management process. With Lifecycle Workflows, you can:
+
+- Confidently configure and deploy custom workflows to onboard and offboard cloud employees at scale replacing your manual processes.
+- Automate out-of-the-box actions critical to required Joiner and Leaver scenarios and get rich reporting insights.
+- Extend workflows via Logic Apps integrations with custom tasks extensions for more complex scenarios.
+
+For more information, see: [What are Lifecycle Workflows? (Public Preview)](../governance/what-are-lifecycle-workflows.md).
+++
+### Public Preview - User-to-Group Affiliation recommendation for group Access Reviews
+++
+**Type:** New feature
+**Service category:** Access Reviews
+**Product capability:** Identity Governance
+
+This feature provides Machine Learning based recommendations to the reviewers of Azure AD Access Reviews to make the review experience easier and more accurate. The recommendation detects user affiliation with other users within the group, using a scoring mechanism that computes the user's average distance from other users in the group. For more information, see: [Review recommendations for Access reviews](../governance/review-recommendations-access-reviews.md).
+++
+### General Availability - Group assignment for SuccessFactors Writeback application
+++
+**Type:** New feature
+**Service category:** Provisioning
+**Product capability:** Outbound to SaaS Applications
+
+When configuring writeback of attributes from Azure AD to SAP SuccessFactors Employee Central, you can now specify the scope of users using Azure AD group assignment. For more information, see: [Tutorial: Configure attribute write-back from Azure AD to SAP SuccessFactors](../saas-apps/sap-successfactors-writeback-tutorial.md).
+++
+### General Availability - Number Matching for Microsoft Authenticator notifications
+++
+**Type:** New feature
+**Service category:** Microsoft Authenticator App
+**Product capability:** User Authentication
+
+To prevent accidental notification approvals, admins can now require users to enter the number displayed on the sign-in screen when approving an MFA notification in the Microsoft Authenticator app. We've also refreshed the Azure portal admin UX and Microsoft Graph APIs to make it easier for customers to manage Authenticator app feature roll-outs. As part of this update we have also added the highly requested ability for admins to exclude user groups from each feature.
+
+The number matching feature greatly up-levels the security posture of the Microsoft Authenticator app and protects organizations from MFA fatigue attacks. We highly encourage our customers to adopt this feature using the rollout controls we've built. Number matching will begin to be enabled for all users of the Microsoft Authenticator app starting February 27, 2023.
++
+For more information, see: [How to use number matching in multifactor authentication (MFA) notifications - Authentication methods policy](../authentication/how-to-mfa-number-match.md).
+++
+### General Availability - Additional context in Microsoft Authenticator notifications
+++
+**Type:** New feature
+**Service category:** Microsoft Authenticator App
+**Product capability:** User Authentication
+
+Reduce accidental approvals by showing users additional context in Microsoft Authenticator app notifications. Customers can enhance notifications with the following:
+
+- Application Context: This feature will show users which application they're signing into.
+- Geographic Location Context: This feature will show users their sign-in location based on the IP address of the device they're signing into.
+
+The feature is available for both MFA and Password-less Phone Sign-in notifications and greatly increases the security posture of the Microsoft Authenticator app. We've also refreshed the Azure portal Admin UX and Microsoft Graph APIs to make it easier for customers to manage Authenticator app feature roll-outs. As part of this update, we've also added the highly requested ability for admins to exclude user groups from certain features.
+
+We highly encourage our customers to adopt these critical security features to reduce accidental approvals of Authenticator notifications by end users.
++
+For more information, see: [How to use additional context in Microsoft Authenticator notifications - Authentication methods policy](../authentication/how-to-mfa-additional-context.md).
+++
+### New Federated Apps available in Azure AD Application gallery - October 2022
+++
+**Type:** New feature
+**Service category:** Enterprise Apps
+**Product capability:** 3rd Party Integration
+++
+In October 2022, we added the following 15 new applications in our App gallery with Federation support:
+
+[Unifii](https://www.unifii.com.au/), [WaitWell Staff App](https://waitwell.c)
+
+You can also find the documentation for all the applications here: https://aka.ms/AppsTutorial.
+
+To list your application in the Azure AD app gallery, read the details here: https://aka.ms/AzureADAppRequest.
+++++
+### Public preview - New provisioning connectors in the Azure AD Application Gallery - October 2022
+
+**Type:** New feature
+**Service category:** App Provisioning
+**Product capability:** 3rd Party Integration
+
+You can now automate creating, updating, and deleting user accounts for these newly integrated apps:
+
+- [LawVu](../saas-apps/lawvu-provisioning-tutorial.md)
+
+For more information about how to better secure your organization by using automated user account provisioning, see: [Automate user provisioning to SaaS applications with Azure AD](../app-provisioning/user-provisioning.md).
++++++ ## September 2022 ### General Availability - SSPR writeback is now available for disconnected forests using Azure AD Connect cloud sync
Smart Lockout now synchronizes the lockout state across Azure AD data centers, s
-
-## April 2022
--
-### General Availability - Entitlement management separation of duties checks for incompatible access packages
-
-**Type:** Changed feature
-**Service category:** Other
-**Product capability:** Identity Governance
-
-In Azure AD entitlement management, an administrator can now configure the incompatible access packages and groups of an access package in the Azure portal. This prevents a user who already has one of those incompatible access rights from being able to request further access. For more information, see: [Configure separation of duties checks for an access package in Azure AD entitlement management](../governance/entitlement-management-access-package-incompatible.md).
----
-### General Availability - Microsoft Defender for Endpoint Signal in Identity Protection
-
-**Type:** New feature
-**Service category:** Identity Protection
-**Product capability:** Identity Security & Protection
-
-
-Identity Protection now integrates a signal from Microsoft Defender for Endpoint (MDE) that will protect against PRT theft detection. To learn more, see: [What is risk? Azure AD Identity Protection | Microsoft Docs](../identity-protection/concept-identity-protection-risks.md).
-
---
-### General Availability - Entitlement management 3 stages of approval
-
-**Type:** Changed feature
-**Service category:** Other
-**Product capability:** Entitlement Management
-
-
-
-This update extends the Azure AD entitlement management access package policy to allow a third approval stage. This will be able to be configured via the Azure portal or Microsoft Graph. For more information, see: [Change approval and requestor information settings for an access package in Azure AD entitlement management](../governance/entitlement-management-access-package-approval-policy.md).
-
---
-### General Availability - Improvements to Azure AD Smart Lockout
-
-**Type:** Changed feature
-**Service category:** Identity Protection
-**Product capability:** User Management
-
-
-
-With a recent improvement, Smart Lockout now synchronizes the lockout state across Azure AD data centers, so the total number of failed sign-in attempts allowed before an account is locked out will match the configured lockout threshold. For more information, see: [Protect user accounts from attacks with Azure Active Directory smart lockout](../authentication/howto-password-smart-lockout.md).
-
----
-### Public Preview - Integration of Microsoft 365 App Certification details into Azure Active Directory UX and Consent Experiences
-
-**Type:** New feature
-**Service category:** User Access Management
-**Product capability:** AuthZ/Access Delegation
--
-Microsoft 365 Certification status for an app is now available in Azure AD consent UX, and custom app consent policies. The status will later be displayed in several other Identity-owned interfaces such as enterprise apps. For more information, see: [Understanding Azure AD application consent experiences](../develop/application-consent-experience.md).
----
-### Public preview - Use Azure AD access reviews to review access of B2B direct connect users in Teams shared channels
-
-**Type:** New feature
-**Service category:** Access Reviews
-**Product capability:** Identity Governance
-
-Use Azure AD access reviews to review access of B2B direct connect users in Teams shared channels. For more information, see: [Include B2B direct connect users and teams accessing Teams Shared Channels in access reviews (preview)](../governance/create-access-review.md#include-b2b-direct-connect-users-and-teams-accessing-teams-shared-channels-in-access-reviews).
---
-### Public Preview - New MS Graph APIs to configure federated settings when federated with Azure AD
-
-**Type:** New feature
-**Service category:** MS Graph
-**Product capability:** Identity Security & Protection
--
-We're announcing the public preview of following MS Graph APIs and PowerShell cmdlets for configuring federated settings when federated with Azure AD:
-
-|Action |MS Graph API |PowerShell cmdlet |
-||||
-|Get federation settings for a federated domain | [Get internalDomainFederation](/graph/api/internaldomainfederation-get?view=graph-rest-beta&preserve-view=true) | [Get-MgDomainFederationConfiguration](/powershell/module/microsoft.graph.identity.directorymanagement/get-mgdomainfederationconfiguration?view=graph-powershell-beta&preserve-view=true) |
-|Create federation settings for a federated domain | [Create internalDomainFederation](/graph/api/domain-post-federationconfiguration?view=graph-rest-beta&preserve-view=true) | [New-MgDomainFederationConfiguration](/powershell/module/microsoft.graph.identity.directorymanagement/new-mgdomainfederationconfiguration?view=graph-powershell-beta&preserve-view=true) |
-|Remove federation settings for a federated domain | [Delete internalDomainFederation](/graph/api/internaldomainfederation-delete?view=graph-rest-beta&preserve-view=true) | [Remove-MgDomainFederationConfiguration](/powershell/module/microsoft.graph.identity.directorymanagement/remove-mgdomainfederationconfiguration?view=graph-powershell-beta&preserve-view=true) |
-|Update federation settings for a federated domain | [Update internalDomainFederation](/graph/api/internaldomainfederation-update?view=graph-rest-beta&preserve-view=true) | [Update-MgDomainFederationConfiguration](/powershell/module/microsoft.graph.identity.directorymanagement/update-mgdomainfederationconfiguration?view=graph-powershell-beta&preserve-view=true) |
--
-If using older MSOnline cmdlets ([Get-MsolDomainFederationSettings](/powershell/module/msonline/get-msoldomainfederationsettings?view=azureadps-1.0&preserve-view=true) and [Set-MsolDomainFederationSettings](/powershell/module/msonline/set-msoldomainfederationsettings?view=azureadps-1.0&preserve-view=true)), we highly recommend transitioning to the latest MS Graph APIs and PowerShell cmdlets.
-
-For more information, see [internalDomainFederation resource type - Microsoft Graph beta | Microsoft Docs](/graph/api/resources/internaldomainfederation?view=graph-rest-beta&preserve-view=true).
---
-### Public Preview – Ability to force reauthentication on Intune enrollment, risky sign-ins, and risky users
-
-**Type:** New feature
-**Service category:** RBAC role
-**Product capability:** AuthZ/Access Delegation
--
-Added functionality to session controls allowing admins to reauthenticate a user on every sign-in if a user or particular sign-in event is deemed risky, or when enrolling a device in Intune. For more information, see [Configure authentication session management with conditional Access](../conditional-access/howto-conditional-access-session-lifetime.md).
---
-### Public Preview – Protect against by-passing of cloud Azure AD Multi-Factor Authentication when federated with Azure AD
-
-**Type:** New feature
-**Service category:** MS Graph
-**Product capability:** Identity Security & Protection
--
-We're delighted to announce a new security protection that prevents bypassing of cloud Azure AD Multi-Factor Authentication when federated with Azure AD. When enabled for a federated domain in your Azure AD tenant, it ensures that a compromised federated account can't bypass Azure AD Multi-Factor Authentication by imitating that a multi factor authentication has already been performed by the identity provider. The protection can be enabled via new security setting, [federatedIdpMfaBehavior](/graph/api/resources/internaldomainfederation?view=graph-rest-beta#federatedidpmfabehavior-values&preserve-view=true).
-
-We highly recommend enabling this new protection when using Azure AD Multi-Factor Authentication as your multifactor authentication method for your federated users. To learn more about the protection and how to enable it, visit [Enable protection to prevent by-passing of cloud Azure AD Multi-Factor Authentication when federated with Azure AD](/windows-server/identity/ad-fs/deployment/best-practices-securing-ad-fs#enable-protection-to-prevent-by-passing-of-cloud-azure-ad-multi-factor-authentication-when-federated-with-azure-ad).
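As a hedged sketch of enabling the protection with the beta Microsoft Graph PowerShell SDK (the cmdlet parameters are assumed from the `internalDomainFederation` resource; the domain name and federation configuration ID are placeholders):

```powershell
# Reject MFA claims from the federated IdP so Azure AD MFA can't be bypassed.
Connect-MgGraph -Scopes "Domain.ReadWrite.All"
Select-MgProfile -Name "beta"
Update-MgDomainFederationConfiguration `
    -DomainId "contoso.com" `
    -InternalDomainFederationId "<federation-configuration-id>" `
    -FederatedIdpMfaBehavior "rejectMfaByFederatedIdp"
```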
---
-### New Federated Apps available in Azure AD Application gallery - April 2022
-
-**Type:** New feature
-**Service category:** Enterprise Apps
-**Product capability:** Third Party Integration
-
-In April 2022, we added the following 24 new applications in our App gallery with Federation support:
-[X-1FBO](https://www.x1fbo.com/), [select Armor](https://app.clickarmor.c)
-
-You can also find the documentation for all the applications here: https://aka.ms/AppsTutorial.
-
-To list your application in the Azure AD app gallery, read the details here: https://aka.ms/AzureADAppRequest
---
-### General Availability - Customer data storage for Japan customers in Japanese data centers
-
-**Type:** New feature
-**Service category:** App Provisioning
-**Product capability:** GoLocal
-
-From April 15, 2022, Microsoft began storing Azure AD's Customer Data for new tenants with a Japan billing address within the Japanese data centers. For more information, see: [Customer data storage for Japan customers in Azure Active Directory](active-directory-data-storage-japan.md).
----
-### Public Preview - New provisioning connectors in the Azure AD Application Gallery - April 2022
-
-**Type:** New feature
-**Service category:** App Provisioning
-**Product capability:** Third Party Integration
-
-You can now automate creating, updating, and deleting user accounts for these newly integrated apps:
-- [Adobe Identity Management (OIDC)](../saas-apps/adobe-identity-management-provisioning-oidc-tutorial.md)
-- [embed signage](../saas-apps/embed-signage-provisioning-tutorial.md)
-- [KnowBe4 Security Awareness Training](../saas-apps/knowbe4-security-awareness-training-provisioning-tutorial.md)
-- [NordPass](../saas-apps/nordpass-provisioning-tutorial.md)
-
-For more information about how to better secure your organization by using automated user account provisioning, see: [Automate user provisioning to SaaS applications with Azure AD](../app-provisioning/user-provisioning.md)
-----
active-directory Howto Identity Protection Configure Mfa Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/howto-identity-protection-configure-mfa-policy.md
For more information on Azure AD multifactor authentication, see [What is Azure
1. Browse to **Azure Active Directory** > **Security** > **Identity Protection** > **MFA registration policy**.
1. Under **Assignments**
   1. **Users** - Choose **All users** or **Select individuals and groups** if limiting your rollout.
- 1. Optionally you can choose to exclude users from the policy.
+ 1. Optionally you can choose to exclude users or groups from the policy.
1. **Enforce Policy** - **On**
1. **Save**
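To spot-check the rollout after saving the policy, here's a minimal sketch assuming the Microsoft Graph PowerShell SDK and the `AuditLog.Read.All` scope (the registration-details report also requires an appropriate premium license):

```powershell
# List users who haven't registered for MFA yet.
Connect-MgGraph -Scopes "AuditLog.Read.All"
Get-MgReportAuthenticationMethodUserRegistrationDetail -All |
    Where-Object { -not $_.IsMfaRegistered } |
    Select-Object UserPrincipalName, IsMfaRegistered
```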
active-directory Managed Identities Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/managed-identities-status.md
The following Azure services support managed identities for Azure resources:
| Azure Service Fabric | [Using Managed identities for Azure with Service Fabric](../../service-fabric/concepts-managed-identity.md) |
| Azure SignalR Service | [Managed identities for Azure SignalR Service](../../azure-signalr/howto-use-managed-identity.md) |
| Azure Spring Apps | [Enable system-assigned managed identity for an application in Azure Spring Apps](../../spring-apps/how-to-enable-system-assigned-managed-identity.md) |
-| Azure SQL | [Azure SQL Transparent Data Encryption with customer-managed key](/azure/azure-sql/database/transparent-data-encryption-byok-overview) |
-| Azure SQL Managed Instance | [Azure SQL Transparent Data Encryption with customer-managed key](/azure/azure-sql/database/transparent-data-encryption-byok-overview) |
+| Azure SQL | [Managed identities in Azure AD for Azure SQL](/azure/azure-sql/database/authentication-azure-ad-user-assigned-managed-identity) |
+| Azure SQL Managed Instance | [Managed identities in Azure AD for Azure SQL](/azure/azure-sql/database/authentication-azure-ad-user-assigned-managed-identity) |
| Azure Stack Edge | [Manage Azure Stack Edge secrets using Azure Key Vault](../../databox-online/azure-stack-edge-gpu-activation-key-vault.md#recover-managed-identity-access) |
| Azure Static Web Apps | [Securing authentication secrets in Azure Key Vault](../../static-web-apps/key-vault-secrets.md) |
| Azure Stream Analytics | [Authenticate Stream Analytics to Azure Data Lake Storage Gen1 using managed identities](../../stream-analytics/stream-analytics-managed-identities-adls.md) |
active-directory Recommendation Integrate Third Party Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/recommendation-integrate-third-party-apps.md
Title: Azure Active Directory recommendation - Integrate third party apps with Azure AD | Microsoft Docs description: Learn why you should integrate third party apps with Azure AD -+ Previously updated : 08/26/2022- Last updated : 10/31/2022+
-# Azure AD recommendation: Integrate your third party apps
+# Azure AD recommendation: Integrate third party apps
-[Azure AD recommendations](overview-recommendations.md) is a feature that provides you with personalized insights and actionable guidance to align your tenant with recommended best practices.
-
-This article covers the recommendation to integrate third party apps.
+[Azure Active Directory (Azure AD) recommendations](overview-recommendations.md) is a feature that provides you with personalized insights and actionable guidance to align your tenant with recommended best practices.
+This article covers the recommendation to integrate your third party apps with Azure AD.
## Description
-As an Azure AD admin responsible for managing applications, you want to use the Azure AD security features with your third party apps. Integrating these apps into Azure AD enables:
-- You to use one unified method to manage access to your third party apps.
-- Your users to benefit from using single sign-on to access all your apps with a single password.
-
+As an Azure AD admin responsible for managing applications, you want to use the Azure AD security features with your third party apps. Integrating these apps into Azure AD enables you to use one unified method to manage access to your third party apps. Your users also benefit from using single sign-on to access all your apps with a single password.
-## Logic
-
-If Azure AD determines that none of your users are using Azure AD to authenticate to your third party apps, this recommendation shows up.
+If Azure AD determines that none of your users are using Azure AD to authenticate to your third party apps, this recommendation shows up.
## Value
-Integrating third party apps with Azure AD allows you to use Azure AD's security features.
-The integration:
+Integrating third party apps with Azure AD allows you to utilize the core identity and access features provided by Azure AD. You can manage access, single sign-on, and other properties, and add an extra security layer by using [Conditional Access](../conditional-access/overview.md) to control how your users can access your apps.
+
+Integrating third party apps with Azure AD:
- Improves the productivity of your users. - Lowers your app management cost.
-You can then add an extra security layer by using conditional access to control how your users can access your apps.
- ## Action plan 1. Review the configuration of your apps.
-2. For each app that isn't integrated into Azure AD yet, verify whether an integration is possible.
+2. For each app that isn't integrated into Azure AD, verify whether an integration is possible.
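As a starting point for step 1, a minimal sketch with the Microsoft Graph PowerShell SDK (assumed tooling, not part of the original recommendation) that inventories the apps already integrated with Azure AD:

```powershell
# List the tenant's enterprise apps (service principals) and how they sign users in.
Connect-MgGraph -Scopes "Application.Read.All"
Get-MgServicePrincipal -All |
    Select-Object DisplayName, ServicePrincipalType, PreferredSingleSignOnMode
```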
## Next steps

-- [Tutorials for integrating SaaS applications with Azure Active Directory](../saas-apps/tutorial-list.md)
-- [Azure AD reports overview](overview-reports.md)
+- [Explore tutorials for integrating SaaS applications with Azure AD](../saas-apps/tutorial-list.md)
active-directory Recommendation Mfa From Known Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/recommendation-mfa-from-known-devices.md
Title: Azure Active Directory recommendation - Minimize MFA prompts from known devices in Azure AD | Microsoft Docs description: Learn why you should minimize MFA prompts from known devices in Azure AD. -+ - Previously updated : 08/26/2022- Last updated : 10/31/2022+
This article covers the recommendation to minimize multi-factor authentication (MFA) prompts from known devices.
## Description
-As an admin, you want to maintain security for my company's resources, but you also want your employees to easily access resources as needed.
+As an admin, you want to maintain security for your company's resources, but you also want your employees to easily access resources as needed.
MFA enables you to enhance the security posture of your tenant. While enabling MFA is a good practice, you should try to keep the number of MFA prompts your users have to go through at a minimum. One option you have to accomplish this goal is to **allow users to remember multi-factor authentication on devices they trust**.
active-directory Recommendation Migrate Apps From Adfs To Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/recommendation-migrate-apps-from-adfs-to-azure-ad.md
Title: Azure Active Directory recommendation - Migrate apps from ADFS to Azure AD in Azure AD | Microsoft Docs description: Learn why you should migrate apps from ADFS to Azure AD in Azure AD -+ - Previously updated : 08/26/2022- Last updated : 10/31/2022+
[Azure AD recommendations](overview-recommendations.md) is a feature that provides you with personalized insights and actionable guidance to align your tenant with recommended best practices.
-This article covers the recommendation to migrate apps from ADFS to Azure AD.
+This article covers the recommendation to migrate apps from ADFS to Azure Active Directory (Azure AD).
## Description
active-directory Recommendation Migrate To Authenticator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/recommendation-migrate-to-authenticator.md
Title: Azure Active Directory recommendation - Migrate to Microsoft authenticator | Microsoft Docs description: Learn why you should migrate your users to the Microsoft authenticator app in Azure AD. -+ - Previously updated : 08/26/2022- Last updated : 10/31/2022+
active-directory Recommendation Turn Off Per User Mfa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/recommendation-turn-off-per-user-mfa.md
Title: Azure Active Directory recommendation - Turn off per user MFA in Azure AD | Microsoft Docs description: Learn why you should turn off per user MFA in Azure AD -+ Previously updated : 08/26/2022- Last updated : 10/31/2022+
-# Azure AD recommendation: Turn off per user MFA
+# Azure AD recommendation: Convert per-user MFA to Conditional Access MFA
[Azure AD recommendations](overview-recommendations.md) is a feature that provides you with personalized insights and actionable guidance to align your tenant with recommended best practices. -
-This article covers the recommendation to turn off per user MFA.
-
+This article covers the recommendation to convert per-user Multi-factor authentication (MFA) accounts to Conditional Access (CA) MFA accounts.
## Description
-As an admin, you want to maintain security for my company's resources, but you also want your employees to easily access resources as needed.
-
-Multi-factor authentication (MFA) enables you to enhance the security posture of your tenant. In your tenant, you can enable MFA on a per-user basis. In this scenario, your users perform MFA each time they sign in (with some exceptions, such as when they sign in from trusted IP addresses or when the remember MFA on trusted devices feature is turned on).
-
-While enabling MFA is a good practice, you can reduce the number of times your users are prompted for MFA by converting per-user MFA to MFA based on conditional access.
-
+As an admin, you want to maintain security for your company's resources, but you also want your employees to easily access resources as needed. MFA enables you to enhance the security posture of your tenant.
-## Logic
+In your tenant, you can enable MFA on a per-user basis. In this scenario, your users perform MFA each time they sign in, with some exceptions, such as when they sign in from trusted IP addresses or when the remember MFA on trusted devices feature is turned on. While enabling MFA is a good practice, converting per-user MFA to MFA based on [Conditional Access](../conditional-access/overview.md) can reduce the number of times your users are prompted for MFA.
-This recommendation shows up, if:
+This recommendation shows up if:
-- You have per-user MFA configured for at least 5% of your users
-- Conditional access policies are active for more than 1% of your users (indicating familiarity with CA policies).
+- You have per-user MFA configured for at least 5% of your users.
+- Conditional Access policies are active for more than 1% of your users (indicating familiarity with CA policies).
## Value
-This recommendation improves your user's productivity and minimizes the sign-in time with fewer MFA prompts. Ensure that your most sensitive resources can have the tightest controls, while your least sensitive resources can be more freely accessible.
+This recommendation improves your users' productivity and minimizes the sign-in time with fewer MFA prompts. CA and MFA used together help ensure that your most sensitive resources can have the tightest controls, while your least sensitive resources can be more freely accessible.
## Action plan
-1. To get started, confirm that there's an existing conditional access policy with an MFA requirement. Ensure that you're covering all resources and users you would like to secure with MFA. Review your [conditional access policies](https://portal.azure.com/?Microsoft_AAD_IAM_enableAadvisorFeaturePreview=true&amp%3BMicrosoft_AAD_IAM_enableAadvisorFeature=true#blade/Microsoft_AAD_IAM/PoliciesTemplateBlade).
+1. Confirm that there's an existing CA policy with an MFA requirement. Ensure that you're covering all resources and users you would like to secure with MFA.
+ - Review your [Conditional Access policies](https://portal.azure.com/?Microsoft_AAD_IAM_enableAadvisorFeaturePreview=true&amp%3BMicrosoft_AAD_IAM_enableAadvisorFeature=true#blade/Microsoft_AAD_IAM/PoliciesTemplateBlade).
-2. To require MFA using a conditional access policy, follow the steps in [Secure user sign-in events with Azure AD Multi-Factor Authentication](../authentication/tutorial-enable-azure-mfa.md).
+2. Require MFA using a Conditional Access policy.
+ - [Secure user sign-in events with Azure AD Multi-Factor Authentication](../authentication/tutorial-enable-azure-mfa.md).
3. Ensure that the per-user MFA configuration is turned off.
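For step 2, a hedged sketch with the Microsoft Graph PowerShell SDK that creates a report-only CA policy requiring MFA for all users; the display name is illustrative, and in practice you'd exclude break-glass accounts:

```powershell
# Create a report-only Conditional Access policy that requires MFA for all users.
Connect-MgGraph -Scopes "Policy.ReadWrite.ConditionalAccess"
$policy = @{
    DisplayName   = "Require MFA for all users (report-only)"  # illustrative name
    State         = "enabledForReportingButNotEnforced"
    Conditions    = @{
        Users        = @{ IncludeUsers = @("All") }
        Applications = @{ IncludeApplications = @("All") }
    }
    GrantControls = @{
        Operator        = "OR"
        BuiltInControls = @("mfa")
    }
}
New-MgIdentityConditionalAccessPolicy -BodyParameter $policy
```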
-
+After all users have been migrated to CA MFA accounts, the recommendation status automatically updates the next time the service runs. Continue to review your CA policies to improve the overall health of your tenant.
## Next steps

-- [Tutorials for integrating SaaS applications with Azure Active Directory](../saas-apps/tutorial-list.md)
-- [Azure AD reports overview](overview-reports.md)
+- [Learn about requiring MFA for all users using Conditional Access](../conditional-access/howto-conditional-access-policy-all-users-mfa.md)
+- [View the MFA CA policy tutorial](../authentication/tutorial-enable-azure-mfa.md)
active-directory Reference Azure Ad Sla Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/reference-azure-ad-sla-performance.md
For each month, we truncate the SLA attainment at three places after the decimal
| June | 99.999% | 99.999% |
| July | 99.999% | 99.999% |
| August | 99.999% | 99.999% |
-| September | 99.999% | |
+| September | 99.999% | 99.998% |
| October | 99.999% | |
| November | 99.998% | |
| December | 99.978% | |
active-directory Tutorial Access Api With Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/tutorial-access-api-with-certificates.md
Title: Tutorial for AD Reporting API with certificates | Microsoft Docs description: This tutorial explains how to use the Azure AD Reporting API with certificate credentials to get data from directories without user intervention. -+ - Previously updated : 08/26/2022- Last updated : 10/31/2022+ -
-# Customer intent: As a developer, I want to learn how to access the Azure AD reporting API using certificates so that I can create an application that does not require user intervention to access reports.
+
+# Customer intent: As a developer, I want to learn how to access the Azure AD reporting API using certificates so that I can create an application that does not require user intervention to access reports.
+ # Tutorial: Get data using the Azure Active Directory reporting API with certificates
-The [Azure Active Directory (Azure AD) reporting APIs](concept-reporting-api.md) provide you with programmatic access to the data through a set of REST-based APIs. You can call these APIs from a variety of programming languages and tools. If you want to access the Azure AD Reporting API without user intervention, you must configure your access to use certificates.
+The [Azure Active Directory (Azure AD) reporting APIs](concept-reporting-api.md) provide you with programmatic access to the data through a set of REST-based APIs. You can call these APIs from various programming languages and tools. If you want to access the Azure AD Reporting API without user intervention, you must configure your access to use certificates.
In this tutorial, you learn how to use a test certificate to access the MS Graph API for reporting. We don't recommend using test certificates in a production environment.

## Prerequisites
-1. To access sign-in data, make sure you have an Azure Active Directory tenant with a premium (P1/P2) license. See [Getting started with Azure Active Directory Premium](../fundamentals/active-directory-get-started-premium.md) to upgrade your Azure Active Directory edition. Note that if you did not have any activities data prior to the upgrade, it will take a couple of days for the data to show up in the reports after you upgrade to a premium license.
+1. To access sign-in data, make sure you have an Azure AD tenant with a premium (P1/P2) license. See [Getting started with Azure Active Directory Premium](../fundamentals/active-directory-get-started-premium.md) to upgrade your Azure AD edition. If you didn't have any activities data prior to the upgrade, it will take a couple of days for the data to show up in the reports after you upgrade to a premium license.
-2. Create or switch to a user account in the **global administrator**, **security administrator**, **security reader** or **report reader** role for the tenant.
+2. Create or switch to a user account in the **Global Administrator**, **Security Administrator**, **Security Reader** or **Report Reader** role for the tenant.
3. Complete the [prerequisites to access the Azure Active Directory reporting API](howto-configure-prerequisites-for-reporting-api.md).
4. Download and install [Azure AD PowerShell V2](https://github.com/Azure/azure-docs-powershell-azuread/blob/master/docs-conceptual/azureadps-2.0/install-adv2.md).
5. Install [MSCloudIdUtils](https://www.powershellgallery.com/packages/MSCloudIdUtils/). This module provides several utility cmdlets including:
- - The ADAL libraries needed for authentication
- - Access tokens from user, application keys, and certificates using ADAL
+   - The Microsoft Authentication Library (MSAL) libraries needed for authentication
+   - Access tokens from user, application keys, and certificates using MSAL
   - Graph API handling paged results
6. If it's your first time using the module, run **Install-MSCloudIdUtilsModule**; otherwise, import it using the **Import-Module** PowerShell command. Your session should look similar to this screen:
In this tutorial, you learn how to use a test certificate to access the MS Graph
## Get data using the Azure Active Directory reporting API with certificates
-1. Navigate to the [Azure portal](https://portal.azure.com), select **Azure Active Directory**, then select **App registrations** and choose your application from the list.
+1. Go to the [Azure portal](https://portal.azure.com) > **Azure Active Directory** > **App registrations** and choose your application from the list.
-2. Select **Certificates & secrets** under **Manage** section on Application registration blade and select **Upload Certificate**.
+2. From the Application registration area, select **Certificates & secrets** under the **Manage** section, and then select **Upload Certificate**.
3. Select the certificate file from the previous step and select **Add**.
-4. Note the Application ID, and the thumbprint of the certificate you just registered with your application. To find the thumbprint, from your application page in the portal, go to **Certificates & secrets** under **Manage** section. The thumbprint will be under the **Certificates** list.
+4. Note the Application ID and the thumbprint of the certificate you registered with your application. To find the thumbprint, from your application page in the portal, go to **Certificates & secrets** under the **Manage** section. The thumbprint will be under the **Certificates** list.
5. Open the application manifest in the inline manifest editor and verify the *keyCredentials* property is updated with your new certificate information as shown below -
In this tutorial, you learn how to use a test certificate to access the MS Graph
![Screenshot shows a PowerShell window with a command that creates an access token.](./media/tutorial-access-api-with-certificates/getaccesstoken.png)
-7. Use the access token in your PowerShell script to query the Graph API. Use the **Invoke-MSCloudIdMSGraphQuery** cmdlet from the MSCloudIDUtils to enumerate the signins and directoryAudits endpoint. This cmdlet handles multi-paged results, and sends those results to the PowerShell pipeline.
+7. Use the access token in your PowerShell script to query the Graph API. Use the **Invoke-MSCloudIdMSGraphQuery** cmdlet from the MSCloudIDUtils to enumerate the `signins` and `directoryAudits` endpoints. This cmdlet handles multi-paged results and sends those results to the PowerShell pipeline.
-8. Query the directoryAudits endpoint to retrieve the audit logs.
+8. Query the `directoryAudits` endpoint to retrieve the audit logs.
![Screenshot shows a PowerShell window with a command to query the directoryAudits endpoint using the access token from earlier in this procedure.](./media/tutorial-access-api-with-certificates/query-directoryAudits.png)
-9. Query the signins endpoint to retrieve the sign-in logs.
+9. Query the `signins` endpoint to retrieve the sign-in logs.
![Screenshot shows a PowerShell window with a command to query the signins endpoint using the access token from earlier in this procedure.](./media/tutorial-access-api-with-certificates/query-signins.png)
active-directory Tutorial Azure Monitor Stream Logs To Event Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md
Title: Tutorial - Stream logs to an Azure event hub | Microsoft Docs description: Learn how to set up Azure Diagnostics to push Azure Active Directory logs to an event hub -+ - Previously updated : 08/26/2022- Last updated : 10/31/2022+ + # Customer intent: As an IT administrator, I want to learn how to route Azure AD logs to an event hub so I can integrate it with my third party SIEM system.- # Tutorial: Stream Azure Active Directory logs to an Azure event hub
To use this feature, you need:
* An Azure subscription. If you don't have an Azure subscription, you can [sign up for a free trial](https://azure.microsoft.com/free/).
* An Azure AD tenant.
-* A user who's a *global administrator* or *security administrator* for the Azure AD tenant.
+* A user who's a *Global Administrator* or *Security Administrator* for the Azure AD tenant.
* An Event Hubs namespace and an event hub in your Azure subscription. Learn how to [create an event hub](../../event-hubs/event-hubs-create.md).

## Stream logs to an event hub
After data is displayed in the event hub, you can access and read the data in tw
* [Integrate Azure Active Directory logs with ArcSight using Azure Monitor](howto-integrate-activity-logs-with-arcsight.md)
* [Integrate Azure AD logs with Splunk by using Azure Monitor](./howto-integrate-activity-logs-with-splunk.md)
* [Integrate Azure AD logs with SumoLogic by using Azure Monitor](howto-integrate-activity-logs-with-sumologic.md)
-* [Integrate Azure AD logs with Elastic using an event hub](https://github.com/Microsoft/azure-docs/blob/master/articles/active-directory/reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md)
* [Interpret audit logs schema in Azure Monitor](./overview-reports.md)
* [Interpret sign-in logs schema in Azure Monitor](reference-azure-monitor-sign-ins-log-schema.md)
active-directory Tutorial Log Analytics Wizard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/tutorial-log-analytics-wizard.md
Previously updated : 08/26/2022 Last updated : 10/31/2022 --++
In this tutorial, you learn how to:
- An Azure subscription with at least one P1 licensed admin. If you don't have an Azure subscription, you can [sign up for a free trial](https://azure.microsoft.com/free/).

-- An Azure AD tenant.
+- An Azure Active Directory (Azure AD) tenant.
-- A user who's a global administrator or security administrator for the Azure AD tenant.
+- A user who's a Global Administrator or Security Administrator for the Azure AD tenant.
Familiarize yourself with these articles:
advisor Azure Advisor Score https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/azure-advisor-score.md
Title: Optimize Azure workloads by using Advisor Score
-description: Use Azure Advisor Score to get the most out of Azure.
+ Title: Optimize Azure workloads by using Advisor score
+description: Use Azure Advisor score to get the most out of Azure.
Last updated 09/09/2020
-# Optimize Azure workloads by using Advisor Score
+# Optimize Azure workloads by using Advisor score
-## Introduction to Advisor Score
+## Introduction to Advisor score
Azure Advisor provides best practice recommendations for your workloads. These recommendations are personalized and actionable to help you:
Azure Advisor provides best practice recommendations for your workloads. These r
* Proactively prevent top issues by following best practices.
* Assess your Azure workloads against the five pillars of the [Microsoft Azure Well-Architected Framework](/azure/architecture/framework/).
-As a core feature of Advisor, Advisor Score can help you achieve these goals effectively and efficiently.
+As a core feature of Advisor, Advisor score can help you achieve these goals effectively and efficiently.
To get the most out of Azure, it's crucial to understand where you are in your workload optimization journey. You need to know which services or resources are consumed well and which are not. Further, you'll want to know how to prioritize your actions, based on recommendations, to maximize the outcome.
-It's also important to track and report the progress you're making in this optimization journey. With Advisor Score, you can easily do all these things with the new gamification experience.
+It's also important to track and report the progress you're making in this optimization journey. With Advisor score, you can easily do all these things with the new gamification experience.
As your personalized cloud consultant, Azure Advisor continually assesses your usage telemetry and resource configuration to check for industry best practices. Advisor then aggregates its findings into a single score. With this score, you can tell at a glance if you're taking the necessary steps to build reliable, secure, and cost-efficient solutions.
The Advisor score consists of an overall score, which can be further broken down
You can track the progress you make over time by viewing your overall score and category score with daily, weekly, and monthly trends. You can also set benchmarks to help you achieve your goals.
- ![Screenshot that shows the Advisor Score page.](./media/advisor-score-1.png)
+![Screenshot that shows the Advisor Score page.](https://user-images.githubusercontent.com/41593141/195171041-3eacca75-751a-4407-bad0-1cf7b21c42ff.png)
## Interpret an Advisor score

Advisor displays your overall Advisor score and a breakdown for Advisor categories, in percentages. A score of 100% in any category means all your resources assessed by Advisor follow the best practices that Advisor recommends. On the other end of the spectrum, a score of 0% means that none of your resources assessed by Advisor follow Advisor's recommendations. Using these score grains, you can easily achieve the following flow:
-* **Advisor Score** helps you baseline how your workload or subscriptions are doing based on an Advisor score. You can also see the historical trends to understand what your trend is.
+* **Advisor score** helps you baseline how your workload or subscriptions are doing based on an Advisor score. You can also see the historical trends to understand what your trend is.
* **Score by category** for each recommendation tells you which outstanding recommendations will improve your score the most. These values reflect both the weight of the recommendation and the predicted ease of implementation. These factors help to make sure you can get the most value with your time. They also help you with prioritization. * **Category score impact** for each recommendation helps you prioritize your remediation actions for each category.
-The contribution of each recommendation to your category score is shown clearly on the **Advisor Score** page in the Azure portal. You can increase each category score by the percentage point listed in the **Potential score increase** column. This value reflects both the weight of the recommendation within the category and the predicted ease of implementation to address the potentially easiest tasks. Focusing on the recommendations with the greatest score impact will help you make the most progress with time.
+The contribution of each recommendation to your category score is shown clearly on the **Advisor score** page in the Azure portal. You can increase each category score by the percentage point listed in the **Potential score increase** column. This value reflects both the weight of the recommendation within the category and the predicted ease of implementation to address the potentially easiest tasks. Focusing on the recommendations with the greatest score impact will help you make the most progress with time.
-![Screenshot that shows the Advisor score impact.](./media/advisor-score-2.png)
+![Screenshot that shows the Advisor score impact.](https://user-images.githubusercontent.com/41593141/195171044-6a45fa99-a291-49f3-8914-2b596771e63b.png)
If any Advisor recommendations aren't relevant for an individual resource, you can postpone or dismiss those recommendations. They'll be excluded from the score calculation with the next refresh. Advisor will also use this input as additional feedback to improve the model.
No. Your score isn't necessarily a reflection of how much you spend. Unnecessary
## Access Advisor Score
-Advisor Score is in public preview in the Azure portal. In the left pane, under the **Advisor** section, see **Advisor Score**.
+In the left pane, under the **Advisor** section, see **Advisor score**.
+
+![Screenshot that shows the Advisor Score entry point.](https://user-images.githubusercontent.com/41593141/195171046-f0db9b6c-b59f-4bef-aa33-6a5c2ace18c0.png)
-![Screenshot that shows the Advisor Score entry point.](./media/advisor-score-3.png)
## Next steps
aks Concepts Sustainable Software Engineering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-sustainable-software-engineering.md
Azure Front Door and Application Gateway help manage traffic from web application
Many attacks on cloud infrastructure seek to misuse deployed resources for the attacker's direct gain leading to an unnecessary spike in usage and cost. Vulnerability scanning tools help minimize the window of opportunity for attackers and mitigate any potential malicious usage of resources.
-* Follow recommendations from [Microsoft Defender for Cloud](/security/benchmark/azure/security-control-vulnerability-management) and run automated vulnerability scanning tools such as [Defender for Containers](/azure/defender-for-cloud/defender-for-containers-va-acr) to avoid unnecessary resource usage by identifying vulnerabilities in your images and minimizing the window of opportunity for attackers.
+* Follow recommendations from [Microsoft Defender for Cloud](/security/benchmark/azure/security-control-vulnerability-management) and run automated vulnerability scanning tools such as [Defender for Containers](/azure/defender-for-cloud/defender-for-containers-vulnerability-assessment-azure) to avoid unnecessary resource usage by identifying vulnerabilities in your images and minimizing the window of opportunity for attackers.
## Next steps
aks Configure Kube Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-kube-proxy.md
az aks update -g <resourceGroup> -n <clusterName> --kube-proxy-config kube-proxy
## Next steps
-Learn more about utilizing the Standard Load Balancer for inbound traffic at the [AKS Standard Load Balancer documentation][load-balancer-standard.md].
+Learn more about utilizing the Standard Load Balancer for inbound traffic at the [AKS Standard Load Balancer documentation](load-balancer-standard.md).
Learn more about using Internal Load Balancer for Inbound traffic at the [AKS Internal Load Balancer documentation](internal-lb.md).
aks Kubernetes Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/kubernetes-action.md
The following shows an example output from the above command.
```

In your GitHub repository, create the below secrets for your action to use. To create a secret:
-1. Navigate to the repository's settings, and click *Secrets* then *Actions*.
-1. For each secret, click *New Repository Secret* and enter the name and value of the secret.
+1. Navigate to the repository's settings, and select **Security > Secrets and variables > Actions**.
+1. For each secret, click **New Repository Secret** and enter the name and value of the secret.
For more details on creating secrets, see [Encrypted Secrets][github-actions-secrets].
aks Use Pod Security Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-pod-security-policies.md
Last updated 03/25/2021
# Preview - Secure your cluster using pod security policies in Azure Kubernetes Service (AKS)
-> [!WARNING]
-> **The feature described in this document, pod security policy (preview), will begin [deprecation](https://kubernetes.io/blog/2021/04/06/podsecuritypolicy-deprecation-past-present-and-future/) with Kubernetes version 1.21, with its removal in version 1.25.** You can now [Migrate Pod Security Policy to Pod Security Admission Controller](https://kubernetes.io/docs/tasks/configure-pod-container/migrate-from-psp/) ahead of the deprecation.
->
-> After pod security policy (preview) is deprecated, you must have already migrated to Pod Security Admission controller or disabled the feature on any existing clusters using the deprecated feature to perform future cluster upgrades and stay within Azure support.
+> [!IMPORTANT]
+> The feature described in this document, pod security policy (preview), will begin deprecation with Kubernetes version 1.21, with its removal in version 1.25. AKS will mark Pod Security Policy as "Deprecated" in the AKS API on 04-01-2023. You can now [Migrate Pod Security Policy to Pod Security Admission Controller](https://kubernetes.io/docs/tasks/configure-pod-container/migrate-from-psp/) ahead of the deprecation.
-To improve the security of your AKS cluster, you can limit what pods can be scheduled. Pods that request resources you don't allow can't run in the AKS cluster. You define this access using pod security policies. This article shows you how to use pod security policies to limit the deployment of pods in AKS.
+After pod security policy (preview) is deprecated, you must have already migrated to Pod Security Admission controller or disabled the feature on any existing clusters using the deprecated feature to perform future cluster upgrades and stay within Azure support.
[!INCLUDE [preview features callout](./includes/preview/preview-callout.md)]
analysis-services Analysis Services Create Bicep File https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-create-bicep-file.md
Title: Quickstart - Create an Azure Analysis Services server resource by using B
description: Quickstart showing how to an Azure Analysis Services server resource by using a Bicep file. Last updated 03/08/2022 -+ tags: azure-resource-manager, bicep
analysis-services Analysis Services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-overview.md
Azure Analysis Services is a fully managed platform as a service (PaaS) that pro
![Data sources](./media/analysis-services-overview/aas-overview-overall.png)
-**Video:** Check out [Azure Analysis Services Overview](https://sec.ch9.ms/ch9/d6dd/a1cda46b-ef03-4cea-8f11-68da23c5d6dd/AzureASoverview_high.mp4) to learn how Azure Analysis Services fits in with Microsoft's overall BI capabilities.
+**Video:** Check out [Azure Analysis Services Overview](https://www.youtube.com/watch?v=m1jnG1zIvTo&t=31s) to learn how Azure Analysis Services fits in with Microsoft's overall BI capabilities.
## Get up and running quickly
app-service App Service Sql Asp Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-sql-asp-github-actions.md
In the example, replace the placeholders with your subscription ID, resource gro
## Configure the GitHub secret for authentication
-In [GitHub](https://github.com/), browse your repository, select **Settings > Secrets > Add a new secret**.
-
-To use [user-level credentials](#generate-deployment-credentials), paste the entire JSON output from the Azure CLI command into the secret's value field. Name the secret `AZURE_CREDENTIALS`.
## Add GitHub secrets for your build
app-service App Service Sql Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-sql-github-actions.md
In the example, replace the placeholders with your subscription ID, resource gro
## Configure the GitHub secret for authentication
-In [GitHub](https://github.com/), browse your repository, select **Settings > Secrets > Add a new secret**.
-
-To use [user-level credentials](#generate-deployment-credentials), paste the entire JSON output from the Azure CLI command into the secret's value field. Give the secret the name `AZURE_CREDENTIALS`.
## Add a SQL Server secret
app-service Deploy Container Github Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-container-github-action.md
OpenID Connect is an authentication method that uses short-lived tokens. Setting
# [Publish profile](#tab/publish-profile)
-In [GitHub](https://github.com/), browse your repository, select **Settings > Secrets > Add a new secret**.
+In [GitHub](https://github.com/), browse your repository. Select **Settings > Security > Secrets and variables > Actions > New repository secret**.
To use [app-level credentials](#generate-deployment-credentials), paste the contents of the downloaded publish profile file into the secret's value field. Name the secret `AZURE_WEBAPP_PUBLISH_PROFILE`.
When you configure your GitHub workflow, you use the `AZURE_WEBAPP_PUBLISH_PROFI
# [Service principal](#tab/service-principal)
-In [GitHub](https://github.com/), browse your repository, select **Settings > Secrets > Add a new secret**.
+In [GitHub](https://github.com/), browse your repository. Select **Settings > Security > Secrets and variables > Actions > New repository secret**.
To use [user-level credentials](#generate-deployment-credentials), paste the entire JSON output from the Azure CLI command into the secret's value field. Give the secret the name like `AZURE_CREDENTIALS`.
When you configure the workflow file later, you use the secret for the input `cr
You need to provide your application's **Client ID**, **Tenant ID** and **Subscription ID** to the login action. These values can either be provided directly in the workflow or can be stored in GitHub secrets and referenced in your workflow. Saving the values as GitHub secrets is the more secure option.
-1. Open your GitHub repository and go to **Settings**.
+1. Open your GitHub repository and go to **Settings > Security > Secrets and variables > Actions > New repository secret**.
1. Create secrets for `AZURE_CLIENT_ID`, `AZURE_TENANT_ID`, and `AZURE_SUBSCRIPTION_ID`. Use these values from your Active Directory application for your GitHub secrets. You can find these values in the Azure portal by searching for your active directory application.
app-service Deploy Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-github-actions.md
To learn how to create a Create an active directory application, service princip
# [Publish profile](#tab/applevel)
-In [GitHub](https://github.com/), browse your repository, select **Settings > Secrets > Add a new secret**.
+In [GitHub](https://github.com/), browse your repository. Select **Settings > Security > Secrets and variables > Actions > New repository secret**.
To use [app-level credentials](#generate-deployment-credentials), paste the contents of the downloaded publish profile file into the secret's value field. Name the secret `AZURE_WEBAPP_PUBLISH_PROFILE`.
When you configure your GitHub workflow, you use the `AZURE_WEBAPP_PUBLISH_PROFI
# [Service principal](#tab/userlevel)
-In [GitHub](https://github.com/), browse your repository, select **Settings > Secrets > Add a new secret**.
+In [GitHub](https://github.com/), browse your repository. Select **Settings > Security > Secrets and variables > Actions > New repository secret**.
To use [user-level credentials](#generate-deployment-credentials), paste the entire JSON output from the Azure CLI command into the secret's value field. Give the secret the name `AZURE_CREDENTIALS`.
When you configure the workflow file later, you use the secret for the input `cr
You need to provide your application's **Client ID**, **Tenant ID** and **Subscription ID** to the login action. These values can either be provided directly in the workflow or can be stored in GitHub secrets and referenced in your workflow. Saving the values as GitHub secrets is the more secure option.
-1. Open your GitHub repository and go to **Settings**.
+1. Open your GitHub repository and go to **Settings > Security > Secrets and variables > Actions > New repository secret**.
1. Create secrets for `AZURE_CLIENT_ID`, `AZURE_TENANT_ID`, and `AZURE_SUBSCRIPTION_ID`. Use these values from your Active Directory application for your GitHub secrets:
app-service Tutorial Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-custom-container.md
To deploy a container to Azure App Service, you first create a web app on App Se
An App Service plan corresponds to the virtual machine that hosts the web app. By default, the previous command uses an inexpensive [B1 pricing tier](https://azure.microsoft.com/pricing/details/app-service/linux/) that is free for the first month. You can control the tier with the `--sku` parameter.
-1. Create the web app with the [`az webpp create`](/cli/azure/webapp#az-webapp-create) command:
+1. Create the web app with the [`az webapp create`](/cli/azure/webapp#az-webapp-create) command:
```azurecli-interactive az webapp create --resource-group myResourceGroup --plan myAppServicePlan --name <app-name> --deployment-container-image-name <registry-name>.azurecr.io/appsvc-tutorial-custom-image:latest
application-gateway Ingress Controller Install New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-install-new.md
This step will add the following components to your subscription:
wget https://raw.githubusercontent.com/Azure/application-gateway-kubernetes-ingress/master/deploy/azuredeploy.json -O template.json
```
-1. Deploy the Azure Resource Manager template using `az cli`. The deployment might take up to 5 minutes.
+1. Deploy the Azure Resource Manager template using the Azure CLI. The deployment might take up to 5 minutes.
+   ```azurecli
   resourceGroupName="MyResourceGroup"
   location="westus2"
applied-ai-services Concept Read https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-read.md
recommendations: false
Optical Character Recognition (OCR) for documents is optimized for large text-heavy documents in multiple file formats and global languages. It should include features like higher-resolution scanning of document images for better handling of smaller and dense text, paragraphs detection, handling fillable forms, and advanced forms and document scenarios like single character boxes and accurate extraction of key fields commonly found in invoices, receipts, and other prebuilt scenarios.
-## Form Recognizer Read model
+## OCR in Form Recognizer - Read model
Form Recognizer v3.0ΓÇÖs Read Optical Character Recognition (OCR) model runs at a higher resolution than Computer Vision Read and extracts print and handwritten text from PDF documents and scanned images. It also includes preview support for extracting text from Microsoft Word, Excel, PowerPoint, and HTML documents. It detects paragraphs, text lines, words, locations, and languages, and is the underlying OCR engine for other Form Recognizer models like Layout, General Document, Invoice, Receipt, Identity (ID) document, and other prebuilt models, as well as custom models.
-## Supported document types
+## OCR supported document types
> [!NOTE]
>
Try extracting text from forms and documents using the Form Recognizer Studio. Y
## Supported languages and locales
-Form Recognizer v3.0 version supports several languages for the read model. *See* our [Language Support](language-support.md) for a complete list of supported handwritten and printed languages.
+Form Recognizer v3.0 supports several languages for the read OCR model. *See* our [Language Support](language-support.md) for a complete list of supported handwritten and printed languages.
## Data detection and extraction
applied-ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/overview.md
Title: Intelligent document processing - Form Recognizer
+ Title: Form Recognizer overview
-description: Machine-learning based OCR and document understanding service to automate extraction of text, table and structure, and key-value pairs from your forms and documents.
+description: Machine-learning based OCR and intelligent document processing understanding service to automate extraction of text, table and structure, and key-value pairs from your forms and documents.
Previously updated : 10/20/2022 Last updated : 10/31/2022 recommendations: false
recommendations: false
<!-- markdownlint-disable MD024 --> <!-- markdownlint-disable MD036 -->
-# What is Intelligent Document Processing?
-
-Intelligent Document Processing (IDP) refers to capturing, transforming, and processing data from documents (e.g., PDF, or scanned documents including Microsoft Office and HTML documents). It typically uses advanced machine-learning based technologies like computer vision, Optical Character Recognition (OCR), document layout analysis, and Natural Language Processing (NLP) to extract meaningful information, process and integrate with other systems.
-
-IDP solutions can extract data from structured documents with pre-defined layouts like a tax form, unstructured or free-form documents like a contract, and semi-structured documents. They have a wide variety of benefits spanning knowledge mining, business process automation, and industry-specific applications. Examples include invoice processing, medical claims processing, and contracts workflow automation.
-
-## What is Azure Form Recognizer?
+# What is Azure Form Recognizer?
::: moniker range="form-recog-3.0.0" [!INCLUDE [applies to v3.0](includes/applies-to-v3-0.md)]
applied-ai-services Tutorial Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/tutorial-azure-function.md
Previously updated : 08/23/2022 Last updated : 10/31/2022
Next, you'll add your own code to the Python script to call the Form Recognizer
import time from requests import get, post import os
+ import requests
from collections import OrderedDict import numpy as np import pandas as pd
Next, you'll add your own code to the Python script to call the Form Recognizer
    if resp.status_code != 202:
        print("POST analyze failed:\n%s" % resp.text)
- quit()
+ quit()
print("POST analyze succeeded:\n%s" % resp.headers) get_url = resp.headers["operation-location"]
Next, you'll add your own code to the Python script to call the Form Recognizer
            results = resp_json
        else:
            print("GET Layout results failed:\n%s")
- quit()
+ quit()
results = resp_json
In this tutorial, you learned how to use an Azure Function written in Python to
> [Microsoft Power BI](https://powerbi.microsoft.com/integrations/azure-table-storage/) * [What is Form Recognizer?](overview.md)
-* Learn more about the [layout model](concept-layout.md)
+* Learn more about the [layout model](concept-layout.md)
automation Automation Child Runbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-child-runbooks.md
Title: Create modular runbooks in Azure Automation
description: This article explains how to create a runbook that another runbook calls. Previously updated : 10/29/2021 Last updated : 10/16/2022 #Customer intent: As a developer, I want create modular runbooks so that I can be more efficient.
Currently, PowerShell 5.1 is supported and only certain runbook types can call e
* The PowerShell types and the PowerShell Workflow types can't call each other inline. They must use `Start-AzAutomationRunbook`. > [!IMPORTANT]
-> Executing child scripts using `.\child-runbook.ps1` is not supported in PowerShell 7.1 preview.
+> Executing child scripts using `.\child-runbook.ps1` is not supported in PowerShell 7.1 and PowerShell 7.2 (preview).
**Workaround**: Use `Start-AutomationRunbook` (internal cmdlet) or `Start-AzAutomationRunbook` (from *Az.Automation* module) to start another runbook from parent runbook. The publish order of runbooks matters only for PowerShell Workflow and graphical PowerShell Workflow runbooks.
automation Automation Powershell Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-powershell-workflow.md
Title: Learn PowerShell Workflow for Azure Automation
description: This article teaches you the differences between PowerShell Workflow and PowerShell and concepts applicable to Automation runbooks. Previously updated : 10/29/2018 Last updated : 10/16/2022
Runbooks in Azure Automation are implemented as Windows PowerShell workflows, Wi
While a workflow is written with Windows PowerShell syntax and launched by Windows PowerShell, it is processed by Windows Workflow Foundation. The benefits of a workflow over a normal script include simultaneous performance of an action against multiple devices and automatic recovery from failures.

> [!NOTE]
-> This article is applicable for PowerShell 5.1; PowerShell 7.1 (preview) does not support workflows.
-> A PowerShell Workflow script is very similar to a Windows PowerShell script but has some significant differences that can be confusing to a new user. Therefore, we recommend that you write your runbooks using PowerShell Workflow only if you need to use [checkpoints](#use-checkpoints-in-a-workflow).
+> This article is applicable for PowerShell 5.1; PowerShell 7.1 (preview) and PowerShell 7.2 (preview) do not support workflows. A PowerShell Workflow script is very similar to a Windows PowerShell script but has some significant differences that can be confusing to a new user. Therefore, we recommend that you write your runbooks using PowerShell Workflow only if you need to use [checkpoints](#use-checkpoints-in-a-workflow).
For complete details of the topics in this article, see [Getting Started with Windows PowerShell Workflow](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/jj134242(v=ws.11)).
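For instance, a minimal workflow sketch showing where a checkpoint fits; if the job is suspended after the checkpoint, it resumes from that point instead of restarting:

```powershell
workflow Invoke-CheckpointDemo {
    $completed = "Work finished before the checkpoint"
    # Persist the workflow state; a resumed job continues from here.
    Checkpoint-Workflow
    Write-Output $completed
    Write-Output "Work finished after the checkpoint"
}
```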
automation Automation Runbook Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-runbook-gallery.md
Title: Use Azure Automation runbooks and modules in PowerShell Gallery
description: This article tells how to use runbooks and modules from Microsoft GitHub repos and the PowerShell Gallery. Previously updated : 10/29/2021 Last updated : 10/27/2022 # Use existing runbooks and modules
automation Automation Runbook Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-runbook-types.md
Title: Azure Automation runbook types
description: This article describes the types of runbooks that you can use in Azure Automation and considerations for determining which type to use. Previously updated : 11/17/2021-- Last updated : 10/28/2022++ # Azure Automation runbook types
The Azure Automation Process Automation feature supports several types of runboo
| Type | Description | |: |: |
+| [PowerShell](#powershell-runbooks) |Textual runbook based on Windows PowerShell scripting. The currently supported versions are: PowerShell 5.1 (GA), PowerShell 7.1 (preview), and PowerShell 7.2 (preview).|
+| [PowerShell Workflow](#powershell-workflow-runbooks)|Textual runbook based on Windows PowerShell Workflow scripting. |
+| [Python](#python-runbooks) |Textual runbook based on Python scripting. The currently supported versions are: Python 2.7 (GA), Python 3.8 (preview), and Python 3.10 (preview). |
| [Graphical](#graphical-runbooks)|Graphical runbook based on Windows PowerShell and created and edited completely in the graphical editor in Azure portal. | | [Graphical PowerShell Workflow](#graphical-runbooks)|Graphical runbook based on Windows PowerShell Workflow and created and edited completely in the graphical editor in Azure portal. |
-| [PowerShell](#powershell-runbooks) |Textual runbook based on Windows PowerShell scripting. |
-| [PowerShell Workflow](#powershell-workflow-runbooks)|Textual runbook based on Windows PowerShell Workflow scripting. |
-| [Python](#python-runbooks) |Textual runbook based on Python scripting. |
Take into account the following considerations when determining which type to use for a particular runbook.

* You can't convert runbooks from graphical to text type, or the other way around.
* There are limitations when using runbooks of different types as child runbooks. For more information, see [Child runbooks in Azure Automation](automation-child-runbooks.md).
-## Graphical runbooks
-
-You can create and edit graphical and graphical PowerShell Workflow runbooks using the graphical editor in the Azure portal. However, you can't create or edit this type of runbook with another tool. Main features of graphical runbooks:
-
-* Exported to files in your Automation account and then imported into another Automation account.
-* Generate PowerShell code.
-* Converted to or from graphical PowerShell Workflow runbooks during import.
-
-### Advantages
-
-* Use visual insert-link-configure authoring model.
-* Focus on how data flows through the process.
-* Visually represent management processes.
-* Include other runbooks as child runbooks to create high-level workflows.
-* Encourage modular programming.
-
-### Limitations
-
-* Can't create or edit outside the Azure portal.
-* Might require a code activity containing PowerShell code to execute complex logic.
-* Can't convert to one of the [text formats](automation-runbook-types.md), nor can you convert a text runbook to graphical format.
-* Can't view or directly edit PowerShell code that the graphical workflow creates. You can view the code you create in any code activities.
-* Can't run runbooks on a Linux Hybrid Runbook Worker. See [Automate resources in your datacenter or cloud by using Hybrid Runbook Worker](automation-hybrid-runbook-worker.md).
-* Graphical runbooks can't be digitally signed.
## PowerShell runbooks

PowerShell runbooks are based on Windows PowerShell. You directly edit the code of the runbook using the text editor in the Azure portal. You can also use any offline text editor and [import the runbook](manage-runbooks.md) into Azure Automation.
-The PowerShell version is determined by the **Runtime version** specified (that is version 7.1 preview or 5.1). The Azure Automation service supports the latest PowerShell runtime.
+The PowerShell version is determined by the **Runtime version** specified (that is, version 7.2 (preview), 7.1 (preview), or 5.1). The Azure Automation service supports the latest PowerShell runtime.
-The same Azure sandbox and Hybrid Runbook Worker can execute **PowerShell 5.1** and **PowerShell 7.1** runbooks side by side.
+The same Azure sandbox and Hybrid Runbook Worker can execute **PowerShell 5.1** and **PowerShell 7.1 (preview)** runbooks side by side.
> [!NOTE]
-> At the time of runbook execution, if you select **Runtime Version** as **7.1 (preview)**, PowerShell modules targeting 7.1 runtime version is used and if you select **Runtime Version** as **5.1**, PowerShell modules targeting 5.1 runtime version are used.
+> - Currently, PowerShell 7.2 (preview) runtime version is supported in five regions for Cloud jobs only: West Central US, East US, South Africa North, North Europe, and Australia Southeast.
+> - At the time of runbook execution, if you select **Runtime Version** as **7.1 (preview)**, PowerShell modules targeting the 7.1 (preview) runtime version are used, and if you select **Runtime Version** as **5.1**, PowerShell modules targeting the 5.1 runtime version are used. The same applies to PowerShell 7.2 (preview) modules and runbooks.
Ensure that you select the right Runtime Version for modules.
For example : if you are executing a runbook for a SharePoint automation scenari
:::image type="content" source="./media/automation-runbook-types/runbook-types.png" alt-text="runbook Types.":::
+> [!NOTE]
+> Currently, PowerShell 5.1, PowerShell 7.1 (preview), and PowerShell 7.2 (preview) are supported.
-Currently, PowerShell 5.1 and 7.1 (preview) are supported.
+### Advantages
+- Implement all complex logic with PowerShell code without the other complexities of PowerShell Workflow.
+- Start faster than PowerShell Workflow runbooks, since they don't need to be compiled before running.
+- Run in Azure and on Hybrid Runbook Workers for both Windows and Linux.
-### Advantages
+### Limitations and Known issues
-* Implement all complex logic with PowerShell code without the other complexities of PowerShell Workflow.
-* Start faster than PowerShell Workflow runbooks, since they don't need to be compiled before running.
-* Run in Azure and on Hybrid Runbook Workers for both Windows and Linux.
+The following are the current limitations and known issues with PowerShell runbooks:
-### Limitations - version 5.1
+# [PowerShell 5.1](#tab/lps51)
-* You must be familiar with PowerShell scripting.
-* Runbooks can't use [parallel processing](automation-powershell-workflow.md#use-parallel-processing) to execute multiple actions in parallel.
-* Runbooks can't use [checkpoints](automation-powershell-workflow.md#use-checkpoints-in-a-workflow) to resume runbook if there's an error.
-* You can include only PowerShell, PowerShell Workflow runbooks, and graphical runbooks as child runbooks by using the [Start-AzAutomationRunbook](/powershell/module/az.automation/start-azautomationrunbook) cmdlet, which creates a new job.
-* Runbooks can't use the PowerShell [#Requires](/powershell/module/microsoft.powershell.core/about/about_requires) statement, it is not supported in Azure sandbox or on Hybrid Runbook Workers and might cause the job to fail.
+**Limitations**
-### Known issues - version 5.1
+- You must be familiar with PowerShell scripting.
+- Runbooks can't use [parallel processing](automation-powershell-workflow.md#use-parallel-processing) to execute multiple actions in parallel.
+- Runbooks can't use [checkpoints](automation-powershell-workflow.md#use-checkpoints-in-a-workflow) to resume runbook if there's an error.
+- You can include only PowerShell, PowerShell Workflow runbooks, and graphical runbooks as child runbooks by using the [Start-AzAutomationRunbook](/powershell/module/az.automation/start-azautomationrunbook) cmdlet, which creates a new job.
+- Runbooks can't use the PowerShell [#Requires](/powershell/module/microsoft.powershell.core/about/about_requires) statement; it isn't supported in the Azure sandbox or on Hybrid Runbook Workers and might cause the job to fail.
-The following are current known issues with PowerShell runbooks:
+**Known issues**
* PowerShell runbooks can't retrieve an unencrypted [variable asset](./shared-resources/variables.md) with a null value.
* PowerShell runbooks can't retrieve a variable asset with `*~*` in the name.
* A [Get-Process](/powershell/module/microsoft.powershell.management/get-process) operation in a loop in a PowerShell runbook can crash after about 80 iterations.
* A PowerShell runbook can fail if it tries to write a large amount of data to the output stream at once. You can typically work around this issue by having the runbook output just the information needed to work with large objects. For example, instead of using `Get-Process` with no limitations, you can have the cmdlet output just the required parameters as in `Get-Process | Select ProcessName, CPU`.
-### Limitations - 7.1 (preview)
-- The Azure Automation internal PowerShell cmdlets are not supported on a Linux Hybrid Runbook Worker. You must import the `automationassets` module at the beginning of your Python runbook to access the Automation account shared resources (assets) functions. -- For the PowerShell 7 runtime version, the module activities are not extracted for the imported modules.-- *PSCredential* runbook parameter type is not supported in PowerShell 7 runtime version.-- PowerShell 7.x does not support workflows. See [this](/powershell/scripting/whats-new/differences-from-windows-powershell?view=powershell-7.1#powershell-workflow&preserve-view=true) for more details.-- PowerShell 7.x currently does not support signed runbooks.-- Source control integration doesn't support PowerShell 7.1. Also, PowerShell 7.1 runbooks in source control gets created in Automation account as Runtime 5.1.
+# [PowerShell 7.1 (preview)](#tab/lps71)
+
+**Limitations**
-### Known Issues - 7.1 (preview)
+- You must be familiar with PowerShell scripting.
+- The Azure Automation internal PowerShell cmdlets are not supported on a Linux Hybrid Runbook Worker. You must import the `automationassets` module at the beginning of your runbook to access the Automation account shared resources (assets) functions.
+- For the PowerShell 7 runtime version, the module activities are not extracted for the imported modules.
+- *PSCredential* runbook parameter type is not supported in PowerShell 7 runtime version.
+- PowerShell 7.x does not support workflows. For more information, see [PowerShell Workflow differences](/powershell/scripting/whats-new/differences-from-windows-powershell?view=powershell-7.1#powershell-workflow&preserve-view=true).
+- PowerShell 7.x currently does not support signed runbooks.
+- Source control integration doesn't support PowerShell 7.1 (preview). Also, PowerShell 7.1 (preview) runbooks in source control are created in the Automation account as Runtime 5.1.
+
+**Known issues**
- Executing child scripts using `.\child-runbook.ps1` is not supported in this preview. **Workaround**: Use `Start-AutomationRunbook` (internal cmdlet) or `Start-AzAutomationRunbook` (from *Az.Automation* module) to start another runbook from parent runbook.
The following are current known issues with PowerShell runbooks:
- When you import a PowerShell 7.1 module that's dependent on other modules, you may find that the import button is gray even when the PowerShell 7.1 version of the dependent module is installed. For example, Az.Compute version 4.20.0 has a dependency on Az.Accounts being >= 2.6.0. This issue occurs when an equivalent dependent module in PowerShell 5.1 doesn't meet the version requirements. For example, the 5.1 version of Az.Accounts was < 2.6.0.
- When you start a PowerShell 7 runbook using the webhook, it auto-converts the webhook input parameter to an invalid JSON.
+# [PowerShell 7.2 (preview)](#tab/lps72)
+
+**Limitations**
+
+> [!NOTE]
+> Currently, PowerShell 7.2 (preview) runtime version is supported in five regions for Cloud jobs only: West Central US, East US, South Africa North, North Europe, and Australia Southeast.
+
+- You must be familiar with PowerShell scripting.
+- For the PowerShell 7 runtime version, the module activities are not extracted for the imported modules.
+- *PSCredential* runbook parameter type is not supported in PowerShell 7 runtime version.
+- PowerShell 7.x does not support workflows. For more information, see [PowerShell Workflow differences](/powershell/scripting/whats-new/differences-from-windows-powershell?view=powershell-7.1#powershell-workflow&preserve-view=true).
+- PowerShell 7.x currently does not support signed runbooks.
+- Source control integration doesn't support PowerShell 7.2 (preview). Also, PowerShell 7.2 (preview) runbooks in source control are created in the Automation account as Runtime 5.1.
+- Currently, only cloud jobs are supported for the PowerShell 7.2 (preview) runtime version.
+- Logging job operations to the Log Analytics workspace through a linked workspace or diagnostic settings is not supported.
+- Currently, PowerShell 7.2 (preview) runbooks are only supported from the Azure portal. REST API and PowerShell are not supported.
+- Az module 8.3.0 is installed by default and cannot be managed at the Automation account level. Use custom modules to override the Az module to the desired version.
+- The imported PowerShell 7.2 (preview) module is validated during job execution. Ensure that all dependencies for the selected module are also imported for successful job execution.
+
+**Known issues**
+
+- Executing child scripts using `.\child-runbook.ps1` is not supported in this preview.
+ **Workaround**: Use `Start-AutomationRunbook` (internal cmdlet) or `Start-AzAutomationRunbook` (from *Az.Automation* module) to start another runbook from parent runbook.
+- Runbook properties defining logging preferences are not supported in the PowerShell 7 runtime.
+  **Workaround**: Explicitly set the preference at the start of the runbook, as shown below:
+ ```powershell
+ $VerbosePreference = "Continue"
+
+ $ProgressPreference = "Continue"
+ ```
## PowerShell Workflow runbooks

PowerShell Workflow runbooks are text runbooks based on [Windows PowerShell Workflow](automation-powershell-workflow.md). You directly edit the code of the runbook using the text editor in the Azure portal. You can also use any offline text editor and [import the runbook](manage-runbooks.md) into Azure Automation.
->[!NOTE]
-> PowerShell 7.1 does not support workflow runbooks.
+> [!NOTE]
+> PowerShell 7.1 (preview) and PowerShell 7.2 (preview) do not support Workflow runbooks.
### Advantages
PowerShell Workflow runbooks are text runbooks based on [Windows PowerShell Work
## Python runbooks
-Python runbooks compile under Python 2 and Python 3. Python 3 runbooks are currently in preview. You can directly edit the code of the runbook using the text editor in the Azure portal. You can also use an offline text editor and [import the runbook](manage-runbooks.md) into Azure Automation.
-
-Python 3 runbooks are supported in the following Azure global infrastructures:
+Python runbooks compile under Python 2, Python 3.8 (preview), and Python 3.10 (preview). You can directly edit the code of the runbook using the text editor in the Azure portal. You can also use an offline text editor and [import the runbook](manage-runbooks.md) into Azure Automation.
-* Azure global
-* Azure Government
+* Python 3.10 (preview) runbooks are currently supported in five regions for cloud jobs only:
+ - West Central US
+ - East US
+ - South Africa North
+ - North Europe
+ - Australia Southeast
### Advantages
-* Use the robust Python libraries.
-* Can run in Azure or on Hybrid Runbook Workers.
-* For Python 2, Windows Hybrid Runbook Workers are supported with [python 2.7](https://www.python.org/downloads/release/latest/python2) installed.
-* For Python 3 Cloud Jobs, Python 3.8 version is supported. Scripts and packages from any 3.x version might work if the code is compatible across different versions.
-* For Python 3 Hybrid jobs on Windows machines, you can choose to install any 3.x version you may want to use.
-* For Python 3 Hybrid jobs on Linux machines, we depend on the Python 3 version installed on the machine to run DSC OMSConfig and the Linux Hybrid Worker. Different versions should work if there are no breaking changes in method signatures or contracts between versions of Python 3.
+> [!NOTE]
+> Importing a Python package may take several minutes.
+
+- Use robust Python libraries.
+- Can run in Azure or on Hybrid Runbook Workers.
+- For Python 2, Windows Hybrid Runbook Workers are supported with [python 2.7](https://www.python.org/downloads/release/latest/python2) installed.
+- For Python 3.8 (preview) Cloud Jobs, Python 3.8 (preview) version is supported. Scripts and packages from any 3.x version might work if the code is compatible across different versions.
+- For Python 3.8 (preview) Hybrid jobs on Windows machines, you can choose to install any 3.x version you may want to use.
+- For Python 3.8 (preview) Hybrid jobs on Linux machines, we depend on the Python 3 version installed on the machine to run DSC OMSConfig and the Linux Hybrid Worker. Different versions should work if there are no breaking changes in method signatures or contracts between versions of Python 3.
+ ### Limitations
-* You must be familiar with Python scripting.
-* To use third-party libraries, you must [import the packages](python-packages.md) into the Automation account.
-* Using **Start-AutomationRunbook** cmdlet in PowerShell/PowerShell Workflow to start a Python 3 runbook (preview) doesn't work. You can use **Start-AzAutomationRunbook** cmdlet from Az.Automation module or **Start-AzureRmAutomationRunbook** cmdlet from AzureRm.Automation module to work around this limitation. 
-* Azure Automation doesn't supportΓÇ»**sys.stderr**.
-* The Python **automationassets** package is not available on pypi.org, so it's not available for import onto a Windows machine.
+The following are the limitations of Python runbooks:
+
+# [Python 2.7](#tab/py27)
+
+- You must be familiar with Python scripting.
+- For Python 2.7.12 modules, use wheel files targeting cp27-amd64.
+- To use third-party libraries, you must [import the packages](python-packages.md) into the Automation account.
+- Azure Automation doesn't support **sys.stderr**.
+- The Python **automationassets** package is not available on pypi.org, so it's not available for import onto a Windows machine.
++
+# [Python 3.8 (preview)](#tab/py38)
+
+- You must be familiar with Python scripting.
+- For Python 3.8 (preview) modules, use wheel files targeting cp38-amd64.
+- To use third-party libraries, you must [import the packages](python-packages.md) into the Automation account.
+- Using the **Start-AutomationRunbook** cmdlet in PowerShell/PowerShell Workflow to start a Python 3.8 (preview) runbook doesn't work. You can use the **Start-AzAutomationRunbook** cmdlet from the Az.Automation module or the **Start-AzureRmAutomationRunbook** cmdlet from the AzureRm.Automation module to work around this limitation.
+- Azure Automation doesn't support **sys.stderr**.
+- The Python **automationassets** package is not available on pypi.org, so it's not available for import onto a Windows machine.
+
+# [Python 3.10 (preview)](#tab/py10)
+
+**Limitations**
+
+- For Python 3.10 (preview) modules, currently only wheel files targeting cp310 on Linux are supported. [Learn more](./python-3-packages.md).
+- Currently, only cloud jobs are supported for Python 3.10 (preview) runtime versions.
+- Custom packages for Python 3.10 (preview) are only validated during job runtime. The job is expected to fail if the package is not compatible with the runtime or if required package dependencies are not imported into the Automation account.
+- Currently, Python 3.10 (preview) runbooks are only supported from the Azure portal. REST API and PowerShell are not supported.
### Multiple Python versions
-For a Windows Runbook Worker, when running a Python 2 runbook it looks for the environment variable `PYTHON_2_PATH` first and validates whether it points to a valid executable file. For example, if the installation folder is `C:\Python2`, it would check if `C:\Python2\python.exe` is a valid path. If not found, then it looks for the `PATH` environment variable to do a similar check.
+This section applies to Windows Hybrid Runbook Workers. When running a Python 2 runbook, a Windows Runbook Worker first looks for the environment variable `PYTHON_2_PATH` and validates whether it points to a valid executable file. For example, if the installation folder is `C:\Python2`, it checks whether `C:\Python2\python.exe` is a valid path. If not found, it then looks at the `PATH` environment variable to do a similar check.
For Python 3, it looks for the `PYTHON_3_PATH` env variable first and then falls back to the `PATH` environment variable.
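The lookup order can be pictured with a short sketch; this illustrates the documented behavior and is not the worker's actual code:

```python
import os
import shutil

def resolve_python(env_var, exe_name="python.exe"):
    """Mirror the documented lookup: a dedicated environment variable first, then PATH."""
    install_dir = os.getenv(env_var)            # for example, PYTHON_2_PATH or PYTHON_3_PATH
    if install_dir:
        candidate = os.path.join(install_dir, exe_name)
        if os.path.isfile(candidate):           # validate that it points to a real executable
            return candidate
    return shutil.which(exe_name)               # fall back to searching PATH

print(resolve_python("PYTHON_2_PATH"))
print(resolve_python("PYTHON_3_PATH"))
```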
When using only one version of Python, you can add the installation path to the
### Known issues
-For cloud jobs, Python 3 jobs sometimes fail with an exception message `invalid interpreter executable path`. You might see this exception if the job is delayed, starting more than 10 minutes, or using **Start-AutomationRunbook** to start Python 3 runbooks. If the job is delayed, restarting the runbook should be sufficient. Hybrid jobs should work without any issue if using the following steps:
+For cloud jobs, Python 3.8 jobs sometimes fail with an exception message `invalid interpreter executable path`. You might see this exception if the job is delayed by more than 10 minutes or if you use **Start-AutomationRunbook** to start Python 3.8 runbooks. If the job is delayed, restarting the runbook should be sufficient. Hybrid jobs should work without any issue if you use the following steps:
1. Create a new environment variable called `PYTHON_3_PATH` and specify the installation folder. For example, if the installation folder is `C:\Python3`, then this path needs to be added to the variable.
1. Restart the machine after setting the environment variable.
+## Graphical runbooks
+
+You can create and edit graphical and graphical PowerShell Workflow runbooks using the graphical editor in the Azure portal. However, you can't create or edit this type of runbook with another tool. Main features of graphical runbooks:
+
+* Exported to files in your Automation account and then imported into another Automation account.
+* Generate PowerShell code.
+* Converted to or from graphical PowerShell Workflow runbooks during import.
+
+### Advantages
+
+* Use visual insert-link-configure authoring model.
+* Focus on how data flows through the process.
+* Visually represent management processes.
+* Include other runbooks as child runbooks to create high-level workflows.
+* Encourage modular programming.
+
+### Limitations
+
+* Can't create or edit outside the Azure portal.
+* Might require a code activity containing PowerShell code to execute complex logic.
+* Can't convert to one of the [text formats](automation-runbook-types.md), nor can you convert a text runbook to graphical format.
+* Can't view or directly edit PowerShell code that the graphical workflow creates. You can view the code you create in any code activities.
+* Can't run runbooks on a Linux Hybrid Runbook Worker. See [Automate resources in your datacenter or cloud by using Hybrid Runbook Worker](automation-hybrid-runbook-worker.md).
+* Graphical runbooks can't be digitally signed.
## Next steps

* To learn about PowerShell runbooks, see [Tutorial: Create a PowerShell runbook](./learn/powershell-runbook-managed-identity.md).
automation Automation Webhooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-webhooks.md
Consider the following strategies:
## Create a webhook

> [!NOTE]
-> When you use the webhook with PowerShell 7 runbook, it auto-converts the webhook input parameter to an invalid JSON. For more information, see [Known issues - 7.1 (preview)](./automation-runbook-types.md#known-issues71-preview). We recommend that you use the webhook with PowerShell 5 runbook.
+> When you use a webhook with a PowerShell 7 runbook, it auto-converts the webhook input parameter to an invalid JSON. For more information, see [Known issues - PowerShell 7.1 (preview)](./automation-runbook-types.md#limitations-and-known-issues). We recommend that you use the webhook with a PowerShell 5 runbook.
1. Create a PowerShell runbook with the following code:
automation Automation Tutorial Runbook Textual https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/learn/automation-tutorial-runbook-textual.md
Title: Tutorial - Create a PowerShell Workflow runbook in Azure Automation
description: This tutorial teaches you to create, test, and publish a PowerShell Workflow runbook. Previously updated : 10/28/2021 Last updated : 10/16/2022 #Customer intent: As a developer, I want use workflow runbooks so that I can automate the parallel starting of VMs.
This tutorial walks you through the creation of a [PowerShell Workflow runbook](../automation-runbook-types.md#powershell-workflow-runbooks) in Azure Automation. PowerShell Workflow runbooks are text runbooks based on Windows PowerShell Workflow. You can create and edit the code of the runbook using the text editor in the Azure portal.

> [!NOTE]
-> This article is applicable for PowerShell 5.1; PowerShell 7.1 (preview) does not support workflows.
+> This article is applicable for PowerShell 5.1; PowerShell 7.1 (preview) and PowerShell 7.2 (preview) don't support workflows.
In this tutorial, you learn how to:
automation Python 3 Packages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/python-3-packages.md
Title: Manage Python 3 packages in Azure Automation
description: This article tells how to manage Python 3 packages (preview) in Azure Automation. Previously updated : 11/01/2021 Last updated : 10/26/2022 -+ # Manage Python 3 packages (preview) in Azure Automation
-This article describes how to import, manage, and use Python 3 (preview) packages in Azure Automation running on the Azure sandbox environment and Hybrid Runbook Workers.To help simplify runbooks, you can use Python packages to import the modules you need.
-
-To support Python 3 runbooks in the Automation service, Azure package 4.0.0 is installed by default in the Automation account. The default version can be overridden by importing Python packages into your Automation account.
- Preference is given to the imported version in your Automation account. To import a single package, see [Import a package](#import-a-package). To import a package with multiple packages, see [Import a package with dependencies](#import-a-package-with-dependencies).
+This article describes how to import, manage, and use Python 3 (preview) packages in Azure Automation running on the Azure sandbox environment and Hybrid Runbook Workers. Python packages must be downloaded on Hybrid Runbook Workers for successful job execution. To help simplify runbooks, you can use Python packages to import the modules you need.
For information on managing Python 2 packages, see [Manage Python 2 packages](./python-packages.md).
+## Default Python packages
+
+To support Python 3.8 (preview) runbooks in the Automation service, Azure package 4.0.0 is installed by default in the Automation account. The default version can be overridden by importing Python packages into your Automation account.
+
+Preference is given to the imported version in your Automation account. To import a single package, see [Import a package](#import-a-package). To import a package with multiple packages, see [Import a package with dependencies](#import-a-package-with-dependencies).
+
+There are no default packages installed for Python 3.10 (preview).
+ ## Packages as source files
-Azure Automation supports only a Python package that only contains Python code and doesn't include other language extensions or code in other languages. However, the Azure Sandbox environment might not have the required compilers for C/C++ binaries, so it's recommended to use [wheel files](https://pythonwheels.com/) instead. The [Python Package Index](https://pypi.org/) (PyPI) is a repository of software for the Python programming language. When selecting a Python 3 package to import into your Automation account from PyPI, note the following filename parts:
+Azure Automation supports only Python packages that contain Python code and don't include other language extensions or code in other languages. However, the Azure Sandbox environment might not have the required compilers for C/C++ binaries, so it's recommended to use [wheel files](https://pythonwheels.com/) instead.
+
+> [!NOTE]
+> Currently, Python 3.10 (preview) only supports wheel files.
+
+The [Python Package Index](https://pypi.org/) (PyPI) is a repository of software for the Python programming language. When selecting a Python 3 package to import into your Automation account from PyPI, note the following filename parts:
+
+Select a Python version:
+
+#### [Python 3.8 (preview)](#tab/py3)
| Filename part | Description |
|||
-|cp38|Automation supports **Python 3.8.x** for Cloud Jobs.|
+|cp38|Automation supports **Python 3.8 (preview)** for Cloud jobs.|
|amd64|Azure sandbox processes are **Windows 64-bit** architecture.|
-For example, if you wanted to import pandas, you could select a wheel file with a name similar as `pandas-1.2.3-cp38-win_amd64.whl`.
+For example:
+- To import pandas, select a wheel file with a name similar to `pandas-1.2.3-cp38-win_amd64.whl`.
-Some Python packages available on PyPI don't provide a wheel file. In this case, download the source (.zip or .tar.gz file) and generate the wheel file using `pip`. For example, perform the following steps using a 64-bit machine with Python 3.8.x and wheel package installed:
+Some Python packages available on PyPI don't provide a wheel file. In this case, download the source (.zip or .tar.gz file) and generate the wheel file using `pip`.
+
+Perform the following steps using a 64-bit Windows machine with Python 3.8.x and wheel package installed:
1. Download the source file `pandas-1.2.4.tar.gz`.
-1. Run pip to get the wheel file with the following command: `pip wheel --no-deps pandas-1.2.4.tar.gz`.
+1. Run pip to get the wheel file with the following command: `pip wheel --no-deps pandas-1.2.4.tar.gz`
+
+#### [Python 3.10 (preview)](#tab/py10)
+
+| Filename part | Description |
+|||
+|cp310|Automation supports **Python 3.10 (preview)** for Cloud jobs.|
+|manylinux_x86_64|Azure sandbox processes use a Linux-based 64-bit architecture for Python 3.10 (preview) runbooks.|
++
+For example:
+- To import pandas, select a wheel file with a name similar to `pandas-1.5.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl`.
++
+Some Python packages available on PyPI don't provide a wheel file. In this case, download the source (.zip or .tar.gz file) and generate the wheel file using pip.
+
+Perform the following steps using a 64-bit Linux machine with Python 3.10.x and wheel package installed:
+
+1. Download the source file `pandas-1.2.4.tar.gz`.
+1. Run pip to get the wheel file with the following command: `pip wheel --no-deps pandas-1.2.4.tar.gz`
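+Before importing, you can sanity-check a wheel's tags with a short script. This is an illustrative sketch, not part of the Automation service:
+
+```python
+# Wheel filenames follow: {distribution}-{version}[-{build}]-{python tag}-{abi tag}-{platform tag}.whl
+def wheel_tags(filename):
+    parts = filename[:-len(".whl")].split("-")
+    return {
+        "distribution": parts[0],
+        "version": parts[1],
+        "python_tag": parts[-3],    # for example, cp38 or cp310
+        "platform_tag": parts[-1],  # for example, win_amd64 or manylinux...x86_64
+    }
+
+print(wheel_tags("pandas-1.5.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl"))
+```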
## Import a package
Some Python packages available on PyPI don't provide a wheel file. In this case,
:::image type="content" source="media/python-3-packages/add-python-3-package.png" alt-text="Screenshot of the Python packages page shows Python packages in the left menu and Add a Python package highlighted.":::
-1. On the **Add Python Package** page, select a local package to upload. The package can be a **.whl** or **.tar.gz** file.
-1. Enter a name and select the **Runtime Version** as Python 3.8.x (preview)
+1. On the **Add Python Package** page, select a local package to upload. The package can be a **.whl** or **.tar.gz** file for Python 3.8 (preview), or a **.whl** file for Python 3.10 (preview).
+1. Enter a name and select the **Runtime Version** as Python 3.8 (preview) or Python 3.10 (preview).
+ > [!NOTE]
+ > Python 3.10 (preview) runtime version is currently supported in five regions for Cloud jobs only: West Central US, East US, South Africa North, North Europe, and Australia Southeast.
1. Select **Import**
- :::image type="content" source="media/python-3-packages/upload-package.png" alt-text="Screenshot shows the Add Python 3.8.x Package page with an uploaded tar.gz file selected.":::
+ :::image type="content" source="media/python-3-packages/upload-package.png" alt-text="Screenshot shows the Add Python 3.8 (preview) Package page with an uploaded tar.gz file selected.":::
After a package has been imported, it's listed on the Python packages page in your Automation account. To remove a package, select the package and click **Delete**.

### Import a package with dependencies
-You can import a Python 3 package and its dependencies by importing the following Python script into a Python 3 runbook, and then running it.
+You can import a Python 3.8 (preview) package and its dependencies by importing the following Python script into a Python 3 runbook, and then running it.
```cmd
https://github.com/azureautomation/runbooks/blob/master/Utility/Python/import_py3package_from_pypi.py
```
https://github.com/azureautomation/runbooks/blob/master/Utility/Python/import_py
#### Importing the script into a runbook

For information on importing the runbook, see [Import a runbook from the Azure portal](manage-runbooks.md#import-a-runbook-from-the-azure-portal). Copy the file from GitHub to storage that the portal can access before you run the import.
+> [!NOTE]
+> Currently, importing a runbook from the Azure portal isn't supported for Python 3.10 (preview).
The **Import a runbook** page defaults the runbook name to match the name of the script. If you have access to the field, you can change the name. **Runbook type** may default to **Python 2**. If it does, make sure to change it to **Python 3**.

:::image type="content" source="media/python-3-packages/import-python-3-package.png" alt-text="Screenshot shows the Python 3 runbook import page.":::
For more information on using parameters with runbooks, see [Work with runbook p
With the package imported, you can use it in a runbook. Add the following code to list all the resource groups in an Azure subscription.

```python
-import os
-import azure.mgmt.resource
-import automationassets
-
-def get_automation_runas_credential(runas_connection):
- from OpenSSL import crypto
- import binascii
- from msrestazure import azure_active_directory
- import adal
-
- # Get the Azure Automation RunAs service principal certificate
- cert = automationassets.get_automation_certificate("AzureRunAsCertificate")
- pks12_cert = crypto.load_pkcs12(cert)
- pem_pkey = crypto.dump_privatekey(crypto.FILETYPE_PEM,pks12_cert.get_privatekey())
-
- # Get run as connection information for the Azure Automation service principal
- application_id = runas_connection["ApplicationId"]
- thumbprint = runas_connection["CertificateThumbprint"]
- tenant_id = runas_connection["TenantId"]
-
- # Authenticate with service principal certificate
- resource ="https://management.core.windows.net/"
- authority_url = ("https://login.microsoftonline.com/"+tenant_id)
- context = adal.AuthenticationContext(authority_url)
- return azure_active_directory.AdalAuthentication(
- lambda: context.acquire_token_with_client_certificate(
- resource,
- application_id,
- pem_pkey,
- thumbprint)
- )
-
-# Authenticate to Azure using the Azure Automation RunAs service principal
-runas_connection = automationassets.get_automation_connection("AzureRunAsConnection")
-azure_credential = get_automation_runas_credential(runas_connection)
-
-# Intialize the resource management client with the RunAs credential and subscription
-resource_client = azure.mgmt.resource.ResourceManagementClient(
- azure_credential,
- str(runas_connection["SubscriptionId"]))
-
-# Get list of resource groups and print them out
-groups = resource_client.resource_groups.list()
-for group in groups:
- print(group.name)
+#!/usr/bin/env python3
+import os
+import requests
+# Build a managed identity token request from the environment variables that Azure Automation provides
+endPoint = os.getenv('IDENTITY_ENDPOINT')+"?resource=https://management.azure.com/"
+identityHeader = os.getenv('IDENTITY_HEADER')
+payload={}
+headers = {
+ 'X-IDENTITY-HEADER': identityHeader,
+ 'Metadata': 'True'
+}
+response = requests.request("GET", endPoint, headers=headers, data=payload)
+print(response.text)
+
+# As a sketch (not from the original article), use the token returned above to call
+# Azure Resource Manager and list resource groups. Replace <subscription-id> with
+# your own subscription ID; the API version below is an assumption.
+subscription_id = "<subscription-id>"
+token = response.json()["access_token"]
+list_url = ("https://management.azure.com/subscriptions/" + subscription_id
+            + "/resourcegroups?api-version=2021-04-01")
+groups = requests.get(list_url, headers={"Authorization": "Bearer " + token}).json()
+for group in groups["value"]:
+    print(group["name"])
+```

> [!NOTE]
-> The Python `automationassets` package is not available on pypi.org, so it's not available for import onto a Windows machine.
+> The Python `automationassets` package is not available on pypi.org, so it's not available for import onto a Windows Hybrid Runbook Worker.
+ ## Identify available packages in sandbox
for package in installed_packages_list:
    print(package)
```
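A complete, minimal sketch of such a listing, assuming the sandbox exposes the standard `pkg_resources` module:

```python
import pkg_resources

# Enumerate every package visible to the interpreter in the sandbox.
installed_packages = pkg_resources.working_set
installed_packages_list = sorted(
    "{}=={}".format(i.key, i.version) for i in installed_packages
)
for package in installed_packages_list:
    print(package)
```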
+### Python 3.8 (preview) PowerShell cmdlets
+
+#### Add new Python 3.8 (preview) package
+
+```powershell
+New-AzAutomationPython3Package -AutomationAccountName tarademo -ResourceGroupName mahja -Name requires.io -ContentLinkUri https://files.pythonhosted.org/packages/7f/e2/85dfb9f7364cbd7a9213caea0e91fc948da3c912a2b222a3e43bc9cc6432/requires.io-0.2.6-py2.py3-none-any.whl
+
+Response
+ResourceGroupName : mahja
+AutomationAccountName : tarademo
+Name : requires.io
+IsGlobal : False
+Version :
+SizeInBytes : 0
+ActivityCount : 0
+CreationTime : 9/26/2022 1:37:13 PM +05:30
+LastModifiedTime : 9/26/2022 1:37:13 PM +05:30
+ProvisioningState : Creating
+```
+
+#### List all Python 3.8 (preview) packages
+
+```powershell
+Get-AzAutomationPython3Package -AutomationAccountName tarademo -ResourceGroupName mahja
+
+Response :
+ResourceGroupName : mahja
+AutomationAccountName : tarademo
+Name : cryptography
+IsGlobal : False
+Version :
+SizeInBytes : 0
+ActivityCount : 0
+CreationTime : 9/26/2022 11:52:28 AM +05:30
+LastModifiedTime : 9/26/2022 12:11:00 PM +05:30
+ProvisioningState : Failed
+ResourceGroupName : mahja
+AutomationAccountName : tarademo
+Name : requires.io
+IsGlobal : False
+Version :
+SizeInBytes : 0
+ActivityCount : 0
+CreationTime : 9/26/2022 1:37:13 PM +05:30
+LastModifiedTime : 9/26/2022 1:39:04 PM +05:30
+ProvisioningState : ContentValidated
+ResourceGroupName : mahja
+AutomationAccountName : tarademo
+Name : sockets
+IsGlobal : False
+Version : 1.0.0
+SizeInBytes : 4495
+ActivityCount : 0
+CreationTime : 9/20/2022 12:46:28 PM +05:30
+LastModifiedTime : 9/22/2022 5:03:42 PM +05:30
+ProvisioningState : Succeeded
+```
+
+#### Obtain details about a specific package
+
+```powershell
+Get-AzAutomationPython3Package -AutomationAccountName tarademo -ResourceGroupName mahja -Name sockets
++
+Response
+ResourceGroupName : mahja
+AutomationAccountName : tarademo
+Name : sockets
+IsGlobal : False
+Version : 1.0.0
+SizeInBytes : 4495
+ActivityCount : 0
+CreationTime : 9/20/2022 12:46:28 PM +05:30
+LastModifiedTime : 9/22/2022 5:03:42 PM +05:30
+ProvisioningState : Succeeded
+```
+
+#### Remove Python 3.8 (preview) package
+
+```powershell
+Remove-AzAutomationPython3Package -AutomationAccountName tarademo -ResourceGroupName mahja -Name sockets
+```
+
+#### Update Python 3.8 (preview) package
+
+```powershell
+Set-AzAutomationPython3Package -AutomationAccountName tarademo -ResourceGroupName mahja -Name requires.io -ContentLinkUri https://files.pythonhosted.org/packages/7f/e2/85dfb9f7364cbd7a9213caea0e91fc948da3c912a2b222a3e43bc9cc6432/requires.io-0.2.6-py2.py3-none-any.whl
++
+ResourceGroupName : mahja
+AutomationAccountName : tarademo
+Name : requires.io
+IsGlobal : False
+Version : 0.2.6
+SizeInBytes : 10109
+ActivityCount : 0
+CreationTime : 9/26/2022 1:37:13 PM +05:30
+LastModifiedTime : 9/26/2022 1:43:12 PM +05:30
+ProvisioningState : Creating
+```
## Next steps

To prepare a Python runbook, see [Create a Python runbook](learn/automation-tutorial-runbook-textual-python-3.md).
azure-app-configuration Quickstart Python Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-python-provider.md
+
+ Title: Quickstart for using Azure App Configuration with Python apps using the Python provider | Microsoft Docs
+description: In this quickstart, create a Python app with the Azure App Configuration Python provider to centralize storage and management of application settings separate from your code.
+++
+ms.devlang: python
++ Last updated : 10/31/2022+
+#Customer intent: As a Python developer, I want to manage all my app settings in one place.
+
+# Quickstart: Create a Python app with the Azure App Configuration Python provider
+
+In this quickstart, you will use the Python provider for Azure App Configuration to centralize storage and management of application settings using the [Azure App Configuration Python provider client library](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/appconfiguration/azure-appconfiguration-provider).
+
+The Python App Configuration provider is a library running on top of the Azure SDK for Python, helping Python developers easily consume the App Configuration service. It enables configuration settings to be used like a dictionary.
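+For example, once a provider is loaded, lookups read like plain dictionary operations. The following sketch assumes a `connection_string` variable like the one configured later in this quickstart:
+
+```python
+from azure.appconfiguration.provider import AzureAppConfigurationProvider
+
+# Load all settings, then read them with ordinary dictionary syntax.
+config = AzureAppConfigurationProvider.load(connection_string=connection_string)
+print(config["message"])        # direct key lookup
+print("message" in config)      # membership tests work too
+```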
+
+## Prerequisites
+
+- Azure subscription - [create one for free](https://azure.microsoft.com/free/)
+- Python 3.6 or later - for information on setting up Python on Windows, see the [Python on Windows documentation](/windows/python/)
+
+## Create an App Configuration store
++
+9. Select **Configuration Explorer** > **Create** > **Key-value** to add the following key-value pairs:
+
+ | Key | Value | Label | Content type |
+ |-|-|-|--|
+ | *message* | *Hello* | Leave empty | Leave empty |
+ | *test.message* | *Hello test* | Leave empty | Leave empty |
+ | *my_json* | *{"key":"value"}* | Leave empty | *application/json* |
+
+10. Select **Apply**.
+
+## Set up the Python app
+
+1. Create a new directory for the project named *app-configuration-quickstart*.
+
+ ```console
+ mkdir app-configuration-quickstart
+ ```
+
+1. Switch to the newly created *app-configuration-quickstart* directory.
+
+ ```console
+ cd app-configuration-quickstart
+ ```
+
+1. Install the Azure App Configuration provider by using the `pip install` command.
+
+ ```console
+ pip install azure-appconfiguration-provider
+ ```
+
+1. Create a new file called *app-configuration-quickstart.py* in the *app-configuration-quickstart* directory and add the following code:
+
+ ```python
+ from azure.appconfiguration.provider import (
+ AzureAppConfigurationProvider,
+ SettingSelector
+ )
+ import os
+
+ connection_string = os.environ.get("AZURE_APPCONFIG_CONNECTION_STRING")
+
+ # Connect to Azure App Configuration using a connection string.
+ config = AzureAppConfigurationProvider.load(
+ connection_string=connection_string)
+
+ # Find the key "message" and print its value.
+ print(config["message"])
+ # Find the key "my_json" and print the value for "key" from the dictionary.
+ print(config["my_json"]["key"])
+
+ # Connect to Azure App Configuration using a connection string and trimmed key prefixes.
+ trimmed = {"test."}
+ config = AzureAppConfigurationProvider.load(
+ connection_string=connection_string, trimmed_key_prefixes=trimmed)
+ # From the keys with trimmed prefixes, find a key with "message" and print its value.
+ print(config["message"])
+
+ # Connect to Azure App Configuration using SettingSelector.
+ selects = {SettingSelector("message*", "\0")}
+ config = AzureAppConfigurationProvider.load(
+ connection_string=connection_string, selects=selects)
+
+ # Print True or False to indicate if "message" is found in Azure App Configuration.
+ print("message found: " + str("message" in config))
+ print("test.message found: " + str("test.message" in config))
+ ```
+
+## Configure your App Configuration connection string
+
+1. Set an environment variable named **AZURE_APPCONFIG_CONNECTION_STRING**, and set it to the connection string of your App Configuration store. At the command line, run the following command:
+
+ ### [Windows command prompt](#tab/windowscommandprompt)
+
+ To build and run the app locally using the Windows command prompt, run the following command and replace `<app-configuration-store-connection-string>` with the connection string of your app configuration store:
+
+ ```cmd
+ setx AZURE_APPCONFIG_CONNECTION_STRING "<app-configuration-store-connection-string>"
+ ```
+
+ ### [PowerShell](#tab/powershell)
+
+ If you use Windows PowerShell, run the following command and replace `<app-configuration-store-connection-string>` with the connection string of your app configuration store:
+
+ ```azurepowershell
+ $Env:AZURE_APPCONFIG_CONNECTION_STRING = "<app-configuration-store-connection-string>"
+ ```
+
+ ### [macOS](#tab/unix)
+
+ If you use macOS, run the following command and replace `<app-configuration-store-connection-string>` with the connection string of your app configuration store:
+
+ ```console
+ export AZURE_APPCONFIG_CONNECTION_STRING='<app-configuration-store-connection-string>'
+ ```
+
+ ### [Linux](#tab/linux)
+
+ If you use Linux, run the following command and replace `<app-configuration-store-connection-string>` with the connection string of your app configuration store:
+
+ ```console
+ export AZURE_APPCONFIG_CONNECTION_STRING='<app-configuration-store-connection-string>'
+ ```
+
+1. Restart the command prompt to allow the change to take effect. Print the value of the environment variable to validate that it's set properly by using the command below.
+
+ ### [Windows command prompt](#tab/windowscommandprompt)
+
+ Using the Windows command prompt, run the following command:
+
+ ```cmd
+ echo %AZURE_APPCONFIG_CONNECTION_STRING%
+ ```
+
+ ### [PowerShell](#tab/powershell)
+
+ If you use Windows PowerShell, run the following command:
+
+ ```azurepowershell
+ $Env:AZURE_APPCONFIG_CONNECTION_STRING
+ ```
+
+ ### [macOS](#tab/unix)
+
+ If you use macOS, run the following command:
+
+ ```console
+ echo "$AZURE_APPCONFIG_CONNECTION_STRING"
+ ```
+
+ ### [Linux](#tab/linux)
+
+ If you use Linux, run the following command:
+
+ ```console
+ echo "$AZURE_APPCONFIG_CONNECTION_STRING"
+ ```
+
+1. Run the following command to run the app locally:
+
+ ```console
+ python app-configuration-quickstart.py
+ ```
+
+ You should see the following output:
+
+ ```Output
+ Hello
+ value
+ Hello test
+ message found: True
+ test.message found: False
+ ```
+
+## Clean up resources
++
+## Next steps
+
+In this quickstart, you created a new App Configuration store and learned how to access key-values from a Python app.
+
+For additional code samples, visit:
+
+> [!div class="nextstepaction"]
+> [Azure App Configuration Python provider](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/appconfiguration/azure-appconfiguration-provider)
azure-app-configuration Quickstart Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-python.md
- Title: Quickstart for using Azure App Configuration with Python apps | Microsoft Docs
-description: In this quickstart, create a Python app with Azure App Configuration to centralize storage and management of application settings separate from your code.
+
+ Title: Quickstart for using Azure App Configuration with Python apps using the Azure SDK for Python | Microsoft Docs
+description: In this quickstart, create a Python app with the Azure SDK for Python to centralize storage and management of application settings separate from your code.
ms.devlang: python - Previously updated : 9/17/2020+ Last updated : 10/21/2022 #Customer intent: As a Python developer, I want to manage all my app settings in one place.
-# Quickstart: Create a Python app with Azure App Configuration
+# Quickstart: Create a Python app with the Azure SDK for Python
+
+In this quickstart, you will use the Azure SDK for Python to centralize storage and management of application settings using the [Azure App Configuration client library for Python](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/appconfiguration/azure-appconfiguration).
-In this quickstart, you will use Azure App Configuration to centralize storage and management of application settings using the [Azure App Configuration client library for Python](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/appconfiguration/azure-appconfiguration).
+To use Azure App Configuration with the Python provider instead of the SDK, go to [Python provider](./quickstart-python-provider.md). The Python provider enables loading configuration settings from an Azure App Configuration store in a managed way.
## Prerequisites

- Azure subscription - [create one for free](https://azure.microsoft.com/free/)
-- Python 2.7, or 3.6 or later - For information on setting up Python on Windows, see the [Python on Windows documentation](/windows/python/)
+- Python 3.6 or later - for information on setting up Python on Windows, see the [Python on Windows documentation](/windows/python/)
## Create an App Configuration store

[!INCLUDE [azure-app-configuration-create](../../includes/azure-app-configuration-create.md)]
-7. Select **Configuration Explorer** > **Create** > **Key-value** to add the following key-value pairs:
+9. Select **Configuration Explorer** > **Create** > **Key-value** to add the following key-value pairs:
- | Key | Value |
- |||
- | TestApp:Settings:Message | Data from Azure App Configuration |
+ | Key | Value |
+ |-|-|
+ | *TestApp:Settings:Message* | *Data from Azure App Configuration* |
Leave **Label** and **Content Type** empty for now.
-8. Select **Apply**.
+10. Select **Apply**.
## Setting up the Python app
In this quickstart, you will use Azure App Configuration to centralize storage a
```

> [!NOTE]
-> The code snippets in this quickstart will help you get started with the App Configuration client library for Python. For your application, you should also consider handling exceptions according to your needs. To learn more about exception handling, please refer to our [Python SDK documentation](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/appconfiguration/azure-appconfiguration).
+> The code snippets in this quickstart will help you get started with the App Configuration client library for Python. For your application, you should also consider handling exceptions according to your needs. To learn more about exception handling, please refer to our [Python SDK documentation](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/appconfiguration/azure-appconfiguration).
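+For example, here's a minimal sketch of handling a missing key, assuming an `app_config_client` like the one created in the samples below:
+
+```python
+from azure.core.exceptions import ResourceNotFoundError
+
+try:
+    setting = app_config_client.get_configuration_setting(key='TestApp:Settings:Message')
+    print(setting.value)
+except ResourceNotFoundError:
+    # get_configuration_setting raises ResourceNotFoundError for a missing key
+    print('The configuration setting was not found.')
+```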
## Configure your App Configuration connection string
-1. Set an environment variable named **AZURE_APP_CONFIG_CONNECTION_STRING**, and set it to the access key to your App Configuration store. At the command line, run the following command:
+1. Set an environment variable named **AZURE_APPCONFIG_CONNECTION_STRING**, and set it to the connection string of your App Configuration store. At the command line, run the following command:
+
+ ### [Windows command prompt](#tab/windowscommandprompt)
+
+ To build and run the app locally using the Windows command prompt, run the following command and replace `<app-configuration-store-connection-string>` with the connection string of your app configuration store:
```cmd
- setx AZURE_APP_CONFIG_CONNECTION_STRING "connection-string-of-your-app-configuration-store"
+ setx AZURE_APPCONFIG_CONNECTION_STRING "<app-configuration-store-connection-string>"
+ ```
+
+ ### [PowerShell](#tab/powershell)
+
+ If you use Windows PowerShell, run the following command and replace `<app-configuration-store-connection-string>` with the connection string of your app configuration store:
+
+ ```azurepowershell
+ $Env:AZURE_APPCONFIG_CONNECTION_STRING = "<app-configuration-store-connection-string>"
```
+ ### [macOS](#tab/unix)
+
+ If you use macOS, run the following command and replace `<app-configuration-store-connection-string>` with the connection string of your app configuration store:
+
+ ```console
+ export AZURE_APPCONFIG_CONNECTION_STRING='<app-configuration-store-connection-string>'
+ ```
+
+ ### [Linux](#tab/linux)
+
+ If you use Linux, run the following command and replace `<app-configuration-store-connection-string>` with the connection string of your app configuration store:
+
+ ```console
+ export AZURE_APPCONFIG_CONNECTION_STRING='<app-configuration-store-connection-string>'
+ ```
+
+1. Restart the command prompt to allow the change to take effect. Print the value of the environment variable to validate that it's set properly by using the command below.
+
+ ### [Windows command prompt](#tab/windowscommandprompt)
+
+ Using the Windows command prompt, run the following command:
+
+ ```cmd
+ echo %AZURE_APPCONFIG_CONNECTION_STRING%
+ ```
+
+ ### [PowerShell](#tab/powershell)
+ If you use Windows PowerShell, run the following command:
+
+ ```azurepowershell
- $Env:AZURE_APP_CONFIG_CONNECTION_STRING = "connection-string-of-your-app-configuration-store"
+ $Env:AZURE_APPCONFIG_CONNECTION_STRING
```
- If you use macOS or Linux, run the following command:
+ ### [macOS](#tab/unix)
+
+ If you use macOS, run the following command:
```console
- export AZURE_APP_CONFIG_CONNECTION_STRING='connection-string-of-your-app-configuration-store'
+ echo "$AZURE_APPCONFIG_CONNECTION_STRING"
```
-2. Restart the command prompt to allow the change to take effect. Print out the value of the environment variable to validate that it is set properly.
+ ### [Linux](#tab/linux)
+
+ If you use Linux, run the following command:
+
+ ```console
+ echo "$AZURE_APPCONFIG_CONNECTION_STRING"
+ ```
+
## Code samples
The sample code snippets in this section show you how to perform common operatio
> [!NOTE]
> The App Configuration client library refers to a key-value object as `ConfigurationSetting`. Therefore, in this article, the **key-values** in App Configuration store will be referred to as **configuration settings**.
-* [Connect to an App Configuration store](#connect-to-an-app-configuration-store)
-* [Get a configuration setting](#get-a-configuration-setting)
-* [Add a configuration setting](#add-a-configuration-setting)
-* [Get a list of configuration settings](#get-a-list-of-configuration-settings)
-* [Lock a configuration setting](#lock-a-configuration-setting)
-* [Unlock a configuration setting](#unlock-a-configuration-setting)
-* [Update a configuration setting](#update-a-configuration-setting)
-* [Delete a configuration setting](#delete-a-configuration-setting)
+Learn how to:
+
+- [Connect to an App Configuration store](#connect-to-an-app-configuration-store)
+- [Get a configuration setting](#get-a-configuration-setting)
+- [Add a configuration setting](#add-a-configuration-setting)
+- [Get a list of configuration settings](#get-a-list-of-configuration-settings)
+- [Lock a configuration setting](#lock-a-configuration-setting)
+- [Unlock a configuration setting](#unlock-a-configuration-setting)
+- [Update a configuration setting](#update-a-configuration-setting)
+- [Delete a configuration setting](#delete-a-configuration-setting)
### Connect to an App Configuration store

The following code snippet creates an instance of **AzureAppConfigurationClient** using the connection string stored in your environment variables.

```python
- connection_string = os.getenv('AZURE_APP_CONFIG_CONNECTION_STRING')
+ connection_string = os.getenv('AZURE_APPCONFIG_CONNECTION_STRING')
    app_config_client = AzureAppConfigurationClient.from_connection_string(connection_string)
```
The following code snippet retrieves a configuration setting by `key` name.
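A minimal sketch of such a retrieval, assuming the `app_config_client` created in the previous section:

```python
retrieved_config_setting = app_config_client.get_configuration_setting(key='TestApp:Settings:Message')
print("Key: " + retrieved_config_setting.key + ", Value: " + retrieved_config_setting.value)
```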
### Add a configuration setting
-The following code snippet creates a `ConfigurationSetting` object with `key` and `value` fields and invokes the `add_configuration_setting` method.
+The following code snippet creates a `ConfigurationSetting` object with `key` and `value` fields and invokes the `add_configuration_setting` method.
This method will throw an exception if you try to add a configuration setting that already exists in your store. If you want to avoid this exception, the [set_configuration_setting](#update-a-configuration-setting) method can be used instead.

```python
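# A sketch: create the setting object and add it. ConfigurationSetting is imported
# from azure.appconfiguration; the key and value below are this quickstart's examples.
config_setting = ConfigurationSetting(
    key='TestApp:Settings:NewSetting',
    value='New setting value'
)
added_config_setting = app_config_client.add_configuration_setting(config_setting)
print("Key: " + added_config_setting.key + ", Value: " + added_config_setting.value)
```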
The `set_configuration_setting` method can be used to update an existing setting
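A minimal sketch of such an update, reusing the `config_setting` object created above:

```python
config_setting.value = 'Value has been updated!'
updated_config_setting = app_config_client.set_configuration_setting(config_setting)
print("Key: " + updated_config_setting.key + ", Value: " + updated_config_setting.value)
```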
The following code snippet deletes a configuration setting by `key` name.

```python
deleted_config_setting = app_config_client.delete_configuration_setting(key="TestApp:Settings:NewSetting")
print("\nDeleted configuration setting:")
print("Key: " + deleted_config_setting.key + ", Value: " + deleted_config_setting.value)
```
try:
print("Azure App Configuration - Python Quickstart") # Quickstart code goes here
- connection_string = os.getenv('AZURE_APP_CONFIG_CONNECTION_STRING')
+ connection_string = os.getenv('AZURE_APPCONFIG_CONNECTION_STRING')
    app_config_client = AzureAppConfigurationClient.from_connection_string(connection_string)
    retrieved_config_setting = app_config_client.get_configuration_setting(key='TestApp:Settings:Message')
Key: TestApp:Settings:NewSetting, Value: Value has been updated!
## Clean up resources

[!INCLUDE [azure-app-configuration-cleanup](../../includes/azure-app-configuration-cleanup.md)]

## Next steps
In this quickstart, you created a new App Configuration store and learned how to
For additional code samples, visit:

> [!div class="nextstepaction"]
-> [Azure App Configuration client library samples](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/appconfiguration/azure-appconfiguration/samples)
+> [Azure App Configuration client library samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/appconfiguration/azure-appconfiguration/samples)
azure-arc Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/troubleshooting.md
az extension add --name k8s-configuration
> [!NOTE]
> Eventually Azure will stop supporting GitOps with Flux v1, so begin using [Flux v2](./tutorial-use-gitops-flux2.md) as soon as possible.
-To help troubleshoot issues with `sourceControlConfigurations` resource (Flux v1), run these az commands with `--debug` parameter specified:
+To help troubleshoot issues with the `sourceControlConfigurations` resource (Flux v1), run these Azure CLI commands with the `--debug` parameter specified:
```azurecli
az provider show -n Microsoft.KubernetesConfiguration --debug
metadata:
### Flux v2 - General
-To help troubleshoot issues with `fluxConfigurations` resource (Flux v2), run these az commands with `--debug` parameter specified:
+To help troubleshoot issues with the `fluxConfigurations` resource (Flux v2), run these Azure CLI commands with the `--debug` parameter specified:
```azurecli
az provider show -n Microsoft.KubernetesConfiguration --debug
azure-functions Create First Function Vs Code Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-python.md
Before you begin, make sure that you have the following requirements in place:
+ The [Python extension](https://marketplace.visualstudio.com/items?itemName=ms-python.python) for Visual Studio Code. ::: zone pivot="python-mode-configuration"
-+ The [Azure Functions extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) for Visual Studio Code.
++ The [Azure Functions extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) for Visual Studio Code, version 1.8.3 or later. ::: zone-end ::: zone pivot="python-mode-decorators" + The [Azure Functions extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) for Visual Studio Code, version 1.8.1 or later.
azure-functions Functions How To Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-how-to-github-actions.md
To download the publishing profile of your function app:
### Add the GitHub secret
-1. In [GitHub](https://github.com), browse to your repository, select **Settings** > **Secrets** > **Add a new secret**.
+1. In [GitHub](https://github.com/), go to your repository.
- :::image type="content" source="media/functions-how-to-github-actions/add-secret.png" alt-text="Add Secret":::
+1. Select **Security > Secrets and variables > Actions**.
-1. Add a new secret using `AZURE_FUNCTIONAPP_PUBLISH_PROFILE` for **Name**, the content of the publishing profile file for **Value**, and then select **Add secret**.
+1. Select **New repository secret**.
+
+1. Add a new secret with the name `AZURE_FUNCTIONAPP_PUBLISH_PROFILE` and the value set to the contents of the publishing profile file.
+
+1. Select **Add secret**.
GitHub can now authenticate to your function app in Azure.
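If you'd rather script this step, the publishing profile can also be retrieved with the Azure CLI; a sketch, with placeholder app and resource group names:

```azurecli
az functionapp deployment list-publishing-profiles --name <APP_NAME> --resource-group <RESOURCE_GROUP> --xml
```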
azure-functions Functions Reference Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-python.md
As a Python developer, you may also be interested in one of the following articl
| <ul><li>[Python function using Visual Studio Code](./create-first-function-vs-code-python.md?pivots=python-mode-configuration)</li><li>[Python function with terminal/command prompt](./create-first-function-cli-python.md?pivots=python-mode-configuration)</li></ul> | <ul><li>[Developer guide](functions-reference.md)</li><li>[Hosting options](functions-scale.md)</li><li>[Performance&nbsp;considerations](functions-best-practices.md)</li></ul> | <ul><li>[Image classification with PyTorch](machine-learning-pytorch.md)</li><li>[Azure Automation sample](/samples/azure-samples/azure-functions-python-list-resource-groups/azure-functions-python-sample-list-resource-groups/)</li><li>[Machine learning with TensorFlow](functions-machine-learning-tensorflow.md)</li><li>[Browse Python samples](/samples/browse/?products=azure-functions&languages=python)</li></ul> |

::: zone-end

::: zone pivot="python-mode-decorators"
-| Getting started | Concepts|
+| Getting started | Concepts| Samples |
|--|--|--|
-| <ul><li>[Python function using Visual Studio Code](./create-first-function-vs-code-python.md?pivots=python-mode-decorators)</li><li>[Python function with terminal/command prompt](./create-first-function-cli-python.md?pivots=python-mode-decorators)</li></ul> | <ul><li>[Developer guide](functions-reference.md)</li><li>[Hosting options](functions-scale.md)</li><li>[Performance&nbsp;considerations](functions-best-practices.md)</li></ul> |
+| <ul><li>[Python function using Visual Studio Code](./create-first-function-vs-code-python.md?pivots=python-mode-decorators)</li><li>[Python function with terminal/command prompt](./create-first-function-cli-python.md?pivots=python-mode-decorators)</li></ul> | <ul><li>[Developer guide](functions-reference.md)</li><li>[Hosting options](functions-scale.md)</li><li>[Performance&nbsp;considerations](functions-best-practices.md)</li></ul> | <ul><li>[Code examples](functions-bindings-triggers-python.md)</li></ul> |
::: zone-end

> [!NOTE]
def main(req: azure.functions.HttpRequest) -> str:
return f'Hello, {user}!' ```
-At this time, only specific triggers and bindings are supported by the v2 programming model. Supported triggers and bindings are as follows.
-
-| Type | Trigger | Input Binding | Output Binding |
-| | | | |
-| HTTP | x | | |
-| Timer | x | | |
-| Azure Queue Storage | x | | x |
-| Azure Service Bus Topic | x | | x |
-| Azure Service Bus Queue | x | | x |
-| Azure Cosmos DB | x | x | x |
-| Azure Blob Storage | x | x | x |
-| Azure Event Grid | x | | x |
+At this time, only specific triggers and bindings are supported by the v2 programming model. For more information, see [Triggers and inputs](#triggers-and-inputs).
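As an illustration of the decorator-based style, here's a minimal sketch of a v2-model HTTP trigger, with a hypothetical route and function name:

```python
import azure.functions as func

app = func.FunctionApp()

# Routes are declared with decorators; no function.json file is needed.
@app.route(route="hello", auth_level=func.AuthLevel.ANONYMOUS)
def hello(req: func.HttpRequest) -> func.HttpResponse:
    name = req.params.get("name", "world")
    return func.HttpResponse(f"Hello, {name}!")
```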
To learn about known limitations with the v2 model and their workarounds, see [Troubleshoot Python errors in Azure Functions](./recover-python-functions.md?pivots=python-mode-decorators).

::: zone-end
app.register_functions(bp)
```

::: zone-end

## Import behavior

You can import modules in your function code using both absolute and relative references. Based on the folder structure shown above, the following imports work from within the function file *<project_root>\my\_first\_function\\_\_init\_\_.py*:

```python
When the function is invoked, the HTTP request is passed to the function as `req
::: zone pivot="python-mode-decorators" Inputs are divided into two categories in Azure Functions: trigger input and other input. Although they're defined using different decorators, usage is similar in Python code. Connection strings or secrets for trigger and input sources map to values in the `local.settings.json` file when running locally, and the application settings when running in Azure.
-As an example, the following code demonstrates the difference between the two:
+As an example, the following code demonstrates how to define a Blob storage input binding:
```json
// local.settings.json
As an example, the following code demonstrates the difference between the two:
"IsEncrypted": false, "Values": { "FUNCTIONS_WORKER_RUNTIME": "python",
- "AzureWebJobsStorage": "<azure-storage-connection-string>"
+ "AzureWebJobsStorage": "<azure-storage-connection-string>",
+ "AzureWebJobsFeatureFlags": "EnableWorkerIndexing"
  }
}
```
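A sketch of the corresponding v2-model binding code, with hypothetical container and blob names; note that `connection` names the app setting above (`AzureWebJobsStorage`), not the connection string itself:

```python
import azure.functions as func

app = func.FunctionApp()

@app.route(route="file")
@app.blob_input(arg_name="obj", path="sample-container/sample.txt", connection="AzureWebJobsStorage")
def read_file(req: func.HttpRequest, obj: func.InputStream) -> func.HttpResponse:
    # The blob contents are handed to the function as an InputStream.
    return func.HttpResponse(obj.read())
```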
At this time, only specific triggers and bindings are supported by the v2 progra
| Type | Trigger | Input Binding | Output Binding |
| | | | |
-| HTTP | x | | |
-| Timer | x | | |
-| Azure Queue Storage | x | | x |
-| Azure Service Bus topic | x | | x |
-| Azure Service Bus queue | x | | x |
-| Azure Cosmos DB | x | x | x |
-| Azure Blob Storage | x | x | x |
-| Azure Event Grid | x | | x |
+| [HTTP](functions-bindings-triggers-python.md#http-trigger) | x | | |
+| [Timer](functions-bindings-triggers-python.md#timer-trigger) | x | | |
+| [Azure Queue Storage](functions-bindings-triggers-python.md#azure-queue-storage-trigger) | x | | x |
+| [Azure Service Bus topic](functions-bindings-triggers-python.md#azure-service-bus-topic-trigger) | x | | x |
+| [Azure Service Bus queue](functions-bindings-triggers-python.md#azure-service-bus-queue-trigger) | x | | x |
+| [Azure Cosmos DB](functions-bindings-triggers-python.md#azure-cosmos-db-trigger) | x | x | x |
+| [Azure Blob Storage](functions-bindings-triggers-python.md#blob-trigger) | x | x | x |
+| [Azure Event Hubs](functions-bindings-triggers-python.md#azure-eventhub-trigger) | x | | x |
-To learn more about defining triggers and bindings in the v2 model, see this [documentation](https://github.com/Azure/azure-functions-python-library/blob/dev/docs/ProgModelSpec.pyi).
+For more examples, see [Python V2 model Azure Functions triggers and bindings (preview)](functions-bindings-triggers-python.md).
::: zone-end
The host.json file must also be updated to include an HTTP `routePrefix`, as sho
"extensionBundle": { "id": "Microsoft.Azure.Functions.ExtensionBundle",
- "version": "[2.*, 3.0.0)"
+ "version": "[3.*, 4.0.0)"
}, "extensions": {
For a full example, see [Using Flask Framework with Azure Functions](/samples/az
::: zone-end

::: zone pivot="python-mode-decorators"
-You can use ASGI and WSGI-compatible frameworks such as Flask and FastAPI with your HTTP-triggered Python functions, which is shown in the following example:
+You can use ASGI and WSGI-compatible frameworks such as Flask and FastAPI with your HTTP-triggered Python functions. You must first update the host.json file to include an HTTP `routePrefix`, as shown in the following example:
+
+```json
+{
+ "version": "2.0",
+ "logging":
+ {
+ "applicationInsights":
+ {
+ "samplingSettings":
+ {
+ "isEnabled": true,
+ "excludedTypes": "Request"
+ }
+ }
+ },
+ "extensionBundle":
+ {
+ "id": "Microsoft.Azure.Functions.ExtensionBundle",
+ "version": "[2.*, 3.0.0)"
+ },
+ "extensions":
+ {
+ "http":
+ {
+ "routePrefix": ""
+ }
+ }
+}
+```
+
+The framework code looks like the following example:
# [ASGI](#tab/asgi)
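As a minimal sketch of the ASGI case, assuming FastAPI is installed and a hypothetical route:

```python
import azure.functions as func
from fastapi import FastAPI

fast_app = FastAPI()

@fast_app.get("/hello")
async def hello():
    return {"message": "Hello from FastAPI on Azure Functions"}

# AsgiFunctionApp wraps the ASGI app so the Functions host forwards HTTP requests to it.
app = func.AsgiFunctionApp(app=fast_app, http_auth_level=func.AuthLevel.ANONYMOUS)
```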
For a list of preinstalled system libraries in Python worker Docker images, see
| Functions runtime | Debian version | Python versions |
||||
-| Version 3.x | Buster | [Python 3.6](https://github.com/Azure/azure-functions-docker/blob/master/host/3.0/buster/amd64/python/python36/python36.Dockerfile)<br/>[Python 3.7](https://github.com/Azure/azure-functions-docker/blob/master/host/3.0/buster/amd64/python/python37/python37.Dockerfile)<br />[Python 3.8](https://github.com/Azure/azure-functions-docker/blob/master/host/3.0/buster/amd64/python/python38/python38.Dockerfile)<br/> [Python 3.9](https://github.com/Azure/azure-functions-docker/blob/master/host/3.0/buster/amd64/python/python39/python39.Dockerfile)|
+| Version 3.x | Buster | [Python 3.7](https://github.com/Azure/azure-functions-docker/blob/master/host/3.0/buster/amd64/python/python37/python37.Dockerfile)<br />[Python 3.8](https://github.com/Azure/azure-functions-docker/blob/master/host/3.0/buster/amd64/python/python38/python38.Dockerfile)<br/> [Python 3.9](https://github.com/Azure/azure-functions-docker/blob/master/host/3.0/buster/amd64/python/python39/python39.Dockerfile)|
## Python worker extensions
For more information, see the following resources:
[HttpRequest]: /python/api/azure-functions/azure.functions.httprequest
-[HttpResponse]: /python/api/azure-functions/azure.functions.httpresponse
+[HttpResponse]: /python/api/azure-functions/azure.functions.httpresponse
azure-functions Language Support Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/language-support-policy.md
There are few exceptions to the retirement policy outlined above. Here is a list
|Language Versions |EOL Date |Retirement Date|
|--|--|-|
|Node 12|30 Apr 2022|13 December 2022|
-|Python 3.6 |23 December 2021|30 September 2022|
## Language version support timeline
azure-functions Recover Python Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/recover-python-functions.md
The following is a list of troubleshooting sections for common issues in Python
* [Python exited with code 139](#troubleshoot-python-exited-with-code-139)
* [Troubleshoot errors with Protocol Buffers](#troubleshoot-errors-with-protocol-buffers)

::: zone-end
+
::: zone pivot="python-mode-decorators"
+Specifically with the v2 model, here are some known issues and their workarounds:
+
+* [Multiple Python workers not supported](#multiple-python-workers-not-supported)
+* [Could not load file or assembly](#troubleshoot-could-not-load-file-or-assembly)
+* [Unable to resolve the Azure Storage connection named Storage](#troubleshoot-unable-to-resolve-the-azure-storage-connection)
+* [Issues with deployment](#issue-with-deployment)
+
+General troubleshooting guides for Python Functions include:
+
* [ModuleNotFoundError and ImportError](#troubleshoot-modulenotfounderror)
* [Cannot import 'cygrpc'](#troubleshoot-cannot-import-cygrpc)
* [Python exited with code 137](#troubleshoot-python-exited-with-code-137)
* [Python exited with code 139](#troubleshoot-python-exited-with-code-139)
* [Troubleshoot errors with Protocol Buffers](#troubleshoot-errors-with-protocol-buffers)
-* [Multiple Python workers not supported](#multiple-python-workers-not-supported)
-* [Could not load file or assembly](#troubleshoot-could-not-load-file-or-assembly)
-* [Unable to resolve the Azure Storage connection named Storage](#troubleshoot-unable-to-resolve-the-azure-storage-connection)
-* [Issues with deployment](#issue-with-deployment)
::: zone-end
+
## Troubleshoot ModuleNotFoundError

This section helps you troubleshoot module-related errors in your Python function app. These errors typically result in the following Azure Functions error message:
azure-government Documentation Government Impact Level 5 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-impact-level-5.md
recommendations: false Previously updated : 10/21/2022 Last updated : 10/30/2022 # Isolation guidelines for Impact Level 5 workloads
Virtual machine scale sets aren't currently supported on Azure Dedicated Host. B
> [!IMPORTANT]
> As new hardware generations become available, some VM types might require reconfiguration (scale up or migration to a new VM SKU) to ensure they remain on properly dedicated hardware. For more information, see **[Virtual machine isolation in Azure](../virtual-machines/isolation.md).**
-#### Disk encryption options
+#### Disk encryption for virtual machines
-There are several types of encryption available for your managed disks supporting virtual machines and virtual machine scale sets:
+You can encrypt the storage that supports these virtual machines in one of two ways to support necessary encryption standards.
-- Azure Disk Encryption
-- Server-side encryption of Azure Disk Storage
-- Encryption at host
-- Confidential disk encryption
+- Use Azure Disk Encryption to encrypt the drives by using dm-crypt (Linux) or BitLocker (Windows):
+ - [Enable Azure Disk Encryption for Linux](../virtual-machines/linux/disk-encryption-overview.md)
+ - [Enable Azure Disk Encryption for Windows](../virtual-machines/windows/disk-encryption-overview.md)
+- Use Azure Storage service encryption for storage accounts with your own key to encrypt the storage account that holds the disks:
+ - [Storage service encryption with customer-managed keys](../storage/common/customer-managed-keys-configure-key-vault.md)
-All these options enable you to have sole control over encryption keys. For more information, see [Overview of managed disk encryption options](../virtual-machines/disk-encryption-overview.md).
+#### Disk encryption for virtual machine scale sets
+You can encrypt disks that support virtual machine scale sets by using Azure Disk Encryption:
+
+- [Encrypt disks in virtual machine scale sets](../virtual-machine-scale-sets/disk-encryption-key-vault.md)
## Containers
azure-monitor Action Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/action-groups.md
You may have a limited number of Logic Apps actions per action group.
### Secure webhook
-When you use a secure webhook action, you can use Azure AD to secure the connection between your action group and your protected web API, which is your webhook endpoint. For an overview of Azure AD applications and service principals, see [Microsoft identity platform (v2.0) overview](../../active-directory/develop/v2-overview.md). Follow these steps to take advantage of the secure webhook functionality.
+When you use a secure webhook action, you must use Azure AD to secure the connection between your action group and your protected web API, which is your webhook endpoint. For an overview of Azure AD applications and service principals, see [Microsoft identity platform (v2.0) overview](../../active-directory/develop/v2-overview.md). Follow these steps to take advantage of the secure webhook functionality.
+
+> [!NOTE]
+>
+> Basic authentication is not supported for the secure webhook action. To use basic authentication, you must use the webhook action instead.
> [!NOTE] >
azure-monitor Resource Manager Alerts Metric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/resource-manager-alerts-metric.md
Previously updated : 04/27/2022 Last updated : 10/31/2022
resource metricAlert 'Microsoft.Insights/metricAlerts@2018-03-01' = {
}
```
+> [!NOTE]
+>
+> Using "All" as a dimension value is equivalent to selecting "\*" (all current and future values).
++
## Multiple dimensions, dynamic thresholds

A single dynamic thresholds alert rule can create tailored thresholds for hundreds of metric time series (even different types) at a time, which results in fewer alert rules to manage. The following sample creates a dynamic thresholds metric alert rule on dimensional metrics.
azure-monitor Api Custom Events Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/api-custom-events-metrics.md
Title: Application Insights API for custom events and metrics | Microsoft Docs description: Insert a few lines of code in your device or desktop app, webpage, or service to track usage and diagnose issues. Previously updated : 10/24/2022 Last updated : 10/31/2022 ms.devlang: csharp, java, javascript, vb
To determine how long data is kept, see [Data retention and privacy](./data-rete
## Frequently asked questions
+### Why am I missing telemetry data?
+
+Both [TelemetryChannels](telemetry-channels.md#what-are-telemetry-channels) will lose buffered telemetry if it isn't flushed before an application shuts down.
+
+To avoid data loss, flush the TelemetryClient when an application is shutting down.
+
+For more information, see [Flushing data](#flushing-data).
+
### What exceptions might `Track_()` calls throw?

None. You don't need to wrap them in try-catch clauses. If the SDK encounters problems, it will log messages in the debug console output and, if the messages get through, in Diagnostic Search.
azure-monitor Azure Ad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-ad-authentication.md
Application Insights .NET SDK supports the credential classes provided by [Azure
Below is an example of manually creating and configuring a `TelemetryConfiguration` using .NET:

```csharp
-var config = new TelemetryConfiguration
-{
- ConnectionString = "InstrumentationKey=00000000-0000-0000-0000-000000000000;IngestionEndpoint=https://xxxx.applicationinsights.azure.com/"
-}
+TelemetryConfiguration.Active.ConnectionString = "InstrumentationKey=00000000-0000-0000-0000-000000000000;IngestionEndpoint=https://xxxx.applicationinsights.azure.com/";
var credential = new DefaultAzureCredential();
-config.SetAzureTokenCredential(credential);
+TelemetryConfiguration.Active.SetAzureTokenCredential(credential);
```

Below is an example of configuring the `TelemetryConfiguration` using .NET Core:
azure-monitor Azure Vm Vmss Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-vm-vmss-apps.md
Title: Monitor performance on Azure VMs - Azure Application Insights description: Application performance monitoring for Azure VM and Azure virtual machine scale sets. Chart load and response time, dependency information, and set alerts on performance. Previously updated : 08/19/2022 Last updated : 10/31/2022 ms.devlang: csharp, java, javascript, python
Enabling monitoring for your .NET or Java based web applications running on [Azu
This article walks you through enabling Application Insights monitoring using the Application Insights Agent and provides preliminary guidance for automating the process for large-scale deployments.

> [!IMPORTANT]
-> **Java** based applications running on Azure VMs and VMSS are monitored with **[Application Insights Java 3.0 agent](./java-in-process-agent.md)**, which is generally available.
+> **Java** based applications running on Azure VMs and VMSS are monitored with the **[Application Insights Java 3.0 agent](./java-in-process-agent.md)**, which is generally available.
> [!IMPORTANT]
> Azure Application Insights Agent for ASP.NET and ASP.NET Core applications running on **Azure VMs and VMSS** is currently in public preview. For monitoring your ASP.NET applications running **on-premises**, use the [Azure Application Insights Agent for on-premises servers](./status-monitor-v2-overview.md), which is generally available and fully supported.
For a complete list of supported auto-instrumentation scenarios, see [Supported
> [!NOTE]
> Auto-instrumentation is available for ASP.NET, ASP.NET Core IIS-hosted applications, and Java. Use an SDK to instrument Node.js and Python applications hosted on Azure virtual machines and virtual machine scale sets.
-### [.NET](#tab/net)
+### [.NET Framework](#tab/net)
-The Application Insights Agent auto-collects the same dependency signals out-of-the-box as the .NET SDK. See [Dependency auto-collection](./auto-collect-dependencies.md#net) to learn more.
+The Application Insights Agent auto-collects the same dependency signals out-of-the-box as the SDK. See [Dependency auto-collection](./auto-collect-dependencies.md#net) to learn more.
+
+### [.NET Core / .NET](#tab/core)
+
+The Application Insights Agent auto-collects the same dependency signals out-of-the-box as the SDK. See [Dependency auto-collection](./auto-collect-dependencies.md#net) to learn more.
### [Java](#tab/Java)
Get-AzResource -ResourceId /subscriptions/<mySubscriptionId>/resourceGroups/<myR
Find troubleshooting tips for Application Insights Monitoring Agent Extension for .NET applications running on Azure virtual machines and virtual machine scale sets.

> [!NOTE]
-> .NET Core, Node.js, and Python applications are only supported on Azure virtual machines and Azure virtual machine scale sets via manual SDK based instrumentation and therefore the steps below do not apply to these scenarios.
+> The steps below do not apply to Node.js and Python applications, which require SDK instrumentation.
Extension execution output is logged to files found in the following directories:

```Windows
azure-monitor Transaction Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/transaction-diagnostics.md
Title: Application Insights transaction diagnostics | Microsoft Docs description: This article explains Application Insights end-to-end transaction diagnostics. Previously updated : 01/19/2018 Last updated : 10/31/2022
This view has four key parts: a results list, a cross-component transaction char
This chart provides a timeline with horizontal bars during requests and dependencies across components. Any exceptions that are collected are also marked on the timeline.
-* The top row on this chart represents the entry point. It's the incoming request to the first component called in this transaction. The duration is the total time taken for the transaction to complete.
-* Any calls to external dependencies are simple noncollapsible rows, with icons that represent the dependency type.
-* Calls to other components are collapsible rows. Each row corresponds to a specific operation invoked at the component.
-* By default, the request, dependency, or exception that you selected appears on the right side.
-* Select any row to see its [details on the right](#details-of-the-selected-telemetry).
+1. The top row on this chart represents the entry point. It's the incoming request to the first component called in this transaction. The duration is the total time taken for the transaction to complete.
+1. Any calls to external dependencies are simple noncollapsible rows, with icons that represent the dependency type.
+1. Calls to other components are collapsible rows. Each row corresponds to a specific operation invoked at the component.
+1. By default, the request, dependency, or exception that you selected appears on the right side. Select any row to see its [details](#details-of-the-selected-telemetry).
> [!NOTE]
> Calls to other components have two rows. One row represents the outbound call (dependency) from the caller component. The other row corresponds to the inbound request at the called component. The leading icon and distinct styling of the duration bars help differentiate between them.
If all calls were instrumented, in process is the likely root cause for the time
### What if I see the message ***Error retrieving data*** while navigating Application Insights in the Azure portal?
-This error indicates that the browser was unable to call into a required API or the API returned a failure response. To troubleshoot the behavior, open a browser [InPrivate window](https://support.microsoft.com/microsoft-edge/browse-inprivate-in-microsoft-edge-cd2c9a48-0bc4-b98e-5e46-ac40c84e27e2) and [disable any browser extensions](https://support.microsoft.com/microsoft-edge/add-turn-off-or-remove-extensions-in-microsoft-edge-9c0ec68c-2fbc-2f2c-9ff0-bdc76f46b026) that are running, then identify if you can still reproduce the portal behavior. If the portal error still occurs, try testing with other browsers, or other machines, investigate DNS or other network related issues from the client machine where the API calls are failing. If the portal error persists and requires further investigations, then [collect a browser network trace](https://learn.microsoft.com/azure/azure-portal/capture-browser-trace) while you reproduce the unexpected portal behavior and open a support case from the Azure portal.
+This error indicates that the browser couldn't call a required API, or the API returned a failure response. To troubleshoot the behavior, open a browser [InPrivate window](https://support.microsoft.com/microsoft-edge/browse-inprivate-in-microsoft-edge-cd2c9a48-0bc4-b98e-5e46-ac40c84e27e2), [disable any browser extensions](https://support.microsoft.com/microsoft-edge/add-turn-off-or-remove-extensions-in-microsoft-edge-9c0ec68c-2fbc-2f2c-9ff0-bdc76f46b026) that are running, and then check whether you can still reproduce the portal behavior. If the portal error still occurs, test with other browsers or machines, and investigate DNS or other network-related issues from the client machine where the API calls are failing. If the portal error persists and requires further investigation, [collect a browser network trace](../../azure-portal/capture-browser-trace.md#capture-a-browser-trace-for-troubleshooting) while you reproduce the unexpected portal behavior, and then open a support case from the Azure portal.
azure-monitor Container Insights Prometheus Metrics Addon https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-prometheus-metrics-addon.md
The output will be similar to the following:
- Register the `AKS-PrometheusAddonPreview` feature flag in the Azure Kubernetes cluster's subscription with the following command in Azure CLI: `az feature register --namespace Microsoft.ContainerService --name AKS-PrometheusAddonPreview`.
- The Azure Monitor workspace and Azure Managed Grafana workspace must already be created.
-- The template needs to be deployed in the same resource group as the cluster.
+- The template needs to be deployed in the Azure Managed Grafana workspace's resource group.
### Retrieve list of Grafana integrations

If you're using an existing Azure Managed Grafana instance that has already been linked to an Azure Monitor workspace, you need the list of Grafana integrations. Open the **Overview** page for the Azure Managed Grafana instance and select the JSON view. Copy the value of the `azureMonitorWorkspaceIntegrations` field. If it doesn't exist, then the instance hasn't been linked with any Azure Monitor workspace.
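The field sits under `grafanaIntegrations` in the resource JSON and looks similar to the following sketch, with placeholder resource IDs:

```json
"grafanaIntegrations": {
    "azureMonitorWorkspaceIntegrations": [
        {
            "azureMonitorWorkspaceResourceId": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/microsoft.monitor/accounts/<workspace-name>"
        }
    ]
}
```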
If you're using an existing Azure Managed Grafana instance that already has been
```

### Retrieve System Assigned identity for Grafana resource
-If you're using an existing Azure Managed Grafana instance that already has been linked to an Azure Monitor workspace then you need the list of Grafana integrations. Open the **Overview** page for the Azure Managed Grafana instance and select the JSON view. Copy the value of the `principalId` field for the `SystemAssigned` identity.
+The system-assigned identity for the Azure Managed Grafana resource is also required. To retrieve it, open the **Overview** page for the Azure Managed Grafana instance and select the JSON view. Copy the value of the `principalId` field for the `SystemAssigned` identity.
```json "identity": {
If you're using an existing Azure Managed Grafana instance that already has been
"type": "SystemAssigned" }, ```-
-Assign the `Monitoring Data Reader` role to the Grafana System Assigned Identity. This is the principalId on the Azure Monitor Workspace resource. This will let the Azure Managed Grafana resource read data from the Azure Monitor Workspace and is a requirement for viewing the metrics.
+Assign the `Monitoring Data Reader` built-in role on the Azure Monitor workspace to the Grafana system-assigned identity: take the principal ID that you retrieved from the Azure Managed Grafana resource, open the **Access control (IAM)** blade for the Azure Monitor workspace, and assign the `Monitoring Data Reader` role to that principal ID (the system-assigned managed identity of the Azure Managed Grafana resource). This role assignment lets the Azure Managed Grafana resource read data from the Azure Monitor workspace and is required for viewing the metrics.
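If you prefer to script the assignment, an equivalent Azure CLI call might look like the following sketch, with placeholder values:

```azurecli
az role assignment create --assignee <grafana-system-assigned-principal-id> --role "Monitoring Data Reader" --scope <azure-monitor-workspace-resource-id>
```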
### Download and edit template and parameter file
Assign the `Monitoring Data Reader` role to the Grafana System Assigned Identity
}, { "azureMonitorWorkspaceResourceId": "full_resource_id_2"
- }
+ },
{
- "azureMonitorWorkspaceResourceId": "[parameters('azureMonitorWorkspaceResourceId')]"
+ "azureMonitorWorkspaceResourceId": "[parameters('azureMonitorWorkspaceResourceId')]"
} ] } } ````
+ For example, in the preceding code snippet, `full_resource_id_1` and `full_resource_id_2` were already present on the Azure Managed Grafana resource, and we add them manually to the ARM template. The final `azureMonitorWorkspaceResourceId` entry already exists in the template and links to the Azure Monitor workspace resource ID provided in the parameters file. You don't have to replace `full_resource_id_1`, `full_resource_id_2`, or any other resource IDs if no integrations were found in the retrieval step.
### Deploy template
ama-metrics-ksm-5fcf8dffcd 1 1 1 11h
## Uninstall metrics addon
-Currently, Azure CLI is the only option to remove the metrics addon and stop sending Prometheus metrics to Azure Monitor managed service for Prometheus. The following command removes the agent from the cluster nodes and deletes the recording rules created for the data being collected from the cluster, it doesn't remove the DCE, DCR, or the data already collected and stored in your Azure Monitor workspace.
+
+Currently, Azure CLI is the only option to remove the metrics addon and stop sending Prometheus metrics to Azure Monitor managed service for Prometheus.
+The aks-preview extension needs to be installed using the command `az extension add --name aks-preview`. For more information on how to install a CLI extension, see [Use and manage extensions with the Azure CLI](/azure/azure-cli-extensions-overview). The following command removes the agent from the cluster nodes and deletes the recording rules created for the data being collected from the cluster. It doesn't remove the DCE, DCR, or the data already collected and stored in your Azure Monitor workspace.
```azurecli
az aks update --disable-azuremonitormetrics -n <cluster-name> -g <cluster-resource-group>
azure-monitor Diagnostic Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/diagnostic-settings.md
Each Azure resource requires its own diagnostic setting, which defines the follo
A single diagnostic setting can define no more than one of each of the destinations. If you want to send data to more than one of a particular destination type (for example, two different Log Analytics workspaces), create multiple settings. Each resource can have up to five diagnostic settings.

> [!WARNING]
-> If you need to delete a resource, you should first delete its diagnostic settings. Otherwise, if you recreate this resource using the same name, the previous diagnostic settings will be included with the new resource. This will resume the collection of resource logs for the new resource as defined in a diagnostic setting and send the applicable metric and log data to the previously configured destination.
+> If you need to delete a resource, you should first delete its diagnostic settings. Otherwise, if you recreate the resource, its diagnostic settings could be included with the new resource, depending on the resource configuration. If they are, this resumes the collection of resource logs as defined in the diagnostic setting and sends the applicable metric and log data to the previously configured destination.
+>
>Also, it's a good practice to delete the diagnostic settings for a resource you're going to delete and don't plan to use again, to keep your environment clean.
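As an example, you can list a resource's diagnostic settings and delete them with the Azure CLI before removing the resource itself; a sketch, with placeholder values:

```azurecli
az monitor diagnostic-settings list --resource <resource-id>
az monitor diagnostic-settings delete --name <setting-name> --resource <resource-id>
```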
The following video walks you through routing resource platform logs with diagnostic settings. The video was done at an earlier time. Be aware of the following changes:
azure-netapp-files Understand Guidelines Active Directory Domain Service Site https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/understand-guidelines-active-directory-domain-service-site.md
na Previously updated : 10/25/2022 Last updated : 10/31/2022 # Understand guidelines for Active Directory Domain Services site design and planning for Azure NetApp Files
The required network ports are as follows:
*DNS running on AD DS domain controller
-### Network requirements
+### DNS requirements
Azure NetApp Files SMB, dual-protocol, and Kerberos NFSv4.1 volumes require reliable access to Domain Name System (DNS) services and up-to-date DNS records. Poor network connectivity between Azure NetApp Files and DNS servers can cause client access interruptions or client timeouts. Incomplete or incorrect DNS records for AD DS or Azure NetApp Files can cause client access interruptions or client timeouts.
azure-netapp-files Use Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/use-availability-zones.md
na Previously updated : 10/25/2022 Last updated : 10/31/2022
-# Use availability zones for high availability in Azure NetApp Files
+# Use availability zones for high availability in Azure NetApp Files (preview)
Azure [availability zones](../availability-zones/az-overview.md#availability-zones) are physically separate locations within each supporting Azure region that are tolerant to local failures. Failures can range from software and hardware failures to events such as earthquakes, floods, and fires. Tolerance to failures is achieved because of redundancy and logical isolation of Azure services. To ensure resiliency, a minimum of three separate availability zones are present in all [availability zone-enabled regions](../availability-zones/az-overview.md#azure-regions-with-availability-zones).
You can co-locate your compute, storage, networking, and data resources across a
Latency is subject to availability zone latency for within availability zone access and the regional latency envelope for cross-availability zone access.
+>[!IMPORTANT]
+>Availability zone volume placement in Azure NetApp Files is currently in preview. Refer to [Manage availability zone volume placement](manage-availability-zone-volume-placement.md#register-the-feature) for details on registering the feature.
+
## Azure regions with availability zones

For a list of regions that currently support availability zones, refer to [Azure regions with availability zones](../availability-zones/az-overview.md#azure-regions-with-availability-zones).
azure-resource-manager Deploy Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deploy-github-actions.md
Create secrets for your Azure credentials, resource group, and subscriptions.
1. In [GitHub](https://github.com/), navigate to your repository.
-1. Select **Settings > Secrets > New secret**.
+1. Select **Security > Secrets and variables > Actions > New repository secret**.
1. Paste the entire JSON output from the Azure CLI command into the secret's value field. Name the secret `AZURE_CREDENTIALS`.
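The JSON pasted here typically comes from creating a service principal scoped to your resource group; a sketch, with placeholder names:

```azurecli
az ad sp create-for-rbac --name "myBicepDeploy" --role contributor --scopes /subscriptions/<subscription-id>/resourceGroups/<resource-group> --sdk-auth
```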
azure-resource-manager Deploy Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deploy-github-actions.md
The file has two sections:
## Generate deployment credentials
-# [Service principal](#tab/userlevel)
-You can create a [service principal](../../active-directory/develop/app-objects-and-service-principals.md#service-principal-object) with the [az ad sp create-for-rbac](/cli/azure/ad/sp#az-ad-sp-create-for-rbac) command in the [Azure CLI](/cli/azure/). Run this command with [Azure Cloud Shell](https://shell.azure.com/) in the Azure portal or by selecting the **Try it** button.
-
-Create a resource group if you do not already have one.
-
-```azurecli-interactive
- az group create -n {MyResourceGroup} -l {location}
-```
-
-Replace the placeholder `myApp` with the name of your application.
-
-```azurecli-interactive
- az ad sp create-for-rbac --name {myApp} --role contributor --scopes /subscriptions/{subscription-id}/resourceGroups/{MyResourceGroup} --sdk-auth
-```
-
-In the example above, replace the placeholders with your subscription ID and resource group name. The output is a JSON object with the role assignment credentials that provide access to your App Service app similar to below. Copy this JSON object for later. You will only need the sections with the `clientId`, `clientSecret`, `subscriptionId`, and `tenantId` values.
-
-```output
- {
- "clientId": "<GUID>",
- "clientSecret": "<GUID>",
- "subscriptionId": "<GUID>",
- "tenantId": "<GUID>",
- (...)
- }
-```
-
-> [!IMPORTANT]
-> It is always a good practice to grant minimum access. The scope in the previous example is limited to the resource group.
-
-# [OpenID Connect](#tab/openid)
--
-OpenID Connect is an authentication method that uses short-lived tokens. Setting up [OpenID Connect with GitHub Actions](https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/about-security-hardening-with-openid-connect) is more complex process that offers hardened security.
-
-1. If you do not have an existing application, register a [new Active Directory application and service principal that can access resources](../../active-directory/develop/howto-create-service-principal-portal.md). Create the Active Directory application.
-
- ```azurecli-interactive
- az ad app create --display-name myApp
- ```
-
- This command will output JSON with an `appId` that is your `client-id`. Save the value to use as the `AZURE_CLIENT_ID` GitHub secret later.
-
- You'll use the `objectId` value when creating federated credentials with Graph API and reference it as the `APPLICATION-OBJECT-ID`.
-
-1. Create a service principal. Replace the `$appID` with the appId from your JSON output.
-
- This command generates JSON output with a different `objectId` and will be used in the next step. The new `objectId` is the `assignee-object-id`.
-
- Copy the `appOwnerTenantId` to use as a GitHub secret for `AZURE_TENANT_ID` later.
-
- ```azurecli-interactive
- az ad sp create --id $appId
- ```
-
-1. Create a new role assignment by subscription and object. By default, the role assignment will be tied to your default subscription. Replace `$subscriptionId` with your subscription ID, `$resourceGroupName` with your resource group name, and `$assigneeObjectId` with the generated `assignee-object-id`. Learn [how to manage Azure subscriptions with the Azure CLI](/cli/azure/manage-azure-subscriptions-azure-cli).
-
- ```azurecli-interactive
- az role assignment create --role contributor --subscription $subscriptionId --assignee-object-id $assigneeObjectId --assignee-principal-type ServicePrincipal --scopes /subscriptions/$subscriptionId/resourceGroups/$resourceGroupName/providers/Microsoft.Web/sites/
- ```
-
-1. Run the following command to [create a new federated identity credential](/graph/api/application-post-federatedidentitycredentials?view=graph-rest-beta&preserve-view=true) for your active directory application.
-
- * Replace `APPLICATION-OBJECT-ID` with the **objectId (generated while creating app)** for your Active Directory application.
- * Set a value for `CREDENTIAL-NAME` to reference later.
- * Set the `subject`. The value of this is defined by GitHub depending on your workflow:
- * Jobs in your GitHub Actions environment: `repo:< Organization/Repository >:environment:< Name >`
- * For Jobs not tied to an environment, include the ref path for branch/tag based on the ref path used for triggering the workflow: `repo:< Organization/Repository >:ref:< ref path>`. For example, `repo:n-username/ node_express:ref:refs/heads/my-branch` or `repo:n-username/ node_express:ref:refs/tags/my-tag`.
- * For workflows triggered by a pull request event: `repo:< Organization/Repository >:pull_request`.
-
- ```azurecli
- az rest --method POST --uri 'https://graph.microsoft.com/beta/applications/<APPLICATION-OBJECT-ID>/federatedIdentityCredentials' --body '{"name":"<CREDENTIAL-NAME>","issuer":"https://token.actions.githubusercontent.com","subject":"repo:organization/repository:ref:refs/heads/main","description":"Testing","audiences":["api://AzureADTokenExchange"]}'
- ```
-
- To learn how to create a Create an active directory application, service principal, and federated credentials in Azure portal, see [Connect GitHub and Azure](/azure/developer/github/connect-from-azure#use-the-azure-login-action-with-openid-connect).
-
-- ## Configure the GitHub secrets
-# [Service principal](#tab/userlevel)
-
-You need to create secrets for your Azure credentials, resource group, and subscriptions.
-
-1. In [GitHub](https://github.com/), browse your repository.
-
-1. Select **Settings > Secrets > New secret**.
-
-1. Paste the entire JSON output from the Azure CLI command into the secret's value field. Give the secret the name `AZURE_CREDENTIALS`.
-1. Create another secret named `AZURE_RG`. Add the name of your resource group to the secret's value field (example: `myResourceGroup`).
-
-1. Create an additional secret named `AZURE_SUBSCRIPTION`. Add your subscription ID to the secret's value field (example: `90fd3f9d-4c61-432d-99ba-1273f236afa2`).
-
-# [OpenID Connect](#tab/openid)
-
-You need to provide your application's **Client ID**, **Tenant ID**, and **Subscription ID** to the login action. These values can either be provided directly in the workflow or can be stored in GitHub secrets and referenced in your workflow. Saving the values as GitHub secrets is the more secure option.
-
-1. Open your GitHub repository and go to **Settings**.
-
-1. Select **Settings > Secrets > New secret**.
-
-1. Create secrets for `AZURE_CLIENT_ID`, `AZURE_TENANT_ID`, and `AZURE_SUBSCRIPTION_ID`. Use these values from your Active Directory application for your GitHub secrets:
-
- |GitHub Secret | Active Directory Application |
- |||
- |AZURE_CLIENT_ID | Application (client) ID |
- |AZURE_TENANT_ID | Directory (tenant) ID |
- |AZURE_SUBSCRIPTION_ID | Subscription ID |
-
-1. Save each secret by selecting **Add secret**.
--
## Add Resource Manager template

Add a Resource Manager template to your GitHub repository. This template creates a storage account.
azure-vmware Attach Azure Netapp Files To Azure Vmware Solution Hosts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md
There are some important best practices to follow for optimal performance of NFS
- Create multiple datastores of 4-TB size for better performance. The default limit is 64, but it can be increased up to a maximum of 256 by submitting a support ticket. To submit a support ticket, go to [Create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md).
- Work with your Microsoft representative to ensure that the Azure VMware Solution private cloud and the Azure NetApp Files volumes are deployed within the same [Availability Zone](../availability-zones/az-overview.md#availability-zones).
+> [!IMPORTANT]
>Changing the Azure NetApp Files volume tier after creating the datastore will result in unexpected behavior in the portal and API due to metadata mismatch. Set the performance tier of the Azure NetApp Files volume when you create the datastore. If you need to change the tier at run time, detach the datastore, change the performance tier of the volume, and then attach the datastore again. We're working on improvements to make this seamless.
+
## Attach an Azure NetApp Files volume to your private cloud

### [Portal](#tab/azure-portal)
azure-vmware Configure Alerts For Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-alerts-for-azure-vmware-solution.md
description: Learn how to use alerts to receive notifications. Also learn how to
Previously updated : 07/23/2021 Last updated : 10/26/2022 # Configure Azure Alerts in Azure VMware Solution
azure-vmware Deploy Disaster Recovery Using Jetstream https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-disaster-recovery-using-jetstream.md
Azure VMware Solution supports the installation of JetStream using either static
| **Datastore** | Name of the datastore where you'll deploy the JetStream MSA. |
| **VMName** | Name of JetStream MSA VM, for example, **jetstreamServer**. |
| **Cluster** | Name of the Azure VMware Solution private cluster where the JetStream MSA is deployed, for example, **Cluster-1**. |
- | **Netmask** | Netmask of the MSA to be deployed, for example, **22** or **24**. |
+ | **Netmask** | Netmask of the MSA to be deployed, for example, **255.255.255.0**. |
| **MSIp** | IP address of the JetStream MSA VM. |
| **Dns** | DNS IP that the JetStream MSA VM should use. |
| **Gateway** | IP address of the network gateway for the JetStream MSA VM. |
azure-vmware Integrate Azure Native Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/integrate-azure-native-services.md
Title: Monitor and protect VMs with Azure native services
description: Learn how to integrate and deploy Microsoft Azure native tools to monitor and manage your Azure VMware Solution workloads. Previously updated : 08/15/2021 Last updated : 10/26/2022+ # Monitor and protect VMs with Azure native services
Microsoft Azure native services let you monitor, manage, and protect your virtua
The Azure native services that you can integrate with Azure VMware Solution include: -- **Azure Arc** extends Azure management to any infrastructure, including Azure VMware Solution, on-premises, or other cloud platforms. [Azure Arc-enabled servers](../azure-arc/servers/overview.md) lets you manage your Windows and Linux physical servers and virtual machines hosted *outside* of Azure, on your corporate network, or another cloud provider. You can attach a Kubernetes cluster hosted in your Azure VMware Solution environment using [Azure Arc enabled Kubernetes](../azure-arc/kubernetes/overview.md).
+- **Azure Arc** extends Azure management to any infrastructure, including Azure VMware Solution, on-premises, or other cloud platforms. [Azure Arc-enabled servers](../azure-arc/servers/overview.md) lets you manage your Windows and Linux physical servers and virtual machines hosted *outside* of Azure, on your corporate network, or another cloud provider. You can attach a Kubernetes cluster hosted in your Azure VMware Solution environment using [Azure Arc enabled Kubernetes](../azure-arc/kubernetes/overview.md).
-- **Azure Monitor** collects, analyzes, and acts on telemetry from your cloud and on-premises environments. It requires no deployment. You can monitor guest operating system performance to discover and map application dependencies for Azure VMware Solution or on-premises VMs. Your Log Analytics workspace in Azure Monitor enables log collection and performance counter collection using the Log Analytics agent or extensions.
+- **Azure Monitor** collects, analyzes, and acts on data from your cloud and on-premises environments. It requires no deployment. You can monitor guest operating system performance to discover and map application dependencies for Azure VMware Solution or on-premises VMs. Your Log Analytics workspace in Azure Monitor enables log collection and performance counter collection using the Log Analytics agent or extensions.
With Azure Monitor, you can collect data from different [sources to monitor and analyze](../azure-monitor/data-sources.md) and different types of [data for analysis, visualization, and alerting](../azure-monitor/data-platform.md). You can also create alert rules to identify issues in your environment, like high use of resources, missing patches, low disk space, and heartbeat of your VMs. You can set an automated response to detected events by sending an alert to IT Service Management (ITSM) tools. Alert detection notification can also be sent via email.
The Azure native services that you can integrate with Azure VMware Solution incl
The diagram shows the integrated monitoring architecture for Azure VMware Solution VMs. The Log Analytics agent enables collection of log data from Azure, Azure VMware Solution, and on-premises VMs. The log data is sent to Azure Monitor Logs and stored in a Log Analytics workspace. You can deploy the Log Analytics agent using Arc enabled servers [VM extensions support](../azure-arc/servers/manage-vm-extensions.md) for new and existing VMs.
You can configure the Log Analytics workspace with Microsoft Sentinel for alert
## Before you start
-If you are new to Azure or unfamiliar with any of the services previously mentioned, review the following articles:
+If you're new to Azure or unfamiliar with any of the services previously mentioned, review the following articles:
- [Automation account authentication overview](../automation/automation-security-overview.md) - [Designing your Azure Monitor Logs deployment](../azure-monitor/logs/workspace-design.md) and [Azure Monitor](../azure-monitor/overview.md)
If you are new to Azure or unfamiliar with any of the services previously mentio
- [What is Azure Arc enabled servers?](../azure-arc/servers/overview.md) and [What is Azure Arc enabled Kubernetes?](../azure-arc/kubernetes/overview.md)
- [Update Management overview](../automation/update-management/overview.md)

## Enable Azure Update Management

[Azure Update Management](../automation/update-management/overview.md) in Azure Automation manages operating system updates for your Windows and Linux machines in a hybrid environment. It monitors patching compliance and forwards patching deviation alerts to Azure Monitor for remediation. Azure Update Management must connect to your Log Analytics workspace to use stored data to assess the status of updates on your VMs.
If you are new to Azure or unfamiliar with any of the services previously mentio
1. [Create a Log Analytics workspace in the Azure portal](../azure-monitor/logs/quick-create-workspace.md). If you prefer, you can also create a workspace via [CLI](../azure-monitor/logs/resource-manager-workspace.md), [PowerShell](../azure-monitor/logs/powershell-workspace-configuration.md), or [Azure Resource Manager template](../azure-monitor/logs/resource-manager-workspace.md).

1. [Enable Update Management from an Automation account](../automation/update-management/enable-from-automation-account.md). In the process, you'll link your Log Analytics workspace with your automation account.
-
-1. Once you've enabled Update Management, you can [deploy updates on VMs and review the results](../automation/update-management/deploy-updates.md).
+
+1. Once you've enabled Update Management, you can [deploy updates on VMs and review the results](../automation/update-management/deploy-updates.md).
## Enable Microsoft Defender for Cloud
For more information, see [Integrate Microsoft Defender for Cloud with Azure VMw
Extend Azure management to any infrastructure, including Azure VMware Solution, on-premises, or other cloud platforms. For information on enabling Azure Arc enabled servers for multiple Windows or Linux VMs, see [Connect hybrid machines to Azure at scale](../azure-arc/servers/onboard-service-principal.md).

## Onboard hybrid Kubernetes clusters with Azure Arc-enabled Kubernetes

Attach a Kubernetes cluster hosted in your Azure VMware Solution environment using Azure Arc enabled Kubernetes. For more information, see [Create an Azure Arc-enabled Kubernetes cluster](../azure-arc/kubernetes/quickstart-connect-cluster.md).

## Deploy the Log Analytics agent

Monitor Azure VMware Solution VMs through the Log Analytics agent. Machines connected to the Log Analytics workspace use the [Log Analytics agent](../azure-monitor/agents/log-analytics-agent.md) to collect data about changes to installed software, Microsoft services, Windows registry and files, and Linux daemons on monitored servers. When data is available, the agent sends it to Azure Monitor Logs for processing. Azure Monitor Logs applies logic to the received data, records it, and makes it available for analysis.

Deploy the Log Analytics agent by using [Azure Arc-enabled servers VM extension support](../azure-arc/servers/manage-vm-extensions.md).

## Enable Azure Monitor

Azure Monitor can collect data from different [sources to monitor and analyze](../azure-monitor/data-sources.md) and different types of [data for analysis, visualization, and alerting](../azure-monitor/data-platform.md). You can also create alert rules to identify issues in your environment, like high use of resources, missing patches, low disk space, and heartbeat of your VMs. You can set an automated response to detected events by sending an alert to IT Service Management (ITSM) tools. Alert detection notification can also be sent via email.
-Monitor guest operating system performance to discover and map application dependencies for Azure VMware Solution or on-premises VMs. Your Log Analytics workspace in Azure Monitor enables log collection and performance counter collection using the Log Analytics agent or extensions.
-
+Monitor guest operating system performance to discover and map application dependencies for Azure VMware Solution or on-premises VMs. Your Log Analytics workspace in Azure Monitor enables log collection and performance counter collection using the Log Analytics agent or extensions.
1. [Design your Azure Monitor Logs deployment](../azure-monitor/logs/workspace-design.md)
Monitor guest operating system performance to discover and map application depen
- [Connect Azure to ITSM tools using IT Service Management Connector](../azure-monitor/alerts/itsmc-overview.md).

## Next steps

Now that you've covered Azure VMware Solution network and interconnectivity concepts, you may want to learn about [integrating Microsoft Defender for Cloud with Azure VMware Solution](azure-security-integration.md).
azure-vmware Tutorial Access Private Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/tutorial-access-private-cloud.md
Title: Tutorial - Access your private cloud
description: Learn how to access an Azure VMware Solution private cloud Previously updated : 08/13/2021 Last updated : 10/27/2022+ # Tutorial: Access an Azure VMware Solution private cloud
-Azure VMware Solution doesn't allow you to manage your private cloud with your on-premises vCenter Server. Instead, you'll need to connect to the Azure VMware Solution vCenter Server instance through a jump box.
+Azure VMware Solution doesn't allow you to manage your private cloud with your on-premises vCenter Server. Instead, you'll need to connect to the Azure VMware Solution vCenter Server instance through a jump box.
-In this tutorial, you'll create a jump box in the resource group you created in the [previous tutorial](tutorial-configure-networking.md) and sign into the Azure VMware Solution vCenter Server. This jump box is a Windows virtual machine (VM) on the same virtual network you created. It provides access to both vCenter Server and the NSX Manager.
+In this tutorial, you'll create a jump box in the resource group you created in the [previous tutorial](tutorial-configure-networking.md) and sign into the Azure VMware Solution vCenter Server. This jump box is a Windows virtual machine (VM) on the same virtual network you created. It provides access to both vCenter Server and the NSX Manager.
In this tutorial, you learn how to:
In this tutorial, you learn how to:
1. In the resource group, select **Add**, search for **Microsoft Windows 10**, and select it. Then select **Create**.
- :::image type="content" source="media/tutorial-access-private-cloud/ss8-azure-w10vm-create.png" alt-text="Screenshot of how to add a new Windows 10 VM for a jump box.":::
+   :::image type="content" source="media/tutorial-access-private-cloud/ss8-azure-w10vm-create.png" alt-text="Screenshot of how to add a new Windows 10 VM for a jump box." lightbox="media/tutorial-access-private-cloud/ss8-azure-w10vm-create.png":::
-1. Enter the required information in the fields, and then select **Review + create**.
+1. Enter the required information in the fields, and then select **Review + create**.
For more information on the fields, see the following table.
In this tutorial, you learn how to:
1. From the jump box, sign in to vSphere Client with VMware vCenter Server SSO using a cloud admin username and verify that the user interface displays successfully.
-1. In the Azure portal, select your private cloud, and then **Manage** > **Identity**.
+1. In the Azure portal, select your private cloud, and then **Manage** > **Identity**.
The URLs and user credentials for private cloud vCenter Server and NSX-T Manager display.
- :::image type="content" source="media/tutorial-access-private-cloud/ss4-display-identity.png" alt-text="Screenshot showing the private cloud vCenter Server and NSX Manager URLs and credentials." lightbox="media/tutorial-access-private-cloud/ss4-display-identity.png":::
+ :::image type="content" source="media/tutorial-access-private-cloud/ss4-display-identity.png" alt-text="Screenshot showing the private cloud vCenter Server and NSX Manager URLs and credentials."lightbox="media/tutorial-access-private-cloud/ss4-display-identity.png":::
-1. Navigate to the VM you created in the preceding step and connect to the virtual machine.
+1. Navigate to the VM you created in the preceding step and connect to the virtual machine.
If you need help with connecting to the VM, see [connect to a virtual machine](../virtual-machines/windows/connect-logon.md#connect-to-the-virtual-machine) for details.
-1. In the Windows VM, open a browser and navigate to the vCenter Server and NSX-T Manager URLs in two tabs.
+1. In the Windows VM, open a browser and navigate to the vCenter Server and NSX-T Manager URLs in two tabs.
1. In the vSphere Client tab, enter the `cloudadmin@vsphere.local` user credentials from the previous step.
- :::image type="content" source="media/tutorial-access-private-cloud/ss5-vcenter-login.png" alt-text="Screenshot showing the VMware vSphere sign in page." border="true":::
+ :::image type="content" source="media/tutorial-access-private-cloud/ss5-vcenter-login.png" alt-text="Screenshot showing the VMware vSphere sign in page."lightbox="media/tutorial-access-private-cloud/ss5-vcenter-login.png" border="true":::
- :::image type="content" source="media/tutorial-access-private-cloud/ss6-vsphere-client-home.png" alt-text="Screenshot showing a summary of Cluster-1 in the vSphere Client." border="true":::
+ :::image type="content" source="media/tutorial-access-private-cloud/ss6-vsphere-client-home.png" alt-text="Screenshot showing a summary of Cluster-1 in the vSphere Client."lightbox="media/tutorial-access-private-cloud/ss6-vsphere-client-home.png" border="true":::
1. In the second tab of the browser, sign in to NSX-T Manager.
- :::image type="content" source="media/tutorial-access-private-cloud/ss10-nsx-manager-home.png" alt-text="Screenshot of the NSX-T Manager Overview." border="true":::
--
+ :::image type="content" source="media/tutorial-access-private-cloud/ss10-nsx-manager-home.png" alt-text="Screenshot of the NSX-T Manager Overview."lightbox="media/tutorial-access-private-cloud/ss10-nsx-manager-home.png" border="true":::
## Next steps
Continue to the next tutorial to learn how to create a virtual network to set up
> [!div class="nextstepaction"]
> [Create a Virtual Network](tutorial-configure-networking.md)
azure-vmware Tutorial Create Private Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/tutorial-create-private-cloud.md
Title: Tutorial - Deploy an Azure VMware Solution private cloud
description: Learn how to create and deploy an Azure VMware Solution private cloud Previously updated : 09/29/2021 Last updated : 10/27/2022+ # Tutorial: Deploy an Azure VMware Solution private cloud
You use vCenter Server and NSX-T Manager to manage most other aspects of cluster
>[!TIP]
>You can always extend the cluster and add more clusters later if you need to go beyond the initial deployment number.
>[!TIP] >You can always extend the cluster and add more clusters later if you need to go beyond the initial deployment number.
-Because Azure VMware Solution doesn't allow you to manage your private cloud with your cloud vCenter Server at launch, you'll need to do additional steps for the configuration. This tutorial covers these steps and related prerequisites.
+Because Azure VMware Solution doesn't allow you to manage your private cloud with your cloud vCenter Server at launch, you'll need to complete more steps for the configuration. This tutorial covers these steps and related prerequisites.
In this tutorial, you'll learn how to:
In this tutorial, you've learned how to:
Continue to the next tutorial to learn how to create a jump box. You use the jump box to connect to your environment to manage your private cloud locally.

> [!div class="nextstepaction"]
> [Access an Azure VMware Solution private cloud](tutorial-access-private-cloud.md)
azure-vmware Tutorial Delete Private Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/tutorial-delete-private-cloud.md
Title: Tutorial - Delete an Azure VMware Solution private cloud
description: Learn how to delete an Azure VMware Solution private cloud that you no longer need. Previously updated : 03/13/2021 Last updated : 10/27/2022+ # Tutorial: Delete an Azure VMware Solution private cloud
If you have an Azure VMware Solution private cloud that you no longer need, you
* Several virtual machines (VMs)
-When you delete a private cloud, all VMs, their data, clusters, and network address space provisioned get deleted. The dedicated Azure VMware Solution hosts are securely wiped and returned to the free pool.
+When you delete a private cloud, all VMs, their data, clusters, and network address space provisioned get deleted. The dedicated Azure VMware Solution hosts are securely wiped and returned to the free pool.
> [!CAUTION]
> Deleting the private cloud terminates all running workloads and components and is an irreversible operation. Once you delete the private cloud, you cannot recover the data.
When you delete a private cloud, all VMs, their data, clusters, and network addr
If you require the VMs and their data later, make sure to back up the data before you delete the private cloud. Unfortunately, there's no way to recover the VMs and their data.

## Delete the private cloud

1. Access the Azure VMware Solution console in the [Azure portal](https://portal.azure.com).
2. Select the private cloud you want to delete.
-
-3. Enter the name of the private cloud and select **Yes**.
+
+3. Enter the name of the private cloud and select **Yes**.
>[!NOTE]
>The deletion process takes a few hours to complete.
azure-vmware Tutorial Expressroute Global Reach Private Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/tutorial-expressroute-global-reach-private-cloud.md
Title: Peer on-premises environments to Azure VMware Solution description: Learn how to create ExpressRoute Global Reach peering to a private cloud in Azure VMware Solution. -+ Previously updated : 07/28/2021 Last updated : 10/27/2022
-# Peer on-premises environments to Azure VMware Solution
+# Tutorial: Peer on-premises environments to Azure VMware Solution
-After you deploy your Azure VMware Solution private cloud, you'll connect it to your on-premises environment. ExpressRoute Global Reach connects your on-premises environment to your Azure VMware Solution private cloud. The ExpressRoute Global Reach connection is established between the private cloud ExpressRoute circuit and an existing ExpressRoute connection to your on-premises environments.
+After you deploy your Azure VMware Solution private cloud, you'll connect it to your on-premises environment. ExpressRoute Global Reach connects your on-premises environment to your Azure VMware Solution private cloud. The ExpressRoute Global Reach connection is established between the private cloud ExpressRoute circuit and an existing ExpressRoute connection to your on-premises environments.
:::image type="content" source="media/pre-deployment/azure-vmware-solution-on-premises-diagram.png" alt-text="Diagram showing ExpressRoute Global Reach on-premises network connectivity." lightbox="media/pre-deployment/azure-vmware-solution-on-premises-diagram.png" border="false":::
The circuit owner creates an authorization, which creates an authorization key t
> [!NOTE]
> Each connection requires a separate authorization.
-1. From the **ExpressRoute circuits** blade, under Settings, select **Authorizations**.
+1. From **ExpressRoute circuits** in the left navigation, under Settings, select **Authorizations**.
1. Enter the name for the authorization key and select **Save**.
- :::image type="content" source="media/expressroute-global-reach/start-request-auth-key-on-premises-expressroute.png" alt-text="Select Authorizations and enter the name for the authorization key.":::
+ :::image type="content" source="media/expressroute-global-reach/start-request-auth-key-on-premises-expressroute.png" alt-text="Select Authorizations and enter the name for the authorization key."lightbox="media/expressroute-global-reach/start-request-auth-key-on-premises-expressroute.png":::
Once created, the new key appears in the list of authorization keys for the circuit.

1. Copy the authorization key and the ExpressRoute ID. You'll use them in the next step to complete the peering.
-## Peer private cloud to on-premises
+## Peer private cloud to on-premises
+ Now that you've created an authorization key for the private cloud ExpressRoute circuit, you can peer it with your on-premises ExpressRoute circuit. The peering is done from the on-premises ExpressRoute circuit in the **Azure portal**. You'll use the resource ID (ExpressRoute circuit ID) and authorization key of your private cloud ExpressRoute circuit to finish the peering.

1. From the private cloud, under Manage, select **Connectivity** > **ExpressRoute Global Reach** > **Add**.
- :::image type="content" source="./media/expressroute-global-reach/expressroute-global-reach-tab.png" alt-text="Screenshot showing the ExpressRoute Global Reach tab in the Azure VMware Solution private cloud.":::
+ :::image type="content" source="./media/expressroute-global-reach/expressroute-global-reach-tab.png" alt-text="Screenshot showing the ExpressRoute Global Reach tab in the Azure VMware Solution private cloud." lightbox="./media/expressroute-global-reach/expressroute-global-reach-tab.png":::
1. Enter the ExpressRoute ID and the authorization key created in the previous section.
- :::image type="content" source="./media/expressroute-global-reach/on-premises-cloud-connections.png" alt-text="Screenshot showing the dialog for entering the connection information.":::
+ :::image type="content" source="./media/expressroute-global-reach/on-premises-cloud-connections.png" alt-text="Screenshot showing the dialog for entering the connection information." lightbox="./media/expressroute-global-reach/on-premises-cloud-connections.png":::
1. Select **Create**. The new connection shows in the on-premises cloud connections list.

>[!TIP]
>You can delete or disconnect a connection from the list by selecting **More**.
>
->:::image type="content" source="./media/expressroute-global-reach/on-premises-connection-disconnect.png" alt-text="Screenshot showing how to disconnect or delete an on-premises connection in Azure VMware Solution.":::
-
+>:::image type="content" source="./media/expressroute-global-reach/on-premises-connection-disconnect.png" alt-text="Screenshot showing how to disconnect or delete an on-premises connection in Azure VMware Solution." lightbox="./media/expressroute-global-reach/on-premises-connection-disconnect.png":::
## Verify on-premises network connectivity
In your **on-premises edge router**, you should now see where the ExpressRoute c
>Everyone has a different environment, and some will need to allow these routes to propagate back into the on-premises network.

## Next steps

Continue to the next tutorial to install the VMware HCX add-on in your Azure VMware Solution private cloud.

> [!div class="nextstepaction"]
> [Install VMware HCX](install-vmware-hcx.md)

<!-- LINKS - external-->
<!-- LINKS - internal -->
azure-vmware Tutorial Scale Private Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/tutorial-scale-private-cloud.md
Title: Tutorial - Scale clusters in a private cloud
description: In this tutorial, you use the Azure portal to scale an Azure VMware Solution private cloud. Previously updated : 08/03/2021 Last updated : 10/27/2022+ #Customer intent: As a VMware administrator, I want to learn how to scale an Azure VMware Solution private cloud in the Azure portal. # Tutorial: Scale clusters in a private cloud
-To get the most out of your Azure VMware Solution private cloud experience, scale the clusters and hosts to reflect what you need for planned workloads. You can scale the clusters and hosts in a private cloud as required for your application workload. You should address performance and availability limitations for specific services on a case-by-case basis.
+To get the most out of your Azure VMware Solution private cloud experience, scale the clusters and hosts to reflect what you need for planned workloads. You can scale the clusters and hosts in a private cloud as required for your application workload. You should address performance and availability limitations for specific services on a case-by-case basis.
[!INCLUDE [azure-vmware-solutions-limits](includes/azure-vmware-solutions-limits.md)]
In this tutorial, you'll use the Azure portal to:
## Prerequisites
-You'll need an existing private cloud to complete this tutorial. If you haven't created a private cloud, follow the [create a private cloud tutorial](tutorial-create-private-cloud.md) to create one.
+You'll need an existing private cloud to complete this tutorial. If you haven't created a private cloud, follow the [create a private cloud tutorial](tutorial-create-private-cloud.md) to create one.
## Add a new cluster

1. In your Azure VMware Solution private cloud, under **Manage**, select **Clusters** > **Add a cluster**.
- :::image type="content" source="media/tutorial-scale-private-cloud/ss2-select-add-cluster.png" alt-text="Screenshot showing how to add a cluster to an Azure VMware Solution private cloud." border="true":::
+ :::image type="content" source="media/tutorial-scale-private-cloud/ss2-select-add-cluster.png" alt-text="Screenshot showing how to add a cluster to an Azure VMware Solution private cloud." lightbox="media/tutorial-scale-private-cloud/ss2-select-add-cluster.png" border="true":::
1. Use the slider to select the number of hosts and then select **Save**.
- :::image type="content" source="media/tutorial-scale-private-cloud/ss3-configure-new-cluster.png" alt-text="Screenshot showing how to configure a new cluster." border="true":::
+ :::image type="content" source="media/tutorial-scale-private-cloud/ss3-configure-new-cluster.png" alt-text="Screenshot showing how to configure a new cluster." lightbox="media/tutorial-scale-private-cloud/ss3-configure-new-cluster.png" border="true":::
The deployment of the new cluster begins.
-## Scale a cluster
+## Scale a cluster
1. In your Azure VMware Solution private cloud, under **Manage**, select **Clusters**.
-1. Select the cluster you want to scale, select **More** (...) and then select **Edit**.
+1. Select the cluster you want to scale, select **More** (...), then select **Edit**.
- :::image type="content" source="media/tutorial-scale-private-cloud/ss4-select-scale-private-cloud-2.png" alt-text="Screenshot showing where to edit an existing cluster." border="true":::
+ :::image type="content" source="media/tutorial-scale-private-cloud/ss4-select-scale-private-cloud-2.png" alt-text="Screenshot showing where to edit an existing cluster." lightbox="media/tutorial-scale-private-cloud/ss4-select-scale-private-cloud-2.png" border="true":::
1. Use the slider to select the number of hosts and then select **Save**.
You'll need an existing private cloud to complete this tutorial. If you haven't
## Next steps
-If you require another Azure VMware Solution private cloud, [create another private cloud](tutorial-create-private-cloud.md), following the same networking prerequisites, cluster, and host limits.
+If you require another Azure VMware Solution private cloud, [create another private cloud](tutorial-create-private-cloud.md) following the same networking prerequisites, cluster, and host limits.
<!-- LINKS - external-->
backup Backup Azure Dataprotection Use Rest Api Backup Blobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-dataprotection-use-rest-api-backup-blobs.md
Title: Back up blobs in a storage account using Azure Data Protection REST API. description: In this article, learn how to configure, initiate, and manage backup operations of blobs using REST API. Previously updated : 07/09/2021 Last updated : 10/31/2022 ms.assetid: 7c244b94-d736-40a8-b94d-c72077080bbe++++ # Back up blobs in a storage account using Azure Data Protection via REST API
-This article describes how to manage backups for blobs in a storage account via REST API. Backup of blobs is configured at the storage account level. So, all blobs in the storage account are protected with operational backup.
+Azure Backup enables you to easily configure operational backup for protecting block blobs in your storage accounts.
+
+This article describes how to configure backups for blobs in a storage account via REST API. Backup of blobs is configured at the storage account level. So, all blobs in the storage account are protected with operational backup.
+
+In this article, you'll learn about:
+
+> [!div class="checklist"]
+> - Prerequisites
+> - Configure backup
For information on the Azure blob region availability, supported scenarios, and limitations, see the [support matrix](blob-backup-support-matrix.md).

## Prerequisites

- [Create a Backup vault](backup-azure-dataprotection-use-rest-api-create-update-backup-vault.md)
- [Create a blob backup policy](backup-azure-dataprotection-use-rest-api-create-update-blob-policy.md)

## Configure backup
-Once the vault and policy are created, there are two critical points that the user needs to consider to protect all Azure blobs within a storage account.
+Once you create the vault and policy, you need to consider two critical points to protect all Azure Blobs within a storage account.
+
+### Key entities
-### Key entities involved
+#### Storage account that contains the blobs for protection
-#### Storage account which contains the blobs to be protected
+Fetch the Azure Resource Manager ID of the storage account which contains the blobs to be protected. This serves as the identifier of the storage account.
-Fetch the Azure Resource Manager ID of the storage account which contains the blobs to be protected. This will serve as the identifier of the storage account. We will use an example of a storage account named _msblobbackup_, under the resource group _RG-BlobBackup_, in a different subscription and in west US.
+For example, we'll use a storage account named *msblobbackup*, under the resource group *RG-BlobBackup*, in a different subscription and in *west US*.
```http "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx/resourcegroups/RG-BlobBackup/providers/Microsoft.Storage/storageAccounts/msblobbackup"
Fetch the Azure Resource Manager ID of the storage account which contains the bl
#### Backup vault
-The Backup vault requires permissions on the storage account to enable backups on blobs present within the storage account. The system-assigned managed identity of the vault is used for assigning such permissions. We will use an example of a backup vault called "testBkpVault" in "West US" region under "TestBkpVaultRG" resource group.
+The Backup vault requires permissions on the storage account to enable backups on blobs present within the storage account. The system-assigned managed identity of the vault is used for assigning the permissions.
+
+For example, we'll use a backup vault called *testBkpVault* in the *West US* region, under the *TestBkpVaultRG* resource group.
### Assign permissions
-You need to assign a few permissions via RBAC to vault (represented by vault MSI) and the relevant storage account. These can be performed via Portal or PowerShell or REST API. Learn more about all [related permissions](blob-backup-configure-manage.md#grant-permissions-to-the-backup-vault-on-storage-accounts).
+You need to assign a few permissions via Azure role-based access control (Azure RBAC) to the vault (represented by the vault's system-assigned managed identity) and the relevant storage account. You can do this via the Azure portal, PowerShell, or REST API. Learn more about all [related permissions](blob-backup-configure-manage.md#grant-permissions-to-the-backup-vault-on-storage-accounts).
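As a sketch of the REST route, a role assignment can be created at the storage-account scope as shown below. The principal ID and role-definition GUID are placeholders; the exact role to grant is covered in the linked article.

```python
import uuid
import requests

token = "<arm-access-token>"  # see the earlier token sketch
subscription_id = "xxxxxxxx-xxxx-xxxx-xxxx"
storage_account_id = (
    f"/subscriptions/{subscription_id}/resourcegroups/RG-BlobBackup"
    "/providers/Microsoft.Storage/storageAccounts/msblobbackup"
)

# Placeholders: the vault's system-assigned identity and the GUID of the
# backup-related role definition you're granting.
vault_principal_id = "<vault-system-assigned-identity-principal-id>"
role_definition_id = (
    f"/subscriptions/{subscription_id}"
    "/providers/Microsoft.Authorization/roleDefinitions/<role-definition-guid>"
)

# Role assignments are PUT at the scope they apply to, with a new GUID as the
# assignment name.
assignment_url = (
    f"https://management.azure.com{storage_account_id}"
    f"/providers/Microsoft.Authorization/roleAssignments/{uuid.uuid4()}"
    "?api-version=2022-04-01"
)
body = {
    "properties": {
        "roleDefinitionId": role_definition_id,
        "principalId": vault_principal_id,
    }
}
requests.put(
    assignment_url, headers={"Authorization": f"Bearer {token}"}, json=body
).raise_for_status()
```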
### Prepare the request to configure backup
-Once the relevant permissions are set to the vault and storage account, and the vault and policy are configured, we can prepare the request to configure backup. The following is the request body to configure backup for all blobs within a storage account. The Azure Resource Manager ID (ARM ID) of the storage account and its details are mentioned in the 'datasourceinfo' section and the policy information is present in the 'policyinfo' section.
+Once you set the relevant permissions to the vault and storage account, and configure the vault and policy, prepare the request to configure backup.
+
+The following is the request body to configure backup for all blobs within a storage account. The Azure Resource Manager ID (ARM ID) of the storage account and its details are mentioned in the *datasourceinfo* section and the policy information is present in the *policyinfo* section.
```json {
Once the relevant permissions are set to the vault and storage account, and the
### Validate the request to configure backup
-We can validate whether the request to configure backup or not will be successful or not using [the validate for backup API](/rest/api/dataprotection/backup-instances/validate-for-backup). The response can be used by customer to perform all required pre-requisites and then submit the configuration for backup request.
+To validate whether the request to configure backup will succeed, use [the validate for backup API](/rest/api/dataprotection/backup-instances/validate-for-backup). You can use the response to complete all required prerequisites and then submit the configure backup request.
-Validate for backup request is a POST operation and the URI has `{subscriptionId}`, `{vaultName}`, `{vaultresourceGroupName}` parameters.
+*Validate for backup request* is a *POST* operation, and the URI has `{subscriptionId}`, `{vaultName}`, and `{vaultresourceGroupName}` parameters.
```http
POST https://management.azure.com/Subscriptions/{subscriptionId}/resourceGroups/{vaultresourceGroupname}/providers/Microsoft.DataProtection/backupVaults/{backupVaultName}/validateForBackup?api-version=2021-01-01
```
-For example, this translates to
+For example, this translates to:
```http
POST https://management.azure.com/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxx/resourceGroups/TestBkpVaultRG/providers/Microsoft.DataProtection/backupVaults/testBkpVault/validateForBackup?api-version=2021-01-01
```
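As a minimal Python sketch of this call (assuming the `requests` package, an ARM bearer token from the earlier sketch, and the request body prepared in the previous section):

```python
import requests

token = "<arm-access-token>"
request_body = {}  # the datasourceInfo/policyInfo document prepared above

validate_url = (
    "https://management.azure.com/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxx"
    "/resourceGroups/TestBkpVaultRG/providers/Microsoft.DataProtection"
    "/backupVaults/testBkpVault/validateForBackup?api-version=2021-01-01"
)
response = requests.post(
    validate_url, headers={"Authorization": f"Bearer {token}"}, json=request_body
)

# 202 (Accepted) creates a tracking operation to poll via the
# Azure-AsyncOperation header; 400 means the account is already protected.
print(response.status_code, response.headers.get("Azure-AsyncOperation"))
```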
-The [request body](#prepare-the-request-to-configure-backup) that we prepared earlier will be used to give the details of the storage account to be protected.
+The [request body](#prepare-the-request-to-configure-backup) that you prepared earlier is used to give the details of the storage account to be protected.
#### Example request body
It returns two responses: 202 (Accepted) when another operation is created and t
###### Error response
-In case the given storage account is already protected, the response is HTTP 400 (Bad request) and clearly states that the given storage account is protected to a backup vault along with details.
+If the given storage account is already protected, the response is HTTP 400 (Bad request) and clearly states that the given storage account is protected to a backup vault along with details.
```http HTTP/1.1 400 BadRequest
X-Powered-By: ASP.NET
} ```
-###### Tracking response
+###### Track response
-If the datasource is unprotected, then the API proceeds for further validations and creates a tracking operation.
+If the data source is unprotected, the API proceeds with further validations and creates a tracking operation.
```http HTTP/1.1 202 Accepted
Location: https://management.azure.com/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxx
X-Powered-By: ASP.NET ```
-Track the resulting operation using the "Azure-AsyncOperation" header with a simple *GET* command
+Track the resulting operation using the *Azure-AsyncOperation* header with a simple *GET* command.
```http GET https://management.azure.com/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxx/providers/Microsoft.DataProtection/locations/westus/operationStatus/ZmMzNDFmYWMtZWJlMS00NGJhLWE4YTgtMDNjYjI4Y2M5OTExOzM2NDdhZDNjLTFiNGEtNDU4YS05MGJkLTQ4NThiYjRhMWFkYg==?api-version=2021-01-01
GET https://management.azure.com/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxx
} ```
-It returns 200(OK) once it completes and the response body lists further requirements to be fulfilled, such as permissions.
+It returns 200 (OK) once the validation completes and the response body lists further requirements to be fulfilled, such as permissions.
```http GET https://management.azure.com/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxx/providers/Microsoft.DataProtection/locations/westus/operationStatus/ZmMzNDFmYWMtZWJlMS00NGJhLWE4YTgtMDNjYjI4Y2M5OTExOzM2NDdhZDNjLTFiNGEtNDU4YS05MGJkLTQ4NThiYjRhMWFkYg==?api-version=2021-01-01
GET https://management.azure.com/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxx
} ```
-If all the permissions are granted, then resubmit the validate request, track the resulting operation and it will return 200(OK) as succeeded if all the conditions are met.
+If all the permissions are granted, resubmit the validate request and track the resulting operation. It returns 200 (OK) as succeeded if all the conditions are met.
```http GET https://management.azure.com/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxx/providers/Microsoft.DataProtection/locations/westus/operationStatus/ZmMzNDFmYWMtZWJlMS00NGJhLWE4YTgtMDNjYjI4Y2M5OTExOzlhMjk2YWM2LWRjNDMtNGRjZS1iZTU2LTRkZDNiMDhjZDlkOA==?api-version=2021-01-01
GET https://management.azure.com/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxx
### Configure backup request
-Once the request is validated, then you can submit the same to the [create backup instance API](/rest/api/dataprotection/backup-instances/create-or-update). A Backup instance represents an item protected with data protection service of Azure Backup within the backup vault. In this case, the storage account is the backup instance and you can use the same request body, which was validated above, with minor additions.
+Once the request validation is complete, you can submit the same to the [create backup instance API](/rest/api/dataprotection/backup-instances/create-or-update). A Backup instance represents an item protected with data protection service of Azure Backup within the backup vault. In this case, the storage account is the backup instance and you can use the same request body, which was validated above, with minor additions.
-You have to decide a unique name for the backup instance and hence we recommend you use a combination of the resource name and a unique identifier. We will use an example of "msblobbackup-f2df34eb-5628-4570-87b2-0331d797c67d" here and mark it as the backup instance name.
+Use a unique name for the backup instance. We recommend a combination of the resource name and a unique identifier. In this example, we'll use *msblobbackup-f2df34eb-5628-4570-87b2-0331d797c67d* as the backup instance name.
-To create or update the backup instance, use the following ***PUT*** operation.
+To create or update the backup instance, use the following *PUT* operation.
```http
PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DataProtection/backupVaults/{BkpvaultName}/backupInstances/{UniqueBackupInstanceName}?api-version=2021-01-01
```
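As a sketch of this call in Python (reusing the example vault names; the GUID suffix makes the instance name unique, and `request_body` stands in for the document that passed validation):

```python
import uuid
import requests

token = "<arm-access-token>"
subscription_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxx"
request_body = {}  # the validated backup-instance request body

# Unique backup instance name: resource name plus a GUID.
backup_instance_name = f"msblobbackup-{uuid.uuid4()}"

create_url = (
    f"https://management.azure.com/subscriptions/{subscription_id}"
    "/resourceGroups/TestBkpVaultRG/providers/Microsoft.DataProtection"
    f"/backupVaults/testBkpVault/backupInstances/{backup_instance_name}"
    "?api-version=2021-01-01"
)
response = requests.put(
    create_url, headers={"Authorization": f"Bearer {token}"}, json=request_body
)
print(response.status_code)                          # expect 201 (Created)
print(response.headers.get("Azure-AsyncOperation"))  # operation URL to poll
```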
To create a backup instance, following are the components of the request body
##### Example request for configure backup
-We will use the same request body that we used to validate the backup request with a unique name as we mentioned [above](#configure-backup).
+Use the same request body that you used to validate the backup request, with the unique name mentioned [above](#configure-backup).
```json {
It returns two responses: 201 (Created) when backup instance is created and the
##### Example responses to configure backup request
-Once you submit the *PUT* request to create a backup instance, the initial response is 201 (Created) with an Azure-asyncOperation header. Please note that the request body contains all the backup instance properties.
+Once you submit the *PUT* request to create a backup instance, the initial response is 201 (Created) with an Azure-asyncOperation header.
+
+>[!NOTE]
+>The request body contains all the backup instance properties.
```http HTTP/1.1 201 Created
X-Powered-By: ASP.NET
} ```
-Then track the resulting operation using the Azure-AsyncOperation header with a simple *GET* command.
+Then track the resulting operation using the *Azure-AsyncOperation* header with a simple *GET* command.
```http
GET https://management.azure.com/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxx/providers/Microsoft.DataProtection/locations/westus/operationStatus/ZmMzNDFmYWMtZWJlMS00NGJhLWE4YTgtMDNjYjI4Y2M5OTExOzI1NWUwNmFlLTI5MjUtNDBkNy1iMjMxLTM0ZWZlMDA3NjdkYQ==?api-version=2021-01-01
```
Once the operation completes, it returns 200 (OK) with the success message in th
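The polling pattern is the same for the validate and create operations. As a sketch in Python (assuming the `requests` package and an ARM bearer token; the terminal status strings are the usual ARM values and may vary by service):

```python
import time
import requests

def wait_for_operation(async_operation_url: str, token: str) -> dict:
    """Poll an Azure-AsyncOperation URL until the operation finishes."""
    while True:
        response = requests.get(
            async_operation_url, headers={"Authorization": f"Bearer {token}"}
        )
        response.raise_for_status()
        status = response.json()
        # Assumed terminal states; anything else means keep polling.
        if status.get("status") in ("Succeeded", "Failed", "Canceled"):
            return status
        time.sleep(10)  # back off between polls
```

Pass it the URL from the *Azure-AsyncOperation* header of the initial 201/202 response.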
### Stop protection and delete data
-To remove the protection on a storage account and delete the backup data as well, perform a delete operation as detailed [here](/rest/api/dataprotection/backup-instances/delete).
+To remove the protection on a storage account and delete the backup data as well, follow [the delete operation process](/rest/api/dataprotection/backup-instances/delete).
Stopping protection and deleting data is a *DELETE* operation.
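As a sketch, the same request from Python (reusing the example names and the backup instance created above):

```python
import requests

token = "<arm-access-token>"
subscription_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxx"
backup_instance_name = "msblobbackup-f2df34eb-5628-4570-87b2-0331d797c67d"

delete_url = (
    f"https://management.azure.com/subscriptions/{subscription_id}"
    "/resourceGroups/TestBkpVaultRG/providers/Microsoft.DataProtection"
    f"/backupVaults/testBkpVault/backupInstances/{backup_instance_name}"
    "?api-version=2021-01-01"
)
response = requests.delete(
    delete_url, headers={"Authorization": f"Bearer {token}"}
)
response.raise_for_status()  # track completion via the Azure-AsyncOperation header
```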
backup Backup Support Matrix Iaas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-support-matrix-iaas.md
Title: Support matrix for Azure VM backup description: Provides a summary of support settings and limitations when backing up Azure VMs with the Azure Backup service. Previously updated : 09/07/2022 Last updated : 10/31/2022
Azure VM data disks | Support for backup of Azure VMs with up to 32 disks.<br><b
Data disk size | Individual disk size can be up to 32 TB and a maximum of 256 TB combined for all disks in a VM.
Storage type | Standard HDD, Standard SSD, Premium SSD. <br><br> Backup and restore of [ZRS disks](../virtual-machines/disks-redundancy.md#zone-redundant-storage-for-managed-disks) is supported.
Managed disks | Supported.
-Encrypted disks | Supported.<br/><br/> Azure VMs enabled with Azure Disk Encryption can be backed up (with or without the Azure AD app).<br/><br/> Encrypted VMs can't be recovered at the file/folder level. You must recover the entire VM.<br/><br/> You can enable encryption on VMs that are already protected by Azure Backup.
+Encrypted disks | Supported.<br/><br/> Azure VMs enabled with Azure Disk Encryption can be backed up (with or without the Azure AD app).<br/><br/> Encrypted VMs can't be recovered at the file/folder level. You must recover the entire VM.<br/><br/> You can enable encryption on VMs that are already protected by Azure Backup. <br><br> You can back up and restore disks encrypted using platform-managed keys (PMKs) or customer-managed keys (CMKs). You can also assign a disk-encryption set while restoring in the same region (that is, providing a disk-encryption set during cross-region restore is currently not supported; however, you can assign the disk-encryption set to the restored disk after the restore is complete).
Disks with Write Accelerator enabled | Azure VM with WA disk backup is available in all Azure public regions starting from May 18, 2020. If WA disk backup isn't required as part of VM backup, you can choose to exclude it with the [**Selective disk** feature](selective-disk-backup-restore.md). <br><br>**Important** <br> Virtual machines with WA disks need internet connectivity for a successful backup (even though those disks are excluded from the backup).
Disks enabled for access with private endpoint | Unsupported.
Back up & restore deduplicated VMs/disks | Azure Backup doesn't support deduplication. For more information, see this [article](./backup-support-matrix.md#disk-deduplication-support). <br/> <br/> - Azure Backup doesn't deduplicate across VMs in the Recovery Services vault. <br/> <br/> - If there are VMs in deduplication state during restore, the files can't be restored because the vault doesn't understand the format. However, you can successfully perform the full VM restore.
backup Blob Backup Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/blob-backup-support-matrix.md
Operational backup of blobs uses blob point-in-time restore, blob versioning, so
**Other limitations:** -- If you've deleted a container during the retention period, that container won't be restored with the point-in-time restore operation. If you attempt to restore a range of blobs that includes blobs in a deleted container, the point-in-time restore operation will fail. For more information about protecting containers from deletion, see [Soft delete for containers (preview)](../storage/blobs/soft-delete-container-overview.md).
+- If you've deleted a container during the retention period, that container won't be restored with the point-in-time restore operation. If you attempt to restore a range of blobs that includes blobs in a deleted container, the point-in-time restore operation will fail. For more information about protecting containers from deletion, see [Soft delete for containers](../storage/blobs/soft-delete-container-overview.md).
- If a blob has moved between the hot and cool tiers in the period between the present moment and the restore point, the blob is restored to its previous tier. Restoring block blobs in the archive tier isn't supported. For example, if a blob in the hot tier was moved to the archive tier two days ago, and a restore operation restores to a point three days ago, the blob isn't restored to the hot tier. To restore an archived blob, first move it out of the archive tier. For more information, see [Rehydrate blob data from the archive tier](../storage/blobs/archive-rehydrate-overview.md).
- A block that has been uploaded via [Put Block](/rest/api/storageservices/put-block) or [Put Block from URL](/rest/api/storageservices/put-block-from-url), but not committed via [Put Block List](/rest/api/storageservices/put-block-list), isn't part of a blob and so isn't restored as part of a restore operation.
- A blob with an active lease can't be restored. If a blob with an active lease is included in the range of blobs to restore, the restore operation will fail automatically. Break any active leases before starting the restore operation (see the sketch after this list).
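For the lease limitation, a minimal Python sketch using the `azure-storage-blob` package (the connection string, container, and blob names are placeholders):

```python
from azure.storage.blob import BlobClient, BlobLeaseClient

# Break an active lease on a blob before including it in a restore range.
blob = BlobClient.from_connection_string(
    "<storage-connection-string>",
    container_name="<container>",
    blob_name="<blob>",
)
if blob.get_blob_properties().lease.state == "leased":
    # Break the lease immediately so the restore operation won't fail.
    BlobLeaseClient(client=blob).break_lease(lease_break_period=0)
```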
Operational backup of blobs uses blob point-in-time restore, blob versioning, so
## Next steps
-[Overview of operational backup for Azure Blobs](blob-backup-overview.md)
+[Overview of operational backup for Azure Blobs](blob-backup-overview.md)
backup Tutorial Restore Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/tutorial-restore-disk.md
If the backed-up VM has managed disks and if the intent is to restore managed di
```
> [!WARNING]
- > If **target-resource-group** isn't provided, then the managed disks will be restored as unmanaged disks to the given storage account. This will have significant consequences to the restore time since the time taken to restore the disks entirely depends on the given storage account. You'll get the benefit of instant restore only when the target-resource-group parameter is given. If the intention is to restore managed disks as unmanaged then don't provide the **target-resource-group** parameter and instead provide the parameter **restore-as-unmanaged-disk** parameter as shown below. This parameter is available from az 3.4.0 onwards.
+ > If **target-resource-group** isn't provided, then the managed disks will be restored as unmanaged disks to the given storage account. This will have significant consequences to the restore time since the time taken to restore the disks entirely depends on the given storage account. You'll get the benefit of instant restore only when the target-resource-group parameter is given. If the intention is to restore managed disks as unmanaged then don't provide the **target-resource-group** parameter and instead provide the **restore-as-unmanaged-disk** parameter as shown below. This parameter is available from Azure CLI 3.4.0 onwards.
```azurecli-interactive az backup restore restore-disks \
blockchain Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/blockchain/workbench/architecture.md
- Title: Azure Blockchain Workbench architecture
-description: Overview of Azure Blockchain Workbench Preview architecture and its components.
Previously updated : 02/18/2022--
-#Customer intent: As a developer, I want to understand the architecture and components of Azure Blockchain Workbench.
-
-# Azure Blockchain Workbench architecture
--
-Azure Blockchain Workbench Preview simplifies blockchain application development by providing a solution using several Azure components. Blockchain Workbench can be deployed using a solution template in the Azure Marketplace. The template allows you to pick modules and components to deploy including blockchain stack, type of client application, and support for IoT integration. Once deployed, Blockchain Workbench provides access to a web app, iOS app, and Android app.
-
-![Blockchain Workbench architecture](./media/architecture/architecture.png)
-
-## Identity and authentication
-
-Using Blockchain Workbench, a consortium can federate their enterprise identities using Azure Active Directory (Azure AD). Workbench generates new user accounts for on-chain identities with the enterprise identities stored in Azure AD. The identity mapping facilitates authenticated login to client APIs and applications and uses the authentication policies of organizations. Workbench also provides the ability to associate enterprise identities to specific roles within a given smart contract. In addition, Workbench also provides a mechanism to identify the actions those roles can take and at what time.
-
-After Blockchain Workbench is deployed, users interact with Blockchain Workbench either via the client applications, REST-based client API, or Messaging API. In all cases, interactions must be authenticated, either via Azure Active Directory (Azure AD) or device-specific credentials.
-
-Users federate their identities to a consortium Azure AD by sending an email invitation to participants at their email address. When logging in, these users are authenticated using the name, password, and policies. For example, two-factor authentication of their organization.
-
-Azure AD is used to manage all users who have access to Blockchain Workbench. Each device connecting to a smart contract is also associated with Azure AD.
-
-Azure AD is also used to assign users to a special administrator group. Users associated with the administrator group are granted access to
-rights and actions within Blockchain Workbench including deploying contracts and giving permissions to a user to access a contract. Users outside this group do not have access to administrator actions.
-
-## Client applications
-
-Workbench provides automatically generated client applications for web and mobile (iOS, Android), which can be used to validate, test, and view blockchain applications. The application interface is dynamically generated based on smart contract metadata and can accommodate any use case. The client applications deliver a user-facing front end to the complete blockchain applications generated by Blockchain Workbench. Client applications authenticate users via Azure Active Directory (Azure AD) and then present a user experience tailored to the business context of the smart contract. The user experience enables the creation of new smart contract instances by authorized individuals and then presents the ability to execute certain types of transactions at appropriate points in the business process the smart contract represents.
-
-In the web application, authorized users can access the Administrator Console. The console is available to users in the Administrator group in Azure AD and provides access to the following functionality:
-
-* Deploy Microsoft provided smart contracts for popular scenarios. For example, an asset transfer scenario.
-* Upload and deploy their own smart contracts.
-* Assign a user access to the smart contract in the context of a specific role.
-
-For more information, see the [Azure Blockchain Workbench sample client applications on GitHub](https://github.com/Azure-Samples/blockchain-devkit/tree/master/connect/mobile).
-
-## Gateway service API
-
-Blockchain Workbench includes a REST-based gateway service API. When writing to a blockchain, the API generates and delivers messages to an event broker. When data is requested by the API, queries are sent to the off-chain database. The database contains a replica of on-chain data and metadata that provides context and configuration information for supported smart contracts. Queries return the required data from the off-chain replica in a format informed by the metadata for the contract.
-
-Developers can access the gateway service API to build or integrate blockchain solutions without relying on Blockchain Workbench client apps.
-
-> [!NOTE]
-> To enable authenticated access to the API, two client applications are registered in Azure Active Directory. Azure Active Directory requires distinct application registrations each application type (native and web).
-
-## Message broker for incoming messages
-
-Developers who want to send messages directly to Blockchain Workbench can send messages directly to Service Bus. For example, messages API could be used for system-to-system integration or IoT devices.
-
-## Message broker for downstream consumers
-
-During the lifecycle of the application, events occur. Events can be triggered by the Gateway API or on the ledger. Event notifications can initiate downstream code based on the event.
-
-Blockchain Workbench automatically deploys two types of event consumers. One consumer is triggered by blockchain events to populate the off-chain SQL store. The other consumer is to capture metadata for events generated by the API related to the upload and storage of documents.
-
-## Message consumers
-
- Message consumers take messages from Service Bus. The underlying eventing model for message consumers allows for extensions of additional services and systems. For example, you could add support to populate CosmosDB or evaluate messages using Azure Streaming Analytics. The following sections describe the message consumers included in Blockchain Workbench.
-
-### Distributed ledger consumer
-
-Distributed ledger technology (DLT) messages contain the metadata for transactions to be written to the blockchain. The consumer retrieves the messages and pushes the data to a transaction builder, signer, and router.
-
-### Database consumer
-
-The database consumer takes messages from Service Bus and pushes the data to an attached database, such as a database in Azure SQL Database.
-
-### Storage consumer
-
-The storage consumer takes messages from Service Bus and pushes data to an attached storage. For example, storing hashed documents in Azure Storage.
-
-## Transaction builder and signer
-
-If a message on the inbound message broker needs to be written to the blockchain, it will be processed by the DLT consumer. The DLT consumer is a service, which retrieves the message containing metadata for a desired transaction to execute and then sends the information to the *transaction builder and signer*. The *transaction builder and signer* assembles a blockchain transaction based on the data and the desired blockchain destination. Once assembled, the transaction is signed. Private keys are stored in Azure Key Vault.
-
- Blockchain Workbench retrieves the appropriate private key from Key Vault and signs the transaction outside of Key Vault. Once signed, the transaction is sent to transaction routers and ledgers.
-
-## Transaction routers and ledgers
-
-Transaction routers and ledgers take signed transactions and route them to the appropriate blockchain. Currently, Blockchain Workbench supports Ethereum as its target blockchain.
-
-## DLT watcher
-
-A distributed ledger technology (DLT) watcher monitors events occurring on block chains attached to Blockchain Workbench.
-Events reflect information relevant to individuals and systems. For example, the creation of new contract instances, execution of transactions, and changes of state. The events are captured and sent to the outbound message broker, so they can be consumed by downstream consumers.
-
-For example, the SQL consumer monitors events, consumes them, and populates the database with the included values. The copy enables recreation of a replica of on-chain data in an off-chain store.
-
-## Azure SQL Database
-
-The database attached to Blockchain Workbench stores contract definitions, configuration metadata, and a SQL-accessible replica of data stored in the blockchain. This data can easily be queried, visualized, or analyzed by directly accessing the database. Developers and other users can use
-the database for reporting, analytics, or other data-centric integrations. For example, users can visualize transaction data using Power BI.
-
-This off-chain storage provides the ability for enterprise organizations to query data in SQL rather than in a blockchain ledger. Also, by standardizing on a standard schema that's agnostic of blockchain technology stacks, the off-chain storage enables the reuse of reports and other artifacts across projects, scenarios, and organizations.
-
-## Azure Storage
-
-Azure Storage is used to store contracts and metadata associated with contracts.
-
-From purchase orders and bills of lading, to images used in the news and medical imagery, to video originating from a continuum including police body cameras and major motion pictures, documents play a role in many blockchain-centric scenarios. Documents are not appropriate to place directly on the blockchain.
-
-Blockchain Workbench supports the ability to add documents or other media content with blockchain business logic. A hash of the document or media content is stored in the blockchain and the actual document or media content is stored in Azure Storage. The associated transaction information is delivered to the inbound message broker, packaged up, signed, and routed to the blockchain. This process triggers events, which are shared via
-the outbound message broker. The SQL DB consumes this information and sends it to the DB for later querying. Downstream systems could also consume these events to act as appropriate.
-
-## Monitoring
-
-Workbench provides application logging using Application Insights and Azure Monitor. Application Insights is used to store all logged information from Blockchain Workbench and includes errors, warnings, and successful operations. Application Insights can be used by developers to debug issues with Blockchain Workbench.
-
-Azure Monitor provides information on the health of the blockchain network.
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Deploy Azure Blockchain Workbench](./deploy.md)
blockchain Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/blockchain/workbench/configuration.md
- Title: Azure Blockchain Workbench configuration metadata reference
-description: Azure Blockchain Workbench Preview application configuration metadata overview.
Previously updated : 02/18/2022--
-#Customer intent: As a developer, I want to understand application configuration metadata details used by Azure Blockchain Workbench.
-
-# Azure Blockchain Workbench configuration reference
--
-Azure Blockchain Workbench applications are multi-party workflows defined by configuration metadata and smart contract code. Configuration metadata defines the high-level workflows and interaction model of the blockchain application. Smart contracts define the business logic of the blockchain application. Workbench uses configuration and smart contract code to generate blockchain application user experiences.
-
-Configuration metadata specifies the following information for each blockchain application:
-
-* Name and description of the blockchain application
-* Unique roles for users who can act or participate within the blockchain application
-* One or more workflows. Each workflow acts as a state machine to control the flow of the business logic. Workflows can be independent or interact with one another.
-
-Each defined workflow specifies the following:
-
-* Name and description of the workflow
-* States of the workflow. Each state is a stage in the business logic's control flow.
-* Actions to transition to the next state
-* User roles permitted to initiate each action
-* Smart contracts that represent business logic in code files
-
-## Application
-
-A blockchain application contains configuration metadata, workflows, and user roles who can act or participate within the application.
-
-| Field | Description | Required |
-|-|-|:--:|
-| ApplicationName | Unique application name. The corresponding smart contract must use the same **ApplicationName** for the applicable contract class. | Yes |
-| DisplayName | Friendly display name of the application. | Yes |
-| Description | Description of the application. | No |
-| ApplicationRoles | Collection of [ApplicationRoles](#application-roles). User roles who can act or participate within the application. | Yes |
-| Workflows | Collection of [Workflows](#workflows). Each workflow acts as a state machine to control the flow of the business logic. | Yes |
-
-For an example, see [configuration file example](#configuration-file-example).
-
-## Workflows
-
-An application's business logic may be modeled as a state machine where taking an action causes the flow of the business logic to move from one state to another. A workflow is a collection of such states and actions. Each workflow consists of one or more smart contracts, which represent the business logic in code files. An executable contract is an instance of a workflow.
-
-| Field | Description | Required | Max length |
-|-|-|:--:|--:|
-| Name | Unique workflow name. The corresponding smart contract must use the same **Name** for the applicable contract class. | Yes | 50 |
-| DisplayName | Friendly display name of the workflow. | Yes | 255 |
-| Description | Description of the workflow. | No | 255 |
-| Initiators | Collection of [ApplicationRoles](#application-roles). Roles that are assigned to users who are authorized to create contracts in the workflow. | Yes | |
-| StartState | Name of the initial state of the workflow. | Yes | |
-| Properties | Collection of [identifiers](#identifiers). Represents data that can be read off-chain or visualized in a user experience tool. | Yes | |
-| Constructor | Defines input parameters for creating an instance of the workflow. | Yes | |
-| Functions | A collection of [functions](#functions) that can be executed in the workflow. | Yes | |
-| States | A collection of workflow [states](#states). | Yes | |
-
-For an example, see [configuration file example](#configuration-file-example).
-
-## Type
-
-Supported data types.
-
-| Type | Description |
-|-|-|
-| address | Blockchain address type, such as *contracts* or *users*. |
-| array | Single level array of type integer, bool, money, or time. Arrays can be static or dynamic. Use **ElementType** to specify the datatype of the elements within the array. See [example configuration](#example-configuration-of-type-array). |
-| bool | Boolean data type. |
-| contract | Address of type contract. |
-| enum | Enumerated set of named values. When using the enum type, you also specify a list of EnumValues. Each value is limited to 255 characters. Valid value characters include upper and lower case letters (A-Z, a-z) and numbers (0-9). See [example configuration and use in Solidity](#example-configuration-of-type-enum). |
-| int | Integer data type. |
-| money | Money data type. |
-| state | Workflow state. |
-| string | String data type. 4000 character maximum. See [example configuration](#example-configuration-of-type-string). |
-| user | Address of type user. |
-| time | Time data type. |
-|`[ Application Role Name ]`| Any name specified in application role. Limits users to be of that role type. |
-
-### Example configuration of type array
-
-```json
-{
- "Name": "Quotes",
- "Description": "Market quotes",
- "DisplayName": "Quotes",
- "Type": {
- "Name": "array",
- "ElementType": {
- "Name": "int"
- }
- }
-}
-```
-
-#### Using a property of type array
-
-If you define a property as type array in configuration, you need to include an explicit get function to return the public property of the array type in Solidity. For example:
-
-```
-function GetQuotes() public constant returns (int[]) {
- return Quotes;
-}
-```
-
-### Example configuration of type string
-
-``` json
-{
- "Name": "description",
- "Description": "Descriptive text",
- "DisplayName": "Description",
- "Type": {
- "Name": "string"
- }
-}
-```
-
-### Example configuration of type enum
-
-``` json
-{
- "Name": "PropertyType",
- "DisplayName": "Property Type",
- "Description": "The type of the property",
- "Type": {
- "Name": "enum",
- "EnumValues": ["House", "Townhouse", "Condo", "Land"]
- }
-}
-```
-
-#### Using enumeration type in Solidity
-
-Once an enum is defined in configuration, you can use enumeration types in Solidity. For example, you can define an enum called PropertyTypeEnum.
-
-```
-enum PropertyTypeEnum {House, Townhouse, Condo, Land} PropertyTypeEnum public PropertyType;
-```
-
-The list of strings needs to match between the configuration and smart contract to be valid and consistent declarations in Blockchain Workbench.
-
-Assignment example:
-
-```
-PropertyType = PropertyTypeEnum.Townhouse;
-```
-
-Function parameter example:
-
-```
-function AssetTransfer(string description, uint256 price, PropertyTypeEnum propertyType) public
-{
- InstanceOwner = msg.sender;
- AskingPrice = price;
- Description = description;
- PropertyType = propertyType;
- State = StateType.Active;
- ContractCreated();
-}
-
-```
-
-## Constructor
-
-Defines input parameters for an instance of a workflow.
-
-| Field | Description | Required |
-|-|-|:--:|
-| Parameters | Collection of [identifiers](#identifiers) required to initiate a smart contract. | Yes |
-
-### Constructor example
-
-``` json
-{
- "Parameters": [
- {
- "Name": "description",
- "Description": "The description of this asset",
- "DisplayName": "Description",
- "Type": {
- "Name": "string"
- }
- },
- {
- "Name": "price",
- "Description": "The price of this asset",
- "DisplayName": "Price",
- "Type": {
- "Name": "money"
- }
- }
- ]
-}
-```
-
-## Functions
-
-Defines functions that can be executed on the workflow.
-
-| Field | Description | Required | Max length |
-|-|-|:--:|--:|
-| Name | The unique name of the function. The corresponding smart contract must use the same **Name** for the applicable function. | Yes | 50 |
-| DisplayName | Friendly display name of the function. | Yes | 255 |
-| Description | Description of the function | No | 255 |
-| Parameters | Collection of [identifiers](#identifiers) corresponding to the parameters of the function. | Yes | |
-
-### Functions example
-
-``` json
-"Functions": [
- {
- "Name": "Modify",
- "DisplayName": "Modify",
- "Description": "Modify the description/price attributes of this asset transfer instance",
- "Parameters": [
- {
- "Name": "description",
- "Description": "The new description of the asset",
- "DisplayName": "Description",
- "Type": {
- "Name": "string"
- }
- },
- {
- "Name": "price",
- "Description": "The new price of the asset",
- "DisplayName": "Price",
- "Type": {
- "Name": "money"
- }
- }
- ]
- },
- {
- "Name": "Terminate",
- "DisplayName": "Terminate",
- "Description": "Used to cancel this particular instance of asset transfer",
- "Parameters": []
- }
-]
-
-```
-
-## States
-
-A collection of unique states within a workflow. Each state captures a step in the business logic's control flow.
-
-| Field | Description | Required | Max length |
-|-|-|:--:|--:|
-| Name | Unique name of the state. The corresponding smart contract must use the same **Name** for the applicable state. | Yes | 50 |
-| DisplayName | Friendly display name of the state. | Yes | 255 |
-| Description | Description of the state. | No | 255 |
-| PercentComplete | An integer value displayed in the Blockchain Workbench user interface to show the progress within the business logic control flow. | Yes | |
-| Style | Visual hint indicating whether the state represents a success or failure state. There are two valid values: `Success` or `Failure`. | Yes | |
-| Transitions | Collection of available [transitions](#transitions) from the current state to the next set of states. | No | |
-
-### States example
-
-``` json
-"States": [
- {
- "Name": "Active",
- "DisplayName": "Active",
- "Description": "The initial state of the asset transfer workflow",
- "PercentComplete": 20,
- "Style": "Success",
- "Transitions": [
- {
- "AllowedRoles": [],
- "AllowedInstanceRoles": [ "InstanceOwner" ],
- "Description": "Cancels this instance of asset transfer",
- "Function": "Terminate",
- "NextStates": [ "Terminated" ],
- "DisplayName": "Terminate Offer"
- },
- {
- "AllowedRoles": [ "Buyer" ],
- "AllowedInstanceRoles": [],
- "Description": "Make an offer for this asset",
- "Function": "MakeOffer",
- "NextStates": [ "OfferPlaced" ],
- "DisplayName": "Make Offer"
- },
- {
- "AllowedRoles": [],
- "AllowedInstanceRoles": [ "InstanceOwner" ],
- "Description": "Modify attributes of this asset transfer instance",
- "Function": "Modify",
- "NextStates": [ "Active" ],
- "DisplayName": "Modify"
- }
- ]
- },
- {
- "Name": "Accepted",
- "DisplayName": "Accepted",
- "Description": "Asset transfer process is complete",
- "PercentComplete": 100,
- "Style": "Success",
- "Transitions": []
- },
- {
- "Name": "Terminated",
- "DisplayName": "Terminated",
- "Description": "Asset transfer has been canceled",
- "PercentComplete": 100,
- "Style": "Failure",
- "Transitions": []
- }
- ]
-```
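-
-In the smart contract, states are typically modeled as an enumeration whose member names match the configured state names, held in a property of type `state`. A minimal sketch based on the asset transfer sample:
-
-``` solidity
-// Sketch only: enum member names must match the "Name" values in the
-// States collection of the configuration file.
-enum StateType { Active, OfferPlaced, PendingInspection, Inspected, Appraised,
-                 NotionalAcceptance, BuyerAccepted, SellerAccepted, Accepted, Terminated }
-StateType public State;
-```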
-
-## Transitions
-
-Transitions define the actions available from a state. At each state, one or more user roles may perform an action, and an action may transition the contract from one state to another state in the workflow.
-
-| Field | Description | Required |
-|-|-|:--:|
-| AllowedRoles | List of application roles allowed to initiate the transition. All users in the specified role can perform the action. | No |
-| AllowedInstanceRoles | List of user roles participating or specified in the smart contract that are allowed to initiate the transition. Instance roles are defined in **Properties** within workflows and represent a user participating in an instance of a smart contract. AllowedInstanceRoles lets you restrict an action to a user role in a contract instance. For example, you may want only the user who created the contract (InstanceOwner) to be able to terminate it, rather than all users in the role type (Owner), which would be the case if you specified the role in AllowedRoles. | No |
-| DisplayName | Friendly display name of the transition. | Yes |
-| Description | Description of the transition. | No |
-| Function | The name of the function to initiate the transition. | Yes |
-| NextStates | A collection of potential next states after a successful transition. | Yes |
-
-### Transitions example
-
-``` json
-"Transitions": [
- {
- "AllowedRoles": [],
- "AllowedInstanceRoles": [ "InstanceOwner" ],
- "Description": "Cancels this instance of asset transfer",
- "Function": "Terminate",
- "NextStates": [ "Terminated" ],
- "DisplayName": "Terminate Offer"
- },
- {
- "AllowedRoles": [ "Buyer" ],
- "AllowedInstanceRoles": [],
- "Description": "Make an offer for this asset",
- "Function": "MakeOffer",
- "NextStates": [ "OfferPlaced" ],
- "DisplayName": "Make Offer"
- },
- {
- "AllowedRoles": [],
- "AllowedInstanceRoles": [ "InstanceOwner" ],
- "Description": "Modify attributes of this asset transfer instance",
- "Function": "Modify",
- "NextStates": [ "Active" ],
- "DisplayName": "Modify"
- }
-]
-
-```
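-
-The function named in each transition must exist in the smart contract, which typically enforces the same role restriction in code. A minimal sketch of the `Terminate` function referenced above, assuming the asset transfer sample's `InstanceOwner` and `StateType` declarations:
-
-``` solidity
-// Sketch only: mirrors the AllowedInstanceRoles restriction by reverting
-// when the caller is not the instance owner.
-function Terminate() public
-{
-    if (InstanceOwner != msg.sender)
-    {
-        revert();
-    }
-    State = StateType.Terminated;
-}
-```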
-
-## Application roles
-
-Application roles define a set of roles that can be assigned to users who want to act or participate within the application. Application roles can be used to restrict actions and participation within the blockchain application and corresponding workflows.
-
-| Field | Description | Required | Max length |
-|-|-|:--:|--:|
-| Name | The unique name of the application role. The corresponding smart contract must use the same **Name** for the applicable role. Base type names are reserved; you cannot give an application role the same name as a base [type](#type). | Yes | 50 |
-| Description | Description of the application role. | No | 255 |
-
-### Application roles example
-
-``` json
-"ApplicationRoles": [
- {
- "Name": "Appraiser",
- "Description": "User that signs off on the asset price"
- },
- {
- "Name": "Buyer",
- "Description": "User that places an offer on an asset"
- }
-]
-```
-
-## Identifiers
-
-Identifiers represent a collection of information used to describe workflow properties, constructor parameters, and function parameters.
-
-| Field | Description | Required | Max length |
-|-|-|:--:|--:|
-| Name | The unique name of the property or parameter. The corresponding smart contract must use the same **Name** for the applicable property or parameter. | Yes | 50 |
-| DisplayName | Friendly display name for the property or parameter. | Yes | 255 |
-| Description | Description of the property or parameter. | No | 255 |
-| Type | Property [data type](#type). | Yes | |
-
-### Identifiers example
-
-``` json
-"Properties": [
- {
- "Name": "State",
- "DisplayName": "State",
- "Description": "Holds the state of the contract",
- "Type": {
- "Name": "state"
- }
- },
- {
- "Name": "Description",
- "DisplayName": "Description",
- "Description": "Describes the asset being sold",
- "Type": {
- "Name": "string"
- }
- }
-]
-```
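-
-In the smart contract, each property corresponds to a state variable with a matching name and type. A minimal sketch for the properties above (assuming a `StateType` enum as in the samples, since the `state` type maps to the contract's state enumeration):
-
-``` solidity
-// Sketch only: state variable names and types must match the
-// configured properties.
-StateType public State;     // configured type "state"
-string public Description;  // configured type "string"
-```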
-
-## Configuration file example
-
-Asset transfer is a smart contract scenario for buying and selling high-value assets, which requires an inspector and an appraiser. Sellers can list their assets by instantiating an asset transfer smart contract. Buyers can make offers by taking an action on the smart contract, and other parties can take actions to inspect or appraise the asset. Once the asset is marked both inspected and appraised, the buyer and seller confirm the sale again before the contract is set to complete. At each point in the process, all participants have visibility into the state of the contract as it is updated.
-
-For more information, including the code files, see the
-[asset transfer sample for Azure Blockchain Workbench](https://github.com/Azure-Samples/blockchain/tree/master/blockchain-workbench/application-and-smart-contract-samples/asset-transfer).
-
-The following configuration file is for the asset transfer sample:
-
-``` json
-{
- "ApplicationName": "AssetTransfer",
- "DisplayName": "Asset Transfer",
- "Description": "Allows transfer of assets between a buyer and a seller, with appraisal/inspection functionality",
- "ApplicationRoles": [
- {
- "Name": "Appraiser",
- "Description": "User that signs off on the asset price"
- },
- {
- "Name": "Buyer",
- "Description": "User that places an offer on an asset"
- },
- {
- "Name": "Inspector",
- "Description": "User that inspects the asset and signs off on inspection"
- },
- {
- "Name": "Owner",
- "Description": "User that signs off on the asset price"
- }
- ],
- "Workflows": [
- {
- "Name": "AssetTransfer",
- "DisplayName": "Asset Transfer",
- "Description": "Handles the business logic for the asset transfer scenario",
- "Initiators": [ "Owner" ],
- "StartState": "Active",
- "Properties": [
- {
- "Name": "State",
- "DisplayName": "State",
- "Description": "Holds the state of the contract",
- "Type": {
- "Name": "state"
- }
- },
- {
- "Name": "Description",
- "DisplayName": "Description",
- "Description": "Describes the asset being sold",
- "Type": {
- "Name": "string"
- }
- },
- {
- "Name": "AskingPrice",
- "DisplayName": "Asking Price",
- "Description": "The asking price for the asset",
- "Type": {
- "Name": "money"
- }
- },
- {
- "Name": "OfferPrice",
- "DisplayName": "Offer Price",
- "Description": "The price being offered for the asset",
- "Type": {
- "Name": "money"
- }
- },
- {
- "Name": "InstanceAppraiser",
- "DisplayName": "Instance Appraiser",
- "Description": "The user that appraises the asset",
- "Type": {
- "Name": "Appraiser"
- }
- },
- {
- "Name": "InstanceBuyer",
- "DisplayName": "Instance Buyer",
- "Description": "The user that places an offer for this asset",
- "Type": {
- "Name": "Buyer"
- }
- },
- {
- "Name": "InstanceInspector",
- "DisplayName": "Instance Inspector",
- "Description": "The user that inspects this asset",
- "Type": {
- "Name": "Inspector"
- }
- },
- {
- "Name": "InstanceOwner",
- "DisplayName": "Instance Owner",
- "Description": "The seller of this particular asset",
- "Type": {
- "Name": "Owner"
- }
- }
- ],
- "Constructor": {
- "Parameters": [
- {
- "Name": "description",
- "Description": "The description of this asset",
- "DisplayName": "Description",
- "Type": {
- "Name": "string"
- }
- },
- {
- "Name": "price",
- "Description": "The price of this asset",
- "DisplayName": "Price",
- "Type": {
- "Name": "money"
- }
- }
- ]
- },
- "Functions": [
- {
- "Name": "Modify",
- "DisplayName": "Modify",
- "Description": "Modify the description/price attributes of this asset transfer instance",
- "Parameters": [
- {
- "Name": "description",
- "Description": "The new description of the asset",
- "DisplayName": "Description",
- "Type": {
- "Name": "string"
- }
- },
- {
- "Name": "price",
- "Description": "The new price of the asset",
- "DisplayName": "Price",
- "Type": {
- "Name": "money"
- }
- }
- ]
- },
- {
- "Name": "Terminate",
- "DisplayName": "Terminate",
- "Description": "Used to cancel this particular instance of asset transfer",
- "Parameters": []
- },
- {
- "Name": "MakeOffer",
- "DisplayName": "Make Offer",
- "Description": "Place an offer for this asset",
- "Parameters": [
- {
- "Name": "inspector",
- "Description": "Specify a user to inspect this asset",
- "DisplayName": "Inspector",
- "Type": {
- "Name": "Inspector"
- }
- },
- {
- "Name": "appraiser",
- "Description": "Specify a user to appraise this asset",
- "DisplayName": "Appraiser",
- "Type": {
- "Name": "Appraiser"
- }
- },
- {
- "Name": "offerPrice",
- "Description": "Specify your offer price for this asset",
- "DisplayName": "Offer Price",
- "Type": {
- "Name": "money"
- }
- }
- ]
- },
- {
- "Name": "Reject",
- "DisplayName": "Reject",
- "Description": "Reject the user's offer",
- "Parameters": []
- },
- {
- "Name": "AcceptOffer",
- "DisplayName": "Accept Offer",
- "Description": "Accept the user's offer",
- "Parameters": []
- },
- {
- "Name": "RescindOffer",
- "DisplayName": "Rescind Offer",
- "Description": "Rescind your placed offer",
- "Parameters": []
- },
- {
- "Name": "ModifyOffer",
- "DisplayName": "Modify Offer",
- "Description": "Modify the price of your placed offer",
- "Parameters": [
- {
- "Name": "offerPrice",
- "DisplayName": "Price",
- "Type": {
- "Name": "money"
- }
- }
- ]
- },
- {
- "Name": "Accept",
- "DisplayName": "Accept",
- "Description": "Accept the inspection/appraisal results",
- "Parameters": []
- },
- {
- "Name": "MarkInspected",
- "DisplayName": "Mark Inspected",
- "Description": "Mark the asset as inspected",
- "Parameters": []
- },
- {
- "Name": "MarkAppraised",
- "DisplayName": "Mark Appraised",
- "Description": "Mark the asset as appraised",
- "Parameters": []
- }
- ],
- "States": [
- {
- "Name": "Active",
- "DisplayName": "Active",
- "Description": "The initial state of the asset transfer workflow",
- "PercentComplete": 20,
- "Style": "Success",
- "Transitions": [
- {
- "AllowedRoles": [],
- "AllowedInstanceRoles": [ "InstanceOwner" ],
- "Description": "Cancels this instance of asset transfer",
- "Function": "Terminate",
- "NextStates": [ "Terminated" ],
- "DisplayName": "Terminate Offer"
- },
- {
- "AllowedRoles": [ "Buyer" ],
- "AllowedInstanceRoles": [],
- "Description": "Make an offer for this asset",
- "Function": "MakeOffer",
- "NextStates": [ "OfferPlaced" ],
- "DisplayName": "Make Offer"
- },
- {
- "AllowedRoles": [],
- "AllowedInstanceRoles": [ "InstanceOwner" ],
- "Description": "Modify attributes of this asset transfer instance",
- "Function": "Modify",
- "NextStates": [ "Active" ],
- "DisplayName": "Modify"
- }
- ]
- },
- {
- "Name": "OfferPlaced",
- "DisplayName": "Offer Placed",
- "Description": "Offer has been placed for the asset",
- "PercentComplete": 30,
- "Style": "Success",
- "Transitions": [
- {
- "AllowedRoles": [],
- "AllowedInstanceRoles": [ "InstanceOwner" ],
- "Description": "Accept the proposed offer for the asset",
- "Function": "AcceptOffer",
- "NextStates": [ "PendingInspection" ],
- "DisplayName": "Accept Offer"
- },
- {
- "AllowedRoles": [],
- "AllowedInstanceRoles": [ "InstanceOwner" ],
- "Description": "Reject the proposed offer for the asset",
- "Function": "Reject",
- "NextStates": [ "Active" ],
- "DisplayName": "Reject"
- },
- {
- "AllowedRoles": [],
- "AllowedInstanceRoles": [ "InstanceOwner" ],
- "Description": "Cancel this instance of asset transfer",
- "Function": "Terminate",
- "NextStates": [ "Terminated" ],
- "DisplayName": "Terminate"
- },
- {
- "AllowedRoles": [],
- "AllowedInstanceRoles": [ "InstanceBuyer" ],
- "Description": "Rescind the offer you previously placed for this asset",
- "Function": "RescindOffer",
- "NextStates": [ "Active" ],
- "DisplayName": "Rescind Offer"
- },
- {
- "AllowedRoles": [],
- "AllowedInstanceRoles": [ "InstanceBuyer" ],
- "Description": "Modify the price that you specified for your offer",
- "Function": "ModifyOffer",
- "NextStates": [ "OfferPlaced" ],
- "DisplayName": "Modify Offer"
- }
- ]
- },
- {
- "Name": "PendingInspection",
- "DisplayName": "Pending Inspection",
- "Description": "Asset is pending inspection",
- "PercentComplete": 40,
- "Style": "Success",
- "Transitions": [
- {
- "AllowedRoles": [],
- "AllowedInstanceRoles": [ "InstanceOwner" ],
- "Description": "Reject the offer",
- "Function": "Reject",
- "NextStates": [ "Active" ],
- "DisplayName": "Reject"
- },
- {
- "AllowedRoles": [],
- "AllowedInstanceRoles": [ "InstanceOwner" ],
- "Description": "Cancel the offer",
- "Function": "Terminate",
- "NextStates": [ "Terminated" ],
- "DisplayName": "Terminate"
- },
- {
- "AllowedRoles": [],
- "AllowedInstanceRoles": [ "InstanceBuyer" ],
- "Description": "Rescind the offer you placed for this asset",
- "Function": "RescindOffer",
- "NextStates": [ "Active" ],
- "DisplayName": "Rescind Offer"
- },
- {
- "AllowedRoles": [],
- "AllowedInstanceRoles": [ "InstanceInspector" ],
- "Description": "Mark this asset as inspected",
- "Function": "MarkInspected",
- "NextStates": [ "Inspected" ],
- "DisplayName": "Mark Inspected"
- },
- {
- "AllowedRoles": [],
- "AllowedInstanceRoles": [ "InstanceAppraiser" ],
- "Description": "Mark this asset as appraised",
- "Function": "MarkAppraised",
- "NextStates": [ "Appraised" ],
- "DisplayName": "Mark Appraised"
- }
- ]
- },
- {
- "Name": "Inspected",
- "DisplayName": "Inspected",
- "PercentComplete": 45,
- "Style": "Success",
- "Transitions": [
- {
- "AllowedRoles": [],
- "AllowedInstanceRoles": [ "InstanceOwner" ],
- "Description": "Reject the offer",
- "Function": "Reject",
- "NextStates": [ "Active" ],
- "DisplayName": "Reject"
- },
- {
- "AllowedRoles": [],
- "AllowedInstanceRoles": [ "InstanceOwner" ],
- "Description": "Cancel the offer",
- "Function": "Terminate",
- "NextStates": [ "Terminated" ],
- "DisplayName": "Terminate"
- },
- {
- "AllowedRoles": [],
- "AllowedInstanceRoles": [ "InstanceBuyer" ],
- "Description": "Rescind the offer you placed for this asset",
- "Function": "RescindOffer",
- "NextStates": [ "Active" ],
- "DisplayName": "Rescind Offer"
- },
- {
- "AllowedRoles": [],
- "AllowedInstanceRoles": [ "InstanceAppraiser" ],
- "Description": "Mark this asset as appraised",
- "Function": "MarkAppraised",
- "NextStates": [ "NotionalAcceptance" ],
- "DisplayName": "Mark Appraised"
- }
- ]
- },
- {
- "Name": "Appraised",
- "DisplayName": "Appraised",
- "Description": "Asset has been appraised, now awaiting inspection",
- "PercentComplete": 45,
- "Style": "Success",
- "Transitions": [
- {
- "AllowedRoles": [],
- "AllowedInstanceRoles": [ "InstanceOwner" ],
- "Description": "Reject the offer",
- "Function": "Reject",
- "NextStates": [ "Active" ],
- "DisplayName": "Reject"
- },
- {
- "AllowedRoles": [],
- "AllowedInstanceRoles": [ "InstanceOwner" ],
- "Description": "Cancel the offer",
- "Function": "Terminate",
- "NextStates": [ "Terminated" ],
- "DisplayName": "Terminate"
- },
- {
- "AllowedRoles": [],
- "AllowedInstanceRoles": [ "InstanceBuyer" ],
- "Description": "Rescind the offer you placed for this asset",
- "Function": "RescindOffer",
- "NextStates": [ "Active" ],
- "DisplayName": "Rescind Offer"
- },
- {
- "AllowedRoles": [],
- "AllowedInstanceRoles": [ "InstanceInspector" ],
- "Description": "Mark the asset as inspected",
- "Function": "MarkInspected",
- "NextStates": [ "NotionalAcceptance" ],
- "DisplayName": "Mark Inspected"
- }
- ]
- },
- {
- "Name": "NotionalAcceptance",
- "DisplayName": "Notional Acceptance",
- "Description": "Asset has been inspected and appraised, awaiting final sign-off from buyer and seller",
- "PercentComplete": 50,
- "Style": "Success",
- "Transitions": [
- {
- "AllowedRoles": [],
- "AllowedInstanceRoles": [ "InstanceOwner" ],
- "Description": "Sign-off on inspection and appraisal",
- "Function": "Accept",
- "NextStates": [ "SellerAccepted" ],
- "DisplayName": "SellerAccept"
- },
- {
- "AllowedRoles": [],
- "AllowedInstanceRoles": [ "InstanceOwner" ],
- "Description": "Reject the proposed offer for the asset",
- "Function": "Reject",
- "NextStates": [ "Active" ],
- "DisplayName": "Reject"
- },
- {
- "AllowedRoles": [],
- "AllowedInstanceRoles": [ "InstanceOwner" ],
- "Description": "Cancel this instance of asset transfer",
- "Function": "Terminate",
- "NextStates": [ "Terminated" ],
- "DisplayName": "Terminate"
- },
- {
- "AllowedRoles": [],
- "AllowedInstanceRoles": [ "InstanceBuyer" ],
- "Description": "Sign-off on inspection and appraisal",
- "Function": "Accept",
- "NextStates": [ "BuyerAccepted" ],
- "DisplayName": "BuyerAccept"
- },
- {
- "AllowedRoles": [],
- "AllowedInstanceRoles": [ "InstanceBuyer" ],
- "Description": "Rescind the offer you placed for this asset",
- "Function": "RescindOffer",
- "NextStates": [ "Active" ],
- "DisplayName": "Rescind Offer"
- }
- ]
- },
- {
- "Name": "BuyerAccepted",
- "DisplayName": "Buyer Accepted",
- "Description": "Buyer has signed-off on inspection and appraisal",
- "PercentComplete": 75,
- "Style": "Success",
- "Transitions": [
- {
- "AllowedRoles": [],
- "AllowedInstanceRoles": [ "InstanceOwner" ],
- "Description": "Sign-off on inspection and appraisal",
- "Function": "Accept",
- "NextStates": [ "SellerAccepted" ],
- "DisplayName": "Accept"
- },
- {
- "AllowedRoles": [],
- "AllowedInstanceRoles": [ "InstanceOwner" ],
- "Description": "Reject the proposed offer for the asset",
- "Function": "Reject",
- "NextStates": [ "Active" ],
- "DisplayName": "Reject"
- },
- {
- "AllowedRoles": [],
- "AllowedInstanceRoles": [ "InstanceOwner" ],
- "Description": "Cancel this instance of asset transfer",
- "Function": "Terminate",
- "NextStates": [ "Terminated" ],
- "DisplayName": "Terminate"
- }
- ]
- },
- {
- "Name": "SellerAccepted",
- "DisplayName": "Seller Accepted",
- "Description": "Seller has signed-off on inspection and appraisal",
- "PercentComplete": 75,
- "Style": "Success",
- "Transitions": [
- {
- "AllowedRoles": [],
- "AllowedInstanceRoles": [ "InstanceBuyer" ],
- "Description": "Sign-off on inspection and appraisal",
- "Function": "Accept",
- "NextStates": [ "Accepted" ],
- "DisplayName": "Accept"
- },
- {
- "AllowedRoles": [],
- "AllowedInstanceRoles": [ "InstanceBuyer" ],
- "Description": "Rescind the offer you placed for this asset",
- "Function": "RescindOffer",
- "NextStates": [ "Active" ],
- "DisplayName": "Rescind Offer"
- }
- ]
- },
- {
- "Name": "Accepted",
- "DisplayName": "Accepted",
- "Description": "Asset transfer process is complete",
- "PercentComplete": 100,
- "Style": "Success",
- "Transitions": []
- },
- {
- "Name": "Terminated",
- "DisplayName": "Terminated",
- "Description": "Asset transfer has been canceled",
- "PercentComplete": 100,
- "Style": "Failure",
- "Transitions": []
- }
- ]
- }
- ]
-}
-```
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Azure Blockchain Workbench REST API reference](/rest/api/azure-blockchain-workbench)
blockchain Create App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/blockchain/workbench/create-app.md
- Title: Create a blockchain application - Azure Blockchain Workbench
-description: Tutorial on how to create a blockchain application for Azure Blockchain Workbench Preview.
Previously updated : 02/18/2022--
-#Customer intent: As a developer, I want to use Azure Blockchain Workbench to create a blockchain app.
-
-# Tutorial: Create a blockchain application for Azure Blockchain Workbench
--
-You can use Azure Blockchain Workbench to create blockchain applications that represent multi-party workflows defined by configuration and smart contract code.
-
-You'll learn how to:
-
-> [!div class="checklist"]
-> * Configure a blockchain application
-> * Create a smart contract code file
-> * Add a blockchain application to Blockchain Workbench
-> * Add members to the blockchain application
--
-## Prerequisites
-
-* A Blockchain Workbench deployment. For more information, see [Azure Blockchain Workbench deployment](deploy.md).
-* Azure Active Directory users in the tenant associated with Blockchain Workbench. For more information, see [add Azure AD users in Azure Blockchain Workbench](manage-users.md#add-azure-ad-users).
-* A Blockchain Workbench administrator account. For more information, see [manage Blockchain Workbench administrators in Azure Blockchain Workbench](manage-users.md#manage-blockchain-workbench-administrators).
-
-## Hello, Blockchain!
-
-Let's build a basic application in which a requestor sends a request and a responder sends a response to the request.
-For example, a request can be, "Hello, how are you?", and the response can be, "I'm great!". Both the request and the response are recorded on the underlying blockchain.
-
-Follow the steps to create the application files, or [download the sample from GitHub](https://github.com/Azure-Samples/blockchain/tree/master/blockchain-workbench/application-and-smart-contract-samples/hello-blockchain).
-
-## Configuration file
-
-Configuration metadata defines the high-level workflow stages and interaction model of the blockchain application. For more information about the contents of configuration files, see [Azure Blockchain Workflow configuration reference](configuration.md).
-
-1. In your favorite editor, create a file named `HelloBlockchain.json`.
-2. Add the following JSON to define the configuration of the blockchain application.
-
- ``` json
- {
- "ApplicationName": "HelloBlockchain",
- "DisplayName": "Hello, Blockchain!",
- "Description": "A simple application to send request and get response",
- "ApplicationRoles": [
- {
- "Name": "Requestor",
- "Description": "A person sending a request."
- },
- {
- "Name": "Responder",
- "Description": "A person responding to a request"
- }
- ],
- "Workflows": [
- {
- "Name": "HelloBlockchain",
- "DisplayName": "Request Response",
- "Description": "A simple workflow to send a request and receive a response.",
- "Initiators": [ "Requestor" ],
- "StartState": "Request",
- "Properties": [
- {
- "Name": "State",
- "DisplayName": "State",
- "Description": "Holds the state of the contract.",
- "Type": {
- "Name": "state"
- }
- },
- {
- "Name": "Requestor",
- "DisplayName": "Requestor",
- "Description": "A person sending a request.",
- "Type": {
- "Name": "Requestor"
- }
- },
- {
- "Name": "Responder",
- "DisplayName": "Responder",
- "Description": "A person sending a response.",
- "Type": {
- "Name": "Responder"
- }
- },
- {
- "Name": "RequestMessage",
- "DisplayName": "Request Message",
- "Description": "A request message.",
- "Type": {
- "Name": "string"
- }
- },
- {
- "Name": "ResponseMessage",
- "DisplayName": "Response Message",
- "Description": "A response message.",
- "Type": {
- "Name": "string"
- }
- }
- ],
- "Constructor": {
- "Parameters": [
- {
- "Name": "message",
- "Description": "...",
- "DisplayName": "Request Message",
- "Type": {
- "Name": "string"
- }
- }
- ]
- },
- "Functions": [
- {
- "Name": "SendRequest",
- "DisplayName": "Request",
- "Description": "...",
- "Parameters": [
- {
- "Name": "requestMessage",
- "Description": "...",
- "DisplayName": "Request Message",
- "Type": {
- "Name": "string"
- }
- }
- ]
- },
- {
- "Name": "SendResponse",
- "DisplayName": "Response",
- "Description": "...",
- "Parameters": [
- {
- "Name": "responseMessage",
- "Description": "...",
- "DisplayName": "Response Message",
- "Type": {
- "Name": "string"
- }
- }
- ]
- }
- ],
- "States": [
- {
- "Name": "Request",
- "DisplayName": "Request",
- "Description": "...",
- "PercentComplete": 50,
- "Value": 0,
- "Style": "Success",
- "Transitions": [
- {
- "AllowedRoles": ["Responder"],
- "AllowedInstanceRoles": [],
- "Description": "...",
- "Function": "SendResponse",
- "NextStates": [ "Respond" ],
- "DisplayName": "Send Response"
- }
- ]
- },
- {
- "Name": "Respond",
- "DisplayName": "Respond",
- "Description": "...",
- "PercentComplete": 90,
- "Value": 1,
- "Style": "Success",
- "Transitions": [
- {
- "AllowedRoles": [],
- "AllowedInstanceRoles": ["Requestor"],
- "Description": "...",
- "Function": "SendRequest",
- "NextStates": [ "Request" ],
- "DisplayName": "Send Request"
- }
- ]
- }
- ]
- }
- ]
- }
- ```
-
-3. Save the `HelloBlockchain.json` file.
-
-The configuration file has several sections. Details about each section are as follows:
-
-### Application metadata
-
-The beginning of the configuration file contains information about the application including application name and description.
-
-### Application roles
-
-The application roles section defines the user roles who can act or participate within the blockchain application. You define a set of distinct roles based on functionality. In the request-response scenario, there is a distinction between the functionality of a requestor as an entity that produces requests and a responder as an entity that produces responses.
-
-### Workflows
-
-Workflows define one or more stages and actions of the contract. In the request-response scenario, the first stage (state) of the workflow is that a requestor (role) takes an action (transition) to send a request (function). The next stage (state) is that a responder (role) takes an action (transition) to send a response (function). An application's workflow can involve properties, functions, and states required to describe the flow of a contract.
-
-## Smart contract code file
-
-Smart contracts represent the business logic of the blockchain application. Currently, Blockchain Workbench supports Ethereum for the blockchain ledger. Ethereum uses [Solidity](https://solidity.readthedocs.io) as its programming language for writing self-enforcing business logic for smart contracts.
-
-Smart contracts in Solidity are similar to classes in object-oriented languages. Each contract contains state and functions to implement stages and actions of the smart contract.
-
-In your favorite editor, create a file called `HelloBlockchain.sol`.
-
-### Version pragma
-
-As a best practice, indicate the version of Solidity you are targeting. Specifying the version helps avoid incompatibilities with future Solidity versions.
-
-Add the following version pragma at the top of `HelloBlockchain.sol` smart contract code file.
-
-``` solidity
-pragma solidity >=0.4.25 <0.6.0;
-```
-
-### Configuration and smart contract code relationship
-
-Blockchain Workbench uses the configuration file and smart contract code file to create a blockchain application. There is a relationship between what is defined in the configuration and the code in the smart contract. The contract details, functions, parameters, and types must match across both files for the application to be created. Blockchain Workbench verifies the files prior to application creation.
-
-### Contract
-
-Add the **contract** header to your `HelloBlockchain.sol` smart contract code file.
-
-``` solidity
-contract HelloBlockchain {
-```
-
-### State variables
-
-State variables store values of the state for each contract instance. The state variables in your contract must match the workflow properties defined in the configuration file.
-
-Add the state variables to your contract in your `HelloBlockchain.sol` smart contract code file.
-
-``` solidity
- //Set of States
- enum StateType { Request, Respond}
-
- //List of properties
- StateType public State;
- address public Requestor;
- address public Responder;
-
- string public RequestMessage;
- string public ResponseMessage;
-```
-
-### Constructor
-
-The constructor defines input parameters for a new smart contract instance of a workflow. Required parameters for the constructor are defined as constructor parameters in the configuration file. The number, order, and type of parameters must match in both files.
-
-In the constructor function, write any business logic you want to perform prior to creating the contract. For example, initialize the state variables with starting values.
-
-Add the constructor function to your contract in your `HelloBlockchain.sol` smart contract code file.
-
-``` solidity
- // constructor function
- constructor(string memory message) public
- {
- Requestor = msg.sender;
- RequestMessage = message;
- State = StateType.Request;
- }
-```
-
-### Functions
-
-Functions are the executable units of business logic within a contract. Required parameters for the function are defined as function parameters in the configuration file. The number, order, and type of parameters must match in both files. Functions are associated to transitions in a Blockchain Workbench workflow in the configuration file. A transition is an action performed to move to the next stage of an application's workflow as determined by the contract.
-
-Write any business logic you want to perform in the function. For example, you might modify a state variable's value.
-
-1. Add the following functions to your contract in your `HelloBlockchain.sol` smart contract code file.
-
- ``` solidity
- // call this function to send a request
- function SendRequest(string memory requestMessage) public
- {
- if (Requestor != msg.sender)
- {
- revert();
- }
-
- RequestMessage = requestMessage;
- State = StateType.Request;
- }
-
- // call this function to send a response
- function SendResponse(string memory responseMessage) public
- {
- Responder = msg.sender;
-
- ResponseMessage = responseMessage;
- State = StateType.Respond;
- }
- }
- ```
-
-2. Save your `HelloBlockchain.sol` smart contract code file.
-
-## Add blockchain application to Blockchain Workbench
-
-To add a blockchain application to Blockchain Workbench, you upload the configuration and smart contract files to define the application.
-
-1. In a web browser, navigate to the Blockchain Workbench web address. For example, `https://{workbench URL}.azurewebsites.net/`. The web application is created when you deploy Blockchain Workbench. For information on how to find your Blockchain Workbench web address, see [Blockchain Workbench Web URL](deploy.md#blockchain-workbench-web-url).
-2. Sign in as a [Blockchain Workbench administrator](manage-users.md#manage-blockchain-workbench-administrators).
-3. Select **Applications** > **New**. The **New application** pane is displayed.
-4. Select **Upload the contract configuration** > **Browse** to locate the **HelloBlockchain.json** configuration file you created. The configuration file is automatically validated. Select the **Show** link to display validation errors. Fix validation errors before you deploy the application.
-5. Select **Upload the contract code** > **Browse** to locate the **HelloBlockchain.sol** smart contract code file. The code file is automatically validated. Select the **Show** link to display validation errors. Fix validation errors before you deploy the application.
-6. Select **Deploy** to create the blockchain application based on the configuration and smart contract files.
-
-Deployment of the blockchain application takes a few minutes. When deployment is finished, the new application is displayed in **Applications**.
-
-> [!NOTE]
-> You can also create blockchain applications by using the [Azure Blockchain Workbench REST API](/rest/api/azure-blockchain-workbench).
-
-## Add blockchain application members
-
-Add application members to your application to initiate and take actions on contracts. To add application members, you need to be a [Blockchain Workbench administrator](manage-users.md#manage-blockchain-workbench-administrators).
-
-1. Select **Applications** > **Hello, Blockchain!**.
-2. The number of members associated to the application is displayed in the upper right corner of the page. For a new application, the number of members will be zero.
-3. Select the **members** link in the upper right corner of the page. A current list of members for the application is displayed.
-4. In the membership list, select **Add members**.
-5. Select or enter the member's name you want to add. Only Azure AD users that exist in the Blockchain Workbench tenant are listed. If the user is not found, you need to [add Azure AD users](manage-users.md#add-azure-ad-users).
-6. Select the **Role** for the member. For the first member, select **Requestor** as the role.
-7. Select **Add** to add the member with the associated role to the application.
-8. Add another member to the application with the **Responder** role.
-
-For more information about managing users in Blockchain Workbench, see [managing users in Azure Blockchain Workbench](manage-users.md).
-
-## Next steps
-
-In this how-to article, you've created a basic request and response application. To learn how to use the application, continue to the next how-to article.
-
-> [!div class="nextstepaction"]
-> [Using a blockchain application](use.md)
blockchain Data Excel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/blockchain/workbench/data-excel.md
- Title: Use Azure Blockchain Workbench data in Microsoft Excel
-description: Learn how to load and view Azure Blockchain Workbench Preview SQL DB data in Microsoft Excel.
Previously updated : 02/18/2022--
-#Customer intent: As a developer, I want to view Azure Blockchain Workbench data in Microsoft Excel for analysis.
--
-# View Azure Blockchain Workbench data with Microsoft Excel
--
-You can use Microsoft Excel to view data in Azure Blockchain Workbench's SQL DB. This article provides the steps you need to:
-
-* Connect to the Blockchain Workbench database from Microsoft Excel
-* Look at Blockchain Workbench database tables and views
-* Load Blockchain Workbench view data into Excel
-
-## Connect to the Blockchain Workbench database
-
-To connect to a Blockchain Workbench database:
-
-1. Open Microsoft Excel.
-2. On the **Data** tab, choose **Get Data**.
-3. Select **From Azure** and then select **From Azure SQL Database**.
-
- ![Connect to Azure SQL Database](./media/data-excel/connect-sql-db.png)
-
-4. In the **SQL Server database** dialog box:
-
- * For **Server**, enter the name of the Blockchain Workbench server.
- * For **Database (optional)**, enter the name of the database.
-
- ![Provide database server and database](./media/data-excel/provide-server-db.png)
-
-5. In the **SQL Server database** dialog navigation bar, select **Database**. Enter your **Username** and **Password**, and then select **Connect**.
-
- > [!NOTE]
- > If you're using the credentials created during the Azure Blockchain Workbench deployment process, the **User name** is `dbadmin`. The **Password** is the one you created when you deployed the Blockchain Workbench.
-
- ![Provide credentials to access database](./media/data-excel/provide-credentials.png)
-
-## Look at database tables and views
-
-The Excel Navigator dialog opens after you connect to the database. You can use the Navigator to look at the tables and views in the database. The views are designed for reporting and their names are prefixed with **vw**.
-
- ![Excel Navigator preview of a view](./media/data-excel/excel-navigator.png)
-
-## Load view data into an Excel workbook
-
-The next example shows how you can load data from a view into an Excel workbook.
-
-1. In the **Navigator** scroll bar, select the **vwContractAction** view. The **vwContractAction** preview shows all the actions related to a contract in the Blockchain Workbench database.
-2. Select **Load** to retrieve all the data in the view and put it in your Excel workbook.
-
- ![Data loaded from a view](./media/data-excel/view-data.png)
-
-Now that you have the data loaded, you can use Excel features to create your own reports using the metadata and transaction data from the Azure Blockchain Workbench database.
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Database views in Azure Blockchain Workbench](database-views.md)
blockchain Data Powerbi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/blockchain/workbench/data-powerbi.md
- Title: Use Azure Blockchain Workbench data in Microsoft Power BI
-description: Learn how to load and view Azure Blockchain Workbench SQL DB data in Microsoft Power BI.
Previously updated : 02/18/2022--
-#Customer intent: As a developer, I want to load and view Azure Blockchain Workbench data in Power BI for analysis.
-
-# Using Azure Blockchain Workbench data with Microsoft Power BI
--
-Microsoft Power BI provides the ability to easily generate powerful reports from SQL DB databases using Power BI Desktop and then publish them to [https://www.powerbi.com](https://www.powerbi.com).
-
-This article contains a step-by-step walkthrough of how to connect to Azure Blockchain Workbench's SQL Database from within Power BI Desktop, create a report, and deploy the report to powerbi.com.
-
-## Prerequisites
-
-* Download [Power BI Desktop](https://powerbi.microsoft.com/desktop/).
-
-## Connecting Power BI to data in Azure Blockchain Workbench
-
-1. Open Power BI Desktop.
-2. Select **Get Data**.
-
- ![Get data](./media/data-powerbi/get-data.png)
-3. Select **SQL Server** from the list of data source types.
-
-4. Provide the server and database name in the dialog. Specify if you want to import the data or perform a **DirectQuery**. Select **OK**.
-
- ![Select SQL Server](./media/data-powerbi/select-sql.png)
-
-5. Provide the database credentials to access Azure Blockchain Workbench. Select **Database** and enter your credentials.
-
- If you are using the credentials created by the Azure Blockchain Workbench deployment process, the username is **dbadmin** and the password is the one you provided during deployment.
-
- ![SQL DB settings](./media/data-powerbi/db-settings.png)
-
-6. Once connected to the database, the **Navigator** dialog displays the tables and views available within the database. The views are designed for reporting and are all prefixed **vw**.
-
- ![Screen capture of Power BI desktop with the Navigator dialog box with vwContractAction selected.](./media/data-powerbi/navigator.png)
-
-7. Select the views you wish to include. For demonstration purposes, we include **vwContractAction**, which provides details on the actions that have taken place on a contract.
-
- ![Select views](./media/data-powerbi/select-views.png)
-
-You can now create and publish reports as you normally would with Power BI.
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Database views in Azure Blockchain Workbench](database-views.md)
blockchain Data Sql Management Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/blockchain/workbench/data-sql-management-studio.md
- Title: Query Azure Blockchain Workbench data using SQL Server Management Studio
-description: Learn how to connect to Azure Blockchain Workbench's SQL Database from within SQL Server Management Studio.
Previously updated : 02/18/2022---
-#Customer intent: As a developer, I want to use SQL Server Management Studio to query Azure Blockchain Workbench data.
-
-# Using Azure Blockchain Workbench data with SQL Server Management Studio
--
-Microsoft SQL Server Management Studio provides the ability to rapidly
-write and test queries against Azure Blockchain Workbench's SQL DB. This section contains a step-by-step walkthrough of how to connect to Azure Blockchain Workbench's SQL Database from within SQL Server Management Studio.
-
-## Prerequisites
-
-* Download [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms).
-
-## Connecting SQL Server Management Studio to data in Azure Blockchain Workbench
-
-1. Open the SQL Server Management Studio and select **Connect**.
-2. Select **Database Engine**.
-
- ![Database engine](./media/data-sql-management-studio/database-engine.png)
-
-3. In the **Connect to Server** dialog, enter the server name and your
- database credentials.
-
- If you are using the credentials created by the Azure Blockchain Workbench deployment process, the username is **dbadmin** and the password is the one you provided during deployment.
-
- ![Enter SQL credentials](./media/data-sql-management-studio/sql-creds.png)
-
-4. SQL Server Management Studio displays the list of databases, database views, and stored procedures in the Azure Blockchain Workbench database.
-
- ![Database list](./media/data-sql-management-studio/db-list.png)
-
-5. To view the data associated with any of the database views, you can automatically generate a SELECT statement using the following steps.
-6. Right-click any of the database views in the Object Explorer.
-7. Select **Script View as**.
-8. Choose **SELECT to**.
-9. Select **New Query Editor Window**.
-10. A new query can be created by selecting **New Query**.
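-
-Alternatively, you can write a query directly against one of the reporting views. The following is a sketch, using column names documented in [Database views in Azure Blockchain Workbench](database-views.md), that lists recent actions taken on contracts:
-
-``` sql
--- Sketch only: returns the latest contract actions with their status.
-SELECT TOP (100)
-    ApplicationName,
-    WorkflowName,
-    ContractId,
-    WorkflowFunctionName,
-    ContractActionProvisioningStatus
-FROM vwContractAction
-ORDER BY ContractActionId DESC;
-```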
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Database views in Azure Blockchain Workbench](database-views.md)
blockchain Database Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/blockchain/workbench/database-firewall.md
- Title: Configure Azure Blockchain Workbench database firewall
-description: Learn how to configure the Azure Blockchain Workbench Preview database firewall to allow external clients and applications to connect.
Previously updated : 02/18/2022--
-#Customer intent: As an administrator, I want to configure Azure Blockchain Workbench's SQL Server firewall to allow external clients to connect.
--
-# Configure the Azure Blockchain Workbench database firewall
--
-This article shows how to configure a firewall rule using the Azure portal. Firewall rules let external clients or applications connect to your Azure Blockchain Workbench database.
-
-## Connect to the Blockchain Workbench database
-
-To connect to the database where you want to configure a rule:
-
-1. Sign in to the Azure portal with an account that has **Owner** permissions for the Azure Blockchain Workbench resources.
-2. In the left navigation pane, choose **Resource groups**.
-3. Choose the name of the resource group for your Blockchain Workbench deployment.
-4. Select **Type** to sort the list of resources, and then choose your **SQL server**.
-5. The resource list example in the following screen capture shows two databases: *master* and *lsgn-sdk*. You configure the firewall rule on *lsgn-sdk*.
-
-![List Blockchain Workbench resources](./media/database-firewall/list-database-resources.png)
-
-## Create a database firewall rule
-
-To create a firewall rule:
-
-1. Choose the link to the "lsgn-sdk" database.
-2. On the menu bar, select **Set server firewall**.
-
- ![Set server firewall](./media/database-firewall/configure-server-firewall.png)
-
-3. To create a rule for your organization:
-
- * Enter a **RULE NAME**
- * Enter an IP address for the **START IP** of the address range
- * Enter an IP address for the **END IP** of the address range
-
- ![Create firewall rule](./media/database-firewall/create-firewall-rule.png)
-
- > [!NOTE]
- > If you only want to add the IP address of your computer, choose **+ Add client IP**.
-
-4. To save your firewall configuration, select **Save**.
-5. Test the IP address range you configured for the database by connecting from an application or tool, such as SQL Server Management Studio.
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Database views in Azure Blockchain Workbench](database-views.md)
blockchain Database Views https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/blockchain/workbench/database-views.md
- Title: Azure Blockchain Workbench database views
-description: Overview of available Azure Blockchain Workbench Preview SQL DB database views.
Previously updated : 02/18/2022--
-#Customer intent: As a developer, I want to understand the available Azure Blockchain Workbench SQL Server database views for querying off-chain blockchain data.
-
-# Azure Blockchain Workbench database views
--
-Azure Blockchain Workbench Preview delivers data from distributed ledgers to an *off-chain* SQL DB database. The off-chain database makes it possible to use SQL and existing tools, such as [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms), to interact with blockchain data.
-
-Azure Blockchain Workbench provides a set of database views that give access to data that is helpful when performing your queries. These views are heavily denormalized, so you can quickly get started building reports and analytics, and otherwise consume blockchain data with existing tools, without having to retrain database staff.
-
-This section includes an overview of the database views and the data they contain.
-
-> [!NOTE]
-> Any direct usage of database tables found in the database outside of these views, while possible, is not supported.
->
-
-## vwApplication
-
-This view provides details on **Applications** that have been uploaded to Azure Blockchain Workbench.
-
-| Name | Type | Can Be Null | Description |
-|-||-||
-| ApplicationId | int | No | A unique identifier for the application |
-| ApplicationName | nvarchar(50) | No | The name of the application |
-| ApplicationDescription | nvarchar(255) | Yes | A description of the application |
-| ApplicationDisplayName | nvarchar(255) | No | The name to be displayed in a user interface |
-| ApplicationEnabled | bit | No | Identifies if the application is currently enabled<br /> **Note:** Even though an application can be reflected as disabled in the database, associated contracts remain on the blockchain and data about those contracts remain in the database. |
-| UploadedDtTm | datetime2(7) | No | The date and time a contract was uploaded |
-| UploadedByUserId | int | No | The ID of the user who uploaded the application |
-| UploadedByUserExternalId | nvarchar(255) | No | The external identifier for the user who uploaded the application. By default, this ID is the user from the Azure Active Directory for the consortium. |
-| UploadedByUserProvisioningStatus | int | No | Identifies the current status of the provisioning process for the user. Possible values are: <br />0 – User has been created by the API<br />1 – A key has been associated with the user in the database<br />2 – The user is fully provisioned |
-| UploadedByUserFirstName | nvarchar(50) | Yes | The first name of the user who uploaded the contract |
-| UploadedByUserLastName | nvarchar(50) | Yes | The last name of the user who uploaded the contract |
-| UploadedByUserEmailAddress | nvarchar(255) | Yes | The email address of the user who uploaded the contract |
-
-## vwApplicationRole
-
-This view provides details on the roles that have been defined in Azure Blockchain Workbench applications.
-
-In an *Asset Transfer* application, for example, roles such as *Buyer* and *Seller* may be defined.
-
-| Name | Type | Can Be Null | Description |
-|||-||
-| ApplicationId | int | No | A unique identifier for the application |
-| ApplicationName | nvarchar(50) | No | The name of the application |
-| ApplicationDescription | nvarchar(255) | Yes | A description of the application |
-| ApplicationDisplayName | nvarchar(255) | No | The name to be displayed in a user interface |
-| RoleId | int | No | A unique identifier for a role in the application |
-| RoleName | nvarchar(50) | No | The name of the role |
-| RoleDescription | nvarchar(255) | Yes | A description of the role |
-
-## vwApplicationRoleUser
-
-This view provides details on the roles that have been defined in Azure Blockchain Workbench applications and the users associated with them.
-
-In an *Asset Transfer* application, for example, *John Smith* may be associated with the *Buyer* role.
-
-| Name | Type | Can Be Null | Description |
-|-||-||
-| ApplicationId | int | No | A unique identifier for the application |
-| ApplicationName | nvarchar(50) | No | The name of the application |
-| ApplicationDescription | nvarchar(255) | Yes | A description of the application |
-| ApplicationDisplayName | nvarchar(255) | No | The name to be displayed in a user interface |
-| ApplicationRoleId | int | No | A unique identifier for a role in the application |
-| ApplicationRoleName | nvarchar(50) | No | The name of the role |
-| ApplicationRoleDescription | nvarchar(255) | Yes | A description of the role |
-| UserId | int | No | The ID of the user associated with the role |
-| UserExternalId | nvarchar(255) | No | The external identifier for the user who is associated with the role. By default, this ID is the user from the Azure Active Directory for the consortium. |
-| UserProvisioningStatus | int | No | Identifies the current status of the provisioning process for the user. Possible values are: <br />0 – User has been created by the API<br />1 – A key has been associated with the user in the database<br />2 – The user is fully provisioned |
-| UserFirstName | nvarchar(50) | Yes | The first name of the user who is associated with the role |
-| UserLastName | nvarchar(255) | Yes | The last name of the user who is associated with the role |
-| UserEmailAddress | nvarchar(255) | Yes | The email address of the user who is associated with the role |
-
-## vwConnectionUser
-
-This view provides details on the connections defined in Azure Blockchain Workbench and the users associated with them. For each connection, this view contains the following data:
-
-- Associated ledger details
-- Associated user information
-
-| Name | Type | Can Be Null | Description |
-|--||-||
-| ConnectionId | int | No | The unique identifier for a connection in Azure Blockchain Workbench |
-| ConnectionEndpointUrl | nvarchar(50) | No | The endpoint url for a connection |
-| ConnectionFundingAccount | nvarchar(255) | Yes | The funding account associated with a connection, if applicable |
-| LedgerId | int | No | The unique identifier for a ledger |
-| LedgerName | nvarchar(50) | No | The name of the ledger |
-| LedgerDisplayName | nvarchar(255) | No | The name of the ledger to display in the UI |
-| UserId | int | No | The ID of the user associated with the connection |
-| UserExternalId | nvarchar(255) | No | The external identifier for the user who is associated with the connection. By default, this ID is the user from the Azure Active Directory for the consortium. |
-| UserProvisioningStatus | int | No | Identifies the current status of the provisioning process for the user. Possible values are: <br />0 – User has been created by the API<br />1 – A key has been associated with the user in the database<br />2 – The user is fully provisioned |
-| UserFirstName | nvarchar(50) | Yes | The first name of the user who is associated with the connection |
-| UserLastName | nvarchar(255) | Yes | The last name of the user who is associated with the connection |
-| UserEmailAddress | nvarchar(255) | Yes | The email address of the user who is associated with the connection |
-
-## vwContract
-
-This view provides details about deployed contracts. For each contract, this view contains the following data:
-
-- Associated application definition
-- Associated workflow definition
-- Associated ledger implementation for the function
-- Details for the user who initiated the action
-- Details related to the blockchain block and transaction
-
-| Name | Type | Can Be Null | Description |
-||-|-||
-| ConnectionId | int | No | The unique identifier for a connection in Azure Blockchain Workbench. |
-| ConnectionEndpointUrl | nvarchar(50) | No | The endpoint url for a connection |
-| ConnectionFundingAccount | nvarchar(255) | Yes | The funding account associated with a connection, if applicable |
-| LedgerId | int | No | The unique identifier for a ledger |
-| LedgerName | nvarchar(50) | No | The name of the ledger |
-| LedgerDisplayName | nvarchar(255) | No | The name of the ledger to display in the UI |
-| ApplicationId | int | No | A unique identifier for the application |
-| ApplicationName | nvarchar (50) | No | The name of the application |
-| ApplicationDisplayName | nvarchar (255) | No | The name to be displayed in a user interface |
-| ApplicationEnabled | bit | No | Identifies if the application is currently enabled.<br /> **Note:** Even though an application can be reflected as disabled in the database, associated contracts remain on the blockchain and data about those contracts remain in the database. |
-| WorkflowId | int | No | A unique identifier for the workflow associated with a contract |
-| WorkflowName | nvarchar(50) | No | The name of the workflow associated with a contract |
-| WorkflowDisplayName | nvarchar(255) | No | The name of the workflow associated with the contract displayed in the user interface |
-| WorkflowDescription | nvarchar(255) | Yes | The description of the workflow associated with a contract |
-| ContractCodeId | int | No | A unique identifier for the contract code associated with the contract |
-| ContractFileName | int | No | The name of the file containing the smart contract code for this workflow. |
-| ContractUploadedDtTm | int | No | The date and time the contract code was uploaded |
-| ContractId | int | No | The unique identifier for the contract |
-| ContractProvisioningStatus | int | No | Identifies the current status of the provisioning process for the contract. Possible values are: <br />0 – The contract has been created by the API in the database<br />1 – The contract has been sent to the ledger<br />2 – The contract has been successfully deployed to the ledger<br />3 or 4 - The contract failed to be deployed to the ledger<br />5 - The contract was successfully deployed to the ledger <br /><br />Beginning with version 1.5, values 0 through 5 are supported. For backwards compatibility in the current release, view **vwContractV0** is available that only supports values 0 through 2. |
-| ContractLedgerIdentifier | nvarchar (255) | Yes | A unique identifier associated with the deployed version of a smart contract for a specific distributed ledger. For example, Ethereum. |
-| ContractDeployedByUserId | int | No | The unique identifier of the user who deployed the contract |
-| ContractDeployedByUserExternalId | nvarchar(255) | No | An external identifier for the user that deployed the contract. By default, this ID is the guid representing the Azure Active Directory ID for the user. |
-| ContractDeployedByUserProvisioningStatus | int | No | Identifies the current status of the provisioning process for the user. Possible values are: <br />0 – User has been created by the API<br />1 – A key has been associated with the user in the database <br />2 – The user is fully provisioned |
-| ContractDeployedByUserFirstName | nvarchar(50) | Yes | The first name of the user who deployed the contract |
-| ContractDeployedByUserLastName | nvarchar(255) | Yes | The last name of the user who deployed the contract |
-| ContractDeployedByUserEmailAddress | nvarchar(255) | Yes | The email address of the user who deployed the contract |
-
-## vwContractAction
-
-This view represents the majority of information related to actions taken on contracts and is designed to readily facilitate common reporting scenarios. For each action taken, this view contains the following data:
-
-- Associated application definition
-- Associated workflow definition
-- Associated smart contract function and parameter definition
-- Associated ledger implementation for the function
-- Specific instance values provided for parameters
-- Details for the user who initiated the action
-- Details related to the blockchain block and transaction
-
-| Name | Type | Can Be Null | Description |
-|--|--|--|--|
-| ApplicationId | int | No | A unique identifier for the application |
-| ApplicationName | nvarchar(50) | No | The name of the application |
-| ApplicationDisplayName | nvarchar(255) | No | The name to be displayed in a user interface |
-| ApplicationEnabled | bit | No | This field identifies if the application is currently enabled.<br />**Note:** Even though an application can be reflected as disabled in the database, associated contracts remain on the blockchain and data about those contracts remain in the database. |
-| WorkflowId | int | No | A unique identifier for a workflow |
-| WorkflowName | nvarchar(50) | No | The name of the workflow |
-| WorkflowDisplayName | nvarchar(255) | No | The name of the workflow to display in a user interface |
-| WorkflowDescription | nvarchar(255) | Yes | The description of the workflow |
-| ContractId | int | No | A unique identifier for the contract |
-| ContractProvisioningStatus | int | No | Identifies the current status of the provisioning process for the contract. Possible values are: <br />0 – The contract has been created by the API in the database<br />1 – The contract has been sent to the ledger<br />2 – The contract has been successfully deployed to the ledger<br />3 or 4 – The contract failed to be deployed to the ledger<br />5 – The contract was successfully deployed to the ledger <br /><br />Beginning with version 1.5, values 0 through 5 are supported. For backwards compatibility in the current release, view **vwContractActionV0** is available that only supports values 0 through 2. |
-| ContractCodeId | int | No | A unique identifier for the code implementation of the contract |
-| ContractLedgerIdentifier | nvarchar(255) | Yes | A unique identifier associated with the deployed version of a smart contract for a specific distributed ledger. For example, Ethereum. |
-| ContractDeployedByUserId | int | No | The unique identifier of the user that deployed the contract |
-| ContractDeployedByUserFirstName | nvarchar(50) | Yes | First name of the user who deployed the contract |
-| ContractDeployedByUserLastName | nvarchar(255) | Yes | Last name of the user who deployed the contract |
-| ContractDeployedByUserExternalId | nvarchar(255) | No | External identifier of the user who deployed the contract. By default, this ID is the guid that represents their identity in the consortium Azure Active Directory. |
-| ContractDeployedByUserEmailAddress | nvarchar(255) | Yes | The email address of the user who deployed the contract |
-| WorkflowFunctionId | int | No | A unique identifier for a workflow function |
-| WorkflowFunctionName | nvarchar(50) | No | The name of the function |
-| WorkflowFunctionDisplayName | nvarchar(255) | No | The name of a function to be displayed in the user interface |
-| WorkflowFunctionDescription | nvarchar(255) | No | The description of the function |
-| ContractActionId | int | No | The unique identifier for a contract action |
-| ContractActionProvisioningStatus | int | No | Identifies the current status of the provisioning process for the contract action. Possible values are: <br />0 – The contract action has been created by the API in the database<br />1 – The contract action has been sent to the ledger<br />2 – The contract action has been successfully deployed to the ledger<br />3 or 4 – The contract action failed to be deployed to the ledger<br />5 – The contract action was successfully deployed to the ledger <br /><br />Beginning with version 1.5, values 0 through 5 are supported. For backwards compatibility in the current release, view **vwContractActionV0** is available that only supports values 0 through 2. |
-| ContractActionTimestamp | datetime2(7) | No | The timestamp of the contract action |
-| ContractActionExecutedByUserId | int | No | Unique identifier of the user that executed the contract action |
-| ContractActionExecutedByUserFirstName | nvarchar(50) | Yes | First name of the user who executed the contract action |
-| ContractActionExecutedByUserLastName | nvarchar(50) | Yes | Last name of the user who executed the contract action |
-| ContractActionExecutedByUserExternalId | nvarchar(255) | Yes | External identifier of the user who executed the contract action. By default, this ID is the guid that represents their identity in the consortium Azure Active Directory. |
-| ContractActionExecutedByUserEmailAddress | nvarchar(255) | Yes | The email address of the user who executed the contract action |
-| WorkflowFunctionParameterId | int | No | A unique identifier for a parameter of the function |
-| WorkflowFunctionParameterName | nvarchar(50) | No | The name of a parameter of the function |
-| WorkflowFunctionParameterDisplayName | nvarchar(255) | No | The name of a function parameter to be displayed in the user interface |
-| WorkflowFunctionParameterDataTypeId | int | No | The unique identifier for the data type associated with a workflow function parameter |
-| WorkflowParameterDataTypeName | nvarchar(50) | No | The name of a data type associated with a workflow function parameter |
-| ContractActionParameterValue | nvarchar(255) | No | The value for the parameter stored in the smart contract |
-| BlockHash | nvarchar(255) | Yes | The hash of the block |
-| BlockNumber | int | Yes | The number of the block on the ledger |
-| BlockTimestamp | datetime2(7) | Yes | The timestamp of the block |
-| TransactionId | int | No | A unique identifier for the transaction |
-| TransactionFrom | nvarchar(255) | Yes | The party that originated the transaction |
-| TransactionTo | nvarchar(255) | Yes | The party that was transacted with |
-| TransactionHash | nvarchar(255) | Yes | The hash of a transaction |
-| TransactionIsWorkbenchTransaction | bit | Yes | A bit that identifies if the transaction is an Azure Blockchain Workbench transaction |
-| TransactionProvisioningStatus | int | Yes | Identifies the current status of the provisioning process for the transaction. Possible values are: <br />0 – The transaction has been created by the API in the database<br />1 – The transaction has been sent to the ledger<br />2 – The transaction has been successfully deployed to the ledger |
-| TransactionValue | decimal(32,2) | Yes | The value of the transaction |
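-
-Although these views are designed for reporting tools, they can also be queried directly. The following is a minimal sketch (not from the original Workbench samples) of reading recent actions from this view with Python and pyodbc; the server, database, and credential values are placeholders you must replace with the details of your own deployment:
-
-``` python
-# Illustrative only: read the most recent contract actions from vwContractAction.
-# Replace the placeholder server, database, and credentials with your own values.
-import pyodbc
-
-conn = pyodbc.connect(
-    "DRIVER={ODBC Driver 17 for SQL Server};"
-    "SERVER=<your-server>.database.windows.net;"
-    "DATABASE=<your-database>;"
-    "UID=<user>;PWD=<password>"
-)
-cursor = conn.cursor()
-cursor.execute(
-    """
-    SELECT TOP (20) ContractId, WorkflowFunctionName,
-           ContractActionTimestamp, ContractActionExecutedByUserEmailAddress
-    FROM vwContractAction
-    ORDER BY ContractActionTimestamp DESC
-    """
-)
-for row in cursor.fetchall():
-    print(row.ContractId, row.WorkflowFunctionName, row.ContractActionTimestamp)
-```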
-
-## vwContractProperty
-
-This view represents the majority of information related to properties associated with a contract and is designed to readily facilitate common reporting scenarios. For each property, this view contains the following data:
-
-- Associated application definition
-- Associated workflow definition
-- Details for the user who deployed the workflow
-- Associated smart contract property definition
-- Specific instance values for properties
-- Details for the state property of the contract
-
-| Name | Type | Can Be Null | Description |
-|--|--|--|--|
-| ApplicationId | int | No | A unique identifier for the application |
-| ApplicationName | nvarchar(50) | No | The name of the application |
-| ApplicationDisplayName | nvarchar(255) | No | The name to be displayed in a user interface |
-| ApplicationEnabled | bit | No | Identifies if the application is currently enabled.<br />**Note:** Even though an application can be reflected as disabled in the database, associated contracts remain on the blockchain and data about those contracts remain in the database. |
-| WorkflowId | int | No | The unique identifier for the workflow |
-| WorkflowName | nvarchar(50) | No | The name of the workflow |
-| WorkflowDisplayName | nvarchar(255) | No | The name of the workflow displayed in the user interface |
-| WorkflowDescription | nvarchar(255) | Yes | The description of the workflow |
-| ContractId | int | No | The unique identifier for the contract |
-| ContractProvisioningStatus | int | No | Identifies the current status of the provisioning process for the contract. Possible values are: <br />0 – The contract has been created by the API in the database<br />1 – The contract has been sent to the ledger<br />2 – The contract has been successfully deployed to the ledger<br />3 or 4 – The contract failed to be deployed to the ledger<br />5 – The contract was successfully deployed to the ledger <br /><br />Beginning with version 1.5, values 0 through 5 are supported. For backwards compatibility in the current release, view **vwContractPropertyV0** is available that only supports values 0 through 2. |
-| ContractCodeId | int | No | A unique identifier for the code implementation of the contract |
-| ContractLedgerIdentifier | nvarchar(255) | Yes | A unique identifier associated with the deployed version of a smart contract for a specific distributed ledger. For example, Ethereum. |
-| ContractDeployedByUserId | int | No | The unique identifier of the user that deployed the contract |
-| ContractDeployedByUserFirstName | nvarchar(50) | Yes | First name of the user who deployed the contract |
-| ContractDeployedByUserLastName | nvarchar(255) | Yes | Last name of the user who deployed the contract |
-| ContractDeployedByUserExternalId | nvarchar(255) | No | External identifier of the user who deployed the contract. By default, this ID is the guid that represents their identity in the consortium Azure Active Directory |
-| ContractDeployedByUserEmailAddress | nvarchar(255) | Yes | The email address of the user who deployed the contract |
-| WorkflowPropertyId | int | No | A unique identifier for a property of a workflow |
-| WorkflowPropertyDataTypeId | int | No | The ID of the data type of the property |
-| WorkflowPropertyDataTypeName | nvarchar(50) | No | The name of the data type of the property |
-| WorkflowPropertyName | nvarchar(50) | No | The name of the workflow property |
-| WorkflowPropertyDisplayName | nvarchar(255) | No | The display name of the workflow property |
-| WorkflowPropertyDescription | nvarchar(255) | Yes | A description of the property |
-| ContractPropertyValue | nvarchar(255) | No | The value for a property on the contract |
-| StateName | nvarchar(50) | Yes | If this property contains the state of the contract, this value is the name of the state. If it is not associated with the state, the value will be null. |
-| StateDisplayName | nvarchar(255) | No | If this property contains the state, it is the display name for the state. If it is not associated with the state, the value will be null. |
-| StateValue | nvarchar(255) | Yes | If this property contains the state, it is the state value. If it is not associated with the state, the value will be null. |
-
-## vwContractState
-
-This view represents the majority of information related to the state of a specific contract and is designed to readily facilitate common reporting scenarios. Each record in this view contains the following data:
-
-- Associated application definition
-- Associated workflow definition
-- Details for the user who deployed the workflow
-- Associated smart contract property definition
-- Details for the state property of the contract
-
-| Name | Type | Can Be Null | Description |
-|--|--|--|--|
-| ApplicationId | int | No | A unique identifier for the application |
-| ApplicationName | nvarchar(50) | No | The name of the application |
-| ApplicationDisplayName | nvarchar(255) | No | The name to be displayed in a user interface |
-| ApplicationEnabled | bit | No | Identifies if the application is currently enabled.<br />**Note:** Even though an application can be reflected as disabled in the database, associated contracts remain on the blockchain and data about those contracts remain in the database. |
-| WorkflowId | int | No | A unique identifier for the workflow |
-| WorkflowName | nvarchar(50) | No | The name of the workflow |
-| WorkflowDisplayName | nvarchar(255) | No | The name displayed in the user interface |
-| WorkflowDescription | nvarchar(255) | Yes | The description of the workflow |
-| ContractLedgerImplementationId | nvarchar(255) | Yes | A unique identifier associated with the deployed version of a smart contract for a specific distributed ledger. For example, Ethereum. |
-| ContractId | int | No | A unique identifier for the contract |
-| ContractProvisioningStatus | int | No | Identifies the current status of the provisioning process for the contract. Possible values are: <br />0 – The contract has been created by the API in the database<br />1 – The contract has been sent to the ledger<br />2 – The contract has been successfully deployed to the ledger<br />3 or 4 – The contract failed to be deployed to the ledger<br />5 – The contract was successfully deployed to the ledger <br /><br />Beginning with version 1.5, values 0 through 5 are supported. For backwards compatibility in the current release, view **vwContractStateV0** is available that only supports values 0 through 2. |
-| ConnectionId | int | No | A unique identifier for the blockchain instance the workflow is deployed to |
-| ContractCodeId | int | No | A unique identifier for the code implementation of the contract |
-| ContractDeployedByUserId | int | No | Unique identifier of the user that deployed the contract |
-| ContractDeployedByUserExternalId | nvarchar(255) | No | External identifier of the user who deployed the contract. By default, this ID is the guid that represents their identity in the consortium Azure Active Directory. |
-| ContractDeployedByUserFirstName | nvarchar(50) | Yes | First name of the user who deployed the contract |
-| ContractDeployedByUserLastName | nvarchar(255) | Yes | Last name of the user who deployed the contract |
-| ContractDeployedByUserEmailAddress | nvarchar(255) | Yes | The email address of the user who deployed the contract |
-| WorkflowPropertyId | int | No | A unique identifier for a workflow property |
-| WorkflowPropertyDataTypeId | int | No | The ID of the data type of the workflow property |
-| WorkflowPropertyDataTypeName | nvarchar(50) | No | The name of the data type of the workflow property |
-| WorkflowPropertyName | nvarchar(50) | No | The name of the workflow property |
-| WorkflowPropertyDisplayName | nvarchar(255) | No | The display name of the property to show in a UI |
-| WorkflowPropertyDescription | nvarchar(255) | Yes | The description of the property |
-| ContractPropertyValue | nvarchar(255) | No | The value for a property stored in the contract |
-| StateName | nvarchar(50) | Yes | If this property contains the state, this value is the name of the state. If it is not associated with the state, the value will be null. |
-| StateDisplayName | nvarchar(255) | No | If this property contains the state, it is the display name for the state. If it is not associated with the state, the value will be null. |
-| StateValue | nvarchar(255) | Yes | If this property contains the state, it is the state value. If it is not associated with the state, the value will be null. |
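-
-Because the state columns are null for rows that are not associated with the contract's state property, a short sketch like the following (connection string placeholder assumed, as in the earlier example) lists each contract's current state:
-
-``` python
-# Illustrative only: list each contract's current state from vwContractState.
-import pyodbc
-
-conn = pyodbc.connect("<your-odbc-connection-string>")  # placeholder
-cursor = conn.cursor()
-cursor.execute(
-    "SELECT ContractId, StateDisplayName, StateValue "
-    "FROM vwContractState WHERE StateName IS NOT NULL"
-)
-for contract_id, state_display_name, state_value in cursor.fetchall():
-    print(contract_id, state_display_name, state_value)
-```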
-
-## vwUser
-
-This view provides details on the consortium members that are provisioned to use Azure Blockchain Workbench. By default, data is populated through the initial provisioning of the user.
-
-| Name | Type | Can Be Null | Description |
-|--|--|--|--|
-| ID | int | No | A unique identifier for a user |
-| ExternalID | nvarchar(255) | No | An external identifier for a user. By default, this ID is the guid representing the Azure Active Directory ID for the user. |
-| ProvisioningStatus | int | No | Identifies the current status of the provisioning process for the user. Possible values are: <br />0 – The user has been created by the API<br />1 – A key has been associated with the user in the database<br />2 – The user is fully provisioned |
-| FirstName | nvarchar(50) | Yes | The first name of the user |
-| LastName | nvarchar(50) | Yes | The last name of the user |
-| EmailAddress | nvarchar(255) | Yes | The email address of the user |
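-
-As a small illustration (again with a placeholder connection string), the provisioning status codes above can be used to summarize how far users have progressed through provisioning:
-
-``` python
-# Illustrative only: count Workbench users by provisioning status.
-# 0 = created by the API, 1 = key associated, 2 = fully provisioned (see table above).
-import pyodbc
-
-conn = pyodbc.connect("<your-odbc-connection-string>")  # placeholder
-cursor = conn.cursor()
-cursor.execute(
-    "SELECT ProvisioningStatus, COUNT(*) AS UserCount "
-    "FROM vwUser GROUP BY ProvisioningStatus"
-)
-for status, user_count in cursor.fetchall():
-    print(status, user_count)
-```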
-
-## vwWorkflow
-
-This view represents core workflow metadata as well as the workflow's functions and parameters. Designed for reporting, it also contains metadata about the application associated with the workflow. This view contains data from multiple underlying tables to facilitate reporting on workflows. For each workflow, this view contains the following data:
-
-- Associated application definition
-- Associated workflow definition
-- Associated workflow start state information
-
-| Name | Type | Can Be Null | Description |
-|--|--|--|--|
-| ApplicationId | int | No | A unique identifier for the application |
-| ApplicationName | nvarchar(50) | No | The name of the application |
-| ApplicationDisplayName | nvarchar(255) | No | The name to be displayed in a user interface |
-| ApplicationEnabled | bit | No | Identifies if the application is enabled |
-| WorkflowId | int | Yes | A unique identifier for a workflow |
-| WorkflowName | nvarchar(50) | No | The name of the workflow |
-| WorkflowDisplayName | nvarchar(255) | No | The name displayed in the user interface |
-| WorkflowDescription | nvarchar(255) | Yes | The description of the workflow. |
-| WorkflowConstructorFunctionId | int | No | The identifier of the workflow function that serves as the constructor for the workflow |
-| WorkflowStartStateId | int | No | A unique identifier for the state |
-| WorkflowStartStateName | nvarchar(50) | No | The name of the state |
-| WorkflowStartStateDisplayName | nvarchar(255) | No | The name to be displayed in the user interface for the state |
-| WorkflowStartStateDescription | nvarchar(255) | Yes | A description of the workflow state |
-| WorkflowStartStateStyle | nvarchar(50) | Yes | A text description that provides a hint to clients on how to render this state in the UI. Supported states include *Success* and *Failure* |
-| WorkflowStartStateValue | int | No | The value of the state |
-| WorkflowStartStatePercentComplete | int | No | This value identifies the percentage complete that the workflow is when in this state |
-
-## vwWorkflowFunction
-
-This view represents core workflow metadata as well as the workflow's functions and parameters. Designed for reporting, it also contains metadata about the application associated with the workflow. This view contains data from multiple underlying tables to facilitate reporting on workflows. For each workflow function, this view contains the following data:
-
-- Associated application definition
-- Associated workflow definition
-- Workflow function details
-
-| Name | Type | Can Be Null | Description |
-|--|--|--|--|
-| ApplicationId | int | No | A unique identifier for the application |
-| ApplicationName | nvarchar(50) | No | The name of the application |
-| ApplicationDisplayName | nvarchar(255) | No | The name to be displayed in a user interface |
-| ApplicationEnabled | bit | No | Identifies if the application is enabled |
-| WorkflowId | int | No | A unique identifier for a workflow |
-| WorkflowName | nvarchar(50) | No | The name of the workflow |
-| WorkflowDisplayName | nvarchar(255) | No | The name of the workflow displayed in the user interface |
-| WorkflowDescription | nvarchar(255) | Yes | The description of the workflow |
-| WorkflowFunctionId | int | No | A unique identifier for a function |
-| WorkflowFunctionName | nvarchar(50) | Yes | The name of the function |
-| WorkflowFunctionDisplayName | nvarchar(255) | No | The name of a function to be displayed in the user interface |
-| WorkflowFunctionDescription | nvarchar(255) | Yes | The description of the workflow function |
-| WorkflowFunctionIsConstructor | bit | No | Identifies if the workflow function is the constructor for the workflow |
-| WorkflowFunctionParameterId | int | No | A unique identifier for a parameter of a function |
-| WorkflowFunctionParameterName | nvarchar(50) | No | The name of a parameter of the function |
-| WorkflowFunctionParameterDisplayName | nvarchar(255) | No | The name of a function parameter to be displayed in the user interface |
-| WorkflowFunctionParameterDataTypeId | int | No | A unique identifier for the data type associated with a workflow function parameter |
-| WorkflowParameterDataTypeName | nvarchar(50) | No | The name of a data type associated with a workflow function parameter |
-
-## vwWorkflowProperty
-
-This view represents the properties defined for a workflow. For each property, this view contains the following data:
-
-- Associated application definition
-- Associated workflow definition
-- Workflow property details
-
-| Name | Type | Can Be Null | Description |
-|--|--|--|--|
-| ApplicationId | int | No | A unique identifier for the application |
-| ApplicationName | nvarchar(50) | No | The name of the application |
-| ApplicationDisplayName | nvarchar(255) | No | The name to be displayed in a user interface |
-| ApplicationEnabled | bit | No | Identifies if the application is currently enabled.<br />**Note:** Even though an application can be reflected as disabled in the database, associated contracts remain on the blockchain and data about those contracts remain in the database. |
-| WorkflowId | int | No | A unique identifier for the workflow |
-| WorkflowName | nvarchar(50) | No | The name of the workflow |
-| WorkflowDisplayName | nvarchar(255) | No | The name to be displayed for the workflow in a user interface |
-| WorkflowDescription | nvarchar(255) | Yes | A description of the workflow |
-| WorkflowPropertyID | int | No | A unique identifier for a property of a workflow |
-| WorkflowPropertyName | nvarchar(50) | No | The name of the property |
-| WorkflowPropertyDescription | nvarchar(255) | Yes | A description of the property |
-| WorkflowPropertyDisplayName | nvarchar(255) | No | The name to be displayed in a user interface |
-| WorkflowPropertyWorkflowId | int | No | The ID of the workflow to which this property is associated |
-| WorkflowPropertyDataTypeId | int | No | The ID of the data type defined for the property |
-| WorkflowPropertyDataTypeName | nvarchar(50) | No | The name of the data type defined for the property |
-| WorkflowPropertyIsState | bit | No | This field identifies if this workflow property contains the state of the workflow |
-
-## vwWorkflowState
-
-This view represents the states associated with a workflow. For each state, this view contains the following data:
-
-- Associated application definition
-- Associated workflow definition
-- Workflow state information
-
-| Name | Type | Can Be Null | Description |
-|--|--|--|--|
-| ApplicationId | int | No | A unique identifier for the application |
-| ApplicationName | nvarchar(50) | No | The name of the application |
-| ApplicationDisplayName | nvarchar(255) | No | The name to be displayed in a user interface |
-| ApplicationEnabled | bit | No | Identifies if the application is currently enabled.<br />**Note:** Even though an application can be reflected as disabled in the database, associated contracts remain on the blockchain and data about those contracts remain in the database. |
-| WorkflowId | int | No | The unique identifier for the workflow |
-| WorkflowName | nvarchar(50) | No | The name of the workflow |
-| WorkflowDisplayName | nvarchar(255) | No | The name displayed in the user interface for the workflow |
-| WorkflowDescription | nvarchar(255) | Yes | The description of the workflow |
-| WorkflowStateID | int | No | The unique identifier for the state |
-| WorkflowStateName | nvarchar(50) | No | The name of the state |
-| WorkflowStateDisplayName | nvarchar(255) | No | The name to be displayed in the user interface for the state |
-| WorkflowStateDescription | nvarchar(255) | Yes | A description of the workflow state |
-| WorkflowStatePercentComplete | int | No | This value identifies the percentage complete that the workflow is when in this state |
-| WorkflowStateValue | nvarchar(50) | No | Value of the state |
-| WorkflowStateStyle | nvarchar(50) | No | A text description that provides a hint to clients on how to render this state in the UI. Supported states include *Success* and *Failure* |
blockchain Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/blockchain/workbench/deploy.md
- Title: Deploy Azure Blockchain Workbench Preview
-description: How to deploy Azure Blockchain Workbench Preview
Previously updated : 02/18/2022
-#Customer intent: As a developer, I want to deploy Azure Blockchain Workbench so that I can create blockchain apps.
-
-# Deploy Azure Blockchain Workbench Preview
--
-Azure Blockchain Workbench Preview is deployed using a solution template in the Azure Marketplace. The template simplifies the deployment of components needed to create blockchain applications. Once deployed, Blockchain Workbench provides access to client apps to create and manage users and blockchain applications.
-
-For more information about the components of Blockchain Workbench, see [Azure Blockchain Workbench architecture](architecture.md).
--
-## Prepare for deployment
-
-Blockchain Workbench allows you to deploy a blockchain ledger along with a set of relevant Azure services most often used to build a blockchain-based application. Deploying Blockchain Workbench results in the following Azure services being provisioned within a resource group in your Azure subscription.
-
-* App Service Plan (Standard)
-* Application Insights
-* Event Grid
-* Azure Key Vault
-* Service Bus
-* SQL Database (Standard S0)
-* Azure Storage account (Standard LRS)
-* Virtual machine scale set with capacity of 1
-* Virtual Network resource group (with Load Balancer, Network Security Group, Public IP Address, Virtual Network)
-* Azure Blockchain Service. If you are using a previous Blockchain Workbench deployment, consider redeploying Azure Blockchain Workbench to use Azure Blockchain Service.
-
-The following is an example deployment created in the **myblockchain** resource group.
-
-![Example deployment](media/deploy/example-deployment.png)
-
-The cost of Blockchain Workbench is an aggregate of the cost of the underlying Azure services. Pricing information for Azure services can be calculated using the [pricing calculator](https://azure.microsoft.com/pricing/calculator/).
-
-## Prerequisites
-
-Azure Blockchain Workbench requires Azure AD configuration and application registrations. You can choose to do the Azure AD [configurations manually](#azure-ad-configuration) before deployment or run a script post deployment. If you are redeploying Blockchain Workbench, see [Azure AD configuration](#azure-ad-configuration) to verify your Azure AD configuration.
-
-> [!IMPORTANT]
-> Workbench does not have to be deployed in the same tenant as the one you are using to register an Azure AD application. Workbench must be deployed in a tenant where you have sufficient permissions to deploy resources. For more information on Azure AD tenants, see [How to get an Active Directory tenant](../../active-directory/develop/quickstart-create-new-tenant.md) and [Integrating applications with Azure Active Directory](../../active-directory/develop/quickstart-register-app.md).
-
-## Deploy Blockchain Workbench
-
-Once the prerequisite steps have been completed, you are ready to deploy the Blockchain Workbench. The following sections outline how to deploy the framework.
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Select your account in the top-right corner, and switch to the desired Azure AD tenant where you want to deploy Azure Blockchain Workbench.
-1. Select **Create a resource** in the upper left-hand corner of the Azure portal.
-1. Select **Blockchain** > **Azure Blockchain Workbench (preview)**.
-
- ![Create Azure Blockchain Workbench](media/deploy/blockchain-workbench-settings-basic.png)
-
- | Setting | Description |
- ||--|
- | Resource prefix | Short unique identifier for your deployment. This value is used as a base for naming resources. |
-| VM user name | The user name is used as the administrator for all virtual machines (VMs). |
- | Authentication type | Select if you want to use a password or key for connecting to VMs. |
- | Password | The password is used for connecting to VMs. |
-| SSH | Use an RSA public key in the single-line format beginning with **ssh-rsa** or use the multi-line PEM format. You can generate SSH keys using `ssh-keygen` on Linux and OS X, or by using PuTTYGen on Windows. For more information on SSH keys, see [How to use SSH keys with Windows on Azure](../../virtual-machines/linux/ssh-from-windows.md). |
-| Database and Blockchain password | Specify the password to use for access to the database created as part of the deployment. The password must meet three of the following four requirements: length between 12 and 72 characters, one lowercase character, one uppercase character, one number, and one special character that is not number sign (#), percent (%), comma (,), star (*), back quote (\`), double quote ("), single quote ('), dash (-), or semicolon (;). |
- | Deployment region | Specify where to deploy Blockchain Workbench resources. For best availability, this should match the **Region** location setting. Not all regions are available during preview. Features may not be available in some regions. Azure Blockchain Data Manager is available in the following Azure regions: East US and West Europe.|
- | Subscription | Specify the Azure Subscription you wish to use for your deployment. |
- | Resource groups | Create a new Resource group by selecting **Create new** and specify a unique resource group name. |
- | Location | Specify the region you wish to deploy the framework. |
-
-1. Select **OK** to finish the basic setting configuration section.
-
-1. In **Advanced Settings**, choose the existing Ethereum proof-of-authority blockchain network, Active Directory settings, and preferred VM size for Blockchain Workbench components.
-
- The Ethereum RPC endpoint has the following requirements:
-
- * The endpoint must be an Ethereum Proof-of-Authority (PoA) blockchain network.
- * The endpoint must be publicly accessible over the network.
- * The PoA blockchain network should be configured to have gas price set to zero.
- * The endpoint starts with `https://` or `http://` and ends with a port number. For example, `http<s>://<network-url>:<port>`
-
- > [!NOTE]
- > Blockchain Workbench accounts are not funded. If funds are required, the transactions fail.
-
- ![Advanced settings for existing blockchain network](media/deploy/advanced-blockchain-settings-existing.png)
-
- | Setting | Description |
- ||--|
- | Ethereum RPC Endpoint | Provide the RPC endpoint of an existing PoA blockchain network. |
-| Azure Active Directory settings | Choose **Add Later**.<br />Note: If you chose to [pre-configure Azure AD](#azure-ad-configuration) or are redeploying, choose **Add Now**. |
- | VM selection | Select preferred storage performance and VM size for your blockchain network. Choose a smaller VM size such as *Standard DS1 v2* if you are on a subscription with low service limits like Azure free tier. |
-
-1. Select **Review + create** to finish Advanced Settings.
-
-1. Review the summary to verify your parameters are accurate.
-
- ![Summary](media/deploy/blockchain-workbench-summary.png)
-
-1. Select **Create** to agree to the terms and deploy your Azure Blockchain Workbench.
-
-The deployment can take up to 90 minutes. You can use the Azure portal to monitor progress. In the newly created resource group, select **Deployments > Overview** to see the status of the deployed artifacts.
-
-> [!IMPORTANT]
-> Post deployment, you need to complete Active Directory settings. If you chose **Add Later**, you need to run the [Azure AD configuration script](#azure-ad-configuration-script). If you chose **Add now**, you need to [configure the Reply URL](#configuring-the-reply-url).
-
-## Blockchain Workbench web URL
-
-Once the deployment of the Blockchain Workbench has completed, a new resource group contains your Blockchain Workbench resources. Blockchain Workbench services are accessed through a web URL. The following steps show you how to retrieve the web URL of the deployed framework.
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-1. In the left-hand navigation pane, select **Resource groups**.
-1. Choose the resource group name you specified when deploying Blockchain Workbench.
-1. Select the **TYPE** column heading to sort the list alphabetically by type.
-1. There are two resources with type **App Service**. Select the resource of type **App Service** *without* the "-api" suffix.
-
- ![App service list](media/deploy/resource-group-list.png)
-
-1. In the App Service **Overview**, copy the **URL** value, which represents the web URL to your deployed Blockchain Workbench.
-
- ![App service essentials](media/deploy/app-service.png)
-
-To associate a custom domain name with Blockchain Workbench, see [configuring a custom domain name for a web app in Azure App Service using Traffic Manager](../../app-service/configure-domain-traffic-manager.md).
-
-## Azure AD configuration script
-
-Azure AD must be configured to complete your Blockchain Workbench deployment. You'll use a PowerShell script to do the configuration.
-
-1. In a browser, navigate to the [Blockchain Workbench Web URL](#blockchain-workbench-web-url).
-1. You'll see instructions to set up Azure AD using Cloud Shell. Copy the command and launch Cloud Shell.
-
- ![Launch AAD script](media/deploy/launch-aad-script.png)
-
-1. Choose the Azure AD tenant where you deployed Blockchain Workbench.
-1. In the Cloud Shell PowerShell environment, paste and run the command.
-1. When prompted, enter the Azure AD tenant you want to use for Blockchain Workbench. This will be the tenant containing the users for Blockchain Workbench.
-
- > [!IMPORTANT]
- > The authenticated user requires permissions to create Azure AD application registrations and grant delegated application permissions in the tenant. You may need to ask an administrator of the tenant to run the Azure AD configuration script or create a new tenant.
-
- ![Enter Azure AD tenant](media/deploy/choose-tenant.png)
-
-1. You'll be prompted to authenticate to the Azure AD tenant using a browser. Open the web URL in a browser, enter the code, and authenticate.
-
- ![Authenticate with code](media/deploy/authenticate.png)
-
-1. The script outputs several status messages. You get a **SUCCESS** status message if the tenant was successfully provisioned.
-1. Navigate to the Blockchain Workbench URL. You are asked to consent to grant read permissions to the directory. This allows the Blockchain Workbench web app access to the users in the tenant. If you are the tenant administrator, you can choose to consent for the entire organization. This option accepts consent for all users in the tenant. Otherwise, each user is prompted for consent on first use of the Blockchain Workbench web application.
-1. Select **Accept** to consent.
-
- ![Consent to read users profiles](media/deploy/graph-permission-consent.png)
-
-1. After consent, the Blockchain Workbench web app can be used.
-
-You have completed your Azure Blockchain Workbench deployment. See [Next steps](#next-steps) for suggestions to get started using your deployment.
-
-## Azure AD configuration
-
-If you choose to manually configure or verify Azure AD settings prior to deployment, complete all steps in this section. If you prefer to automatically configure Azure AD settings, use [Azure AD configuration script](#azure-ad-configuration-script) after you deploy Blockchain Workbench.
-
-### Blockchain Workbench API app registration
-
-Blockchain Workbench deployment requires registration of an Azure AD application. You need an Azure Active Directory (Azure AD) tenant to register the app. You can use an existing tenant or create a new tenant. If you are using an existing Azure AD tenant, you need sufficient permissions to register applications, grant Graph API permissions, and allow guest access within an Azure AD tenant. If you do not have sufficient permissions in an existing Azure AD tenant, create a new tenant.
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Select your account in the top-right corner, and switch to the desired Azure AD tenant. The tenant should be the subscription admin's tenant for the subscription where Azure Blockchain Workbench is deployed, and one where you have sufficient permissions to register applications.
-1. In the left-hand navigation pane, select the **Azure Active Directory** service. Select **App registrations** > **New registration**.
-
- ![App registration](media/deploy/app-registration.png)
-
-1. Provide a display **Name** and choose **Accounts in this organizational directory only**.
-
- ![Create app registration](media/deploy/app-registration-create.png)
-
-1. Select **Register** to register the Azure AD application.
-
-### Modify manifest
-
-Next, you need to modify the manifest to use application roles within Azure AD to specify Blockchain Workbench administrators. For more information about application manifests, see [Azure Active Directory application manifest](../../active-directory/develop/reference-app-manifest.md).
-
-1. A GUID is required for the manifest. You can generate a GUID using the PowerShell command `[guid]::NewGuid()` or the `New-Guid` cmdlet. Another option is to use a GUID generator website.
-1. For the application you registered, select **Manifest** in the **Manage** section.
-1. Next, update the **appRoles** section of the manifest. Replace `"appRoles": []` with the provided JSON. Be sure to replace the value for the `id` field with the GUID you generated.
- ![Edit manifest](media/deploy/edit-manifest.png)
-
- ``` json
- "appRoles": [
- {
- "allowedMemberTypes": [
- "User",
- "Application"
- ],
- "displayName": "Administrator",
- "id": "<A unique GUID>",
- "isEnabled": true,
- "description": "Blockchain Workbench administrator role allows creation of applications, user to role assignments, etc.",
- "value": "Administrator"
- }
- ],
- ```
-
- > [!IMPORTANT]
- > The value **Administrator** is needed to identify Blockchain Workbench administrators.
-
-1. In the manifest, also change the **Oauth2AllowImplicitFlow** value to **true**.
-
- ``` json
- "oauth2AllowImplicitFlow": true,
- ```
-
-1. Select **Save** to save the manifest changes.
-
-### Add Graph API required permissions
-
-The API application needs to request permission from the user to access the directory. Set the following required permission for the API application:
-
-1. In the *Blockchain API* app registration, select **API permissions**. By default, the Graph API **User.Read** permission is added.
-1. The Workbench application requires read access to users' basic profile information. In *Configured permissions*, select **Add a permission**. In **Microsoft APIs**, select **Microsoft Graph**.
-1. Since the Workbench application uses the authenticated user credentials, select **Delegated permissions**.
-1. In the *User* category, choose **User.ReadBasic.All** permission.
-
- ![Azure AD app registration configuration showing adding the Microsoft Graph User.ReadBasic.All delegated permission](media/deploy/add-graph-user-permission.png)
-
- Select **Add permissions**.
-
-1. In *Configured permissions*, select **Grant admin consent** for the domain then select **Yes** for the verification prompt.
-
- ![Grant permissions](media/deploy/client-app-grant-permissions.png)
-
- Granting permission allows Blockchain Workbench to access users in the directory. The read permission is required to search and add members to Blockchain Workbench.
-
-### Get application ID
-
-The application ID and tenant information are required for deployment. Collect and store the information for use during deployment.
-
-1. For the application you registered, select **Overview**.
-1. Copy and store the **Application ID** value for later use during deployment.
-
- ![API app properties](media/deploy/app-properties.png)
-
- | Setting to store | Use in deployment |
- ||-|
- | Application (client) ID | Azure Active Directory setup > Application ID |
-
-### Get tenant domain name
-
-Collect and store the Active Directory tenant domain name where the applications are registered.
-
-In the left-hand navigation pane, select the **Azure Active Directory** service. Select **Custom domain names**. Copy and store the domain name.
-
-![Domain name](media/deploy/domain-name.png)
-
-### Guest user settings
-
-If you have guest users in your Azure AD tenant, follow the additional steps to ensure Blockchain Workbench user assignment and management works properly.
-
-1. Switch to your Azure AD tenant and select **Azure Active Directory > User settings > Manage external collaboration settings**.
-1. Set **Guest user permissions are limited** to **No**.
- ![External collaboration settings](media/deploy/user-collaboration-settings.png)
-
-## Configuring the reply URL
-
-Once Azure Blockchain Workbench has been deployed, you have to set the Azure Active Directory (Azure AD) client application **Reply URL** to the deployed Blockchain Workbench web URL.
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Verify you are in the tenant where you registered the Azure AD client application.
-1. In the left-hand navigation pane, select the **Azure Active Directory** service. Select **App registrations**.
-1. Select the Azure AD client application you registered in the prerequisite section.
-1. Select **Authentication**.
-1. Specify the main web URL of the Azure Blockchain Workbench deployment you retrieved in the [Blockchain Workbench web URL](#blockchain-workbench-web-url) section. The Reply URL is prefixed with `https://`. For example, `https://myblockchain2-7v75.azurewebsites.net`
-
- ![Authentication reply URLs](media/deploy/configure-reply-url.png)
-
-1. In the **Advanced settings** section, check **Access tokens** and **ID tokens**.
-
- ![Authentication advanced settings](media/deploy/authentication-advanced-settings.png)
-
-1. Select **Save** to update the client registration.
-
-## Remove a deployment
-
-When a deployment is no longer needed, you can remove a deployment by deleting the Blockchain Workbench resource group.
-
-1. In the Azure portal, navigate to **Resource group** in the left navigation pane and select the resource group you want to delete.
-1. Select **Delete resource group**. Verify deletion by entering the resource group name and select **Delete**.
-
- ![Delete resource group](media/deploy/delete-resource-group.png)
-
-## Next steps
-
-In this how-to article, you deployed Azure Blockchain Workbench. To learn how to create a blockchain application, continue to the next how-to article.
-
-> [!div class="nextstepaction"]
-> [Create a blockchain application in Azure Blockchain Workbench](create-app.md)
blockchain Getdb Details https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/blockchain/workbench/getdb-details.md
- Title: Get Azure Blockchain Workbench database details
-description: Learn how to get Azure Blockchain Workbench Preview database and database server information.
Previously updated : 02/18/2022
-#Customer intent: As a developer, I want to get Azure Blockchain database details to connect and view off-chain blockchain data.
--
-# Get information about your Azure Blockchain Workbench database
-
-This article shows how to get detailed information about your Azure Blockchain Workbench Preview database.
-
-## Overview
-
-Information about applications, workflows, and smart contract execution is provided using database views in the Blockchain Workbench SQL DB. Developers can use this information when using tools such as Microsoft Excel, Power BI, Visual Studio, and SQL Server Management Studio.
-
-Before a developer can connect to the database, they need:
-
-* External client access allowed in the database firewall. The article on configuring the database firewall explains how to allow access.
-* The database server name and database name.
-
-## Connect to the Blockchain Workbench database
-
-To connect to the database:
-
-1. Sign in to the Azure portal with an account that has **Owner** permissions for the Azure Blockchain Workbench resources.
-2. In the left navigation pane, choose **Resource groups**.
-3. Choose the name of the resource group for your Blockchain Workbench deployment.
-4. Select **Type** to sort the resource list, and then choose your **SQL server**. The sorted list in the next screen capture shows two databases, "master" and one that uses "lhgn" as the **Resource prefix**.
-
- ![Sorted Blockchain Workbench resource list](./media/getdb-details/sorted-workbench-resource-list.png)
-
-5. To see detailed information about the Blockchain Workbench database, select the link for the database with the **Resource prefix** you provided for deploying Blockchain Workbench.
-
- ![Database details](./media/getdb-details/workbench-db-details.png)
-
-The database server name and database name let you connect to the Blockchain Workbench database using your development or reporting tool.
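-
-For example, a development tool can use those two values in an ODBC connection string. The following is a minimal sketch (not part of the original article) that assumes Python with the pyodbc and pandas packages, the ODBC Driver 17 for SQL Server, and external client access allowed in the database firewall; the bracketed values are placeholders for the names you just collected:
-
-``` python
-# Illustrative only: load a Workbench view into a DataFrame for reporting.
-import pandas as pd
-import pyodbc
-
-conn = pyodbc.connect(
-    "DRIVER={ODBC Driver 17 for SQL Server};"
-    "SERVER=<database-server-name>.database.windows.net;"
-    "DATABASE=<database-name>;"
-    "UID=<user>;PWD=<password>"
-)
-# pandas warns about non-SQLAlchemy connections; fine for a quick look.
-df = pd.read_sql("SELECT * FROM vwUser", conn)
-print(df.head())
-```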
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Database views in Azure Blockchain Workbench](database-views.md)
blockchain Integration Patterns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/blockchain/workbench/integration-patterns.md
- Title: Smart contract integration patterns - Azure Blockchain Workbench
-description: Overview of smart contract integration patterns in Azure Blockchain Workbench Preview.
Previously updated : 02/18/2022
-#Customer intent: As a developer, I want to understand recommended integration pattern using Azure Blockchain Workbench so that I can integrate with external systems.
-
-# Smart contract integration patterns
--
-Smart contracts often represent a business workflow that needs to integrate with external systems and devices.
-
-The requirements of these workflows include a need to initiate transactions on a distributed ledger that include data from an external system, service, or device. They also need to have external systems react to events originating from smart contracts on a distributed ledger.
-
-The REST API and messaging integration send transactions from external systems to smart contracts included in an Azure Blockchain Workbench application. They also send event notifications to external systems based on changes that take place within an application.
-
-For data integration scenarios, Azure Blockchain Workbench includes a set of database views that merge a combination of transactional data from the blockchain and meta-data about applications and smart contracts.
-
-In addition, some scenarios, such as those related to supply chain or media, may also require the integration of documents. While Azure Blockchain Workbench does not provide API calls for handling documents directly, documents can be incorporated into a blockchain application. This section also includes that pattern.
-
-This section includes the patterns identified for implementing each of these types of integrations in your end-to-end solutions.
-
-## REST API-based integration
-
-Capabilities within the Azure Blockchain Workbench generated web application are exposed via the REST API. Capabilities include the uploading, configuration, and administration of applications, sending transactions to a distributed ledger, and querying application metadata and ledger data.
-
-The REST API is primarily used for interactive clients such as web, mobile, and bot applications.
-
-This section looks at patterns focused on the aspects of the REST API that send transactions to a distributed ledger and patterns that query data about transactions from Azure Blockchain Workbench's *off chain* database.
-
-### Sending transactions to a distributed ledger from an external system
-
-The Azure Blockchain Workbench REST API sends authenticated requests to execute transactions on a distributed ledger.
-
-![Sending transactions to a distributed ledger](./media/integration-patterns/send-transactions-ledger.png)
-
-Executing transactions occurs using the process depicted previously, where:
-
-- The external application authenticates to the Azure Active Directory provisioned as part of the Azure Blockchain Workbench deployment.
-- Authorized users receive a bearer token that can be sent with requests to the API.
-- External applications make calls to the REST API using the bearer token.
-- The REST API packages the request as a message and sends it to the Service Bus. From here it is retrieved, signed, and sent to the appropriate distributed ledger.
-- The REST API makes a request to the Azure Blockchain Workbench SQL DB to record the request and establish the current provisioning status.
-- The SQL DB returns the provisioning status and the API call returns the ID to the external application that called it.
-
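-The following sketch (not part of the original article) outlines this flow in Python, using the MSAL library for the bearer token and requests for the call. The tenant, client IDs, scope, route, and request body are placeholders; take the real values and payload shape from your deployment and the Workbench REST API reference:
-
-``` python
-# Illustrative only: acquire a bearer token and send a transaction request to
-# the Workbench REST API. All IDs, the route, and the payload are placeholders.
-import msal
-import requests
-
-app = msal.ConfidentialClientApplication(
-    client_id="<external-app-client-id>",
-    client_credential="<client-secret>",
-    authority="https://login.microsoftonline.com/<tenant-id>",
-)
-result = app.acquire_token_for_client(scopes=["<workbench-api-app-id-uri>/.default"])
-assert "access_token" in result, result.get("error_description")
-
-response = requests.post(
-    "https://<workbench-api>.azurewebsites.net/api/v1/contracts",  # placeholder route
-    headers={"Authorization": f"Bearer {result['access_token']}"},
-    json={"workflowFunctionId": 1, "parameters": []},  # placeholder payload
-)
-print(response.status_code, response.text)
-```
-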
-### Querying Blockchain Workbench metadata and distributed ledger transactions
-
-The Azure Blockchain Workbench REST API sends authenticated requests to query details related to smart contract execution on a distributed ledger.
-
-![Querying metadata](./media/integration-patterns/querying-metadata.png)
-
-Querying occurs using the process depicted previously, where:
-
-1. The external application authenticates to the Azure Active Directory provisioned as part of the Azure Blockchain Workbench deployment.
-2. Authorized users receive a bearer token that can be sent with requests to the API.
-3. External applications make calls to the REST API using the bearer token.
-4. The REST API queries the data for the request from the SQL DB and returns it to the client.
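-
-A matching sketch for the query path (same placeholder assumptions as the sending example; the route and response shape come from the Workbench REST API reference):
-
-``` python
-# Illustrative only: query Workbench metadata with an already-acquired token.
-import requests
-
-headers = {"Authorization": "Bearer <access-token>"}  # token from the earlier sketch
-response = requests.get(
-    "https://<workbench-api>.azurewebsites.net/api/v1/applications",  # placeholder route
-    headers=headers,
-)
-print(response.status_code, response.json())
-```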
-
-## Messaging integration
-
-Messaging integration facilitates interaction with systems, services, and devices where an interactive sign-in is not possible or desirable. Messaging integration focuses on two types of messages: messages requesting transactions be executed on a distributed ledger, and events exposed by that ledger when transactions have taken place.
-
-Messaging integration focuses on the execution and monitoring of transactions related to user creation, contract creation, and execution of transactions on contracts and is primarily used by *headless* back-end systems.
-
-This section looks at patterns focused on the aspects of the message-based API that send transactions to a distributed ledger and patterns that represent event messages exposed by the underlying distributed ledger.
-
-### One-way event delivery from a smart contract to an event consumer
-
-In this scenario, an event occurs within a smart contract, for example, a state change or the execution of a specific type of transaction. This event is broadcast via an Event Grid to downstream consumers, and those consumers then take appropriate actions.
-
-An example of this scenario is that when a transaction occurs, a consumer would be alerted and could take action, such as recording the information in a SQL DB or the Common Data Service. This scenario is the same pattern that Workbench follows to populate its *off chain* SQL DB.
-
-Another would be if a smart contract transitions to a particular state, for example, when a contract goes into an *OutOfCompliance* state. When this state change happens, it could trigger an alert to be sent to an administrator's mobile phone.
-
-![One-way event delivery](./media/integration-patterns/one-way-event-delivery.png)
-
-This scenario occurs using the process depicted previously, where:
-
-- The smart contract transitions to a new state and sends an event to the ledger.
-- The ledger receives and delivers the event to Azure Blockchain Workbench.
-- Azure Blockchain Workbench is subscribed to events from the ledger and receives the event.
-- Azure Blockchain Workbench publishes the event to subscribers on the Event Grid.
-- External systems are subscribed to the Event Grid, consume the message, and take the appropriate actions.
-
-## One-way event delivery of a message from an external system to a smart contract
-
-There is also a scenario that flows from the opposite direction. In this case, an event is generated by a sensor or an external system and the data from that event should be sent to a smart contract.
-
-A common example is the delivery of data from financial markets, for example, prices of commodities, stock, or bonds, to a smart contract.
-
-### Direct delivery of an Azure Blockchain Workbench message in the expected format
-
-Some applications are built to integrate with Azure Blockchain Workbench and directly generate and send messages in the expected formats.
-
-![Direct delivery](./media/integration-patterns/direct-delivery.png)
-
-This delivery occurs using the process depicted previously, where:
-
-- An event occurs in an external system that triggers the creation of a message for Azure Blockchain Workbench.
-- The external system has code written to create this message in a known format and sends it directly to the Service Bus.
-- Azure Blockchain Workbench is subscribed to events from the Service Bus and retrieves the message.
-- Azure Blockchain Workbench initiates a call to the ledger, sending data from the external system to a specific contract.
-- Upon receipt of the message, the contract transitions to a new state.
-
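-A hedged sketch of that direct delivery with the azure-servicebus package follows; the connection string, queue name, and message fields are illustrative placeholders, because the exact message schema is defined by the Workbench messaging API:
-
-``` python
-# Illustrative only: send a Workbench-bound message to Service Bus.
-# The queue name and message fields below are placeholders.
-import json
-from azure.servicebus import ServiceBusClient, ServiceBusMessage
-
-payload = {
-    "requestId": "<unique-request-id>",
-    "contractId": 1,
-    "workflowFunctionName": "<function-name>",
-    "parameters": [],
-}
-
-with ServiceBusClient.from_connection_string("<service-bus-connection-string>") as client:
-    with client.get_queue_sender(queue_name="<workbench-request-queue>") as sender:
-        sender.send_messages(ServiceBusMessage(json.dumps(payload)))
-```
-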
-### Delivery of a message in a format unknown to Azure Blockchain Workbench
-
-Some systems cannot be modified to deliver messages in the standard formats used by Azure Blockchain Workbench. In these cases, existing mechanisms and message formats from these systems can often be used. Specifically, the native message types of these systems can be transformed using Logic Apps, Azure Functions, or other custom code to map to one of the standard messaging formats expected.
-
-![Unknown message format](./media/integration-patterns/unknown-message-format.png)
-
-This occurs using the process depicted previously, where:
-
-- An event occurs in an external system that triggers the creation of a message.
-- A Logic App or custom code is used to receive that message and transform it to a standard Azure Blockchain Workbench formatted message.
-- The Logic App sends the transformed message directly to the Service Bus.
-- Azure Blockchain Workbench is subscribed to events from the Service Bus and retrieves the message.
-- Azure Blockchain Workbench initiates a call to the ledger, sending data from the external system to a specific function on the contract.
-- The function executes and typically modifies the state. The change of state moves forward the business workflow reflected in the smart contract, enabling other functions to now be executed as appropriate.
-
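-When custom code performs the transformation, it can be as small as a mapping function. A sketch (the source and target field names are both illustrative, not a documented schema):
-
-``` python
-# Illustrative only: map a native external-system event to the message shape
-# Workbench expects. Both field sets are placeholders.
-def to_workbench_message(external_event: dict) -> dict:
-    return {
-        "requestId": external_event["id"],
-        "contractId": external_event["contract"],
-        "workflowFunctionName": external_event["eventType"],
-        "parameters": [
-            {"name": key, "value": value}
-            for key, value in external_event.get("data", {}).items()
-        ],
-    }
-```
-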
-### Transitioning control to an external process and awaiting completion
-
-There are scenarios where a smart contract must stop internal execution and hand off to an external process. That external process would then complete, send a message to the smart contract, and execution would then continue within the smart contract.
-
-#### Transition to the external process
-
-This pattern is typically implemented using the following approach:
-
-- The smart contract transitions to a specific state. In this state, either no functions or only a limited number of functions can be executed until an external system takes a desired action.
-- The change of state is surfaced as an event to a downstream consumer.
-- The downstream consumer receives the event and triggers external code execution.
-
-![The diagram shows a state change within the Contract causing an event to go to Distributed Ledger. Blockchain Workbench then picks up the event and publishes it.](./media/integration-patterns/transition-external-process.png)
-
-#### Return of control from the smart contract
-
-Depending on the ability to customize the external system, it may or may not be able to deliver messages in one of the standard formats that Azure Blockchain Workbench expects. The external system's ability to generate one of these messages determines which of the following two return paths is taken.
-
-##### Direct delivery of an Azure Blockchain Workbench message in the expected format
-
-![The diagram shows an A P I message from the External System being picked up by Blockchain Workbench via the Service Bus. Blockchain Workbench then sends a message as a transaction to Distributed Ledger, on behalf of the agent. It is passed on to Contract, where it causes a state change.](./media/integration-patterns/direct-delivery.png)
-
-In this model, the communication to the contract and subsequent state change occurs following the previous process, where:
-
-- Upon reaching completion of, or a specific milestone in, the external code execution, an event is sent to the Service Bus connected to Azure Blockchain Workbench.
-- For systems that can't be directly adapted to write a message that conforms to the expectations of the API, the message is transformed.
-- The content of the message is packaged up and sent to a specific function on the smart contract. This delivery is done on behalf of the user associated with the external system.
-- The function executes and typically modifies the state. The change of state moves forward the business workflow reflected in the smart contract, enabling other functions to now be executed as appropriate.
-
-##### Delivery of a message in a format unknown to Azure Blockchain Workbench
-
-![Unknown message format](./media/integration-patterns/unknown-message-format.png)
-
-In this model, where a message in a standard format cannot be sent directly, the communication to the contract and subsequent state change occurs following the previous process, where:
-
-1. Upon reaching the completion or a specific milestone in the external code execution, an event is sent to the Service Bus connected to Azure Blockchain Workbench.
-2. A Logic App or custom code is used to receive that message and transform it to a standard Azure Blockchain Workbench formatted message (a sketch of this step follows the list).
-3. The Logic App sends the transformed message directly to the Service Bus.
-4. Azure Blockchain Workbench is subscribed to events from the Service Bus and retrieves the message.
-5. Azure Blockchain Workbench initiates a call to the ledger, sending data from the external system to a specific contract.
-6. The content of the message is packaged up and sent to a specific function on the smart contract. This delivery is done on behalf of the user associated with the external system.
-7. The function executes and typically modifies the state. The change of state moves forward the business workflow reflected in the smart contract, enabling other functions to now be executed as appropriate.
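-
-The transformation in step 2 is usually a small piece of code. As an illustrative, hedged sketch (the external payload's field names such as `senderAddress`, `contractAddress`, `note`, and `price` are hypothetical, not part of Workbench), the following Python function maps an external system's payload onto the *CreateContractActionRequest* message shape documented in the Workbench messaging API:
-
-``` python
-import json
-import uuid
-
-def to_workbench_message(external_payload: dict) -> str:
-    """Map a hypothetical external payload onto the
-    CreateContractActionRequest shape that Workbench expects."""
-    message = {
-        "requestId": str(uuid.uuid4()),  # client-supplied GUID
-        "userChainIdentifier": external_payload["senderAddress"],
-        "contractLedgerIdentifier": external_payload["contractAddress"],
-        "version": "1.0",
-        "workflowFunctionName": "modify",
-        "parameters": [
-            {"name": "description", "value": external_payload["note"]},
-            {"name": "price", "value": str(external_payload["price"])},
-        ],
-        "connectionId": 1,
-        "messageSchemaVersion": "1.0.0",
-        "messageName": "CreateContractActionRequest",
-    }
-    return json.dumps(message)
-```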
-
-## IoT integration
-
-A common integration scenario is the inclusion of telemetry data retrieved from sensors in a smart contract. Based on data delivered by sensors, smart contracts could take informed actions and alter the state of the contract.
-
-For example, if a truck delivering medicine had its temperature soar to 110 degrees, that spike may impact the effectiveness of the medicine and may cause a public safety issue if it is not detected and the medicine removed from the supply chain. If a driver accelerated their car to 100 miles per hour, the resulting sensor information could trigger a cancellation of insurance by their insurance provider. If the car was a rental car, GPS data could indicate when the driver went outside a geography covered by their rental agreement, and the rental company could charge a penalty.
-
-The challenge is that these sensors can deliver data on a constant basis, and it is not appropriate to send all of this data to a smart contract. A typical approach is to limit the number of messages sent to the blockchain while delivering all messages to a secondary store. For example, deliver messages received only at a fixed interval, such as once per hour, or whenever a contained value falls outside of an agreed-upon range for a smart contract, as sketched below. Checking values that fall outside of tolerances ensures that the data relevant to the contract's business logic is received and acted upon. Checking the value at the interval confirms that the sensor is still reporting. All data is sent to a secondary reporting store to enable broader reporting, analytics, and machine learning. For example, while getting sensor readings for GPS may not be required every minute for a smart contract, those readings could provide interesting data to be used in reports or mapping routes.
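-
-Whether it is expressed as IoT Hub routing rules or in custom code, the forwarding rule itself is simple. The following Python sketch is illustrative only; the tolerance band, heartbeat interval, and the `temperature` field name are assumptions, not part of Workbench:
-
-``` python
-import time
-
-TEMP_MIN, TEMP_MAX = 2.0, 8.0  # agreed-upon tolerance band (example values)
-HEARTBEAT_SECONDS = 3600       # fixed reporting interval, e.g. once per hour
-
-_last_sent = 0.0
-
-def should_forward(reading: dict) -> bool:
-    """Forward a reading to the smart contract only when it is out of
-    tolerance or when the heartbeat interval has elapsed. All readings
-    still go to the secondary store for reporting and analytics."""
-    global _last_sent
-    now = time.monotonic()
-    out_of_range = not (TEMP_MIN <= reading["temperature"] <= TEMP_MAX)
-    heartbeat_due = (now - _last_sent) >= HEARTBEAT_SECONDS
-    if out_of_range or heartbeat_due:
-        _last_sent = now
-        return True
-    return False
-```
-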
-
-On the Azure platform, integration with devices is typically done with IoT Hub. IoT Hub provides routing of messages based on content, and enables the type of functionality described previously.
-
-![IoT messages](./media/integration-patterns/iot.png)
-
-The process depicts the following pattern:
-
-- A device communicates directly, or via a field gateway, to IoT Hub.
-- IoT Hub receives the messages and evaluates them against established routes that check the content of the message, for example, *Does the sensor report a temperature greater than 50 degrees?*
-- IoT Hub sends messages that meet the criteria to a defined Service Bus for the route.
-- A Logic App or other code listens to the Service Bus that IoT Hub has established for the route.
-- The Logic App or other code retrieves and transforms the message to a known format.
-- The transformed message, now in a standard format, is sent to the Service Bus for Azure Blockchain Workbench.
-- Azure Blockchain Workbench is subscribed to events from the Service Bus and retrieves the message.
-- Azure Blockchain Workbench initiates a call to the ledger, sending data from the external system to a specific contract.
-- Upon receipt of the message, the contract evaluates the data and may change the state based on the outcome of that evaluation, for example, for a high temperature, changing the state to *Out of Compliance*.
-
-## Data integration
-
-In addition to the REST and message-based APIs, Azure Blockchain Workbench also provides access to a SQL database populated with application and contract metadata as well as transactional data from distributed ledgers.
-
-![Data integration](./media/integration-patterns/data-integration.png)
-
-The data integration pattern is well known:
-
-- Azure Blockchain Workbench stores metadata about applications, workflows, contracts, and transactions as part of its normal operating behavior.
-- External systems or tools provide one or more dialogs to facilitate the collection of information about the database, such as database server name, database name, type of authentication, login credentials, and which database views to utilize.
-- Queries are written against database views to facilitate downstream consumption by external systems, services, reporting, developer tools, and enterprise productivity tools.
-
-## Storage integration
-
-Many scenarios require the ability to incorporate attestable files. For multiple reasons, it is inappropriate to put files on a blockchain. Instead, a common approach is to perform a cryptographic hash (for example, SHA-256) against a file and share that hash on a distributed ledger. Performing the hash again at any future time should return the same result. If the file is modified, even if just one pixel is changed in an image, the hash returns a different value.
-
-![Storage integration](./media/integration-patterns/storage-integration.png)
-
-The pattern is typically implemented as follows:
-
-- An external system persists a file in a storage mechanism, such as Azure Storage.
-- A hash is generated from the file, or from the file and associated metadata such as an identifier for the owner and the URL where the file is located, as sketched after this list.
-- The hash and any metadata are sent to a function on a smart contract, such as *FileAdded*.
-- In the future, the file and metadata can be hashed again and compared against the values stored on the ledger.
-
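-The hashing step itself is small. As a minimal sketch using only the Python standard library:
-
-``` python
-import hashlib
-
-def file_sha256(path: str) -> str:
-    """Compute the SHA-256 digest of a file in chunks, so large files
-    don't need to fit in memory."""
-    digest = hashlib.sha256()
-    with open(path, "rb") as f:
-        for chunk in iter(lambda: f.read(65536), b""):
-            digest.update(chunk)
-    return digest.hexdigest()
-
-# The returned hex string (plus any metadata) is what would be sent to a
-# smart contract function such as FileAdded; hashing the unmodified file
-# again later returns the same value.
-```
-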
-## Prerequisites for implementing integration patterns using the REST and message APIs
-
-To enable an external system or device to interact with the smart contract using either the REST or message API, the following must occur:
-
-1. In the Azure Active Directory for the consortium, an account is created that represents the external system or device.
-2. One or more appropriate smart contracts for your Azure Blockchain Workbench application have functions defined to accept the events from your external system or device.
-3. The application configuration file for your smart contract contains the role, which the system or device is assigned.
-4. The application configuration file for your smart contract identifies in which states this function is called by the defined role.
-5. The application configuration file and its smart contracts are uploaded to Azure Blockchain Workbench.
-
-Once the application is uploaded, the Azure Active Directory account for the external system is assigned to the contract and the associated role.
-
-## Testing external system integration flows prior to writing integration code
-
-Integrating with external systems is a key requirement of many scenarios. It is desirable to be able to validate smart contract design prior to, or in parallel with, the development of the code that integrates with external systems.
-
-The use of Azure Active Directory (Azure AD) can greatly accelerate developer productivity and time to value. Specifically, the code integration with an external system may take a non-trivial amount of time. By using Azure AD and the auto-generation of UX by Azure Blockchain Workbench, you can allow developers to sign in to Blockchain Workbench as the external system and populate values from the external system via the UX. You can rapidly develop and validate ideas in a proof of concept environment before integration code is written for the external systems.
blockchain Manage Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/blockchain/workbench/manage-users.md
- Title: Manage users in Azure Blockchain Workbench
-description: How to manage users in Azure Blockchain Workbench.
Previously updated : 02/18/2022--
-#Customer intent: As an administrator of Blockchain Workbench, I want to manage users for blockchain apps in Azure Blockchain Workbench.
-
-# Manage Users in Azure Blockchain Workbench
--
-Azure Blockchain Workbench includes user management for people and organizations that are part of your consortium.
-
-## Prerequisites
-
-A Blockchain Workbench deployment is required. See [Azure Blockchain Workbench deployment](deploy.md) for details on deployment.
-
-## Add Azure AD users
-
-Azure Blockchain Workbench uses Azure Active Directory (Azure AD) for authentication, access control, and roles. Users in the Blockchain Workbench Azure AD tenant can authenticate and use Blockchain Workbench. Add users to the Administrator application role so they can interact with applications and perform actions.
-
-Blockchain Workbench users need to exist in the Azure AD tenant before you can assign them to applications and roles. To add users to Azure AD, use the following steps:
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Select your account in the top right corner, and switch to the Azure AD tenant associated to Blockchain Workbench.
-1. Select **Azure Active Directory > Users**. You see a list of users in your directory.
-1. To add users to the directory, select **New user**. For external users, select **New guest user**.
-1. Complete the required fields for the new user. Select **Create**.
-
-Visit [Azure AD](../../active-directory/fundamentals/add-users-azure-active-directory.md) documentation for more details on how to manage users within Azure AD.
-
-## Manage Blockchain Workbench administrators
-
-Once users have been added to the directory, the next step is to choose which users are Blockchain Workbench administrators. Users in the **Administrator** group are associated with the **Administrator application role** in Blockchain Workbench. Administrators can add or remove users, assign users to specific scenarios, and create new applications.
-
-To add users to the **Administrator** group in the Azure AD directory:
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Verify you are in the Azure AD tenant associated to Blockchain Workbench by selecting your account in the top-right corner.
-1. Select **Azure Active Directory > Enterprise applications**.
-1. Change **Application type** drop-down filter to **All Applications** and select **Apply**.
-1. Select the Azure AD client application for Azure Blockchain Workbench.
-
- ![All enterprise application registrations](./media/manage-users/select-blockchain-client-app.png)
-
-1. Select **Users and groups > Add user**.
-1. In **Add Assignment**, select **Users**. Choose or search for the user you want to add as an administrator. Click **Select** when finished choosing.
-
- ![Add assignment](./media/manage-users/add-user-assignment.png)
-
-1. Verify **Role** is set to **Administrator**.
-1. Select **Assign**. The added users are displayed in the list with the administrator role assigned.
-
- ![Blockchain client app users](./media/manage-users/blockchain-admin-list.png)
-
-## Managing Blockchain Workbench members
-
-Use the Blockchain Workbench application to manage users and organizations that are part of your consortium. You can add users to, and remove them from, applications and roles.
-
-1. [Open the Blockchain Workbench](deploy.md#blockchain-workbench-web-url) in your browser and sign in as an administrator.
-
- ![Blockchain Workbench](./media/manage-users/blockchain-workbench-applications.png)
-
- Members are added to each application. Members can have one or more application roles to initiate contracts or take actions.
-
-1. To manage members for an application, select an application tile in the **Applications** pane.
-
- The number of members associated to the selected application is reflected in the members tile.
-
- ![Select application](./media/manage-users/blockchain-workbench-select-application.png)
--
-#### Add member to application
-
-1. Select the member tile to display a list of the current members.
-1. Select **Add members**.
-
- ![Screenshot shows the application membership window with the Add a member button highlighted.](./media/manage-users/application-add-members.png)
-
-1. Search for the user's name. Only Azure AD users that exist in the Blockchain Workbench tenant are listed. If the user is not found, you need to [Add Azure AD users](#add-azure-ad-users).
-
- ![Add members](./media/manage-users/find-user.png)
-
-1. Select a **Role** from the drop-down.
-
- ![Select role members](./media/manage-users/application-select-role.png)
-
-1. Select **Add** to add the member with the associated role to the application.
-
-#### Remove member from application
-
-1. Select the member tile to display a list of the current members.
-1. For the user you want to remove, choose **Remove** from the role drop-down.
-
- ![Remove member](./media/manage-users/application-remove-member.png)
-
-#### Change or add role
-
-1. Select the member tile to display a list of the current members.
-1. For the user you want to change, click the drop-down and select the new role.
-
- ![Change role](./media/manage-users/application-change-role.png)
-
-## Next steps
-
-In this how-to article, you have learned how to manage users for Azure Blockchain Workbench. To learn how to create a blockchain application, continue to the next how-to article.
-
-> [!div class="nextstepaction"]
-> [Create a blockchain application in Azure Blockchain Workbench](create-app.md)
blockchain Messages Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/blockchain/workbench/messages-overview.md
-
Title: Use messages to integrate with Azure Blockchain Workbench
-description: Overview of using messages to integrate Azure Blockchain Workbench Preview with other systems.
Previously updated : 02/18/2022--
-#Customer intent: As a developer, I want to use messages to integrate external systems with Azure Blockchain Workbench.
--
-# Azure Blockchain Workbench messaging integration
--
-In addition to providing a REST API, Azure Blockchain Workbench also provides messaging-based integration. Workbench publishes ledger-centric events via Azure Event Grid, enabling downstream consumers to ingest data or take action based on these events. For those clients that require reliable messaging, Azure Blockchain Workbench delivers messages to an Azure Service Bus endpoint as well.
-
-## Input APIs
-
-If you want to initiate transactions from external systems to create users, create contracts, and update contracts, you can use messaging input APIs to perform transactions on a ledger. See [messaging integration samples](https://aka.ms/blockchain-workbench-integration-sample) for a sample that demonstrates input APIs.
-
-The following are the currently available input APIs.
-
-### Create user
-
-Creates a new user.
-
-The request requires the following fields:
-
-| **Name** | **Description** |
-|-||
-| requestId | Client supplied GUID |
-| firstName | First name of the user |
-| lastName | Last name of the user |
-| emailAddress | Email address of the user |
-| externalId | Azure AD object ID of the user |
-| connectionId | Unique identifier for the blockchain connection |
-| messageSchemaVersion | Messaging schema version |
-| messageName | **CreateUserRequest** |
-
-Example:
-
-``` json
-{
- "requestId": "e2264523-6147-41fc-bbbb-edba8e44562d",
- "firstName": "Ali",
- "lastName": "Alio",
- "emailAddress": "aa@contoso.com",
- "externalId": "6a9b7f65-ffff-442f-b3b8-58a35abd1bcd",
- "connectionId": 1,
- "messageSchemaVersion": "1.0.0",
- "messageName": "CreateUserRequest"
-}
-```
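-
-One way to deliver this message is with the `azure-servicebus` Python SDK. This is a hedged sketch: the connection string and the ingress topic name are deployment-specific placeholders that you must look up in your Workbench resource group.
-
-``` python
-import json
-from azure.servicebus import ServiceBusClient, ServiceBusMessage
-
-CONNECTION_STRING = "<service-bus-connection-string>"  # placeholder
-TOPIC_NAME = "<workbench-ingress-topic>"               # placeholder
-
-request = {
-    "requestId": "e2264523-6147-41fc-bbbb-edba8e44562d",
-    "firstName": "Ali",
-    "lastName": "Alio",
-    "emailAddress": "aa@contoso.com",
-    "externalId": "6a9b7f65-ffff-442f-b3b8-58a35abd1bcd",
-    "connectionId": 1,
-    "messageSchemaVersion": "1.0.0",
-    "messageName": "CreateUserRequest",
-}
-
-# Send the request JSON to the Service Bus topic Workbench listens on.
-with ServiceBusClient.from_connection_string(CONNECTION_STRING) as client:
-    with client.get_topic_sender(topic_name=TOPIC_NAME) as sender:
-        sender.send_messages(ServiceBusMessage(json.dumps(request)))
-```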
-
-Blockchain Workbench returns a response with the following fields:
-
-| **Name** | **Description** |
-|--|--|
-| requestId | Client supplied GUID |
-| userId | ID of the user that was created |
-| userChainIdentifier | Address of the user that was created on the blockchain network. In Ethereum, the address is the user's **on-chain** address. |
-| connectionId | Unique identifier for the blockchain connection|
-| messageSchemaVersion | Messaging schema version |
-| messageName | **CreateUserUpdate** |
-| status | Status of the user creation request. If successful, value is **Success**. On failure, value is **Failure**. |
-| additionalInformation | Additional information provided based on the status |
-
-Example successful **create user** response from Blockchain Workbench:
-
-``` json
-{
- "requestId": "e2264523-6147-41fc-bb59-edba8e44562d",
- "userId": 15,
- "userChainIdentifier": "0x9a8DDaCa9B7488683A4d62d0817E965E8f248398",
- "connectionId": 1,
- "messageSchemaVersion": "1.0.0",
- "messageName": "CreateUserUpdate",
- "status": "Success",
- "additionalInformation": { }
-}
-```
-
-If the request was unsuccessful, details about the failure are included in additional information.
-
-``` json
-{
- "requestId": "e2264523-6147-41fc-bb59-edba8e44562d",
- "userId": 15,
- "userChainIdentifier": null,
- "connectionId": 1,
- "messageSchemaVersion": "1.0.0",
- "messageName": "CreateUserUpdate",
- "status": "Failure",
- "additionalInformation": {
- "errorCode": 4000,
- "errorMessage": "User cannot be provisioned on connection."
- }
-}
-```
-
-### Create contract
-
-Creates a new contract.
-
-The request requires the following fields:
-
-| **Name** | **Description** |
-|-||
-| requestId | Client supplied GUID |
-| userChainIdentifier | Address of the user that was created on the blockchain network. In Ethereum, this address is the user's **on-chain** address. |
-| applicationName | Name of the application |
-| version | Version of the application. Required if you have multiple versions of the application enabled. Otherwise, version is optional. For more information on application versioning, see [Azure Blockchain Workbench application versioning](version-app.md). |
-| workflowName | Name of the workflow |
-| parameters | Parameters input for contract creation |
-| connectionId | Unique identifier for the blockchain connection |
-| messageSchemaVersion | Messaging schema version |
-| messageName | **CreateContractRequest** |
-
-Example:
-
-``` json
-{
- "requestId": "ce3c429b-a091-4baa-b29b-5b576162b211",
- "userChainIdentifier": "0x9a8DDaCa9B7488683A4d62d0817E965E8f248398",
- "applicationName": "AssetTransfer",
- "version": "1.0",
- "workflowName": "AssetTransfer",
- "parameters": [
- {
- "name": "description",
- "value": "a 1969 dodge charger"
- },
- {
- "name": "price",
- "value": "12345"
- }
- ],
- "connectionId": 1,
- "messageSchemaVersion": "1.0.0",
- "messageName": "CreateContractRequest"
-}
-```
-
-Blockchain Workbench returns a response with the following fields:
-
-| **Name** | **Description** |
-|--|--|
-| requestId | Client supplied GUID |
-| contractId | Unique identifier for the contract inside Azure Blockchain Workbench |
-| contractLedgerIdentifier | Address of the contract on the ledger |
-| connectionId | Unique identifier for the blockchain connection |
-| messageSchemaVersion | Messaging schema version |
-| messageName | **CreateContractUpdate** |
-| status | Status of the contract creation request. Possible values: **Submitted**, **Committed**, **Failure**. |
-| additionalInformation | Additional information provided based on the status |
-
-Example of a submitted **create contract** response from Blockchain Workbench:
-
-``` json
-{
- "requestId": "ce3c429b-a091-4baa-b29b-5b576162b211",
- "contractId": 55,
- "contractLedgerIdentifier": "0xde0B295669a9FD93d5F28D9Ec85E40f4cb697BAe",
- "connectionId": 1,
- "messageSchemaVersion": "1.0.0",
- "messageName": "CreateContractUpdate",
- "status": "Submitted",
- "additionalInformation": { }
-}
-```
-
-Example of a committed **create contract** response from Blockchain Workbench:
-
-``` json
-{
- "requestId": "ce3c429b-a091-4baa-b29b-5b576162b211",
- "contractId": 55,
- "contractLedgerIdentifier": "0xde0B295669a9FD93d5F28D9Ec85E40f4cb697BAe",
- "connectionId": 1,
- "messageSchemaVersion": "1.0.0",
- "messageName": "CreateContractUpdate",
- "status": "Committed",
- "additionalInformation": { }
-}
-```
-
-If the request was unsuccessful, details about the failure are included in additional information.
-
-``` json
-{
- "requestId": "ce3c429b-a091-4baa-b29b-5b576162b211",
- "contractId": 55,
- "contractLedgerIdentifier": null,
- "connectionId": 1,
- "messageSchemaVersion": "1.0.0",
- "messageName": "CreateContractUpdate",
- "status": "Failure",
- "additionalInformation": {
- "errorCode": 4000,
- "errorMessage": "Contract cannot be provisioned on connection."
- }
-}
-```
-
-### Create contract action
-
-Creates a new contract action.
-
-The request requires the following fields:
-
-| **Name** | **Description** |
-|--||
-| requestId | Client supplied GUID |
-| userChainIdentifier | Address of the user that was created on the blockchain network. In Ethereum, this address is the user's **on-chain** address. |
-| contractLedgerIdentifier | Address of the contract on the ledger |
-| version | Version of the application. Required if you have multiple versions of the application enabled. Otherwise, version is optional. For more information on application versioning, see [Azure Blockchain Workbench application versioning](version-app.md). |
-| workflowFunctionName | Name of the workflow function |
-| parameters | Parameters input for contract creation |
-| connectionId | Unique identifier for the blockchain connection |
-| messageSchemaVersion | Messaging schema version |
-| messageName | **CreateContractActionRequest** |
-
-Example:
-
-``` json
-{
- "requestId": "a5530932-9d6b-4eed-8623-441a647741d3",
- "userChainIdentifier": "0x9a8DDaCa9B7488683A4d62d0817E965E8f248398",
- "contractLedgerIdentifier": "0xde0B295669a9FD93d5F28D9Ec85E40f4cb697BAe",
- "version": "1.0",
- "workflowFunctionName": "modify",
- "parameters": [
- {
- "name": "description",
- "value": "a 1969 dodge charger"
- },
- {
- "name": "price",
- "value": "12345"
- }
- ],
- "connectionId": 1,
- "messageSchemaVersion": "1.0.0",
- "messageName": "CreateContractActionRequest"
-}
-```
-
-Blockchain Workbench returns a response with the following fields:
-
-| **Name** | **Description** |
-|--|--|
-| requestId | Client supplied GUID|
-| contractId | Unique identifier for the contract inside Azure Blockchain Workbench |
-| connectionId | Unique identifier for the blockchain connection |
-| messageSchemaVersion | Messaging schema version |
-| messageName | **CreateContractActionUpdate** |
-| status | Status of the contract action request. Possible values: **Submitted**, **Committed**, **Failure**. |
-| additionalInformation | Additional information provided based on the status |
-
-Example of a submitted **create contract action** response from Blockchain Workbench:
-
-``` json
-{
- "requestId": "a5530932-9d6b-4eed-8623-441a647741d3",
- "contractId": 105,
- "connectionId": 1,
- "messageSchemaVersion": "1.0.0",
- "messageName": "CreateContractActionUpdate",
- "status": "Submitted",
- "additionalInformation": { }
-}
-```
-
-Example of a committed **create contract action** response from Blockchain Workbench:
-
-``` json
-{
- "requestId": "a5530932-9d6b-4eed-8623-441a647741d3",
- "contractId": 105,
- "connectionId": 1,
- "messageSchemaVersion": "1.0.0",
- "messageName": "CreateContractActionUpdate",
- "status": "Committed",
- "additionalInformation": { }
-}
-```
-
-If the request was unsuccessful, details about the failure are included in additional information.
-
-``` json
-{
- "requestId": "a5530932-9d6b-4eed-8623-441a647741d3",
- "contractId": 105,
- "connectionId": 1,
- "messageSchemaVersion": "1.0.0",
- "messageName": "CreateContractActionUpdate",
- "status": "Failure",
- "additionalInformation": {
- "errorCode": 4000,
- "errorMessage": "Contract action cannot be provisioned on connection."
- }
-}
-```
-
-### Input API error codes and messages
-
-**Error code 4000: Bad request error**
-- Invalid connectionId
-- CreateUserRequest deserialization failed
-- CreateContractRequest deserialization failed
-- CreateContractActionRequest deserialization failed
-- Application {identified by application name} does not exist
-- Application {identified by application name} does not have workflow
-- UserChainIdentifier does not exist
-- Contract {identified by ledger identifier} does not exist
-- Contract {identified by ledger identifier} does not have function {workflow function name}
-- UserChainIdentifier does not exist
-
-**Error code 4090: Conflict error**
-- User already exists
-- Contract already exists
-- Contract action already exists
-
-**Error code 5000: Internal server error**
-- Exception messages
-
-## Event notifications
-
-Event notifications can be used to notify users and downstream systems of events that happen in Blockchain Workbench and the blockchain network it is connected to. Event notifications can be consumed directly in code or used with tools such as Logic Apps and Flow to trigger the flow of data to downstream systems.
-
-See [Notification message reference](#notification-message-reference)
-for details of various messages that can be received.
-
-### Consuming Event Grid events with Azure Functions
-
-If you want to be notified about events that happen in Blockchain Workbench, you can consume events from Event Grid by using Azure Functions.
-
-1. Create an **Azure Function App** in the Azure portal.
-2. Create a new function.
-3. Locate the template for Event Grid. Basic template code for reading the message is shown; a minimal sketch follows this list. Modify the code as needed.
-4. Save the Function.
-5. Select the Event Grid from Blockchain Workbench's resource group.
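-
-The template code varies by language. As a minimal, hedged Python sketch (assuming the Azure Functions Python programming model with an Event Grid trigger already configured in *function.json*):
-
-``` python
-import json
-import logging
-
-import azure.functions as func
-
-def main(event: func.EventGridEvent):
-    """Log the Blockchain Workbench notification carried by the event."""
-    payload = event.get_json()
-    logging.info("Received %s: %s", event.event_type, json.dumps(payload))
-```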
-
-### Consuming Event Grid events with Logic Apps
-
-1. Create a new **Azure Logic App** in the Azure portal.
-2. When opening the Azure Logic App in the portal, you will be prompted to select a trigger. Select **Azure Event Grid -- When a resource event occurs**.
-3. When the workflow designer is displayed, you will be prompted to sign in.
-4. Select your subscription and set the resource type to **Microsoft.EventGrid.Topics**. For **Resource Name**, select the name of the Event Grid resource from the Azure Blockchain Workbench resource group.
-5. Select the Event Grid from Blockchain Workbench's resource group.
-
-## Using Service Bus Topics for notifications
-
-Service Bus Topics can be used to notify users about events that happen in Blockchain Workbench.
-
-1. Browse to the Service Bus within the Workbench's resource group.
-2. Select **Topics**.
-3. Select **egress-topic**.
-4. Create a new subscription to this topic. Obtain a key for it.
-5. Create a program that subscribes to events from this subscription, as sketched below.
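-
-A minimal subscriber sketch using the `azure-servicebus` Python SDK follows; the connection string and subscription name are placeholders obtained in the previous steps, and `egress-topic` is the topic named above:
-
-``` python
-from azure.servicebus import ServiceBusClient
-
-CONNECTION_STRING = "<service-bus-connection-string>"  # placeholder
-SUBSCRIPTION_NAME = "<your-subscription-name>"         # placeholder
-
-with ServiceBusClient.from_connection_string(CONNECTION_STRING) as client:
-    receiver = client.get_subscription_receiver(
-        topic_name="egress-topic", subscription_name=SUBSCRIPTION_NAME)
-    with receiver:
-        for message in receiver.receive_messages(max_message_count=10,
-                                                 max_wait_time=30):
-            print(str(message))                 # inspect the notification body
-            receiver.complete_message(message)  # remove it from the subscription
-```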
-
-### Consuming Service Bus Messages with Logic Apps
-
-1. Create a new **Azure Logic App** in the Azure portal.
-2. When opening the Azure Logic App in the portal, you will be prompted to select a trigger. Type **Service Bus** into the search box and select the trigger appropriate for the type of interaction you want to have with the Service Bus. For example, **Service Bus -- When a message is received in a topic subscription (auto-complete)**.
-3. When the workflow designer is displayed, specify the connection information for the Service Bus.
-4. Select your subscription and specify the topic of **workbench-external**.
-5. Develop the logic for your application that utilizes the message from this trigger.
-
-## Notification message reference
-
-Depending on the **messageName**, the notification messages have one of the following message types.
-
-### Block message
-
-Contains information about individual blocks. The *BlockMessage* includes a section with block level information and a section with transaction information.
-
-| Name | Description |
-||-|
-| block | Contains [block information](#block-information) |
-| transactions | Contains a collection of [transaction information](#transaction-information) for the block |
-| connectionId | Unique identifier for the connection |
-| messageSchemaVersion | Messaging schema version |
-| messageName | **BlockMessage** |
-| additionalInformation | Additional information provided |
-
-#### Block information
-
-| Name | Description |
-|-|-|
-| blockId | Unique identifier for the block inside Azure Blockchain Workbench |
-| blockNumber | Unique identifier for a block on the ledger |
-| blockHash | The hash of the block |
-| previousBlockHash | The hash of the previous block |
-| blockTimestamp | The timestamp of the block |
-
-#### Transaction information
-
-| Name | Description |
-|--|-|
-| transactionId | Unique identifier for the transaction inside Azure Blockchain Workbench |
-| transactionHash | The hash of the transaction on the ledger |
-| from | Unique identifier on the ledger for the transaction origin |
-| to | Unique identifier on the ledger for the transaction destination |
-| provisioningStatus | Identifies the current status of the provisioning process for the transaction. Possible values are: </br>0 – The transaction has been created by the API in the database</br>1 – The transaction has been sent to the ledger</br>2 – The transaction has been successfully committed to the ledger</br>3 or 4 – The transaction failed to be committed to the ledger</br>5 – The transaction was successfully committed to the ledger |
-
-Example of a *BlockMessage* from Blockchain Workbench:
-
-``` json
-{
- "block": {
- "blockId": 123,
- "blockNumber": 1738312,
- "blockHash": "0x03a39411e25e25b47d0ec6433b73b488554a4a5f6b1a253e0ac8a200d13fffff",
- "previousBlockHash": null,
- "blockTimestamp": "2018-10-09T23:35:58Z",
- },
- "transactions": [
- {
- "transactionId": 234,
- "transactionHash": "0xa4d9c95b581f299e41b8cc193dd742ef5a1d3a4ddf97bd11b80d123fec27ffff",
- "from": "0xd85e7262dd96f3b8a48a8aaf3dcdda90f60dffff",
- "to": null,
- "provisioningStatus": 1
- },
- {
- "transactionId": 235,
- "transactionHash": "0x5c1fddea83bf19d719e52a935ec8620437a0a6bdaa00ecb7c3d852cf92e1ffff",
- "from": "0xadd97e1e595916e29ea94fda894941574000ffff",
- "to": "0x9a8DDaCa9B7488683A4d62d0817E965E8f24ffff",
- "provisioningStatus": 2
- }
- ],
- "connectionId": 1,
- "messageSchemaVersion": "1.0.0",
- "messageName": "BlockMessage",
- "additionalInformation": {}
-}
-```
-
-### Contract message
-
-Contains information about a contract. The message includes a section with contract properties and a section with transaction information. All
-transactions that have modified the contract for the particular block are included in the transaction section.
-
-| Name | Description |
-||-|
-| blockId | Unique identifier for the block inside Azure Blockchain Workbench |
-| blockHash | Hash of the block |
-| modifyingTransactions | [Transactions that modified](#modifying-transaction-information) the contract |
-| contractId | Unique identifier for the contract inside Azure Blockchain Workbench |
-| contractLedgerIdentifier | Unique identifier for the contract on the ledger |
-| contractProperties | [Properties of the contract](#contract-properties) |
-| isNewContract | Indicates whether this contract was newly created. Possible values are: true: this contract was newly created; false: this contract is a contract update. |
-| connectionId | Unique identifier for the connection |
-| messageSchemaVersion | Messaging schema version |
-| messageName | **ContractMessage** |
-| additionalInformation | Additional information provided |
-
-#### Modifying transaction information
-
-| Name | Description |
-|--|-|
-| transactionId | Unique identifier for the transaction inside Azure Blockchain Workbench |
-| transactionHash | The hash of the transaction on the ledger |
-| from | Unique identifier on the ledger for the transaction origin |
-| to | Unique identifier on the ledger for the transaction destination |
-
-#### Contract properties
-
-| Name | Description |
-|--|-|
-| workflowPropertyId | Unique identifier for the workflow property inside Azure Blockchain Workbench |
-| name | Name of the workflow property |
-| value | Value of the workflow property |
-
-Example of a *ContractMessage* from Blockchain Workbench:
-
-``` json
-{
- "blockId": 123,
- "blockhash": "0x03a39411e25e25b47d0ec6433b73b488554a4a5f6b1a253e0ac8a200d13fffff",
- "modifyingTransactions": [
- {
- "transactionId": 234,
- "transactionHash": "0x5c1fddea83bf19d719e52a935ec8620437a0a6bdaa00ecb7c3d852cf92e1ffff",
- "from": "0xd85e7262dd96f3b8a48a8aaf3dcdda90f60dffff",
- "to": "0xf8559473b3c7197d59212b401f5a9f07ffff"
- },
- {
- "transactionId": 235,
- "transactionHash": "0xa4d9c95b581f299e41b8cc193dd742ef5a1d3a4ddf97bd11b80d123fec27ffff",
- "from": "0xd85e7262dd96f3b8a48a8aaf3dcdda90f60dffff",
- "to": "0xf8559473b3c7197d59212b401f5a9f07b429ffff"
- }
- ],
- "contractId": 111,
- "contractLedgerIdentifier": "0xf8559473b3c7197d59212b401f5a9f07b429ffff",
- "contractProperties": [
- {
- "workflowPropertyId": 1,
- "name": "State",
- "value": "0"
- },
- {
- "workflowPropertyId": 2,
- "name": "Description",
- "value": "1969 Dodge Charger"
- },
- {
- "workflowPropertyId": 3,
- "name": "AskingPrice",
- "value": "30000"
- },
- {
- "workflowPropertyId": 4,
- "name": "OfferPrice",
- "value": "0"
- },
- {
- "workflowPropertyId": 5,
- "name": "InstanceAppraiser",
- "value": "0x0000000000000000000000000000000000000000"
- },
- {
- "workflowPropertyId": 6,
- "name": "InstanceBuyer",
- "value": "0x0000000000000000000000000000000000000000"
- },
- {
- "workflowPropertyId": 7,
- "name": "InstanceInspector",
- "value": "0x0000000000000000000000000000000000000000"
- },
- {
- "workflowPropertyId": 8,
- "name": "InstanceOwner",
- "value": "0x9a8DDaCa9B7488683A4d62d0817E965E8f24ffff"
- },
- {
- "workflowPropertyId": 9,
- "name": "ClosingDayOptions",
- "value": "[21,48,69]"
- }
- ],
- "isNewContract": false,
- "connectionId": 1,
- "messageSchemaVersion": "1.0.0",
- "messageName": "ContractMessage",
- "additionalInformation": {}
-}
-```
-
-### Event message: Contract function invocation
-
-Contains information when a contract function is invoked, such as the function name, parameters input, and the caller of the function.
-
-| Name | Description |
-||-|
-| eventName | **ContractFunctionInvocation** |
-| caller | [Caller information](#caller-information) |
-| contractId | Unique identifier for the contract inside Azure Blockchain Workbench |
-| contractLedgerIdentifier | Unique identifier for the contract on the ledger |
-| functionName | Name of the function |
-| parameters | [Parameter information](#parameter-information) |
-| transaction | Transaction information |
-| inTransactionSequenceNumber | The sequence number of the transaction in the block |
-| connectionId | Unique identifier for the connection |
-| messageSchemaVersion | Messaging schema version |
-| messageName | **EventMessage** |
-| additionalInformation | Additional information provided |
-
-#### Caller information
-
-| Name | Description |
-||-|
-| type | Type of the caller, like a user or a contract |
-| id | Unique identifier for the caller inside Azure Blockchain Workbench |
-| ledgerIdentifier | Unique identifier for the caller on the ledger |
-
-#### Parameter information
-
-| Name | Description |
-||-|
-| name | Parameter name |
-| value | Parameter value |
-
-#### Event message transaction information
-
-| Name | Description |
-|--|-|
-| transactionId | Unique identifier for the transaction inside Azure Blockchain Workbench |
-| transactionHash | The hash of the transaction on the ledger |
-| from | Unique identifier on the ledger for the transaction origin |
-| to | Unique identifier on the ledger for the transaction destination |
-
-Example of an *EventMessage ContractFunctionInvocation* from Blockchain Workbench:
-
-``` json
-{
- "eventName": "ContractFunctionInvocation",
- "caller": {
- "type": "User",
- "id": 21,
- "ledgerIdentifier": "0xd85e7262dd96f3b8a48a8aaf3dcdda90f60ffff"
- },
- "contractId": 34,
- "contractLedgerIdentifier": "0xf8559473b3c7197d59212b401f5a9f07b429ffff",
- "functionName": "Modify",
- "parameters": [
- {
- "name": "description",
- "value": "a new description"
- },
- {
- "name": "price",
- "value": "4567"
- }
- ],
- "transaction": {
- "transactionId": 234,
- "transactionHash": "0x5c1fddea83bf19d719e52a935ec8620437a0a6bdaa00ecb7c3d852cf92e1ffff",
- "from": "0xd85e7262dd96f3b8a48a8aaf3dcdda90f60dffff",
- "to": "0xf8559473b3c7197d59212b401f5a9f07b429ffff"
- },
- "inTransactionSequenceNumber": 1,
- "connectionId": 1,
- "messageSchemaVersion": "1.0.0",
- "messageName": "EventMessage",
- "additionalInformation": { }
-}
-```
-
-### Event message: Application ingestion
-
-Contains information when an application is uploaded to Workbench, such as the name and version of the application uploaded.
-
-| Name | Description |
-||-|
-| eventName | **ApplicationIngestion** |
-| applicationId | Unique identifier for the application inside Azure Blockchain Workbench |
-| applicationName | Application name |
-| applicationDisplayName | Application display name |
-| applicationVersion | Application version |
-| applicationDefinitionLocation | URL where the application configuration file is located |
-| contractCodes | Collection of [contract codes](#contract-code-information) for the application |
-| applicationRoles | Collection of [application roles](#application-role-information) for the application |
-| applicationWorkflows | Collection of [application workflows](#application-workflow-information) for the application |
-| connectionId | Unique identifier for the connection |
-| messageSchemaVersion | Messaging schema version |
-| messageName | **EventMessage** |
-| additionalInformation | Additional information provided here includes the application workflow states and transition information. |
-
-#### Contract code information
-
-| Name | Description |
-||-|
-| id | Unique identifier for the contract code file inside Azure Blockchain Workbench |
-| ledgerId | Unique identifier for the ledger inside Azure Blockchain Workbench |
-| location | URL where the contract code file is located |
-
-#### Application role information
-
-| Name | Description |
-||-|
-| id | Unique identifier for the application role inside Azure Blockchain Workbench |
-| name | Name of the application role |
-
-#### Application workflow information
-
-| Name | Description |
-||-|
-| id | Unique identifier for the application workflow inside Azure Blockchain Workbench |
-| name | Application workflow name |
-| displayName | Application workflow display name |
-| functions | Collection of [functions for the application workflow](#workflow-function-information)|
-| states | Collection of [states for the application workflow](#workflow-state-information) |
-| properties | Application [workflow properties information](#workflow-property-information) |
-
-##### Workflow function information
-
-| Name | Description |
-||-|
-| id | Unique identifier for the application workflow function inside Azure Blockchain Workbench |
-| name | Function name |
-| parameters | Parameters for the function |
-
-##### Workflow state information
-
-| Name | Description |
-||-|
-| name | State name |
-| displayName | State display name |
-| style | State style (success or failure) |
-
-##### Workflow property information
-
-| Name | Description |
-||-|
-| id | Unique identifier for the application workflow property inside Azure Blockchain Workbench |
-| name | Property name |
-| type | Property type |
-
-Example of an *EventMessage ApplicationIngestion* from Blockchain Workbench:
-
-``` json
-{
- "eventName": "ApplicationIngestion",
- "applicationId": 31,
- "applicationName": "AssetTransfer",
- "applicationDisplayName": "Asset Transfer",
- "applicationVersion": "1.0",
- "applicationDefinitionLocation": "http://url",
- "contractCodes": [
- {
- "id": 23,
- "ledgerId": 1,
- "location": "http://url"
- }
- ],
- "applicationRoles": [
- {
- "id": 134,
- "name": "Buyer"
- },
- {
- "id": 135,
- "name": "Seller"
- }
- ],
- "applicationWorkflows": [
- {
- "id": 89,
- "name": "AssetTransfer",
- "displayName": "Asset Transfer",
- "functions": [
- {
- "id": 912,
- "name": "",
- "parameters": [
- {
- "name": "description",
- "type": {
- "name": "string"
- }
- },
- {
- "name": "price",
- "type": {
- "name": "int"
- }
- }
- ]
- },
- {
- "id": 913,
- "name": "modify",
- "parameters": [
- {
- "name": "description",
- "type": {
- "name": "string"
- }
- },
- {
- "name": "price",
- "type": {
- "name": "int"
- }
- }
- ]
- }
- ],
- "states": [
- {
- "name": "Created",
- "displayName": "Created",
- "style" : "Success"
- },
- {
- "name": "Terminated",
- "displayName": "Terminated",
- "style" : "Failure"
- }
- ],
- "properties": [
- {
- "id": 879,
- "name": "Description",
- "type": {
- "name": "string"
- }
- },
- {
- "id": 880,
- "name": "Price",
- "type": {
- "name": "int"
- }
- }
- ]
- }
- ],
- "connectionId": [ ],
- "messageSchemaVersion": "1.0.0",
- "messageName": "EventMessage",
- "additionalInformation":
- {
- "states" :
- [
- {
- "Name": "BuyerAccepted",
- "Transitions": [
- {
- "DisplayName": "Accept",
- "AllowedRoles": [ ],
- "AllowedInstanceRoles": [ "InstanceOwner" ],
- "Function": "Accept",
- "NextStates": [ "SellerAccepted" ]
- }
- ]
- }
- ]
- }
-}
-```
-
-### Event message: Role assignment
-
-Contains information when a user is assigned a role in Workbench, such as who performed the role assignment and the name of the role and corresponding application.
-
-| Name | Description |
-||-|
-| eventName | **RoleAssignment** |
-| applicationId | Unique identifier for the application inside Azure Blockchain Workbench |
-| applicationName | Application name |
-| applicationDisplayName | Application display name |
-| applicationVersion | Application version |
-| applicationRole | Information about the [application role](#roleassignment-application-role) |
-| assigner | Information about the [assigner](#roleassignment-assigner) |
-| assignee | Information about the [assignee](#roleassignment-assignee) |
-| connectionId | Unique identifier for the connection |
-| messageSchemaVersion | Messaging schema version |
-| messageName | **EventMessage** |
-| additionalInformation | Additional information provided |
-
-#### RoleAssignment application role
-
-| Name | Description |
-||-|
-| id | Unique identifier for the application role inside Azure Blockchain Workbench |
-| name | Name of the application role |
-
-#### RoleAssignment assigner
-
-| Name | Description |
-||-|
-| id | Unique identifier of the user inside Azure Blockchain Workbench |
-| type | Type of the assigner |
-| chainIdentifier | Unique identifier of the user on the ledger |
-
-#### RoleAssignment assignee
-
-| Name | Description |
-||-|
-| id | Unique identifier of the user inside Azure Blockchain Workbench |
-| type | Type of the assignee |
-| chainIdentifier | Unique identifier of the user on the ledger |
-
-Example of an *EventMessage RoleAssignment* from Blockchain Workbench:
-
-``` json
-{
- "eventName": "RoleAssignment",
- "applicationId": 31,
- "applicationName": "AssetTransfer",
- "applicationDisplayName": "Asset Transfer",
- "applicationVersion": "1.0",
- "applicationRole": {
- "id": 134,
- "name": "Buyer"
- },
- "assigner": {
- "id": 1,
- "type": null,
- "chainIdentifier": "0xeFFC7766d38aC862d79706c3C5CEEf089564ffff"
- },
- "assignee": {
- "id": 3,
- "type": null,
- "chainIdentifier": "0x9a8DDaCa9B7488683A4d62d0817E965E8f24ffff"
- },
- "connectionId": [ ],
- "messageSchemaVersion": "1.0.0",
- "messageName": "EventMessage",
- "additionalInformation": { }
-}
-```
-
-## Next steps
-
-- [Smart contract integration patterns](integration-patterns.md)
blockchain Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/blockchain/workbench/overview.md
- Title: Azure Blockchain Workbench Preview overview
-description: Overview of Azure Blockchain Workbench Preview and its capabilities.
Previously updated : 02/18/2022--
-#Customer intent: As a developer or administrator, I want to understand what Azure Blockchain Workbench is and its capabilities.
-
-# What is Azure Blockchain Workbench?
--
-Azure Blockchain Workbench Preview is a collection of Azure services and capabilities designed to help you create and deploy blockchain applications to share business processes and data with other organizations. Azure Blockchain Workbench provides the infrastructure scaffolding for building blockchain applications, enabling developers to focus on creating business logic and smart contracts. It also makes it easier to create blockchain applications by integrating several Azure services and capabilities to help automate common development tasks.
--
-## Create blockchain applications
-
-With Blockchain Workbench, you can define blockchain applications using configuration and writing smart contract code. You can jumpstart blockchain application development and focus on defining your contract and writing business logic instead of building scaffolding and setting up supporting services.
-
-## Manage applications and users
-
-Azure Blockchain Workbench provides a web application and REST APIs for managing blockchain applications and users. Blockchain Workbench administrators can manage application access and assign your users to application roles. Azure AD users are automatically mapped to members in the application.
-
-## Integrate blockchain with applications
-
-You can use the Blockchain Workbench REST APIs and message-based APIs to integrate with existing systems. The APIs provide an interface to allow for replacing or using multiple distributed ledger technologies, storage, and database offerings.
-
-Blockchain Workbench can transform messages sent to its message-based API to build transactions in a format expected by that blockchain's native API. Workbench can sign and route transactions to the appropriate blockchain.
-
-Workbench automatically delivers events to Service Bus and Event Grid to send messages to downstream consumers. Developers can integrate with either of these messaging systems to drive transactions and to look at results.
-
-## Deploy a blockchain network
-
-Azure Blockchain Workbench simplifies consortium blockchain network setup as a preconfigured solution with an Azure Resource Manager solution template. The template provides a simplified deployment that includes all components needed to run a consortium. Blockchain Workbench currently supports Ethereum.
-
-## Use Active Directory
-
-With existing blockchain protocols, blockchain identities are represented as an address on the network. Azure Blockchain Workbench abstracts away the blockchain identity by associating it with an Active Directory identity, making it simpler to build enterprise applications with Active Directory identities.
-
-## Synchronize on-chain data with off-chain storage
-
-Azure Blockchain Workbench makes it easier to analyze blockchain events and data by automatically synchronizing data on the blockchain to off-chain storage. Instead of extracting data directly from the blockchain, you can query off-chain database systems such as SQL Server. Blockchain expertise is not required for end users who are doing data analysis tasks.
-
-## Support and feedback
-
-For Azure Blockchain news, visit the [Azure Blockchain blog](https://azure.microsoft.com/blog/topics/blockchain/) to stay up to date on blockchain service offerings and information from the Azure Blockchain engineering team.
-
-To provide product feedback or to request new features, post or vote for an idea via the [Azure feedback forum for blockchain](https://aka.ms/blockchainuservoice).
-
-### Community support
-
-Engage with Microsoft engineers and Azure Blockchain community experts.
-
-* [Microsoft Q&A question page for Azure Blockchain Workbench](/answers/topics/azure-blockchain-workbench.html)
-* [Microsoft Tech Community](https://techcommunity.microsoft.com/t5/Blockchain/bd-p/AzureBlockchain)
-* [Stack Overflow](https://stackoverflow.com/questions/tagged/azure-blockchain-workbench)
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Azure Blockchain Workbench architecture](architecture.md)
blockchain Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/blockchain/workbench/troubleshooting.md
- Title: Azure Blockchain Workbench troubleshooting
-description: How to troubleshoot an Azure Blockchain Workbench Preview application.
Previously updated : 02/18/2022--
-#Customer intent: As a developer, I want to know how I can troubleshoot a blockchain application in Azure Blockchain Workbench.
--
-# Azure Blockchain Workbench Preview troubleshooting
--
-A PowerShell script is available to assist with developer debugging or support. The script generates a summary and collects detailed logs for troubleshooting. Collected logs include:
-
-* Blockchain network, such as Ethereum
-* Blockchain Workbench microservices
-* Application Insights
-* Azure Monitoring (Azure Monitor logs)
-
-You can use the information to determine next steps and the root cause of issues.
--
-## Troubleshooting script
-
-The PowerShell troubleshooting script is available on GitHub. [Download a zip file](https://github.com/Azure-Samples/blockchain/archive/master.zip) or clone the sample from GitHub.
-
-```
-git clone https://github.com/Azure-Samples/blockchain.git
-```
-
-## Run the script
-
-Run the `collectBlockchainWorkbenchTroubleshooting.ps1` script to collect logs and create a ZIP file containing a folder of troubleshooting information. For example:
-
-``` powershell
-collectBlockchainWorkbenchTroubleshooting.ps1 -SubscriptionID "<subscription_id>" -ResourceGroupName "workbench-resource-group-name"
-```
-The script accepts the following parameters:
-
-| Parameter | Description | Required |
-|||-|
-| SubscriptionID | SubscriptionID to create or locate all resources. | Yes |
-| ResourceGroupName | Name of the Azure Resource Group where Blockchain Workbench has been deployed. | Yes |
-| OutputDirectory | Path to create the output .ZIP file. If not specified, defaults to the current directory. | No |
-| LookbackHours | Number of hours to use when pulling telemetry. Default value is 24 hours. Maximum value is 90 hours | No |
-| OmsSubscriptionId | The subscription ID where Azure Monitor logs is deployed. Only pass this parameter if the Azure Monitor logs for the blockchain network is deployed outside of Blockchain Workbench's resource group.| No |
-| OmsResourceGroup |The resource group where Azure Monitor logs is deployed. Only pass this parameter if the Azure Monitor logs for the blockchain network is deployed outside of Blockchain Workbench's resource group.| No |
-| OmsWorkspaceName | The Log Analytics workspace name. Only pass this parameter if the Azure Monitor logs for the blockchain network is deployed outside of Blockchain Workbench's resource group | No |
-
-## What is collected?
-
-The output ZIP file contains the following folder structure:
-
-| Folder or File | Description |
-|||
-| \Summary.txt | Summary of the system |
-| \Metrics\blockchain | Metrics about the blockchain |
-| \Metrics\Workbench | Metrics about the workbench |
-| \Details\Blockchain | Detailed logs about the blockchain |
-| \Details\Workbench | Detailed logs about the workbench |
-
-The summary file gives you a snapshot of the overall state and health of the application. The summary provides recommended actions, highlights top errors, and includes metadata about running services.
-
-The **Metrics** folder contains metrics of various system components over time. For example, the output file `\Details\Workbench\apiMetrics.txt` contains a summary of different response codes and response times throughout the collection period.
-
-The **Details** folder contains detailed logs for troubleshooting specific issues with Workbench or the underlying blockchain network. For example, `\Details\Workbench\Exceptions.csv` contains a list of the most recent exceptions that have occurred in the system, which is useful for troubleshooting errors with smart contracts or interactions with the blockchain.
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Azure Blockchain Workbench Application Insights troubleshooting guide](https://aka.ms/workbenchtroubleshooting)
blockchain Use Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/blockchain/workbench/use-api.md
- Title: Using Azure Blockchain Workbench REST APIs
-description: Scenarios for how to use the Azure Blockchain Workbench Preview REST API
Previously updated : 02/18/2022--
-#Customer intent: As a developer, I want to understand the Azure Blockchain Workbench REST API so that I can integrate apps with Blockchain Workbench.
-
-# Using the Azure Blockchain Workbench Preview REST API
--
-The Azure Blockchain Workbench Preview REST API provides developers and information workers a way to build rich integrations with blockchain applications. This article highlights several scenarios of how to use the Workbench REST API. For example, suppose you want to create a custom blockchain client that allows signed-in users to view and interact with their assigned blockchain applications. The client can use the Blockchain Workbench API to view contract instances and take actions on smart contracts.
-
-## Blockchain Workbench API endpoint
-
-Blockchain Workbench APIs are accessed through an endpoint for your deployment. To get the API endpoint URL for your deployment:
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-1. In the left-hand navigation pane, select **Resource groups**.
-1. Choose the resource group of your deployed Blockchain Workbench.
-1. Select the **TYPE** column heading to sort the list alphabetically by type.
-1. There are two resources with type **App Service**. Select the resource of type **App Service** *with* the "-api" suffix.
-1. In the App Service **Overview**, copy the **URL** value, which represents the API endpoint URL to your deployed Blockchain Workbench.
-
- ![App service API endpoint URL](media/use-api/app-service-api.png)
-
-## Authentication
-
-Requests to the Blockchain Workbench REST API are protected with Azure Active Directory (Azure AD).
-
-To make an authenticated request to the REST APIs, client code must authenticate with valid credentials before calling the API. Authentication is coordinated between the various actors by Azure AD, and provides your client with an [access token](../../active-directory/develop/developer-glossary.md#access-token) as proof of the authentication. The token is then sent in the HTTP Authorization header of REST API requests. To learn more about Azure AD authentication, see [Azure Active Directory for developers](../../active-directory/develop/index.yml).
-
-See [REST API samples](https://github.com/Azure-Samples/blockchain/tree/master/blockchain-workbench/rest-api-samples) for examples of how to authenticate.
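-
-As a hedged illustration of one option, the client credentials flow with the MSAL Python library, every value below is a placeholder from your own Azure AD app registration and Workbench deployment, and the exact scope depends on how the Workbench API application was registered:
-
-``` python
-import msal
-import requests
-
-TENANT = "<tenant>.onmicrosoft.com"                         # placeholder
-CLIENT_ID = "<client-app-id>"                               # placeholder
-CLIENT_SECRET = "<client-secret>"                           # placeholder
-API_ENDPOINT = "https://<workbench>-api.azurewebsites.net"  # placeholder
-SCOPES = ["<workbench-api-app-id>/.default"]                # placeholder
-
-# Acquire a token from Azure AD, then send it as a bearer token.
-app = msal.ConfidentialClientApplication(
-    CLIENT_ID,
-    authority=f"https://login.microsoftonline.com/{TENANT}",
-    client_credential=CLIENT_SECRET,
-)
-token = app.acquire_token_for_client(scopes=SCOPES)
-
-response = requests.get(
-    f"{API_ENDPOINT}/api/v1/applications",
-    headers={"Authorization": f"Bearer {token['access_token']}"},
-)
-print(response.json())
-```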
-
-## Using Postman
-
-If you want to test or experiment with Workbench APIs, you can use [Postman](https://www.postman.com) to make API calls to your deployment. [Download a sample Postman collection of Workbench API requests](https://github.com/Azure-Samples/blockchain/tree/master/blockchain-workbench/rest-api-samples/postman) from GitHub. See the README file for details on authenticating and using the example API requests.
-
-## Create an application
-
-You use two API calls to create a Blockchain Workbench application. This method can only be performed by users who are Workbench administrators.
-
-Use the [Applications POST API](/rest/api/azure-blockchain-workbench/applications/applicationspost) to upload the application's JSON file and get an application ID.
-
-### Applications POST request
-
-Use the **appFile** parameter to send the configuration file as part of the request body.
-
-``` http
-POST /api/v1/applications
-Content-Type: multipart/form-data;
-Authorization : Bearer {access token}
-Content-Disposition: form-data; name="appFile"; filename="/C:/smart-contract-samples/HelloWorld.json"
-Content-Type: application/json
-```
-
-### Applications POST response
-
-The created application ID is returned in the response. You need the application ID to associate the configuration file with the code file when you call the next API.
-
-``` http
-HTTP/1.1 200 OK
-Content-Type: "application/json"
-1
-```
-
-### Contract code POST request
-
-Use the [Applications contract code POST API](/rest/api/azure-blockchain-workbench/applications/contractcodepost) by passing the application ID to upload the application's Solidity code file. The payload can be a single Solidity file or a zipped file containing Solidity files.
-
-Replace the following values:
-
-| Parameter | Value |
-|--|-|
-| {applicationId} | Return value from the applications POST API. |
-| {ledgerId} | Index of the ledger. The value is usually 1. You can also check the [Ledger table](data-sql-management-studio.md) for the value. |
-
-``` http
-POST /api/v1/applications/{applicationId}/contractCode?ledgerId={ledgerId}
-Content-Type: multipart/form-data;
-Authorization : Bearer {access token}
-Content-Disposition: form-data; name="contractFile"; filename="/C:/smart-contract-samples/HelloWorld.sol"
-```
-
-### Contract code POST response
-
-If successful, the response includes the created contract code ID from the [ContractCode table](data-sql-management-studio.md).
-
-``` http
-HTTP/1.1 200 OK
-Content-Type: "application/json"
-2
-```
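-
-As a rough illustration, here's a minimal Python sketch of the two-step flow, assuming the HelloWorld sample files exist locally and a placeholder access token:
-
-``` python
-import requests
-
-ENDPOINT = "https://<your-workbench>-api.azurewebsites.net"
-token = "<access token from the authentication step>"  # placeholder assumption
-headers = {"Authorization": f"Bearer {token}"}
-
-# Step 1: upload the configuration JSON; the response body is the application ID.
-with open("HelloWorld.json", "rb") as config_file:
-    resp = requests.post(
-        f"{ENDPOINT}/api/v1/applications",
-        headers=headers,
-        files={"appFile": ("HelloWorld.json", config_file, "application/json")},
-    )
-application_id = resp.json()
-
-# Step 2: upload the Solidity code for that application; ledgerId is usually 1.
-with open("HelloWorld.sol", "rb") as code_file:
-    resp = requests.post(
-        f"{ENDPOINT}/api/v1/applications/{application_id}/contractCode",
-        headers=headers,
-        params={"ledgerId": 1},
-        files={"contractFile": ("HelloWorld.sol", code_file)},
-    )
-contract_code_id = resp.json()
-print(application_id, contract_code_id)
-```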
-
-## Assign roles to users
-
-Use the [Applications role assignments POST API](/rest/api/azure-blockchain-workbench/applications/roleassignmentspost) by passing the application ID, user ID, and application role ID to create a user-to-role mapping in the specified blockchain application. This method can only be performed by users who are Workbench administrators.
-
-### Role assignments POST request
-
-Replace the following values:
-
-| Parameter | Value |
-|--|-|
-| {applicationId} | Return value from the Applications POST API. |
-| {userId} | User ID value from the [User table](data-sql-management-studio.md). |
-| {applicationRoleId} | Application role ID value associated to the application ID from the [ApplicationRole table](data-sql-management-studio.md). |
-
-``` http
-POST /api/v1/applications/{applicationId}/roleAssignments
-Content-Type: application/json;
-Authorization : Bearer {access token}
-
-{
- "userId": {userId},
- "applicationRoleId": {applicationRoleId}
-}
-```
-
-### Role assignments POST response
-
-If successful, the response includes the created role assignment ID from the [RoleAssignment table](data-sql-management-studio.md).
-
-``` http
-HTTP/1.1 200
-1
-```
-
-## List applications
-
-Use the [Applications GET API](/rest/api/azure-blockchain-workbench/applications/applicationsget) to retrieve all Blockchain Workbench applications for the user. In this example, the signed-in user has access to two applications:
-
-- [Asset transfer](https://github.com/Azure-Samples/blockchain/blob/master/blockchain-workbench/application-and-smart-contract-samples/asset-transfer/readme.md)
-- [Refrigerated transportation](https://github.com/Azure-Samples/blockchain/blob/master/blockchain-workbench/application-and-smart-contract-samples/refrigerated-transportation/readme.md)
-
-### Applications GET request
-
-``` http
-GET /api/v1/applications
-Authorization : Bearer {access token}
-```
-
-### Applications GET response
-
-The response lists all blockchain applications to which a user has access in Blockchain Workbench. Blockchain Workbench administrators get every blockchain application. Non-Workbench administrators get all blockchain applications for which they have at least one associated application role or an associated smart contract instance role.
-
-``` http
-HTTP/1.1 200 OK
-Content-type: application/json
-{
- "nextLink": "/api/v1/applications?skip=2",
- "applications": [
- {
- "id": 1,
- "name": "AssetTransfer",
- "description": "Allows transfer of assets between a buyer and a seller, with appraisal/inspection functionality",
- "displayName": "Asset Transfer",
- "createdByUserId": 1,
- "createdDtTm": "2018-04-28T05:59:14.4733333",
- "enabled": true,
- "applicationRoles": null
- },
- {
- "id": 2,
- "name": "RefrigeratedTransportation",
- "description": "Application to track end-to-end transportation of perishable goods.",
- "displayName": "Refrigerated Transportation",
- "createdByUserId": 7,
- "createdDtTm": "2018-04-28T18:25:38.71",
- "enabled": true,
- "applicationRoles": null
- }
- ]
-}
-```
-
-## List workflows for an application
-
-Use the [Applications Workflows GET API](/rest/api/azure-blockchain-workbench/applications/workflowsget) to list all workflows of a specified blockchain application to which a user has access in Blockchain Workbench. Each blockchain application has one or more workflows, and each workflow has zero or more smart contract instances. For a blockchain client application that has only one workflow, we recommend skipping the user experience flow that allows users to select the appropriate workflow.
-
-### Application workflows request
-
-``` http
-GET /api/v1/applications/{applicationId}/workflows
-Authorization: Bearer {access token}
-```
-
-### Application workflows response
-
-Blockchain Workbench administrators get every blockchain workflow. Non-Workbench administrators get all workflows for which they have at least one associated application role or are associated with a smart contract instance role.
-
-``` http
-HTTP/1.1 200 OK
-Content-type: application/json
-{
- "nextLink": "/api/v1/applications/1/workflows?skip=1",
- "workflows": [
- {
- "id": 1,
- "name": "AssetTransfer",
- "description": "Handles the business logic for the asset transfer scenario",
- "displayName": "Asset Transfer",
- "applicationId": 1,
- "constructorId": 1,
- "startStateId": 1
- }
- ]
-}
-```
-
-## Create a contract instance
-
-Use the [Contracts V2 POST API](/rest/api/azure-blockchain-workbench/contractsv2/contractpost) to create a new smart contract instance for a workflow. Users can create a new smart contract instance only if they're associated with an application role that can initiate a smart contract instance for the workflow.
-
-> [!NOTE]
-> In this example, version 2 of the API is used. Version 2 contract APIs provide more granularity for the associated ProvisioningStatus fields.
-
-### Contracts POST request
-
-Replace the following values:
-
-| Parameter | Value |
-|--|-|
-| {workflowId} | Workflow ID value is the contract's ConstructorID from the [Workflow table](data-sql-management-studio.md). |
-| {contractCodeId} | Contract code ID value from the [ContractCode table](data-sql-management-studio.md). Correlate the application ID and ledger ID for the contract instance you want to create. |
-| {connectionId} | Connection ID value from the [Connection table](data-sql-management-studio.md). |
-
-For the request body, set values using the following information:
-
-| Parameter | Value |
-|--|-|
-| workflowFunctionID | ID from the [WorkflowFunction table](data-sql-management-studio.md). |
-| workflowActionParameters | Name value pairs of parameters passed to the constructor. For each parameter, use the workflowFunctionParameterID value from the [WorkflowFunctionParameter](data-sql-management-studio.md) table. |
-
-``` http
-POST /api/v2/contracts?workflowId={workflowId}&contractCodeId={contractCodeId}&connectionId={connectionId}
-Content-Type: application/json;
-Authorization : Bearer {access token}
-
-{
- "workflowFunctionID": 2,
- "workflowActionParameters": [
- {
- "name": "message",
- "value": "Hello, world!",
- "workflowFunctionParameterId": 3
- }
- ]
-}
-```
-
-### Contracts POST response
-
-If successful, the contracts API returns the contract action ID from the [ContractActionParameter table](data-sql-management-studio.md).
-
-``` http
-HTTP/1.1 200 OK
-4
-```
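-
-As a rough illustration, here's a minimal Python sketch of this call, assuming a placeholder access token and the example IDs used above:
-
-``` python
-import requests
-
-ENDPOINT = "https://<your-workbench>-api.azurewebsites.net"
-token = "<access token from the authentication step>"  # placeholder assumption
-
-resp = requests.post(
-    f"{ENDPOINT}/api/v2/contracts",
-    headers={"Authorization": f"Bearer {token}"},
-    params={"workflowId": 1, "contractCodeId": 1, "connectionId": 1},
-    json={
-        "workflowFunctionID": 2,
-        "workflowActionParameters": [
-            {
-                "name": "message",
-                "value": "Hello, world!",
-                "workflowFunctionParameterId": 3,
-            }
-        ],
-    },
-)
-print(resp.json())  # the created contract action ID, for example 4
-```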
-
-## List smart contract instances for a workflow
-
-Use the [Contracts GET API](/rest/api/azure-blockchain-workbench/contractsv2/contractsget) to list all smart contract instances for a workflow. You can then let users drill into any of the listed smart contract instances.
-
-### Contracts request
-
-In this example, consider a user who wants to interact with one of the smart contract instances to take an action.
-
-``` http
-GET api/v1/contracts?workflowId={workflowId}
-Authorization: Bearer {access token}
-```
-
-### Contracts response
-
-The response lists all smart contract instances of the specified workflow. Workbench administrators get all smart contract instances. Non-Workbench administrators get every smart contract instance for which they have at least one associated application role or are associated with a smart contract instance role.
-
-``` http
-HTTP/1.1 200 OK
-Content-type: application/json
-{
- "nextLink": "/api/v1/contracts?skip=3&workflowId=1",
- "contracts": [
- {
- "id": 1,
- "provisioningStatus": 2,
- "connectionID": 1,
- "ledgerIdentifier": "0xbcb6127be062acd37818af290c0e43479a153a1c",
- "deployedByUserId": 1,
- "workflowId": 1,
- "contractCodeId": 1,
- "contractProperties": [
- {
- "workflowPropertyId": 1,
- "value": "0"
- },
- {
- "workflowPropertyId": 2,
- "value": "My first car"
- },
- {
- "workflowPropertyId": 3,
- "value": "54321"
- },
- {
- "workflowPropertyId": 4,
- "value": "0"
- },
- {
- "workflowPropertyId": 5,
- "value": "0x0000000000000000000000000000000000000000"
- },
- {
- "workflowPropertyId": 6,
- "value": "0x0000000000000000000000000000000000000000"
- },
- {
- "workflowPropertyId": 7,
- "value": "0x0000000000000000000000000000000000000000"
- },
- {
- "workflowPropertyId": 8,
- "value": "0xd882530eb3d6395e697508287900c7679dbe02d7"
- }
- ],
- "transactions": [
- {
- "id": 1,
- "connectionId": 1,
- "transactionHash": "0xf3abb829884dc396e03ae9e115a770b230fcf41bb03d39457201449e077080f4",
- "blockID": 241,
- "from": "0xd882530eb3d6395e697508287900c7679dbe02d7",
- "to": null,
- "value": 0,
- "isAppBuilderTx": true
- }
- ],
- "contractActions": [
- {
- "id": 1,
- "userId": 1,
- "provisioningStatus": 2,
- "timestamp": "2018-04-29T23:41:14.9333333",
- "parameters": [
- {
- "name": "Description",
- "value": "My first car"
- },
- {
- "name": "Price",
- "value": "54321"
- }
- ],
- "workflowFunctionId": 1,
- "transactionId": 1,
- "workflowStateId": 1
- }
- ]
- }
- ]
-}
-```
-
-## List available actions for a contract
-
-Use [Contract Action GET API](/rest/api/azure-blockchain-workbench/contractsv2/contractactionget) to show the available user actions given the state of the contract.
-
-### Contract action request
-
-In this example, the user is looking at all available actions for a new smart contract they created.
-
-``` http
-GET /api/v1/contracts/{contractId}/actions
-Authorization: Bearer {access token}
-```
-
-### Contract action response
-
-The response lists all actions a user can take given the current state of the specified smart contract instance.
-
-* Modify: Allows the user to modify the description and price of an asset.
-* Terminate: Allows the user to end the contract of the asset.
-
-Users get all applicable actions if they have an associated application role or are associated with a smart contract instance role for the current state of the specified smart contract instance.
-
-``` http
-HTTP/1.1 200 OK
-Content-type: application/json
-{
- "nextLink": "/api/v1/contracts/1/actions?skip=2",
- "workflowFunctions": [
- {
- "id": 2,
- "name": "Modify",
- "description": "Modify the description/price attributes of this asset transfer instance",
- "displayName": "Modify",
- "parameters": [
- {
- "id": 1,
- "name": "description",
- "description": "The new description of the asset",
- "displayName": "Description",
- "type": {
- "id": 2,
- "name": "string",
- "elementType": null,
- "elementTypeId": 0
- }
- },
- {
- "id": 2,
- "name": "price",
- "description": "The new price of the asset",
- "displayName": "Price",
- "type": {
- "id": 3,
- "name": "money",
- "elementType": null,
- "elementTypeId": 0
- }
- }
- ],
- "workflowId": 1
- },
- {
- "id": 3,
- "name": "Terminate",
- "description": "Used to cancel this particular instance of asset transfer",
- "displayName": "Terminate",
- "parameters": [],
- "workflowId": 1
- }
- ]
-}
-```
-
-## Execute an action for a contract
-
-Use [Contract Action POST API](/rest/api/azure-blockchain-workbench/contractsv2/contractactionpost) to take action for the specified smart contract instance.
-
-### Contract action POST request
-
-In this case, consider the scenario where a user wants to modify the description and price of an asset.
-
-``` http
-POST /api/v1/contracts/{contractId}/actions
-Authorization: Bearer {access token}
-actionInformation: {
- "workflowFunctionId": 2,
- "workflowActionParameters": [
- {
- "name": "description",
- "value": "My updated car"
- },
- {
- "name": "price",
- "value": "54321"
- }
- ]
-}
-```
-
-Users are only able to execute the action given the current state of the specified smart contract instance and the user's associated application role or smart contract instance role.
-
-### Contract action POST response
-
-If the post is successful, an HTTP 200 OK response is returned with no response body.
-
-``` http
-HTTP/1.1 200 OK
-Content-type: application/json
-```
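-
-Putting it together, here's a minimal Python sketch of the modify action, assuming a placeholder access token and that contract 1 is in a state whose transitions allow **Modify** for the signed-in user:
-
-``` python
-import requests
-
-ENDPOINT = "https://<your-workbench>-api.azurewebsites.net"
-token = "<access token from the authentication step>"  # placeholder assumption
-
-resp = requests.post(
-    f"{ENDPOINT}/api/v1/contracts/1/actions",
-    headers={"Authorization": f"Bearer {token}"},
-    json={
-        "workflowFunctionId": 2,
-        "workflowActionParameters": [
-            {"name": "description", "value": "My updated car"},
-            {"name": "price", "value": "54321"},
-        ],
-    },
-)
-print(resp.status_code)  # 200 with an empty body on success
-```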
-
-## Next steps
-
-For reference information on Blockchain Workbench APIs, see the [Azure Blockchain Workbench REST API reference](/rest/api/azure-blockchain-workbench).
blockchain Use https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/blockchain/workbench/use.md
- Title: Using applications in Azure Blockchain Workbench
-description: Tutorial on how to use application contracts in Azure Blockchain Workbench Preview.
Previously updated : 02/18/2022--
-#Customer intent: As a developer, I want to use a blockchain application I created in Azure Blockchain Workbench.
--
-# Tutorial: Using applications in Azure Blockchain Workbench
--
-You can use Blockchain Workbench to create and take actions on contracts. You can also view contract details such as status and transaction history.
-
-You'll learn how to:
-
-> [!div class="checklist"]
-> * Create a new contract
-> * Take an action on a contract
--
-## Prerequisites
-
-* A Blockchain Workbench deployment. For deployment details, see [Azure Blockchain Workbench deployment](deploy.md)
-* A deployed blockchain application in Blockchain Workbench. See [Create a blockchain application in Azure Blockchain Workbench](create-app.md)
-
-[Open the Blockchain Workbench](deploy.md#blockchain-workbench-web-url) in your browser.
-
-![Blockchain Workbench](./media/use/workbench.png)
-
-You need to sign in as a member of the Blockchain Workbench. If there are no applications listed, you are a member of Blockchain Workbench but not a member of any applications. The Blockchain Workbench administrator can assign members to applications.
-
-## Create new contract
-
-To create a new contract, you need to be a member specified as a contract **initiator**. For information on defining application roles and initiators for the contract, see [workflows in the configuration overview](configuration.md#workflows). For information on assigning members to application roles, see [add a member to application](manage-users.md#add-member-to-application).
-
-1. In the Blockchain Workbench application section, select the application tile that contains the contract you want to create. A list of active contracts is displayed.
-
-2. To create a new contract, select **New contract**.
-
- ![New contract button](./media/use/contract-list.png)
-
-3. The **New contract** pane is displayed. Specify the initial parameter values. Select **Create**.
-
- ![New contract pane](./media/use/new-contract.png)
-
- The newly created contract is displayed in the list with the other active contracts.
-
- ![Active contracts list](./media/use/active-contracts.png)
-
-## Take action on contract
-
-Depending on the state the contract is in, members can take actions to transition to the next state of the contract. Actions are defined as [transitions](configuration.md#transitions) within a [state](configuration.md#states). Members belonging to an allowed application or instance role for the transition can take the action.
-
-1. In the Blockchain Workbench application section, select the application tile that contains the contract on which to take the action.
-2. Select the contract in the list. Details about the contract are displayed in different sections.
-
- ![Contract details](./media/use/contract-details.png)
-
- | Section | Description |
- |||
- | Status | Lists the current progress within the contract stages |
- | Details | The current values of the contract |
- | Action | Details about the last action |
- | Activity | Transaction history of the contract |
-
-3. In the **Action** section, select **Take action**.
-
-4. The details about the current state of the contract are displayed in a pane. Choose the action you want to take in the drop-down.
-
- ![Choose action](./media/use/choose-action.png)
-
-5. Select **Take action** to initiate the action.
-6. If parameters are required for the action, specify the values for the action.
-
- ![Take action](./media/use/take-action.png)
-
-7. Select **Take action** to execute the action.
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Azure Blockchain Workbench application versioning](version-app.md)
blockchain Version App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/blockchain/workbench/version-app.md
- Title: Blockchain app versioning - Azure Blockchain Workbench
-description: How to use application versions in Azure Blockchain Workbench Preview.
Previously updated : 02/18/2022--
-#Customer intent: As a developer, I want to create and use multiple versions of an Azure Blockchain Workbench app.
-
-# Azure Blockchain Workbench Preview application versioning
--
-You can create and use multiple versions of an Azure Blockchain Workbench Preview app. If multiple versions of the same application are uploaded, a version history is available and users can choose which version they want to use.
--
-## Prerequisites
-
-* A Blockchain Workbench deployment. For deployment details, see [Azure Blockchain Workbench deployment](deploy.md)
-* A deployed blockchain application in Blockchain Workbench. See [Create a blockchain application in Azure Blockchain Workbench](create-app.md)
-
-## Add an app version
-
-To add a new version, upload the new configuration and smart contract files to Blockchain Workbench.
-
-1. In a web browser, navigate to the Blockchain Workbench web address. For example, `https://{workbench URL}.azurewebsites.net/`. For information on how to find your Blockchain Workbench web address, see [Blockchain Workbench Web URL](deploy.md#blockchain-workbench-web-url).
-2. Sign in as a [Blockchain Workbench administrator](manage-users.md#manage-blockchain-workbench-administrators).
-3. Select the blockchain application you want to update with another version.
-4. Select **Add version**. The **Add version** pane is displayed.
-5. Choose the new version contract configuration and contract code files to upload. The configuration file is automatically validated. Fix any validation errors before you deploy the application.
-6. Select **Add version** to add the new blockchain application version.
-
- ![Add a new version](media/version-app/add-version.png)
-
-Deployment of the blockchain application can take a few minutes. When deployment is finished, refresh the application page. To display the application's version history, choose the application and select the **Version history** button.
-
-> [!IMPORTANT]
-> Previous versions of the application are disabled. You can individually re-enable past versions.
->
-> You may need to re-add members to application roles if changes were made to the application roles in the new version.
-
-## Using app versions
-
-By default, the latest enabled version of the application is used in Blockchain Workbench. If you want to use a previous version of an application, you need to choose the version from the application page first.
-
-1. In the Blockchain Workbench application section, select the checkbox of the application that contains the contract you want to use. If previous versions are enabled, the version history button is available.
-2. Select the **Version history** button.
-3. In the version history pane, choose the version of the application by selecting the link in the *Date modified* column.
-
- ![Choose a previous version](media/version-app/use-version.png)
-
- You can create new contracts or take actions on previous version contracts. The version of the application is displayed following the application name and a warning is displayed about the older version.
-
-## Next steps
-
-* [Azure Blockchain Workbench troubleshooting](troubleshooting.md)
chaos-studio Chaos Studio Permissions Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-permissions-security.md
All user interactions with Chaos Studio happen through Azure Resource Manager. I
Azure Chaos Studio doesn't support Private Link for agent-based scenarios.

## Service tags
-A service tag is a group of IP address prefixes that can be assigned to in-bound and out-bound NSG rules. It handles updates to the group of IP address prefixes without any intervention. This benefits you because you can use service tags to explicitly allow in-bound traffic from Chaos Studio, without needing to know the IP addresses of the platform. Currently service tags can be enabled via PowerShell.
-* Limitation of service tags is that they can only be used with resources that have a public IP address. If a resource only has a private IP address, then service tags will not be able to allow traffic to route to it.
+A [service tag](../virtual-network/service-tags-overview.md) is a group of IP address prefixes that can be assigned to inbound and outbound NSG rules. It automatically handles updates to the group of IP address prefixes without any intervention. This benefits you because you can use service tags to explicitly allow inbound traffic from Chaos Studio without needing to know the platform's IP addresses. Currently, service tags can be enabled via PowerShell; support will soon be added to the Chaos Studio user interface.
+* A limitation of service tags is that they can only be used with resources that have a public IP address. If a resource has only a private IP address, service tags can't allow traffic to route to it.
## Data encryption
cognitive-services Batch Inference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Anomaly-Detector/How-to/batch-inference.md
+
+ Title: Trigger batch inference with trained model
+
+description: Trigger batch inference with trained model
++++++ Last updated : 11/01/2022+++
+# Trigger batch inference with trained model
+
+You can choose either the batch inference API or the streaming inference API for detection.
+
+| Batch inference API | Streaming inference API |
+| - | - |
+| More suitable for batch use cases, when customers don't need to get inference results immediately and want to detect anomalies and get results over a longer time period.| When customers want to get inference results immediately and detect multivariate anomalies in real time, this API is recommended. It's also suitable for customers who have difficulty with the compressing and uploading process required for batch inference. |
+
+|API Name| Method | Path | Description |
+| | - | -- | |
+|**Batch Inference**| POST | `{endpoint}`/anomalydetector/v1.1/multivariate/models/`{modelId}`:detect-batch | Trigger an asynchronous inference with `modelId`, which works in a batch scenario |
+|**Get Batch Inference Results**| GET | `{endpoint}`/anomalydetector/v1.1/multivariate/detect-batch/`{resultId}` | Get batch inference results with `resultId` |
+|**Streaming Inference**| POST | `{endpoint}`/anomalydetector/v1.1/multivariate/models/`{modelId}`:detect-last | Trigger a synchronous inference with `modelId`, which works in a streaming scenario |
+
+## Trigger a batch inference
+
+To perform batch inference, provide the blob URL containing the inference data, the start time, and the end time. The inference data must cover at least one sliding window and at most **20,000** timestamps.
+
+This inference is asynchronous, so the results aren't returned immediately. Save the results link from the **response header**, which contains the `resultId`, so that you know where to get the results afterwards.
+
+Failures are usually caused by model issues or data issues. You can't perform inference if the model isn't ready or the data link is invalid. Make sure that the training data and inference data are consistent, meaning they should be **exactly** the same variables but with different timestamps. More variables, fewer variables, or inference with a different set of variables won't pass the data verification phase and errors will occur. Data verification is deferred so that you'll get error messages only when you query the results.
+
+### Request
+
+A sample request:
+
+```json
+{
+ "dataSource": "{{dataSource}}",
+ "topContributorCount": 3,
+ "startTime": "2021-01-02T12:00:00Z",
+ "endTime": "2021-01-03T00:00:00Z"
+}
+```
+#### Required parameters
+
+* **dataSource**: This is the blob URL that links to your folder or CSV file in Azure Blob Storage. The schema should be the same as your training data, either OneTable or MultiTable, and the variable number and names should be exactly the same as well.
+* **startTime**: The start time of data used for inference. If it's earlier than the actual earliest timestamp in the data, the actual earliest timestamp will be used as the starting point.
+* **endTime**: The end time of data used for inference, which must be later than or equal to `startTime`. If `endTime` is later than the actual latest timestamp in the data, the actual latest timestamp will be used as the ending point.
+
+#### Optional parameters
+
+* **topContributorCount**: A number N from **1 to 30** that specifies how many of the top contributing variables to include in the anomaly results. For example, if you have 100 variables in the model but only care about the top five contributing variables in the detection results, fill this field with 5. The default number is **10**.
+
+### Response
+
+A sample response:
+
+```json
+{
+ "resultId": "aaaaaaaa-5555-1111-85bb-36f8cdfb3365",
+ "summary": {
+ "status": "CREATED",
+ "errors": [],
+ "variableStates": [],
+ "setupInfo": {
+ "dataSource": "https://mvaddataset.blob.core.windows.net/sample-onetable/sample_data_5_3000.csv",
+ "topContributorCount": 3,
+ "startTime": "2021-01-02T12:00:00Z",
+ "endTime": "2021-01-03T00:00:00Z"
+ }
+ },
+ "results": []
+}
+```
+* **resultId**: This is the information that you'll need to trigger the **Get Batch Inference Results API**.
+* **status**: This indicates whether you triggered a batch inference task successfully. If you see **CREATED**, you don't need to trigger this API again; use the **Get Batch Inference Results API** to get the detection status and anomaly results.
+
+## Get batch detection results
+
+There's no content in the request body; you only need to put the `resultId` in the API path, which has the following format:
+**{{endpoint}}/anomalydetector/v1.1/multivariate/detect-batch/{{resultId}}**
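+
+As an illustration, here's a minimal Python sketch that triggers a batch inference and then polls this endpoint until the detection finishes, assuming placeholder endpoint, key, model ID, and blob URL values:
+
+```python
+import time
+
+import requests
+
+ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
+KEY = "<your-anomaly-detector-key>"
+MODEL_ID = "<your-model-id>"
+headers = {"Ocp-Apim-Subscription-Key": KEY}
+
+# Trigger the asynchronous batch inference.
+resp = requests.post(
+    f"{ENDPOINT}/anomalydetector/v1.1/multivariate/models/{MODEL_ID}:detect-batch",
+    headers=headers,
+    json={
+        "dataSource": "<your-blob-url>",
+        "topContributorCount": 3,
+        "startTime": "2021-01-02T12:00:00Z",
+        "endTime": "2021-01-03T00:00:00Z",
+    },
+)
+result_id = resp.json()["resultId"]
+
+# Poll until the detection status leaves CREATED/RUNNING.
+while True:
+    result = requests.get(
+        f"{ENDPOINT}/anomalydetector/v1.1/multivariate/detect-batch/{result_id}",
+        headers=headers,
+    ).json()
+    if result["summary"]["status"] in ("READY", "FAILED"):
+        break
+    time.sleep(10)
+
+print(result["summary"]["status"], len(result["results"]))
+```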
+
+### Response
+
+A sample response:
+
+```json
+{
+ "resultId": "aaaaaaaa-5555-1111-85bb-36f8cdfb3365",
+ "summary": {
+ "status": "READY",
+ "errors": [],
+ "variableStates": [
+ {
+ "variable": "series_0",
+ "filledNARatio": 0.0,
+ "effectiveCount": 721,
+ "firstTimestamp": "2021-01-02T12:00:00Z",
+ "lastTimestamp": "2021-01-03T00:00:00Z"
+ },
+ {
+ "variable": "series_1",
+ "filledNARatio": 0.0,
+ "effectiveCount": 721,
+ "firstTimestamp": "2021-01-02T12:00:00Z",
+ "lastTimestamp": "2021-01-03T00:00:00Z"
+ },
+ {
+ "variable": "series_2",
+ "filledNARatio": 0.0,
+ "effectiveCount": 721,
+ "firstTimestamp": "2021-01-02T12:00:00Z",
+ "lastTimestamp": "2021-01-03T00:00:00Z"
+ },
+ {
+ "variable": "series_3",
+ "filledNARatio": 0.0,
+ "effectiveCount": 721,
+ "firstTimestamp": "2021-01-02T12:00:00Z",
+ "lastTimestamp": "2021-01-03T00:00:00Z"
+ },
+ {
+ "variable": "series_4",
+ "filledNARatio": 0.0,
+ "effectiveCount": 721,
+ "firstTimestamp": "2021-01-02T12:00:00Z",
+ "lastTimestamp": "2021-01-03T00:00:00Z"
+ }
+ ],
+ "setupInfo": {
+ "dataSource": "https://mvaddataset.blob.core.windows.net/sample-onetable/sample_data_5_3000.csv",
+ "topContributorCount": 3,
+ "startTime": "2021-01-02T12:00:00Z",
+ "endTime": "2021-01-03T00:00:00Z"
+ }
+ },
+ "results": [
+ {
+ "timestamp": "2021-01-02T12:00:00Z",
+ "value": {
+ "isAnomaly": false,
+ "severity": 0.0,
+ "score": 0.3377174139022827,
+ "interpretation": []
+ },
+ "errors": []
+ },
+ {
+ "timestamp": "2021-01-02T12:01:00Z",
+ "value": {
+ "isAnomaly": false,
+ "severity": 0.0,
+ "score": 0.24631972312927247,
+ "interpretation": []
+ },
+ "errors": []
+ },
+ {
+ "timestamp": "2021-01-02T12:02:00Z",
+ "value": {
+ "isAnomaly": false,
+ "severity": 0.0,
+ "score": 0.16678125858306886,
+ "interpretation": []
+ },
+ "errors": []
+ },
+ {
+ "timestamp": "2021-01-02T12:03:00Z",
+ "value": {
+ "isAnomaly": false,
+ "severity": 0.0,
+ "score": 0.23783254623413086,
+ "interpretation": []
+ },
+ "errors": []
+ },
+ {
+ "timestamp": "2021-01-02T12:04:00Z",
+ "value": {
+ "isAnomaly": false,
+ "severity": 0.0,
+ "score": 0.24804904460906982,
+ "interpretation": []
+ },
+ "errors": []
+ },
+ {
+ "timestamp": "2021-01-02T12:05:00Z",
+ "value": {
+ "isAnomaly": false,
+ "severity": 0.0,
+ "score": 0.11487171649932862,
+ "interpretation": []
+ },
+ "errors": []
+ },
+ {
+ "timestamp": "2021-01-02T12:06:00Z",
+ "value": {
+ "isAnomaly": true,
+ "severity": 0.32980116622958083,
+ "score": 0.5666913509368896,
+ "interpretation": [
+ {
+ "variable": "series_2",
+ "contributionScore": 0.4130149677604554,
+ "correlationChanges": {
+ "changedVariables": [
+ "series_0",
+ "series_4",
+ "series_3"
+ ]
+ }
+ },
+ {
+ "variable": "series_3",
+ "contributionScore": 0.2993065960239115,
+ "correlationChanges": {
+ "changedVariables": [
+ "series_0",
+ "series_4",
+ "series_3"
+ ]
+ }
+ },
+ {
+ "variable": "series_1",
+ "contributionScore": 0.287678436215633,
+ "correlationChanges": {
+ "changedVariables": [
+ "series_0",
+ "series_4",
+ "series_3"
+ ]
+ }
+ }
+ ]
+ },
+ "errors": []
+ }
+ ]
+}
+```
+
+The response contains the result status, variable information, inference parameters, and inference results.
+
+* **variableStates**: This lists the information of each variable in the inference request.
+* **setupInfo**: This is the request body submitted for this inference.
+* **results**: This contains the detection results. There are three typical types of detection results.
+
+* Error code `InsufficientHistoricalData`. This usually happens only with the first few timestamps because the model inferences data in a window-based manner and it needs historical data to make a decision. For the first few timestamps, there's insufficient historical data, so inference can't be performed on them. In this case, the error message can be ignored.
+
+* **isAnomaly**: `false` indicates the current timestamp isn't an anomaly. `true` indicates an anomaly at the current timestamp.
+ * `severity` indicates the relative severity of the anomaly and for abnormal data it's always greater than 0.
+ * `score` is the raw output of the model on which the model makes a decision. `severity` is a derived value from `score`. Every data point has a `score`.
+
+* **interpretation**: This field appears only when a timestamp is detected as anomalous. It contains `variables`, `contributionScore`, and `correlationChanges`.
+
+* **contributors**: This is a list containing the contribution score of each variable. Higher contribution scores indicate higher possibility of the root cause. This list is often used for interpreting anomalies and diagnosing the root causes.
+
+* **correlationChanges**: This field appears only when a timestamp is detected as anomalous and is included in `interpretation`. It contains `changedVariables` and `changedValues`, which interpret which correlations between variables changed.
+
+* **changedVariables**: This field shows which variables have a significant change in correlation with `variable`. The variables in this list are ranked by the extent of the correlation changes.
+
+> [!NOTE]
+> A common pitfall is taking all data points with `isAnomaly`=`true` as anomalies. That may end up with too many false positives.
+> You should use both `isAnomaly` and `severity` (or `score`) to sift out anomalies that are not severe and (optionally) use grouping to check the duration of the anomalies to suppress random noise.
+> Please refer to the [FAQ](../concepts/best-practices-multivariate.md#faq) in the best practices document for the difference between `severity` and `score`.
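+
+For example, here's a minimal sketch of that sifting step, assuming `result` is the parsed batch response above and a hypothetical severity threshold:
+
+```python
+SEVERITY_THRESHOLD = 0.3  # assumption: tune this for your own noise level
+
+anomalies = [
+    point
+    for point in result["results"]
+    if point["value"]["isAnomaly"] and point["value"]["severity"] >= SEVERITY_THRESHOLD
+]
+for point in anomalies:
+    print(point["timestamp"], point["value"]["severity"])
+```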
+
+## Next steps
+
+* [Best practices of multivariate anomaly detection](../concepts/best-practices-multivariate.md)
cognitive-services Streaming Inference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Anomaly-Detector/How-to/streaming-inference.md
+
+ Title: Streaming inference with trained model
+
+description: Streaming inference with trained model
++++++ Last updated : 11/01/2022+++
+# Streaming inference with trained model
+
+You can choose either the batch inference API or the streaming inference API for detection.
+
+| Batch inference API | Streaming inference API |
+| - | - |
+| More suitable for batch use cases, when customers don't need to get inference results immediately and want to detect anomalies and get results over a longer time period.| When customers want to get inference results immediately and detect multivariate anomalies in real time, this API is recommended. It's also suitable for customers who have difficulty with the compressing and uploading process required for batch inference. |
+
+|API Name| Method | Path | Description |
+| | - | -- | |
+|**Batch Inference**| POST | `{endpoint}`/anomalydetector/v1.1/multivariate/models/`{modelId}`:detect-batch | Trigger an asynchronous inference with `modelId`, which works in a batch scenario |
+|**Get Batch Inference Results**| GET | `{endpoint}`/anomalydetector/v1.1/multivariate/detect-batch/`{resultId}` | Get batch inference results with `resultId` |
+|**Streaming Inference**| POST | `{endpoint}`/anomalydetector/v1.1/multivariate/models/`{modelId}`:detect-last | Trigger a synchronous inference with `modelId`, which works in a streaming scenario |
+
+## Trigger a streaming inference API
+
+### Request
+
+With the synchronous API, you can get inference results point by point in real time, with no need for the compressing and uploading tasks required for training and asynchronous inference. Here are some requirements for the synchronous API:
+* You need to put data in **JSON format** into the API request body.
+* Due to the payload limitation, the size of the inference data in the request body is limited: it supports at most `2880` timestamps * `300` variables, and requires at least `1 sliding window` of data.
+
+You can submit a window of timestamps for multiple variables in JSON format in the request body, with an API call like this:
+
+**{{endpoint}}/anomalydetector/v1.1/multivariate/models/{modelId}:detect-last**
+
+A sample request:
+
+```json
+{
+ "variables": [
+ {
+ "variableName": "Variable_1",
+ "timestamps": [
+ "2021-01-01T00:00:00Z",
+ "2021-01-01T00:01:00Z",
+ "2021-01-01T00:02:00Z"
+ //more timestamps
+ ],
+ "values": [
+ 0.4551378545933972,
+ 0.7388603950488748,
+ 0.201088255984052
+ //more values
+ ]
+ },
+ {
+ "variableName": "Variable_2",
+ "timestamps": [
+ "2021-01-01T00:00:00Z",
+ "2021-01-01T00:01:00Z",
+ "2021-01-01T00:02:00Z"
+ //more timestamps
+ ],
+ "values": [
+ 0.9617871613964145,
+ 0.24903311574778408,
+ 0.4920561254118613
+ //more values
+ ]
+ },
+ {
+ "variableName": "Variable_3",
+ "timestamps": [
+ "2021-01-01T00:00:00Z",
+ "2021-01-01T00:01:00Z",
+ "2021-01-01T00:02:00Z"
+ //more timestamps
+ ],
+ "values": [
+ 0.4030756879437628,
+ 0.15526889968448554,
+ 0.36352226408981103
+ //more values
+ ]
+ }
+ ],
+ "topContributorCount": 2
+}
+```
+
+#### Required parameters
+
+* **variableName**: This name should be exactly the same as in your training data.
+* **timestamps**: The number of timestamps should equal **1 sliding window**, since every streaming inference call uses one sliding window to detect the last point within it.
+* **values**: The values of each variable at every timestamp input above.
+
+#### Optional parameters
+
+* **topContributorCount**: A number N from **1 to 30** that specifies how many of the top contributing variables to include in the anomaly results. For example, if you have 100 variables in the model but only care about the top five contributing variables in the detection results, fill this field with 5. The default number is **10**.
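+
+Here's a minimal Python sketch of a detect-last call, assuming placeholder endpoint, key, and model ID values and a model trained with a `slidingWindow` of 200; random values stand in here for your latest real sliding window:
+
+```python
+import datetime as dt
+import random
+
+import requests
+
+ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
+KEY = "<your-anomaly-detector-key>"
+MODEL_ID = "<your-model-id>"
+WINDOW = 200  # assumption: matches the model's slidingWindow
+
+start = dt.datetime(2021, 1, 3, 1, 0, tzinfo=dt.timezone.utc)
+timestamps = [
+    (start + dt.timedelta(minutes=i)).strftime("%Y-%m-%dT%H:%M:%SZ")
+    for i in range(WINDOW)
+]
+body = {
+    "variables": [
+        {
+            "variableName": f"series_{i}",
+            "timestamps": timestamps,
+            "values": [random.random() for _ in range(WINDOW)],
+        }
+        for i in range(5)
+    ],
+    "topContributorCount": 2,
+}
+
+resp = requests.post(
+    f"{ENDPOINT}/anomalydetector/v1.1/multivariate/models/{MODEL_ID}:detect-last",
+    headers={"Ocp-Apim-Subscription-Key": KEY},
+    json=body,
+)
+last = resp.json()["results"][-1]
+print(last["timestamp"], last["value"]["isAnomaly"], last["value"]["severity"])
+```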
+
+### Response
+
+A sample response:
+
+```json
+{
+ "variableStates": [
+ {
+ "variable": "series_0",
+ "filledNARatio": 0.0,
+ "effectiveCount": 1,
+ "firstTimestamp": "2021-01-03T01:59:00Z",
+ "lastTimestamp": "2021-01-03T01:59:00Z"
+ },
+ {
+ "variable": "series_1",
+ "filledNARatio": 0.0,
+ "effectiveCount": 1,
+ "firstTimestamp": "2021-01-03T01:59:00Z",
+ "lastTimestamp": "2021-01-03T01:59:00Z"
+ },
+ {
+ "variable": "series_2",
+ "filledNARatio": 0.0,
+ "effectiveCount": 1,
+ "firstTimestamp": "2021-01-03T01:59:00Z",
+ "lastTimestamp": "2021-01-03T01:59:00Z"
+ },
+ {
+ "variable": "series_3",
+ "filledNARatio": 0.0,
+ "effectiveCount": 1,
+ "firstTimestamp": "2021-01-03T01:59:00Z",
+ "lastTimestamp": "2021-01-03T01:59:00Z"
+ },
+ {
+ "variable": "series_4",
+ "filledNARatio": 0.0,
+ "effectiveCount": 1,
+ "firstTimestamp": "2021-01-03T01:59:00Z",
+ "lastTimestamp": "2021-01-03T01:59:00Z"
+ }
+ ],
+ "results": [
+ {
+ "timestamp": "2021-01-03T01:59:00Z",
+ "value": {
+ "isAnomaly": false,
+ "severity": 0.0,
+ "score": 0.2675322890281677,
+ "interpretation": []
+ },
+ "errors": []
+ }
+ ]
+}
+```
+
+The response contains the result status, variable information, inference parameters, and inference results.
+
+* **variableStates**: This lists the information of each variable in the inference request.
+* **setupInfo**: This is the request body submitted for this inference.
+* **results**: This contains the detection results. There are three typical types of detection results.
+
+* **isAnomaly**: `false` indicates the current timestamp isn't an anomaly. `true` indicates an anomaly at the current timestamp.
+ * `severity` indicates the relative severity of the anomaly and for abnormal data it's always greater than 0.
+ * `score` is the raw output of the model on which the model makes a decision. `severity` is a derived value from `score`. Every data point has a `score`.
+
+* **interpretation**: This field appears only when a timestamp is detected as anomalous. It contains `variables`, `contributionScore`, and `correlationChanges`.
+
+* **contributors**: This is a list containing the contribution score of each variable. Higher contribution scores indicate higher possibility of the root cause. This list is often used for interpreting anomalies and diagnosing the root causes.
+
+* **correlationChanges**: This field appears only when a timestamp is detected as anomalous and is included in `interpretation`. It contains `changedVariables` and `changedValues`, which interpret which correlations between variables changed.
+
+* **changedVariables**: This field shows which variables have a significant change in correlation with `variable`. The variables in this list are ranked by the extent of the correlation changes.
+
+> [!NOTE]
+> A common pitfall is taking all data points with `isAnomaly`=`true` as anomalies. That may end up with too many false positives.
+> You should use both `isAnomaly` and `severity` (or `score`) to sift out anomalies that are not severe and (optionally) use grouping to check the duration of the anomalies to suppress random noise.
+> Please refer to the [FAQ](../concepts/best-practices-multivariate.md#faq) in the best practices document for the difference between `severity` and `score`.
+
+## Next steps
+
+* [Multivariate Anomaly Detection reference architecture](../concepts/multivariate-architecture.md)
+* [Best practices of multivariate anomaly detection](../concepts/best-practices-multivariate.md)
cognitive-services Train Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Anomaly-Detector/How-to/train-model.md
+
+ Title: Train a Multivariate Anomaly Detection model
+
+description: Train a Multivariate Anomaly Detection model
++++++ Last updated : 11/01/2022+++
+# Train a Multivariate Anomaly Detection model
+
+To test out Multivariate Anomaly Detection quickly, try the [Code Sample](https://github.com/Azure-Samples/AnomalyDetector)! For more instructions on how to run a Jupyter notebook, please refer to [Install and Run a Jupyter Notebook](https://jupyter-notebook-beginner-guide.readthedocs.io/en/latest/install.html#).
+
+## API Overview
+
+There are 7 APIs provided in Multivariate Anomaly Detection:
+* **Training**: Use `Train Model API` to create and train a model, then use `Get Model Status API` to get the status and model metadata.
+* **Inference**:
+ * Use `Async Inference API` to trigger an asynchronous inference process and use `Get Inference results API` to get detection results on a batch of data.
+ * You could also use the `Sync Inference API` to trigger a detection on one timestamp at a time.
+* **Other operations**: `List Model API` and `Delete Model API` are supported in Multivariate Anomaly Detection for model management.
+
+![Diagram of model training workflow and inference workflow](../media/train-model/api-workflow.png)
+
+|API Name| Method | Path | Description |
+| | - | -- | |
+|**Train Model**| POST | `{endpoint}`/anomalydetector/v1.1/multivariate/models | Create and train a model |
+|**Get Model Status**| GET | `{endpoint}`/anomalydetector/v1.1/multivariate/models/`{modelId}` | Get model status and model metadata with `modelId` |
+|**Batch Inference**| POST | `{endpoint}`/anomalydetector/v1.1/multivariate/models/`{modelId}`:detect-batch | Trigger an asynchronous inference with `modelId`, which works in a batch scenario |
+|**Get Batch Inference Results**| GET | `{endpoint}`/anomalydetector/v1.1/multivariate/detect-batch/`{resultId}` | Get batch inference results with `resultId` |
+|**Streaming Inference**| POST | `{endpoint}`/anomalydetector/v1.1/multivariate/models/`{modelId}`:detect-last | Trigger a synchronous inference with `modelId`, which works in a streaming scenario |
+|**List Model**| GET | `{endpoint}`/anomalydetector/v1.1/multivariate/models | List all models |
+|**Delete Model**| DELETE | `{endpoint}`/anomalydetector/v1.1/multivariate/models/`{modelId}` | Delete model with `modelId` |
+
+## Train a model
+
+In this process, you'll use the following information that you created previously:
+
+* **Key** of Anomaly Detector resource
+* **Endpoint** of Anomaly Detector resource
+* **Blob URL** of your data in Storage Account
+
+For training data size, the maximum number of timestamps is **1,000,000**, and the recommended minimum is **5,000** timestamps.
+
+### Request
+
+Here's a sample request body to train a Multivariate Anomaly Detection model.
+
+```json
+{
+ "slidingWindow": 200,
+ "alignPolicy": {
+ "alignMode": "Outer",
+ "fillNAMethod": "Linear",
+ "paddingValue": 0
+ },
+ "dataSource": "{{dataSource}}", //Example: https://mvaddataset.blob.core.windows.net/sample-onetable/sample_data_5_3000.csv
+ "dataSchema": "OneTable",
+ "startTime": "2021-01-01T00:00:00Z",
+ "endTime": "2021-01-02T09:19:00Z",
+ "displayName": "SampleRequest"
+}
+```
+
+#### Required parameters
+
+The following parameters are required in training and inference API requests:
+
+* **dataSource**: This is the blob URL that links to your folder or CSV file in Azure Blob Storage.
+* **dataSchema**: This indicates the schema that you're using: `OneTable` or `MultiTable`.
+* **startTime**: The start time of data used for training or inference. If it's earlier than the actual earliest timestamp in the data, the actual earliest timestamp will be used as the starting point.
+* **endTime**: The end time of data used for training or inference, which must be later than or equal to `startTime`. If `endTime` is later than the actual latest timestamp in the data, the actual latest timestamp will be used as the ending point. If `endTime` equals `startTime`, it means inference of a single data point, which is often used in streaming scenarios.
+
+#### Optional parameters
+
+Other parameters for training API are optional:
+
+* **slidingWindow**: How many data points are used to determine anomalies. An integer between 28 and 2,880. The default value is 300. If `slidingWindow` is `k` for model training, then at least `k` points should be accessible from the source file during inference to get valid results.
+
+ Multivariate Anomaly Detection takes a segment of data points to decide if the next data point is an anomaly. The length of the segment is the `slidingWindow`.
+ Please keep two things in mind when choosing a `slidingWindow` value:
+ 1. The properties of your data: whether it's periodic and the sampling rate. When your data is periodic, you could set the length of 1 - 3 cycles as the `slidingWindow`. When your data is at a high frequency (small granularity) like minute-level or second-level, you could set a relatively higher value of `slidingWindow`.
+ 1. The trade-off between training/inference time and potential performance impact. A larger `slidingWindow` may cause longer training/inference time. There's **no guarantee** that larger `slidingWindow`s will lead to accuracy gains. A small `slidingWindow` may make it difficult for the model to converge on an optimal solution. For example, it's hard to detect anomalies when `slidingWindow` has only two points.
+
+* **alignMode**: How to align multiple variables (time series) on timestamps. There are two options for this parameter, `Inner` and `Outer`, and the default value is `Outer`.
+
+ This parameter is critical when there's misalignment between timestamp sequences of the variables. The model needs to align the variables onto the same timestamp sequence before further processing.
+
+ `Inner` means the model will report detection results only on timestamps on which **every variable** has a value, that is, the intersection of all variables. `Outer` means the model will report detection results on timestamps on which **any variable** has a value, that is, the union of all variables.
+
+  Here's an example to explain different `alignMode` values.
+
+ *Variable-1*
+
+ |timestamp | value|
+ -| --|
+ |2020-11-01| 1
+ |2020-11-02| 2
+ |2020-11-04| 4
+ |2020-11-05| 5
+
+ *Variable-2*
+
+ timestamp | value
+ | -
+ 2020-11-01| 1
+ 2020-11-02| 2
+ 2020-11-03| 3
+ 2020-11-04| 4
+
+ *`Inner` join two variables*
+
+ timestamp | Variable-1 | Variable-2
+ -| - | -
+ 2020-11-01| 1 | 1
+ 2020-11-02| 2 | 2
+ 2020-11-04| 4 | 4
+
+ *`Outer` join two variables*
+
+ timestamp | Variable-1 | Variable-2
+ | - | -
+ 2020-11-01| 1 | 1
+ 2020-11-02| 2 | 2
+ 2020-11-03| `nan` | 3
+ 2020-11-04| 4 | 4
+ 2020-11-05| 5 | `nan`
+
+* **fillNAMethod**: How to fill `nan` values in the merged table. There might be missing values in the merged table, and they should be properly handled. We provide several methods to fill them; a short illustration follows this list. The options are `Linear`, `Previous`, `Subsequent`, `Zero`, and `Fixed`, and the default value is `Linear`.
+
+ | Option | Method |
+ | - | -|
+ | `Linear` | Fill `nan` values by linear interpolation |
+ | `Previous` | Propagate last valid value to fill gaps. Example: `[1, 2, nan, 3, nan, 4]` -> `[1, 2, 2, 3, 3, 4]` |
+ | `Subsequent` | Use next valid value to fill gaps. Example: `[1, 2, nan, 3, nan, 4]` -> `[1, 2, 3, 3, 4, 4]` |
+ | `Zero` | Fill `nan` values with 0. |
+ | `Fixed` | Fill `nan` values with a specified valid value that should be provided in `paddingValue`. |
+
+* **paddingValue**: Padding value is used to fill `nan` when `fillNAMethod` is `Fixed` and must be provided in that case. In other cases it's optional.
+
+* **displayName**: This is an optional parameter, which is used to identify models. For example, you can use it to mark parameters, data sources, and any other metadata about the model and its input data. The default value is an empty string.
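+
+To make the fill options concrete, here's a minimal pandas sketch that mirrors each `fillNAMethod` option on the example series from the table above (pandas is used here purely for illustration, not as what the service runs internally):
+
+```python
+import pandas as pd
+
+s = pd.Series([1, 2, None, 3, None, 4], dtype="float64")
+
+print(s.interpolate().tolist())  # Linear     -> [1.0, 2.0, 2.5, 3.0, 3.5, 4.0]
+print(s.ffill().tolist())        # Previous   -> [1.0, 2.0, 2.0, 3.0, 3.0, 4.0]
+print(s.bfill().tolist())        # Subsequent -> [1.0, 2.0, 3.0, 3.0, 4.0, 4.0]
+print(s.fillna(0).tolist())      # Zero       -> [1.0, 2.0, 0.0, 3.0, 0.0, 4.0]
+print(s.fillna(9.9).tolist())    # Fixed, with paddingValue=9.9
+```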
+
+### Response
+
+Within the response, the most important thing is the `modelId`, which you'll use to trigger the Get Model Status API.
+
+A response sample:
+
+```json
+{
+ "modelId": "09c01f3e-5558-11ed-bd35-36f8cdfb3365",
+ "createdTime": "2022-11-01T00:00:00Z",
+ "lastUpdatedTime": "2022-11-01T00:00:00Z",
+ "modelInfo": {
+ "dataSource": "https://mvaddataset.blob.core.windows.net/sample-onetable/sample_data_5_3000.csv",
+ "dataSchema": "OneTable",
+ "startTime": "2021-01-01T00:00:00Z",
+ "endTime": "2021-01-02T09:19:00Z",
+ "displayName": "SampleRequest",
+ "slidingWindow": 200,
+ "alignPolicy": {
+ "alignMode": "Outer",
+ "fillNAMethod": "Linear",
+ "paddingValue": 0.0
+ },
+ "status": "CREATED",
+ "errors": [],
+ "diagnosticsInfo": {
+ "modelState": {
+ "epochIds": [],
+ "trainLosses": [],
+ "validationLosses": [],
+ "latenciesInSeconds": []
+ },
+ "variableStates": []
+ }
+ }
+}
+```
+
+## Get model status
+
+You can use the above API to trigger training, and then use the **Get Model Status API** to check whether the model trained successfully.
+
+### Request
+
+There's no content in the request body; you only need to put the `modelId` in the API path, which has the following format:
+**{{endpoint}}/anomalydetector/v1.1/multivariate/models/{{modelId}}**
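+
+Here's a minimal Python sketch that submits the training request above and then polls this endpoint until the model is `READY` or `FAILED`, assuming placeholder endpoint, key, and blob URL values:
+
+```python
+import time
+
+import requests
+
+ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
+KEY = "<your-anomaly-detector-key>"
+headers = {"Ocp-Apim-Subscription-Key": KEY}
+
+# Submit the training job; the response body carries the new modelId.
+resp = requests.post(
+    f"{ENDPOINT}/anomalydetector/v1.1/multivariate/models",
+    headers=headers,
+    json={
+        "slidingWindow": 200,
+        "alignPolicy": {
+            "alignMode": "Outer",
+            "fillNAMethod": "Linear",
+            "paddingValue": 0,
+        },
+        "dataSource": "<your-blob-url>",
+        "dataSchema": "OneTable",
+        "startTime": "2021-01-01T00:00:00Z",
+        "endTime": "2021-01-02T09:19:00Z",
+        "displayName": "SampleRequest",
+    },
+)
+model_id = resp.json()["modelId"]
+
+# Poll the Get Model Status API until training finishes.
+while True:
+    model = requests.get(
+        f"{ENDPOINT}/anomalydetector/v1.1/multivariate/models/{model_id}",
+        headers=headers,
+    ).json()
+    status = model["modelInfo"]["status"]
+    if status in ("READY", "FAILED"):
+        break
+    time.sleep(10)
+
+print(model_id, status)
+```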
+
+### Response
+
+* **status**: The `status` in the response body indicates the model status with this category: *CREATED, RUNNING, READY, FAILED.*
+* **trainLosses & validationLosses**: These are two machine learning concepts indicating the model performance. If the numbers are decreasing and finally to a relatively small number like 0.2, 0.3, then it means the model performance is good to some extent. However, the model performance still needs to be validated through inference and the comparison with labels if any.
+* **epochIds**: indicates how many epochs the model has been trained out of a total of 100 epochs. For example, if the model is still in training status, `epochId` might be `[10, 20, 30, 40, 50]` , which means that it has completed its 50th training epoch, and therefore is halfway complete.
+* **latenciesInSeconds**: contains the time cost for each epoch and is recorded every 10 epochs. In this example, the 10th epoch takes approximately 0.34 second. This would be helpful to estimate the completion time of training.
+* **variableStates**: summarizes information about each variable. It's a list ranked by `filledNARatio` in descending order. It tells how many data points are used for each variable and `filledNARatio` tells how many points are missing. Usually we need to reduce `filledNARatio` as much as possible.
+Too many missing data points will deteriorate model accuracy.
+* **errors**: Errors during data processing will be included in the `errors` field.
+
+A response sample:
+
+```json
+{
+ "modelId": "09c01f3e-5558-11ed-bd35-36f8cdfb3365",
+ "createdTime": "2022-11-01T00:00:12Z",
+ "lastUpdatedTime": "2022-11-01T00:00:12Z",
+ "modelInfo": {
+ "dataSource": "https://mvaddataset.blob.core.windows.net/sample-onetable/sample_data_5_3000.csv",
+ "dataSchema": "OneTable",
+ "startTime": "2021-01-01T00:00:00Z",
+ "endTime": "2021-01-02T09:19:00Z",
+ "displayName": "SampleRequest",
+ "slidingWindow": 200,
+ "alignPolicy": {
+ "alignMode": "Outer",
+ "fillNAMethod": "Linear",
+ "paddingValue": 0.0
+ },
+ "status": "READY",
+ "errors": [],
+ "diagnosticsInfo": {
+ "modelState": {
+ "epochIds": [
+ 10,
+ 20,
+ 30,
+ 40,
+ 50,
+ 60,
+ 70,
+ 80,
+ 90,
+ 100
+ ],
+ "trainLosses": [
+ 0.30325182933699,
+ 0.24335388161919333,
+ 0.22876543213020673,
+ 0.2439815090461211,
+ 0.22489577260884372,
+ 0.22305156764659015,
+ 0.22466289590705524,
+ 0.22133831883018668,
+ 0.2214335961775346,
+ 0.22268397090109912
+ ],
+ "validationLosses": [
+ 0.29047123109451445,
+ 0.263965221366497,
+ 0.2510373182971068,
+ 0.27116744686858824,
+ 0.2518718700216274,
+ 0.24802495975687047,
+ 0.24790137705176768,
+ 0.24640804830223623,
+ 0.2463938973166726,
+ 0.24831805566344597
+ ],
+ "latenciesInSeconds": [
+ 2.1662967205047607,
+ 2.0658926963806152,
+ 2.112030029296875,
+ 2.130472183227539,
+ 2.183091640472412,
+ 2.1442034244537354,
+ 2.117824077606201,
+ 2.1345198154449463,
+ 2.0993552207946777,
+ 2.1198465824127197
+ ]
+ },
+ "variableStates": [
+ {
+ "variable": "series_0",
+ "filledNARatio": 0.0004999999999999449,
+ "effectiveCount": 1999,
+ "firstTimestamp": "2021-01-01T00:01:00Z",
+ "lastTimestamp": "2021-01-02T09:19:00Z"
+ },
+ {
+ "variable": "series_1",
+ "filledNARatio": 0.0004999999999999449,
+ "effectiveCount": 1999,
+ "firstTimestamp": "2021-01-01T00:01:00Z",
+ "lastTimestamp": "2021-01-02T09:19:00Z"
+ },
+ {
+ "variable": "series_2",
+ "filledNARatio": 0.0004999999999999449,
+ "effectiveCount": 1999,
+ "firstTimestamp": "2021-01-01T00:01:00Z",
+ "lastTimestamp": "2021-01-02T09:19:00Z"
+ },
+ {
+ "variable": "series_3",
+ "filledNARatio": 0.0004999999999999449,
+ "effectiveCount": 1999,
+ "firstTimestamp": "2021-01-01T00:01:00Z",
+ "lastTimestamp": "2021-01-02T09:19:00Z"
+ },
+ {
+ "variable": "series_4",
+ "filledNARatio": 0.0004999999999999449,
+ "effectiveCount": 1999,
+ "firstTimestamp": "2021-01-01T00:01:00Z",
+ "lastTimestamp": "2021-01-02T09:19:00Z"
+ }
+ ]
+ }
+ }
+}
+```
+
+## List models
+
+You may refer to [this page](https://westus2.dev.cognitive.microsoft.com/docs/services/AnomalyDetector-v1-1/operations/ListMultivariateModel) for information about the request URL and request headers. Notice that only 10 models are returned at a time, ordered by update time, but you can retrieve other models by setting the `$skip` and `$top` parameters in the request URL. For example, if your request URL is `https://{endpoint}/anomalydetector/v1.1/multivariate/models?$skip=10&$top=20`, then we'll skip the latest 10 models and return the next 20 models.
+
+A sample response:
+
+```json
+{
+ "models": [
+ {
+ "modelId": "09c01f3e-5558-11ed-bd35-36f8cdfb3365",
+ "createdTime": "2022-10-26T18:00:12Z",
+ "lastUpdatedTime": "2022-10-26T18:03:53Z",
+ "modelInfo": {
+ "dataSource": "https://mvaddataset.blob.core.windows.net/sample-onetable/sample_data_5_3000.csv",
+ "dataSchema": "OneTable",
+ "startTime": "2021-01-01T00:00:00Z",
+ "endTime": "2021-01-02T09:19:00Z",
+ "displayName": "SampleRequest",
+ "slidingWindow": 200,
+ "alignPolicy": {
+ "alignMode": "Outer",
+ "fillNAMethod": "Linear",
+ "paddingValue": 0.0
+ },
+ "status": "READY",
+ "errors": [],
+ "diagnosticsInfo": {
+ "modelState": {
+ "epochIds": [
+ 10,
+ 20,
+ 30,
+ 40,
+ 50,
+ 60,
+ 70,
+ 80,
+ 90,
+ 100
+ ],
+ "trainLosses": [
+ 0.30325182933699,
+ 0.24335388161919333,
+ 0.22876543213020673,
+ 0.2439815090461211,
+ 0.22489577260884372,
+ 0.22305156764659015,
+ 0.22466289590705524,
+ 0.22133831883018668,
+ 0.2214335961775346,
+ 0.22268397090109912
+ ],
+ "validationLosses": [
+ 0.29047123109451445,
+ 0.263965221366497,
+ 0.2510373182971068,
+ 0.27116744686858824,
+ 0.2518718700216274,
+ 0.24802495975687047,
+ 0.24790137705176768,
+ 0.24640804830223623,
+ 0.2463938973166726,
+ 0.24831805566344597
+ ],
+ "latenciesInSeconds": [
+ 2.1662967205047607,
+ 2.0658926963806152,
+ 2.112030029296875,
+ 2.130472183227539,
+ 2.183091640472412,
+ 2.1442034244537354,
+ 2.117824077606201,
+ 2.1345198154449463,
+ 2.0993552207946777,
+ 2.1198465824127197
+ ]
+ },
+ "variableStates": [
+ {
+ "variable": "series_0",
+ "filledNARatio": 0.0004999999999999449,
+ "effectiveCount": 1999,
+ "firstTimestamp": "2021-01-01T00:01:00Z",
+ "lastTimestamp": "2021-01-02T09:19:00Z"
+ },
+ {
+ "variable": "series_1",
+ "filledNARatio": 0.0004999999999999449,
+ "effectiveCount": 1999,
+ "firstTimestamp": "2021-01-01T00:01:00Z",
+ "lastTimestamp": "2021-01-02T09:19:00Z"
+ },
+ {
+ "variable": "series_2",
+ "filledNARatio": 0.0004999999999999449,
+ "effectiveCount": 1999,
+ "firstTimestamp": "2021-01-01T00:01:00Z",
+ "lastTimestamp": "2021-01-02T09:19:00Z"
+ },
+ {
+ "variable": "series_3",
+ "filledNARatio": 0.0004999999999999449,
+ "effectiveCount": 1999,
+ "firstTimestamp": "2021-01-01T00:01:00Z",
+ "lastTimestamp": "2021-01-02T09:19:00Z"
+ },
+ {
+ "variable": "series_4",
+ "filledNARatio": 0.0004999999999999449,
+ "effectiveCount": 1999,
+ "firstTimestamp": "2021-01-01T00:01:00Z",
+ "lastTimestamp": "2021-01-02T09:19:00Z"
+ }
+ ]
+ }
+ }
+ }
+ ],
+ "currentCount": 42,
+ "maxCount": 1000,
+ "nextLink": ""
+}
+```
+
+The response contains four fields: `models`, `currentCount`, `maxCount`, and `nextLink`.
+
+* **models**: The created time, last updated time, model ID, display name, variable counts, and status of each model.
+* **currentCount**: The number of trained multivariate models in your Anomaly Detector resource.
+* **maxCount**: The maximum number of models supported by your Anomaly Detector resource, which depends on the pricing tier that you choose.
+* **nextLink**: A URL you can use to fetch more models, because a single API response lists at most **10** models.
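+
+Because results are paged, listing every model is a short loop. The following is a minimal sketch, assuming a hypothetical resource endpoint and key and using the `requests` library, that follows `nextLink` until no models remain:
+
+```python
+import requests
+
+endpoint = "https://<your-resource>.cognitiveservices.azure.com"  # hypothetical endpoint
+key = "<your-anomaly-detector-key>"  # hypothetical key
+headers = {"Ocp-Apim-Subscription-Key": key}
+
+url = f"{endpoint}/anomalydetector/v1.1/multivariate/models?$skip=0&$top=10"
+models = []
+while url:
+    response = requests.get(url, headers=headers)
+    response.raise_for_status()
+    body = response.json()
+    models.extend(body["models"])
+    url = body["nextLink"] or None  # nextLink is empty when no more models remain
+
+print(f"Retrieved {len(models)} of {body['currentCount']} models")
+```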
+
+## Next steps
+
+* [Best practices of multivariate anomaly detection](../concepts/best-practices-multivariate.md)
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Anomaly-Detector/whats-new.md
description: This article is regularly updated with news about the Azure Cogniti
Previously updated : 06/03/2022 Last updated : 11/01/2022 # What's new in Anomaly Detector
We've also added links to some user-generated content. Those items will be marke
## Release notes
+### November 2022
+
+* Multivariate Anomaly Detection is now a generally available feature in the Anomaly Detector service, with a better user experience and better model performance. Learn more about [how to use the latest Multivariate Anomaly Detection](quickstarts/client-libraries-multivariate.md).
+
+### June 2022
+
+* New blog released: [4 sets of best practices to use Multivariate Anomaly Detector when monitoring your equipment](https://techcommunity.microsoft.com/t5/ai-cognitive-services-blog/4-sets-of-best-practices-to-use-multivariate-anomaly-detector/ba-p/3490848#footerContent).
+ ### May 2022 * New blog released: [Detect anomalies in equipment with Multivariate Anomaly Detector in Azure Databricks](https://techcommunity.microsoft.com/t5/ai-cognitive-services-blog/detect-anomalies-in-equipment-with-anomaly-detector-in-azure/ba-p/3390688).
cognitive-services Overview Ocr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/overview-ocr.md
Title: What is Optical Character Recognition (OCR)?
+ Title: OCR - Optical Character Recognition
-description: The optical character recognition (OCR) service extracts print and handwritten text from images.
+description: Learn how the optical character recognition (OCR) services extract print and handwritten text from images and documents in global languages.
-# What is Optical Character Recognition (OCR)
+# OCR - Optical Character Recognition
OCR or Optical Character Recognition is also referred to as text recognition or text extraction. Machine-learning-based OCR techniques allow you to extract printed or handwritten text from images, such as posters, street signs, and product labels, as well as from documents like articles, reports, forms, and invoices. The text is typically extracted as words, text lines, and paragraphs or text blocks, enabling access to a digital version of the scanned text. This eliminates or significantly reduces the need for manual data entry.
OCR or Optical Character Recognition is also referred to as text recognition or
Intelligent Document Processing (IDP) uses OCR as its foundational technology to additionally extract structure, relationships, key-values, entities, and other document-centric insights with an advanced machine-learning based AI service like [Form Recognizer](../../applied-ai-services/form-recognizer/overview.md). Form Recognizer includes a document-optimized version of **Read** as its OCR engine while delegating to other models for higher-end insights. If you are extracting text from scanned and digital documents, use [Form Recognizer Read OCR](../../applied-ai-services/form-recognizer/concept-read.md).
-## Read OCR engine
+## OCR engine
Microsoft's **Read** OCR engine is composed of multiple advanced machine-learning-based models supporting [global languages](./language-support.md). These models extract printed and handwritten text, including text that mixes languages and writing styles. **Read** is available as a cloud service and an on-premises container for deployment flexibility. With the latest preview, it's also available as a synchronous API for single, non-document, image-only scenarios, with performance enhancements that make it easier to implement OCR-assisted user experiences.
+> [!WARNING]
+> The Computer Vision legacy [ocr](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f20d) and [RecognizeText](https://westus.dev.cognitive.microsoft.com/docs/services/5cd27ec07268f6c679a3e641/operations/587f2c6a1540550560080311) operations are no longer supported and should not be used.
+ [!INCLUDE [read-editions](includes/read-editions.md)] ## How to use OCR
Try out OCR by using Vision Studio. Then follow one of the links to the Read edi
:::image type="content" source="Images/vision-studio-ocr-demo.png" alt-text="Screenshot: Read OCR demo in Vision Studio.":::
-## Supported languages
+## OCR supported languages
Both **Read** versions available today in Computer Vision support several languages for printed and handwritten text. OCR for printed text includes support for English, French, German, Italian, Portuguese, Spanish, Chinese, Japanese, Korean, Russian, Arabic, Hindi, and other international languages that use Latin, Cyrillic, Arabic, and Devanagari scripts. OCR for handwritten text includes support for English, Chinese Simplified, French, German, Italian, Japanese, Korean, Portuguese, and Spanish languages. Refer to the full list of [OCR-supported languages](./language-support.md#optical-character-recognition-ocr).
-## Read OCR common features
+## OCR common features
The Read OCR model is available in Computer Vision and Form Recognizer with common baseline capabilities while optimizing for respective scenarios. The following list summarizes the common features:
The Read OCR model is available in Computer Vision and Form Recognizer with comm
* Support for mixed languages, mixed mode (print and handwritten) * Available as Distroless Docker container for on-premises deployment
-## Use the cloud APIs or deploy on-premises
+## Use the OCR cloud APIs or deploy on-premises
The cloud APIs are the preferred option for most customers because of their ease of integration and fast productivity out of the box. Azure and the Computer Vision service handle scale, performance, data security, and compliance needs while you focus on meeting your customers' needs. For on-premises deployment, the [Read Docker container (preview)](./computer-vision-how-to-install-containers.md) enables you to deploy the Computer Vision v3.2 generally available OCR capabilities in your own local environment. Containers are great for specific security and data governance requirements.
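+
+As a rough illustration of the cloud API shape, here's a minimal sketch that submits an image URL to the generally available v3.2 **Read** operation and polls for the result; the resource endpoint, key, and image URL are placeholder assumptions:
+
+```python
+import time
+import requests
+
+endpoint = "https://<your-resource>.cognitiveservices.azure.com"  # hypothetical endpoint
+key = "<your-computer-vision-key>"  # hypothetical key
+headers = {"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"}
+
+# Submit the image; Read runs asynchronously and returns an Operation-Location header.
+submit = requests.post(
+    f"{endpoint}/vision/v3.2/read/analyze",
+    headers=headers,
+    json={"url": "https://example.com/street-sign.jpg"})  # hypothetical image URL
+submit.raise_for_status()
+result_url = submit.headers["Operation-Location"]
+
+# Poll until the operation completes, then print each recognized line of text.
+while True:
+    result = requests.get(result_url, headers=headers).json()
+    if result["status"] in ("succeeded", "failed"):
+        break
+    time.sleep(1)
+
+if result["status"] == "succeeded":
+    for page in result["analyzeResult"]["readResults"]:
+        for line in page["lines"]:
+            print(line["text"])
+```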
-> [!WARNING]
-> The Computer Vision [ocr](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f20d) and [RecognizeText](https://westus.dev.cognitive.microsoft.com/docs/services/5cd27ec07268f6c679a3e641/operations/587f2c6a1540550560080311) operations are no longer supported and should not be used.
-
-## Data privacy and security
+## OCR data privacy and security
As with all of the Cognitive Services, developers using the Computer Vision service should be aware of Microsoft's policies on customer data. See the [Cognitive Services page](https://www.microsoft.com/trustcenter/cloudservices/cognitiveservices) on the Microsoft Trust Center to learn more. ## Next steps -- For general (non-document) images, try the [Computer Vision 4.0 preview Image Analysis REST API quickstart](./concept-ocr.md).-- For PDF, Office and HTML documents and document images, start with [Form Recognizer Read](../../applied-ai-services/form-recognizer/concept-read.md).
+- OCR for general (non-document) images - try the [Computer Vision 4.0 preview Image Analysis REST API quickstart](./concept-ocr.md).
+- OCR for PDF, Office, and HTML documents and document images - start with [Form Recognizer Read](../../applied-ai-services/form-recognizer/concept-read.md).
- Looking for the previous GA version? Refer to the [Computer Vision 3.2 GA SDK or REST API quickstarts](./quickstarts-sdk/client-library.md).
cognitive-services Developer Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/concepts/developer-guide.md
Previously updated : 09/15/2022 Last updated : 10/31/2022
The conversation analysis authoring API enables you to author custom models and
* [Conversational language understanding](../conversational-language-understanding/quickstart.md?pivots=rest-api) * [Orchestration workflow](../orchestration-workflow/quickstart.md?pivots=rest-api)
-As you use this API in your application, see the [reference documentation](/rest/api/language/conversational-analysis-authoring) for additional information.
+As you use this API in your application, see the [reference documentation](/rest/api/language/2022-05-01/conversational-analysis-authoring) for additional information.
### Conversation analysis runtime API
It additionally enables you to use the following features, without creating any
* [Conversation summarization](../summarization/quickstart.md?pivots=rest-api&tabs=conversation-summarization) * [Personally Identifiable Information (PII) detection for conversations](../personally-identifiable-information/how-to-call-for-conversations.md?tabs=rest-api#examples)
-As you use this API in your application, see the [reference documentation](/rest/api/language/conversation-analysis-runtime) for additional information.
+As you use this API in your application, see the [reference documentation](/rest/api/language/2022-05-01/conversation-analysis-runtime) for additional information.
### Text analysis authoring API
The text analysis authoring API enables you to author custom models and create/m
* [Custom named entity recognition](../custom-named-entity-recognition/quickstart.md?pivots=rest-api) * [Custom text classification](../custom-text-classification/quickstart.md?pivots=rest-api)
-As you use this API in your application, see the [reference documentation](/rest/api/language/text-analysis-authoring) for additional information.
+As you use this API in your application, see the [reference documentation](/rest/api/language/2022-05-01/text-analysis-authoring) for additional information.
### Text analysis runtime API
It additionally enables you to use the following features, without creating any
* [Sentiment analysis and opinion mining](../sentiment-opinion-mining/quickstart.md?pivots=rest-api) * [Text analytics for health](../text-analytics-for-health/quickstart.md?pivots=rest-api)
-As you use this API in your application, see the [reference documentation](/rest/api/language/text-analysis-runtime) for additional information.
+As you use this API in your application, see the [reference documentation](/rest/api/language/2022-05-01/text-analysis-runtime/analyze-text) for additional information.
### Question answering APIs
cognitive-services Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/concepts/role-based-access-control.md
Previously updated : 08/23/2022 Last updated : 10/31/2022
A user that should only be validating and reviewing the Language apps, typically
:::column-end::: :::column span=""::: All GET APIs under:
- * [Language authoring conversational language understanding APIs](/rest/api/language/conversational-analysis-authoring)
- * [Language authoring text analysis APIs](/rest/api/language/text-analysis-authoring)
+ * [Language authoring conversational language understanding APIs](/rest/api/language/2022-05-01/conversational-analysis-authoring)
+ * [Language authoring text analysis APIs](/rest/api/language/2022-05-01/text-analysis-authoring)
* [Question answering projects](/rest/api/cognitiveservices/questionanswering/question-answering-projects) Only `TriggerExportProjectJob` POST operation under:
- * [Language authoring conversational language understanding export API](/rest/api/language/conversational-analysis-authoring/export?tabs=HTTP)
- * [Language authoring text analysis export API](/rest/api/language/text-analysis-authoring/export?tabs=HTTP)
+ * [Language authoring conversational language understanding export API](/rest/api/language/2022-05-01/conversational-analysis-authoring/export)
+ * [Language authoring text analysis export API](/rest/api/language/2022-05-01/text-analysis-authoring/export)
Only Export POST operation under: * [Question Answering Projects](/rest/api/cognitiveservices/questionanswering/question-answering-projects/export) All the Batch Testing Web APIs
- *[Language Runtime CLU APIs](/rest/api/language/conversation-analysis-runtime)
- *[Language Runtime Text Analysis APIs](/rest/api/language/text-analysis-runtime)
+ * [Language Runtime CLU APIs](/rest/api/language/2022-05-01/conversation-analysis-runtime)
+ * [Language Runtime Text Analysis APIs](/rest/api/language/2022-05-01/text-analysis-runtime/analyze-text)
:::column-end::: :::row-end:::
A user that is responsible for building and modifying an application, as a colla
:::column span=""::: * All APIs under Language reader * All POST, PUT and PATCH APIs under:
- * [Language conversational language understanding APIs](/rest/api/language/conversational-analysis-authoring)
- * [Language text analysis APIs](/rest/api/language/text-analysis-authoring)
+ * [Language conversational language understanding APIs](/rest/api/language/2022-05-01/conversational-analysis-authoring)
+ * [Language text analysis APIs](/rest/api/language/2022-05-01/text-analysis-authoring)
* [question answering projects](/rest/api/cognitiveservices/questionanswering/question-answering-projects) Except for * Delete deployment
These users are the gatekeepers for the Language applications in production envi
:::column-end::: :::column span=""::: All APIs available under:
- * [Language authoring conversational language understanding APIs](/rest/api/language/conversational-analysis-authoring)
- * [Language authoring text analysis APIs](/rest/api/language/text-analysis-authoring)
+ * [Language authoring conversational language understanding APIs](/rest/api/language/2022-05-01/conversational-analysis-authoring)
+ * [Language authoring text analysis APIs](/rest/api/language/2022-05-01/text-analysis-authoring)
* [question answering projects](/rest/api/cognitiveservices/questionanswering/question-answering-projects) :::column-end:::
cognitive-services Use Asynchronously https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/concepts/use-asynchronously.md
Previously updated : 08/25/2022 Last updated : 10/31/2022
When you send asynchronous requests, you will incur charges based on number of t
## Submit an asynchronous job using the REST API
-To submit an asynchronous job, review the [reference documentation](/rest/api/language/text-analysis-runtime/submit-job) for the JSON body you'll send in your request.
+To submit an asynchronous job, review the [reference documentation](/rest/api/language/2022-05-01/text-analysis-runtime/submit-job) for the JSON body you'll send in your request.
1. Add your documents to the `analysisInput` object. 1. In the `tasks` object, include the operations you want performed on your data. For example, if you wanted to perform sentiment analysis, you would include the `SentimentAnalysisLROTask` object. 1. You can optionally: 1. Choose a specific [version of the model](model-lifecycle.md) used on your data.
- 1. Include additional Language ervice features in the `tasks` object, to be performed on your data at the same time.
+ 1. Include additional Language service features in the `tasks` object, to be performed on your data at the same time.
Once you've created the JSON body for your request, add your key to the `Ocp-Apim-Subscription-Key` header. Then send your API request to job creation endpoint. For example:
A successful call will return a 202 response code. The `operation-location` in t
GET {Endpoint}/language/analyze-text/jobs/12345678-1234-1234-1234-12345678?api-version=2022-05-01 ```
-To [get the status and retrieve the results](/rest/api/language/text-analysis-runtime/job-status) of the request, send a GET request to the URL you received in the `operation-location` header from the previous API response. Remember to include your key in the `Ocp-Apim-Subscription-Key`. The response will include the results of your API call.
+To [get the status and retrieve the results](/rest/api/language/2022-05-01/text-analysis-runtime/job-status) of the request, send a GET request to the URL you received in the `operation-location` header from the previous API response. Remember to include your key in the `Ocp-Apim-Subscription-Key`. The response will include the results of your API call.
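+
+Put together, the submit-and-poll flow looks roughly like the following sketch; the endpoint, key, and document text are placeholder assumptions, and the task shown is sentiment analysis:
+
+```python
+import time
+import requests
+
+endpoint = "https://<your-language-resource>.cognitiveservices.azure.com"  # hypothetical endpoint
+key = "<your-language-key>"  # hypothetical key
+headers = {"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"}
+
+body = {
+    "analysisInput": {
+        "documents": [{"id": "1", "language": "en", "text": "The rooms were beautiful."}]
+    },
+    "tasks": [{"kind": "SentimentAnalysisLROTask", "taskName": "sentiment"}]
+}
+
+# Submit the job; a successful call returns 202 with an operation-location header.
+submit = requests.post(
+    f"{endpoint}/language/analyze-text/jobs?api-version=2022-05-01",
+    headers=headers, json=body)
+submit.raise_for_status()
+job_url = submit.headers["operation-location"]
+
+# Poll the job status, then read the task results from the response body.
+while True:
+    job = requests.get(job_url, headers=headers).json()
+    if job["status"] in ("succeeded", "failed"):
+        break
+    time.sleep(2)
+print(job)
+```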
## Send asynchronous API requests using the client library
cognitive-services Migrate From Luis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/how-to/migrate-from-luis.md
The following table presents a side-by-side comparison between the features of L
|Role-Based Access Control (RBAC) for LUIS resources |Role-Based Access Control (RBAC) available for Language resources |Language resource RBAC must be [manually added after migration](../../concepts/role-based-access-control.md). | |Single training mode| Standard and advanced [training modes](#how-are-the-training-times-different-in-clu-how-is-standard-training-different-from-advanced-training) | Training will be required after application migration. | |Two publishing slots and version publishing |Ten deployment slots with custom naming | Deployment will be required after the application's migration and training. |
-|LUIS authoring APIs and SDK support in .NET, Python, Java, and Node.js |[CLU Authoring REST APIs](/rest/api/language/conversational-analysis-authoring). | For more information, see the [quickstart article](../quickstart.md?pivots=rest-api) for information on the CLU authoring APIs. [Refactoring](#do-i-have-to-refactor-my-code-if-i-migrate-my-applications-from-luis-to-clu) will be necessary to use the CLU authoring APIs. |
-|LUIS Runtime APIs and SDK support in .NET, Python, Java, and Node.js |[CLU Runtime APIs](/rest/api/language/conversation-analysis-runtime). CLU Runtime SDK support for [.NET](/dotnet/api/overview/azure/ai.language.conversations-readme) and [Python](/python/api/overview/azure/ai-language-conversations-readme?view=azure-python-preview&preserve-view=true). | See [how to call the API](../how-to/call-api.md#use-the-client-libraries-azure-sdk) for more information. [Refactoring](#do-i-have-to-refactor-my-code-if-i-migrate-my-applications-from-luis-to-clu) will be necessary to use the CLU runtime API response. |
+|LUIS authoring APIs and SDK support in .NET, Python, Java, and Node.js |[CLU Authoring REST APIs](/rest/api/language/2022-05-01/conversational-analysis-authoring). | See the [quickstart article](../quickstart.md?pivots=rest-api) for information on the CLU authoring APIs. [Refactoring](#do-i-have-to-refactor-my-code-if-i-migrate-my-applications-from-luis-to-clu) will be necessary to use the CLU authoring APIs. |
+|LUIS Runtime APIs and SDK support in .NET, Python, Java, and Node.js |[CLU Runtime APIs](/rest/api/language/2022-05-01/conversation-analysis-runtime). CLU Runtime SDK support for [.NET](/dotnet/api/overview/azure/ai.language.conversations-readme) and [Python](/python/api/overview/azure/ai-language-conversations-readme?view=azure-python-preview&preserve-view=true). | See [how to call the API](../how-to/call-api.md#use-the-client-libraries-azure-sdk) for more information. [Refactoring](#do-i-have-to-refactor-my-code-if-i-migrate-my-applications-from-luis-to-clu) will be necessary to use the CLU runtime API response. |
## Migrate your LUIS applications
The API objects of CLU applications are different from LUIS and therefore code r
If you are using the LUIS [programmatic](https://westus.dev.cognitive.microsoft.com/docs/services/luis-programmatic-apis-v3-0-preview/operations/5890b47c39e2bb052c5b9c40) and [runtime](https://westus.dev.cognitive.microsoft.com/docs/services/luis-endpoint-api-v3-0/operations/5cb0a9459a1fe8fa44c28dd8) APIs, you can replace them with their equivalent APIs.
-[CLU authoring APIs](/rest/api/language/conversational-analysis-authoring): Instead of LUIS's specific CRUD APIs for individual actions such as _add utterance_, _delete entity_, and _rename intent_, CLU offers an [import API](/rest/api/language/conversational-analysis-authoring/import) that replaces the full content of a project using the same name. If your service used LUIS programmatic APIs to provide a platform for other customers, you must consider this new design paradigm. All other APIs such as: _listing projects_, _training_, _deploying_, and _deleting_ are available. APIs for actions such as _importing_ and _deploying_ are asynchronous operations instead of synchronous as they were in LUIS.
+[CLU authoring APIs](/rest/api/language/2022-05-01/conversational-analysis-authoring): Instead of LUIS's specific CRUD APIs for individual actions such as _add utterance_, _delete entity_, and _rename intent_, CLU offers an [import API](/rest/api/language/2022-05-01/conversational-analysis-authoring/import) that replaces the full content of a project using the same name. If your service used LUIS programmatic APIs to provide a platform for other customers, you must consider this new design paradigm. All other APIs such as: _listing projects_, _training_, _deploying_, and _deleting_ are available. APIs for actions such as _importing_ and _deploying_ are asynchronous operations instead of synchronous as they were in LUIS.
-[CLU runtime APIs](/rest/api/language/conversation-analysis-runtime/analyze-conversation): The new API request and response includes many of the same parameters such as: _query_, _prediction_, _top intent_, _intents_, _entities_, and their values. The CLU response object offers a more straightforward approach. Entity predictions are provided as they are within the utterance text, and any additional information such as resolution or list keys are provided in extra parameters called `extraInformation` and `resolution`. See the [reference documentation](/rest/api/language/conversation-analysis-runtime/analyze-conversation) for more information on the API response structure.
+[CLU runtime APIs](/rest/api/language/2022-05-01/conversation-analysis-runtime): The new API request and response includes many of the same parameters such as: _query_, _prediction_, _top intent_, _intents_, _entities_, and their values. The CLU response object offers a more straightforward approach. Entity predictions are provided as they are within the utterance text, and any additional information such as resolution or list keys are provided in extra parameters called `extraInformation` and `resolution`. See the [reference documentation](/rest/api/language/2022-05-01/conversation-analysis-runtime) for more information on the API response structure.
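+
+For example, a platform that previously made many LUIS CRUD calls might instead import a full project definition and poll the asynchronous job, along these lines; the endpoint, key, project name, and project file are placeholder assumptions:
+
+```python
+import json
+import time
+import requests
+
+endpoint = "https://<your-language-resource>.cognitiveservices.azure.com"  # hypothetical endpoint
+key = "<your-language-key>"  # hypothetical key
+project = "MyCluProject"  # hypothetical project name
+headers = {"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"}
+
+# The import body is the full project definition, for example one converted from a LUIS export.
+with open("clu_project.json") as f:  # hypothetical local file
+    project_json = json.load(f)
+
+# Import replaces the entire content of the project that has the same name.
+resp = requests.post(
+    f"{endpoint}/language/authoring/analyze-conversations/projects/{project}/:import"
+    "?api-version=2022-05-01",
+    headers=headers, json=project_json)
+resp.raise_for_status()  # asynchronous operations return 202 Accepted
+job_url = resp.headers["operation-location"]
+
+# Poll until the import job completes.
+while requests.get(job_url, headers=headers).json()["status"] not in ("succeeded", "failed"):
+    time.sleep(2)
+```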
You can use the [.NET](https://github.com/Azure/azure-sdk-for-net/tree/Azure.AI.Language.Conversations_1.0.0-beta.3/sdk/cognitivelanguage/Azure.AI.Language.Conversations/samples/) or [Python](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-language-conversations_1.1.0b1/sdk/cognitivelanguage/azure-ai-language-conversations/samples/README.md) CLU runtime SDK to replace the LUIS runtime SDK. There is currently no authoring SDK available for CLU.
cognitive-services Entity Resolutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/named-entity-recognition/concepts/entity-resolutions.md
Title: Entity resolutions provided by Named Entity Recognition
description: Learn about entity resolutions in the NER feature. -+ Last updated 10/12/2022-+
cognitive-services Named Entity Categories https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/named-entity-recognition/concepts/named-entity-categories.md
Title: Entity categories recognized by Named Entity Recognition in Azure Cogniti
description: Learn about the entities the NER feature can recognize from unstructured text. -+ Last updated 11/02/2021-+
cognitive-services How To Call https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/named-entity-recognition/how-to-call.md
Title: How to perform Named Entity Recognition (NER)
description: This article will show you how to extract named entities from text. -+ Last updated 03/01/2022-+
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/named-entity-recognition/language-support.md
Title: Named Entity Recognition (NER) language support
description: This article explains which natural languages are supported by the NER feature of Azure Cognitive Service for Language. -+ Last updated 06/27/2022-+
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/named-entity-recognition/overview.md
Title: What is the Named Entity Recognition (NER) feature in Azure Cognitive Ser
description: An overview of the Named Entity Recognition feature in Azure Cognitive Services, which helps you extract categories of entities in text. -+ Last updated 06/15/2022-+
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/named-entity-recognition/quickstart.md
Title: "Quickstart: Use the NER client library"
description: Use this quickstart to start using the Named Entity Recognition (NER) API. -+ Last updated 08/15/2022-+ ms.devlang: csharp, java, javascript, python keywords: text mining, key phrase
cognitive-services Extract Excel Information https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/named-entity-recognition/tutorials/extract-excel-information.md
Title: Extract information in Excel using Power Automate
description: Learn how to Extract Excel text without having to write code, using Named Entity Recognition and Power Automate. -+ Last updated 07/27/2022-+
cognitive-services Conversations Entity Categories https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/personally-identifiable-information/concepts/conversations-entity-categories.md
Title: Entity categories recognized by Conversational Personally Identifiable In
description: Learn about the entities the Conversational PII feature (preview) can recognize from conversation inputs. -+ Last updated 05/15/2022-+
cognitive-services Entity Categories https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/personally-identifiable-information/concepts/entity-categories.md
Title: Entity categories recognized by Personally Identifiable Information (dete
description: Learn about the entities the PII feature can recognize from unstructured text. -+ Last updated 11/15/2021-+
cognitive-services How To Call For Conversations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/personally-identifiable-information/how-to-call-for-conversations.md
Title: How to detect Personally Identifiable Information (PII) in conversations.
description: This article will show you how to extract PII from chat and spoken transcripts and redact identifiable information. -+ Last updated 05/10/2022-+
cognitive-services How To Call https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/personally-identifiable-information/how-to-call.md
Title: How to detect Personally Identifiable Information (PII)
description: This article will show you how to extract PII and health information (PHI) from text and detect identifiable information. -+ Last updated 07/27/2022-+
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/personally-identifiable-information/language-support.md
Title: Personally Identifiable Information (PII) detection language support
description: This article explains which natural languages are supported by the PII detection feature of Azure Cognitive Service for Language. -+ Last updated 08/02/2022-+
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/personally-identifiable-information/overview.md
Title: What is the Personally Identifying Information (PII) detection feature in
description: An overview of the PII detection feature in Azure Cognitive Services, which helps you extract entities and sensitive information (PII) in text. -+ Last updated 08/02/2022-+
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/personally-identifiable-information/quickstart.md
Title: "Quickstart: Detect Personally Identifying Information (PII) in text"
description: Use this quickstart to start using the PII detection API. -+ Last updated 08/15/2022-+ ms.devlang: csharp, java, javascript, python zone_pivot_groups: programming-languages-text-analytics
cognitive-services Assertion Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/text-analytics-for-health/concepts/assertion-detection.md
Title: Assertion detection in Text Analytics for health
description: Learn about assertion detection. -+ Last updated 11/02/2021-+
cognitive-services Health Entity Categories https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/text-analytics-for-health/concepts/health-entity-categories.md
Title: Entity categories recognized by Text Analytics for health
description: Learn about categories recognized by Text Analytics for health -+ Last updated 11/02/2021-+
cognitive-services Relation Extraction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/text-analytics-for-health/concepts/relation-extraction.md
Title: Relation extraction in Text Analytics for health
description: Learn about relation extraction -+ Last updated 11/02/2021-+
cognitive-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/text-analytics-for-health/how-to/call-api.md
Title: How to call Text Analytics for health
description: Learn how to extract and label medical information from unstructured clinical text with Text Analytics for health. -+ Last updated 09/05/2022-+
cognitive-services Configure Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/text-analytics-for-health/how-to/configure-containers.md
Title: Configure Text Analytics for health containers
description: Text Analytics for health containers uses a common configuration framework, so that you can easily configure and manage storage, logging and telemetry, and security settings for your containers. -+ Last updated 11/02/2021-+
cognitive-services Use Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/text-analytics-for-health/how-to/use-containers.md
Title: How to use Text Analytics for health containers
description: Learn how to extract and label medical information on premises using Text Analytics for health Docker container. -+ Last updated 09/05/2022-+ ms.devlang: azurecli
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/text-analytics-for-health/language-support.md
Title: Text Analytics for health language support
description: "This article explains which natural languages are supported by the Text Analytics for health." -+ Last updated 9/5/2022-+
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/text-analytics-for-health/overview.md
Title: What is the Text Analytics for health in Azure Cognitive Service for Lang
description: An overview of Text Analytics for health in Azure Cognitive Services, which helps you extract medical information from unstructured text, like clinical documents. -+ Last updated 06/15/2022-+
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/text-analytics-for-health/quickstart.md
Title: "Quickstart: Use the Text Analytics for health REST API and client librar
description: Use this quickstart to start using Text Analytics for health. -+ Last updated 08/15/2022-+ ms.devlang: csharp, java, javascript, python keywords: text mining, health, text analytics for health
connectors Connectors Create Api Sqlazure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-sqlazure.md
ms.suite: integration Previously updated : 09/23/2022 Last updated : 10/31/2022 tags: connectors
+#Customer intent: As a developer, I want to access my SQL database from my logic app workflow.
# Connect to an SQL database from workflows in Azure Logic Apps
The SQL Server connector has different versions, based on [logic app type and ho
| Logic app | Environment | Connector version | |--|-|-|
-| **Consumption** | Multi-tenant Azure Logic Apps | Managed connector (Standard class). For more information, review the following documentation: <br><br>- [SQL Server managed connector reference](/connectors/sql) <br>- [Managed connectors in Azure Logic Apps](managed.md) |
-| **Consumption** | Integration service environment (ISE) | Managed connector (Standard class) and ISE version, which has different message limits than the Standard class. For more information, review the following documentation: <br><br>- [SQL Server managed connector reference](/connectors/sql) <br>- [ISE message limits](../logic-apps/logic-apps-limits-and-config.md#message-size-limits) <br>- [Managed connectors in Azure Logic Apps](managed.md) |
-| **Standard** | Single-tenant Azure Logic Apps and App Service Environment v3 (Windows plans only) | Managed connector (Azure-hosted) and built-in connector, which is [service provider based](../logic-apps/custom-connector-overview.md#service-provider-interface-implementation). The built-in version differs in the following ways: <br><br>- The built-in version doesn't have triggers. You can use the SQL managed connector trigger or a different trigger. <br><br>- The built-in version can connect directly to an SQL database and access Azure virtual networks. You don't need an on-premises data gateway.<br><br>For more information, review the following documentation: <br><br>- [SQL Server managed connector reference](/connectors/sql/) <br>- [SQL Server built-in connector reference](#built-in-connector-operations) section later in this article <br>- [Built-in connectors in Azure Logic Apps](built-in.md) |
+| **Consumption** | Multi-tenant Azure Logic Apps | Managed connector, which appears in the designer under the **Standard** label. For more information, review the following documentation: <br><br>- [SQL Server managed connector reference](/connectors/sql) <br>- [Managed connectors in Azure Logic Apps](managed.md) |
+| **Consumption** | Integration service environment (ISE) | Managed connector, which appears in the designer under the **Standard** label, and the ISE version, which has different message limits than the Standard class. For more information, review the following documentation: <br><br>- [SQL Server managed connector reference](/connectors/sql) <br>- [ISE message limits](../logic-apps/logic-apps-limits-and-config.md#message-size-limits) <br>- [Managed connectors in Azure Logic Apps](managed.md) |
+| **Standard** | Single-tenant Azure Logic Apps and App Service Environment v3 (Windows plans only) | Managed connector, which appears in the designer under the **Azure** label, and built-in connector, which appears in the designer under the **Built-in** label and is [service provider based](../logic-apps/custom-connector-overview.md#service-provider-interface-implementation). The built-in version differs in the following ways: <br><br>- The built-in version doesn't have triggers. You can use the SQL managed connector trigger or a different trigger. <br><br>- The built-in version can connect directly to an SQL database and access Azure virtual networks. You don't need an on-premises data gateway. <br><br>For more information, review the following documentation: <br><br>- [SQL Server managed connector reference](/connectors/sql/) <br>- [SQL Server built-in connector reference](/azure/logic-apps/connectors/built-in/reference/sql/) <br>- [Built-in connectors in Azure Logic Apps](built-in.md) |
-## Limitations
+### Limitations
-For more information, review the [SQL Server managed connector reference](/connectors/sql/) or the [SQL Server built-in connector reference](#built-in-connector-operations).
+For more information, review the [SQL Server managed connector reference](/connectors/sql/) or the [SQL Server built-in connector reference](/azure/logic-apps/connectors/built-in/reference/sql/).
## Prerequisites
For more information, review the [SQL Server managed connector reference](/conne
You can use the SQL Server built-in connector or managed connector.
- * To use the built-in connector, you can authenticate your connection with either a managed identity, Active Directory OAuth, or a connection string. You can adjust connection pooling by specifying parameters in the connection string. For more information, review [Connection Pooling](/dotnet/framework/data/adonet/connection-pooling).
+ * To use Azure Active Directory authentication or managed identity authentication with your logic app, you have to set up your SQL Server to work with these authentication types. For more information, see [Authentication - SQL Server managed connector reference](/connectors/sql/#authentication).
+
+ * To use the built-in connector, you can authenticate your connection with a managed identity, Azure Active Directory, or a connection string. You can adjust connection pooling by specifying parameters in the connection string. For more information, review [Connection Pooling](/dotnet/framework/data/adonet/connection-pooling).
* To use the SQL Server managed connector, follow the same requirements as a Consumption logic app workflow in multi-tenant Azure Logic Apps. For other connector requirements, review the [SQL Server managed connector reference](/connectors/sql/).
In Standard logic app workflows, only the SQL Server managed connector has trigg
When you save your workflow, this step automatically publishes your updates to your deployed logic app, which is live in Azure. With only a trigger, your workflow just checks the SQL database based on your specified schedule. You have to [add an action](#add-sql-action) that responds to the trigger.
-<a name="trigger-recurrence-shift-drift"></a>
-
-## Trigger recurrence shift and drift (daylight saving time)
-
-Recurring connection-based triggers where you need to create a connection first, such as the SQL Server managed connector trigger, differ from built-in triggers that run natively in Azure Logic Apps, such as the [Recurrence trigger](../connectors/connectors-native-recurrence.md). For recurring connection-based triggers, the recurrence schedule isn't the only driver that controls execution, and the time zone only determines the initial start time. Subsequent runs depend on the recurrence schedule, the last trigger execution, *and* other factors that might cause run times to drift or produce unexpected behavior. For example, unexpected behavior can include failure to maintain the specified schedule when daylight saving time (DST) starts and ends.
-
-To make sure that the recurrence time doesn't shift when DST takes effect, manually adjust the recurrence. That way, your workflow continues to run at the expected or specified start time. Otherwise, the start time shifts one hour forward when DST starts and one hour backward when DST ends. For more information, see [Recurrence for connection-based triggers](../logic-apps/concepts-schedule-automated-recurring-tasks-workflows.md#recurrence-for-connection-based-triggers).
- <a name="add-sql-action"></a> ## Add a SQL Server action
In this example, the logic app workflow starts with the [Recurrence trigger](../
1. Under the **Choose an operation** search box, select either of the following options:
- * **Built-in** when you want to use SQL Server [built-in actions](#built-in-connector-operations) such as **Execute query**
+ * **Built-in** when you want to use SQL Server [built-in actions](/azure/logic-apps/connectors/built-in/reference/sql/) such as **Execute query**
![Screenshot showing the Azure portal, workflow designer for Standard logic app, and designer search box with "Built-in" selected underneath.](./media/connectors-create-api-sqlazure/select-built-in-category-standard.png)
In this example, the logic app workflow starts with the [Recurrence trigger](../
1. From the actions list, select the SQL Server action that you want.
- * [Built-in actions](#built-in-connector-operations)
+ * [Built-in actions](/azure/logic-apps/connectors/built-in/reference/sql/)
This example selects the built-in action named **Execute query**.
When you call a stored procedure by using the SQL Server connector, the returned
1. To reference the JSON content properties, click inside the edit boxes where you want to reference those properties so that the dynamic content list appears. In the list, under the [**Parse JSON**](../logic-apps/logic-apps-perform-data-operations.md#parse-json-action) heading, select the data tokens for the JSON content properties that you want.
-<a name="built-in-connector-app-settings"></a>
-
-## Built-in connector app settings
-
-In a Standard logic app resource, the SQL Server built-in connector includes app settings that control various thresholds for performance, throughput, capacity, and so on. For example, you can change the query timeout value from 30 seconds. For more information, review [Reference for app settings - local.settings.json](../logic-apps/edit-app-settings-host-settings.md#reference-local-settings-json).
-
-<a name="built-in-connector-operations"></a>
-
-## SQL built-in connector operations
-
-The SQL Server built-in connector is available only for Standard logic app workflows and provides the following actions, but no triggers:
-
-| Action | Description |
-|--|-|
-| [**Delete rows**](#delete-rows) | Deletes and returns the table rows that match the specified **Where condition** value. |
-| [**Execute query**](#execute-query) | Runs a query on an SQL database. |
-| [**Execute stored procedure**](#execute-stored-procedure) | Runs a stored procedure on an SQL database. |
-| [**Get rows**](#get-rows) | Gets the table rows that match the specified **Where condition** value. |
-| [**Get tables**](#get-tables) | Gets all the tables from the database. |
-| [**Insert row**](#insert-row) | Inserts a single row in the specified table. |
-| [**Update rows**](#update-rows) | Updates the specified columns in all the table rows that match the specified **Where condition** value using the **Set columns** column names and values. |
-
-<a name="delete-rows"></a>
-
-### Delete rows
-
-Operation ID: `deleteRows`
-
-Deletes and returns the table rows that match the specified **Where condition** value.
-
-#### Parameters
-
-| Name | Key | Required | Type | Description |
-||--|-||-|
-| **Table name** | `tableName` | True | String | The name for the table |
-| **Where condition** | `columnValuesForWhereCondition` | True | Object | This object contains the column names and corresponding values used for selecting the rows to delete. To provide this information, follow the *key-value* pair format, for example, *columnName* and *columnValue*, which also lets you specify single or specific rows to delete. |
-
-#### Returns
-
-| Name | Type |
-|||
-| **Result** | An array object that returns all the deleted rows. Each row contains the column name and the corresponding deleted value. |
-| **Result Item** | An array object that returns one deleted row at a time. A **For each** loop is automatically added to your workflow to iterate through the array. Each row contains the column name and the corresponding deleted value. |
-
-*Example*
-
-The following example shows sample parameter values for the **Delete rows** action:
-
-**Sample values**
-
-| Parameter | JSON name | Sample value |
-|--|--|--|
-| **Table name** | `tableName` | tableName1 |
-| **Where condition** | `columnValuesForWhereCondition` | Key-value pairs: <br><br>- <*columnName1*>, <*columnValue1*> <br><br>- <*columnName2*>, <*columnValue2*> |
-
-**Parameters in the action's underlying JSON definition**
-
-```json
-"parameters": {
- "tableName": "tableName1",
- "columnValuesForWhereCondition": {
- "columnName1": "columnValue1",
- "columnName2": "columnValue2"
- }
-},
-```
-
-<a name="execute-query"></a>
-
-### Execute query
-
-Operation ID: `executeQuery`
-
-Runs a query on an SQL database.
-
-#### Parameters
-
-| Name | Key | Required | Type | Description |
-||--|-||-|
-| **Query** | `query` | True | Dynamic | The body for your SQL query |
-| **Query parameters** | `queryParameters` | False | Objects | The parameters for your query. <br><br>**Note**: If the query requires input parameters, you must provide these parameters. |
-
-#### Returns
-
-| Name | Type |
-|||
-| **Result** | An array object that returns all the query results. Each row contains the column name and the corresponding value. |
-| **Result Item** | An array object that returns one query result at a time. A **For each** loop is automatically added to your workflow to iterate through the array. Each row contains the column name and the corresponding value. |
-
-<a name="execute-stored-procedure"></a>
-
-### Execute stored procedure
-
-Operation ID: `executeStoredProcedure`
-
-Runs a stored procedure on an SQL database.
-
-#### Parameters
-
-| Name | Key | Required | Type | Description |
-||--|-||-|
-| **Procedure name** | `storedProcedureName` | True | String | The name for your stored procedure |
-| **Parameters** | `storedProcedureParameters` | False | Dynamic | The parameters for your stored procedure. <br><br>**Note**: If the stored procedure requires input parameters, you must provide these parameters. |
-
-#### Returns
-
-| Name | Type |
-|||
-| **Result** | An object that contains the result sets array, return code, and output parameters |
-| **Result Result Sets** | An object array that contains all the result sets from the stored procedure, which might return zero, one, or multiple result sets. |
-| **Result Return Code** | An integer that represents the status code from the stored procedure |
-| **Result Stored Procedure Parameters** | An object that contains the final values of the stored procedure's output and input-output parameters |
-| **Status Code** | The status code from the **Execute stored procedure** operation |
-
-<a name="get-rows"></a>
-
-### Get rows
-
-Operation ID: `getRows`
-
-Gets the table rows that match the specified **Where condition** value.
-
-#### Parameters
-
-| Name | Key | Required | Type | Description |
-||--|-||-|
-| **Table name** | `tableName` | True | String | The name for the table |
-| **Where condition** | `columnValuesForWhereCondition` | False | Dynamic | This object contains the column names and corresponding values used for selecting the rows to get. To provide this information, follow the *key-value* pair format, for example, *columnName* and *columnValue*, which also lets you specify single or specific rows to get. |
-
-#### Returns
-
-| Name | Type |
-|||
-| **Result** | An array object that returns all the row results. |
-| **Result Item** | An array object that returns one row result at a time. A **For each** loop is automatically added to your workflow to iterate through the array. |
-
-*Example*
-
-The following example shows sample parameter values for the **Get rows** action:
-
-**Sample values**
-
-| Parameter | JSON name | Sample value |
-|--|--|--|
-| **Table name** | `tableName` | tableName1 |
-| **Where condition** | `columnValuesForWhereCondition` | Key-value pairs: <br><br>- <*columnName1*>, <*columnValue1*> <br><br>- <*columnName2*>, <*columnValue2*> |
-
-**Parameters in the action's underlying JSON definition**
-
-```json
-"parameters": {
- "tableName": "tableName1",
- "columnValuesForWhereCondition": {
- "columnName1": "columnValue1",
- "columnName2": "columnValue2"
- }
-},
-```
-
-<a name="get-tables"></a>
-
-### Get tables
-
-Operation ID: `getTables`
-
-Gets a list of all the tables in the database.
-
-#### Parameters
-
-None.
-
-#### Returns
-
-| Name | Type |
-|||
-| **Result** | An array object that contains the full names and display names for all tables in the database. |
-| **Result Display Name** | An array object that contains the display name for each table in the database. A **For each** loop is automatically added to your workflow to iterate through the array. |
-| **Result Full Name** | An array object that contains the full name for each table in the database. A **For each** loop is automatically added to your workflow to iterate through the array. |
-| **Result Item** | An array object that returns the full name and display name one at time for each table. A **For each** loop is automatically added to your workflow to iterate through the array. |
-
-<a name="insert-row"></a>
-
-### Insert row
-
-Operation ID: `insertRow`
-
-Inserts a single row in the specified table.
-
-| Name | Key | Required | Type | Description |
-||--|-||-|
-| **Table name** | `tableName` | True | String | The name for the table |
-| **Set columns** | `setColumns` | False | Dynamic | This object contains the column names and corresponding values to insert. To provide this information, follow the *key-value* pair format, for example, *columnName* and *columnValue*. If the table has columns with default or autogenerated values, you can leave this field empty. |
-
-#### Returns
-
-| Name | Type |
-|||
-| **Result** | The inserted row, including the names and values of any autogenerated, default, and null value columns. |
-
-<a name="update-rows"></a>
-
-### Update rows
-
-Operation ID: `updateRows`
-
-Updates the specified columns in all the table rows that match the specified **Where condition** value using the **Set columns** column names and values.
-
-| Name | Key | Required | Type | Description |
-||--|-||-|
-| **Table name** | `tableName` | True | String | The name for the table |
-| **Where condition** | `columnValuesForWhereCondition` | True | Dynamic | This object contains the column names and corresponding values for selecting the rows to update. To provide this information, follow the *key-value* pair format, for example, *columnName* and *columnValue*, which also lets you specify single or specific rows to update. |
-| **Set columns** | `setColumns` | True | Dynamic | This object contains the column names and the corresponding values to use for the update. To provide this information, follow the *key-value* pair format, for example, *columnName* and *columnValue*. |
-
-#### Returns
-
-| Name | Type |
-|||
-| **Result** | An array object that returns all the columns for the updated rows. |
-| **Result Item** | An array object that returns one column at a time from the updated rows. A **For each** loop is automatically added to your workflow to iterate through the array. |
-
-*Example*
-
-The following example shows sample parameter values for the **Update rows** action:
-
-**Sample values**
-
-| Parameter | JSON name | Sample value |
-|--|--|--|
-| **Table name** | `tableName` | tableName1 |
-| **Where condition** | `columnValuesForWhereCondition` | Key-value pairs: <br><br>- <*columnName1*>, <*columnValue1*> <br><br>- <*columnName2*>, <*columnValue2*> |
-
-**Parameters in the action's underlying JSON definition**
-
-```json
-"parameters": {
- "tableName": "tableName1",
- "columnValuesForWhereCondition": {
- "columnName1": "columnValue1",
- "columnName2": "columnValue2"
- }
-},
-```
-
-## Troubleshoot problems
-
-<a name="connection-problems"></a>
-
-### Connection problems
-
-Connection problems can commonly happen, so to troubleshoot and resolve these kinds of issues, review [Solving connectivity errors to SQL Server](https://support.microsoft.com/help/4009936/solving-connectivity-errors-to-sql-server). The following list provides some examples:
-
-* **A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections.**
-
-* **(provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server) (Microsoft SQL Server, Error: 53)**
-
-* **(provider: TCP Provider, error: 0 - No such host is known.) (Microsoft SQL Server, Error: 11001)**
- ## Next steps * [Managed connectors for Azure Logic Apps](/connectors/connector-reference/connector-reference-logicapps-connectors)
-* [Built-in connectors for Azure Logic Apps](built-in.md)
+* [Built-in connectors for Azure Logic Apps](built-in.md)
container-instances Container Instances Github Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-github-action.md
az role assignment create \
### Save credentials to GitHub repo
-1. In the GitHub UI, navigate to your forked repository and select **Settings** > **Secrets** > **Actions**.
+1. In the GitHub UI, navigate to your forked repository and select **Security > Secrets and variables > Actions**.
1. Select **New repository secret** to add the following secrets:
container-registry Container Registry Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-delete.md
To maintain the size of a repository or registry, you might need to periodically
The following Azure CLI command lists all manifest digests in a repository older than a specified timestamp, in ascending order. Replace `<acrName>` and `<repositoryName>` with values appropriate for your environment. The timestamp could be a full date-time expression or a date, as in this example. ```azurecli
-az acr manifest list-metadata --name <repositoryName> --registry <acrName> <repositoryName> \
+az acr manifest list-metadata --name <repositoryName> --registry <acrName> \
--orderby time_asc -o tsv --query "[?lastUpdateTime < '2019-04-05'].[digest, lastUpdateTime]" ```
container-registry Container Registry Geo Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-geo-replication.md
A geo-replicated registry provides the following benefits:
> * If you need to maintain copies of container images in more than one Azure container registry, Azure Container Registry also supports [image import](container-registry-import-images.md). For example, in a DevOps workflow, you can import an image from a development registry to a production registry, without needing to use Docker commands. > * If you want to move a registry to a different Azure region, instead of geo-replicating the registry, see [Manually move a container registry to another region](manual-regional-move.md).
+## Prerequisites
+
+* The user requires the following permissions (at the registry level) to create or delete replications:
+
+ | Permission | Description |
+ |||
+ | Microsoft.ContainerRegistry/registries/replications/write | Create a replication |
+ | Microsoft.ContainerRegistry/registries/replications/delete | Delete a replication |
+ ## Example use case Contoso runs a public presence website located across the US, Canada, and Europe. To serve these markets with local and network-close content, Contoso runs [Azure Kubernetes Service](../aks/index.yml) (AKS) clusters in West US, East US, Canada Central, and West Europe. The website application, deployed as a Docker image, utilizes the same code and image across all regions. Content, local to that region, is retrieved from a database, which is provisioned uniquely in each region. Each regional deployment has its unique configuration for resources like the local database.
container-registry Container Registry Helm Repos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-helm-repos.md
Run `helm registry login` to authenticate with the registry. You may pass [regi
``` - Authenticate with a [repository scoped token](container-registry-repository-scoped-permissions.md) (Preview). ```azurecli
- USER_NAME="helm-token"
+ USER_NAME="helmtoken"
   PASSWORD=$(az acr token create -n $USER_NAME \
       -r $ACR_NAME \
       --scope-map _repositories_admin \
       --query "credentials.passwords[0].value" -o tsv)
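   # Assumed next step (a sketch): sign in to the registry with the new token
   helm registry login $ACR_NAME.azurecr.io --username $USER_NAME --password $PASSWORD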
container-registry Container Registry Import Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-import-images.md
In the following example, *mysourceregistry* is in a different subscription from
```azurecli az acr import \ --name myregistry \
- --source samples/aci-helloworld:latest \
+ --source aci-helloworld:latest \
--image aci-hello-world:latest \ --registry /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/sourceResourceGroup/providers/Microsoft.ContainerRegistry/registries/mysourceregistry ```
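Import also works from public registries; as a hedged sketch, the same command can bring a Docker Hub image into your registry:

```azurecli
az acr import \
  --name myregistry \
  --source docker.io/library/hello-world:latest \
  --image hello-world:latest
```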
container-registry Container Registry Transfer Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-transfer-troubleshooting.md
* **Template deployment failures or errors** * If a pipeline run fails, look at the `pipelineRunErrorMessage` property of the run resource. * For common template deployment errors, see [Troubleshoot ARM template deployments](../azure-resource-manager/templates/template-tutorial-troubleshoot.md)
+* **Problems accessing Key Vault**<a name="problems-accessing-key-vault"></a>
+ * If your pipelineRun deployment fails with a `403 Forbidden` error when accessing Azure Key Vault, verify that your pipeline managed identity has adequate permissions.
+ * A pipelineRun uses the exportPipeline or importPipeline managed identity to fetch the SAS token secret from your Key Vault. ExportPipelines and importPipelines are provisioned with either a system-assigned or user-assigned managed identity. This managed identity is required to have `secret get` permissions on the Key Vault in order to read the SAS token secret. Ensure that an access policy for the managed identity was added to the Key Vault. For more information, reference [Give the ExportPipeline identity keyvault policy access](./container-registry-transfer-cli.md#give-the-exportpipeline-identity-keyvault-policy-access) and [Give the ImportPipeline identity keyvault policy access](./container-registry-transfer-cli.md#give-the-importpipeline-identity-keyvault-policy-access).
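+  * As an illustrative sketch (vault name and principal ID are placeholders), the missing access policy can be granted with the Azure CLI:
+
+    ```azurecli
+    # Let the pipeline's managed identity read secrets from the vault
+    az keyvault set-policy --name <keyVaultName> --object-id <managedIdentityPrincipalId> --secret-permissions get
+    ```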
* **Problems accessing storage**<a name="problems-accessing-storage"></a> * If you see a `403 Forbidden` error from storage, you likely have a problem with your SAS token. * The SAS token might not currently be valid. The SAS token might be expired or the storage account keys might have changed since the SAS token was created. Verify that the SAS token is valid by attempting to use the SAS token to authenticate for access to the storage account container. For example, put an existing blob endpoint followed by the SAS token in the address bar of a new Microsoft Edge InPrivate window or upload a blob to the container with the SAS token by using `az storage blob upload`.
[az-deployment-group-show]: /cli/azure/deployment/group#az_deployment_group_show [az-acr-repository-list]: /cli/azure/acr/repository#az_acr_repository_list [az-acr-import]: /cli/azure/acr#az_acr_import
-[az-resource-delete]: /cli/azure/resource#az_resource_delete
+[az-resource-delete]: /cli/azure/resource#az_resource_delete
container-registry Github Action Scan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/github-action-scan.md
In this example, you'll create three secrets that you can use to authenticate
:::image type="content" source="media/github-action-scan/github-repo-settings.png" alt-text="Select Settings in the navigation.":::
-1. Select **Secrets** and then **New Secret**.
+1. Select **Security > Secrets and variables > Actions**.
- :::image type="content" source="media/github-action-scan/azure-secret-add.png" alt-text="Choose to add a secret.":::
+1. Select **New repository secret**.
1. Paste the following values for each secret created with the following values from the Azure portal by navigating to the **Access Keys** in the Container Registry.
container-registry Scan Images Defender https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/scan-images-defender.md
Last updated 10/11/2022
To scan images in your Azure container registries for vulnerabilities, you can integrate one of the available Azure Marketplace solutions or, if you want to use Microsoft Defender for Cloud, optionally enable **Microsoft Defender for container registries** at the subscription level.
-* Learn more about [Microsoft Defender for container registries](../security-center/defender-for-container-registries-introduction.md)
-* Learn more about [container security in Microsoft Defender for Cloud](../security-center/container-security.md)
+* Learn more about [Microsoft Defender for container registries](https://learn.microsoft.com/en-us/azure/defender-for-cloud/defender-for-containers-va-acr)
+* Learn more about [container security in Microsoft Defender for Cloud](https://learn.microsoft.com/en-us/azure/defender-for-cloud/defender-for-containers-introduction)
## Registry operations by Microsoft Defender for Cloud
-Microsoft Defender for Cloud scans images that are pushed to a registry, imported into a registry, or any images pulled within the last 30 days. If vulnerabilities are detected, [recommended remediations](../security-center/defender-for-container-registries-usage.md#view-and-remediate-findings) appear in Microsoft Defender for Cloud.
+Microsoft Defender for Cloud scans images that are pushed to a registry, imported into a registry, or any images pulled within the last 30 days. If vulnerabilities are detected, [recommended remediations](https://learn.microsoft.com/en-us/azure/defender-for-cloud/defender-for-containers-va-acr#view-and-remediate-findings) appear in Microsoft Defender for Cloud.
After you've taken the recommended steps to remediate the security issue, replace the image in your registry. Microsoft Defender for Cloud rescans the image to confirm that the vulnerabilities are remediated.
-For details, see [Use Microsoft Defender for container registries](../security-center/defender-for-container-registries-usage.md).
+For details, see [Use Microsoft Defender for container registries](https://learn.microsoft.com/en-us/azure/defender-for-cloud/defender-for-containers-va-acr).
> [!TIP] > Microsoft Defender for Cloud authenticates with the registry to pull images for vulnerability scanning. If [resource logs](monitor-service-reference.md#resource-logs) are collected for your registry, you'll see registry login events and image pull events generated by Microsoft Defender for Cloud. These events are associated with an alphanumeric ID such as `b21cb118-5a59-4628-bab0-3c3f0e434cg6`.
cosmos-db How To Setup Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-setup-rbac.md
Previously updated : 02/16/2022 Last updated : 10/31/2022
The Azure Cosmos DB data plane RBAC is built on concepts that are commonly found
> - [Azure PowerShell scripts](./sql/manage-with-powershell.md) > - [Azure CLI scripts](./sql/manage-with-cli.md) > - Azure management libraries available in:
-> - [.NET](https://www.nuget.org/packages/Microsoft.Azure.Management.CosmosDB/)
+> - [.NET](https://www.nuget.org/packages/Azure.ResourceManager.CosmosDB/)
> - [Java](https://search.maven.org/artifact/com.azure.resourcemanager/azure-resourcemanager-cosmos) > - [Python](https://pypi.org/project/azure-mgmt-cosmosdb/) >
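As one illustration of scripting data plane RBAC, a minimal Azure CLI sketch (account, resource group, and principal values are placeholders):

```azurecli
az cosmosdb sql role assignment create \
  --account-name mycosmosaccount \
  --resource-group myresourcegroup \
  --role-definition-name "Cosmos DB Built-in Data Contributor" \
  --principal-id <aadObjectId> \
  --scope "/"
```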
cosmos-db Indexing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/indexing.md
Regardless of the value specified for the **Background** index property, index u
There is no impact to read availability when adding a new index. Queries will only utilize new indexes once the index transformation is complete. During the index transformation, the query engine will continue to use existing indexes, so you'll observe similar read performance during the indexing transformation to what you had observed before initiating the indexing change. When adding new indexes, there is also no risk of incomplete or inconsistent query results.
-When removing indexes and immediately running queries the have filters on the dropped indexes, results might be inconsistent and incomplete until the index transformation finishes. If you remove indexes, the query engine does not provide consistent or complete results when queries filter on these newly removed indexes. Most developers do not drop indexes and then immediately try to query them so, in practice, this situation is unlikely.
+When removing indexes and immediately running queries that have filters on the dropped indexes, results might be inconsistent and incomplete until the index transformation finishes. If you remove indexes, the query engine does not provide consistent or complete results when queries filter on these newly removed indexes. Most developers do not drop indexes and then immediately try to query them so, in practice, this situation is unlikely.
> [!NOTE] > You can [track index progress](#track-index-progress).
cosmos-db Quickstart Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/quickstart-dotnet.md
Title: Quickstart - Azure Cosmos DB for MongoDB for .NET with MongoDB drier
+ Title: Quickstart - Azure Cosmos DB for MongoDB for .NET with MongoDB driver
description: Learn how to build a .NET app to manage Azure Cosmos DB for MongoDB account resources in this quickstart.
data-share How To Add Recipients https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-share/how-to-add-recipients.md
Title: Add recipients in Azure Data Share description: Learn how to add recipients to an existing data share in Azure Data Share.--++ Previously updated : 02/07/2022 Last updated : 10/27/2022 # How to add a recipient to your share
data-share How To Delete Invitation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-share/how-to-delete-invitation.md
Title: Delete an invitation in Azure Data Share description: Learn how to delete an invitation to a data share recipient in Azure Data Share.--++ Previously updated : 01/03/2022 Last updated : 10/27/2022 # How to delete an invitation to a recipient in Azure Data Share
data-share How To Revoke Share Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-share/how-to-revoke-share-subscription.md
Title: Revoke a share subscription in Azure Data Share description: Learn how to revoke a share subscription from a recipient using Azure Data Share.--++ Previously updated : 01/03/2022 Last updated : 10/31/2022 # How to revoke a consumer's share subscription in Azure Data Share
data-share Move To New Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-share/move-to-new-region.md
Title: Move Azure Data Share Accounts to another Azure region using the Azure po
description: Use Azure Resource Manager template to move Azure Data Share account from one Azure region to another using the Azure portal. Previously updated : 03/17/2022 Last updated : 10/27/2022
data-share Accept Share Invitations Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-share/scripts/powershell/accept-share-invitations-powershell.md
Title: "PowerShell script: Accept invitation from an Azure Data Share" description: This PowerShell script accepts invitations from an existing data share. --++ Previously updated : 01/03/2022 Last updated : 10/31/2022
data-share Add Datasets Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-share/scripts/powershell/add-datasets-powershell.md
Title: "PowerShell script: Add a blob dataset to an Azure Data Share" description: This PowerShell script adds a blob dataset to an existing share. --++ Previously updated : 01/03/2022 Last updated : 10/31/2022
data-share Create New Share Account Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-share/scripts/powershell/create-new-share-account-powershell.md
Title: "PowerShell script: Create new Azure Data Share account" description: This PowerShell script creates a new Data Share account.-+ Previously updated : 01/03/2022- Last updated : 10/31/2022+
data-share Create View Trigger Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-share/scripts/powershell/create-view-trigger-powershell.md
Title: "PowerShell script: Create and view an Azure Data Share snapshot triggers" description: This PowerShell script creates and gets share snapshot triggers.--++ Previously updated : 01/03/2022 Last updated : 10/31/2022
data-share Monitor Usage Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-share/scripts/powershell/monitor-usage-powershell.md
Title: "PowerShell script: Monitor usage of an Azure Data Share" description: This PowerShell script retrieves usage metrics of a sent data share.-+ Previously updated : 01/03/2022- Last updated : 10/31/2022+
data-share Set View Synchronizations Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-share/scripts/powershell/set-view-synchronizations-powershell.md
Title: "PowerShell script: Set and view Azure Data Share synchronization settings" description: This PowerShell script sets and gets share synchronization settings.-+ Previously updated : 01/03/2022- Last updated : 10/31/2022+
data-share View Sent Invitations Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-share/scripts/powershell/view-sent-invitations-powershell.md
Title: "PowerShell script: List Azure Data Share invitations sent to a consumer" description: Learn how this PowerShell script gets invitations sent to a consumer and see an example of the script that you can use.--++ Previously updated : 01/03/2022 Last updated : 10/31/2022
data-share View Share Details Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-share/scripts/powershell/view-share-details-powershell.md
Title: "PowerShell script: List existing shares in Azure Data Share" description: This PowerShell script lists and displays details of shares. --++ Previously updated : 01/03/2022 Last updated : 10/31/2022
defender-for-cloud Auto Deploy Vulnerability Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/auto-deploy-vulnerability-assessment.md
To assess your machines for vulnerabilities, you can use one of the following so
Defender for Cloud also offers vulnerability assessment for your: - SQL databases - [Explore vulnerability assessment reports in the vulnerability assessment dashboard](defender-for-sql-on-machines-vulnerability-assessment.md#explore-vulnerability-assessment-reports)-- Azure Container Registry images - [Use Microsoft Defender for container registries to scan your images for vulnerabilities](defender-for-containers-va-acr.md)-- Amazon AWS Elastic Container Registry images - [Use Microsoft Defender for container registries to scan your images for vulnerabilities](defender-for-containers-va-ecr.md)
+- Azure Container Registry images - [Use Microsoft Defender for container registries to scan your images for vulnerabilities](defender-for-containers-vulnerability-assessment-azure.md)
+- Amazon AWS Elastic Container Registry images - [Use Microsoft Defender for container registries to scan your images for vulnerabilities](defender-for-containers-vulnerability-assessment-elastic.md)
defender-for-cloud Custom Dashboards Azure Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/custom-dashboards-azure-workbooks.md
Learn more about using these scanners:
- [Find vulnerabilities with Microsoft threat and vulnerability management](deploy-vulnerability-assessment-tvm.md) - [Find vulnerabilities with the integrated Qualys scanner](deploy-vulnerability-assessment-vm.md)-- [Scan your ACR images for vulnerabilities](defender-for-containers-va-acr.md)-- [Scan your ECR images for vulnerabilities](defender-for-containers-va-ecr.md)
+- [Scan your ACR images for vulnerabilities](defender-for-containers-vulnerability-assessment-azure.md)
+- [Scan your ECR images for vulnerabilities](defender-for-containers-vulnerability-assessment-elastic.md)
- [Scan your SQL resources for vulnerabilities](defender-for-sql-on-machines-vulnerability-assessment.md) Findings for each resource type are reported in separate recommendations:
defender-for-cloud Defender For Cloud Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-cloud-glossary.md
+
+ Title: Defender for Cloud glossary
+description: The glossary provides a brief description of important Defender for Cloud platform terms and concepts.
Last updated : 10/30/2022+++
+# Defender for Cloud glossary
+
+This glossary provides a brief description of important terms and concepts for the Microsoft Defender for Cloud platform. Select the **Learn more** links to go to related terms in the glossary. This will help you to learn and use the product tools quickly and effectively.
+
+<a name="glossary-a"></a>
+## A
+| Term | Description | Learn more |
+|--|--|--|
+|**AAC**|Adaptive application controls are an intelligent and automated solution for defining allowlists of known-safe applications for your machines. |[Adaptive Application Controls](adaptive-application-controls.md)|
+| **ACR Tasks** | A suite of features within Azure container registry | [Frequently asked questions - Azure Container Registry](../container-registry/container-registry-faq.yml) |
+|**ADO**|Azure DevOps provides developer services for allowing teams to plan work, collaborate on code development, and build and deploy applications.|[What is Azure DevOps?](/azure/devops/user-guide/what-is-azure-devops) |
+|**AKS**| Azure Kubernetes Service, Microsoft's managed service for developing, deploying, and managing containerized applications.| [Kubernetes Concepts](/azure-stack/aks-hci/kubernetes-concepts)|
+|**Alerts**| Alerts defend your workloads in real-time so you can react immediately and prevent security events from developing.|[Security alerts and incidents](alerts-overview.md)|
+|**ANH** | Adaptive network hardening| [Improve your network security posture with adaptive network hardening](adaptive-network-hardening.md)|
+|**APT** | Advanced Persistent Threats | [Video: Understanding APTs](/events/teched-2012/sia303)|
+| **Arc-enabled Kubernetes**| Azure Arc-enabled Kubernetes allows you to attach and configure Kubernetes clusters running anywhere. You can connect your clusters running on other public cloud providers or clusters running on your on-premises data center.|[What is Azure Arc-enabled Kubernetes?](../azure-arc/kubernetes/overview.md)|
+|**ARM**| Azure Resource Manager, the deployment and management service for Azure.| [Azure Resource Manager Overview](/azure/azure-resource-manager/management/overview)|
+|**ASB**| Azure Security Benchmark provides recommendations on how you can secure your cloud solutions on Azure.| [Azure Security Benchmark](/azure/baselines/security-center-security-baseline) |
+|**Auto-provisioning**| To make sure that your server resources are secure, Microsoft Defender for Cloud uses agents installed on your servers to send information about your servers to Microsoft Defender for Cloud for analysis. You can use auto provisioning to quietly deploy the Azure Monitor Agent on your servers.| [Configure auto provision](../iot-dps/quick-setup-auto-provision.md)|
+
+## B
+| Term | Description | Learn more |
+|--|--|--|
+|**Blob storage**| Azure Blob Storage is the high scale object storage service for Azure and a key building block for data storage in Azure.| [What is Azure Blob storage?](/azure/storage/blobs/storage-blobs-introduction)|
+
+## C
+| Term | Description | Learn more |
+|--|--|--|
+|**Cacls** | Change access control list, a native Microsoft Windows command-line utility often used to modify security permissions on folders and files.| [access-control-lists](/windows/win32/secauthz/access-control-lists) |
+|**CIS Benchmark** | (Kubernetes) Center for Internet Security benchmark| [CIS](/azure/aks/cis-kubernetes)|
+|**CORS**| Cross origin resource sharing, an HTTP feature that enables a web application running under one domain to access resources in another domain.| [CORS](/rest/api/storageservices/cross-origin-resource-sharing--cors--support-for-the-azure-storage-services)|
+|**CNCF**|Cloud Native Computing Foundation|[Build CNCF projects by using Azure Kubernetes service](/azure/architecture/example-scenario/apps/build-cncf-incubated-graduated-projects-aks)|
+|**CSPM**|Cloud Security Posture Management| [Cloud Security Posture Management (CSPM)](concept-cloud-security-posture-management.md)|
+|**CWPP** | Cloud Workload Protection Platform | [CWPP](/azure/defender-for-cloud/overview-page)|
+
+## D
+| Term | Description | Learn more |
+|--|--|--|
+| **DDoS attack** | Distributed denial of service, a type of attack where an attacker sends more requests to an application than the application is capable of handling.| [DDOS FAQs](/azure/ddos-protection/ddos-faq)|
+
+## E
+| Term | Description | Learn more |
+|--|--|--|
+|**EDR**| Endpoint Detection and Response|[Protect your endpoints with Defender for Cloud's integrated EDR solution: Microsoft Defender for Endpoint](integration-defender-for-endpoint.md)|
+|**EKS**| Amazon Elastic Kubernetes Service, Amazon's managed service for running Kubernetes on AWS without needing to install, operate, and maintain your own Kubernetes control plane or nodes.|[EKS](https://aws.amazon.com/eks/)|
+|**eBPF**|Extended Berkeley Packet Filter |[What is eBPF?](https://ebpf.io/)|
+
+## F
+| Term | Description | Learn more |
+|--|--|--|
+|**FIM**| File Integrity Monitoring | [File Integrity Monitoring in Microsoft Defender for Cloud](file-integrity-monitoring-overview.md)|
+|**FTP** | File Transfer Protocol | [Deploy content using FTP](/azure/app-service/deploy-ftp?tabs=portal)|
+
+## G
+| Term | Description | Learn more |
+|--|--|--|
+|**GCP**| Google Cloud Platform | [Onboard a GCP project](/azure/active-directory/cloud-infrastructure-entitlement-management/onboard-gcp)|
+|**GKE**| Google Kubernetes Engine, Google's managed environment for deploying, managing, and scaling applications using GCP infrastructure.|[Deploy a Kubernetes workload using GPU sharing on your Azure Stack Edge Pro](../databox-online/azure-stack-edge-gpu-deploy-kubernetes-gpu-sharing.md)|
+
+## J
+| Term | Description | Learn more |
+|--|--|--|
+| **JIT** | Just-in-Time VM access |[Understanding just-in-time (JIT) VM access](just-in-time-access-overview.md)|
+
+## K
+| Term | Description | Learn more |
+|--|--|--|
+|**KQL**|Kusto Query Language, a tool to explore your data and discover patterns, identify anomalies and outliers, create statistical modeling, and more.| [KQL Overview](/azure/data-explorer/kusto/query/)|
+
+## L
+| Term | Description | Learn more |
+|--|--|--|
+|**LSA**| Local Security Authority| [Secure and use policies on virtual machines in Azure](../virtual-machines/security-policy.md)|
+
+## M
+| Term | Description | Learn more |
+|--|--|--|
+|**MDC**| Microsoft Defender for Cloud is a Cloud Security Posture Management (CSPM) and Cloud Workload Protection Platform (CWPP) for all of your Azure, on-premises, and multicloud (Amazon AWS and Google GCP) resources. | [What is Microsoft Defender for Cloud?](defender-for-cloud-introduction.md)|
+|**MDE**| Microsoft Defender for Endpoint is an enterprise endpoint security platform designed to help enterprise networks prevent, detect, investigate, and respond to advanced threats.|[Protect your endpoints with Defender for Cloud's integrated EDR solution: Microsoft Defender for Endpoint](integration-defender-for-endpoint.md)|
+|**MFA**|Multifactor authentication, a process in which users are prompted during the sign-in process for an additional form of identification, such as a code on their cellphone or a fingerprint scan.|[How it works: Azure AD Multi-Factor Authentication](/azure/active-directory/authentication/concept-mfa-howitworks)|
+|**MITRE ATT&CK**| A globally accessible knowledge base of adversary tactics and techniques based on real-world observations.|[MITRE ATT&CK](https://attack.mitre.org/)|
+|**MMA**| Microsoft Monitoring Agent, also known as Log Analytics Agent|[Log Analytics Agent Overview](/azure/azure-monitor/agents/log-analytics-agent)|
+
+## N
+| Term | Description | Learn more |
+|--|--|--|
+|**NGAV**| Next Generation Anti-Virus | |
+|**NIST** | National Institute of Standards and Technology|[National Institute of Standards and Technology](https://www.nist.gov/)|
+
+## R
+| Term | Description | Learn more |
+|--|--|--|
+|**RaMP**| Rapid Modernization Plan, guidance based on initiatives, giving you a set of deployment paths to more quickly implement key layers of protection.|[Zero Trust Rapid Modernization Plan](../security/fundamentals/zero-trust.md)|
+|**RBAC**| Azure role-based access control (Azure RBAC) helps you manage who has access to Azure resources, what they can do with those resources, and what areas they have access to. | [RBAC Overview](/azure/role-based-access-control/overview)|
+|**RDP** | Remote Desktop Protocol (RDP) is a sophisticated technology that uses various techniques to optimize the delivery of the server's remote graphics to the client device.| [RDP Bandwidth Requirements](/azure/virtual-desktop/rdp-bandwidth)|
+|**Recommendations**|Recommendations secure your workloads with step-by-step actions that protect your workloads from known security risks.| [What are security policies, initiatives, and recommendations?](security-policy-concept.md)|
+|**Regulatory Compliance** | Regulatory compliance refers to the discipline and process of ensuring that a company follows the laws enforced by governing bodies in their geography or rules required | [Regulatory Compliance Overview](/azure/cloud-adoption-framework/govern/policy-compliance/regulatory-compliance) |
++
+## S
+| Term | Description | Learn more |
+|--|--|--|
+|**Secure Score**|Defender for Cloud continually assesses your cross-cloud resources for security issues. It then aggregates all the findings into a single score that represents your current security situation: the higher the score, the lower the identified risk level.|[Security posture for Microsoft Defender for Cloud](secure-score-security-controls.md)|
+|**Security Initiative** | A collection of Azure Policy Definitions, or rules, that are grouped together towards a specific goal or purpose. | [What are security policies, initiatives, and recommendations?](security-policy-concept.md)|
+|**Security Policy**| An Azure rule about specific security conditions that you want controlled.|[Understanding Security Policies](security-policy-concept.md)|
+|**SOAR**| Security Orchestration Automated Response, a collection of software tools designed to collect data about security threats from multiple sources and respond to low-level security events without human assistance.| [SOAR](/azure/sentinel/automation)|
+
+## T
+| Term | Description | Learn more |
+|--|--|--|
+|**TVM**|Threat and Vulnerability Management, a built-in module in Microsoft Defender for Endpoint that can discover vulnerabilities and misconfigurations in near real time and prioritize vulnerabilities based on the threat landscape and detections in your organization.|[Investigate weaknesses with Microsoft Defender for Endpoint's threat and vulnerability management](deploy-vulnerability-assessment-tvm.md)|
+
+## Z
+| Term | Description | Learn more |
+|--|--|--|
+|**Zero-Trust**|A new security model that assumes breach and verifies each request as though it originated from an uncontrolled network.|[Zero-Trust Security](/azure/security/fundamentals/zero-trust)|
+
+## Next steps
+[Microsoft Defender for Cloud overview](overview-page.md)
+
defender-for-cloud Defender For Cloud Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-cloud-introduction.md
Review the findings from these vulnerability scanners and respond to them all fr
Learn more on the following pages: - [Defender for Cloud's integrated Qualys scanner for Azure and hybrid machines](deploy-vulnerability-assessment-vm.md)-- [Identify vulnerabilities in images in Azure container registries](defender-for-containers-va-acr.md)-- [Identify vulnerabilities in images in AWS Elastic Container Registry](defender-for-containers-va-ecr.md)
+- [Identify vulnerabilities in images in Azure container registries](defender-for-containers-vulnerability-assessment-azure.md)
+- [Identify vulnerabilities in images in AWS Elastic Container Registry](defender-for-containers-vulnerability-assessment-elastic.md)
## Enforce your security policy from the top down
defender-for-cloud Defender For Container Registries Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-container-registries-introduction.md
If you connect unsupported registries to your Azure subscription, Defender for C
### Can I customize the findings from the vulnerability scanner? Yes. If you have an organizational need to ignore a finding, rather than remediate it, you can optionally disable it. Disabled findings don't impact your secure score or generate unwanted noise.
-[Learn about creating rules to disable findings from the integrated vulnerability assessment tool](defender-for-containers-va-acr.md#disable-specific-findings).
+[Learn about creating rules to disable findings from the integrated vulnerability assessment tool](defender-for-containers-vulnerability-assessment-azure.md#disable-specific-findings).
### Why is Defender for Cloud alerting me to vulnerabilities about an image that isn't in my registry? Defender for Cloud provides vulnerability assessments for every image pushed or pulled in a registry. Some images may reuse tags from an image that was already scanned. For example, you may reassign the tag "Latest" every time you add an image to a digest. In such cases, the 'old' image does still exist in the registry and may still be pulled by its digest. If the image has security findings and is pulled, it'll expose security vulnerabilities.
Defender for Cloud provides vulnerability assessments for every image pushed or
## Next steps > [!div class="nextstepaction"]
-> [Scan your images for vulnerabilities](defender-for-containers-va-acr.md)
+> [Scan your images for vulnerabilities](defender-for-containers-vulnerability-assessment-azure.md)
defender-for-cloud Defender For Containers Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-enable.md
Learn about this plan in [Overview of Microsoft Defender for Containers](defende
You can learn more by watching these videos from the Defender for Cloud in the Field video series: -- [Microsoft Defender for Containers in a multi-cloud environment](episode-nine.md)
+- [Microsoft Defender for Containers in a multicloud environment](episode-nine.md)
- [Protect Containers in GCP with Defender for Containers](episode-ten.md) ::: zone pivot="defender-for-container-arc,defender-for-container-eks,defender-for-container-gke"
You can check out the following blogs:
Now that you enabled Defender for Containers, you can: -- [Scan your ACR images for vulnerabilities](defender-for-containers-va-acr.md)-- [Scan your Amazon AWS ECR images for vulnerabilities](defender-for-containers-va-ecr.md)
+- [Scan your ACR images for vulnerabilities](defender-for-containers-vulnerability-assessment-azure.md)
+- [Scan your Amazon AWS ECR images for vulnerabilities](defender-for-containers-vulnerability-assessment-elastic.md)
defender-for-cloud Defender For Containers Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-introduction.md
When you push an image to a container registry and while the image is stored in
When the scan completes, Defender for Containers provides details for each vulnerability detected, a security classification for each vulnerability detected, and guidance on how to remediate issues and protect vulnerable attack surfaces. Learn more about:-- [Vulnerability assessment for Azure Container Registry (ACR)](defender-for-containers-va-acr.md)-- [Vulnerability assessment for Amazon AWS Elastic Container Registry (ECR)](defender-for-containers-va-ecr.md)
+- [Vulnerability assessment for Azure Container Registry (ACR)](defender-for-containers-vulnerability-assessment-azure.md)
+- [Vulnerability assessment for Amazon AWS Elastic Container Registry (ECR)](defender-for-containers-vulnerability-assessment-elastic.md)
### View vulnerabilities for running images in Azure Container Registry (ACR)
To provide findings for the recommendation, Defender for Cloud collects the inve
:::image type="content" source="media/defender-for-containers/running-image-vulnerabilities-recommendation.png" alt-text="Screenshot showing where the recommendation is viewable." lightbox="media/defender-for-containers/running-image-vulnerabilities-recommendation-expanded.png":::
-Learn more about [viewing vulnerabilities for running images in (ACR)](defender-for-containers-va-acr.md).
+Learn more about [viewing vulnerabilities for running images in (ACR)](defender-for-containers-vulnerability-assessment-azure.md).
## Run-time protection for Kubernetes nodes and clusters
Yes.
### Does Microsoft Defender for Containers support AKS without scale set (default)?
-No. Only Azure Kubernetes Service (AKS) clusters that use virtual machine scale sets for the nodes is supported.
+No. Only Azure Kubernetes Service (AKS) clusters that use Virtual Machine Scale Sets for the nodes are supported.
### Do I need to install the Log Analytics VM extension on my AKS nodes for security protection?
defender-for-cloud Defender For Containers Vulnerability Assessment Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-vulnerability-assessment-azure.md
+
+ Title: Identify vulnerabilities in Azure Container Registry with Microsoft Defender for Cloud
+description: Learn how to use Defender for Containers to scan images in your Azure Container Registry to find vulnerabilities.
++ Last updated : 10/24/2022++++
+# Use Defender for Containers to scan your Azure Container Registry images for vulnerabilities
+
+This article explains how to use Defender for Containers to scan the container images stored in your Azure Resource Manager-based Azure Container Registry, as part of the protections provided within Microsoft Defender for Cloud.
+
+To enable scanning of vulnerabilities in containers, you have to [enable Defender for Containers](defender-for-containers-enable.md). When the scanner, powered by Qualys, reports vulnerabilities, Defender for Cloud presents the findings and related information as recommendations. In addition, the findings include related information such as remediation steps, relevant CVEs, CVSS scores, and more. You can view the identified vulnerabilities for one or more subscriptions, or for a specific registry.
+
+Defender for Cloud filters and classifies findings from the scanner. Images without vulnerabilities are marked as healthy and Defender for Cloud doesn't send notifications about healthy images to keep you from getting unwanted informational alerts.
+
+The triggers for an image scan are:
+
+- **On push** - Whenever an image is pushed to your registry, Defender for Containers automatically scans that image. To trigger the scan of an image, push it to your repository (see the sketch after this list).
+
+- **Recently pulled** - Since new vulnerabilities are discovered every day, **Microsoft Defender for Containers** also scans, on a weekly basis, any image that has been pulled within the last 30 days. There's no extra charge for these rescans; you're billed once per image.
+
+- **On import** - Azure Container Registry has import tools to bring images to your registry from Docker Hub, Microsoft Container Registry, or another Azure container registry. **Microsoft Defender for Containers** scans any supported images you import. Learn more in [Import container images to a container registry](../container-registry/container-registry-import-images.md).
+
+- **Continuous scan** - This trigger has two modes:
+
+    - A continuous scan based on an image pull. This scan is performed every seven days after an image was pulled, and only for 30 days after the image was pulled. This mode doesn't require the security profile or extension.
+
+    - (Preview) Continuous scan for running images. This scan is performed every seven days for as long as the image runs. This mode runs instead of the above mode when the Defender profile or extension is running on the cluster.
+
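+As referenced in the **On push** trigger above, a minimal sketch of pushing an image to start a scan (registry and image names are placeholders, not from this article):
+
+```bash
+az acr login --name myregistry
+docker tag aci-helloworld myregistry.azurecr.io/aci-helloworld:v1
+docker push myregistry.azurecr.io/aci-helloworld:v1
+```
+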
+When a scan is triggered, findings are available as Defender for Cloud recommendations from 2 minutes up to 15 minutes after the scan is complete.
+
+Also, check out the ability to scan container images for vulnerabilities as the images are built in your CI/CD GitHub workflows. Learn more in [Defender for DevOps](defender-for-devops-introduction.md).
+
+## Prerequisites
+
+Before you can scan your ACR images:
+
+- [Enable Defender for Containers](defender-for-containers-enable.md) for your subscription. Defender for Containers is now ready to scan images in your registries.
+
+ >[!NOTE]
+ > This feature is charged per image.
+
+- If you want to find vulnerabilities in images stored in other container registries, you can import the images into ACR and scan them.
+
+ Use the ACR tools to bring images to your registry from Docker Hub or Microsoft Container Registry. When the import completes, the imported images are scanned by the built-in vulnerability assessment solution.
+
+ Learn more in [Import container images to a container registry](../container-registry/container-registry-import-images.md)
+
+ You can also [scan images in Amazon AWS Elastic Container Registry](defender-for-containers-vulnerability-assessment-elastic.md) directly from the Azure portal.
+
+For a list of the types of images and container registries supported by Microsoft Defender for Containers, see [Availability](supported-machines-endpoint-solutions-clouds-containers.md?tabs=azure-aks#registries-and-images).
+
+## View and remediate findings
+
+1. To view the findings, open the **Recommendations** page. If issues were found, you'll see the recommendation [Container registry images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/dbd0cb49-b563-45e7-9724-889e799fa648).
+
+ ![Recommendation to remediate issues.](media/monitor-container-security/acr-finding.png)
+
+1. Select the recommendation.
+
+ The recommendation details page opens with additional information. This information includes the list of registries with vulnerable images ("Affected resources") and the remediation steps.
+
+1. Select a specific registry to see the repositories within it that have vulnerable images.
+
+ ![Select a registry.](media/monitor-container-security/acr-finding-select-registry.png)
+
+ The registry details page opens with the list of affected repositories.
+
+1. Select a specific repository to see the vulnerable images within it.
+
+ ![Select a repository.](media/monitor-container-security/acr-finding-select-repository.png)
+
+ The repository details page opens. It lists the vulnerable images together with an assessment of the severity of the findings.
+
+1. Select a specific image to see the vulnerabilities.
+
+ ![Select images.](media/monitor-container-security/acr-finding-select-image.png)
+
+ The list of findings for the selected image opens.
+
+ ![List of findings.](media/monitor-container-security/acr-findings.png)
+
+1. To learn more about a finding, select the finding.
+
+ The findings details pane opens.
+
+ [![Findings details pane.](media/monitor-container-security/acr-finding-details-pane.png)](media/monitor-container-security/acr-finding-details-pane.png#lightbox)
+
+ This pane includes a detailed description of the issue and links to external resources to help mitigate the threats.
+
+1. Follow the steps in the remediation section of this pane.
+
+1. When you've taken the steps required to remediate the security issue, replace the image in your registry:
+
+ 1. Push the updated image to trigger a scan.
+
+ 1. Check the recommendations page for the recommendation [Container registry images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/dbd0cb49-b563-45e7-9724-889e799fa648).
+
+ If the recommendation still appears and the image you've handled still appears in the list of vulnerable images, check the remediation steps again.
+
+    1. When you're sure the updated image has been pushed, scanned, and is no longer appearing in the recommendation, delete the "old" vulnerable image from your registry.
+
+## Disable specific findings
+
+> [!NOTE]
+> [!INCLUDE [Legalese](../../includes/defender-for-cloud-preview-legal-text.md)]
+
+If you have an organizational need to ignore a finding, rather than remediate it, you can optionally disable it. Disabled findings don't affect your secure score or generate unwanted noise.
+
+When a finding matches the criteria you've defined in your disable rules, it won't appear in the list of findings. Typical scenarios include:
+
+- Disable findings with severity below medium
+- Disable findings that are non-patchable
+- Disable findings with CVSS score below 6.5
+- Disable findings with specific text in the security check or category (for example, "RedHat", "CentOS Security Update for sudo")
+
+> [!IMPORTANT]
+> To create a rule, you need permissions to edit a policy in Azure Policy.
+>
+> Learn more in [Azure RBAC permissions in Azure Policy](../governance/policy/overview.md#azure-rbac-permissions-in-azure-policy).
+
+You can use any of the following criteria:
+
+- Finding ID
+- Category
+- Security check
+- CVSS v3 scores
+- Severity
+- Patchable status
+
+To create a rule:
+
+1. From the recommendations detail page for [Container registry images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/dbd0cb49-b563-45e7-9724-889e799fa648), select **Disable rule**.
+1. Select the relevant scope.
+1. Define your criteria.
+1. Select **Apply rule**.
+
+ :::image type="content" source="./media/defender-for-containers-vulnerability-assessment-azure/new-disable-rule-for-registry-finding.png" alt-text="Create a disable rule for VA findings on registry.":::
+
+1. To view, override, or delete a rule:
+ 1. Select **Disable rule**.
+ 1. From the scope list, subscriptions with active rules show as **Rule applied**.
+ :::image type="content" source="./media/remediate-vulnerability-findings-vm/modify-rule.png" alt-text="Modify or delete an existing rule.":::
+ 1. To view or delete the rule, select the ellipsis menu ("...").
+
+## FAQ
+
+### How does Defender for Containers scan an image?
+
+Defender for Containers pulls the image from the registry and runs it in an isolated sandbox with the Qualys scanner. The scanner extracts a list of known vulnerabilities.
+
+Defender for Cloud filters and classifies findings from the scanner. When an image is healthy, Defender for Cloud marks it as such. Defender for Cloud generates security recommendations only for images that have issues to be resolved. By only notifying you when there are problems, Defender for Cloud reduces the potential for unwanted informational alerts.
+
+### Can I get the scan results via REST API?
+
+Yes. The results are under [Sub-Assessments REST API](/rest/api/defenderforcloud/sub-assessments/list). Also, you can use Azure Resource Graph (ARG), the Kusto-like API for all of your resources: a query can fetch a specific scan.
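+
+For example, a hedged sketch with the Azure CLI's Resource Graph extension; the exact query shape is an assumption based on the sub-assessments resource type:
+
+```azurecli
+# Requires the Resource Graph extension: az extension add --name resource-graph
+az graph query --first 10 -q "securityresources | where type == 'microsoft.security/assessments/subassessments' | where id contains 'Microsoft.ContainerRegistry/registries' | project id, properties"
+```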
+
+### Why is Defender for Cloud alerting me to vulnerabilities about an image that isn't in my registry?
+
+Some images may reuse tags from an image that was already scanned. For example, you may reassign the tag "Latest" every time you add an image to a digest. In such cases, the 'old' image does still exist in the registry and may still be pulled by its digest. If the image has security findings and is pulled, it will expose security vulnerabilities.
+
+## Next steps
+
+Learn more about the [advanced protection plans of Microsoft Defender for Cloud](enhanced-security-features-overview.md).
defender-for-cloud Defender For Containers Vulnerability Assessment Elastic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-vulnerability-assessment-elastic.md
+
+ Title: Identify vulnerabilities in Amazon AWS Elastic Container Registry with Microsoft Defender for Cloud
+description: Learn how to use Defender for Containers to scan images in your Amazon AWS Elastic Container Registry (ECR) to find vulnerabilities.
++ Last updated : 09/11/2022++++
+# Use Defender for Containers to scan your Amazon AWS Elastic Container Registry images for vulnerabilities (Preview)
+
+Defender for Containers lets you scan the container images stored in your Amazon AWS Elastic Container Registry (ECR) as part of the protections provided within Microsoft Defender for Cloud.
+
+To enable scanning of vulnerabilities in containers, you have to [connect your AWS account to Defender for Cloud](quickstart-onboard-aws.md) and [enable Defender for Containers](defender-for-containers-enable.md). The agentless scanner, powered by the open-source scanner Trivy, scans your ECR repositories and reports vulnerabilities.
+
+Defender for Containers creates resources in your AWS account to build an inventory of the software in your images. The scan then sends only the software inventory to Defender for Cloud. This architecture protects your information privacy and intellectual property, and also keeps the outbound network traffic to a minimum. Defender for Containers creates an ECS cluster in a dedicated VPC, an internet gateway, and an S3 bucket in the us-east-1 and eu-central-1 regions to build the software inventory.
+
+Defender for Cloud filters and classifies findings from the software inventory that the scanner creates. Images without vulnerabilities are marked as healthy and Defender for Cloud doesn't send notifications about healthy images to keep you from getting unwanted informational alerts.
+
+The triggers for an image scan are:
+
+- **On push** - Whenever an image is pushed to your registry, Defender for Containers automatically scans that image within 2 hours.
+
+- **Continuous scan** - Defender for Containers reassesses the images based on the latest database of vulnerabilities of Trivy. This reassessment is performed weekly.
+
+## Prerequisites
+
+Before you can scan your ECR images:
+
+- [Connect your AWS account to Defender for Cloud and enable Defender for Containers](quickstart-onboard-aws.md)
+- You must have at least one free VPC in the `us-east-1` and `eu-central-1` regions to host the AWS resources that build the software inventory.
+
+For a list of the types of images not supported by Microsoft Defender for Containers, see [Availability](supported-machines-endpoint-solutions-clouds-containers.md?tabs=aws-eks#images).
+
+## Enable vulnerability assessment
+
+To enable vulnerability assessment:
+
+1. From Defender for Cloud's menu, open **Environment settings**.
+1. Select the AWS connector that connects to your AWS account.
+
+ :::image type="content" source="media/defender-for-kubernetes-intro/select-aws-connector.png" alt-text="Screenshot of Defender for Cloud's environment settings page showing an AWS connector.":::
+
+1. In the Monitoring Coverage section of the Containers plan, select **Settings**.
+
+ :::image type="content" source="media/defender-for-containers-vulnerability-assessment-elastic/aws-containers-settings.png" alt-text="Screenshot of Containers settings for the AWS connector." lightbox="media/defender-for-containers-vulnerability-assessment-elastic/aws-containers-settings.png":::
+
+1. Turn on **Vulnerability assessment**.
+
+ :::image type="content" source="media/defender-for-containers-vulnerability-assessment-elastic/aws-containers-enable-va.png" alt-text="Screenshot of the toggle to turn on vulnerability assessment for ECR images.":::
+
+1. Select **Save** > **Next: Configure access**.
+
+1. Download the CloudFormation template.
+
+1. Using the downloaded CloudFormation template, create the stack in AWS as instructed on screen (a CLI sketch follows these steps). If you're onboarding a management account, you'll need to run the CloudFormation template both as Stack and as StackSet. It takes up to 30 minutes for the AWS resources to be created. The resources have the prefix `defender-for-containers-va`.
+
+1. Select **Next: Review and generate**.
+
+1. Select **Update**.
+
+Findings are available as Defender for Cloud recommendations from 2 hours after vulnerability assessment is turned on. The recommendation also shows any reason that a repository is identified as not scannable ("Not applicable"), such as images pushed more than 3 months before you enabled vulnerability assessment.
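+
+If you script the stack creation rather than using the AWS console, a hedged sketch with the AWS CLI (the template file name and stack name are assumptions; run it in each required region):
+
+```bash
+aws cloudformation create-stack \
+  --region us-east-1 \
+  --stack-name defender-for-containers-va \
+  --template-body file://defender-for-containers-va.yaml \
+  --capabilities CAPABILITY_NAMED_IAM
+```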
+
+## View and remediate findings
+
+Vulnerability assessment lists the repositories with vulnerable images as the results of the [Elastic container registry images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/03587042-5d4b-44ff-af42-ae99e3c71c87) recommendation. From the recommendation, you can identify vulnerable images and get details about the vulnerabilities.
+
+Vulnerability findings for an image are still shown in the recommendation for 48 hours after an image is deleted.
+
+1. To view the findings, open the **Recommendations** page. If the scan found issues, you'll see the recommendation [Elastic container registry images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/03587042-5d4b-44ff-af42-ae99e3c71c87).
+
+ :::image type="content" source="media/defender-for-containers-vulnerability-assessment-elastic/elastic-container-registry-recommendation.png" alt-text="Screenshot of the Recommendation to remediate findings in ECR images.":::
+
+1. Select the recommendation.
+
+ The recommendation details page opens with additional information. This information includes the list of repositories with vulnerable images ("Affected resources") and the remediation steps.
+
+1. Select specific repositories to see the vulnerabilities found in images in those repositories.
+
+ :::image type="content" source="media/defender-for-containers-vulnerability-assessment-elastic/elastic-container-registry-unhealthy-repositories.png" alt-text="Screenshot of ECR repositories that have vulnerabilities." lightbox="media/defender-for-containers-vulnerability-assessment-elastic/elastic-container-registry-unhealthy-repositories.png":::
+
+ The vulnerabilities section shows the identified vulnerabilities.
+
+1. To learn more about a vulnerability, select the vulnerability.
+
+ The vulnerability details pane opens.
+
+ :::image type="content" source="media/defender-for-containers-vulnerability-assessment-elastic/elastic-container-registry-vulnerability.png" alt-text="Screenshot of vulnerability details in ECR repositories." lightbox="media/defender-for-containers-vulnerability-assessment-elastic/elastic-container-registry-vulnerability.png":::
+
+ This pane includes a detailed description of the issue and links to external resources to help mitigate the threats.
+
+1. Follow the steps in the remediation section of the recommendation.
+
+1. When you've taken the steps required to remediate the security issue, replace the image in your registry:
+
+ 1. Push the updated image to trigger a scan.
+
+ 1. Check the recommendations page for the recommendation [Container registry images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/dbd0cb49-b563-45e7-9724-889e799fa648).
+
+ If the recommendation still appears and the image you've handled still appears in the list of vulnerable images, check the remediation steps again.
+
+    1. When you're sure the updated image has been pushed, scanned, and is no longer appearing in the recommendation, delete the "old" vulnerable image from your registry.
+
+<!--
+## Disable specific findings
+
+> [!NOTE]
+> [!INCLUDE [Legalese](../../includes/defender-for-cloud-preview-legal-text.md)]
+
+If you have an organizational need to ignore a finding, rather than remediate it, you can optionally disable it. Disabled findings don't affect your secure score or generate unwanted noise.
+
+When a finding matches the criteria you've defined in your disable rules, it won't appear in the list of findings. Typical scenarios include:
+
+- Disable findings with severity below medium
+- Disable findings that are non-patchable
+- Disable findings with CVSS score below 6.5
+- Disable findings with specific text in the security check or category (for example, "RedHat", "CentOS Security Update for sudo")
+
+> [!IMPORTANT]
+> To create a rule, you need permissions to edit a policy in Azure Policy.
+>
+> Learn more in [Azure RBAC permissions in Azure Policy](../governance/policy/overview.md#azure-rbac-permissions-in-azure-policy).
+
+You can use any of the following criteria:
+
+- Finding ID
+- Category
+- Security check
+- CVSS v3 scores
+- Severity
+- Patchable status
+
+To create a rule:
+
+1. From the recommendations detail page for [Container registry images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/dbd0cb49-b563-45e7-9724-889e799fa648), select **Disable rule**.
+1. Select the relevant scope.
+1. Define your criteria.
+1. Select **Apply rule**.
+
+ :::image type="content" source="media/defender-for-containers-vulnerability-assessment-azure/new-disable-rule-for-registry-finding.png" alt-text="Screenshot of how to create a disable rule for VA findings on registry.":::
+
+1. To view, override, or delete a rule:
+ 1. Select **Disable rule**.
+ 1. From the scope list, subscriptions with active rules show as **Rule applied**.
+ :::image type="content" source="./media/remediate-vulnerability-findings-vm/modify-rule.png" alt-text="Screenshot of how to modify or delete an existing rule.":::
+ 1. To view or delete the rule, select the ellipsis menu ("..."). -->
+
+## FAQs
+
+### Can I get the scan results via REST API?
+
+Yes. The results are under [Sub-Assessments REST API](/rest/api/defenderforcloud/sub-assessments/list). Also, you can use Azure Resource Graph (ARG), the Kusto-like API for all of your resources: a query can fetch a specific scan.
+
+## Next steps
+
+Learn more about:
+
+- [Advanced protection plans of Microsoft Defender for Cloud](enhanced-security-features-overview.md)
+- [Multicloud protections](multicloud.yml) for your AWS account
defender-for-cloud Defender For Storage Exclude https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-exclude.md
-# Exclude a storage account from Microsoft Defender for Storage protections
+# Exclude a storage account from a protected subscription in the per-transaction plan
-When you [enable Microsoft Defender for Storage](../storage/common/azure-defender-storage-configure.md#set-up-microsoft-defender-for-cloud) on a subscription, all current and future Azure Storage accounts in that subscription are protected. If you have specific accounts that you want to exclude from the Defender for Storage protections, you can exclude them using the Azure portal, PowerShell, or the Azure CLI.
+When you [enable Microsoft Defender for Storage](../storage/common/azure-defender-storage-configure.md) on a subscription for the per-transaction pricing, all current and future Azure Storage accounts in that subscription are protected. You can exclude specific storage accounts from the Defender for Storage protections using the Azure portal, PowerShell, or the Azure CLI.
We don't recommend that you exclude storage accounts from Defender for Storage because attackers can use any opening in order to compromise your environment. If you want to optimize your Azure costs and remove storage accounts that you feel are low risk from Defender for Storage, you can use the [Price Estimation Workbook](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/SecurityMenuBlade/~/28) in the Azure portal to evaluate the cost savings.
-## Exclude an Azure Storage account
+## Exclude an Azure Storage account from protection on a subscription with per-transaction pricing
To exclude an Azure Storage account from Microsoft Defender for Storage:
> [!TIP]
> Learn more about tags in [az tag](/cli/azure/tag).
-1. Disable Microsoft Defender for Storage for the desired account on the relevant subscription with the ``security atp storage`` command (using the same resource ID):
+1. Disable Microsoft Defender for Storage for the desired account on the relevant subscription with the `security atp storage` command (using the same resource ID):
```azurecli
az security atp storage update --resource-group MyResourceGroup --storage-account MyStorageAccount --is-enabled false
```
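To confirm the exclusion took effect, you can read the setting back with the same resource names; a minimal sketch:

```azurecli
# Sketch: verify that Defender for Storage is now disabled for the excluded account.
# "isEnabled": false in the output indicates the account is excluded.
az security atp storage show --resource-group MyResourceGroup --storage-account MyStorageAccount
```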
The Microsoft Defender for Storage account will inherit the tag of the Databrick
## Next steps

-- Explore the [Microsoft Defender for Storage – Price Estimation Dashboard](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/microsoft-defender-for-storage-price-estimation-dashboard/ba-p/2429724)
+- Explore the [Microsoft Defender for Storage – Price Estimation Dashboard](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/microsoft-defender-for-storage-price-estimation-dashboard/ba-p/2429724)
defender-for-cloud Defender For Storage Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-introduction.md
**Microsoft Defender for Storage** is an Azure-native layer of security intelligence that detects unusual and potentially harmful attempts to access or exploit your storage accounts. It uses advanced threat detection capabilities and [Microsoft Threat Intelligence](https://go.microsoft.com/fwlink/?linkid=2128684) data to provide contextual security alerts. Those alerts also include steps to mitigate the detected threats and prevent future attacks.
-You can [enable Microsoft Defender for Storage](../storage/common/azure-defender-storage-configure.md#set-up-microsoft-defender-for-cloud) at either the subscription level (recommended) or the resource level.
+You can [enable Microsoft Defender for Storage](../storage/common/azure-defender-storage-configure.md) at either the subscription level (recommended) or the resource level.
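For example, a minimal sketch of subscription-level enablement with the Azure CLI (this assumes the `az security pricing` commands and the `StorageAccounts` plan name):

```azurecli
# Sketch: enable Microsoft Defender for Storage on the current subscription.
az security pricing create --name StorageAccounts --tier "standard"
```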
Defender for Storage continually analyzes the telemetry stream generated by the [Azure Blob Storage](https://azure.microsoft.com/services/storage/blobs/) and Azure Files services. When potentially malicious activities are detected, security alerts are generated. These alerts are displayed in Microsoft Defender for Cloud, together with details of the suspicious activity, relevant investigation steps, remediation actions, and security recommendations.
Alerts include details of the incident that triggered them, and recommendations
> [!TIP]
> For a comprehensive list of all Defender for Storage alerts, see the [alerts reference page](alerts-reference.md#alerts-azurestorage). This is useful for workload owners who want to know what threats can be detected and help SOC teams gain familiarity with detections before investigating them. Learn more about what's in a Defender for Cloud security alert, and how to manage your alerts in [Manage and respond to security alerts in Microsoft Defender for Cloud](managing-and-responding-alerts.md).
+## Explore security anomalies
-### Limitations of hash reputation analysis
+When storage activity anomalies occur, you receive an email notification with information about the suspicious security event. Details of the event include:
+
+- The nature of the anomaly
+- The storage account name
+- The event time
+- The storage type
+- The potential causes
+- The investigation steps
+- The remediation steps
+
+The email also includes details on possible causes and recommended actions to investigate and mitigate the potential threat.
++
+You can review and manage your current security alerts from Microsoft Defender for Cloud's [Security alerts tile](managing-and-responding-alerts.md). Select an alert for details and actions for investigating the current threat and addressing future threats.
++
+## Limitations of hash reputation analysis
- **Hash reputation isn't deep file inspection** - Microsoft Defender for Storage uses hash reputation analysis supported by [Microsoft Threat Intelligence](https://go.microsoft.com/fwlink/?linkid=2128684) to determine whether an uploaded file is suspicious. The threat protection tools don't scan the uploaded files; rather, they analyze the telemetry generated from the Blob Storage and Files services. Defender for Storage then compares the hashes of newly uploaded files with hashes of known viruses, trojans, spyware, and ransomware.
defender-for-cloud Defender For Storage Test https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-test.md
# Trigger a test alert for Microsoft Defender for Storage
-After you enable Defender for Storage, you can create a test alert to demonstrate how Defender for Storage recognizes and alerts on security risks.
+After you enable Defender for Storage, you can create a test alert to demonstrate how Defender for Storage recognizes and triggers alerts on security risks.
## Demonstrate Defender for Storage alerts

To test the security alerts from Microsoft Defender for Storage in your environment, generate the alert "Access from a Tor exit node to a storage account" with the following steps:
-1. Open a storage account with [Microsoft Defender for Storage enabled](../storage/common/azure-defender-storage-configure.md#set-up-microsoft-defender-for-cloud).
+1. Open a storage account with [Microsoft Defender for Storage enabled](../storage/common/azure-defender-storage-configure.md).
1. From the sidebar, select "Containers" and open an existing container or create a new one.

   :::image type="content" source="media/defender-for-storage-introduction/opening-storage-container.png" alt-text="Opening a blob container from an Azure Storage account." lightbox="media/defender-for-storage-introduction/opening-storage-container.png":::
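If you prefer to script the container-creation step, a minimal Azure CLI sketch (the storage account and container names are placeholders):

```azurecli
# Sketch: create a test blob container in the protected storage account.
# --auth-mode login uses your Azure AD sign-in; data-plane RBAC permissions are assumed.
az storage container create --account-name mystorageaccount --name defender-test --auth-mode login
```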
defender-for-cloud Deploy Vulnerability Assessment Byol Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/deploy-vulnerability-assessment-byol-vm.md
When you set up your solution, you must choose a resource group to attach it to.
Defender for Cloud also offers vulnerability analysis for your:

- SQL databases - [Explore vulnerability assessment reports in the vulnerability assessment dashboard](defender-for-sql-on-machines-vulnerability-assessment.md#explore-vulnerability-assessment-reports)
-- Azure Container Registry images - [Use Microsoft Defender for container registries to scan your images for vulnerabilities](defender-for-containers-va-acr.md)
-- Amazon AWS Elastic Container Registry images - [Use Microsoft Defender for container registries to scan your images for vulnerabilities](defender-for-containers-va-ecr.md)
+- Azure Container Registry images - [Use Microsoft Defender for container registries to scan your images for vulnerabilities](defender-for-containers-vulnerability-assessment-azure.md)
+- Amazon AWS Elastic Container Registry images - [Use Microsoft Defender for container registries to scan your images for vulnerabilities](defender-for-containers-vulnerability-assessment-elastic.md)
defender-for-cloud Deploy Vulnerability Assessment Tvm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/deploy-vulnerability-assessment-tvm.md
You can check out the following blogs:
Defender for Cloud also offers vulnerability analysis for your:

- SQL databases - [Explore vulnerability assessment reports in the vulnerability assessment dashboard](defender-for-sql-on-machines-vulnerability-assessment.md#explore-vulnerability-assessment-reports)
-- Azure Container Registry images - [Use Microsoft Defender for container registries to scan your images for vulnerabilities](defender-for-containers-va-acr.md)
-- Amazon AWS Elastic Container Registry images - [Use Microsoft Defender for container registries to scan your images for vulnerabilities](defender-for-containers-va-ecr.md)
+- Azure Container Registry images - [Use Microsoft Defender for container registries to scan your images for vulnerabilities](defender-for-containers-vulnerability-assessment-azure.md)
+- Amazon AWS Elastic Container Registry images - [Use Microsoft Defender for container registries to scan your images for vulnerabilities](defender-for-containers-vulnerability-assessment-elastic.md)
defender-for-cloud Deploy Vulnerability Assessment Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/deploy-vulnerability-assessment-vm.md
Within 48 hrs of the disclosure of a critical vulnerability, Qualys incorporates
Defender for Cloud also offers vulnerability analysis for your:

- SQL databases - [Explore vulnerability assessment reports in the vulnerability assessment dashboard](defender-for-sql-on-machines-vulnerability-assessment.md#explore-vulnerability-assessment-reports)
-- Azure Container Registry images - [Use Defender for Containers to scan your ACR images for vulnerabilities](defender-for-containers-va-acr.md)
+- Azure Container Registry images - [Use Defender for Containers to scan your ACR images for vulnerabilities](defender-for-containers-vulnerability-assessment-azure.md)
defender-for-cloud Enhanced Security Features Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/enhanced-security-features-overview.md
If you enable the Servers plan on cross-subscription workspaces, connected VMs f
### Will I be charged for machines without the Log Analytics agent installed?
-Yes. When you enable [Microsoft Defender for Servers](defender-for-servers-introduction.md) on an Azure subscription or a connected AWS account, you'll be charged for all machines that are connected to your Azure subscription or AWS account. The term machines include Azure virtual machines, Azure virtual machine scale sets instances, and Azure Arc-enabled servers. Machines that don't have Log Analytics installed are covered by protections that don't depend on the Log Analytics agent.
+Yes. When you enable [Microsoft Defender for Servers](defender-for-servers-introduction.md) on an Azure subscription or a connected AWS account, you'll be charged for all machines that are connected to your Azure subscription or AWS account. The term *machines* includes Azure virtual machines, Azure Virtual Machine Scale Sets instances, and Azure Arc-enabled servers. Machines that don't have the Log Analytics agent installed are covered by protections that don't depend on the Log Analytics agent.
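To check whether a subscription has this plan turned on (and is therefore billed per machine), you can read the plan state back; a minimal Azure CLI sketch:

```azurecli
# Sketch: show the Microsoft Defender for Servers plan for the current subscription.
# "pricingTier": "Standard" means the plan (and per-machine billing) is on.
az security pricing show --name VirtualMachines
```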
### If a Log Analytics agent reports to multiple workspaces, will I be charged twice?
defender-for-cloud Github Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/github-action.md
Security DevOps uses the following Open Source tools:
    # macos-latest support coming soon
    runs-on: windows-latest
- steps:
- - uses: actions/checkout@v2
+ steps:
+ - uses: actions/checkout@v2
- - uses: actions/setup-dotnet@v1
- with:
- dotnet-version: |
- 5.0.x
- 6.0.x
+ - uses: actions/setup-dotnet@v1
+ with:
+ dotnet-version: |
+ 5.0.x
+ 6.0.x
      # Run analyzers
      - name: Run Microsoft Security DevOps Analysis
        uses: microsoft/security-devops-action@preview
        id: msdo
- # Upload alerts to the Security tab
- - name: Upload alerts to Security tab
- uses: github/codeql-action/upload-sarif@v1
- with:
- sarif_file: ${{ steps.msdo.outputs.sarifFile }}
+ # Upload alerts to the Security tab
+ - name: Upload alerts to Security tab
+ uses: github/codeql-action/upload-sarif@v1
+ with:
+ sarif_file: ${{ steps.msdo.outputs.sarifFile }}
```

For details on various input options, see [action.yml](https://github.com/microsoft/security-devops-action/blob/main/action.yml)
defender-for-cloud Monitoring Components https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/monitoring-components.md
Last updated 09/12/2022
# How does Defender for Cloud collect data?
-Defender for Cloud collects data from your Azure virtual machines (VMs), virtual machine scale sets, IaaS containers, and non-Azure (including on-premises) machines to monitor for security vulnerabilities and threats. Some Defender plans require monitoring components to collect data from your workloads.
+Defender for Cloud collects data from your Azure virtual machines (VMs), Virtual Machine Scale Sets, IaaS containers, and non-Azure (including on-premises) machines to monitor for security vulnerabilities and threats. Some Defender plans require monitoring components to collect data from your workloads.
-Data collection is required to provide visibility into missing updates, misconfigured OS security settings, endpoint protection status, and health and threat protection. Data collection is only needed for compute resources such as VMs, virtual machine scale sets, IaaS containers, and non-Azure computers.
+Data collection is required to provide visibility into missing updates, misconfigured OS security settings, endpoint protection status, and health and threat protection. Data collection is only needed for compute resources such as VMs, Virtual Machine Scale Sets, IaaS containers, and non-Azure computers.
You can benefit from Microsoft Defender for Cloud even if you don't provision agents. However, you'll have limited security and the capabilities listed above aren't supported.
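If you do want the agent-based capabilities, one option is to turn on automatic provisioning of the Log Analytics agent. A minimal Azure CLI sketch (assumes the default subscription context):

```azurecli
# Sketch: enable automatic provisioning of the Log Analytics agent
# on all supported machines in the current subscription.
az security auto-provisioning-setting update --name "default" --auto-provision "On"
```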
defender-for-cloud Partner Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/partner-integration.md
Learn more about the integration of [vulnerability scanning tools from Qualys](d
Defender for Cloud also offers vulnerability analysis for your:

- SQL databases - [Explore vulnerability assessment reports in the vulnerability assessment dashboard](defender-for-sql-on-machines-vulnerability-assessment.md#explore-vulnerability-assessment-reports)
-- Azure Container Registry images - [Use Microsoft Defender for container registries to scan your images for vulnerabilities](defender-for-containers-va-acr.md)
-- Amazon AWS Elastic Container Registry images - [Use Microsoft Defender for container registries to scan your images for vulnerabilities](defender-for-containers-va-ecr.md)
+- Azure Container Registry images - [Use Microsoft Defender for container registries to scan your images for vulnerabilities](defender-for-containers-vulnerability-assessment-azure.md)
+- Amazon AWS Elastic Container Registry images - [Use Microsoft Defender for container registries to scan your images for vulnerabilities](defender-for-containers-vulnerability-assessment-elastic.md)
## How security solutions are integrated

Azure security solutions that are deployed from Defender for Cloud are automatically connected. You can also connect other security data sources, including computers running on-premises or in other clouds.
defender-for-cloud Quickstart Onboard Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-gcp.md
Follow the steps below to create your GCP cloud connector.
|--|--|
| CSPM service account reader role <br> Microsoft Defender for Cloud identity federation <br> CSPM identity pool <br>*Microsoft Defender for Servers* service account (when the servers plan is enabled) <br>*Azure-Arc for servers onboarding* service account (when the Arc for servers auto-provisioning is enabled) | Microsoft Defender Containers' service account role <br> Microsoft Defender Data Collector service account role <br> Microsoft Defender for Cloud identity pool |
-(**Servers/SQL only**) When Arc auto-provisioning is enabled, copy the unique numeric ID presented at the end of the Cloud Shell script.
--
-To locate the unique numeric ID in the GCP portal, navigate to **IAM & Admin** > **Service Accounts**, locate `Azure-Arc for servers onboarding` in the Name column, and copy the unique numeric ID number (OAuth 2 Client ID).
-
-1. Navigate back to the Microsoft Defender for Cloud portal.
-
-1. (Optional) If you changed any of the names of any of the resources, update the names in the appropriate fields.
-
-1. Select the **Next: Review and generate >**.
-
-1. Ensure the information presented is correct.
-
-1. Select the **Create**.
After you create a connector, a scan starts on your GCP environment. New recommendations appear in Defender for Cloud within 6 hours. If you enabled auto-provisioning, Azure Arc and any enabled extensions are installed automatically for each new resource detected.

## (Optional) Configure selected plans
Connecting your GCP project is part of the multicloud experience available in Mi
- [Connect your AWS accounts to Microsoft Defender for Cloud](quickstart-onboard-aws.md)
- [Google Cloud resource hierarchy](https://cloud.google.com/resource-manager/docs/cloud-platform-resource-hierarchy) - Learn about the Google Cloud resource hierarchy in Google's online docs
-- [Troubleshoot your multicloud connectors](troubleshooting-guide.md#troubleshooting-the-native-multicloud-connector)
+- [Troubleshoot your multicloud connectors](troubleshooting-guide.md#troubleshooting-the-native-multicloud-connector)
defender-for-cloud Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes-archive.md
Changes in our roadmap and priorities have removed the need for the network traf
Defender for Containers' image scan now supports Windows images that are hosted in Azure Container Registry. This feature is free while in preview, and will incur a cost when it becomes generally available.
-Learn more in [Use Microsoft Defender for Container to scan your images for vulnerabilities](defender-for-containers-va-acr.md).
+Learn more in [Use Microsoft Defender for Container to scan your images for vulnerabilities](defender-for-containers-vulnerability-assessment-azure.md).
### New alert for Microsoft Defender for Storage (preview)
It's likely that this change will impact your secure scores. For most subscripti
### Azure Defender for container registries now scans for vulnerabilities in registries protected with Azure Private Link
-Azure Defender for container registries includes a vulnerability scanner to scan images in your Azure Container Registry registries. Learn how to scan your registries and remediate findings in [Use Azure Defender for container registries to scan your images for vulnerabilities](defender-for-containers-va-acr.md).
+Azure Defender for container registries includes a vulnerability scanner to scan images in your Azure Container Registry registries. Learn how to scan your registries and remediate findings in [Use Azure Defender for container registries to scan your images for vulnerabilities](defender-for-containers-vulnerability-assessment-azure.md).
To limit access to a registry hosted in Azure Container Registry, assign virtual network private IP addresses to the registry endpoints and use Azure Private Link as explained in [Connect privately to an Azure container registry using Azure Private Link](../container-registry/container-registry-private-link.md).
Learn more about Security Center's vulnerability scanners:
- [Azure Defender's integrated Qualys vulnerability scanner for Azure and hybrid machines](deploy-vulnerability-assessment-vm.md)
- [Azure Defender's integrated vulnerability assessment scanner for SQL servers](defender-for-sql-on-machines-vulnerability-assessment.md)
-- [Azure Defender's integrated vulnerability assessment scanner for container registries](defender-for-containers-va-acr.md)
+- [Azure Defender's integrated vulnerability assessment scanner for container registries](defender-for-containers-vulnerability-assessment-azure.md)
### SQL data classification recommendation severity changed
With the vTPM enabled, the **Guest Attestation extension** can remotely validate
- **Secure Boot should be enabled on supported Windows virtual machines**
- **Guest Attestation extension should be installed on supported Windows virtual machines**
-- **Guest Attestation extension should be installed on supported Windows virtual machine scale sets**
+- **Guest Attestation extension should be installed on supported Windows Virtual Machine Scale Sets**
- **Guest Attestation extension should be installed on supported Linux virtual machines**
-- **Guest Attestation extension should be installed on supported Linux virtual machine scale sets**
+- **Guest Attestation extension should be installed on supported Linux Virtual Machine Scale Sets**
Learn more in [Trusted launch for Azure virtual machines](../virtual-machines/trusted-launch.md).
New vulnerabilities are discovered every day. With this update, container images
Scanning is charged on a per-image basis, so there's no additional charge for these rescans.
-Learn more about this scanner in [Use Azure Defender for container registries to scan your images for vulnerabilities](defender-for-containers-va-acr.md).
+Learn more about this scanner in [Use Azure Defender for container registries to scan your images for vulnerabilities](defender-for-containers-vulnerability-assessment-azure.md).
### Use Azure Defender for Kubernetes to protect hybrid and multicloud Kubernetes deployments (in preview)
This option is available from the recommendations details pages for:
- **Vulnerabilities in Azure Container Registry images should be remediated**
- **Vulnerabilities in your virtual machines should be remediated**
-Learn more in [Disable specific findings for your container images](defender-for-containers-va-acr.md#disable-specific-findings) and [Disable specific findings for your virtual machines](remediate-vulnerability-findings-vm.md#disable-specific-findings).
+Learn more in [Disable specific findings for your container images](defender-for-containers-vulnerability-assessment-azure.md#disable-specific-findings) and [Disable specific findings for your virtual machines](remediate-vulnerability-findings-vm.md#disable-specific-findings).
### Exempt a resource from a recommendation
The security findings are now available for export through continuous export whe
Related pages:

- [Security Center's integrated Qualys vulnerability assessment solution for Azure virtual machines](deploy-vulnerability-assessment-vm.md)
-- [Security Center's integrated vulnerability assessment solution for Azure Container Registry images](defender-for-containers-va-acr.md)
+- [Security Center's integrated vulnerability assessment solution for Azure Container Registry images](defender-for-containers-vulnerability-assessment-azure.md)
- [Continuous export](continuous-export.md)

### Prevent security misconfigurations by enforcing recommendations when creating new resources
Updates in November include:
- [Support for custom policies (preview)](#support-for-custom-policies-preview)
- [Extending Azure Security Center coverage with platform for community and partners](#extending-azure-security-center-coverage-with-platform-for-community-and-partners)
- [Advanced integrations with export of recommendations and alerts (preview)](#advanced-integrations-with-export-of-recommendations-and-alerts-preview)
-- [Onboard on-prem servers to Security Center from Windows Admin Center (preview)](#onboard-on-prem-servers-to-security-center-from-windows-admin-center-preview)
+- [Onboard on-premises servers to Security Center from Windows Admin Center (preview)](#onboard-on-premises-servers-to-security-center-from-windows-admin-center-preview)
### Threat Protection for Azure Key Vault in North America Regions (preview)
In order to enable enterprise level scenarios on top of Security Center, it's no
- With export to Log Analytics workspace, you can create custom dashboards with Power BI.
- With export to Event Hubs, you'll be able to export Security Center alerts and recommendations to your third-party SIEMs, to a third-party solution, or to Azure Data Explorer.
-### Onboard on-prem servers to Security Center from Windows Admin Center (preview)
+### Onboard on-premises servers to Security Center from Windows Admin Center (preview)
Windows Admin Center is a management portal for Windows servers that aren't deployed in Azure; it offers them several Azure management capabilities, such as backup and system updates. We've recently added the ability to onboard these non-Azure servers for protection by ASC directly from the Windows Admin Center experience.
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Agentless vulnerability scanning is available in both Defender Cloud Security Po
### Defender for DevOps (Preview)
-Microsoft Defender for Cloud enables comprehensive visibility, posture management, and threat protection across multicloud environments including Azure, AWS, Google, and on-premises resources.
+Microsoft Defender for Cloud enables comprehensive visibility, posture management, and threat protection across hybrid and multicloud environments including Azure, AWS, Google, and on-premises resources.