Updates from: 04/18/2022 01:04:06
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/service-limits.md
Title: Azure AD B2C service limits and restrictions-
+ Title: Azure Active Directory B2C service limits and restrictions
description: Reference for service limits and restrictions for Azure Active Directory B2C service. -+ Previously updated : 12/21/2021-+ Last updated : 04/15/2022 zone_pivot_groups: b2c-policy-type
zone_pivot_groups: b2c-policy-type
This article outlines the usage constraints and other service limits for the Azure Active Directory B2C (Azure AD B2C) service. These limits help protect the service by effectively managing threats and ensuring a high level of service quality.
+> [!NOTE]
+> To increase any of the service limits mentioned in this article, contact **[Support](find-help-open-support-ticket.md)**.
+ ## User/consumption related limits The number of users able to authenticate through an Azure AD B2C tenant is gated through request limits. The following table illustrates the request limits for your Azure AD B2C tenant.
The number of users able to authenticate through an Azure AD B2C tenant is gated
Azure AD B2C is compliant with [OAuth 2.0](https://datatracker.ietf.org/doc/html/rfc6749), [OpenID Connect (OIDC)](https://openid.net/certification), and [SAML](http://saml.xml.org/saml-specifications) protocols. It provides user authentication and single sign-on (SSO) functionality, with the endpoints listed in the following table.
-The frequency of requests made to Azure AD B2C endpoints determine the overall token issuance capability. Azure AD B2C exposes endpoints which consume a different number of requests. Review the [Authentication Protocols](./protocols-overview.md) article for more information on which endpoints are consumed by your application.
+The frequency of requests made to Azure AD B2C endpoints determines the overall token issuance capability. Azure AD B2C exposes endpoints that each consume a different number of requests. Review the [Authentication Protocols](./protocols-overview.md) article for more information on which endpoints are consumed by your application.
|Endpoint |Endpoint type |Requests consumed | |--|||
The token issuance rate of a Custom Policy is dependent on the number of request
|SocialAndLocalAccounts| Federated account sign-in|SignUpOrSignIn| 4| |SocialAndLocalAccounts| Federated account sign-up|SignUpOrSignIn| 6| |SocialAndLocalAccountsWithMfa| Local account sign-in with MFA|SignUpOrSignIn |6|
-|SocialAndLocalAccountsWithMfa| Local account sign-up with MFA|SignUpOrSignIn |10|
+|SocialAndLocalAccountsWithMfa| Local account sign up with MFA|SignUpOrSignIn |10|
|SocialAndLocalAccountsWithMfa| Federated account sign-in with MFA|SignUpOrSignIn| 8|
-|SocialAndLocalAccountsWithMfa| Federated account sign-up with MFA|SignUpOrSignIn |10|
+|SocialAndLocalAccountsWithMfa| Federated account sign up with MFA|SignUpOrSignIn |10|
To obtain the token issuance rate per second for a particular user journey:
Tokens/sec = 200/requests-consumed
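For example, based on the table above, a federated account sign-in on the SocialAndLocalAccounts policy consumes 4 requests, so that journey supports roughly 200/4 = 50 token issuances per second, while a federated account sign-up with MFA (10 requests) supports roughly 200/10 = 20 per second.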
## Calculate the token issuance rate of your Custom Policy
-You can craete your own Custom Policy to provide a unique authentication experience for your application. The number of requests consumed at the dynamic endpoint depends on which features a user traverses through your Custom Policy. The below table shows how many requests are consumed for each feature in a Custom Policy.
+You can create your own Custom Policy to provide a unique authentication experience for your application. The number of requests consumed at the dynamic endpoint depends on which features a user traverses through your Custom Policy. The following table shows how many requests are consumed for each feature in a Custom Policy.
|Feature |Requests consumed| |-|--|
You can optimize the token issuance rate by considering the following configurat
- Increasing access and refresh [token lifetimes](./configure-tokens.md). - Increasing the Azure AD B2C [web session lifetime](./session-behavior.md). - Enabling [Keep Me Signed In](./session-behavior.md#enable-keep-me-signed-in-kmsi).-- Caching the [OpenId Connect metadata](./openid-connect.md#validate-the-id-token) documents at your API's.
+- Caching the [OpenId Connect metadata](./openid-connect.md#validate-the-id-token) documents at your APIs.
- Enforcing conditional MFA using [Conditional Access](./conditional-access-identity-protection-overview.md). ## Azure AD B2C configuration limits
The following table lists the administrative configuration limits in the Azure A
|Number of scopes per application  |1000 | |Number of [custom attributes](user-profile-attributes.md#extension-attributes) per user <sup>1</sup> |100 | |Number of redirect URLs per application |100 |
-|Number of sign out URLs per application |1 |
+|Number of sign-out URLs per application |1 |
|String Limit per Attribute |250 Chars | |Number of B2C tenants per subscription |20 | |Levels of [inheritance](custom-policy-overview.md#inheritance-model) in custom policies |10 |
active-directory Accidental Deletions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/accidental-deletions.md
The feature lets you specify a deletion threshold, above which an admin
needs to explicitly choose to allow the deletions to be processed. > [!NOTE]
-> Accidental deletions are not supported for our Workday / SuccessFactors integrations. It is also not supported for changes in scoping (e.g. changing a scoping filter or changing from "sync all users and groups" to "sync assigned users and groups". Until the accidental deletions prevention feature is fully released, you will need to access the Azure portal using this URL: https://aka.ms/AccidentalDeletionsPreview
+> Accidental deletion prevention isn't supported for our Workday / SuccessFactors integrations. It also isn't supported for changes in scoping (for example, changing a scoping filter or changing from "sync all users and groups" to "sync assigned users and groups"). Until the accidental deletions prevention feature is fully released, you'll need to access the Azure portal using this URL: https://aka.ms/AccidentalDeletionsPreview
## Configure accidental deletion prevention
To enable accidental deletion prevention:
2. Select **Enterprise applications** and then select your app. 3. Select **Provisioning** and then on the provisioning page select **Edit provisioning**. 4. Under **Settings**, select the **Prevent accidental deletions** checkbox and specify a deletion
-threshold. Also, be sure the notification email address is completed. If the deletion threshold his met and email will be sent.
+threshold. Also, be sure the notification email address is completed. If the deletion threshold is met, an email will be sent.
5. Select **Save** to save the changes. When the deletion threshold is met, the job will go into quarantine and a notification email will be sent. The quarantined job can then be allowed or rejected. To learn more about quarantine behavior, see [Application provisioning in quarantine status](application-provisioning-quarantine-status.md). ## Known limitations There are two key limitations to be aware of that we're actively working to address:-- HR-driven provisioning from Workday and SuccessFactors do not support the accidental deletions feature. -- Changes to your provisioning configuration (e.g. changing scoping) is not supported by the accidental deletions feature.
+- HR-driven provisioning from Workday and SuccessFactors doesn't support the accidental deletions feature.
+- Changes to your provisioning configuration (e.g. changing scoping) aren't supported by the accidental deletions feature.
## Recovering from an accidental deletion
-If you encounter an accidental deletion you will see it on the provisioning status page. It will say **Provisioning has been quarantined. See quarantine details for more information.**.
+If you encounter an accidental deletion, you'll see it on the provisioning status page. It will say **Provisioning has been quarantined. See quarantine details for more information.**
You can click either **Allow deletes** or **View provisioning logs**.
The **Allow deletes** action will delete the objects that triggered the accident
1. Select **Allow deletes**. 2. Click **Yes** on the confirmation to allow the deletions.
-3. You will see confirmation that the deletions were accepted and the status will return to healthy with the next cycle.
+3. You'll see confirmation that the deletions were accepted and the status will return to healthy with the next cycle.
### Rejecting deletions
-If you do not want to allow the deletions, you need to do the following:
+If you don't want to allow the deletions, you need to do the following:
- Investigate the source of the deletions. You can use the provisioning logs for details. - Prevent the deletion by assigning the user / group to the app again, restoring the user / group, or updating your provisioning configuration.-- Once you've made the necessary changes to prevent the user / group from being deleted, restart provisioning. Please do not restart provisioning until you've made the necessary changes to prevent the users / groups from being deleted.
+- Once you've made the necessary changes to prevent the user / group from being deleted, restart provisioning. Don't restart provisioning until you've made those changes.
### Test deletion prevention You can test the feature by triggering disable / deletion events by setting the threshold to a low number, for example 3, and then changing scoping filters, un-assigning users, and deleting users from the directory (see common scenarios in next section).
-Let the provisioning job run (20–40 mins) and navigate back to the provisioning page. You will see the provisioning job in quarantine and can choose to allow the deletions or review the provisioning logs to understand why the deletions occurred.
+Let the provisioning job run (20–40 mins) and navigate back to the provisioning page. You'll see the provisioning job in quarantine and can choose to allow the deletions or review the provisioning logs to understand why the deletions occurred.
## Common de-provisioning scenarios to test - Delete a user / put them into the recycle bin.
application could include: unassigning the user from the application and soft /
evaluated for deletion count towards the deletion threshold. In addition to deletions, the same functionality also works for disables. ### What is the interval that the deletion threshold is evaluated on?
-It is evaluated each cycle. If the number of deletions does not exceed the threshold during a
+It is evaluated each cycle. If the number of deletions doesn't exceed the threshold during a
single cycle, the "circuit breaker" won't be triggered. If multiple cycles are needed to reach a steady state, the deletion threshold will be evaluated per cycle.
advisor Advisor Cost Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-cost-recommendations.md
Advisor considers resizing virtual machines when it's possible to fit the curren
- The last 7 days of utilization data are considered - Metrics are sampled every 30 seconds, aggregated to 1 min and then further aggregated to 30 mins (we take the average of max values while aggregating to 30 mins) - An appropriate SKU is determined based on the following criteria:
- - Performance of the workloads on the new SKU should not be impacted. This is achieved by:
- - For user-facing workloads: P95 of the CPU and Outbound Network utilization, and P100 of Memory utilization don't go above 80% on the new SKU
- - For non user-facing workloads:
- - P95 of CPU and Outbound Network utilization don't go above 40% on the recommended SKU
- - P100 of Memory utilization doesn't go above 60% on the recommended SKU
+ - Performance of the workloads on the new SKU should not be impacted.
+ - Target for user-facing workloads:
+ - P95 of CPU and Outbound Network utilization at 40% or lower on the recommended SKU
+ - P100 of Memory utilization at 60% or lower on the recommended SKU
+ - Target for non user-facing workloads:
+ - P95 of the CPU and Outbound Network utilization at 80% or lower on the new SKU
+ - P100 of Memory utilization at 80% or lower on the new SKU
- The new SKU has the same Accelerated Networking and Premium Storage capabilities - The new SKU is supported in the current region of the Virtual Machine with the recommendation - The new SKU is less expensive
application-gateway Rewrite Http Headers Url https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/rewrite-http-headers-url.md
For example, say you have the following header rewrite rule for the header `"Acc
Here, with only header rewrite configured, the WAF evaluation will be done on `"Accept" : "text/html"`. But when you configure URL rewrite or host header rewrite, then the WAF evaluation will be done on `"Accept" : "image/png"`. >[!NOTE]
-> URL rewrite operations are expected to cause a minor increase in the CPU utilization of your WAF Application Gateway. It is recommended that you monitor the [CPU utilization metric](high-traffic-support.md) for a brief period of time after enabling the URL rewrite rules on your WAF Application Gateway.
+> URL rewrite operations may cause a minor increase in the compute utilization of your WAF Application Gateway. In application gateway v1 deployments, it is recommended that you monitor the [CPU utilization metric](high-traffic-support.md) for a brief period of time after enabling the URL rewrite rules on your WAF Application Gateway.
### Common scenarios for header rewrite
automanage Repair Automanage Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automanage/repair-automanage-account.md
Last updated 11/05/2020-+ # Repair an Automanage Account
If you're using an ARM template or the Azure CLI, you'll need the Principal ID (
- Azure portal: Go to **Azure Active Directory** and search for your Automanage Account by name. Under **Enterprise Applications**, select the Automanage Account name when it appears. ### Azure portal+ 1. Under **Subscriptions**, go to the subscription that contains your automanaged VMs.
-1. Go to **Access control (IAM)**.
-1. Select **Add role assignments**.
-1. Select the **Contributor** role and enter the name of your Automanage Account.
-1. Select **Save**.
-1. Repeat steps 3 through 5, this time with the **Resource Policy Contributor** role.
+
+1. Select **Access control (IAM)**.
+
+1. Select **Add** > **Add role assignment** to open the **Add role assignment** page.
+
+1. Assign the following role. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
+
+ | Setting | Value |
+ | | |
+ | Role | Contributor |
+ | Assign access to | User, group, or service principal |
+ | Members | \<Name of your Automanage account> |
+
+ ![Screenshot showing Add role assignment page in Azure portal.](../../includes/role-based-access-control/media/add-role-assignment-page.png)
+
+1. Repeat steps 2 through 4, selecting the **Resource Policy Contributor** role.
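If you prefer scripting over the portal steps above, a minimal PowerShell sketch of the same two role assignments follows. It assumes the Az module is installed and you're signed in; the Principal ID and subscription ID placeholders are hypothetical values taken from the earlier steps.

```powershell
# Sketch only: assign the Contributor and Resource Policy Contributor roles
# to the Automanage Account's service principal at subscription scope.
$principalId    = "<automanage-account-principal-id>"   # hypothetical placeholder
$subscriptionId = "<subscription-id>"                   # hypothetical placeholder
$scope          = "/subscriptions/$subscriptionId"

foreach ($role in "Contributor", "Resource Policy Contributor") {
    New-AzRoleAssignment -ObjectId $principalId -RoleDefinitionName $role -Scope $scope
}
```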
### ARM template Run the following ARM template. You'll need the Principal ID of your Automanage Account. The steps to get it are at the start of this section. Enter the ID when you're prompted.
automation Automation Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-role-based-access-control.md
Last updated 09/10/2021 -+ #Customer intent: As an administrator, I want to understand permissions so that I use the least necessary set of permissions.
The following section shows you how to configure Azure RBAC on your Automation a
### Configure Azure RBAC using the Azure portal
-1. Log in to the [Azure portal](https://portal.azure.com/) and open your Automation account from the Automation Accounts page.
-2. Click on **Access control (IAM)** to open the Access control (IAM) page. You can use this page to add new users, groups, and applications to manage your Automation account and view existing roles that are configurable for the Automation account.
-3. Click the **Role assignments** tab.
+1. Sign in to the [Azure portal](https://portal.azure.com/) and open your Automation account from the **Automation Accounts** page.
- ![Access button](media/automation-role-based-access-control/automation-01-access-button.png)
+1. Select **Access control (IAM)** and select a role from the list of available roles. You can choose any of the available built-in roles that an Automation account supports or any custom role you might have defined. Assign the role to the user to whom you want to give permissions.
-#### Add a new user and assign a role
+ For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
-1. From the Access control (IAM) page, click **+ Add role assignment**. This action opens the Add role assignment page where you can add a user, group, or application, and assign a corresponding role.
-
-2. Select a role from the list of available roles. You can choose any of the available built-in roles that an Automation account supports or any custom role you may have defined.
-
-3. Type the name of the user that you want to give permissions to in the **Select** field. Choose the user from the list and click **Save**.
-
- ![Add users](media/automation-role-based-access-control/automation-04-add-users.png)
-
- Now you should see the user added to the Users page, with the selected role assigned.
-
- ![List users](media/automation-role-based-access-control/automation-05-list-users.png)
-
- You can also assign a role to the user from the Roles page.
+ > [!NOTE]
+ > You can only set role-based access control at the Automation account scope and not at any resource below the Automation account.
-4. Click **Roles** from the Access control (IAM) page to open the Roles page. You can view the name of the role and the number of users and groups assigned to that role.
+#### Remove role assignments from a user
- ![Assign role from users page](media/automation-role-based-access-control/automation-06-assign-role-from-users-blade.png)
+You can remove the access permission for a user who isn't managing the Automation account, or who no longer works for the organization. The following steps show how to remove the role assignments from a user. For detailed steps, see [Remove Azure role assignments](../../articles/role-based-access-control/role-assignments-remove.md):
- > [!NOTE]
- > You can only set role-based access control at the Automation account scope and not at any resource below the Automation account.
+1. Open **Access control (IAM)** at a scope, such as management group, subscription, resource group, or resource, where you want to remove access.
-#### Remove a user
+1. Select the **Role assignments** tab to view all the role assignments at this scope.
-You can remove the access permission for a user who isn't managing the Automation account, or who no longer works for the organization. Following are the steps to remove a user:
+1. In the list of role assignments, add a checkmark next to the user with the role assignment you want to remove.
-1. From the Access control (IAM) page, select the user to remove and click **Remove**.
-2. Click the **Remove** button in the assignment details pane.
-3. Click **Yes** to confirm removal.
+1. Select **Remove**.
![Remove users](media/automation-role-based-access-control/automation-08-remove-users.png)
automation Automation Runbook Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-runbook-types.md
Python 3 runbooks are supported in the following Azure global infrastructures:
* To use third-party libraries, you must [import the packages](python-packages.md) into the Automation account. * Using **Start-AutomationRunbook** cmdlet in PowerShell/PowerShell Workflow to start a Python 3 runbook (preview) doesn't work. You can use **Start-AzAutomationRunbook** cmdlet from Az.Automation module or **Start-AzureRmAutomationRunbook** cmdlet from AzureRm.Automation module to work around this limitation.  * Azure Automation doesn't support **sys.stderr**.
+* The Python **automationassets** package is not available on pypi.org, so it's not available for import onto a Windows machine.
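As a hedged illustration of that workaround, the following sketch starts a Python 3 runbook with the Az.Automation module; the account, resource group, runbook, and parameter names are hypothetical.

```powershell
# Sketch only: start a Python 3 runbook by using Start-AzAutomationRunbook
# instead of the unsupported Start-AutomationRunbook internal cmdlet.
Start-AzAutomationRunbook `
    -AutomationAccountName "ContosoAutomation" `
    -ResourceGroupName "ContosoRG" `
    -Name "MyPython3Runbook" `
    -Parameters @{ "inputValue" = "test" }
```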
### Multiple Python versions
For cloud jobs, Python 3 jobs sometimes fail with an exception message `invalid
* To learn about PowerShell runbooks, see [Tutorial: Create a PowerShell runbook](./learn/powershell-runbook-managed-identity.md). * To learn about PowerShell Workflow runbooks, see [Tutorial: Create a PowerShell Workflow runbook](learn/automation-tutorial-runbook-textual.md). * To learn about graphical runbooks, see [Tutorial: Create a graphical runbook](./learn/powershell-runbook-managed-identity.md).
-* To learn about Python runbooks, see [Tutorial: Create a Python runbook](./learn/automation-tutorial-runbook-textual-python-3.md).
+* To learn about Python runbooks, see [Tutorial: Create a Python runbook](./learn/automation-tutorial-runbook-textual-python-3.md).
azure-functions Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/start-stop-vms/deploy.md
Last updated 06/25/2021
+ms.custom: subject-rbac-steps
# Deploy Start/Stop VMs v2 (preview)
To simplify management and removal, we recommend you deploy Start/Stop VMs v2 (p
After the Start/Stop deployment completes, perform the following steps to enable Start/Stop VMs v2 (preview) to take action across multiple subscriptions.
-1. Copy the value for the Azure Function App Name that you specified during the deployment.
+1. Copy the value for the Azure Function App name that you specified during the deployment.
-1. In the portal, navigate to your secondary subscription. Select the subscription, and then select **Access Control (IAM)**
+1. In the Azure portal, navigate to your secondary subscription.
-1. Select **Add** and then select **Add role assignment**.
+1. Select **Access control (IAM)**.
-1. Select the **Contributor** role from the **Role** drop down list.
+1. Select **Add** > **Add role assignment** to open the **Add role assignment** page.
-1. Enter the Azure Function Application Name in the **Select** field. Select the function name in the results.
+1. Assign the following role. For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
-1. Select **Save** to commit your changes.
+ | Setting | Value |
+ | | |
+ | Role | Contributor |
+ | Assign access to | User, group, or service principal |
+ | Members | \<Your Azure Function App name> |
+
+ ![Screenshot showing Add role assignment page in Azure portal.](../../../includes/role-based-access-control/media/add-role-assignment-page.png)
## Configure schedules overview
azure-monitor Agents Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agents-overview.md
Last updated 03/16/2021
Virtual machines and other compute resources require an agent to collect monitoring data required to measure the performance and availability of their guest operating system and workloads. This article describes the agents used by Azure Monitor and helps you determine which you need to meet the requirements for your particular environment. > [!NOTE]
-> Azure Monitor recently launched a new agent, the [Azure Monitor agent](./azure-monitor-agent-overview.md), that provides all capabilities necessary to collect guest operating system monitoring data. **Use this new agent if you don't require [these current limitations](./azure-monitor-agent-overview.md#current-limitations)**, as it consolidates the features of all the legacy agents listed below and provides additional benefits. If you do require the limitations today, you may continue using the other legacy agents listed below until **August 2024**. [Learn more](./azure-monitor-agent-overview.md)
+> Azure Monitor recently launched a new agent, the [Azure Monitor agent](./azure-monitor-agent-overview.md), that provides all capabilities necessary to collect guest operating system monitoring data. **Use this new agent if you are not bound by [these limitations](./azure-monitor-agent-overview.md#current-limitations)**, as it consolidates the features of all the legacy agents listed below and provides additional benefits. If you're affected by these limitations today, you may continue using the other legacy agents listed below until **August 2024**. [Learn more](./azure-monitor-agent-overview.md)
## Summary of agents
azure-monitor Data Collection Text Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-text-log.md
Title: Collect text logs with Azure Monitor agent (preview)
+ Title: Collect text and IIS logs with Azure Monitor agent (preview)
description: Configure collection of file-based text logs using a data collection rule on virtual machines with the Azure Monitor agent. Previously updated : 04/08/2022 Last updated : 04/15/2022
-# Collect text logs with Azure Monitor agent (preview)
-This tutorial shows you how to configure the collection of file-based text logs with the [Azure Monitor agent](azure-monitor-agent-overview.md) and sending the collected data to a custom table in a Log Analytics workspace. This feature uses a [data collection rule](../essentials/data-collection-rule-overview.md) that you can use to define the structure of the log file and its target table.
+# Collect text and IIS logs with Azure Monitor agent (preview)
+This article describes how to configure the collection of file-based text logs, including logs generated by IIS on Windows computers, with the [Azure Monitor agent](azure-monitor-agent-overview.md). Many applications log information to text files instead of standard logging services such as Windows Event log or Syslog.
> [!NOTE] > This feature is currently in public preview and isn't completely implemented in the Azure portal. This article uses Azure Resource Manager templates for steps that can't yet be performed with the portal.
-In this tutorial, you learn to:
-
-> [!div class="checklist"]
-> * Create a custom table in a Log Analytics workspace.
-> * Create a data collection endpoint to receive data from an agent.
-> * Create a data collection rule that collects data from both a custom text log file.
-> * Create an association to apply the data collection rule to agents.
## Prerequisites
-To complete this tutorial, you need the following:
+To complete this procedure, you need the following:
- Log Analytics workspace where you have at least [contributor rights](../logs/manage-access.md#manage-access-using-azure-permissions). - [Permissions to create Data Collection Rule objects](/azure/azure-monitor/essentials/data-collection-rule-overview#permissions) in the workspace. - An agent with a supported log file as described in the next section. ## Log files supported
-The log file must meet the following criteria to be collected by this feature:
+IIS logs must be in W3C format. Other log files must meet the following criteria to be collected:
- The log file must be stored on a local drive of a virtual machine, virtual machine scale set, or Arc enabled server with the Azure Monitor installed. - Each entry in the log file must be delineated with an [ISO 8601 formatted](https://www.iso.org/standard/40874.html) time stamp or an end of line.
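For example, an entry such as `2022-04-15T12:00:00Z Connection attempt to 10.0.0.4 failed` (a hypothetical sample line) satisfies this requirement because it begins with an ISO 8601 time stamp and ends at the end of the line.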
The log file must meet the following criteria to be collected by this feature:
## Steps to collect text logs The steps to configure log collection are as follows. The detailed steps for each are provided in the sections below:
-1. Create a new table in your workspace to receive the collected data.
+1. Create a new table in your workspace to receive the collected data. This step isn't required for IIS logs.
2. Create a data collection endpoint for the Azure Monitor agent to connect. 3. Create a data collection rule to define the structure of the log file and destination of the collected data. 4. Create association between the data collection rule and the agent collecting the log file.
The steps to configure log collection are as follows. The detailed steps for eac
## Create new table in Log Analytics workspace The custom table must be created before you can send data to it. When you create the table, you provide its name and a definition for each of its columns.
-Use the **Tables - Update** API to create the table with the PowerShell code below. This code creates a table called *MyTable_CL* with two columns. You can modify this schema to collect a different table.
+>[!NOTE]
+> This step isn't required to collect an IIS log. The table [W3CIISLog](/azure/azure-monitor/reference/tables/w3ciislog) will be used for IIS logs.
+
+Use the **Tables - Update** API to create the table with the PowerShell code below. This code creates a table called *MyTable_CL* with two columns. Modify this schema to collect a different table.
> [!IMPORTANT] > Custom tables must use a suffix of *_CL*.
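A minimal sketch of such a call follows, assuming the Az PowerShell module and the **Tables - Update** REST path; the subscription, resource group, and workspace placeholders are hypothetical, and the column names shown are illustrative examples of the two-column schema described above.

```powershell
# Sketch only: create MyTable_CL with two columns (illustrative names).
$tableParams = @'
{
  "properties": {
    "schema": {
      "name": "MyTable_CL",
      "columns": [
        { "name": "TimeGenerated", "type": "DateTime" },
        { "name": "RawData", "type": "String" }
      ]
    }
  }
}
'@

# Requires Connect-AzAccount; substitute your own IDs for the placeholders.
Invoke-AzRestMethod `
    -Path "/subscriptions/<subscription-id>/resourcegroups/<resource-group>/providers/microsoft.operationalinsights/workspaces/<workspace>/tables/MyTable_CL?api-version=2021-12-01-preview" `
    -Method PUT `
    -Payload $tableParams
```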
A [data collection endpoint (DCE)](../essentials/data-collection-endpoint-overvi
:::image type="content" source="../logs/media/tutorial-ingestion-time-transformations-api/edit-template.png" lightbox="../logs/media/tutorial-ingestion-time-transformations-api/edit-template.png" alt-text="Screenshot that shows portal blade to edit Resource Manager template."::: - ```json { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
A [data collection endpoint (DCE)](../essentials/data-collection-endpoint-overvi
} ``` + 4. On the **Custom deployment** screen, specify a **Subscription** and **Resource group** to store the data collection rule and then provide values a **Name** for the data collection endpoint. The **Location** should be the same location as the workspace. The **Region** will already be populated and is used for the location of the data collection endpoint. :::image type="content" source="../logs/media/tutorial-ingestion-time-transformations-api/custom-deployment-values.png" lightbox="../logs/media/tutorial-ingestion-time-transformations-api/custom-deployment-values.png" alt-text="Screenshot that shows portal blade to edit custom deployment values for data collection endpoint.":::
The [data collection rule (DCR)](../essentials/data-collection-rule-overview.md)
:::image type="content" source="../logs/media/tutorial-ingestion-time-transformations-api/build-custom-template.png" lightbox="../logs/media/tutorial-ingestion-time-transformations-api/build-custom-template.png" alt-text="Screenshot that shows portal blade to build template in the editor.":::
-3. Paste the Resource Manager template below into the editor and then change the following values:
-
- You may choose to modify the following details in the DCR defined in this template:
+3. Paste one of the Resource Manager templates below into the editor and then change the following values:
- `streamDeclarations`: Defines the columns of the incoming data. This must match the structure of the log file. - `filePatterns`: Specifies the location and file pattern of the log files to collect. This defines a separate pattern for Windows and Linux agents.
- - `transformKql`: Specifies a [transformation](../logs/../essentials/data-collection-rule-transformations.md) to apply to the incoming data before it's sent to the workspace. Since data collection rules for Azure Monitor agent don't yet support transformations, this value will always be `source`.
+ - `transformKql`: Specifies a [transformation](../logs/../essentials/data-collection-rule-transformations.md) to apply to the incoming data before it's sent to the workspace. Data collection rules for Azure Monitor agent don't yet support transformations, so this value should currently be `source`.
4. Click **Save**. :::image type="content" source="../logs/media/tutorial-ingestion-time-transformations-api/edit-template.png" lightbox="../logs/media/tutorial-ingestion-time-transformations-api/edit-template.png" alt-text="Screenshot that shows portal blade to edit Resource Manager template.":::
+ **Data collection rule for text log**
```json {
The [data collection rule (DCR)](../essentials/data-collection-rule-overview.md)
"name": "[parameters('dataCollectionRuleName')]", "location": "[parameters('location')]", "apiVersion": "2021-09-01-preview",
- "properties": {
+ "properties": {
"dataCollectionEndpointId": "[parameters('endpointResourceId')]", "streamDeclarations": { "Custom-MyLogFileFormat": {
The [data collection rule (DCR)](../essentials/data-collection-rule-overview.md)
}, "name": "myLogFileFormat-Linux" }- ] }, "destinations": {
The [data collection rule (DCR)](../essentials/data-collection-rule-overview.md)
} ```
+ **Data collection rule for IIS log**
+
+ ```json
+ {
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "dataCollectionRuleName": {
+ "type": "string",
+ "metadata": {
+ "description": "Specifies the name of the Data Collection Rule to create."
+ }
+ },
+ "location": {
+ "type": "string",
+ "defaultValue": "westus2",
+ "allowedValues": [
+ "westus2",
+ "eastus2",
+ "eastus2euap"
+ ],
+ "metadata": {
+ "description": "Specifies the location in which to create the Data Collection Rule."
+ }
+ },
+ "workspaceName": {
+ "type": "string",
+ "metadata": {
+ "description": "Name of the Log Analytics workspace to use."
+ }
+ },
+ "workspaceResourceId": {
+ "type": "string",
+ "metadata": {
+ "description": "Specifies the Azure resource ID of the Log Analytics workspace to use."
+ }
+ },
+ "endpointResourceId": {
+ "type": "string",
+ "metadata": {
+ "description": "Specifies the Azure resource ID of the Data Collection Endpoint to use."
+ }
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Insights/dataCollectionRules",
+ "name": "[parameters('dataCollectionRuleName')]",
+ "location": "[parameters('location')]",
+ "apiVersion": "2021-09-01-preview",
+ "properties": {
+ "dataCollectionEndpointId": "[parameters('endpointResourceId')]",
+ "dataSources": {
+ "iisLogs": [
+ {
+ "streams": [
+ "Microsoft-W3CIISLog"
+ ],
+ "logDirectories": [
+ "C:\\inetpub\\logs\\LogFiles\\*.log"
+ ],
+ "name": "myIisLogsDataSource"
+ }
+ ]
+ },
+ "destinations": {
+ "logAnalytics": [
+ {
+ "workspaceResourceId": "[parameters('workspaceResourceId')]",
+ "name": "[parameters('workspaceName')]"
+ }
+ ]
+ },
+ "dataFlows": [
+ {
+ "streams": [
+ "Microsoft-W3CIISLog"
+ ],
+ "destinations": [
+ "[parameters('workspaceName')]"
+ ],
+ "transformKql": "source"
+ }
+ ]
+ }
+ }
+ ],
+ "outputs": {
+ "dataCollectionRuleId": {
+ "type": "string",
+ "value": "[resourceId('Microsoft.Insights/dataCollectionRules', parameters('dataCollectionRuleName'))]"
+ }
+ }
+ }
+ ```
+ 5. On the **Custom deployment** screen, specify a **Subscription** and **Resource group** to store the data collection rule and then provide values defined in the template. This includes a **Name** for the data collection rule and the **Workspace Resource ID** and **Endpoint Resource ID**. The **Location** should be the same location as the workspace. The **Region** will already be populated and is used for the location of the data collection rule. :::image type="content" source="media/data-collection-text-log/custom-deployment-values.png" lightbox="media/data-collection-text-log/custom-deployment-values.png" alt-text="Screenshot that shows portal blade to edit custom deployment values for data collection rule.":::
azure-monitor Alerts Metric Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-metric-overview.md
For metric alerts, typically you will get notified in under 5 minutes if you set
You can find the full list of supported resource types in this [article](./alerts-metric-near-real-time.md#metrics-and-dimensions-supported).
+## Pricing model
+
+Each Metric Alert rule is billed based on the number of time series it monitors. Prices for Metric Alert rules are available on the [Azure Monitor pricing page](https://azure.microsoft.com/pricing/details/monitor/).
+ ## Next steps - [Learn how to create, view, and manage metric alerts in Azure](../alerts/alerts-metric.md)
azure-monitor Alerts Unified Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-unified-log.md
Each Log Alert rule is billed based on the interval at which the log query is evaluated
Prices for Log Alert rules are available on the [Azure Monitor pricing page](https://azure.microsoft.com/pricing/details/monitor/).
+### Calculating the price for a Log Alert rule without dimensions
+
+The price of an alert rule that queries 1 resource's events every 15 minutes can be calculated as:
+
+Total monthly price = 1 resource * 1 log alert rule * price per 15-minute interval log alert rule per month.
+
+### Calculating the price for a Log Alert rule with dimensions
+
+The price of an alert rule that monitors 10 VM resources at 1-minute frequency, using resource-centric log monitoring, can be calculated as the price of the alert rule plus the price of the additional dimensions (time series). For example:
+
+Total monthly price = price per 1-minute log alert rule per month + (10 time series - 1 included free time series) * price per 1-minute interval monitored per month.
+
+Pricing of at-scale log monitoring applies from Scheduled Query Rules API version 2021-02-01.
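As a hedged illustration with purely hypothetical prices: if a 1-minute log alert rule were $1.50 per month and each additional time series $0.05 per month, the 10-VM example above would cost 1.50 + (10 - 1) * 0.05 = $1.95 per month. Check the pricing page for the actual rates.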
+ ## View log alerts usage on your Azure bill Log Alerts are listed under resource provider `microsoft.insights/scheduledqueryrules` with:
azure-monitor Convert Classic Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/convert-classic-resource.md
Once the migration is complete, you can use [diagnostic settings](../essentials/
> [!NOTE] > - If you currently store Application Insights data for longer than the default 90 days and want to retain this larger retention period, you may need to adjust your workspace retention settings. > - If you've selected data retention greater than 90 days on data ingested into the Classic Application Insights resource prior to migration, data retention will continue to be billed through that Application Insights resource until that data exceeds the retention period.
+ > - If the retention setting for your Application Insights instance under **Configure** > **Usage and estimated costs** > **Data Retention** is enabled, then use that setting to control the retention days for the telemetry data still saved in your classic resource's storage.
- Understand [Workspace-based Application Insights](../logs/cost-logs.md#application-insights-billing) usage and costs.
azure-monitor Custom Fields https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/custom-fields.md
The first step is to identify the records that will get the custom field. You s
1. Go to **Logs** and use a [query to retrieve the records](./log-query-overview.md) that will have the custom field. 2. Select a record that Log Analytics will use to act as a model for extracting data to populate the custom field. You will identify the data that you want to extract from this record, and Log Analytics will use this information to determine the logic to populate the custom field for all similar records.
-3. Expand the record properties, click the ellipsis to the left of the top property of the record, and select **Extract fields from**.
+3. Right-click on the record, and select **Extract fields from**.
4. The **Field Extraction Wizard** is opened, and the record you selected is displayed in the **Main Example** column. The custom field will be defined for those records with the same values in the properties that are selected. 5. If the selection is not exactly what you want, select additional fields to narrow the criteria. In order to change the field values for the criteria, you must cancel and select a different record matching the criteria you want.
azure-monitor Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/partners.md
InfluxData is on a mission to help developers and organizations, such as IBM, Vi
[Learn more about Azure Monitor integration with InfluxData Telegraf](essentials/collect-custom-metrics-linux-telegraf.md).
-## Logic Monitor
+## LogicMonitor
-![Logic Monitor logo.](./media/partners/logicmonitor.png)
+![LogicMonitor logo.](./media/partners/logicmonitor.png)
LogicMonitor is a SaaS-based performance monitoring platform for complex IT infrastructure. With coverage for thousands of technologies, LogicMonitor provides granular visibility into infrastructure and application performance. LM Cloud's comprehensive Azure monitoring enables users to correlate the performance of Azure cloud, on-premises, and hybrid cloud resources, all from a single platform. Automated resource discovery, built-in monitoring templates, preconfigured alert thresholds, and customizable dashboards combine to give IT the speed, flexibility, and visibility required to succeed.
-For more information, see the [Logic Monitor documentation](https://www.logicmonitor.com/lp/azure-monitoring/).
+For more information, see the [LogicMonitor documentation](https://www.logicmonitor.com/lp/azure-monitoring/).
## LogRhythm
If you use Azure Monitor to route monitoring data to an event hub, you can easil
- [Learn more about Azure Monitor](overview.md) - [Access metrics by using the REST API](essentials/rest-api-walkthrough.md) - [Stream the Activity Log to a non-Microsoft service](essentials/activity-log.md#legacy-collection-methods)-- [Stream resource logs to a non-Microsoft service](essentials/resource-logs.md#send-to-azure-event-hubs)
+- [Stream resource logs to a non-Microsoft service](essentials/resource-logs.md#send-to-azure-event-hubs)
azure-sql Audit Write Storage Account Behind Vnet Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/audit-write-storage-account-behind-vnet-firewall.md
Last updated "03/23/2022" -+ # Write audit to a storage account behind VNet and firewall [!INCLUDE[appliesto-sqldb-asa](../includes/appliesto-sqldb-asa.md)]
To configure SQL Audit to write events to a storage account behind a VNet or Fir
} ```
-2. Open [Azure portal](https://portal.azure.com). Navigate to your storage account. Locate **Access Control (IAM)**, and click **Add role assignment**. Assign **Storage Blob Data Contributor** Azure role to the server hosting the database that you registered with Azure Active Directory (Azure AD) as in the previous step.
+1. Assign the Storage Blob Data Contributor role to the server hosting the database that you registered with Azure Active Directory (Azure AD) in the previous step.
+
+ For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
> [!NOTE] > Only members with Owner privilege can perform this step. For various Azure built-in roles, refer to [Azure built-in roles](../../role-based-access-control/built-in-roles.md).
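If you prefer to script this assignment, a minimal sketch follows; it assumes the Az module, and the object ID, subscription, resource group, and storage account names are hypothetical placeholders.

```powershell
# Sketch only: grant the server's Azure AD identity Storage Blob Data Contributor
# on the storage account that receives the audit logs.
New-AzRoleAssignment `
    -ObjectId "<server-managed-identity-object-id>" `
    -RoleDefinitionName "Storage Blob Data Contributor" `
    -Scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>"
```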
-3. Configure the [server's blob auditing policy](/rest/api/sql/server%20auditing%20settings/createorupdate), without specifying a *storageAccountAccessKey*:
+1. Configure the [server's blob auditing policy](/rest/api/sql/server%20auditing%20settings/createorupdate), without specifying a *storageAccountAccessKey*:
Sample request
cloud-services Cloud Services Guestos Msrc Releases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-msrc-releases.md
na Previously updated : 3/19/2022 Last updated : 4/16/2022 # Azure Guest OS The following tables show the Microsoft Security Response Center (MSRC) updates applied to the Azure Guest OS. Search this article to determine if a particular update applies to the Guest OS you are using. Updates always carry forward for the particular [family][family-explain] they were introduced in.
+## April 2022 Guest OS
+
+>[!NOTE]
+>The April Guest OS is currently being rolled out to Cloud Service VMs that are configured for automatic updates. When the rollout is complete, this version will be made available for manual updates through the Azure portal and configuration files. The following patches are included in the April Guest OS. This list is subject to change.
+
+| Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced |
+| | | | | |
+| Rel 22-04 | [5012647] | Latest Cumulative Update(LCU) | 6.43 | Apr 12, 2022 |
+| Rel 22-04 | [5011486] | IE Cumulative Updates | 2.122, 3.109, 4.102 | Apr 12, 2022 |
+| Rel 22-04 | [5012604] | Latest Cumulative Update(LCU) | 7.11 | Apr 12, 2022 |
+| Rel 22-04 | [5012596] | Latest Cumulative Update(LCU) | 5.67 | Apr 12, 2022 |
+| Rel 22-04 | [5012138] | .NET Framework 3.5 Security and Quality Rollup | 2.122 | Apr 12, 2022 |
+| Rel 22-04 | [5012141] | .NET Framework 4.5.2 Security and Quality Rollup | 2.122 | Apr 12, 2022 |
+| Rel 22-04 | [5012139] | .NET Framework 3.5 Security and Quality Rollup | 4.102 | Apr 12, 2022 |
+| Rel 22-04 | [5012142] | .NET Framework 4.5.2 Security and Quality Rollup | 4.102 | Apr 12, 2022 |
+| Rel 22-04 | [5012136] | .NET Framework 3.5 Security and Quality Rollup | 3.109 | Apr 12, 2022 |
+| Rel 22-04 | [5012140] | .NET Framework 4.5.2 Security and Quality Rollup | 3.109 | Apr 12, 2022 |
+| Rel 22-04 | [5012128] | .NET Framework 3.5 and 4.7.2 Cumulative Update | 6.43 | Apr 12, 2022 |
+| Rel 22-04 | [5012123] | .NET Framework 4.8 Security and Quality Rollup | 7.11 | Apr 12, 2022 |
+| Rel 22-04 | [5012626] | Monthly Rollup | 2.122 | Apr 12, 2022 |
+| Rel 22-04 | [5012650] | Monthly Rollup | 3.109 | Apr 12, 2022 |
+| Rel 22-04 | [5012670] | Monthly Rollup | 4.102 | Apr 12, 2022 |
+| Rel 22-04 | [5013270] | Servicing Stack update | 3.109 | Apr 12, 2022 |
+| Rel 22-04 | [5012672] | Servicing Stack update | 4.102 | Apr 12, 2022 |
+| Rel 22-04 | [4578013] | Standalone Security Update | 4.102 | Aug 19, 2020 |
+| Rel 22-04 | [5011570] | Servicing Stack update | 5.67 | Mar 8, 2021 |
+| Rel 22-04 | [5011649] | Servicing Stack update | 2.122 | Mar 8, 2022 |
+| Rel 22-04 | [4494175] | Microcode | 5.67 | Sep 1, 2020 |
+| Rel 22-04 | [4494174] | Microcode | 6.43 | Sep 1, 2020 |
+
+[5012647]: https://support.microsoft.com/kb/5012647
+[5011486]: https://support.microsoft.com/kb/5011486
+[5012604]: https://support.microsoft.com/kb/5012604
+[5012596]: https://support.microsoft.com/kb/5012596
+[5012138]: https://support.microsoft.com/kb/5012138
+[5012141]: https://support.microsoft.com/kb/5012141
+[5012139]: https://support.microsoft.com/kb/5012139
+[5012142]: https://support.microsoft.com/kb/5012142
+[5012136]: https://support.microsoft.com/kb/5012136
+[5012140]: https://support.microsoft.com/kb/5012140
+[5012128]: https://support.microsoft.com/kb/5012128
+[5012123]: https://support.microsoft.com/kb/5012123
+[5012626]: https://support.microsoft.com/kb/5012626
+[5012650]: https://support.microsoft.com/kb/5012650
+[5012670]: https://support.microsoft.com/kb/5012670
+[5013270]: https://support.microsoft.com/kb/5013270
+[5012672]: https://support.microsoft.com/kb/5012672
+[4578013]: https://support.microsoft.com/kb/4578013
+[5011570]: https://support.microsoft.com/kb/5011570
+[5011649]: https://support.microsoft.com/kb/5011649
+[4494175]: https://support.microsoft.com/kb/4494175
+[4494174]: https://support.microsoft.com/kb/4494174
+ ## March 2022 Guest OS | Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced |
connectors Built In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/built-in.md
ms.suite: integration Previously updated : 11/02/2021 Last updated : 04/15/2022 # Built-in triggers and actions in Azure Logic Apps
Azure Logic Apps provides the following built-in triggers and actions:
[**Response**][http-request-doc]: Respond to a request received by the **When a HTTP request is received** trigger in the same workflow. :::column-end::: :::column:::
- [![Batch icon][batch-icon]][batch-doc]<br>(*Consumption logic app only*)
+ [![Batch icon][batch-icon]][batch-doc]
\ \
- [**Batch**][batch-doc]
+ [**Batch**][batch-doc]<br>(*Consumption logic app only*)
\ \ [**Batch messages**][batch-doc]: Trigger a workflow that processes messages in batches.
Azure Logic Apps provides the following built-in triggers and actions:
:::column-end::: :::row-end::: :::row:::
+ :::column:::
+ [![FTP icon][ftp-icon]][ftp-doc]
+ \
+ \
+ [**FTP**][ftp-doc]<br>(*Standard logic app only*)
+ \
+ \
+ Connect to FTP or FTPS servers you can access from the internet so that you can work with your files and folders.
+ :::column-end:::
:::column::: [![SFTP-SSH icon][sftp-ssh-icon]][sftp-ssh-doc] \
Azure Logic Apps provides the following built-in triggers and actions:
:::column-end::: :::column::: :::column-end:::
- :::column:::
- :::column-end:::
- :::column:::
- :::column-end:::
:::row-end::: ## Service-based built-in trigger and actions
Azure Logic Apps provides the following built-in actions for the following servi
[**Azure Blob**][azure-blob-storage-doc]<br>(*Standard logic app only*) \ \
- Connect to your Azure Storage account so that you can create and manage blob content.
+ Connect to your Azure Blob Storage account so you can create and manage blob content.
:::column-end::: :::column::: [![Azure Cosmos DB icon][azure-cosmos-db-icon]][azure-cosmos-db-doc]
Azure Logic Apps provides the following built-in actions for the following servi
\ Manage asynchronous messages, queues, sessions, topics, and topic subscriptions. :::column-end:::
+ :::column:::
+ ![Azure Table Storage icon][azure-table-storage-icon]
+ \
+ \
+ **Azure Table Storage**<br>(*Standard logic app only*)
+ \
+ \
+ Connect to your Azure Table Storage account so you can create and manage tables.
+ :::column-end:::
:::column::: [![IBM DB2 icon][ibm-db2-icon]][ibm-db2-doc] \
Azure Logic Apps provides the following built-in actions for the following servi
\ Connect to IBM DB2 in the cloud or on-premises. Update a row, get a table, and more. :::column-end::: :::column::: [![Azure Event Hubs icon][azure-event-hubs-icon]][azure-event-hubs-doc] \
Azure Logic Apps provides the following built-in actions for the following servi
\ Connect to your SQL Server on premises or an Azure SQL Database in the cloud so that you can manage records, run stored procedures, or perform queries. <p>**Note**: Single-tenant Azure Logic Apps provides both SQL built-in and managed connector operations, while multi-tenant Azure Logic Apps provides only managed connector operations. <p>For more information, review [Single-tenant versus multi-tenant and integration service environment for Azure Logic Apps](../logic-apps/single-tenant-overview-compare.md). :::column-end:::
- :::column:::
- :::column-end:::
:::row-end::: ## Run code from workflows
Azure Logic Apps provides the following built-in actions, which either require a
[azure-functions-icon]: ./media/apis-list/azure-functions.png [azure-logic-apps-icon]: ./media/apis-list/azure-logic-apps.png [azure-service-bus-icon]: ./media/apis-list/azure-service-bus.png
+[azure-table-storage-icon]: ./media/apis-list/azure-table-storage.png
[batch-icon]: ./media/apis-list/batch.png [condition-icon]: ./media/apis-list/condition.png [data-operations-icon]: ./media/apis-list/data-operations.png [date-time-icon]: ./media/apis-list/date-time.png [for-each-icon]: ./media/apis-list/for-each-loop.png
+[ftp-icon]: ./media/apis-list/ftp.png
[http-icon]: ./media/apis-list/http.png [http-request-icon]: ./media/apis-list/request.png [http-response-icon]: ./media/apis-list/response.png
Azure Logic Apps provides the following built-in actions, which either require a
[condition-doc]: ../logic-apps/logic-apps-control-flow-conditional-statement.md "Evaluate a condition and run different actions based on whether the condition is true or false" [data-operations-doc]: ../logic-apps/logic-apps-perform-data-operations.md "Perform data operations such as filtering arrays or creating CSV and HTML tables" [for-each-doc]: ../logic-apps/logic-apps-control-flow-loops.md#foreach-loop "Perform the same actions on every item in an array"
+[ftp-doc]: ./connectors-create-api-ftp.md "Connect to an FTP or FTPS server for FTP tasks, like uploading, getting, deleting files, and more"
[http-doc]: ./connectors-native-http.md "Call HTTP or HTTPS endpoints from your logic apps" [http-request-doc]: ./connectors-native-reqres.md "Receive HTTP requests in your logic apps" [http-response-doc]: ./connectors-native-reqres.md "Respond to HTTP requests from your logic apps"
cosmos-db Certificate Based Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/certificate-based-authentication.md
Last updated 06/11/2019 -+
The above command results in output similar to the screenshot below:
1. Sign into the [Azure portal](https://portal.azure.com/).
-1. Navigate to your Azure Cosmos account, open the **Access control (IAM)** blade.
+1. Navigate to your Azure Cosmos account.
-1. Select **Add** and **Add role assignment**. Add the sampleApp you created in the previous step with **Contributor** role as shown in the following screenshot:
+1. Assign the Contributor role to the sample app you created in the previous section.
- :::image type="content" source="./media/certificate-based-authentication/configure-cosmos-account-with-identify.png" alt-text="Configure Azure Cosmos account to use the new identity":::
-
-1. Select **Save** after you fill out the form
+ For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
## Register your certificate with Azure AD
defender-for-iot Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/overview.md
Microsoft Defender for IoT is a unified security solution for identifying IoT an
**For end-user organizations**, Microsoft Defender for IoT provides agentless, network-layer monitoring that integrates smoothly with industrial equipment and SOC tools. You can deploy Microsoft Defender for IoT in Azure-connected and hybrid environments or completely on-premises.
-**For IoT device builders**, Microsoft Defender for IoT also offers a lightweight, micro-agent that supports standard IoT operating systems, such as Linux and RTOS. The Microsoft Defender device builder agent helps you ensure that security is built into your IoT/OT projects, from the cloud. For more information, see [Microsoft Defender for IoT for device builders documentation](/device-builders/index.md).
+**For IoT device builders**, Microsoft Defender for IoT also offers a lightweight, micro-agent that supports standard IoT operating systems, such as Linux and RTOS. The Microsoft Defender device builder agent helps you ensure that security is built into your IoT/OT projects, from the cloud. For more information, see [Microsoft Defender for IoT for device builders documentation](/azure/defender-for-iot/device-builders/overview).
## Agentless device monitoring
healthcare-apis Bulk Importing Fhir Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/bulk-importing-fhir-data.md
- Title: Bulk import data into the FHIR service in Azure Health Data Services
-description: This article describes how to bulk import data to the FHIR service in Azure Health Data Services.
---- Previously updated : 03/01/2022---
-# Bulk importing data to the FHIR service in Azure Health Data Services
-
-In this article, you'll learn how to bulk import data into the FHIR service in Azure Health Data Services. The tools described in this article are freely available at GitHub and can be modified to meet your business needs. Technical support for the tools is available through GitHub and the open-source community.
-
-While tools such as [Postman](../fhir/use-postman.md), [cURL](../fhir/using-curl.md), and [REST Client](../fhir/using-rest-client.md) to ingest data to the FHIR service, they're not typically used to bulk load FHIR data.
-
->[!Note]
->The [bulk import](https://github.com/microsoft/fhir-server/blob/main/docs/BulkImport.md) feature is currently available in the open source FHIR server. It's not available in Azure Health Data Services yet.
-
-## Azure Function FHIR Importer
-
-The [FHIR Importer](https://github.com/microsoft/healthcare-apis-samples/tree/main/src/FhirImporter) is an Azure Function or microservice, written in C#, that imports FHIR bundles in JSON or NDJSON formats as soon as they're uploaded to an Azure storage container.
-
-- Behind the scenes, the Azure Storage trigger starts the Azure Function when a new document is detected, and the document is the input to the function.
-- It processes multiple documents in parallel and provides basic retry logic using [HTTP call retries](/dotnet/architecture/microservices/implement-resilient-applications/implement-http-call-retries-exponential-backoff-polly) when the FHIR service is too busy to handle the requests.
-
-The FHIR Importer works for the FHIR service in Azure Health Data Services and Azure API for FHIR.
-
->[!Note]
->The retry logic of Importer does not handle errors after retries have been attempted. It is highly recommended that you revise the retry logic for production use. Also, informational and error logs may be added or removed.
-
-To use the tool, follow the prerequisite steps below:
-
-1. [Deploy a FHIR service](fhir-portal-quickstart.md) or use an existing service instance.
-1. [Register a confidential client application](../register-application-cli-rest.md) with a client secret.
-1. [Grant permissions](../configure-azure-rbac-using-scripts.md) to the client application.
-1. [Deploy FHIR Importer](https://github.com/microsoft/healthcare-apis-samples/tree/main/src/FhirImporter) using the CLI scripts and the Azure Resource Manager template (ARM template). The Azure Function runtime is set to 2.0 by default, but it can be changed to 3.0 from the Azure portal.
-1. Review and modify the application settings for the Azure Function. For example, change `MaxDegreeOfParallelism` from 16 to a smaller number, and set `UUIDtoResourceTypeConversion` to **false** to ingest data without the conversion from a "urn : uuid" string to a corresponding FHIR resource type.
-
- [![Image of user interface of Update Azure Function AppSettings.](media/bulk-import/importer-appsettings.png)](media/bulk-import/importer-appsettings.png#lightbox)
-
-1. Upload the FHIR data to the storage container that the FHIR Importer is monitoring. By default, the storage account name is the importer function name plus `sa` (for example, `importer1sa`), and the container is named `fhirimport`. The `fhirrejected` container is for storing files that can't be processed due to errors. You can use the portal, [AzCopy](../../storage/common/storage-use-azcopy-v10.md), or other upload tools.
-
- [![Image of user interface of Upload Files to Storage.](media/bulk-import/importer-storage-container.png)](media/bulk-import/importer-storage-container.png#lightbox)
-
-1. Test the FHIR Importer with a few documents first before bulk importing. Use App Insights to monitor and troubleshoot the Importer Azure Function. Check the logs and files in the `fhirrejected` storage container.
-
- [![Image of user interface of Importer Monitoring.](media/bulk-import/importer-monitoring.png)](media/bulk-import/importer-monitoring.png#lightbox)
-
-## Other FHIR Data Loading Tools
-
-There are other similar tools that can be used to bulk load FHIR data.
-
-- [FHIR Data Loader or FHIRDL](https://github.com/microsoft/healthcare-apis-samples/blob/main/docs/HowToLoadData.md) - This is a console application that loads and converts FHIR data in Azure Storage. It can also send simulated device data to IoT in one of the menu options. You can run the tool interactively, or from a command line with some code modification.
-- [FHIR Bulk Loader & Export](https://github.com/microsoft/fhir-loader) - This loading tool not only imports bulk FHIR data, but also provides auditing, error logging, and patient-centric data export.
-
-## Next steps
-
-In this article, you've learned about the tools and the steps for bulk-importing data into FHIR service. For more information about converting data to FHIR, exporting settings to set up a storage account, and moving data to Azure Synapse, see
-
->[!div class="nextstepaction"]
->[Converting your data to FHIR](convert-data.md)
-
->[!div class="nextstepaction"]
->[Configure export settings and set up a storage account](configure-export-data.md)
-
->[!div class="nextstepaction"]
->[Copy data from Azure API for FHIR to Azure Synapse Analytics](copy-to-synapse.md)
healthcare-apis Configure Import Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/configure-import-data.md
+
+ Title: Configure import settings in the FHIR service - Azure Health Data Services
+description: This article describes how to configure import settings in the FHIR service
++++ Last updated : 04/16/2022+++
+# Configure bulk import settings (Preview)
+
+The FHIR service supports the $import operation, which allows you to import data into the FHIR service account from a storage account.
+
+Configuring import settings in the FHIR service involves the following three steps:
+
+- Enable managed identity for the FHIR service.
+- Create an Azure storage account or use an existing storage account, and then grant permissions to the FHIR service to access it.
+- Set the import configuration in the FHIR service.
+
+## Enable managed identity on the FHIR service
+
+The first step in configuring the FHIR service for import is to enable a system-wide managed identity on the service, which will be used to grant the service access to the storage account. For more information about managed identities in Azure, see [About managed identities for Azure resources](../../active-directory/managed-identities-azure-resources/overview.md).
+
+In this step, browse to your FHIR service in the Azure portal, and select the **Identity** blade. Set the **Status** option to **On**, and then select **Save**. The **Yes** and **No** buttons will display. Select **Yes** to enable the managed identity for the FHIR service. After the system identity has been enabled, you'll see a system-assigned GUID value.
+
+[ ![Enable Managed Identity](media/export-data/fhir-mi-enabled.png) ](media/export-data/fhir-mi-enabled.png#lightbox)
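If you prefer scripting, a generic resource update from the Azure CLI can usually enable a system-assigned identity. Whether this works for your FHIR service depends on the resource provider, so treat it only as a sketch and use the portal steps above as the documented path; the resource ID is a placeholder.

```bash
# Attempt to enable the system-assigned managed identity with a generic
# ARM resource update (resource ID is a placeholder; the portal is the documented path).
az resource update \
  --ids "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.HealthcareApis/workspaces/<workspace-name>/fhirservices/<fhir-service-name>" \
  --set identity.type=SystemAssigned
```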
++
+## Assign permissions to the FHIR service to access the storage account
+
+Browse to **Access Control (IAM)** in the storage account, and then select **Add role assignment**. If the **Add role assignment** option is grayed out, ask your Azure administrator to assign you permission to perform this task.
+
+For more information about assigning roles in the Azure portal, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
+
+Add the role [Storage Blob Data Contributor](../../role-based-access-control/built-in-roles.md#storage-blob-data-contributor) to the FHIR service, and then select **Save**.
+
+[![Screen shot of the Add role assignment page.](media/bulk-import/add-role-assignment-page.png) ](media/bulk-import/add-role-assignment-page.png#lightbox)
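If you'd rather perform the assignment from the command line, an Azure CLI sketch might look like the following. The managed identity object ID (from the previous step) and the storage account resource ID are placeholders.

```bash
# Grant the FHIR service's system-assigned identity blob access on the storage account.
az role assignment create \
  --assignee-object-id "<fhir-service-managed-identity-object-id>" \
  --assignee-principal-type ServicePrincipal \
  --role "Storage Blob Data Contributor" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account-name>"
```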
+
+Now you're ready to select the storage account in the FHIR service as a default storage account for import.
+
+## Set import configuration of the FHIR service
+
+The final step is to set the import configuration of the FHIR service, which specifies the storage account, enables import, and enables initial import mode.
+
+> [!NOTE]
+> If you haven't assigned storage access permissions to the FHIR service, the import operations ($import) will fail.
+
+To specify the Azure Storage account, you need to use the [REST API](https://docs.microsoft.com/rest/api/healthcareapis/services/create-or-update) to update the FHIR service.
+
+To get the request URL and body, browse to the Azure portal of your FHIR service. Select **Overview**, and then **JSON View**.
+
+[ ![Screenshot of Get JSON View](media/bulk-import/fhir-json-view.png) ](media/bulk-import/fhir-json-view.png#lightbox)
+
+Copy the URL to use as the request URL, and make the following changes to the JSON to use as the request body (a sketch of the full request follows the screenshot below):
+- Set **enabled** in **importConfiguration** to **true**.
+- Add or change **integrationDataStore** to the target storage account name.
+- Set **initialImportMode** in **importConfiguration** to **true**.
+- Remove **provisioningState**.
+
+[ ![Screenshot of the importer configuration code example](media/bulk-import/importer-url-and-body.png) ](media/bulk-import/importer-url-and-body.png#lightbox)
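Put together, the update request might look like the following sketch. The request URL is the one you copied from **JSON View**, the token is an ARM access token, and the body below is abridged and illustrative; keep the rest of the JSON you copied and change only the fields listed above.

```bash
# Get an ARM access token and PUT the edited JSON back to the resource URL copied
# from JSON View. The body is abridged; only the importConfiguration fields shown
# in this article are meaningful, everything else is a placeholder.
TOKEN=$(az account get-access-token --query accessToken --output tsv)

curl -X PUT "<request-url-copied-from-json-view>" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d @- <<'EOF'
{
  "location": "<region>",
  "properties": {
    "importConfiguration": {
      "enabled": true,
      "initialImportMode": true,
      "integrationDataStore": "<storage-account-name>"
    }
  }
}
EOF
```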
+
+After you've completed this final step, you're ready to import data using $import.
+
+## Next steps
+
+In this article, you've learned how the FHIR service's $import operation allows you to import data into the FHIR service account from a storage account. You also learned about the three steps used in configuring import settings in the FHIR service. For more information about converting data to FHIR, exporting settings to set up a storage account, and moving data to Azure Synapse, see
+
+>[!div class="nextstepaction"]
+>[Converting your data to FHIR](convert-data.md)
+
+>[!div class="nextstepaction"]
+>[Configure export settings and set up a storage account](configure-export-data.md)
+
+>[!div class="nextstepaction"]
+>[Copy data from FHIR service to Azure Synapse Analytics](copy-to-synapse.md)
healthcare-apis Import Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/import-data.md
+
+ Title: Executing the import by invoking $import operation on FHIR service in Azure Health Data Services
+description: This article describes how to import FHIR data using $import
++++ Last updated : 04/16/2022+++
+# Bulk import FHIR data (Preview)
+
+The Bulk import feature enables importing FHIR data to the FHIR server at high throughput using the $import operation. This feature is suitable for initial data load into the FHIR server.
+
+## Current limitations
+
+* Conditional references in resources aren't supported.
+* If multiple resources share the same resource ID, then only one of those resources will be imported at random and an error will be logged corresponding to the remaining resources sharing the ID.
+* The data to be imported must be in the same tenant as the FHIR service.
+
+## Using $import operation
+
+To use $import, you'll need to configure the FHIR server using the instructions in the [Configure bulk import settings](configure-import-data.md) article and set the **initialImportMode** to *true*. Doing so also suspends write operations (POST and PUT) on the FHIR server. You should set the **initialImportMode** to *false* to reenable write operations after you have finished importing your data.
+
+The FHIR data to be imported must be stored in resource-specific files in FHIR NDJSON format in Azure Blob storage. All the resources in a file must be of the same type. You may have multiple files per resource type.
+
+### Calling $import
+
+Make a ```POST``` call to ```<<FHIR service base URL>>/$import``` with the following required headers and body, which contains a FHIR [Parameters](http://hl7.org/fhir/parameters.html) resource.
+
+As `$import` is an async operation, a **callback** link will be returned in the `Content-Location` header of the response, together with a ```202 Accepted``` status code. You can use this callback link to check the import status.
+
+#### Request Header
+
+```http
+Prefer:respond-async
+Content-Type:application/fhir+json
+```
+
+#### Body
+
+| Parameter Name | Description | Card. | Accepted values |
+| -- | -- | -- | -- |
+| inputFormat | String representing the name of the data source format. Currently only FHIR NDJSON files are supported. | 1..1 | ```application/fhir+ndjson``` |
+| mode | Import mode. Currently only initial load mode is supported. | 1..1 | ```InitialLoad``` |
+| input | Details of the input files. | 1..* | A JSON array with three parts described in the table below. |
+
+| Input part name | Description | Card. | Accepted values |
+| -- | -- | -- | -- |
+| type | Resource type of the input file | 1..1 | A valid [FHIR resource type](https://www.hl7.org/fhir/resourcelist.html) that matches the input file. |
+| url | Azure Storage URL of the input file | 1..1 | URL value of the input file; it can't be modified. |
+| etag | Etag of the input file on Azure Storage, used to verify that the file content hasn't changed. | 0..1 | Etag value of the input file; it can't be modified. |
+
+**Sample body:**
+
+```json
+{
+ "resourceType": "Parameters",
+ "parameter": [
+ {
+ "name": "inputFormat",
+ "valueString": "application/fhir+ndjson"
+ },
+ {
+ "name": "mode",
+ "valueString": "InitialLoad"
+ },
+ {
+ "name": "input",
+ "part": [
+ {
+ "name": "type",
+ "valueString": "Patient"
+ },
+ {
+ "name": "url",
+ "valueUri": "https://example.blob.core.windows.net/resources/Patient.ndjson"
+ },
+ {
+ "name": "etag",
+ "valueUri": "0x8D92A7342657F4F"
+ }
+ ]
+ },
+ {
+ "name": "input",
+ "part": [
+ {
+ "name": "type",
+ "valueString": "CarePlan"
+ },
+ {
+ "name": "url",
+ "valueUri": "https://example.blob.core.windows.net/resources/CarePlan.ndjson"
+ }
+ ]
+ }
+ ]
+}
+```
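For example, with curl the call might be issued as in the following sketch. The FHIR service base URL and access token are placeholders, and `import-parameters.json` is the Parameters resource shown above saved to a file.

```bash
# POST the Parameters resource to the $import endpoint. The -i flag prints the
# response headers so you can capture the Content-Location (callback) link
# returned together with 202 Accepted.
curl -i -X POST "https://<your-fhir-service-base-url>/\$import" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Prefer: respond-async" \
  -H "Content-Type: application/fhir+json" \
  -d @import-parameters.json
```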
+
+### Checking import status
+
+Make the REST call with the ```GET``` method to the **callback** link returned in the previous step. You can interpret the response using the following table:
+
+| Response code | Response body |Description |
+| -- | -- |-- |
+| 202 Accepted | |The operation is still running.|
+| 200 OK |The response body doesn't contain any `error.url` entry|All resources were imported successfully.|
+| 200 OK |The response body contains one or more `error.url` entries|Errors occurred while importing some of the resources. See the files located at `error.url` for the details. The rest of the resources were imported successfully.|
+| Other||A fatal error occurred and the operation has stopped. Successfully imported resources haven't been rolled back.|
+
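For example, polling the callback link with curl might look like the following sketch; the callback URL is the value returned in the `Content-Location` header of the `$import` response.

```bash
# Poll the callback link: print the HTTP status code and save the body for inspection.
curl -s -o import-status.json -w "%{http_code}\n" "<callback-url>" \
  -H "Authorization: Bearer $TOKEN"
```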
+Below are some of the important fields in the response body:
+
+| Field | Description |
+| -- | -- |
+|transactionTime|Start time of the bulk import operation.|
+|output.count|Count of resources that were successfully imported|
+|error.count|Count of resources that weren't imported due to some error|
+|error.url|URL of the file containing details of the error. Each error.url is unique to an input URL. |
+
+**Sample response:**
+
+```json
+{
+ "transactionTime": "2021-07-16T06:46:52.3873388+00:00",
+ "request": "https://importperf.azurewebsites.net/$Import",
+ "output": [
+ {
+ "type": "Patient",
+ "count": 10000,
+ "inputUrl": "https://example.blob.core.windows.net/resources/Patient.ndjson"
+ },
+ {
+ "type": "CarePlan",
+ "count": 199949,
+ "inputUrl": "https://example.blob.core.windows.net/resources/CarePlan.ndjson"
+ }
+ ],
+ "error": [
+ {
+ "type": "OperationOutcome",
+ "count": 51,
+ "inputUrl": "https://example.blob.core.windows.net/resources/CarePlan.ndjson",
+ "url": "https://example.blob.core.windows.net/fhirlogs/CarePlan06b88c6933a34c7c83cb18b7dd6ae3d8.ndjson"
+ }
+ ]
+}
+```
+
+## Next steps
+
+In this article, you've learned about how the Bulk import feature enables importing FHIR data to the FHIR server at high throughput using the $import operation. For more information about converting data to FHIR, exporting settings to set up a storage account, and moving data to Azure Synapse, see
++
+>[!div class="nextstepaction"]
+>[Converting your data to FHIR](convert-data.md)
+
+>[!div class="nextstepaction"]
+>[Configure export settings and set up a storage account](configure-export-data.md)
+
+>[!div class="nextstepaction"]
+>[Copy data from Azure API for FHIR to Azure Synapse Analytics](copy-to-synapse.md)
machine-learning Concept Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-data.md
Supported cloud-based storage services in Azure that can be registered as datast
+ Azure Database for MySQL >[!TIP]
-> You can create datastores with credential-based authentication for accessing storage services, like a service principal or shared access signature (SAS) token. These credentials can be accessed by users who have *Reader* access to the workspace. <br><br>If this is a concern, [create a datastore that uses identity-based data access] to connect to storage services(how-to-identity-based-data-access.md).
+> You can create datastores with credential-based authentication for accessing storage services, like a service principal or shared access signature (SAS) token. These credentials can be accessed by users who have *Reader* access to the workspace. <br><br>If this is a concern, [create a datastore that uses identity-based data access](how-to-identity-based-data-access.md) to connect to storage services.
<a name="datasets"></a> ## Reference data in storage with datasets
See the [Create a dataset monitor](how-to-monitor-datasets.md) article, to learn
## Next steps + Create a dataset in Azure Machine Learning studio or with the Python SDK [using these steps.](how-to-create-register-datasets.md)
-+ Try out dataset training examples with our [sample notebooks](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/work-with-data/).
++ Try out dataset training examples with our [sample notebooks](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/work-with-data/).
machine-learning How To Create Component Pipelines Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-component-pipelines-cli.md
You can iterate quickly with command jobs and then connect them together into a
### I'm doing distributed training in my component. The component, which is registered, specifies distributed training settings including node count. How can I change the number of nodes used during runtime? The optimal number of nodes is best determined at runtime, so I don't want to update the component and register a new version.
-You can use the overrides section in component job to change the resource and distribution settings. See [this example using TensorFlow](https://github.com/Azure/azureml-examples/tree/main/cli/jobs/pipelines-with-components/basics/6a_tf_hello_world) or [this example using PyTorch](https://github.com/Azure/azureml-examples/tree/main/cli/jobs/pipelines-with-components/basics/6c_pytorch_hello_world).
+You can use the overrides section in component job to change the resource and distribution settings. See [this example using TensorFlow](https://github.com/Azure/azureml-examples/tree/main/cli/jobs/pipelines-with-components/basics/6a_tf_hello_world) or [this example using PyTorch](https://github.com/Azure/azureml-examples/tree/main/cli/jobs/pipelines-with-components/basics/6b_pytorch_hello_world).
### How can I define an environment with conda dependencies inside a component? See [this example](https://github.com/Azure/azureml-examples/tree/main/cli/jobs/pipelines-with-components/basics/5c_env_conda_file).
openshift Howto Deploy Java Liberty App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-deploy-java-liberty-app.md
Complete the following prerequisites to successfully walk through this guide.
* Be sure to follow the steps in "Install the OpenShift CLI" because we'll use the `oc` command later in this article. * Write down the cluster console URL. It will look like `https://console-openshift-console.apps.<random>.<region>.aroapp.io/`. * Take note of the `kubeadmin` credentials.-
-1. Verify you can sign in to the OpenShift CLI with the token for user `kubeadmin`.
-
-### Configure Azure Active Directory authentication
-
-Azure Active Directory (Azure AD) implements OpenID Connect (OIDC). OIDC lets you use Azure AD to sign in to the ARO cluster. Follow the steps in [Configure Azure Active Directory authentication](configure-azure-ad-cli.md) to set up your cluster.
-
-After you complete the setup, return to this document and sign in to the cluster with an Azure AD user.
-
-1. Sign in to the OpenShift web console from your browser using the credentials of an Azure AD user. We'll leverage the OpenShift OpenID authentication against Azure Active Directory to use OpenID to define the administrator.
-
- 1. Use an InPrivate, Incognito or other equivalent browser window feature to sign in to the console. The window will look different after having enabled OIDC.
-
- :::image type="content" source="media/built-in-container-registry/oidc-enabled-login-window.png" alt-text="OpenID Connect enabled sign in window.":::
- 1. Select **AAD**
-
- > [!NOTE]
- > Take note of the username and password you use to sign in here. This username and password will function as an administrator for other actions in this article.
-1. Sign in with the OpenShift CLI by using the following steps. For discussion, this process is known as `oc login`.
- 1. At the right-top of the web console, expand the context menu of the signed-in user, then select **Copy Login Command**.
- 1. Sign in to a new tab window with the same user if necessary.
- 1. Select **Display Token**.
- 1. Copy the value listed below **Login with this token** to the clipboard and run it in a shell, as shown here.
-
- ```bash
- oc login --token=<login-token> --server=<server-url>
- ```
-
-1. Run `oc whoami` in the console and note the output as **\<aad-user>**. We'll use this value later in the article.
-1. Sign out of the OpenShift web console. Select the button in the top right of the browser window labeled as the **\<aad-user>** and choose **Log Out**.
-
-### Create an OpenShift namespace for the Java app
-
-1. Sign in to the OpenShift web console from your browser using the `kubeadmin` credentials.
-2. Navigate to **Administration** > **Namespaces** > **Create Namespace**.
-3. Fill in `open-liberty-demo` for **Name** and select **Create**, as shown next.
-
- ![create namespace](./media/howto-deploy-java-liberty-app/create-namespace.png)
-
-### Create an administrator for the demo project
-
-Besides image management, the **aad-user** will also be granted administrative permissions for managing resources in the demo project of the ARO 4 cluster. Sign in to the OpenShift CLI and grant the **aad-user** the necessary privileges by following these steps.
-
-1. Sign in to the OpenShift web console from your browser using the `kubeadmin` credentials.
-1. At the right-top of the web console, expand the context menu of the signed-in user, then select **Copy Login Command**.
-1. Sign in to a new tab window with the same user if necessary.
-1. Select **Display Token**.
-1. Copy the value listed below **Login with this token** to the clipboard and run it in a shell, as shown here.
-1. Execute the following commands to grant `admin` role to the **aad-user** in namespace `open-liberty-demo`.
-
- ```bash
- # Switch to project "open-liberty-demo"
- oc project open-liberty-demo
- Now using project "open-liberty-demo" on server "https://api.x8xl3f4y.eastus.aroapp.io:6443".
-
- oc adm policy add-role-to-user admin <aad-user>
- clusterrole.rbac.authorization.k8s.io/admin added: "kaaIjx75vFWovvKF7c02M0ya5qzwcSJ074RZBfXUc34"
- ```
+ * Be sure to follow the steps in "Connect using the OpenShift CLI" with the `kubeadmin` credentials.
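For reference, signing in to the OpenShift CLI with those credentials typically looks like the following sketch; the API server URL and password come from your cluster's connection details.

```bash
# Sign in to the OpenShift CLI with the kubeadmin credentials (placeholders shown).
oc login <api-server-url> -u kubeadmin -p <kubeadmin-password>
```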
### Install the Open Liberty OpenShift Operator
After creating and connecting to the cluster, install the Open Liberty Operator.
:::image type="content" source="media/howto-deploy-java-liberty-app/open-liberty-operator-installed.png" alt-text="Installed Operators showing Open Liberty is installed.":::
+### Create an OpenShift namespace for the Java app
+
+Follow the instructions below to create an OpenShift namespace for use with your app.
+
+1. Make sure you have signed in to the OpenShift web console from your browser using the `kubeadmin` credentials.
+2. Navigate to **Administration** > **Namespaces** > **Create Namespace**.
+3. Fill in `open-liberty-demo` for **Name** and select **Create**, as shown next.
+
+ ![create namespace](./media/howto-deploy-java-liberty-app/create-namespace.png)
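If you prefer the CLI over the web console, the same namespace can be created with a single command, for example:

```bash
# Create the namespace (project) for the demo app from the CLI.
oc new-project open-liberty-demo
```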
+ ### Create an Azure Database for MySQL Follow the instructions below to set up an Azure Database for MySQL for use with your app. If your application doesn't require a database, you can skip this section.
cd <path-to-your-repo>/open-liberty-on-aro/3-integration/connect-db/mysql
export DB_SERVER_NAME=<Server name>.mysql.database.azure.com export DB_PORT_NUMBER=3306 export DB_NAME=<Database name>
-export DB_USER=<Server admin username>@<Database name>
+export DB_USER=<Server admin username>@<Server name>
export DB_PASSWORD=<Server admin password> export NAMESPACE=open-liberty-demo
Complete the following steps to build the application image:
# [with DB connection](#tab/with-mysql-image)
-### Log in to the OpenShift CLI as the Azure AD user
-
-Since you have already successfully run the app in the Liberty Docker container, sign in to the OpenShift CLI as the Azure AD user in order to build image remotely on the cluster.
-
-1. Sign in to the OpenShift web console from your browser using the credentials of an Azure AD user.
-
- 1. Use an InPrivate, Incognito or other equivalent browser window feature to sign in to the console.
- 1. Select **AAD**
-
- > [!NOTE]
- > Take note of the username and password you use to sign in here. This username and password will function as an administrator for other actions in this and other articles.
-1. Sign in with the OpenShift CLI by using the following steps. For discussion, this process is known as `oc login`.
- 1. At the right-top of the web console, expand the context menu of the signed-in user, then select **Copy Login Command**.
- 1. Sign in to a new tab window with the same user if necessary.
- 1. Select **Display Token**.
- 1. Copy the value listed below **Login with this token** to the clipboard and run it in a shell, as shown here.
-
- ```bash
- oc login --token=<login-token> --server=<server-url>
- ```
- ### Build the application and push to the image stream
-Next, you're going to build the image remotely on the cluster by executing the following commands.
+Since you have already successfully run the app in the Liberty Docker container, you're going to build the image remotely on the cluster by executing the following commands.
+1. Make sure you have already signed in to the OpenShift CLI using the `kubeadmin` credentials.
1. Identify the source directory and Dockerfile. ```bash
Before deploying the containerized application to a remote cluster, build and ru
1. Open `http://localhost:9080/` in your browser to visit the application home page. 1. Press **Control-C** to stop the application and Liberty server.
-### Log in to the OpenShift CLI as the Azure AD user
-
-When you're satisfied with the state of the application, sign in to the OpenShift CLI as the Azure AD user in order to build image remotely on the cluster.
-
-1. Sign in to the OpenShift web console from your browser using the credentials of an Azure AD user.
-
- 1. Use an InPrivate, Incognito or other equivalent browser window feature to sign in to the console.
- 1. Select **AAD**
-
- > [!NOTE]
- > Take note of the username and password you use to sign in here. This username and password will function as an administrator for other actions in this and other articles.
-1. Sign in with the OpenShift CLI by using the following steps. For discussion, this process is known as `oc login`.
- 1. At the right-top of the web console, expand the context menu of the signed-in user, then select **Copy Login Command**.
- 1. Sign in to a new tab window with the same user if necessary.
- 1. Select **Display Token**.
- 1. Copy the value listed below **Login with this token** to the clipboard and run it in a shell, as shown here.
-
- ```bash
- oc login --token=<login-token> --server=<server-url>
- ```
- ### Build the application and push to the image stream
-Next, you're going to build the image remotely on the cluster by executing the following commands.
+When you're satisfied with the state of the application, you're going to build the image remotely on the cluster by executing the following commands.
+1. Make sure you have already signed in to the OpenShift CLI using the `kubeadmin` credentials.
1. Identify the source directory and the Dockerfile. ```bash
Next, you're going to build the image remotely on the cluster by executing the f
## Deploy application on the ARO 4 cluster Now you can deploy the sample Liberty application to the Azure Red Hat OpenShift 4 cluster you created earlier when working through the prerequisites.+ # [with DB from web console](#tab/with-mysql-deploy-console) ### Deploy the application from the web console Because we use the Open Liberty Operator to manage Liberty applications, we need to create an instance of its *Custom Resource Definition*, of type "OpenLibertyApplication". The Operator will then take care of all aspects of managing the OpenShift resources required for deployment.
-1. Sign in to the OpenShift web console from your browser using the credentials of the Azure AD user.
+1. Sign in to the OpenShift web console from your browser using the `kubeadmin` credentials.
1. Expand **Home**, Select **Projects** > **open-liberty-demo**. 1. Navigate to **Operators** > **Installed Operators**. 1. In the middle of the page, select **Open Liberty Operator**.
You'll see the application home page opened in the browser.
Instead of using the web console GUI, you can deploy the application from the CLI. If you haven't already done so, download and install the `oc` command-line tool by following the steps in Red Hat documentation: [Getting Started with the CLI](https://docs.openshift.com/container-platform/4.2/cli_reference/openshift_cli/getting-started-cli.html). Now you can deploy the sample Liberty application to the ARO 4 cluster with the following steps.
-1. Log in to the OpenShift web console from your browser using the credentials of the Azure AD user.
-1. [Log in to the OpenShift CLI with the token for the Azure AD user](https://github.com/Azure-Samples/open-liberty-on-aro/blob/master/guides/howto-deploy-java-liberty-app.md#log-in-to-the-openshift-cli-with-the-token).
+
+1. Make sure you have already signed in to the OpenShift CLI using the `kubeadmin` credentials.
1. Run the following commands to deploy the application. ```bash # Change directory to "<path-to-repo>/3-integration/connect-db/mysql/target"
Once the Liberty application is up and running, open the output of **Route Host*
Because we use the Open Liberty Operator to manage Liberty applications, we need to create an instance of its *Custom Resource Definition*, of type "OpenLibertyApplication". The Operator will then take care of all aspects of managing the OpenShift resources required for deployment.
-1. Sign in to the OpenShift web console from your browser using the credentials of the Azure AD user.
+1. Sign in to the OpenShift web console from your browser using the `kubeadmin` credentials.
1. Expand **Home**, Select **Projects** > **open-liberty-demo**. 1. Navigate to **Operators** > **Installed Operators**. 1. In the middle of the page, select **Open Liberty Operator**.
When you're done with the application, follow these steps to delete the applicat
Instead of using the web console GUI, you can deploy the application from the CLI. If you haven't already done so, download and install the `oc` command-line tool by following Red Hat documentation [Getting Started with the CLI](https://docs.openshift.com/container-platform/4.2/cli_reference/openshift_cli/getting-started-cli.html).
-1. Sign in to the OpenShift web console from your browser using the credentials of the Azure AD user.
-2. Sign in to the OpenShift CLI with the token for the Azure AD user.
-3. Change directory to `2-simple` of your local clone, and run the following commands to deploy your Liberty application to the ARO 4 cluster. Command output is also shown inline.
+1. Make sure you have already signed in to the OpenShift CLI using the `kubeadmin` credentials.
+1. Change directory to `2-simple` of your local clone, and run the following commands to deploy your Liberty application to the ARO 4 cluster. Command output is also shown inline.
```bash # Switch to namespace "open-liberty-demo" where resources of demo app will belong to
Instead of using the web console GUI, you can deploy the application from the CL
javaee-cafe-simple 1/1 1 0 102s ```
-4. Check to see `1/1` under the `READY` column before you continue. If not, investigate and resolve the problem before continuing.
-5. Discover the host of route to the application with the `oc get route` command, as shown here.
+1. Check to see `1/1` under the `READY` column before you continue. If not, investigate and resolve the problem before continuing.
+1. Discover the host of route to the application with the `oc get route` command, as shown here.
```bash # Get host of the route
purview How To Data Owner Policies Resource Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-data-owner-policies-resource-group.md
Previously updated : 4/08/2022 Last updated : 4/15/2022 # Resource group and subscription access provisioning by data owner (preview) [!INCLUDE [feature-in-preview](includes/feature-in-preview.md)]
-[Policies](concept-data-owner-policies.md) in Azure Purview allow you to enable access to data sources that have been registered to a collection. You can also [register an entire Azure resource group or subscription to a collection](register-scan-azure-multiple-sources.md), which will allow you to scan all available data sources in that resource group or subscription. If you create a single access policy against a registered resource group or subscription, a data owner can enable access to **all** available data sources in that resource group or subscription. That single policy will cover all existing data sources and any data sources that are created afterwards.
+[Policies](concept-data-owner-policies.md) allow you to enable access to data sources that have been registered for *Data use governance* in Azure Purview.
-This article describes how a data owner can create a single access policy for **all available** data sources in a subscription or a resource group.
+You can also [register an entire resource group or subscription](register-scan-azure-multiple-sources.md), and create a single policy that will manage access to **all** data sources in that resource group or subscription. That single policy will cover all existing data sources and any data sources that are created afterwards.
+This article describes how this is done.
> [!IMPORTANT]
-> Currently, these are the available data sources for access policies:
+> Currently, these are the available data sources for access policies in Public Preview:
> - Blob storage > - Azure Data Lake Storage (ADLS) Gen2
purview How To Data Owner Policies Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-data-owner-policies-storage.md
Previously updated : 04/08/2022 Last updated : 04/15/2022
[!INCLUDE [feature-in-preview](includes/feature-in-preview.md)]
-[Policies](concept-data-owner-policies.md) in Azure Purview allow you to enable access to data sources that have been registered to a collection.
+[Policies](concept-data-owner-policies.md) allow you to enable access to data sources that have been registered for *Data use governance* in Azure Purview.
-This article describes how a data owner can use Azure Purview to enable access to datasets in Azure Storage. Currently, these Azure Storage sources are supported:
+This article describes how a data owner can use Azure Purview to delegate management of access to Azure Storage datasets. Currently, these two Azure Storage sources are supported:
- Blob storage - Azure Data Lake Storage (ADLS) Gen2
This article describes how a data owner can use Azure Purview to enable access t
[!INCLUDE [Access policies generic configuration](./includes/access-policies-configuration-generic.md)] ### Register the data sources in Azure Purview for Data use governance
-The Azure Storage resources need to be registered with Azure Purview to later define access policies.
+The Azure Storage resources first need to be registered with Azure Purview before you can define access policies on them.
To register your resources, follow the **Prerequisites** and **Register** sections of these guides:
To register your resources, follow the **Prerequisites** and **Register** sectio
- [Register and scan Azure Data Lake Storage (ADLS) Gen2 - Azure Purview](register-scan-adls-gen2.md#prerequisites)
-After you've registered your resources, you'll need to enable data use governance. Data use governance affects the security of your data, as it allows your users to manage access to resources from within Azure Purview.
-
-To ensure you securely enable data use governance, and follow best practices, follow this guide to enable data use governance for your resource group or subscription:
+After you've registered your resources, you'll need to enable *Data use governance*. Data use governance can affect the security of your data, as it allows certain Azure Purview roles to manage access to data sources that have been registered. Secure practices related to *Data use governance* are described in this guide:
- [How to enable data use governance](./how-to-enable-data-use-governance.md)
-In the end, your resource will have the **Data use governance** toggle to **Enabled**, as shown in the picture:
+The expected outcome is that your data source will have the **Data use governance** toggle **Enabled**, as shown in the picture:
:::image type="content" source="./media/how-to-data-owner-policies-storage/register-data-source-for-policy-storage.png" alt-text="Screenshot that shows how to register a data source for policy by toggling the enable tab in the resource editor.":::
Execute the steps in the [data-owner policy authoring tutorial](how-to-data-owne
## Additional information-- Policy statements set below container level on a Storage account are supported. If no access has been provided at Storage account level or container level, then the App that requests the data must execute a direct access by providing a fully qualified name to the data object. If the App attempts to crawl down the hierarchy starting from the Storage account or Container, and there's no access at that level, the request will fail. The following documents show examples of how to do perform a direct access. See also blogs in the *Next steps* section of this tutorial.
+- Policy statements set below container level on a Storage account are supported. If no access has been provided at Storage account level or container level, then the App that requests the data must execute a direct access by providing a fully qualified name to the data object. If the App attempts to crawl down the hierarchy starting from the Storage account or Container (like Storage Explorer does), and there's no access at that level, the request will fail. The following documents show examples of how to perform a direct access. See also the blogs in the *Next steps* section of this how-to-guide.
- [*abfs* for ADLS Gen2](../hdinsight/hdinsight-hadoop-use-data-lake-storage-gen2.md#access-files-from-the-cluster) - [*az storage blob download* for Blob Storage](../storage/blobs/storage-quickstart-blobs-cli.md#download-a-blob) - Creating a policy at Storage account level will enable the Subjects to access system containers, for example *$logs*. If this is undesired, first scan the data source(s) and then create finer-grained policies for each (that is, at container or subcontainer level).
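For illustration, the kind of direct access mentioned above, where the App provides a fully qualified name to the data object, might look like the following Azure CLI sketch; the account, container, and blob names are placeholders.

```bash
# Download a single blob directly by its fully qualified name,
# authenticating with the caller's Azure AD identity.
az storage blob download \
  --account-name <storage-account-name> \
  --container-name <container-name> \
  --name <path/to/blob> \
  --file ./blob-local-copy \
  --auth-mode login
```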
Execute the steps in the [data-owner policy authoring tutorial](how-to-data-owne
### Known issues
-> [!Warning]
-> **Known issues** related to Policy creation
-> - Do not create policy statements based on Azure Purview resource sets. Even if displayed in Azure Purview policy authoring UI, they are not yet enforced. Learn more about [resource sets](concept-resource-sets.md).
+**Known issues** related to Policy creation
+- Do not create policy statements based on Azure Purview resource sets. Even if displayed in Azure Purview policy authoring UI, they are not yet enforced. Learn more about [resource sets](concept-resource-sets.md).
### Policy action mapping
sentinel Automate Incident Handling With Automation Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/automate-incident-handling-with-automation-rules.md
Actions can be defined to run when the conditions (see above) are met. You can d
Also, you can define an action to [**run a playbook**](tutorial-respond-threats-playbook.md), in order to take more complex response actions, including any that involve external systems. **Only** playbooks activated by the [**incident trigger**](automate-responses-with-playbooks.md#azure-logic-apps-basic-concepts) are available to be used in automation rules. You can define an action to include multiple playbooks, or combinations of playbooks and other actions, and the order in which they will run.
+Playbooks using [either version of Logic Apps (Standard or Consumption)](automate-responses-with-playbooks.md#two-types-of-logic-apps) will be available to run from automation rules.
+ ### Expiration date You can define an expiration date on an automation rule. The rule will be disabled after that date. This is useful for handling (that is, closing) "noise" incidents caused by planned, time-limited activities such as penetration testing.
sentinel Automate Responses With Playbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/automate-responses-with-playbooks.md
Title: Automate threat response with playbooks in Microsoft Sentinel | Microsoft
description: This article explains automation in Microsoft Sentinel, and shows how to use playbooks to automate threat prevention and response. Previously updated : 02/21/2022 Last updated : 04/10/2022
Azure Logic Apps communicates with other systems and services using connectors.
- **Dynamic fields:** Temporary fields, determined by the output schema of triggers and actions and populated by their actual output, that can be used in the actions that follow.
+#### Two types of Logic Apps
+
+Microsoft Sentinel now supports two Logic Apps resource types:
+
+- **Logic App (Consumption)**, based on the classic, original Logic Apps engine, and
+- **Logic App (Standard)**, based on the new Logic Apps engine.
+
+**Logic Apps Standard** features a single-tenant, containerized environment that provides higher performance, fixed pricing, single apps containing multiple workflows, easier API connections management, native network capabilities such as virtual networking (VNet) and private endpoints support, built-in CI/CD features, better Visual Studio integration, a new version of the Logic Apps Designer, and more.
+
+You can leverage this powerful new version of Logic Apps by creating new Standard playbooks in Microsoft Sentinel, and you can use them the same ways you use the classic Logic App Consumption playbooks:
+- Attach them to automation rules and/or analytics rules.
+- Run them on demand, from both incidents and alerts.
+- Manage them in the Active Playbooks tab.
+
+There are many differences between these two resource types, some of which affect some of the ways they can be used in playbooks in Microsoft Sentinel. In such cases, the documentation will point out what you need to know.
+
+See [Resource type and host environment differences](../logic-apps/logic-apps-overview.md#resource-type-and-host-environment-differences) in the Logic Apps documentation for a detailed summary of the two resource types.
+
+> [!IMPORTANT]
+> - While the **Logic App (Standard)** resource type is generally available, Microsoft Sentinel's support for this resource type is in **Preview**.
+
+> [!NOTE]
+> - You'll notice an indicator in Standard workflows that presents as either *stateful* or *stateless*. Microsoft Sentinel does not support stateless workflows at this time. Learn about the differences between [**stateful and stateless workflows**](../logic-apps/single-tenant-overview-compare.md#stateful-and-stateless-workflows).
+> - Logic Apps Standard does not currently support Playbook templates. This means that you can't create a Standard workflow from within Microsoft Sentinel. Rather, you must create it in Logic Apps, and once it's created, you'll see it in Microsoft Sentinel.
+ ### Permissions required To give your SecOps team the ability to use Logic Apps to create and run playbooks in Microsoft Sentinel, assign Azure roles to your security operations team or to specific users on the team. The following describes the different available roles, and the tasks for which they should be assigned:
Two examples:
Playbooks can be run either **manually** or **automatically**.
-Running them manually means that when you get an alert, you can choose to run a playbook on-demand as a response to the selected alert. Currently this feature is generally available for alerts, and in preview for incidents.
+They are designed to be run automatically, and ideally that is how they should be run in the normal course of operations. You [run a playbook automatically](tutorial-respond-threats-playbook.md#automate-threat-responses) by defining it as an [automated response in an analytics rule](detect-threats-custom.md#set-automated-responses-and-create-the-rule) (for alerts), or as an [action in an automation rule](automate-incident-handling-with-automation-rules.md) (for incidents).
+
+There are circumstances, though, that call for running playbooks manually. For example, when creating a new playbook, you'll want to test it before putting it in production. Or, there may be situations where you'll want to have more control and human input into when and whether a certain playbook runs. You [run a playbook manually](tutorial-respond-threats-playbook.md#run-a-playbook-on-demand) by opening an incident or alert and selecting and running the associated playbook displayed there. Currently this feature is generally available for alerts, and in preview for incidents.
-Running them automatically means to set them as an automated response in an analytics rule (for alerts), or as an action in an automation rule (for incidents). [Learn more about automation rules](automate-incident-handling-with-automation-rules.md).
### Set an automated response
If the alert creates an incident, the incident will trigger an automation rule w
#### Alert creation automated response
-For playbooks that are triggered by alert creation and receive alerts as their inputs (their first step is "When a Microsoft Sentinel Alert is triggered"), attach the playbook to an analytics rule:
+For playbooks that are triggered by alert creation and receive alerts as their inputs (their first step is "Microsoft Sentinel alert"), attach the playbook to an analytics rule:
1. Edit the [analytics rule](detect-threats-custom.md) that generates the alert you want to define an automated response for.
For playbooks that are triggered by alert creation and receive alerts as their i
#### Incident creation automated response
-For playbooks that are triggered by incident creation and receive incidents as their inputs (their first step is "When a Microsoft Sentinel Incident is triggered"), create an automation rule and define a **Run playbook** action in it. This can be done in 2 ways:
+For playbooks that are triggered by incident creation and receive incidents as their inputs (their first step is "Microsoft Sentinel incident"), create an automation rule and define a **Run playbook** action in it. This can be done in two ways:
-- Edit the analytics rule that generates the incident you want to define an automated response for. Under **Incident automation** in the **Automated response** tab, create an automation rule. This will create a automated response only for this analytics rule.
+- Edit the analytics rule that generates the incident you want to define an automated response for. Under **Incident automation** in the **Automated response** tab, create an automation rule. This will create an automated response only for this analytics rule.
- From the **Automation rules** tab in the **Automation** blade, create a new automation rule and specify the appropriate conditions and desired actions. This automation rule will be applied to any analytics rule that fulfills the specified conditions.
If you want to run an incident-trigger playbook that you don't see in the list,
## Manage your playbooks
-In the **Playbooks** tab, there appears a list of all the playbooks which you have access to, filtered by the subscriptions which are currently displayed in Azure. The subscriptions filter is available from the **Directory + subscription** menu in the global page header.
+In the **Active playbooks** tab, you'll see a list of all the playbooks that you have access to, filtered by the subscriptions that are currently displayed in Azure. The subscriptions filter is available from the **Directory + subscription** menu in the global page header.
Clicking on a playbook name directs you to the playbook's main page in Logic Apps. The **Status** column indicates if it is enabled or disabled.
+The **Plan** column indicates whether the playbook uses the **Standard** or **Consumption** resource type in Azure Logic Apps. You can filter the list by plan type to see only one type of playbook. You'll notice that playbooks of the Standard type use the `LogicApp/Workflow` naming convention. This convention reflects the fact that a Standard playbook represents a workflow that exists *alongside other workflows* in a single Logic App.
+ **Trigger kind** represents the Logic Apps trigger that starts this playbook. | Trigger kind | Indicates component types in playbook |
The following recommended playbooks, and other similar playbooks are available t
- **Blocking playbooks** are triggered when an alert or incident is created, gather entity information like the account, IP address, and host, and blocks them from further actions: - [Prompt to block an IP address](https://github.com/Azure/Azure-Sentinel/tree/master/Playbooks/Block-IPs-on-MDATP-Using-GraphSecurity).
- - [Block an AAD user](https://github.com/Azure/Azure-Sentinel/tree/master/Playbooks/Block-AADUser)
- - [Reset an AAD user password](https://github.com/Azure/Azure-Sentinel/tree/master/Playbooks/Reset-AADUserPassword/)
+ - [Block an Azure AD user](https://github.com/Azure/Azure-Sentinel/tree/master/Playbooks/Block-AADUser)
+ - [Reset an Azure AD user password](https://github.com/Azure/Azure-Sentinel/tree/master/Playbooks/Reset-AADUserPassword/)
- [Prompt to isolate a machine](https://github.com/Azure/Azure-Sentinel/tree/master/Playbooks/Isolate-AzureVMtoNSG) - **Create, update, or close playbooks** can create, update, or close incidents in Microsoft Sentinel, Microsoft 365 security services, or other ticketing systems:
sentinel Notebooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/notebooks.md
If the *Runtime dependency of PyGObject is missing* error appears when you load
ModuleNotFoundError: No module named 'gi' ```
-1. Use the [aml-compute-setup.sh](https://github.com/Azure/Azure-Sentinel-Notebooks/blob/master/HowTos/aml-compute-setup.sh) script, located in the Microsoft Sentinel Notebooks GitHub repository, to automatically install the `pygobject` in all notebooks and Anaconda environments on the Compute instance.
+1. Use the [aml-compute-setup.sh](https://github.com/Azure/Azure-Sentinel-Notebooks/blob/master/tutorials-and-examples/how-tos/aml-compute-setup.sh) script, located in the Microsoft Sentinel Notebooks GitHub repository, to automatically install the `pygobject` in all notebooks and Anaconda environments on the Compute instance.
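One way to fetch and run the script on the compute instance is sketched below; check the script itself for its exact invocation and any arguments it expects.

```bash
# Download the setup script from the Microsoft Sentinel Notebooks repo and run it.
# Verify the invocation against the script's own header before running.
wget https://raw.githubusercontent.com/Azure/Azure-Sentinel-Notebooks/master/tutorials-and-examples/how-tos/aml-compute-setup.sh
bash aml-compute-setup.sh
```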
> [!TIP] > You can also fix this Warning by running the following code from a notebook:
We welcome feedback, suggestions, requests for features, contributed notebooks,
- **Find more notebooks** in the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel-Notebooks):
- - The [`Sample-Notebooks`](https://github.com/Azure/Azure-Sentinel-Notebooks/tree/master/Sample-Notebooks) directory includes sample notebooks that are saved with data that you can use to show intended output.
+ - The [`Sample-Notebooks`](https://github.com/Azure/Azure-Sentinel-Notebooks/tree/master/tutorials-and-examples/example-notebooks) directory includes sample notebooks that are saved with data that you can use to show intended output.
- - The [`HowTos`](https://github.com/Azure/Azure-Sentinel-Notebooks/tree/master/HowTos) directory includes notebooks that describe concepts such as setting your default Python version, creating Microsoft Sentinel bookmarks from a notebook, and more.
+ - The [`HowTos`](https://github.com/Azure/Azure-Sentinel-Notebooks/tree/master/tutorials-and-examples/how-tos) directory includes notebooks that describe concepts such as setting your default Python version, creating Microsoft Sentinel bookmarks from a notebook, and more.
For more information, see:
For more information, see:
- [Webinar: Microsoft Sentinel notebooks fundamentals](https://www.youtube.com/watch?v=rewdNeX6H94) - [Proactively hunt for threats](hunting.md) - [Use bookmarks to save interesting information while hunting](bookmarks.md)-- [Jupyter, msticpy, and Microsoft Sentinel](https://msticpy.readthedocs.io/en/latest/getting_started/JupyterAndAzureSentinel.html)
+- [Jupyter, msticpy, and Microsoft Sentinel](https://msticpy.readthedocs.io/en/latest/getting_started/JupyterAndAzureSentinel.html)
sentinel Tutorial Respond Threats Playbook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/tutorial-respond-threats-playbook.md
Get a more complete and detailed introduction to automating threat response usin
Follow these steps to create a new playbook in Microsoft Sentinel:
-### Prepare the playbook and Logic App
1. From the **Microsoft Sentinel** navigation menu, select **Automation**.
-1. On the top menu, select **Create** and **Add new playbook**.
+1. From the top menu, select **Create**.
+
+1. The drop-down menu that appears under **Create** gives you three choices for creating playbooks:
+
+ 1. If you're creating a **Standard** playbook (the new kind - see [Two types of Logic Apps](automate-responses-with-playbooks.md#two-types-of-logic-apps)), select **Blank playbook** and then follow the steps in the **Logic Apps Standard** tab below.
+
+ 1. If you're creating a **Consumption** playbook (the original, classic kind), then, depending on which trigger you want to use, select either **Playbook with incident trigger** or **Playbook with alert trigger**. Then, continue following the steps in the **Logic Apps Consumption** tab below.
+
+ > [!NOTE]
+ > Remember that only playbooks based on the **incident trigger** can be called by automation rules. Playbooks based on the **alert trigger** must be defined to run directly in [analytics rules](detect-threats-custom.md#set-automated-responses-and-create-the-rule). Both types can also be run manually.
+ >
+ > For more about which trigger to use, see [**Use triggers and actions in Microsoft Sentinel playbooks**](playbook-triggers-actions.md)
+
+# [Logic Apps Consumption](#tab/LAC)
+### Prepare the playbook and Logic App
+
+Regardless of which trigger you chose to create your playbook with in the previous step, the **Create playbook** wizard will appear.
+
+ :::image type="content" source="./media/tutorial-respond-threats-playbook/create-playbook-LAC.png" alt-text="Create a logic app":::
+
+1. In the **Basics** tab:
+
+ 1. Select the **Subscription**, **Resource group**, and **Region** of your choosing from their respective drop-down lists. The chosen region is where your Logic App information will be stored.
+
+ 1. Enter a name for your playbook under **Playbook name**.
+
+ 1. If you want to monitor this playbook's activity for diagnostic purposes, mark the **Enable diagnostics logs in Log Analytics** check box, and choose your **Log Analytics workspace** from the drop-down list.
+
+ 1. If your playbooks need access to protected resources that are inside or connected to an Azure virtual network, [you may need to use an integration service environment (ISE)](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md). If so, mark the **Associate with integration service environment** check box, and select the desired ISE from the drop-down list.
+
+ 1. Select **Next : Connections >**.
+
+1. In the **Connections** tab:
+
+ Ideally you should leave this section as is, configuring Logic Apps to connect to Microsoft Sentinel with managed identity. [Learn about this and other authentication alternatives](authenticate-playbooks-to-sentinel.md).
+
+ Select **Next : Review and create >**.
+
+1. In the **Review and create** tab:
+
+ Review the configuration choices you have made, and select **Create and continue to designer**.
+
+1. Your playbook will take a few minutes to be created and deployed, after which you will see the message "Your deployment is complete" and you will be taken to your new playbook's [Logic App Designer](../logic-apps/logic-apps-overview.md). The trigger you chose at the beginning will have automatically been added as the first step, and you can continue designing the workflow from there.
+
+ :::image type="content" source="media/tutorial-respond-threats-playbook/logic-app-blank-LAC.png" alt-text="Screenshot of logic app designer screen with opening trigger." lightbox="media/tutorial-respond-threats-playbook/logic-app-blank-LAC.png":::
+
+# [Logic Apps Standard](#tab/LAS)
+
+### Prepare the Logic App and workflow
- :::image type="content" source="./media/tutorial-respond-threats-playbook/add-new-playbook.png" alt-text="Add a new playbook":::
+There are three steps to creating a Logic Apps Standard playbook:
- A new browser tab will open and take you to the **Create a logic app** wizard.
+1. [Create a Logic App](#create-a-logic-app).
+1. [Create a workflow](#create-a-workflow-playbook) (this is the actual playbook).
+1. [Choose the trigger](#choose-the-trigger).
- :::image type="content" source="./media/tutorial-respond-threats-playbook/create-playbook.png" alt-text="Create a logic app":::
+#### Create a Logic App
-1. Enter your **Subscription** and **Resource group**, and give your playbook a name under **Logic app name**.
+Since you selected **Blank playbook**, a new browser tab will open and take you to the **Create Logic App** wizard.
-1. For **Region**, select the Azure region where your Logic App information is to be stored.
+ :::image type="content" source="./media/tutorial-respond-threats-playbook/create-logic-app-basics.png" alt-text="Create a Standard logic app.":::
-1. If you want to monitor this playbook's activity for diagnostic purposes, mark the **Enable log analytics** check box, and enter your **Log Analytics workspace** name.
+1. In the **Basics** tab:
-1. If you want to apply tags to your playbook, click **Next : Tags >** (not connected to tags applied by automation rules. [Learn more about tags](../azure-resource-manager/management/tag-resources.md)). Otherwise, click **Review + Create**. Confirm the details you provided, and click **Create**.
+ 1. Select the **Subscription** and **Resource Group** of your choosing from their respective drop-down lists.
-1. While your playbook is being created and deployed (this will take a few minutes), you will be taken to a screen called **Microsoft.EmptyWorkflow**. When the "Your deployment is complete" message appears, click **Go to resource.**
+ 1. Enter a name for your Logic App. For **Publish**, choose **Workflow**. Select the **Region** where you wish to deploy the logic app.
-1. You will be taken to your new playbook's [Logic Apps Designer](../logic-apps/logic-apps-overview.md), where you can start designing the workflow. You'll see a screen with a short introductory video and some commonly used Logic App triggers and templates. [Learn more](../logic-apps/logic-apps-create-logic-apps-from-templates.md) about creating a playbook with Logic Apps.
+ 1. For **Plan type**, choose **Standard**.
-1. Select the **Blank Logic App** template.
+ 1. Select **Next : Hosting >**.
- :::image type="content" source="./media/tutorial-respond-threats-playbook/choose-playbook-template.png" alt-text="Logic Apps Designer template gallery":::
+1. In the **Hosting** tab:
-### Choose the trigger
+ 1. For **Storage type**, choose **Azure Storage**, and choose or create a **Storage account**.
-Every playbook must start with a trigger. The trigger defines the action that will start the playbook and the schema that the playbook will expect to receive.
+ 1. Choose a **Windows Plan**.
-1. In the search bar, look for Microsoft Sentinel. Select **Microsoft Sentinel** when it appears in the results.
+1. Select **Next : Monitoring >**.
-1. In the resulting **Triggers** tab, you will see the two triggers offered by Microsoft Sentinel:
- - When a response to a Microsoft Sentinel Alert is triggered
- - When Microsoft Sentinel incident creation rule was triggered
+1. In the **Monitoring** tab:
- Choose the trigger that matches the type of playbook you are creating.
+ 1. If you want to enable performance monitoring in Azure Monitor for this application, leave the toggle set to **Yes**. Otherwise, set it to **No**.
+
+ > [!NOTE]
+ > This monitoring is **not required for Microsoft Sentinel** and **incurs additional charges**.
+
+ 1. If you want, you can select **Next : Tags >** to apply tags to this Logic App for resource categorization and billing purposes. Otherwise, select **Review + create**.
+
+1. In the **Review + create** tab:
+
+ Review the configuration choices you have made, and select **Create**.
+
+1. Your playbook will take a few minutes to be created and deployed, during which you will see some deployment messages. At the end of the process, you will be taken to the final deployment screen, where you'll see the message "Your deployment is complete".
+
+ Select **Go to resource**. You will be taken to the main page of your new Logic App.
+
+ Unlike with classic Consumption playbooks, you're not done yet. Now you must create a workflow.
+
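If you prefer to script the Logic App (Standard) resource creation described above, the following is a rough sketch of an equivalent Azure CLI call. It assumes the `az logicapp` command group is available in your CLI installation and that the resource group, storage account, and App Service plan already exist; all names are placeholders.

```azurecli
# Sketch only: create a Logic App (Standard) resource from the CLI.
az logicapp create \
    --resource-group <resource-group-name> \
    --name <logic-app-name> \
    --storage-account <storage-account-name> \
    --plan <app-service-plan-name>
```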
+#### Create a workflow (playbook)
+
+1. Select **Workflows** from the navigation menu of your Logic App page.
+
+1. Select **+ Add** from the button bar at the top (it might take a few seconds for the button to become active).
+
+1. The **New workflow** panel will appear. Enter a name for your workflow.
+
+1. Under **State type**, select **Stateful**.
+
+ > [!NOTE]
+ > Microsoft Sentinel does not currently support the use of *Stateless* workflows as playbooks.
+
+1. Select **Create**. Your workflow will be saved and will appear in the list of workflows in your Logic App. Select the workflow to proceed.
+
+1. You'll be taken to your workflow's page. Here you can see all the information about your workflow, including a record of all the times it has run. From the navigation menu, select **Designer**.
+
+1. The Designer screen will open and you will immediately be prompted to add a trigger and continue designing the workflow.
+
+ :::image type="content" source="media/tutorial-respond-threats-playbook/logic-app-standard-designer.png" alt-text="Screenshot of Logic App Standard designer." lightbox="media/tutorial-respond-threats-playbook/logic-app-standard-designer.png":::
+
+#### Choose the trigger
+
+1. Select the **Azure** tab and enter "Sentinel" in the Search line.
+
+1. In the **Triggers** tab below, you will see the two triggers offered by Microsoft Sentinel:
+ - Microsoft Sentinel alert (preview)
+ - Microsoft Sentinel incident (preview)
+
+ Select the trigger that matches the type of playbook you are creating.
> [!NOTE]
- > Remember that only playbooks based on the **incident trigger** can be called by automation rules. Playbooks based on the **alert trigger** must be defined to run directly in [analytics rules](detect-threats-custom.md#set-automated-responses-and-create-the-rule) and can also be run manually.
+ > Remember that only playbooks based on the **incident trigger** can be called by automation rules. Playbooks based on the **alert trigger** must be defined to run directly in [analytics rules](detect-threats-custom.md#set-automated-responses-and-create-the-rule). Both types can also be run manually.
> > For more about which trigger to use, see [**Use triggers and actions in Microsoft Sentinel playbooks**](playbook-triggers-actions.md).
- :::image type="content" source="./media/tutorial-respond-threats-playbook/choose-trigger.png" alt-text="Choose a trigger for your playbook":::
+ :::image type="content" source="./media/tutorial-respond-threats-playbook/sentinel-triggers.png" alt-text="Choose a trigger for your playbook":::
> [!NOTE]
> When you choose a trigger, or any subsequent action, you will be asked to authenticate to whichever resource provider you are interacting with. In this case, the provider is Microsoft Sentinel. There are a few different approaches you can take to authentication. For details and instructions, see [**Authenticate playbooks to Microsoft Sentinel**](authenticate-playbooks-to-sentinel.md).

+++

### Add actions

Now you can define what happens when you call the playbook. You can add actions, logical conditions, loops, or switch case conditions, all by selecting **New step**. This selection opens a new frame in the designer, where you can choose a system or an application to interact with or a condition to set. Enter the name of the system or application in the search bar at the top of the frame, and then choose from the available results.
You use a playbook to respond to an **alert** by creating an **analytics rule**,
1. From the **Analytics** blade in the Microsoft Sentinel navigation menu, select the analytics rule for which you want to automate the response, and click **Edit** in the details pane.
-1. In the **Analytics rule wizard - Edit existing rule** page, select the **Automated response** tab.
+1. In the **Analytics rule wizard - Edit existing scheduled rule** page, select the **Automated response** tab.
:::image type="content" source="./media/tutorial-respond-threats-playbook/automated-response-tab.png" alt-text="Automated response tab"::: 1. Choose your playbook from the drop-down list. You can choose more than one playbook, but only playbooks using the **alert trigger** will be available.
-1. In the **Review and create** tab, select **Save**.
+1. In the **Review and update** tab, select **Save**.
## Run a playbook on demand
sentinel Use Playbook Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/use-playbook-templates.md
You can repeat this process to create multiple playbooks on the same template.
1. For each connector you chose to create a new connection for after deployment: 1. From the navigation menu, select **API connections**.+ 1. Select the connection name.
+ :::image type="content" source="media/use-playbook-templates/view-api-connections.png" alt-text="Screenshot showing how to view A P I connections.":::
1. Select **Edit API connection** from the navigation menu.+ 1. Fill in the required parameters and click **Save**. :::image type="content" source="media/use-playbook-templates/edit-api-connection.png" alt-text="Screenshot showing how to edit A P I connections."::: Alternatively, you can create a new connection from within the relevant steps in the Logic Apps designer:
+
1. For each step which appears with an error sign, select it to expand.+ 1. Select **Add new**.+ 1. Authenticate according to the relevant instructions.+ 1. If there are other steps using this same connector, expand their boxes. From the list of connections that appears, select the connection you just created. 1. If you have chosen to use a managed identity connection for Microsoft Sentinel (or for other supported connections), grant permissions to the new playbook on the Microsoft Sentinel workspace (or on the relevant target resources for other connectors).
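For the managed identity case, one hedged way to grant those permissions from the Azure CLI is a role assignment on the workspace. The role name and scope shown below are typical choices rather than requirements, and all identifiers are placeholders.

```azurecli
# Sketch: give the playbook's managed identity the Microsoft Sentinel Responder role
# on the Log Analytics workspace that hosts Microsoft Sentinel.
az role assignment create \
    --assignee <playbook-managed-identity-principal-id> \
    --role "Microsoft Sentinel Responder" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>"
```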
service-health Resource Health Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-health/resource-health-overview.md
Different resources have their own criteria for when they report that they are d
## History information
-You can access up to 30 days of history in the **Health history** section of Resource Health.
+> [!NOTE]
+> You can query data for up to one year by using the `queryStartTime` parameter of the [Events](https://docs.microsoft.com/rest/api/resourcehealth/events/list-by-subscription-id) REST API.
+
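For example, the following is a minimal sketch of such a query using `az rest`. The subscription ID and start date are placeholders, and the API version is an assumption; check the Events REST API reference for the current version.

```azurecli
# Sketch: list resource health events for the subscription starting from a past date.
az rest --method get \
    --url "https://management.azure.com/subscriptions/<subscription-id>/providers/Microsoft.ResourceHealth/events?api-version=2018-07-01&queryStartTime=2021-04-15"
```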
+You can access up to 30 days of history in the **Health history** section of Resource Health in the Azure portal.
![List of Resource Health events over the last two weeks](./media/resource-health-overview/history-blade.png)
spring-cloud How To Enable System Assigned Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-enable-system-assigned-managed-identity.md
Previously updated : 02/09/2022 Last updated : 04/15/2022 zone_pivot_groups: spring-cloud-tier-selection
If you're unfamiliar with managed identities for Azure resources, see the [Manag
::: zone pivot="sc-enterprise-tier" - An already provisioned Azure Spring Cloud Enterprise tier instance. For more information, see [Quickstart: Provision an Azure Spring Cloud service instance using the Enterprise tier](quickstart-provision-service-instance-enterprise.md).-- [Azure CLI version 2.0.67 or later](/cli/azure/install-azure-cli).-- [!INCLUDE [install-enterprise-extension](includes/install-enterprise-extension.md)]
+- [Azure CLI version 2.30.0 or higher](/cli/azure/install-azure-cli).
+- [!INCLUDE [install-app-user-identity-extension](includes/install-app-user-identity-extension.md)]
::: zone-end ::: zone pivot="sc-standard-tier" - An already provisioned Azure Spring Cloud instance. For more information, see [Quickstart: Deploy your first application to Azure Spring Cloud](./quickstart.md).
+- [Azure CLI version 2.30.0 or higher](/cli/azure/install-azure-cli).
+- [!INCLUDE [install-app-user-identity-extension](includes/install-app-user-identity-extension.md)]
::: zone-end
If you're unfamiliar with managed identities for Azure resources, see the [Manag
Creating an app with a system-assigned identity requires setting an additional property on the application.
-# [Portal](#tab/azure-portal)
+### [Portal](#tab/azure-portal)
To set up a managed identity in the portal, first create an app, and then enable the feature.
To set up a managed identity in the portal, first create an app, and then enable
3. Select **Identity**. 4. Within the **System assigned** tab, switch **Status** to *On*. Select **Save**.
-![Managed identity in portal](./media/enterprise/msi/msi-enable.png)
-# [Azure CLI](#tab/azure-cli)
+### [Azure CLI](#tab/azure-cli)
You can enable system-assigned managed identity during app creation or on an existing app.
-**Enable system-assigned managed identity during creation of an app**
+### Enable system-assigned managed identity during creation of an app
The following example creates an app named *app_name* with a system-assigned managed identity, as requested by the `--assign-identity` parameter.
az spring-cloud app create \
--resource-group <resource-group-name> \ --name <app-name> \ --service <service-instance-name> \
- --assign-identity
+ --system-assigned
```
-**Enable system-assigned managed identity on an existing app**
+### Enable system-assigned managed identity on an existing app
Use `az spring-cloud app identity assign` command to enable the system-assigned identity on an existing app.
Use `az spring-cloud app identity assign` command to enable the system-assigned
az spring-cloud app identity assign \ --resource-group <resource-group-name> \ --name <app-name> \
- --service <service-instance-name>
+ --service <service-instance-name> \
+ --system-assigned
```
Azure Spring Cloud shares the same endpoint for token acquisition with Azure Vir
Removing a system-assigned identity will also delete it from Azure AD. Deleting the app resource automatically removes system-assigned identities from Azure AD.
-# [Portal](#tab/azure-portal)
+### [Portal](#tab/azure-portal)
To remove system-assigned managed identity from an app that no longer needs it:
To remove system-assigned managed identity from an app that no longer needs it:
1. Navigate to the desired application and select **Identity**. 1. Under **System assigned**/**Status**, select **Off** and then select **Save**:
-![Managed identity](./media/enterprise/msi/msi-disable.png)
-# [Azure CLI](#tab/azure-cli)
+### [Azure CLI](#tab/azure-cli)
To remove system-assigned managed identity from an app that no longer needs it, use the following command:
To remove system-assigned managed identity from an app that no longer needs it,
az spring-cloud app identity remove \ --resource-group <resource-group-name> \ --name <app-name> \
- --service <service-instance-name>
+ --service <service-instance-name> \
+ --system-assigned
+```
+
+## Get the client ID from the object ID (principal ID)
+
+Use the following command to get the client ID from the object/principal ID value:
+
+```azurecli
+az ad sp show --id <object-ID> --query appId
```
spring-cloud How To Enterprise Build Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-enterprise-build-service.md
In Azure Spring Cloud, the existing Standard tier already supports compiling use
Tanzu Build Service in the Enterprise tier is the entry point to containerize user applications from both source code and artifacts. There's a dedicated build agent pool that reserves compute resources for a given number of concurrent build tasks. The build agent pool prevents resource contention with your running apps. You can configure the number of resources given to the build agent pool during or after creating a new service instance of Azure Spring Cloud using the **VMware Tanzu settings**. The Build Agent Pool scale set sizes available are:
spring-cloud How To Manage User Assigned Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-manage-user-assigned-managed-identities.md
az spring-cloud app identity remove \
For user-assigned managed identity limitations, see [Quotas and service plans for Azure Spring Cloud](./quotas.md). - ## Next steps
-* [Learn more about managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md)
-* [How to use managed identities with Java SDK](https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples)
+- [Learn more about managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md)
+- [How to use managed identities with Java SDK](https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples)
spring-cloud How To Use Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-use-managed-identities.md
+
+ Title: Managed identities for applications in Azure Spring Cloud
+
+description: Home page for managed identities for applications.
++++ Last updated : 04/15/2022+
+zone_pivot_groups: spring-cloud-tier-selection
++
+# Use managed identities for applications in Azure Spring Cloud
+
+**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
+
+This article shows you how to use system-assigned and user-assigned managed identities for applications in Azure Spring Cloud.
+
+Managed identities for Azure resources provide an automatically managed identity in Azure Active Directory (Azure AD) to an Azure resource such as your application in Azure Spring Cloud. You can use this identity to authenticate to any service that supports Azure AD authentication, without having credentials in your code.
+
+## Feature status
+
+| System-assigned | User-assigned |
+| - | - |
+| GA | Preview |
+
+## Manage managed identity for an application
+
+For system-assigned managed identities, see [How to enable and disable system-assigned managed identity](./how-to-enable-system-assigned-managed-identity.md).
+
+For user-assigned managed identities, see [How to assign and remove user-assigned managed identities](./how-to-manage-user-assigned-managed-identities.md).
+
+## Obtain tokens for Azure resources
+
+An application can use its managed identity to get tokens to access other resources protected by Azure AD, such as Azure Key Vault. These tokens represent the application accessing the resource, not any specific user of the application.
+
+You may need to configure the target resource to allow access from your application. For more information, see [Assign a managed identity access to a resource by using the Azure portal](../active-directory/managed-identities-azure-resources/howto-assign-access-portal.md). For example, if you request a token to access Key Vault, be sure you have added an access policy that includes your application's identity. Otherwise, your calls to Key Vault will be rejected, even if they include the token. To learn more about which resources support Azure Active Directory tokens, see [Azure services that support Azure AD authentication](../active-directory/managed-identities-azure-resources/services-azure-active-directory-support.md).
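For example, for Key Vault, the following is a minimal sketch of granting such access with the Azure CLI. The vault name is a placeholder, and the object ID is the principal ID of your application's managed identity.

```azurecli
# Sketch: allow the app's managed identity to read secrets from a key vault.
az keyvault set-policy \
    --name "<your-keyvault-name>" \
    --object-id <your-app-managed-identity-principal-id> \
    --secret-permissions get list
```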
+
+Azure Spring Cloud shares the same endpoint for token acquisition with Azure Virtual Machines. We recommend using Java SDK or Spring Boot starters to acquire a token. For various code and script examples and guidance on important topics such as handling token expiration and HTTP errors, see [How to use managed identities for Azure resources on an Azure VM to acquire an access token](../active-directory/managed-identities-azure-resources/how-to-use-vm-token.md).
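As an illustration only (the SDK and starter approach above is preferred), the following sketch shows the kind of request an application could make against that shared endpoint from inside its own instance. It assumes the Azure VM (IMDS) request shape applies unchanged and requests a token for Azure Key Vault.

```azurecli
# Sketch (run from inside the app instance): request a managed identity token for Key Vault.
curl -H "Metadata: true" \
    "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fvault.azure.net"
```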
+
+## Examples of connecting Azure services in application code
+
+The following table provides links to articles that contain examples:
+
+| Azure service | Tutorial |
+|--||
+| Key Vault | [Tutorial: Use a managed identity to connect Key Vault to an Azure Spring Cloud app](tutorial-managed-identities-key-vault.md) |
+| Azure Functions | [Tutorial: Use a managed identity to invoke Azure Functions from an Azure Spring Cloud app](tutorial-managed-identities-functions.md) |
+| Azure SQL | [Use a managed identity to connect Azure SQL Database to an Azure Spring Cloud app](connect-managed-identity-to-azure-sql.md) |
+
+## Best practices when using managed identities
+
+We highly recommend that you use system-assigned and user-assigned managed identities separately unless you have a valid use case for combining them. If you use both kinds of managed identity together, failures can occur when an application uses its system-assigned managed identity but requests a token without specifying that identity's client ID. This scenario may work fine until one or more user-assigned managed identities are assigned to the application; after that, the application may fail to get the correct token.
+
+## Limitations
+
+### Maximum number of user-assigned managed identities per application
+
+For the maximum number of user-assigned managed identities per application, see [Quotas and Service Plans for Azure Spring Cloud](./quotas.md).
+
+### Azure services that aren't supported
+
+The following services do not currently support managed identity-based access:
+
+- Azure Redis Cache
+- Azure Flexible MySQL
+- Azure Flexible PostgreSQL
+- Azure Database for MariaDB
+- Azure Cosmos DB - Mongo DB
+- Azure Cosmos DB - Cassandra
+- Azure Databricks
+++
+## Concept mapping
+
+The following table shows the mappings between concepts in Managed Identity scope and Azure AD scope:
+
+| Managed Identity scope | Azure AD scope |
+||-|
+| Principal ID | Object ID |
+| Client ID | Application ID |
+
+## Next steps
+
+- [Access Azure Key Vault with managed identities in Spring Boot starter](https://github.com/Azure/azure-sdk-for-jav#use-msi--managed-identities)
+- [Learn more about managed identities for Azure resources](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/active-directory/managed-identities-azure-resources/overview.md)
+- [How to use managed identities with Java SDK](https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples)
spring-cloud Quotas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/quotas.md
-# Quotas and Service Plans for Azure Spring Cloud
+# Quotas and service plans for Azure Spring Cloud
**This article applies to:** ✔️ Java ✔️ C# **This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
-All Azure services set default limits and quotas for resources and features. Azure Spring Cloud offers two pricing tiers: Basic and Standard. We will detail limits for both tiers in this article.
+All Azure services set default limits and quotas for resources and features. Azure Spring Cloud offers the Basic, Standard, and Enterprise pricing tiers. This article details the limits for these tiers.
## Azure Spring Cloud service tiers and limits
-| Resource | Scope | Basic | Standard/Enterprise |
-|--|--|--||
-| vCPU | per app instance | 1 | 4 |
-| Memory | per app instance | 2 GB | 8 GB |
-| Azure Spring Cloud service instances | per region per subscription | 10 | 10 |
-| Total app instances | per Azure Spring Cloud service instance | 25 | 500 |
-| Custom Domains | per Azure Spring Cloud service instance | 0 | 25 |
-| Persistent volumes | per Azure Spring Cloud service instance | 1 GB/app x 10 apps | 50 GB/app x 10 apps |
-| Inbound Public Endpoints | per Azure Spring Cloud service instance | 10 <sup>1</sup> | 10 <sup>1</sup> |
+| Resource | Scope | Basic | Standard/Enterprise |
+|--|--|--|-|
+| vCPU | per app instance | 1 | 4 |
+| Memory | per app instance | 2 GB | 8 GB |
+| Azure Spring Cloud service instances | per region per subscription | 10 | 10 |
+| Total app instances | per Azure Spring Cloud service instance | 25 | 500 |
+| Custom Domains | per Azure Spring Cloud service instance | 0 | 25 |
+| Persistent volumes | per Azure Spring Cloud service instance | 1 GB/app x 10 apps | 50 GB/app x 10 apps |
+| Inbound Public Endpoints | per Azure Spring Cloud service instance | 10 <sup>1</sup> | 10 <sup>1</sup> |
| Outbound Public IPs | per Azure Spring Cloud service instance | 1 <sup>2</sup> | 2 <sup>2</sup> <br> 1 if using VNet<sup>2</sup> |
-| User-assigned managed identities | per app instance | 20 | 20 |
+| User-assigned managed identities | per app instance | 20 | 20 |
<sup>1</sup> You can increase this limit via support request to a maximum of 1 per app.
spring-cloud Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/troubleshoot.md
This article provides instructions for troubleshooting Azure Spring Cloud develo
Export the logs to Azure Log Analytics. The table for Spring application logs is named *AppPlatformLogsforSpring*. To learn more, see [Analyze logs and metrics with diagnostics settings](diagnostic-services.md).
-The following error message might appear in your logs:
-
-> "org.springframework.context.ApplicationContextException: Unable to start web server"
+The following error message might appear in your logs: `org.springframework.context.ApplicationContextException: Unable to start web server`
The message indicates one of two likely problems:
To fix this error, go to the `server parameters` of your MySQL instance, and cha
### My application crashes or throws an unexpected error
-When you're debugging application crashes, start by checking the running status and discovery status of the application. To do so, go to _App management_ in the Azure portal to ensure that the statuses of all the applications are _Running_ and _UP_.
+When you're debugging application crashes, start by checking the running status and discovery status of the application. To do so, go to *App management* in the Azure portal to ensure that the statuses of all the applications are *Running* and *UP*.
-* If the status is _Running_ but the discovery status is not _UP_, go to the ["My application can't be registered"](#my-application-cant-be-registered) section.
+* If the status is *Running* but the discovery status is not *UP*, go to the ["My application can't be registered"](#my-application-cant-be-registered) section.
-* If the discovery status is _UP_, go to Metrics to check the application's health. Inspect the following metrics:
+* If the discovery status is *UP*, go to Metrics to check the application's health. Inspect the following metrics:
- - `TomcatErrorCount` (_tomcat.global.error_):
+ * `TomcatErrorCount` (*tomcat.global.error*):
- All Spring application exceptions are counted here. If this number is large, go to Azure Log Analytics to inspect your application logs.
+ All Spring application exceptions are counted here. If this number is large, go to Azure Log Analytics to inspect your application logs.
- - `AppMemoryMax` (_jvm.memory.max_):
+ * `AppMemoryMax` (*jvm.memory.max*):
- The maximum amount of memory available to the application. The amount might be undefined, or it might change over time if it is defined. If it's defined, the amount of used and committed memory is always less than or equal to max. However, a memory allocation might fail with an `OutOfMemoryError` message if the allocation attempts to increase the used memory such that *used > committed*, even if *used <= max* is still true. In such a situation, try to increase the maximum heap size by using the `-Xmx` parameter.
+ The maximum amount of memory available to the application. The amount might be undefined, or it might change over time if it is defined. If it's defined, the amount of used and committed memory is always less than or equal to max. However, a memory allocation might fail with an `OutOfMemoryError` message if the allocation attempts to increase the used memory such that *used > committed*, even if *used <= max* is still true. In such a situation, try to increase the maximum heap size by using the `-Xmx` parameter.
- - `AppMemoryUsed` (_jvm.memory.used_):
+ * `AppMemoryUsed` (*jvm.memory.used*):
- The amount of memory in bytes that's currently used by the application. For a normal load Java application, this metric series forms a *sawtooth* pattern, where the memory usage steadily increases and decreases in small increments and suddenly drops a lot, and then the pattern recurs. This metric series occurs because of garbage collection inside Java virtual machine, where collection actions represent drops on the sawtooth pattern.
+ The amount of memory in bytes that's currently used by the application. For a normal load Java application, this metric series forms a *sawtooth* pattern, where the memory usage steadily increases and decreases in small increments and suddenly drops a lot, and then the pattern recurs. This metric series occurs because of garbage collection inside Java virtual machine, where collection actions represent drops on the sawtooth pattern.
This metric is important to help identify memory issues, such as:
Before you onboard your application, ensure that it meets the following criteria
* The configuration items have their expected values. For more information, see [Set up a Spring Cloud Config Server instance for your service](./how-to-config-server.md). For enterprise tier, see [Use Application Configuration Service](./how-to-enterprise-application-configuration-service.md). * The environment variables have their expected values. * The JVM parameters have their expected values.
-* We recommended that you disable or remove the embedded _Config Server_ and _Spring Service Registry_ services from the application package.
-* If any Azure resources are to be bound via _Service Binding_, make sure the target resources are up and running.
+* We recommended that you disable or remove the embedded *Config Server* and *Spring Service Registry* services from the application package.
+* If any Azure resources are to be bound via *Service Binding*, make sure the target resources are up and running.
## Configuration and management
When you deploy your application package by using the [Azure CLI](/cli/azure/get
If the polling is interrupted, you can still use the following command to fetch the deployment logs:
-`az spring-cloud app show-deploy-log -n <app-name>`
-
-Ensure that your application is packaged in the correct [executable JAR format](https://docs.spring.io/spring-boot/docs/current/reference/html/executable-jar.html). If it isn't packaged correctly, you will receive an error message similar to the following:
+```azurecli
+az spring-cloud app show-deploy-log --name <app-name>
+```
-> "Error: Invalid or corrupt jarfile /jar/38bc8ea1-a6bb-4736-8e93-e8f3b52c8714"
+Ensure that your application is packaged in the correct [executable JAR format](https://docs.spring.io/spring-boot/docs/current/reference/html/executable-jar.html). If it isn't packaged correctly, you will receive an error message similar to the following: `Error: Invalid or corrupt jarfile /jar/38bc8ea1-a6bb-4736-8e93-e8f3b52c8714`
### I can't deploy a source package
When you deploy your application package by using the [Azure CLI](/cli/azure/get
If the polling is interrupted, you can still use the following command to fetch the build and deployment logs:
-`az spring-cloud app show-deploy-log -n <app-name>`
+```azurecli
+az spring-cloud app show-deploy-log --name <app-name>
+```
However, note that one Azure Spring Cloud service instance can trigger only one build job for one source package at one time. For more information, see [Deploy an application](./quickstart.md) and [Set up a staging environment in Azure Spring Cloud](./how-to-staging-environment.md).
In most cases, this situation occurs when *Required Dependencies* and *Service D
Wait at least two minutes before a newly registered instance starts receiving traffic.
-If you're migrating an existing Spring Cloud-based solution to Azure, ensure that your ad-hoc _Service Registry_ and _Config Server_ instances are removed (or disabled) to avoid conflicting with the managed instances provided by Azure Spring Cloud.
+If you're migrating an existing Spring Cloud-based solution to Azure, ensure that your ad-hoc *Service Registry* and *Config Server* instances are removed (or disabled) to avoid conflicting with the managed instances provided by Azure Spring Cloud.
-You can also check the _Service Registry_ client logs in Azure Log Analytics. For more information, see [Analyze logs and metrics with diagnostics settings](diagnostic-services.md)
+You can also check the *Service Registry* client logs in Azure Log Analytics. For more information, see [Analyze logs and metrics with diagnostics settings](diagnostic-services.md)
To learn more about Azure Log Analytics, see [Get started with Log Analytics in Azure Monitor](../azure-monitor/logs/log-analytics-tutorial.md). Query the logs by using the [Kusto query language](/azure/kusto/query/).
Environment variables inform the Azure Spring Cloud framework, ensuring that Azu
1. Go to `https://<your application test endpoint>/actuator/health`.
- - A response similar to `{"status":"UP"}` indicates that the endpoint has been enabled.
- - If the response is negative, include the following dependency in your *POM.xml* file:
+ * A response similar to `{"status":"UP"}` indicates that the endpoint has been enabled.
+ * If the response is negative, include the following dependency in your *POM.xml* file:
- ```xml
- <dependency>
- <groupId>org.springframework.boot</groupId>
- <artifactId>spring-boot-starter-actuator</artifactId>
- </dependency>
- ```
+ ```xml
+ <dependency>
+ <groupId>org.springframework.boot</groupId>
+ <artifactId>spring-boot-starter-actuator</artifactId>
+ </dependency>
+ ```
1. With the Spring Boot Actuator endpoint enabled, go to the Azure portal and look for the configuration page of your application. Add an environment variable with the name `MANAGEMENT_ENDPOINTS_WEB_EXPOSURE_INCLUDE` and the value `*`.
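    As a hedged alternative to the portal step above, you could set the same environment variable with the Azure CLI; the resource group, app, and service instance names are placeholders.

    ```azurecli
    az spring-cloud app update \
        --resource-group <resource-group-name> \
        --name <app-name> \
        --service <service-instance-name> \
        --env MANAGEMENT_ENDPOINTS_WEB_EXPOSURE_INCLUDE='*'
    ```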
Environment variables inform the Azure Spring Cloud framework, ensuring that Azu
1. Go to `https://<your application test endpoint>/actuator/env` and inspect the response. It should look like this:
- ```json
- {
- "activeProfiles": [],
- "propertySources": {,
- "name": "server.ports",
- "properties": {
- "local.server.port": {
- "value": 1025
- }
- }
- }
- }
- ```
+ ```json
+ {
+ "activeProfiles": [],
+ "propertySources": {,
+ "name": "server.ports",
+ "properties": {
+ "local.server.port": {
+ "value": 1025
+ }
+ }
+ }
+ }
+ ```
Look for the child node named `systemEnvironment`. This node contains your application's environment variables.
Look for the child node named `systemEnvironment`. This node contains your appl
### I can't find metrics or logs for my application
-Go to **App management** to ensure that the application statuses are _Running_ and _UP_.
+Go to **App management** to ensure that the application statuses are *Running* and *UP*.
-Check to see whether _JMX_ is enabled in your application package. This feature can be enabled with the configuration property `spring.jmx.enabled=true`.
+Check to see whether *JMX* is enabled in your application package. This feature can be enabled with the configuration property `spring.jmx.enabled=true`.
Check to see whether the `spring-boot-actuator` dependency is enabled in your application package and that it successfully boots up.
Check to see whether the `spring-boot-actuator` dependency is enabled in your ap
If your application logs can be archived to a storage account but not sent to Azure Log Analytics, check to see whether you [set up your workspace correctly](../azure-monitor/logs/quick-create-workspace.md). If you're using a free tier of Azure Log Analytics, note that [the free tier does not provide a service-level agreement (SLA)](https://azure.microsoft.com/support/legal/sla/log-analytics/v1_3/).
-## Enterprise Tier
+## Enterprise tier
### Error 112039: Failed to purchase on Azure Marketplace Creating an Azure Spring Cloud Enterprise tier instance fails with error code "112039". Check the detailed error message for below for more information: -- **"Failed to purchase on Azure Marketplace because the Microsoft.SaaS RP is not registered on the Azure subscription."** : Azure Spring Cloud Enterprise tier purchase a SaaS offer from VMWare.
+* **"Failed to purchase on Azure Marketplace because the Microsoft.SaaS RP is not registered on the Azure subscription."** : Azure Spring Cloud Enterprise tier purchase a SaaS offer from VMware.
You must register the Microsoft.SaaS resource provider before creating Azure Spring Cloud Enterprise instance. See how to [register a resource provider](../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider). -- **"Failed to load catalog product vmware-inc.azure-spring-cloud-vmware-tanzu-2 in the Azure subscription market."**: Your Azure subscription's billing account address is not in the supported location.
+* **"Failed to load catalog product vmware-inc.azure-spring-cloud-vmware-tanzu-2 in the Azure subscription market."**: Your Azure subscription's billing account address is not in the supported location.
For more information, see the section [No plans are available for market '\<Location>'](#no-plans-are-available-for-market-location). -- **"Failed to purchase on Azure Marketplace due to signature verification on Marketplace legal agreement. Check the Azure subcription has agree terms vmware-inc.azure-spring-cloud-vmware-tanzu-2.tanzu-asc-ent-mtr"**: Your Azure subscription has not signed the terms for the offer and plan to be purchased.
+* **"Failed to purchase on Azure Marketplace due to signature verification on Marketplace legal agreement. Check the Azure subscription has agree terms vmware-inc.azure-spring-cloud-vmware-tanzu-2.tanzu-asc-ent-mtr"**: Your Azure subscription has not signed the terms for the offer and plan to be purchased.
Go to your Azure subscription and run the following Azure CLI command to agree to the terms:+ ```azurecli
- az term accept --publisher vmware-inc --product azure-spring-cloud-vmware-tanzu-2 --plan tanzu-asc-ent-mtr
+ az term accept \
+ --publisher vmware-inc \
+ --product azure-spring-cloud-vmware-tanzu-2 \
+ --plan tanzu-asc-ent-mtr
``` If that doesn't help, you can contact the support team with the following info.
- - `AZURE_TENANT_ID`: the Azure tenant ID that hosts the Azure subscription
- - `AZURE_SUBSCRIPTION_ID`: the Azure subscription ID used to create the Spring Cloud instance
- - `SPRING_CLOUD_NAME`: the failed instance name
- - `ERROR_MESSAGE`: the observed error message
+ * `AZURE_TENANT_ID`: the Azure tenant ID that hosts the Azure subscription
+ * `AZURE_SUBSCRIPTION_ID`: the Azure subscription ID used to create the Spring Cloud instance
+ * `SPRING_CLOUD_NAME`: the failed instance name
+ * `ERROR_MESSAGE`: the observed error message
### No plans are available for market '\<Location>'
Azure Spring Cloud Enterprise tier needs customers to pay for a license to Tanzu
You can view the billing account for your subscription if you have admin access. See [view billing accounts](../cost-management-billing/manage/view-all-accounts.md#check-the-type-of-your-account). -
-### I need VMware Spring Runtime Support (Enterprise Tier only)
+### I need VMware Spring Runtime Support (Enterprise tier only)
Enterprise tier has built-in VMware Spring Runtime Support so you can directly open support tickets to [VMware](https://aka.ms/ascevsrsupport) if you think your issue is in scope of VMware Spring Runtime Support. For more information, see [https://tanzu.vmware.com/spring-runtime](https://tanzu.vmware.com/spring-runtime). For any other issues, directly open support tickets with Microsoft.
spring-cloud Tutorial Managed Identities Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/tutorial-managed-identities-key-vault.md
Previously updated : 07/08/2020 Last updated : 04/15/2022
To create a Key Vault, use the command [az keyvault create](/cli/azure/keyvault#
> Each Key Vault must have a unique name. Replace *\<your-keyvault-name>* with the name of your Key Vault in the following examples. ```azurecli
-az keyvault create --name "<your-keyvault-name>" -g "myResourceGroup"
+az keyvault create \
+ --resource-group <your-resource-group-name> \
+ --name "<your-keyvault-name>"
``` Make a note of the returned `vaultUri`, which will be in the format `https://<your-keyvault-name>.vault.azure.net`. It will be used in the following step.
Make a note of the returned `vaultUri`, which will be in the format `https://<yo
You can now place a secret in your Key Vault with the command [az keyvault secret set](/cli/azure/keyvault/secret#az-keyvault-secret-set): ```azurecli
-az keyvault secret set --vault-name "<your-keyvault-name>" \
+az keyvault secret set \
+ --vault-name "<your-keyvault-name>" \
--name "connectionString" \ --value "jdbc:sqlserver://SERVER.database.windows.net:1433;database=DATABASE;" ```
After installing corresponding extension, create an Azure Spring Cloud instance
```azurecli az extension add --name spring-cloud
-az spring-cloud create -n "myspringcloud" -g "myResourceGroup"
+az spring-cloud create \
+ --resource-group <your-resource-group-name> \
+ --name <your-Azure-Spring-Cloud-instance-name>
```
-The following example creates an app named `springapp` with a system-assigned managed identity, as requested by the `--assign-identity` parameter.
+### [System-assigned managed identity](#tab/system-assigned-managed-identity)
+
+The following example creates an app named `springapp` with a system-assigned managed identity, as requested by the `--system-assigned` parameter.
```azurecli
-az spring-cloud app create -n "springapp" -s "myspringcloud" -g "myResourceGroup" --assign-endpoint true --assign-identity
+az spring-cloud app create \
+ --resource-group <your-resource-group-name> \
+ --name "springapp" \
+ --service <your-Azure-Spring-Cloud-instance-name> \
+ --assign-endpoint true \
+ --system-assigned
export SERVICE_IDENTITY=$(az spring-cloud app show --name "springapp" -s "myspringcloud" -g "myResourceGroup" | jq -r '.identity.principalId') ```
-Make a note of the returned `url`, which will be in the format `https://<your-app-name>.azuremicroservices.io`. It will be used in the following step.
+### [User-assigned managed identity](#tab/user-assigned-managed-identity)
+
+First, create a user-assigned managed identity in advance, and capture its principal ID, resource ID, and client ID in the following environment variables:
++
+```azurecli
+export SERVICE_IDENTITY={principal ID of user-assigned managed identity}
+export USER_IDENTITY_RESOURCE_ID={resource ID of user-assigned managed identity}
+export USER_IDENTITY_CLIENT_ID={client ID of user-assigned managed identity}
+```
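If you don't have these values yet, the following is one way to create the identity and capture them; the resource group and identity names are placeholders.

```azurecli
az identity create \
    --resource-group <your-resource-group-name> \
    --name <your-identity-name>

export SERVICE_IDENTITY=$(az identity show --resource-group <your-resource-group-name> --name <your-identity-name> --query principalId --output tsv)
export USER_IDENTITY_RESOURCE_ID=$(az identity show --resource-group <your-resource-group-name> --name <your-identity-name> --query id --output tsv)
export USER_IDENTITY_CLIENT_ID=$(az identity show --resource-group <your-resource-group-name> --name <your-identity-name> --query clientId --output tsv)
```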
+
+The following example creates an app named `springapp` with a user-assigned managed identity, as requested by the `--user-assigned` parameter.
+
+```azurecli
+az spring-cloud app create \
+ --resource-group <your-resource-group-name> \
+ --name "springapp" \
+ --service <your-Azure-Spring-Cloud-instance-name> \
+ --assign-endpoint true \
+ --user-assigned $USER_IDENTITY_RESOURCE_ID
+az spring-cloud app show \
+ --resource-group <your-resource-group-name> \
+ --name "springapp" \
+ --service <your-Azure-Spring-Cloud-instance-name>
+```
+++
+Make a note of the returned URL, which will be in the format `https://<your-app-name>.azuremicroservices.io`. This URL will be used in the following step.
## Grant your app access to Key Vault
-Use `az keyvault set-policy` to grant proper access in Key Vault for your app.
+Use the following command to grant proper access in Key Vault for your app:
```azurecli
-az keyvault set-policy --name "<your-keyvault-name>" --object-id ${SERVICE_IDENTITY} --secret-permissions set get list
+az keyvault set-policy \
+ --name "<your-keyvault-name>" \
+ --object-id ${SERVICE_IDENTITY} \
+ --secret-permissions set get list
``` > [!NOTE]
-> Use `az keyvault delete-policy --name "<your-keyvault-name>" --object-id ${SERVICE_IDENTITY}` to remove the access for your app after system-assigned managed identity is disabled.
+> If you're using a system-assigned managed identity, use `az keyvault delete-policy --name "<your-keyvault-name>" --object-id ${SERVICE_IDENTITY}` to remove the access for your app after the system-assigned managed identity is disabled.
## Build a sample Spring Boot app with Spring Boot starter

This app will have access to get secrets from Azure Key Vault. Use the Azure Key Vault Secrets Spring Boot starter. Azure Key Vault is added as an instance of Spring **PropertySource**. Secrets stored in Azure Key Vault can be conveniently accessed and used like any externalized configuration property, such as properties in files.
-1. Generate a sample project from start.spring.io with Azure Key Vault Spring Starter.
+1. Use the following command to generate a sample project from `start.spring.io` with Azure Key Vault Spring Starter.
+
+ ```azurecli
+ curl https://start.spring.io/starter.tgz -d dependencies=web,azure-keyvault-secrets -d baseDir=springapp -d bootVersion=2.3.1.RELEASE -d javaVersion=1.8 | tar -xzvf -
+ ```
+
+1. Specify your Key Vault in your app.
+
+ ```azurecli
+ cd springapp
+ vim src/main/resources/application.properties
+ ```
- ```azurecli
- curl https://start.spring.io/starter.tgz -d dependencies=web,azure-keyvault-secrets -d baseDir=springapp -d bootVersion=2.3.1.RELEASE -d javaVersion=1.8 | tar -xzvf -
- ```
+1. To use managed identity for Azure Spring Cloud apps, add properties with the following content to the *src/main/resources/application.properties* file.
-2. Specify your Key Vault in your app.
+### [System-assigned managed identity](#tab/system-assigned-managed-identity)
- ```azurecli
- cd springapp
- vim src/main/resources/application.properties
- ```
+```properties
+azure.keyvault.enabled=true
+azure.keyvault.uri=https://<your-keyvault-name>.vault.azure.net
+```
+
+### [User-assigned managed identity](#tab/user-assigned-managed-identity)
- To use managed identity for Azure Spring Cloud apps, add properties with the below content to src/main/resources/application.properties.
+```properties
+azure.keyvault.enabled=true
+azure.keyvault.uri=https://<your-keyvault-name>.vault.azure.net
+azure.keyvault.client-id={Client ID of user-assigned managed identity}
+```
++
- ```properties
- azure.keyvault.enabled=true
- azure.keyvault.uri=https://<your-keyvault-name>.vault.azure.net
- ```
+ > [!NOTE]
+ > You must add the key vault URL in the *application.properties* file as shown above. Otherwise, the key vault URL may not be captured during runtime.
- > [!Note]
- > Must add the key vault url in `application.properties` as above. Otherwise, the key vault url may not be captured during runtime.
+1. Add the following code example to *src/main/java/com/example/demo/DemoApplication.java*. This code retrieves the connection string from the key vault.
-3. Add the code example to src/main/java/com/example/demo/DemoApplication.java. It retrieves the connection string from the Key Vault.
+ ```Java
+ package com.example.demo;
- ```Java
- package com.example.demo;
+ import org.springframework.boot.SpringApplication;
+ import org.springframework.boot.autoconfigure.SpringBootApplication;
+ import org.springframework.beans.factory.annotation.Value;
+ import org.springframework.boot.CommandLineRunner;
+ import org.springframework.web.bind.annotation.GetMapping;
+ import org.springframework.web.bind.annotation.RestController;
- import org.springframework.boot.SpringApplication;
- import org.springframework.boot.autoconfigure.SpringBootApplication;
- import org.springframework.beans.factory.annotation.Value;
- import org.springframework.boot.CommandLineRunner;
- import org.springframework.web.bind.annotation.GetMapping;
- import org.springframework.web.bind.annotation.RestController;
+ @SpringBootApplication
+ @RestController
+ public class DemoApplication implements CommandLineRunner {
- @SpringBootApplication
- @RestController
- public class DemoApplication implements CommandLineRunner {
+ @Value("${connectionString}")
+ private String connectionString;
- @Value("${connectionString}")
- private String connectionString;
+ public static void main(String[] args) {
+ SpringApplication.run(DemoApplication.class, args);
+ }
- public static void main(String[] args) {
- SpringApplication.run(DemoApplication.class, args);
- }
+ @GetMapping("get")
+ public String get() {
+ return connectionString;
+ }
- @GetMapping("get")
- public String get() {
- return connectionString;
- }
+ public void run(String... varl) throws Exception {
+ System.out.println(String.format("\nConnection String stored in Azure Key Vault:\n%s\n",connectionString));
+ }
+ }
+ ```
- public void run(String... varl) throws Exception {
- System.out.println(String.format("\nConnection String stored in Azure Key Vault:\n%s\n",connectionString));
- }
- }
- ```
+ If you open the *pom.xml* file, you'll see the `azure-keyvault-secrets-spring-boot-starter` dependency. Add this dependency to your project in your *pom.xml* file.
- If you open the *pom.xml* file, you'll see the dependency `azure-keyvault-secrets-spring-boot-starter`. Add this dependency to your project in the *pom.xml* file.
+ ```xml
+ <dependency>
+ <groupId>com.microsoft.azure</groupId>
+ <artifactId>azure-keyvault-secrets-spring-boot-starter</artifactId>
+ </dependency>
+ ```
- ```xml
- <dependency>
- <groupId>com.microsoft.azure</groupId>
- <artifactId>azure-keyvault-secrets-spring-boot-starter</artifactId>
- </dependency>
- ```
+1. Use the following command to package your sample app.
-4. Package your sample app.
+ ```azurecli
+ mvn clean package
+ ```
- ```azurecli
- mvn clean package
- ```
+1. Now you can deploy your app to Azure with the following command:
-5. Now you can deploy your app to Azure with the Azure CLI command `az spring-cloud app deploy`.
+ ```azurecli
+ az spring-cloud app deploy \
+ --resource-group <your-resource-group-name> \
+ --name "springapp" \
+ --service <your-Azure-Spring-Cloud-instance-name> \
+ --jar-path target/demo-0.0.1-SNAPSHOT.jar
+ ```
- ```azurecli
- az spring-cloud app deploy -n "springapp" -s "myspringcloud" -g "myResourceGroup" --jar-path target/demo-0.0.1-SNAPSHOT.jar
- ```
+1. To test your app, access the public endpoint or test endpoint by using the following command:
-6. To test your app, access the public endpoint or test endpoint.
+ ```azurecli
+ curl https://myspringcloud-springapp.azuremicroservices.io/get
+ ```
- ```azurecli
- curl https://myspringcloud-springapp.azuremicroservices.io/get
- ```
+ You'll see the message `Successfully got the value of secret connectionString from Key Vault https://<your-keyvault-name>.vault.azure.net/: jdbc:sqlserver://SERVER.database.windows.net:1433;database=DATABASE;`.
- you'll see the message `Successfully got the value of secret connectionString from Key Vault https://<your-keyvault-name>.vault.azure.net/: jdbc:sqlserver://SERVER.database.windows.net:1433;database=DATABASE;`.
+## Build the sample Spring Boot app with Java SDK
-## Build sample Spring Boot app with Java SDK
+This sample can set and get secrets from Azure Key Vault. The [Azure Key Vault Secret client library for Java](/java/api/overview/azure/security-keyvault-secrets-readme) provides Azure Active Directory token authentication support across the Azure SDK. The library provides a set of `TokenCredential` implementations that you can use to construct Azure SDK clients to support Azure AD token authentication.
-This sample can set and get secrets from Azure Key Vault. The [Azure Key Vault Secret client library for Java](/java/api/overview/azure/security-keyvault-secrets-readme) provides Azure Active Directory token authentication support across the Azure SDK. It provides a set of `TokenCredential` implementations that can be used to construct Azure SDK clients to support Azure AD token authentication.
+The Azure Key Vault Secret client library enables you to securely store and control the access to tokens, passwords, API keys, and other secrets. The library offers operations to create, retrieve, update, delete, purge, back up, restore, and list the secrets and its versions.
-The Azure Key Vault Secret client library allows you to securely store and control the access to tokens, passwords, API keys, and other secrets. The library offers operations to create, retrieve, update, delete, purge, back up, restore, and list the secrets and its versions.
+To build the sample, use the following steps:
-1. Clone a sample project.
+1. Clone the sample project.
- ```azurecli
- git clone https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples.git
- ```
+ ```azurecli
+ git clone https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples.git
+ ```
-2. Specify your Key Vault in your app.
+1. Specify your key vault in your app.
- ```azurecli
- cd Azure-Spring-Cloud-Samples/managed-identity-keyvault
- vim src/main/resources/application.properties
- ```
+ ```azurecli
+ cd Azure-Spring-Cloud-Samples/managed-identity-keyvault
+ vim src/main/resources/application.properties
+ ```
- To use managed identity for Azure Spring Cloud apps, add properties with the following content to *src/main/resources/application.properties*.
+ To use managed identity for Azure Spring Cloud apps, add properties with the following content to *src/main/resources/application.properties*.
- ```properties
- azure.keyvault.enabled=true
- azure.keyvault.uri=https://<your-keyvault-name>.vault.azure.net
- ```
+ ```properties
+ azure.keyvault.enabled=true
+ azure.keyvault.uri=https://<your-keyvault-name>.vault.azure.net
+ ```
-3. Include [ManagedIdentityCredentialBuilder](/java/api/com.azure.identity.managedidentitycredentialbuilder) to get token from Azure Active Directory and [SecretClientBuilder](/java/api/com.azure.security.keyvault.secrets.secretclientbuilder) to set or get secrets from Key Vault in your code.
+1. Include [ManagedIdentityCredentialBuilder](/java/api/com.azure.identity.managedidentitycredentialbuilder) to get a token from Azure Active Directory and [SecretClientBuilder](/java/api/com.azure.security.keyvault.secrets.secretclientbuilder) to set or get secrets from Key Vault in your code.
- Get the example from [MainController.java](https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples/blob/master/managed-identity-keyvault/src/main/java/com/microsoft/azure/MainController.java#L28) of the cloned sample project.
+ Get the example from the [MainController.java](https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples/blob/master/managed-identity-keyvault/src/main/java/com/microsoft/azure/MainController.java#L28) file of the cloned sample project.
- Also include `azure-identity` and `azure-security-keyvault-secrets` as dependencies in your *pom.xml* file. Get the example from [pom.xml](https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples/blob/master/managed-identity-keyvault/pom.xml#L21) of the cloned sample project.
+ Include `azure-identity` and `azure-security-keyvault-secrets` as dependencies in your *pom.xml* file. Get the example from the [pom.xml](https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples/blob/master/managed-identity-keyvault/pom.xml#L21) file of the cloned sample project.
-4. Package your sample app.
+1. Use the following command to package your sample app.
- ```azurecli
- mvn clean package
- ```
+ ```azurecli
+ mvn clean package
+ ```
-5. Now deploy the app to Azure with the Azure CLI command `az spring-cloud app deploy`.
+1. Now deploy the app to Azure with the following command:
- ```azurecli
- az spring-cloud app deploy -n "springapp" -s "myspringcloud" -g "myResourceGroup" --jar-path target/asc-managed-identity-keyvault-sample-0.1.0.jar
- ```
+ ```azurecli
+ az spring-cloud app deploy \
+ --resource-group <your-resource-group-name> \
+ --name "springapp" \
+ --service <your-Azure-Spring-Cloud-instance-name> \
+ --jar-path target/asc-managed-identity-keyvault-sample-0.1.0.jar
+ ```
-6. Access the public endpoint or test endpoint to test your app.
+1. Access the public endpoint or test endpoint to test your app.
- First, get the value of your secret that you set in Azure Key Vault.
+ First, get the value of your secret that you set in Azure Key Vault.
- ```azurecli
- curl https://myspringcloud-springapp.azuremicroservices.io/secrets/connectionString
- ```
+ ```azurecli
+ curl https://myspringcloud-springapp.azuremicroservices.io/secrets/connectionString
+ ```
- you'll see the message `Successfully got the value of secret connectionString from Key Vault https://<your-keyvault-name>.vault.azure.net/: jdbc:sqlserver://SERVER.database.windows.net:1433;database=DATABASE;`.
+ You'll see the message `Successfully got the value of secret connectionString from Key Vault https://<your-keyvault-name>.vault.azure.net/: jdbc:sqlserver://SERVER.database.windows.net:1433;database=DATABASE;`.
- Now create a secret and then retrieve it using the Java SDK.
+ Now create a secret and then retrieve it using the Java SDK.
- ```azurecli
- curl -X PUT https://myspringcloud-springapp.azuremicroservices.io/secrets/test?value=success
+ ```azurecli
+ curl -X PUT https://myspringcloud-springapp.azuremicroservices.io/secrets/test?value=success
- curl https://myspringcloud-springapp.azuremicroservices.io/secrets/test
- ```
+ curl https://myspringcloud-springapp.azuremicroservices.io/secrets/test
+ ```
- you'll see the message `Successfully got the value of secret test from Key Vault https://<your-keyvault-name>.vault.azure.net: success`.
+ You'll see the message `Successfully got the value of secret test from Key Vault https://<your-keyvault-name>.vault.azure.net: success`.
## Next steps
storage Archive Rehydrate Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/archive-rehydrate-overview.md
Previously updated : 03/01/2022 Last updated : 04/15/2022
Microsoft recommends performing a copy operation in most scenarios where you nee
- A copy operation avoids the early deletion fee that is assessed if you change the tier of a blob from the Archive tier before the required 180-day period elapses. For more information, see [Archive access tier](access-tiers-overview.md#archive-access-tier). - If there is a lifecycle management policy in effect for the storage account, then rehydrating a blob with [Set Blob Tier](/rest/api/storageservices/set-blob-tier) can result in a scenario where the lifecycle policy moves the blob back to the Archive tier after rehydration because the last modified time is beyond the threshold set for the policy. A copy operation leaves the source blob in the Archive tier and creates a new blob with a different name and a new last modified time, so there is no risk that the rehydrated blob will be moved back to the Archive tier by the lifecycle policy.
-Copying a blob from the Archive tier can take hours to complete depending on the rehydration priority selected. Behind the scenes, a blob copy operation reads your archived source blob to create a new online blob in the selected destination tier. The new blob may be visible when you list the blobs in the parent container before the rehydration operation is complete, but its tier will be set to Archive, The data is not available until the read operation from the source blob in the Archive tier is complete and the blob's contents have been written to the new destination blob in an online tier. The new blob is an independent copy, so modifying or deleting it does not affect the source blob in the Archive tier.
+Copying a blob from the Archive tier can take hours to complete depending on the rehydration priority selected. Behind the scenes, a blob copy operation reads your archived source blob to create a new online blob in the selected destination tier. The new blob may be visible when you list the blobs in the parent container before the rehydration operation is complete, but its tier will be set to Archive. The data is not available until the read operation from the source blob in the Archive tier is complete and the blob's contents have been written to the new destination blob in an online tier. The new blob is an independent copy, so modifying or deleting it does not affect the source blob in the Archive tier.
To learn how to rehydrate a blob by copying it to an online tier, see [Rehydrate a blob with a copy operation](archive-rehydrate-to-online-tier.md#rehydrate-a-blob-with-a-copy-operation). > [!IMPORTANT] > Do not delete the source blob until the rehydration has completed successfully. If the source blob is deleted, then the destination blob may not finish copying. You can handle the event that is raised when the copy operation completes to know when it is safe to delete the source blob. For more information, see [Handle an event on blob rehydration](#handle-an-event-on-blob-rehydration).
-Copying an archived blob to an online destination tier is supported within the same storage account only. You cannot copy an archived blob to a destination blob that is also in the Archive tier.
+Rehydrating an archived blob by copying it to an online destination tier is supported within the same storage account only for service versions prior to 2021-02-12. Beginning with service version 2021-02-12, you can rehydrate an archived blob by copying it to a different storage account, as long as the destination account is in the same region as the source account. Rehydration across storage accounts enables you to segregate your production data from your backup data, by maintaining them in separate accounts. Isolating archived data in a separate account can also help to mitigate costs from unintentional rehydration.
+
+The target blob for the copy operation must be in an online tier (Hot or Cool). You cannot copy an archived blob to a destination blob that is also in the Archive tier.
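For readers working in code rather than with the article's PowerShell or Azure CLI examples, the following is a minimal sketch of a copy-based rehydration using the Azure Storage Blob client library for Java (`azure-storage-blob`). The account, container, and blob names are placeholders, and the cross-account source URL is assumed to carry a SAS with read access; this is an illustration of the pattern, not the article's own sample.

```java
import com.azure.core.util.polling.SyncPoller;
import com.azure.identity.DefaultAzureCredentialBuilder;
import com.azure.storage.blob.BlobClient;
import com.azure.storage.blob.BlobServiceClient;
import com.azure.storage.blob.BlobServiceClientBuilder;
import com.azure.storage.blob.models.AccessTier;
import com.azure.storage.blob.models.BlobCopyInfo;
import com.azure.storage.blob.models.RehydratePriority;
import com.azure.storage.blob.options.BlobBeginCopyOptions;

public class RehydrateByCopySketch {
    public static void main(String[] args) {
        // Client for the destination account (same region as the source account).
        BlobServiceClient service = new BlobServiceClientBuilder()
                .endpoint("https://<dest-account>.blob.core.windows.net")
                .credential(new DefaultAzureCredentialBuilder().build())
                .buildClient();

        BlobClient destination = service
                .getBlobContainerClient("<dest-container>")
                .getBlobClient("<dest-blob>");

        // Archived source blob; for a cross-account copy the URL needs a SAS with read access.
        String archivedSourceUrl =
                "https://<source-account>.blob.core.windows.net/<container>/<archived-blob>?<sas-token>";

        // Request an online destination tier and a rehydration priority for the copy.
        BlobBeginCopyOptions options = new BlobBeginCopyOptions(archivedSourceUrl)
                .setTier(AccessTier.HOT)
                .setRehydratePriority(RehydratePriority.STANDARD);

        SyncPoller<BlobCopyInfo, Void> poller = destination.beginCopy(options);

        // Rehydration can take hours, so in practice you would not block here;
        // poll the destination blob's properties later or handle the rehydration event instead.
        poller.poll();
    }
}
```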
The following table shows the behavior of a blob copy operation, depending on the tiers of the source and destination blob.

| | **Hot tier source** | **Cool tier source** | **Archive tier source** |
|--|--|--|--|
-| **Hot tier destination** | Supported | Supported | Supported within the same account. Requires blob rehydration. |
-| **Cool tier destination** | Supported | Supported | Supported within the same account. Requires blob rehydration. |
-| **Archive tier destination** | Supported | Supported | Unsupported |
+| **Hot tier destination** | Supported | Supported | Supported across accounts in the same region with version 2021-02-12 and later. Supported within the same storage account only for earlier versions. Requires blob rehydration. |
+| **Cool tier destination** | Supported | Supported | Supported across accounts in the same region with version 2021-02-12 and later. Supported within the same storage account only for earlier versions. Requires blob rehydration. |
+| **Archive tier destination** | Supported | Supported | Not supported |
## Change a blob's access tier to an online tier
storage Archive Rehydrate To Online Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/archive-rehydrate-to-online-tier.md
Previously updated : 03/01/2022 Last updated : 04/15/2022
For more information about rehydrating a blob, see [Blob rehydration from the Ar
To rehydrate a blob from the Archive tier by copying it to an online tier, use PowerShell, Azure CLI, or one of the Azure Storage client libraries. Keep in mind that when you copy an archived blob to an online tier, the source and destination blobs must have different names.
+Copying an archived blob to an online destination tier is supported within the same storage account for service versions prior to 2021-02-12. Beginning with service version 2021-02-12, you can copy an archived blob to a different storage account, as long as the destination account is in the same region as the source account.
+ After the copy operation is complete, the destination blob appears in the Archive tier. The destination blob is then rehydrated to the online tier that you specified in the copy operation. When the destination blob is fully rehydrated, it becomes available in the new online tier.
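If you want to check rehydration progress from code, one approach (a sketch assuming the `azure-storage-blob` Java client library; the names in angle brackets are placeholders) is to read the destination blob's properties and inspect its archive status:

```java
import com.azure.identity.DefaultAzureCredentialBuilder;
import com.azure.storage.blob.BlobClient;
import com.azure.storage.blob.BlobClientBuilder;
import com.azure.storage.blob.models.ArchiveStatus;
import com.azure.storage.blob.models.BlobProperties;

public class CheckRehydrationStatusSketch {
    public static void main(String[] args) {
        BlobClient destination = new BlobClientBuilder()
                .endpoint("https://<dest-account>.blob.core.windows.net")
                .containerName("<dest-container>")
                .blobName("<dest-blob>")
                .credential(new DefaultAzureCredentialBuilder().build())
                .buildClient();

        BlobProperties properties = destination.getProperties();
        ArchiveStatus status = properties.getArchiveStatus();

        if (status == null) {
            // No pending rehydration; the blob is available in the tier reported below.
            System.out.println("Rehydration complete. Current tier: " + properties.getAccessTier());
        } else {
            // For example, rehydrate-pending-to-hot or rehydrate-pending-to-cool.
            System.out.println("Rehydration still in progress: " + status);
        }
    }
}
```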
-The following examples show how to copy an archived blob with PowerShell or Azure CLI.
+### Rehydrate a blob to the same storage account
-### [Portal](#tab/azure-portal)
+The following examples show how to copy an archived blob to a blob in the Hot tier in the same storage account.
+
+#### [Portal](#tab/azure-portal)
N/A
-### [PowerShell](#tab/azure-powershell)
+#### [PowerShell](#tab/azure-powershell)
To copy an archived blob to an online tier with PowerShell, call the [Start-AzStorageBlobCopy](/powershell/module/az.storage/start-azstorageblobcopy) command and specify the target tier and the rehydration priority. Remember to replace placeholders in angle brackets with your own values:
Start-AzStorageBlobCopy -SrcContainer $srcContainerName `
-Context $ctx ```
-### [Azure CLI](#tab/azure-cli)
+#### [Azure CLI](#tab/azure-cli)
To copy an archived blob to an online tier with Azure CLI, call the [az storage blob copy start](/cli/azure/storage/blob/copy#az-storage-blob-copy-start) command and specify the target tier and the rehydration priority. Remember to replace placeholders in angle brackets with your own values:
az storage blob copy start \
+### Rehydrate a blob to a different storage account in the same region
+
+The following examples show how to copy an archived blob to a blob in the Hot tier in a different storage account.
+
+#### [Portal](#tab/azure-portal)
+
+N/A
+
+#### [PowerShell](#tab/azure-powershell)
+
+To copy an archived blob to a blob in an online tier in a different storage account with PowerShell, make sure you have installed the [Az.Storage](https://www.powershellgallery.com/packages/Az.Storage/) module, version 4.4.0 or higher. Next, call the [Start-AzStorageBlobCopy](/powershell/module/az.storage/start-azstorageblobcopy) command and specify the target online tier and the rehydration priority. You must specify a shared access signature (SAS) with read permissions for the archived source blob.
+
+The following example shows how to copy an archived blob to the Hot tier in a different storage account. Remember to replace placeholders in angle brackets with your own values:
+
+```powershell
+$rgName = "<resource-group>"
+$srcAccount = "<source-account>"
+$destAccount = "<dest-account>"
+$srcContainer = "<source-container>"
+$destContainer = "<dest-container>"
+$srcBlob = "<source-blob>"
+$destBlob = "<destination-blob>"
+
+# Get the destination account context
+$destCtx = New-AzStorageContext -StorageAccountName $destAccount -UseConnectedAccount
+
+# Get the source account context
+$srcCtx = New-AzStorageContext -StorageAccountName $srcAccount -UseConnectedAccount
+
+# Get the SAS URI for the source blob
+$srcBlobUri = New-AzStorageBlobSASToken -Container $srcContainer `
+ -Blob $srcBlob `
+ -Permission rwd `
+ -ExpiryTime (Get-Date).AddDays(1) `
+ -FullUri `
+ -Context $srcCtx
+
+# Start the cross-account copy operation
+Start-AzStorageBlobCopy -AbsoluteUri $srcBlobUri `
+ -DestContainer $destContainer `
+ -DestBlob $destBlob `
+ -DestContext $destCtx `
+ -StandardBlobTier Hot `
+ -RehydratePriority Standard
+```
+
+#### [Azure CLI](#tab/azure-cli)
+
+To copy an archived blob to a blob in an online tier in a different storage account with the Azure CLI, make sure you have Azure CLI version 2.35.0 or higher installed. Next, call the [az storage blob copy start](/cli/azure/storage/blob/copy#az-storage-blob-copy-start) command and specify the target online tier and the rehydration priority. You must specify a shared access signature (SAS) with read permissions for the archived source blob.
+
+The following example shows how to copy an archived blob to the Hot tier in a different storage account. Remember to replace placeholders in angle brackets with your own values:
+
+```azurecli
+# Specify the expiry interval
+end=`date -u -d "1 day" '+%Y-%m-%dT%H:%MZ'`
+
+# Get a SAS for the source blob
+srcBlobUri=$(az storage blob generate-sas \
+ --account-name <source-account> \
+ --container <source-container> \
+ --name <archived-source-blob> \
+ --permissions rwd \
+ --expiry $end \
+ --https-only \
+ --full-uri \
+ --as-user \
+ --auth-mode login | tr -d '"')
+
+# Copy to the destination blob in the Hot tier
+az storage blob copy start \
+ --source-uri $srcBlobUri \
+ --account-name <dest-account> \
+ --destination-container <dest-container> \
+ --destination-blob <dest-blob> \
+ --tier Hot \
+ --rehydrate-priority Standard \
+ --auth-mode login
+```
+++

## Rehydrate a blob by changing its tier

To rehydrate a blob by changing its tier from Archive to Hot or Cool, use the Azure portal, PowerShell, or Azure CLI.
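As a rough code-level equivalent of the Set Blob Tier operation (a sketch assuming the `azure-storage-blob` Java client library; the account, container, and blob names are placeholders, and this is not one of the article's own examples), you can change the tier directly on the blob client:

```java
import com.azure.identity.DefaultAzureCredentialBuilder;
import com.azure.storage.blob.BlobClient;
import com.azure.storage.blob.BlobClientBuilder;
import com.azure.storage.blob.models.AccessTier;

public class RehydrateByChangingTierSketch {
    public static void main(String[] args) {
        BlobClient archivedBlob = new BlobClientBuilder()
                .endpoint("https://<account>.blob.core.windows.net")
                .containerName("<container>")
                .blobName("<archived-blob>")
                .credential(new DefaultAzureCredentialBuilder().build())
                .buildClient();

        // Request rehydration in place; the blob remains in the Archive tier
        // until rehydration completes, which can take several hours.
        archivedBlob.setAccessTier(AccessTier.HOT);
    }
}
```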
virtual-desktop Create File Share https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/create-file-share.md
Last updated 12/08/2021 + # Create a profile container with Azure Files and AD DS
To assign Azure role-based access control (Azure RBAC) permissions:
1. Open the Azure portal.
-2. Open the storage account you created in [Set up a storage account](#set-up-a-storage-account).
+1. Open the storage account you created in [Set up a storage account](#set-up-a-storage-account).
+
+1. Select **File shares**, then select the name of the file share you plan to use.
-3. Select **File shares**, then select the name of the file share you plan to use.
+1. Select **Access control (IAM)**.
-4. Select **Access Control (IAM)**.
+1. Select **Add** > **Add role assignment** to open the **Add role assignment** page.
-5. Select **Add a role assignment**.
+1. Assign the following role. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
-6. In the **Add role assignment** tab, select **Storage File Data SMB Share Elevated Contributor** for the administrator account.
+ | Setting | Value |
+ | | |
+ | Role | Storage File Data SMB Share Elevated Contributor |
+ | Assign access to | User, group, or service principal |
+ | Members | \<Name of the administrator account> |
- To assign users permissions for their FSLogix profiles, follow these same instructions. However, when you get to step 5, select **Storage File Data SMB Share Contributor** instead.
+ To assign users permissions for their FSLogix profiles, select the **Storage File Data SMB Share Contributor** role instead.
-7. Select **Save**.
+ ![Screenshot showing Add role assignment page in Azure portal.](../../includes/role-based-access-control/media/add-role-assignment-page.png)
## Assign users permissions on the Azure file share
virtual-desktop Create Profile Container Adds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/create-profile-container-adds.md
Last updated 12/08/2021 + # Create a profile container with Azure Files and Azure AD DS
To assign users access permissions:
1. From the Azure portal, open the file share you created in [Set up an Azure Storage account](#set-up-an-azure-storage-account).
-2. Select **Access Control (IAM)**.
+1. Select **Access control (IAM)**.
-3. Select **Add a role assignment**.
+1. Select **Add** > **Add role assignment** to open the **Add role assignment** page.
-4. In the **Add role assignment** tab, select the appropriate built-in role from the role list. You'll need to at least select **Storage File Data SMB Share Contributor** for the account to get proper permissions.
+1. Assign the following role. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
-5. For **Assign access to**, select **Azure Active Directory user, group, or service principal**.
+ | Setting | Value |
+ | | |
+ | Role | Storage File Data SMB Share Contributor |
+ | Assign access to | User, group, or service principal |
+ | Members | \<Name or email address for the target Azure Active Directory identity> |
-6. Select a name or email address for the target Azure Active Directory identity.
-
-7. Select **Save**.
+ ![Screenshot showing Add role assignment page in Azure portal.](../../includes/role-based-access-control/media/add-role-assignment-page.png)
## Get the Storage Account access key
virtual-desktop Start Virtual Machine Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/start-virtual-machine-connect.md
Last updated 04/14/2022 + # Start Virtual Machine on Connect
To use the Azure portal to create a custom role for Start VM on Connect:
After that, you'll need to assign the role to the Azure Virtual Desktop service principal.
-To assign the custom role:
+The following steps describe how to assign the custom role. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
-1. In the **Access control (IAM) tab**, select **Add role assignment**.
+1. In the navigation menu of the subscription, select **Access control (IAM)**.
-2. Search for and select the role you just created.
+1. Select **Add** > **Add role assignment** to open the **Add role assignment** page.
-3. On the **Members** tab, enter and select **Windows Virtual Desktop** in the search bar.
+1. On the **Role** tab, search for and select the role you just created.
- >[!NOTE]
- >You might see both the Windows Virtual Desktop and Windows Virtual Desktop Azure Resource Manager Provider first party applications appear if you've deployed Azure Virtual Desktop (classic). Assign the role to both apps.
- >
- > [!div class="mx-imgBorder"]
- > ![A screenshot of the Access control (IAM) tab. In the search bar, both Azure Virtual Desktop and Azure Virtual Desktop (classic) are highlighted in red.](media/add-role-assignment.png)
+1. On the **Members** tab, search for and select **Windows Virtual Desktop**.
+
+ > [!NOTE]
+ > If you've deployed Azure Virtual Desktop (classic), both the Windows Virtual Desktop and Windows Virtual Desktop Azure Resource Manager Provider first party applications might appear. If so, assign the role to both apps.
+ >
+
+ ![Screenshot showing Add role assignment page in Azure portal.](../../includes/role-based-access-control/media/add-role-assignment-page.png)
+
+1. On the **Review + assign** tab, select **Review + assign** to assign the role.
### Create a custom role with a JSON file template
vpn-gateway Vpn Gateway Troubleshoot Site To Site Cannot Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-troubleshoot-site-to-site-cannot-connect.md
If the Internet-facing IP address of the VPN device is included in the **Local n
`https://<YourVirtualNetworkGatewayIP>:8081/healthprobe`
-> [!NOTE]
-> For Active/Acive Gateways use the following to check the second public IP: https://<YourVirtualNetworkGatewayIP2>:8083/healthprobe
+ _For Active/Active gateways, use the following to check the second public IP:_ <br>
+ `https://<YourVirtualNetworkGatewayIP2>:8083/healthprobe`
2. Click through the certificate warning. 3. If you receive a response, the VPN gateway is considered healthy. If you don't receive a response, the gateway might not be healthy or an NSG on the gateway subnet is causing the problem. The following text is a sample response: