Updates from: 05/13/2021 03:07:40
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Location Condition https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/location-condition.md
Title: Location condition in Azure Active Directory Conditional Access
-description: Learn how to use the location condition to control access to your cloud apps based on a user's network location.
+description: Use the location condition to control access based on user location.
Previously updated : 11/24/2020 Last updated : 04/28/2021
# Using the location condition in a Conditional Access policy
-As explained in the [overview article](overview.md) Conditional Access policies are at their most basic an if-then statement combining signals, to make decisions, and enforce organization policies. One of those signals that can be incorporated into the decision-making process is network location.
+As explained in the [overview article](overview.md), Conditional Access policies are, at their most basic, if-then statements that combine signals to make decisions and enforce organization policies. One of those signals that can be incorporated into the decision-making process is location.
![Conceptual Conditional signal plus decision to get enforcement](./media/location-condition/conditional-access-signal-decision-enforcement.png)
-Organizations can use this network location for common tasks like:
+Organizations can use this location for common tasks like:
- Requiring multi-factor authentication for users accessing a service when they are off the corporate network.
- Blocking access for users accessing a service from specific countries or regions.
-The network location is determined by the public IP address a client provides to Azure Active Directory. Conditional Access policies by default apply to all IPv4 and IPv6 addresses.
+The location is determined by the public IP address a client provides to Azure Active Directory or by GPS coordinates provided by the Microsoft Authenticator app. Conditional Access policies by default apply to all IPv4 and IPv6 addresses.
## Named locations
-Locations are designated in the Azure portal under **Azure Active Directory** > **Security** > **Conditional Access** > **Named locations**. These named network locations may include locations like an organization's headquarters network ranges, VPN network ranges, or ranges that you wish to block. Named locations can be defined by IPv4/IPv6 address ranges or by countries/regions.
+Locations are designated in the Azure portal under **Azure Active Directory** > **Security** > **Conditional Access** > **Named locations**. These named network locations may include locations like an organization's headquarters network ranges, VPN network ranges, or ranges that you wish to block. Named locations can be defined by IPv4/IPv6 address ranges or by countries.
![Named locations in the Azure portal](./media/location-condition/new-named-location.png)

### IP address ranges
-To define a named location by IPv4/IPv6 address ranges, you will need to provide a **Name** and an IP range.
+To define a named location by IPv4/IPv6 address ranges, you will need to provide:
+
+- A **Name** for the location
+- One or more IP ranges
+- Optionally **Mark as trusted location**
+
+![New IP locations in the Azure portal](./media/location-condition/new-trusted-location.png)
Named locations defined by IPv4/IPv6 address ranges are subject to the following limitations:
+
- Configure up to 195 named locations
- Configure up to 2000 IP ranges per named location
- Both IPv4 and IPv6 ranges are supported
- Private IP ranges cannot be configured
- The number of IP addresses contained in a range is limited. Only CIDR masks greater than /8 are allowed when defining an IP range.
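
If you prefer to script this, the same kind of definition can be created through the Microsoft Graph namedLocations endpoint (in beta at the time of writing). The following is a minimal sketch using `az rest`, assuming the signed-in account has the Policy.ReadWrite.ConditionalAccess permission; the display name and CIDR range are illustrative:

```azurecli-interactive
# Create an IP-based named location and mark it as trusted (illustrative values)
az rest --method POST \
    --uri "https://graph.microsoft.com/beta/identity/conditionalAccess/namedLocations" \
    --headers "Content-Type=application/json" \
    --body '{
        "@odata.type": "#microsoft.graph.ipNamedLocation",
        "displayName": "Corporate HQ",
        "isTrusted": true,
        "ipRanges": [
            { "@odata.type": "#microsoft.graph.iPv4CidrRange", "cidrAddress": "203.0.113.0/24" }
        ]
    }'
```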
-### Trusted locations
+#### Trusted locations
+
+Administrators can designate locations defined by IP address ranges to be trusted named locations.
+
+Sign-ins from trusted named locations improve the accuracy of Azure AD Identity Protection's risk calculation, lowering a user's sign-in risk when they authenticate from a location marked as trusted. Additionally, trusted named locations can be targeted in Conditional Access policies. For example, you may [restrict multi-factor authentication registration to trusted locations](howto-conditional-access-policy-registration.md).
+
+### Countries
-Administrators can designate named locations defined by IP address ranges to be trusted named locations.
+Organizations can determine country location by IP address or GPS coordinates.
-![Trusted locations in the Azure portal](./media/location-condition/new-trusted-location.png)
+To define a named location by country, you will need to provide:
-Sign-ins from trusted named locations improve the accuracy of Azure AD Identity Protection's risk calculation, lowering a user's sign-in risk when they authenticate from a location marked as trusted. Additionally, trusted named locations can be targeted in Conditional Access policies. For example, you may require restrict multi-factor authentication registration to trusted named locations only.
+- A **Name** for the location
+- Whether to determine location by IP address or GPS coordinates
+- One or more countries
+- Optionally choose to **Include unknown countries/regions**
-### Countries and regions
+![Country as a location in the Azure portal](./media/location-condition/new-named-location-country-region.png)
-Some organizations may choose to restrict access to certain countries or regions using Conditional Access. In addition to defining named locations by IP ranges, admins can define named locations by country or regions. When a user signs in, Azure AD resolves the user's IPv4 address to a country or region, and the mapping is updated periodically. Organizations can use named locations defined by countries to block traffic from countries where they do not do business, such as North Korea.
+If you select **Determine location by IP address (IPv4 only)**, the system will collect the IP address of the device the user is signing into. When a user signs in, Azure AD resolves the user's IPv4 address to a country or region, and the mapping is updated periodically. Organizations can use named locations defined by countries to block traffic from countries where they do not do business.
> [!NOTE]
> Sign-ins from IPv6 addresses cannot be mapped to countries or regions, and are considered unknown areas. Only IPv4 addresses can be mapped to countries or regions.
-![Create a new country or region-based location in the Azure portal](./media/location-condition/new-named-location-country-region.png)
+If you select **Determine location by GPS coordinates (Preview)**, the user will need to have the Microsoft Authenticator app installed on their mobile device. Every hour, the system will contact the user's Microsoft Authenticator app to collect the GPS location of the user's mobile device.
-#### Include unknown areas
+The first time the user is required to share their location from the Microsoft Authenticator app, the user will receive a notification in the app. The user will need to open the app and grant location permissions.
-Some IP addresses are not mapped to a specific country or region, including all IPv6 addresses. To capture these IP locations, check the box **Include unknown areas** when defining a location. This option allows you to choose if these IP addresses should be included in the named location. Use this setting when the policy using the named location should apply to unknown locations.
+For the next 24 hours, if the user is still accessing the resource and has granted the app permission to run in the background, the location will be shared silently once per hour from the device, so the user will not need to keep taking out their mobile device. After 24 hours, the user will need to open the app and manually approve the notification.
+
+#### Include unknown countries/regions
+
+Some IP addresses are not mapped to a specific country or region, including all IPv6 addresses. To capture these IP locations, check the box **Include unknown countries/regions** when defining a geographic location. This option allows you to choose if these IP addresses should be included in the named location. Use this setting when the policy using the named location should apply to unknown locations.
### Configure MFA trusted IPs
You can also configure IP address ranges representing your organization's local
If you have Trusted IPs configured, they show up as **MFA Trusted IPs** in the list of locations for the location condition.
-### Skipping multi-factor authentication
+#### Skipping multi-factor authentication
On the multi-factor authentication service settings page, you can identify corporate intranet users by selecting **Skip multi-factor authentication for requests from federated users on my intranet**. This setting indicates that the inside corporate network claim, which is issued by AD FS, should be trusted and used to identify the user as being on the corporate network. For more information, see [Enable the Trusted IPs feature by using Conditional Access](../authentication/howto-mfa-mfasettings.md#enable-the-trusted-ips-feature-by-using-conditional-access).
If both steps fail, a user is considered to be no longer on a trusted IP.
## Location condition in policy
-When you configure the location condition, you have the option to distinguish between:
+When you configure the location condition, you can distinguish between:
- Any location
- All trusted locations
This option applies to:
### Selected locations
-With this option, you can select one or more named locations. For a policy with this setting to apply, a user needs to connect from any of the selected locations. When you click **Select** the named network selection control that shows the list of named networks opens. The list also shows if the network location has been marked as trusted. The named location called **MFA Trusted IPs** is used to include the IP settings that can be configured in the multi-factor authentication service setting page.
+With this option, you can select one or more named locations. For a policy with this setting to apply, a user needs to connect from any of the selected locations. When you select **Select**, the named network selection control that shows the list of named networks opens. The list also shows if the network location has been marked as trusted. The named location called **MFA Trusted IPs** is used to include the IP settings that can be configured in the multi-factor authentication service settings page.
## IPv6 traffic
-By default, Conditional Access policies will apply to all IPv6 traffic. You can exclude specific IPv6 address ranges from a Conditional Access policy if you don't want policies to be enforced for specific IPv6 ranges. For example, if you don't want to enforce a policy for users on your corporate network, and your corporate network is hosted on public IPv6 ranges.
+By default, Conditional Access policies will apply to all IPv6 traffic. You can exclude specific IPv6 address ranges from a Conditional Access policy if you don't want policies to be enforced for specific IPv6 ranges. For example, if you don't want to enforce a policy for users on your corporate network, and your corporate network is hosted on public IPv6 ranges.
+
+### Identifying IPv6 traffic in the Azure AD Sign-in activity reports
+
+You can discover IPv6 traffic in your tenant by going to the [Azure AD sign-in activity reports](../reports-monitoring/concept-sign-ins.md). After you have the activity report open, add the "IP address" column. This column will help you identify the IPv6 traffic.
+
+You can also find the client IP by clicking a row in the report, and then going to the "Location" tab in the sign-in activity details.
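
If you prefer the command line, one option is to query the sign-in log through Microsoft Graph and filter client-side for addresses containing a colon, which appears in IPv6 but not IPv4 addresses. A sketch, assuming the account has the AuditLog.Read.All permission:

```azurecli-interactive
# List recent sign-ins whose client IP address is IPv6 (IPv6 addresses contain ':')
az rest --method GET \
    --uri 'https://graph.microsoft.com/v1.0/auditLogs/signIns?$top=50' \
    --query "value[?ipAddress && contains(ipAddress, ':')].{user:userPrincipalName, ip:ipAddress}" \
    -o table
```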
### When will my tenant have IPv6 traffic?
Most of the IPv6 traffic that gets proxied to Azure AD comes from Microsoft Exch
- When a mail client is used to connect to Exchange Online with legacy authentication, Azure AD may receive an IPv6 address. The initial authentication request goes to Exchange and is then proxied to Azure AD.
- When Outlook Web Access (OWA) is used in the browser, it will periodically verify all Conditional Access policies continue to be satisfied. This check is used to catch cases where a user may have moved from an allowed IP address to a new location, like the coffee shop down the street. In this case, if an IPv6 address is used and if the IPv6 address is not in a configured range, the user may have their session interrupted and be directed back to Azure AD to reauthenticate.
-These are the most common reasons you may need to configure IPv6 ranges in your named locations. In addition, if you are using Azure VNets, you will have traffic coming from an IPv6 address. If you have VNet traffic blocked by a Conditional Access policy, check your Azure AD sign-in log. Once youΓÇÖve identified the traffic, you can get the IPv6 address being used and exclude it from your policy.
+In addition, if you are using Azure VNets, you will have traffic coming from an IPv6 address. If you have VNet traffic blocked by a Conditional Access policy, check your Azure AD sign-in log. Once you've identified the traffic, you can get the IPv6 address being used and exclude it from your policy.
> [!NOTE]
> If you want to specify an IP CIDR range for a single address, apply the /128 bit mask. If you see the IPv6 address 2607:fb90:b27a:6f69:f8d5:dea0:fb39:74a and want to exclude that single address as a range, you would use 2607:fb90:b27a:6f69:f8d5:dea0:fb39:74a/128.
-### Identifying IPv6 traffic in the Azure AD Sign-in activity reports
-
-You can discover IPv6 traffic in your tenant by going the [Azure AD sign-in activity reports](../reports-monitoring/concept-sign-ins.md). After you have the activity report open, add the "IP address" column. This column will give you to identify the IPv6 traffic.
-
-You can also find the client IP by clicking a row in the report, and then going to the "Location" tab in the sign-in activity details.
## What you should know

### When is a location evaluated?
By default, Azure AD issues a token on an hourly basis. After moving off the cor
### User IP address
-The IP address that is used in policy evaluation is the public IP address of the user. For devices on a private network, this IP address is not the client IP of the userΓÇÖs device on the intranet, it is the address used by the network to connect to the public internet.
+The IP address used in policy evaluation is the public IP address of the user. For devices on a private network, this IP address is not the client IP of the user's device on the intranet; it is the address used by the network to connect to the public internet.
### Bulk uploading and downloading of named locations
When a cloud proxy is in place, a policy that is used to require a hybrid Azure
### API support and PowerShell
-A preview version of the Graph API for named locations is available, for more information see the [namedLocation API](/graph/api/resources/namedlocation).
-
-> [!NOTE]
-> Named locations that you create by using PowerShell display only in Named locations (preview). You can't see named locations in the old view.
+A preview version of the Graph API for named locations is available. For more information, see the [namedLocation API](/graph/api/resources/namedlocation).
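
As a quick way to explore the API, you can call it with `az rest`; a sketch against the beta endpoint, which may change before general availability:

```azurecli-interactive
# List the named locations defined in the tenant
az rest --method GET \
    --uri "https://graph.microsoft.com/beta/identity/conditionalAccess/namedLocations"
```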
## Next steps

-- If you want to know how to configure a Conditional Access policy, see the article [Building a Conditional Access policy](concept-conditional-access-policies.md).
-- Looking for an example policy using the location condition? See the article, [Conditional Access: Block access by location](howto-conditional-access-policy-location.md)
+- To configure a Conditional Access policy that uses the location condition, see the article [Conditional Access: Block access by location](howto-conditional-access-policy-location.md).
active-directory Scenario Desktop Acquire Token https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/scenario-desktop-acquire-token.md
pca.getAuthCodeUrl(authCodeUrlParameters).then((response) => {
# [Python](#tab/python)
-MSAL Python doesn't provide an interactive acquire token method directly. Instead, it requires the application to send an authorization request in its implementation of the user interaction flow to obtain an authorization code. This code can then be passed to the `acquire_token_by_authorization_code` method to get the token.
+MSAL Python 1.7+ provides an interactive acquire token method.
```python
result = None
if accounts:
    result = app.acquire_token_silent(config["scope"], account=accounts[0])
if not result:
-    result = app.acquire_token_by_authorization_code(
-        request.args['code'],
+    result = app.acquire_token_interactive(  # It automatically provides PKCE protection
        scopes=config["scope"])
```
namespace CommonCacheMsalV3
## Next steps

Move on to the next article in this scenario,
-[Call a web API from the desktop app](scenario-desktop-call-api.md).
+[Call a web API from the desktop app](scenario-desktop-call-api.md).
active-directory Azureadjoin Plan https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/azureadjoin-plan.md
Remote desktop connection to an Azure AD joined devices requires the host machin
Starting with the Windows 10 2004 update, users can also use remote desktop from an Azure AD registered Windows 10 device to an Azure AD joined device.
+### RADIUS and Wi-Fi authentication
+
+Currently, Azure AD joined devices do not support RADIUS authentication for connecting to Wi-Fi access points, since RADIUS relies on the presence of an on-premises computer object. As an alternative, you can use certificates pushed via Intune or user credentials to authenticate to Wi-Fi.
## Understand your provisioning options

**Note**: Azure AD joined devices cannot be deployed using System Preparation Tool (Sysprep) or similar imaging tools
You can use this implementation to [require managed devices for cloud app access
> [Join your work device to your organization's network](../user-help/user-help-join-device-on-network.md)

<!--Image references-->
-[1]: ./media/azureadjoin-plan/12.png
+[1]: ./media/azureadjoin-plan/12.png
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/whats-new.md
These results will show contextual and relevant details about the event and acti
For more information, see [What is sign-in diagnostic in Azure AD?](../reports-monitoring/overview-sign-in-diagnostics.md)
+### Azure AD Connect cloud sync general availability refresh
+**Type:** Changed feature
+**Service category:** Azure AD Connect Cloud Sync
+**Product capability:** Directory
+
+Azure AD Connect cloud sync now has an updated agent (version 1.1.359). For more details on agent updates, including bug fixes, check out the [version history](../cloud-sync/reference-version-history.md). With the updated agent, cloud sync customers can use gMSA cmdlets to set and reset their gMSA permissions at a granular level. In addition, we have changed the limit for syncing members using group scope filtering from 1499 to 50,000 (50K) members.
+
+Check out the newly available [expression builder](../cloud-sync/how-to-expression-builder.md#deploy-the-expression) for cloud sync, which helps you build both simple and complex expressions when you transform attribute values from AD to Azure AD using attribute mapping.
## March 2021
Azure AD Application Proxy native support for header-based authentication is now
-### Azure AD Connect cloud sync general availability refresh
-**Type:** Changed feature
-**Service category:** Azure AD Connect Cloud Sync
-**Product capability:** Directory
-
-Azure AD connect cloud sync now has an updated agent (version# - 1.1.359). For more details on agent updates, including bug fixes, check out the [version history](../cloud-sync/reference-version-history.md). With the updated agent, cloud sync customers can use GMSA cmdlets to set and reset their gMSA permission at a granular level. In addition that, we have changed the limit of syncing members using group scope filtering from 1499 to 50,000 (50K) members.
-
-Check out the newly available [expression builder](../cloud-sync/how-to-expression-builder.md#deploy-the-expression) for cloud sync, which, helps you build complex expressions as well as simple expressions when you do transformations of attribute values from AD to Azure AD using attribute mapping.
---

### Two-way SMS for MFA Server is no longer supported

**Type:** Deprecated
Enhanced dynamic group service is now in Public Preview. New customers that crea
The new service also aims to complete member additions and removals resulting from attribute changes within a few minutes. Also, single processing failures won't block tenant processing. To learn more about creating dynamic groups, see our [documentation](../enterprise-users/groups-create-rule.md).
active-directory Pim Resource Roles Assign Roles https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/privileged-identity-management/pim-resource-roles-assign-roles.md
na Previously updated : 10/23/2020 Last updated : 05/11/2020
Azure Active Directory (Azure AD) Privileged Identity Management (PIM) can manag
> [!NOTE]
> Users or members of a group assigned to the Owner or User Access Administrator subscription roles, and Azure AD Global administrators that enable subscription management in Azure AD have Resource administrator permissions by default. These administrators can assign roles, configure role settings, and review access using Privileged Identity Management for Azure resources. A user can't manage Privileged Identity Management for Resources without Resource administrator permissions. View the list of [Azure built-in roles](../../role-based-access-control/built-in-roles.md).
+## Role assignment conditions
+
+You can use the Azure attribute-based access control (Azure ABAC) preview to place resource conditions on eligible role assignments using Privileged Identity Management (PIM). With PIM, your end users must activate an eligible role assignment to get permission to perform certain actions. Using Azure ABAC conditions in PIM enables you not only to limit a user's role permissions to a resource using fine-grained conditions, but also to use PIM to secure the role assignment with a time-bound setting, approval workflow, audit trail, and so on. For more information, see [Azure attribute-based access control public preview](../../role-based-access-control/conditions-overview.md).
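
For a sense of what such a condition looks like, here is a sketch of attaching one to a regular (non-PIM) role assignment with the Azure CLI; the container name and scope are illustrative, and the `--condition` parameters require a recent Azure CLI while the feature is in preview:

```azurecli-interactive
# Grant blob read access, but only to blobs in the 'logs' container (illustrative)
az role assignment create \
    --role "Storage Blob Data Reader" \
    --assignee "user@contoso.com" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>" \
    --condition "((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'})) OR (@Resource[Microsoft.Storage/storageAccounts/blobServices/containers:name] StringEquals 'logs'))" \
    --condition-version "2.0"
```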
## Assign a role

Follow these steps to make a user eligible for an Azure resource role.
Follow these steps to update or remove an existing role assignment.
![Update or remove role assignment](./media/pim-resource-roles-assign-roles/resources-update-remove.png)
+1. To add or update a condition to refine Azure resource access, select **Add** or **View/Edit** in the **Condition** column for the role assignment. Currently, the Storage Blob Data Owner, Storage Blob Data Reader, and Storage Blob Data Contributor roles are the only roles in Privileged Identity Management supported as part of the [Azure attribute-based access control public preview](../../role-based-access-control/conditions-overview.md).
+
+ ![Update or remove attributes for access control](./media/pim-resource-roles-assign-roles/resources-abac-update-remove.png)
+ 1. Select **Update** or **Remove** to update or remove the role assignment. For information about extending a role assignment, see [Extend or renew Azure resource roles in Privileged Identity Management](pim-resource-roles-renew-extend.md).
active-directory Tutorial Azure Monitor Stream Logs To Event Hub https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md
After data is displayed in the event hub, you can access and read the data in tw
* **Configure a supported SIEM tool**. To read data from the event hub, most tools require the event hub connection string and certain permissions to your Azure subscription. Third-party tools with Azure Monitor integration include, but are not limited to:
- * **ArcSight**: For more information about integrating Azure AD logs with Splunk, see [Integrate Azure Active Directory logs with ArcSight using Azure Monitor](howto-integrate-activity-logs-with-arcsight.md).
+ * **ArcSight**: For more information about integrating Azure AD logs with ArcSight, see [Integrate Azure Active Directory logs with ArcSight using Azure Monitor](howto-integrate-activity-logs-with-arcsight.md).
* **Splunk**: For more information about integrating Azure AD logs with Splunk, see [Integrate Azure AD logs with Splunk by using Azure Monitor](./howto-integrate-activity-logs-with-splunk.md).
active-directory Oracle Cloud Infrastructure Console Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/oracle-cloud-infrastructure-console-provisioning-tutorial.md
Previously updated : 01/16/2020 Last updated : 05/16/2021

# Tutorial: Configure Oracle Cloud Infrastructure Console for automatic user provisioning
+> [!NOTE]
+> Integrating with Oracle Cloud Infrastructure Console or Oracle IDCS with a custom / BYOA application is not supported. Using the gallery application as described in this tutorial is supported. The gallery application has been customized to work with the Oracle SCIM server.
This tutorial describes the steps you need to perform in both Oracle Cloud Infrastructure Console and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Oracle Cloud Infrastructure Console](https://www.oracle.com/cloud/free/?source=:ow:o:p:nav:0916BCButton&intcmp=:ow:o:p:nav:0916BCButton) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
aks Concepts Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/concepts-identity.md
There are two levels of access needed to fully operate an AKS cluster:
* Pull your `kubeconfig`.
* Access to the Kubernetes API. This access is controlled by either:
    * [Kubernetes RBAC](#kubernetes-rbac) (traditionally).
- * [Integrating Azure RBAC with AKS for Kubernetes authorization](#azure-rbac-for-kubernetes-authorization-preview).
+ * [Integrating Azure RBAC with AKS for Kubernetes authorization](#azure-rbac-for-kubernetes-authorization).
### Azure RBAC to authorize access to the AKS resource
Alternatively, you could give your user the general [Contributor](../role-based-
[Use Azure RBAC to define access to the Kubernetes configuration file in AKS](control-kubeconfig-access.md).
-### Azure RBAC for Kubernetes Authorization (Preview)
+### Azure RBAC for Kubernetes Authorization
With the Azure RBAC integration, AKS will use a Kubernetes Authorization webhook server so you can manage Azure AD-integrated Kubernetes cluster resource permissions and assignments using Azure role definitions and role assignments.
aks Manage Azure Rbac https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/manage-azure-rbac.md
description: Learn how to use Azure RBAC for Kubernetes Authorization with Azure Kubernetes Service (AKS). Previously updated : 09/21/2020 Last updated : 02/09/2021 #Customer intent: As a cluster operator or developer, I want to learn how to leverage Azure RBAC permissions to authorize actions within the AKS cluster.
-# Use Azure RBAC for Kubernetes Authorization (preview)
+# Use Azure RBAC for Kubernetes Authorization
Today you can already leverage [integrated authentication between Azure Active Directory (Azure AD) and AKS](managed-aad.md). When enabled, this integration allows customers to use Azure AD users, groups, or service principals as subjects in Kubernetes RBAC, see more [here](azure-ad-rbac.md). This feature frees you from having to separately manage user identities and credentials for Kubernetes. However, you still have to set up and manage Azure RBAC and Kubernetes RBAC separately. For more details on authentication and authorization with RBAC on AKS, see [here](concepts-identity.md).
This document covers a new approach that allows for the unified management and a
## Before you begin
-The ability to manage RBAC for Kubernetes resources from Azure gives you the choice to manage RBAC for the cluster resources either using Azure or native Kubernetes mechanisms. When enabled, Azure AD principals will be validated exclusively by Azure RBAC while regular Kubernetes users and service accounts are exclusively validated by Kubernetes RBAC. For more details on authentication and authorization with RBAC on AKS, see [here](concepts-identity.md#azure-rbac-for-kubernetes-authorization-preview).
+The ability to manage RBAC for Kubernetes resources from Azure gives you the choice to manage RBAC for the cluster resources either using Azure or native Kubernetes mechanisms. When enabled, Azure AD principals will be validated exclusively by Azure RBAC while regular Kubernetes users and service accounts are exclusively validated by Kubernetes RBAC. For more details on authentication and authorization with RBAC on AKS, see [here](concepts-identity.md#azure-rbac-for-kubernetes-authorization).
+### Prerequisites
-### Prerequisites
-- Ensure you have the Azure CLI version 2.9.0 or later
-- Ensure you have the `EnableAzureRBACPreview` feature flag enabled.
-- Ensure you have the `aks-preview` [CLI extension][az-extension-add] v0.4.55 or higher installed
+- Ensure you have the Azure CLI version 2.24.0 or later
- Ensure you have installed [kubectl v1.18.3+][az-aks-install-cli].
-#### Register `EnableAzureRBACPreview` preview feature
-
-To create an AKS cluster that uses Azure RBAC for Kubernetes Authorization, you must enable the `EnableAzureRBACPreview` feature flag on your subscription.
-
-Register the `EnableAzureRBACPreview` feature flag using the [az feature register][az-feature-register] command as shown in the following example:
-
-```azurecli-interactive
-az feature register --namespace "Microsoft.ContainerService" --name "EnableAzureRBACPreview"
-```
-
- You can check on the registration status using the [az feature list][az-feature-list] command:
-
-```azurecli-interactive
-az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/EnableAzureRBACPreview')].{Name:name,State:properties.state}"
-```
-
-When ready, refresh the registration of the *Microsoft.ContainerService* resource provider using the [az provider register][az-provider-register] command:
-
-```azurecli-interactive
-az provider register --namespace Microsoft.ContainerService
-```
-
-#### Install aks-preview CLI extension
-
-To create an AKS cluster that uses Azure RBAC, you need the *aks-preview* CLI extension version 0.4.55 or higher. Install the *aks-preview* Azure CLI extension using the [az extension add][az-extension-add] command, or install any available updates using the [az extension update][az-extension-update] command:
-
-```azurecli-interactive
-# Install the aks-preview extension
-az extension add --name aks-preview
-
-# Update the extension to make sure you have the latest version installed
-az extension update --name aks-preview
-```
### Limitations

- Requires [Managed Azure AD integration](managed-aad.md).
-- You can't integrate Azure RBAC for Kubernetes authorization into existing clusters during preview, but you will be able to at General Availability (GA).
- Use [kubectl v1.18.3+][az-aks-install-cli].
- If you have CRDs and are making custom role definitions, the only way to cover CRDs today is to provide `Microsoft.ContainerService/managedClusters/*/read`. AKS is working on providing more granular permissions for CRDs. For the remaining objects you can use the specific API Groups, for example: `Microsoft.ContainerService/apps/deployments/read`.
- New role assignments can take up to 5min to propagate and be updated by the authorization server.
A successful creation of a cluster with Azure AD integration and Azure RBAC for
} ```
+## Integrate Azure RBAC into an existing cluster
+
+> [!NOTE]
+> To use Azure RBAC for Kubernetes Authorization, Azure Active Directory integration must be enabled on your cluster. For more, see [Azure Active Directory integration][managed-aad].
+
+To add Azure RBAC for Kubernetes Authorization into an existing AKS cluster, use the [az aks update][az-aks-update] command with the `--enable-azure-rbac` flag.
+
+```azurecli-interactive
+az aks update -g myResourceGroup -n myAKSCluster --enable-azure-rbac
+```
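
To confirm the setting took effect, you can inspect the cluster's Azure AD profile; a sketch, assuming the property surfaces as `enableAzureRbac` in the CLI output:

```azurecli-interactive
# Check whether Azure RBAC for Kubernetes Authorization is enabled
az aks show -g myResourceGroup -n myAKSCluster --query "aadProfile.enableAzureRbac" -o tsv
```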
## Create role assignments for users to access cluster

AKS provides the following four built-in roles:
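
As a sketch of assigning one of these built-in roles at a namespace scope (the role name and scope format below follow the documented pattern; `$AKS_ID` is assumed to hold the cluster's resource ID):

```azurecli-interactive
# Grant read-only access scoped to the 'default' namespace (illustrative)
az role assignment create \
    --role "Azure Kubernetes Service RBAC Reader" \
    --assignee <AAD-ENTITY-ID> \
    --scope $AKS_ID/namespaces/default
```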
Replace `<YOUR SUBSCRIPTION ID>` by the ID from your subscription, which you can
az account show --query id -o tsv
```

Now we can create the role definition by running the below command from the folder where you saved `deploy-view.json`:

```azurecli-interactive
az role assignment create --role "AKS Deployment Viewer" --assignee <AAD-ENTITY-
> ```azurecli-interactive
> az aks install-cli
> ```
-> You might need to run it with `sudo` privileges.
+>
+> You might need to run it with `sudo` privileges.
-Now that you have assigned your desired role and permissions. You can start calling the Kubernetes API, for example, from `kubectl`.
+Now that you have assigned your desired role and permissions, you can start calling the Kubernetes API, for example, from `kubectl`.
For this purpose, let's first get the cluster's kubeconfig using the below command:
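
A sketch of that credential fetch, reusing the example names from earlier in this article:

```azurecli-interactive
# Merge the cluster's user credentials into your kubeconfig
az aks get-credentials -g myResourceGroup -n myAKSCluster
```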
aks-nodepool1-93451573-vmss000001 Ready agent 3h6m v1.15.11
aks-nodepool1-93451573-vmss000002 Ready agent 3h6m v1.15.11
```

## Clean up

### Clean Role assignment
aks-nodepool1-93451573-vmss000002 Ready agent 3h6m v1.15.11
```azurecli-interactive
az role assignment list --scope $AKS_ID --query [].id -o tsv
```

Copy the ID or IDs from all the assignments you did and then run the following command:

```azurecli-interactive
az group delete -n MyResourceGroup
[az-feature-register]: /cli/azure/feature#az_feature_register
[az-aks-install-cli]: /cli/azure/aks#az_aks_install_cli
[az-provider-register]: /cli/azure/provider#az_provider_register
+[az-aks-update]: /cli/azure/aks#az_aks_update
+[managed-aad]: ./managed-aad.md
aks Use Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/use-managed-identity.md
Title: Use managed identities in Azure Kubernetes Service
description: Learn how to use managed identities in Azure Kubernetes Service (AKS) Previously updated : 12/16/2020 Last updated : 05/12/2021

# Use managed identities in Azure Kubernetes Service
Currently, an Azure Kubernetes Service (AKS) cluster (specifically, the Kubernet
You must have the following resource installed:

-- The Azure CLI, version 2.15.1 or later
+- The Azure CLI, version 2.23.0 or later
## Limitations
Finally, get credentials to access the cluster:
az aks get-credentials --resource-group myResourceGroup --name myManagedCluster
```
-## Update an AKS cluster to managed identities (Preview)
+## Update an AKS cluster to managed identities
You can now update an AKS cluster currently working with service principals to work with managed identities by using the following CLI commands.
-First, Register the Feature Flag for system-assigned identity:
-
-```azurecli-interactive
-az feature register --namespace Microsoft.ContainerService -n MigrateToMSIClusterPreview
-```
-
-Update the system-assigned identity:
```azurecli-interactive
az aks update -g <RGName> -n <AKSName> --enable-managed-identity
```
-Register the Feature Flag for user-assigned identity:
-
-```azurecli-interactive
-az feature register --namespace Microsoft.ContainerService -n UserAssignedIdentityPreview
-```
-
-Update the user-assigned identity:
-
-```azurecli-interactive
-az aks update -g <RGName> -n <AKSName> --enable-managed-identity --assign-identity <UserAssignedIdentityResourceID>
-```
> [!NOTE]
> Once the system-assigned or user-assigned identities have been updated to managed identity, perform an `az aks nodepool upgrade --node-image-only` on your nodes to complete the update to managed identity.
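
After the update completes, you can check which identity the cluster is now using; a quick sketch (the `identity` property should report a type of `SystemAssigned` or `UserAssigned` once the migration succeeds):

```azurecli-interactive
# Inspect the identity attached to the cluster
az aks show -g <RGName> -n <AKSName> --query "identity" -o json
```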
app-service Template Deploy Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/scripts/template-deploy-private-endpoint.md
This template creates a private endpoint for an Azure web app.
### Review the template

### Deploy the template
application-gateway Quick Create Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/quick-create-template.md
You can also complete this quickstart using the [Azure portal](quick-create-port
If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
-[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fag-docs-qs%2Fazuredeploy.json)
+[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fdemos%2Fag-docs-qs%2Fazuredeploy.json)
## Prerequisites
For the sake of simplicity, this template creates a simple setup with a public f
The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/ag-docs-qs/).

Multiple Azure resources are defined in the template:
Deploy the ARM template to Azure:
1. Select **Deploy to Azure** to sign in to Azure and open the template. The template creates an application gateway, the network infrastructure, and two virtual machines in the backend pool running IIS.
- [![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fag-docs-qs%2Fazuredeploy.json)
+ [![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fdemos%2Fag-docs-qs%2Fazuredeploy.json)
2. Select or create your resource group, type the virtual machine administrator user name and password.
3. Select **Review + Create** and then select **Create**.
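
If you'd rather deploy the same template from the command line, here is a sketch using the Azure CLI; the parameter names are assumptions based on the values the portal prompts for:

```azurecli-interactive
# Deploy the quickstart template into an existing resource group (parameter names assumed)
az deployment group create \
    --resource-group myResourceGroup \
    --template-uri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/demos/ag-docs-qs/azuredeploy.json" \
    --parameters adminUsername=azureuser adminPassword=<password>
```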
azure-arc Plan Evaluate On Azure Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/plan-evaluate-on-azure-virtual-machine.md
While you cannot install Azure Arc enabled servers on an Azure VM for production
## Prerequisites

* Your account is assigned to the [Virtual Machine Contributor](../../role-based-access-control/built-in-roles.md#virtual-machine-contributor) role.
-* The Azure virtual machine is running an [operating system supported by Arc enabled servers](agent-overview.md#supported-operating-systems). If you don't have an Azure VM, you can deploy a [simple Windows VM](https://portal.azure.com/#create/Microsoft.Template/uri/https%3a%2f%2fraw.githubusercontent.com%2fAzure%2fazure-quickstart-templates%2fmaster%2f101-vm-simple-windows%2fazuredeploy.json) or a [simple Ubuntu Linux 18.04 LTS VM](https://portal.azure.com/#create/Microsoft.Template/uri/https%3a%2f%2fraw.githubusercontent.com%2fAzure%2fazure-quickstart-templates%2fmaster%2f101-vm-simple-linux%2fazuredeploy.json).
-* Your Azure VM can communicate outbound to download the Azure Connected Machine agent package for Windows from the [Microsoft Download Center](https://aka.ms/AzureConnectedMachineAgent), and Linux from the Microsoft [package repository](https://packages.microsoft.com/). If outbound connectivity to the Internet is restricted following your IT security policy, you will need to download the agent package manually and copy it to a folder on the Azure VM.
+* The Azure virtual machine is running an [operating system supported by Arc enabled servers](agent-overview.md#supported-operating-systems). If you don't have an Azure VM, you can deploy a [simple Windows VM](https://portal.azure.com/#create/Microsoft.Template/uri/https%3a%2f%2fraw.githubusercontent.com%2fAzure%2fazure-quickstart-templates%2fmaster%2fquickstarts%2fmicrosoft.compute%2fvm-simple-windows%2fazuredeploy.json) or a [simple Ubuntu Linux 18.04 LTS VM](https://portal.azure.com/#create/Microsoft.Template/uri/https%3a%2f%2fraw.githubusercontent.com%2fAzure%2fazure-quickstart-templates%2fmaster%2fquickstarts%2fmicrosoft.compute%2fvm-simple-linux%2fazuredeploy.json).
+* Your Azure VM can communicate outbound to download the Azure Connected Machine agent package for Windows from the [Microsoft Download Center](https://aka.ms/AzureConnectedMachineAgent), and Linux from the Microsoft [package repository](https://packages.microsoft.com/). If outbound connectivity to the Internet is restricted following your IT security policy, you will need to download the agent package manually and copy it to a folder on the Azure VM.
* An account with elevated (that is, an administrator or as root) privileges on the VM, and RDP or SSH access to the VM. * To register and manage the Azure VM with Arc enabled servers, you are a member of the [Azure Connected Machine Resource Administrator](../../role-based-access-control/built-in-roles.md#azure-connected-machine-resource-administrator) or [Contributor](../../role-based-access-control/built-in-roles.md#contributor) role in the resource group.
To start managing your Azure VM as an Arc enabled server, you need to make the f
3. Create a security rule to deny access to the Azure Instance Metadata Service (IMDS). IMDS is a REST API that applications can call to get information about the VM's representation in Azure, including its resource ID and location. IMDS also provides access to any managed identities assigned to the machine. Azure Arc enabled servers provides its own IMDS implementation and returns information about the Azure Arc representation of the VM. To avoid situations where both IMDS endpoints are available and apps have to choose between the two, you block access to the Azure VM IMDS so that the Azure Arc enabled server IMDS implementation is the only one available.
-After you've made these changes, your Azure VM behaves like any machine or server outside of Azure and is at the necessary starting point to install and evaluate Azure Arc enabled servers.
+After you've made these changes, your Azure VM behaves like any machine or server outside of Azure and is at the necessary starting point to install and evaluate Azure Arc enabled servers.
-When Arc enabled servers is configured on the VM, you see two representations of it in Azure. One is the Azure VM resource, with a `Microsoft.Compute/virtualMachines` resource type, and the other is an Azure Arc resource, with a `Microsoft.HybridCompute/machines` resource type. As a result of preventing management of the guest operating system from the shared physical host server, the best way to think about the two resources is the Azure VM resource is the virtual hardware for your VM, and let's you control the power state and view information about its SKU, network, and storage configurations. The Azure Arc resource manages the guest operating system in that VM, and can be used to install extensions, view compliance data for Azure Policy, and complete any other supported task by Arc enabled servers.
+When Arc enabled servers is configured on the VM, you see two representations of it in Azure. One is the Azure VM resource, with a `Microsoft.Compute/virtualMachines` resource type, and the other is an Azure Arc resource, with a `Microsoft.HybridCompute/machines` resource type. As a result of preventing management of the guest operating system from the shared physical host server, the best way to think about the two resources is that the Azure VM resource is the virtual hardware for your VM, and lets you control the power state and view information about its SKU, network, and storage configurations. The Azure Arc resource manages the guest operating system in that VM, and can be used to install extensions, view compliance data for Azure Policy, and complete any other task supported by Arc enabled servers.
## Reconfigure Azure VM
When Arc enabled servers is configured on the VM, you see two representations of
While still connected to the server, run the following commands to block access to the Azure IMDS endpoint. For Windows, run the following PowerShell command:

```powershell
- New-NetFirewallRule -Name BlockAzureIMDS -DisplayName "Block access to Azure IMDS" -Enabled True -Profile Any -Direction Outbound -Action Block -RemoteAddress 169.254.169.254
+ New-NetFirewallRule -Name BlockAzureIMDS -DisplayName "Block access to Azure IMDS" -Enabled True -Profile Any -Direction Outbound -Action Block -RemoteAddress 169.254.169.254
```
- For Linux, consult your distribution's documentation for the best way to block outbound access to `169.254.169.254/32` over TCP port 80. Normally you'll block outbound access with the built-in firewall, but you can also temporarily block it with **iptables** or **nftables**.
+ For Linux, consult your distribution's documentation for the best way to block outbound access to `169.254.169.254/32` over TCP port 80. Normally you'll block outbound access with the built-in firewall, but you can also temporarily block it with **iptables** or **nftables**.
If your Azure VM is running Ubuntu, perform the following steps to configure its uncomplicated firewall (UFW):
When Arc enabled servers is configured on the VM, you see two representations of
To configure a generic iptables configuration, run the following command:

```bash
- iptables -A OUTPUT -d 169.254.169.254 -j DROP
+ iptables -A OUTPUT -d 169.254.169.254 -j DROP
```

> [!NOTE]
When Arc enabled servers is configured on the VM, you see two representations of
The VM is now ready for you to begin evaluating Arc enabled servers. To install and configure the Arc enabled servers agent, see [Connect hybrid machines using the Azure portal](onboard-portal.md) and follow the steps to generate an installation script and install using the scripted method.

> [!NOTE]
- > If outbound connectivity to the internet is restricted from your Azure VM, you'll need to download the agent package manually. Copy the agent package to the Azure VM, and modify the Arc enabled servers installation script to reference the source folder.
+ > If outbound connectivity to the internet is restricted from your Azure VM, you'll need to download the agent package manually. Copy the agent package to the Azure VM, and modify the Arc enabled servers installation script to reference the source folder.
If you missed one of the steps, the installation script detects it is running on an Azure VM and terminates with an error. Verify you've completed steps 1-3, and then rerun the script.
azure-cache-for-redis Cache Administration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-administration.md
The **Schedule updates** blade allows you to designate a maintenance window for
> [!NOTE]
> The maintenance window applies to Redis server updates and updates to the Operating System of the VMs hosting the cache. The maintenance window does not apply to Host OS updates to the Hosts hosting the cache VMs or other Azure Networking components. In rare cases, where caches are hosted on older models (you can tell if your cache is on an older model if the DNS name of the cache resolves to a suffix of "cloudapp.net", "chinacloudapp.cn", "usgovcloudapi.net" or "cloudapi.de"), the maintenance window won't apply to Guest OS updates either.
+>
+> Currently, no option is available to configure a reboot or scheduled updates for an Enterprise tier cache.
>
azure-cache-for-redis Cache Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-configure.md
The **maxfragmentationmemory-reserved** setting configures the amount of memory,
One thing to consider when choosing a new memory reservation value (**maxmemory-reserved** or **maxfragmentationmemory-reserved**) is how this change might affect a cache that is already running with large amounts of data in it. For instance, if you have a 53 GB cache with 49 GB of data, then change the reservation value to 8 GB, this change will drop the max available memory for the system down to 45 GB. If either your current `used_memory` or your `used_memory_rss` values are higher than the new limit of 45 GB, then the system will have to evict data until both `used_memory` and `used_memory_rss` are below 45 GB. Eviction can increase server load and memory fragmentation. For more information on cache metrics such as `used_memory` and `used_memory_rss`, see [Available metrics and reporting intervals](cache-how-to-monitor.md#available-metrics-and-reporting-intervals).
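
If you manage these values from the command line rather than the portal, one option is the generic `--set` syntax on `az redis update`; a sketch with an illustrative value (the setting is expressed in megabytes):

```azurecli-interactive
# Raise maxmemory-reserved to 2 GB on an existing cache (illustrative value)
az redis update --name myCache --resource-group myGroup \
    --set "redisConfiguration.maxmemory-reserved"="2000"
```

> [!IMPORTANT]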
-> The **maxmemory-reserved** and **maxfragmentationmemory-reserved** settings are only available for Standard and Premium caches.
->
+> The **maxmemory-reserved** and **maxfragmentationmemory-reserved** settings are available only for Standard and Premium caches.
+>
+> The `noeviction` eviction policy is the only memory policy that's available for an Enterprise tier cache.
>

#### Keyspace notifications (advanced settings)
azure-cache-for-redis Cache How To Monitor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-how-to-monitor.md
The **Usage** section in the **Overview** blade has **Redis Server Load**, **Mem
The **Pricing tier** displays the cache pricing tier, and can be used to [scale](cache-how-to-scale.md) the cache to a different pricing tier.
-## View metrics with Azure monitor
+## View metrics charts for all your caches with Azure Monitor for Azure Cache for Redis
-To view Redis metrics and create custom charts using Azure Monitor, click **Metrics** from the **Resource menu**, and customize your chart using the desired metrics, reporting interval, chart type, and more.
+Use [Azure Monitor for Azure Cache for Redis](../azure-monitor/insights/redis-cache-insights-overview.md) (preview) for a view of the overall performance, failures, capacity, and operational health of all your Azure Cache for Redis resources in a customizable unified interactive experience that lets you drill down into details for individual resources. Azure Monitor for Azure Cache for Redis is based on the [workbooks feature of Azure Monitor](../azure-monitor/visualize/workbooks-overview.md) that provides rich visualizations for metrics and other data. To learn more, see the [Explore Azure Monitor for Azure Cache for Redis](../azure-monitor/insights/redis-cache-insights-overview.md) article.
+
+## View metrics with Azure Monitor metrics explorer
+
+For scenarios where you don't need the full flexibility of Azure Monitor for Azure Cache for Redis, you can instead view metrics and create custom charts using the Azure Monitor metrics explorer. Click **Metrics** from the **Resource menu**, and customize your chart using the desired metrics, reporting interval, chart type, and more.
![In the left navigation pane of contoso55, Metrics is an option under Monitoring and is highlighted. On Metrics there is a list of metrics. Cache hits and Cache misses are selected.](./media/cache-how-to-monitor/redis-cache-monitor.png)

For more information on working with metrics using Azure Monitor, see [Overview of metrics in Microsoft Azure](../azure-monitor/data-platform.md).
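
The same metrics are also queryable from the command line; a sketch using `az monitor metrics list`, assuming `$CACHE_ID` holds the cache's resource ID and that the metric names below are current:

```azurecli-interactive
# Pull cache hits and misses at one-minute granularity
az monitor metrics list --resource $CACHE_ID \
    --metrics cachehits cachemisses \
    --interval PT1M -o table
```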
-<a name="how-to-view-metrics-and-customize-chart"></a>
<a name="enable-cache-diagnostics"></a>

## Export cache metrics
azure-cache-for-redis Cache Redis Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-redis-samples.md
Last updated 01/23/2017

# Azure Cache for Redis samples
-This topic provides a list of Azure Cache for Redis samples, covering scenarios such as connecting to a cache, reading and writing data to and from a cache, and using the ASP.NET Azure Cache for Redis providers. Some of the samples are downloadable projects, and some provide step-by-step guidance and include code snippets but do not link to a downloadable project.
+You'll find a list of Azure Cache for Redis samples in this article.
+The samples cover scenarios such as:
+
+* Connecting to a cache
+* Reading and writing data to and from a cache
+* Using the ASP.NET Azure Cache for Redis providers
+
+Some samples are downloadable projects. Other samples provide step-by-step guidance that includes code snippets but don't link to a downloadable project.
## Hello world samples
-The samples in this section show the basics of connecting to an Azure Cache for Redis instance and reading and writing data to the cache using a variety of languages and Redis clients.
+The samples in this section show the basics of connecting to an Azure Cache for Redis instance. The samples also show reading and writing data to the cache using different languages and Redis clients.
-The [Hello world](https://github.com/rustd/RedisSamples/tree/master/HelloWorld) sample shows how to perform various cache operations using the [StackExchange.Redis](https://github.com/StackExchange/StackExchange.Redis) .NET client.
+The [Hello world](https://github.com/rustd/RedisSamples/tree/master/HelloWorld) sample shows how to do various cache operations using the [StackExchange.Redis](https://github.com/StackExchange/StackExchange.Redis) .NET client.
This sample shows how to:
This sample shows how to:
* Use Redis sets to implement tagging
* Work with Redis Cluster
-For more information, see the [StackExchange.Redis](https://github.com/StackExchange/StackExchange.Redis) documentation on GitHub, and for more usage scenarios see the [StackExchange.Redis.Tests](https://github.com/StackExchange/StackExchange.Redis/tree/master/tests) unit tests.
+For more information, see the [StackExchange.Redis](https://github.com/StackExchange/StackExchange.Redis) documentation on GitHub. For more usage scenarios, see the [StackExchange.Redis.Tests](https://github.com/StackExchange/StackExchange.Redis/tree/master/tests) unit tests.
[How to use Azure Cache for Redis with Python](cache-python-get-started.md) shows how to get started with Azure Cache for Redis using Python and the [redis-py](https://github.com/andymccurdy/redis-py) client.
-[Work with .NET objects in the cache](cache-dotnet-how-to-use-azure-redis-cache.md#work-with-net-objects-in-the-cache) shows you one way to serialize .NET objects so you can write them to and read them from an Azure Cache for Redis instance.
+[Work with .NET objects in the cache](cache-dotnet-how-to-use-azure-redis-cache.md#work-with-net-objects-in-the-cache) shows you one way to serialize .NET objects to write them to and read them from an Azure Cache for Redis instance.
## Use Azure Cache for Redis as a Scale out Backplane for ASP.NET SignalR
-The [Use Azure Cache for Redis as a Scale out Backplane for ASP.NET SignalR](https://github.com/rustd/RedisSamples/tree/master/RedisAsSignalRBackplane) sample demonstrates how you can use Azure Cache for Redis as a SignalR backplane. For more information about backplane, see [SignalR Scaleout with Redis](https://www.asp.net/signalr/overview/performance/scaleout-with-redis).
+The [Use Azure Cache for Redis as a Scale out Backplane for ASP.NET SignalR](https://github.com/rustd/RedisSamples/tree/master/RedisAsSignalRBackplane) sample demonstrates how to use Azure Cache for Redis as a SignalR backplane. For more information about backplane, see [SignalR Scaleout with Redis](https://www.asp.net/signalr/overview/performance/scaleout-with-redis).
## Azure Cache for Redis customer query sample
-This sample demonstrates compares performance between accessing data from a cache and accessing data from persistence storage. This sample has two projects.
+This sample compares performance between accessing data from a cache and accessing data from persistence storage. This sample has two projects.
* [Demo how Azure Cache for Redis can improve performance by Caching data](https://github.com/rustd/RedisSamples/tree/master/RedisCacheCustomerQuerySample)
* [Seed the Database and Cache for the demo](https://github.com/rustd/RedisSamples/tree/master/SeedCacheForCustomerQuerySample)

## ASP.NET Session State and Output Caching
-The [Use Azure Cache for Redis to store ASP.NET SessionState and OutputCache](https://github.com/rustd/RedisSamples/tree/master/SessionState_OutputCaching) sample demonstrates how you to use Azure Cache for Redis to store ASP.NET Session and Output Cache using the SessionState and OutputCache providers for Redis.
+The [Use Azure Cache for Redis to store ASP.NET SessionState and OutputCache](https://github.com/rustd/RedisSamples/tree/master/SessionState_OutputCaching) sample demonstrates:
+
+* How to use Azure Cache for Redis to store ASP.NET Session and Output Cache
+* How to use the SessionState and OutputCache providers for Redis
## Manage Azure Cache for Redis with MAML
-The [Manage Azure Cache for Redis using Azure Management Libraries](https://github.com/rustd/RedisSamples/tree/master/ManageCacheUsingMAML) sample demonstrates how can you use Azure Management Libraries to manage - (Create/ Update/ delete) your Cache.
+The [Manage Azure Cache for Redis using Azure Management Libraries](https://github.com/rustd/RedisSamples/tree/master/ManageCacheUsingMAML) sample demonstrates how to use Azure Management Libraries to manage (create, update, and delete) your cache.
## Custom monitoring sample
-The [Access Azure Cache for Redis Monitoring data](https://github.com/rustd/RedisSamples/tree/master/CustomMonitoring) sample demonstrates how you can access monitoring data for your Azure Cache for Redis outside of the Azure Portal.
+The [Access Azure Cache for Redis Monitoring data](https://github.com/rustd/RedisSamples/tree/master/CustomMonitoring) sample demonstrates how to access monitoring data for your Azure Cache for Redis outside of the Azure portal.
## A Twitter-style clone written using PHP and Redis
-The [Retwis](https://github.com/SyntaxC4-MSFT/retwis) sample is the Redis Hello World. It is a minimal Twitter-style social network clone written using Redis and PHP using the [Predis](https://github.com/nrk/predis) client. The source code is designed to be very simple and at the same time to show different Redis data structures.
+The [Retwis](https://github.com/SyntaxC4-MSFT/retwis) sample is the Redis Hello World. It's a minimal Twitter-style social network clone written in PHP, using Redis with the [Predis](https://github.com/nrk/predis) client. The source code is designed to be simple and at the same time to show different Redis data structures.
## Bandwidth monitor

The [Bandwidth monitor](https://github.com/JonCole/SampleCode/tree/master/BandWidthMonitor) sample allows you to monitor the bandwidth used on the client. To measure the bandwidth, run the sample on the cache client machine, make calls to the cache, and observe the bandwidth reported by the bandwidth monitor sample.
azure-cache-for-redis Cache Troubleshoot Server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-troubleshoot-server.md
Memory pressure on the server side leads to all kinds of performance problems th
- The cache is filled with data near its maximum capacity.
- Redis is seeing high memory fragmentation. This fragmentation is most often caused by storing large objects since Redis is optimized for small objects.
-Redis exposes two stats through the [INFO](https://redis.io/commands/info) command that can help you identify this issue: "used_memory" and "used_memory_rss". You can [view these metrics](cache-how-to-monitor.md#view-metrics-with-azure-monitor) using the portal.
+Redis exposes two stats through the [INFO](https://redis.io/commands/info) command that can help you identify this issue: "used_memory" and "used_memory_rss". You can [view these metrics](cache-how-to-monitor.md#view-metrics-with-azure-monitor-metrics-explorer) using the portal.
There are several possible changes you can make to help keep memory usage healthy:
There are several possible changes you can make to help keep memory usage health
A high server load or CPU usage means the server can't process requests in a timely fashion. The server may be slow to respond and unable to keep up with request rates.
-[Monitor metrics](cache-how-to-monitor.md#view-metrics-with-azure-monitor) such as CPU or server load. Watch for spikes in CPU usage that correspond with timeouts.
+[Monitor metrics](cache-how-to-monitor.md#view-metrics-with-azure-monitor-metrics-explorer) such as CPU or server load. Watch for spikes in CPU usage that correspond with timeouts.
There are several changes you can make to mitigate high server load:
Using the [SLOWLOG](https://redis.io/commands/slowlog) command, you can measure
Different cache sizes have different network bandwidth capacities. If the server exceeds the available bandwidth, then data won't be sent to the client as quickly. Client requests could time out because the server can't push data to the client fast enough.
-The "Cache Read" and "Cache Write" metrics can be used to see how much server-side bandwidth is being used. You can [view these metrics](cache-how-to-monitor.md#view-metrics-with-azure-monitor) in the portal.
+The "Cache Read" and "Cache Write" metrics can be used to see how much server-side bandwidth is being used. You can [view these metrics](cache-how-to-monitor.md#view-metrics-with-azure-monitor-metrics-explorer) in the portal.
To mitigate situations where network bandwidth usage is close to maximum capacity:
azure-cache-for-redis Cache Troubleshoot Timeouts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-troubleshoot-timeouts.md
This error message contains metrics that can help point you to the cause and pos
| wr |There's an active writer (meaning the 6 unsent requests aren't being ignored) bytes/activewriters |
| in |There are no active readers and zero bytes are available to be read on the NIC bytes/activereaders |
+In the preceding exception example, the `IOCP` and `WORKER` sections each include a `Busy` value that is greater than the `Min` value. The difference means that you should adjust your `ThreadPool` settings. You can [configure your ThreadPool settings](cache-management-faq.md#important-details-about-threadpool-growth) to ensure that your thread pool scales up quickly under burst scenarios.
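For illustration only, a minimal C# sketch of raising the `ThreadPool` minimums at application startup; the value 200 is an assumption to tune for your own workload:

```csharp
using System.Threading;

// Raising the minimums lets the pool grow immediately under burst load
// instead of waiting on its gradual thread-injection heuristic.
// 200 is an illustrative value, not a recommendation.
ThreadPool.SetMinThreads(workerThreads: 200, completionPortThreads: 200);
```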
+
You can use the following steps to investigate possible root causes.

1. As a best practice, make sure you're using the following pattern to connect when using the StackExchange.Redis client.
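   The pattern itself isn't reproduced in this diff; a hedged sketch of the commonly recommended shape, with a placeholder connection string, looks roughly like this:

   ```csharp
   using System;
   using StackExchange.Redis;

   public static class RedisConnection
   {
       // One multiplexer, created lazily and shared for the process lifetime;
       // creating a new connection per operation is a common cause of timeouts.
       // The connection string below is a placeholder.
       private static readonly Lazy<ConnectionMultiplexer> LazyConnection =
           new Lazy<ConnectionMultiplexer>(() => ConnectionMultiplexer.Connect(
               "cachename.redis.cache.windows.net,abortConnect=false,ssl=true,password=..."));

       public static ConnectionMultiplexer Connection => LazyConnection.Value;
   }
   ```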
1. Ensure that your server and the client application are in the same region in Azure. For example, you might be getting timeouts when your cache is in East US but the client is in West US and the request doesn't complete within the `synctimeout` interval, or you might be getting timeouts when you're debugging from your local development machine.
 - It's highly recommended to have the cache and the client in the same Azure region. If you have a scenario that includes cross region calls, you should set the `synctimeout` interval to a value higher than the default 5000-ms interval by including a `synctimeout` property in the connection string. The following example shows a snippet of a connection string for StackExchange.Redis provided by Azure Cache for Redis with a `synctimeout` of 2000 ms.
 + It's highly recommended to have the cache and the client in the same Azure region. If you have a scenario that includes cross region calls, you should set the `synctimeout` interval to a value higher than the default 5000-ms interval by including a `synctimeout` property in the connection string. The following example shows a snippet of a connection string for StackExchange.Redis provided by Azure Cache for Redis with a `synctimeout` of 8000 ms.
```output
- synctimeout=2000,cachename.redis.cache.windows.net,abortConnect=false,ssl=true,password=...
+ synctimeout=8000,cachename.redis.cache.windows.net,abortConnect=false,ssl=true,password=...
```

1. Ensure you're using the latest version of the [StackExchange.Redis NuGet package](https://www.nuget.org/packages/StackExchange.Redis/). There are bugs constantly being fixed in the code to make it more robust to timeouts, so having the latest version is important.
azure-cache-for-redis Quickstart Create Redis Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/quickstart-create-redis-enterprise.md
You'll need an Azure subscription before you begin. If you don't have one, creat
:::image type="content" source="media/cache-create/enterprise-tier-basics.png" alt-text="Enterprise tier Basics tab":::

> [!NOTE]
- > Be sure to check the box under "Terms" before proceeding.
+ > Be sure to select **Terms** before you proceed.
>

1. Select **Next: Networking** and skip.

1. Select **Next: Advanced** and set **Clustering policy** to **Enterprise**. Enable **Non-TLS access only** if you plan to connect to the new cache without using TLS. This is not recommended, however.
- :::image type="content" source="media/cache-create/enterprise-tier-advanced.png" alt-text="Enterprise tier Advanced tab":::
+ :::image type="content" source="media/cache-create/enterprise-tier-advanced.png" alt-text="Screenshot that shows the Enterprise tier Advanced tab.":::
+ > [!NOTE]
+ > You can't change modules after you create the cache instance. The setting is create-only.
+ >
+
1. Select **Next: Tags** and skip.

1. Select **Next: Review + create**.
azure-functions Azfw0001 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/errors-diagnostics/net-worker-rules/azfw0001.md
+
+ Title: "AZFW0001: Invalid binding attributes"
+description: "Learn about code analysis rule AZFW0001: Invalid binding attributes"
+++ Last updated : 05/10/2021++
+# AZFW0001: Invalid binding attributes
+This rule is triggered when invalid WebJobs binding attributes are used in the function definition.
+
+| | Value |
+|-|-|
+| **Rule ID** |AZFW0001|
+| **Category** |[Usage]|
+| **Severity** |Error|
+
+## Rule description
+
+The Azure Functions .NET Worker uses a different input and output binding model, which is incompatible with the WebJobs binding
+model used by the Azure Functions in-process model.
+
+To support the existing bindings and triggers, a new set of packages compatible with the new binding model has been introduced. Those
+packages follow a naming convention that makes it easy to find a suitable replacement: simply change the prefix `Microsoft.Azure.WebJobs.Extensions.*` to `Microsoft.Azure.Functions.Worker.Extensions.*`. For example:
+
+If you have a reference to `Microsoft.Azure.WebJobs.Extensions.ServiceBus`, replace it with a reference to `Microsoft.Azure.Functions.Worker.Extensions.ServiceBus`.
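As a hedged illustration of the result (the function, queue, and connection names here are made up), a Service Bus trigger in the isolated worker model then looks roughly like this:

```csharp
using Microsoft.Azure.Functions.Worker;

public class OrderFunctions
{
    // [Function] and [ServiceBusTrigger] come from the
    // Microsoft.Azure.Functions.Worker.* packages, not Microsoft.Azure.WebJobs.*.
    [Function("ProcessOrder")]
    public void Run(
        [ServiceBusTrigger("orders", Connection = "ServiceBusConnection")] string message)
    {
        // Process the message here.
    }
}
```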
+
+## How to fix violations
+
+To fix violations, add a reference to the appropriate package as described above and use the correct attributes from that package.
+
+## When to suppress the rule
+
+This rule should not be suppressed, as the existing bindings will not work in the isolated model.
azure-functions Azf0001 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/errors-diagnostics/sdk-rules/azf0001.md
+
+ Title: "AZFW0001: Avoid async void"
+description: "Learn about code analysis rule AZF0001: Avoid async void"
+++ Last updated : 05/10/2021++
+# AZF0001: Avoid async void
+
+This rule is triggered when the `void` return type is used in an async function definition.
+
+| | Value |
+|-|-|
+| **Rule ID** |AZF0001|
+| **Category** |[Usage]|
+| **Severity** |Error|
+
+## Rule description
+
+Defining `async` functions with a `void` return type makes it impossible for the Functions runtime to track invocation completion or catch and handle exceptions thrown by the function method.
+
+For general information about `async void`, see [Async/Await - Best Practices in Asynchronous Programming](https://msdn.microsoft.com/magazine/jj991977.aspx).
+
+## How to fix violations
+
+To fix violations, change the function's return type from `void` to `Task` and make the necessary code changes to appropriately return a `Task`.
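For illustration, a minimal before-and-after sketch; the parameter, method body, and logging are assumptions:

```csharp
using System.Threading.Tasks;
using Microsoft.Extensions.Logging;

public static class QueueFunctions
{
    // Before (triggers AZF0001): the runtime can't track completion of, or
    // catch exceptions from, an 'async void' method.
    // public static async void Run(string item, ILogger log) { ... }

    // After: returning Task lets the runtime await the invocation and
    // observe any exception it throws.
    public static async Task Run(string item, ILogger log)
    {
        await Task.Delay(100); // placeholder for real asynchronous work
        log.LogInformation("Processed {Item}", item);
    }
}
```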
+
+## When to suppress the rule
+
+This rule should not be suppressed. Use of `async void` will lead to unpredictable behavior.
azure-government Compare Azure Government Global Azure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/compare-azure-government-global-azure.md
Azure Security Center is deployed in Azure Government regions but not in Azure G
### [Azure Sentinel](../sentinel/overview.md)
-The following **features have known limitations** in Azure Government:
--- Office 365 data connector
- - The Office 365 data connector can be used only for [Office 365 GCC High and Office 365 DoD](/office365/servicedescriptions/office-365-platform-service-description/office-365-us-government/gcc-high-and-dod). Office 365 GCC can be accessed only from global (commercial) Azure.
--- AWS CloudTrail data connector
- - The AWS CloudTrail data connector can be used only for [AWS in the Public Sector](https://aws.amazon.com/government-education/).
-
-### [Enterprise Mobility + Security (EMS)](/enterprise-mobility-security)
-
-For information about EMS suite capabilities in Azure Government, see the [Enterprise Mobility + Security for US Government Service Description](/enterprise-mobility-security/solutions/ems-govt-service-description).
-
+For feature variations and limitations, see [Cloud feature availability for US Government customers](../security/fundamentals/feature-availability.md#azure-sentinel).
## Storage
azure-maps How To Create Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/how-to-create-template.md
You can create your Azure Maps account using an Azure Resource Manager (ARM) tem
If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
-[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2F101-maps-create%2Fazuredeploy.json)
+[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.maps%2Fmaps-create%2Fazuredeploy.json)
## Prerequisites
To complete this article:
The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/101-maps-create/). The Azure Maps account resource is defined in this template:
The Azure Maps account resource is defined in this template:
1. Select the following image to sign in to Azure and open a template. The template creates an Azure Maps account.
- [![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2F101-maps-create%2Fazuredeploy.json)
+ [![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.maps%2Fmaps-create%2Fazuredeploy.json)
2. Select or enter the following values.
The Azure Maps account resource is defined in this template:
* **Account Name**: enter a name for your Azure Maps account, which must be globally unique.
* **Pricing Tier**: select the appropriate pricing tier; the default value for the template is S0.
-3. Select **Review + create**.
+3. Select **Review + create**.
4. Confirm your settings on the review page and click **Create**. After your Azure Maps account has been deployed successfully, you get a notification:

   ![ARM template deploy portal notification](./media/how-to-create-template/resource-manager-template-portal-deployment-notification.png)
azure-maps How To Manage Pricing Tier https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/how-to-manage-pricing-tier.md
Title: Manage your Azure Maps account's pricing tier | Microsoft Azure Maps
description: You can use the Azure portal to manage your Microsoft Azure Maps account and its pricing tier. Previously updated : 04/26/2020 Last updated : 05/12/2020
You can manage the pricing tier of your Azure Maps account through the Azure por
Get more information about [choosing the right pricing tier in Azure Maps](./choose-pricing-tier.md).
+>[!NOTE]
+>Switching to the Gen 1 pricing tier is not available for Gen 2 Azure Maps Creator customers. Gen 1 Azure Maps Creator will be deprecated on 8/6/2021.
+
## View your pricing tier

To view your chosen pricing tier, navigate to the **Pricing Tier** option in the settings menu.
To view your chosen pricing tier, navigate to the **Pricing Tier** option in the
After you create your Azure Maps account, you can upgrade or downgrade the pricing tier for your Azure Maps account. To upgrade or downgrade, navigate to the **Pricing Tier** option in the settings menu. Select the pricing tier from the drop-down list. Note that the current pricing tier is the default selection. Select the **Save** button to save your chosen pricing tier option.

> [!NOTE]
> You don't have to generate new subscription keys or client ID (for Azure AD authentication) if you upgrade or downgrade the pricing tier for your Azure Maps account.

## Next steps

Learn how to see the API usage metrics for your Azure Maps account:
azure-monitor Azure Monitor Agent Install https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/agents/azure-monitor-agent-install.md
Set-AzVMExtension -Name AMAWindows -ExtensionType AzureMonitorWindowsAgent -Publ
```

# [Linux](#tab/PowerShellLinux)

```powershell
-Set-AzVMExtension -Name AMALinux -ExtensionType AzureMonitorLinuxAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> -Location <location> -TypeHandlerVersion 1.0
+Set-AzVMExtension -Name AMALinux -ExtensionType AzureMonitorLinuxAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> -Location <location> -TypeHandlerVersion 1.5
```
azure-monitor Itsmc Definition https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/itsmc-definition.md
Action groups provide a modular and reusable way to trigger actions for your Azu
> [!NOTE]
> After you create the ITSM connection, you need to wait 30 minutes for the sync process to finish.
-## Define a template
+### Define a template
Certain work item types can use templates that you define in the ITSM tool. By using templates, you can define fields that will be automatically populated according to fixed values for an action group. You can define which template you want to use as a part of the definition of an action group. For information about how to create templates, see [the ServiceNow documentation](https://docs.servicenow.com/bundle/paris-platform-administration/page/administer/form-administration/task/t_CreateATemplateUsingTheTmplForm.html).
To create an action group:
7. Select a **Work Item** type.
-8. If you want to fill out-of-the-box fields with fixed values, select **Use Custom Template**. Otherwise, choose an existing [template](#define-a-template) in the **Template** list and enter the fixed values in the template fields.
-
-9. In the last section of the interface for creating an ITSM action group, you can define how many work items will be created for each alert.
+8. In the last section of the interface for creating an ITSM action group, you can define how many work items will be created for each alert.
> [!NOTE]
> This section is relevant only for log search alerts. For all other alert types, you'll create one work item per alert.
To create an action group:
* If you select **Create individual work items for each Log Entry (Configuration item field is not filled. Can result in large number of work items.)**, a work item will be created for each row in the search results of the log search alert query. The description property in the payload of the work item will contain the row from the search results.
* If you select **Create individual work items for each Configuration Item**, every configuration item in every alert will create a new work item. Each configuration item can have more than one work item in the ITSM system. This option is the same as selecting the check box that appears after you select **Incident** as the work item type.
+9. As a part of the action definition, you can define predefined fields that will contain constant values as a part of the payload. Depending on the work item type, there are three options that can be used as a part of the payload:
+ * **None**: Use a regular payload to ServiceNow without any extra predefined fields and values.
+ * **Use default fields**: Use a set of fields and values that will be sent automatically as a part of the payload to ServiceNow. Those fields are not flexible, and the values are defined in ServiceNow lists.
+ * **Use saved templates from ServiceNow**: Use a predefined set of fields and values that was defined as a part of a template definition in ServiceNow. If you already defined the template in ServiceNow, you can use it from the **Template** list. Otherwise, you can define it in ServiceNow; for details, see [Define a template](#define-a-template).
10. Select **OK**.
azure-monitor Asp Net Troubleshoot No Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/asp-net-troubleshoot-no-data.md
You can modify these parameters as needed:
For more information, see:

- [Recording performance traces with PerfView](https://github.com/dotnet/roslyn/wiki/Recording-performance-traces-with-PerfView).
-- [Application Insights Event Sources](https://github.com/microsoft/ApplicationInsights-dotnet/tree/develop/examples/ETW)
+- [Application Insights Event Sources](https://github.com/microsoft/ApplicationInsights-dotnet/tree/develop/troubleshooting/ETW)
## Collect logs with dotnet-trace
azure-monitor Azure Web Apps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/azure-web-apps.md
$app = Set-AzWebApp -AppSettings $newAppSettings -ResourceGroupName $app.Resourc
Upgrading from version 2.8.9 happens automatically, without any additional actions. The new monitoring bits are delivered in the background to the target app service, and on application restart they will be picked up.
-To check which version of the extension you are running visit `http://yoursitename.scm.azurewebsites.net/ApplicationInsights`
+To check which version of the extension you're running, go to `https://yoursitename.scm.azurewebsites.net/ApplicationInsights`.
-![Screenshot of url path http://yoursitename.scm.azurewebsites.net/ApplicationInsights](./media/azure-web-apps/extension-version.png)
+![Screenshot of the U R L path to check the version of the extension you are running](./media/azure-web-apps/extension-version.png)
### Upgrade from versions 1.0.0 - 2.6.5
azure-monitor Continuous Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/continuous-monitoring.md
In the left pane of the release pipeline page, select **Configure Application In
The four default alert rules are created via an inline script:
-```bash
+```azurecli
$subscription = az account show --query "id";$subscription.Trim("`"");$resource="/subscriptions/$subscription/resourcegroups/"+"$(Parameters.AppInsightsResourceGroupName)"+"/providers/microsoft.insights/components/" + "$(Parameters.ApplicationInsightsResourceName)";
az monitor metrics alert create -n 'Availability_$(Release.DefinitionName)' -g $(Parameters.AppInsightsResourceGroupName) --scopes $resource --condition 'avg availabilityResults/availabilityPercentage < 99' --description "created from Azure DevOps";
az monitor metrics alert create -n 'FailedRequests_$(Release.DefinitionName)' -g $(Parameters.AppInsightsResourceGroupName) --scopes $resource --condition 'count requests/failed > 5' --description "created from Azure DevOps";
azure-monitor Profiler Bring Your Own Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/profiler-bring-your-own-storage.md
To configure BYOS for code-level diagnostics (Profiler/Debugger), there are thre
To install Azure CLI, refer to the [Official Azure CLI documentation](/cli/azure/install-azure-cli).

1. Install the Application Insights CLI extension.
- ```powershell
+ ```azurecli
az extension add -n application-insights
```

1. Connect your Storage Account with your Application Insights resource. Pattern:
- ```powershell
+ ```azurecli
az monitor app-insights component linked-storage link --resource-group "{resource_group_name}" --app "{application_insights_name}" --storage-account "{storage_account_name}" ``` Example:
- ```powershell
+ ```azurecli
az monitor app-insights component linked-storage link --resource-group "byos-test" --app "byos-test-westus2-ai" --storage-account "byosteststoragewestus2" ```
azure-monitor Profiler Vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/profiler-vm.md
This article shows you how to get Application Insights Profiler running on your
```

   b. If establishing remote access is a problem, you can use the [Azure CLI](/cli/azure/get-started-with-azure-cli) to run the following command:
- ```powershell
+ ```azurecli
az vm run-command invoke -g MyResourceGroupName -n MyVirtualMachineName --command-id RunPowerShellScript --scripts "Enable-WindowsOptionalFeature -FeatureName IIS-HttpTracing -Online -All"
```
azure-monitor Snapshot Debugger https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/snapshot-debugger.md
Subscription owners should assign the `Application Insights Snapshot Debugger` r
> [!IMPORTANT]
-> Snapshots can potentially contain personal and other sensitive information in variable and parameter values.
+> Snapshots may contain personal data or other sensitive information in variable and parameter values. Snapshot data is stored in the same region as your App Insights resource.
## View Snapshots in the Portal
azure-monitor Container Insights Agent Config https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/containers/container-insights-agent-config.md
Container insights collects stdout, stderr, and environmental variables from con
This article demonstrates how to create a ConfigMap and configure data collection based on your requirements.

>[!NOTE]
->For Azure Red Hat OpenShift, a template ConfigMap file is created in the *openshift-azure-logging* namespace.
+>For Azure Red Hat OpenShift V3, a template ConfigMap file is created in the *openshift-azure-logging* namespace.
>

## ConfigMap file settings overview
Perform the following steps to configure and deploy your ConfigMap configuration
1. Download the [template ConfigMap YAML file](https://aka.ms/container-azm-ms-agentconfig) and save it as container-azm-ms-agentconfig.yaml.

   > [!NOTE]
- > This step is not required when working with Azure Red Hat OpenShift because the ConfigMap template already exists on the cluster.
+ > This step is not required when working with Azure Red Hat OpenShift V3 because the ConfigMap template already exists on the cluster.
-2. Edit the ConfigMap yaml file with your customizations to collect stdout, stderr, and/or environmental variables. If you are editing the ConfigMap yaml file for Azure Red Hat OpenShift, first run the command `oc edit configmaps container-azm-ms-agentconfig -n openshift-azure-logging` to open the file in a text editor.
+2. Edit the ConfigMap yaml file with your customizations to collect stdout, stderr, and/or environmental variables. If you are editing the ConfigMap yaml file for Azure Red Hat OpenShift V3, first run the command `oc edit configmaps container-azm-ms-agentconfig -n openshift-azure-logging` to open the file in a text editor.
- To exclude specific namespaces for stdout log collection, you configure the key/value using the following example: `[log_collection_settings.stdout] enabled = true exclude_namespaces = ["my-namespace-1", "my-namespace-2"]`.
- To disable environment variable collection for a specific container, set the key/value `[log_collection_settings.env_var] enabled = true` to enable variable collection globally, and then follow the steps [here](container-insights-manage-agent.md#how-to-disable-environment-variable-collection-on-a-container) to complete configuration for the specific container.
- To disable stderr log collection cluster-wide, you configure the key/value using the following example: `[log_collection_settings.stderr] enabled = false`.
+
+ Save your changes in the editor.
-3. For clusters other than Azure Red Hat OpenShift, create ConfigMap by running the following kubectl command: `kubectl apply -f <configmap_yaml_file.yaml>` on clusters other than Azure Red Hat OpenShift.
+3. For clusters other than Azure Red Hat OpenShift V3, create ConfigMap by running the following kubectl command: `kubectl apply -f <configmap_yaml_file.yaml>`.
Example: `kubectl apply -f container-azm-ms-agentconfig.yaml`.
- For Azure Red Hat OpenShift, save your changes in the editor.
The configuration change can take a few minutes to finish before taking effect, and all omsagent pods in the cluster will restart. The restart is a rolling restart for all omsagent pods; they don't all restart at the same time. When the restarts are finished, a message is displayed that's similar to the following and includes the result: `configmap "container-azm-ms-agentconfig" created`.

## Verify configuration
-To verify the configuration was successfully applied to a cluster other than Azure Red Hat OpenShift, use the following command to review the logs from an agent pod: `kubectl logs omsagent-fdf58 -n kube-system`. If there are configuration errors from the omsagent pods, the output will show errors similar to the following:
+To verify the configuration was successfully applied to a cluster other than Azure Red Hat OpenShift V3, use the following command to review the logs from an agent pod: `kubectl logs omsagent-fdf58 -n kube-system`. If there are configuration errors from the omsagent pods, the output will show errors similar to the following:
```
***************Start Config Processing********************
Errors related to applying configuration changes are also available for review.
- From agent pod logs, using the same `kubectl logs` command.

  >[!NOTE]
- >This command is not applicable to Azure Red Hat OpenShift cluster.
+ >This command is not applicable to Azure Red Hat OpenShift V3 clusters.
>

- From Live logs. Live logs show errors similar to the following:
- From the **KubeMonAgentEvents** table in your Log Analytics workspace. Data is sent every hour with *Error* severity for configuration errors. If there are no errors, the entry in the table will have data with severity *Info*, which reports no errors. The **Tags** property contains more information about the pod and container ID on which the error occurred and also the first occurrence, last occurrence, and count in the last hour.

-- With Azure Red Hat OpenShift, check the omsagent logs by searching the **ContainerLog** table to verify if log collection of openshift-azure-logging is enabled.
+- With Azure Red Hat OpenShift V3, check the omsagent logs by searching the **ContainerLog** table to verify if log collection of openshift-azure-logging is enabled.
-After you correct the error(s) in ConfigMap on clusters other than Azure Red Hat OpenShift, save the yaml file and apply the updated ConfigMaps by running the command: `kubectl apply -f <configmap_yaml_file.yaml`. For Azure Red Hat OpenShift, edit and save the updated ConfigMaps by running the command:
+After you correct the error(s) in ConfigMap on clusters other than Azure Red Hat OpenShift V3, save the yaml file and apply the updated ConfigMaps by running the command: `kubectl apply -f <configmap_yaml_file.yaml>`. For Azure Red Hat OpenShift V3, edit and save the updated ConfigMaps by running the command:
```bash
oc edit configmaps container-azm-ms-agentconfig -n openshift-azure-logging
```
## Applying updated ConfigMap
-If you have already deployed a ConfigMap on clusters other than Azure Red Hat OpenShift and you want to update it with a newer configuration, you can edit the ConfigMap file you've previously used and then apply using the same command as before, `kubectl apply -f <configmap_yaml_file.yaml`. For Azure Red Hat OpenShift, edit and save the updated ConfigMaps by running the command:
+If you have already deployed a ConfigMap on clusters other than Azure Red Hat OpenShift V3 and you want to update it with a newer configuration, you can edit the ConfigMap file you've previously used and then apply it using the same command as before, `kubectl apply -f <configmap_yaml_file.yaml>`. For Azure Red Hat OpenShift V3, edit and save the updated ConfigMaps by running the command:
```bash
oc edit configmaps container-azm-ms-agentconfig -n openshift-azure-logging
```
The output will show similar to the following with the annotation schema-version
- With monitoring enabled to collect health and resource utilization of your AKS or hybrid cluster and workloads running on them, learn [how to use](container-insights-analyze.md) Container insights.

-- View [log query examples](container-insights-log-search.md#search-logs-to-analyze-data) to see pre-defined queries and examples to evaluate or customize for alerting, visualizing, or analyzing your clusters.
+- View [log query examples](container-insights-log-search.md#search-logs-to-analyze-data) to see pre-defined queries and examples to evaluate or customize for alerting, visualizing, or analyzing your clusters.
azure-monitor Container Insights Enable Arc Enabled Clusters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/containers/container-insights-enable-arc-enabled-clusters.md
This option uses the following defaults:
- Creates or uses existing default log analytics workspace corresponding to the region of the cluster
- Auto-upgrade is enabled for the Azure Monitor cluster extension
-```console
+```azurecli
az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers ```
az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-n
You can use an existing Azure Log Analytics workspace in any subscription on which you have *Contributor* or a more permissive role assignment.
-```console
+```azurecli
az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers --configuration-settings logAnalyticsWorkspaceResourceID=<armResourceIdOfExistingWorkspace> ```
az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-n
If you want to tweak the default resource requests and limits, you can use the advanced configurations settings:
-```console
+```azurecli
az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers --configuration-settings omsagent.resources.daemonset.limits.cpu=150m omsagent.resources.daemonset.limits.memory=600Mi omsagent.resources.deployment.limits.cpu=1 omsagent.resources.deployment.limits.memory=750Mi ```
Checkout the [resource requests and limits section of Helm chart](https://github
If the Azure Arc enabled Kubernetes cluster is on Azure Stack Edge, then a custom mount path `/home/data/docker` needs to be used.
-```console
+```azurecli
az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers --configuration-settings omsagent.logsettings.custommountpath=/home/data/docker ```
az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-n
3. Deploy the template to create Azure Monitor Container Insights extension
- ```console
+ ```azurecli
az login
az account set --subscription "Subscription Name"
az deployment group create --resource-group <resource-group> --template-file ./arc-k8s-azmon-extension-arm-template.json --parameters @./arc-k8s-azmon-extension-arm-template-params.json
azure-monitor Collect Custom Metrics Guestos Resource Manager Vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/essentials/collect-custom-metrics-guestos-resource-manager-vm.md
Title: Collect Windows VM metrics in Azure Monitor with template
-description: Send guest OS metrics to the Azure Monitor metric database store by using a Resource Manager template for a Windows virtual machine
+description: Send guest OS metrics to the Azure Monitor metric database store by using a Resource Manager template for a Windows virtual machine
Last updated 05/04/2020
# Send guest OS metrics to the Azure Monitor metric store by using an Azure Resource Manager template for a Windows virtual machine
-Performance data from the guest OS of Azure virtual machines is not collected automatically like other [platform metrics](./monitor-azure-resource.md#monitoring-data). Install the Azure Monitor [diagnostics extension](../agents/diagnostics-extension-overview.md) to collect guest OS metrics into the metrics database so it can be used with all features of Azure Monitor Metrics, including near-real time alerting, charting, routing, and access from a REST API. This article describes the process for sending Guest OS performance metrics for a Windows virtual machine to the metrics database using a Resource Manager template.
+Performance data from the guest OS of Azure virtual machines is not collected automatically like other [platform metrics](./monitor-azure-resource.md#monitoring-data). Install the Azure Monitor [diagnostics extension](../agents/diagnostics-extension-overview.md) to collect guest OS metrics into the metrics database so it can be used with all features of Azure Monitor Metrics, including near-real time alerting, charting, routing, and access from a REST API. This article describes the process for sending Guest OS performance metrics for a Windows virtual machine to the metrics database using a Resource Manager template.
> [!NOTE]
> For details on configuring the diagnostics extension to collect guest OS metrics using the Azure portal, see [Install and configure Windows Azure diagnostics extension (WAD)](../agents/diagnostics-extension-windows-install.md).
If you're new to Resource Manager templates, learn about [template deployments](
- You need to have either [Azure PowerShell](/powershell/azure) or [Azure Cloud Shell](../../cloud-shell/overview.md) installed.

-- Your VM resource must be in a [region that supports custom metrics](./metrics-custom-overview.md#supported-regions).
+- Your VM resource must be in a [region that supports custom metrics](./metrics-custom-overview.md#supported-regions).
## Set up Azure Monitor as a data sink
The Azure Diagnostics extension uses a feature called "data sinks" to route metr
## Author Resource Manager template

For this example, you can use a publicly available sample template. The starting templates are at
-https://github.com/Azure/azure-quickstart-templates/tree/master/101-vm-simple-windows.
+https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/vm-simple-windows.
- **Azuredeploy.json** is a preconfigured Resource Manager template for the deployment of a virtual machine.
azure-monitor Stream Monitoring Data Event Hubs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/essentials/stream-monitoring-data-event-hubs.md
Routing your monitoring data to an event hub with Azure Monitor enables you to e
| Tool | Hosted in Azure | Description |
|:|:|:|
| IBM QRadar | No | The Microsoft Azure DSM and Microsoft Azure Event Hub Protocol are available for download from [the IBM support website](https://www.ibm.com/support). You can learn more about the integration with Azure at [QRadar DSM configuration](https://www.ibm.com/docs/en/dsm?topic=options-configuring-microsoft-azure-event-hubs-communicate-qradar). |
-| Splunk | No | [Microsoft Azure Add-On for Splunk](https://splunkbase.splunk.com/app/3757/) is an open source project available in Splunkbase. <br><br> If you cannot install an add-on in your Splunk instance, if for example you're using a proxy or running on Splunk Cloud, you can forward these events to the Splunk HTTP Event Collector using [Azure Function For Splunk](https://github.com/Microsoft/AzureFunctionforSplunkVS), which is triggered by new messages in the event hub. |
+| Splunk | No | [Splunk Add-on for Microsoft Cloud Services](https://splunkbase.splunk.com/app/3110/) is an open source project available in Splunkbase. <br><br> If you cannot install an add-on in your Splunk instance, if for example you're using a proxy or running on Splunk Cloud, you can forward these events to the Splunk HTTP Event Collector using [Azure Function For Splunk](https://github.com/Microsoft/AzureFunctionforSplunkVS), which is triggered by new messages in the event hub. |
| SumoLogic | No | Instructions for setting up SumoLogic to consume data from an event hub are available at [Collect Logs for the Azure Audit App from Event Hub](https://help.sumologic.com/Send-Data/Applications-and-Other-Data-Sources/Azure-Audit/02Collect-Logs-for-Azure-Audit-from-Event-Hub). |
| ArcSight | No | The ArcSight Azure Event Hub smart connector is available as part of [the ArcSight smart connector collection](https://community.softwaregrp.com/t5/Discussions/Announcing-General-Availability-of-ArcSight-Smart-Connectors-7/m-p/1671852). |
| Syslog server | No | If you want to stream Azure Monitor data directly to a syslog server, you can use a [solution based on an Azure function](https://github.com/miguelangelopereira/azuremonitor2syslog/). |
azure-netapp-files Azacsnap Tips https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azacsnap-tips.md
It may be necessary to limit the scope of the AzAcSnap service principal. Revie
The following is an example role definition with the minimum required actions needed for AzAcSnap to function.
-```bash
+```azurecli
az role definition create --role-definition '{ \
  "Name": "Azure Application Consistent Snapshot tool", \
  "IsCustom": "true", \
az role definition create --role-definition '{ \
For restore options to work successfully, the AzAcSnap service principal also needs to be able to create volumes. In this case the role definition needs an additional action, therefore the complete service principal should look like the following example.
-```bash
+```azurecli
az role definition create --role-definition '{ \
  "Name": "Azure Application Consistent Snapshot tool", \
  "IsCustom": "true", \
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/whats-new.md
na ms.devlang: na Previously updated : 05/06/2021 Last updated : 05/12/2021
Azure NetApp Files is updated regularly. This article provides a summary about t
## May 2021
+* Azure NetApp Files Application Consistent Snapshot tool [(AzAcSnap)](azacsnap-introduction.md) is now generally available.
+
+ AzAcSnap is a command-line tool that enables you to simplify data protection for third-party databases (SAP HANA) in Linux environments (for example, SUSE and RHEL). See [Release Notes for AzAcSnap](azacsnap-release-notes.md) for the latest changes about the tool.
* [Support for capacity pool billing tags](manage-billing-tags.md)

  Azure NetApp Files now supports billing tags to help you cross-reference cost with business units or other internal consumers. Billing tags are assigned at the capacity pool level and not volume level, and they appear on the customer invoice.
azure-portal Azure Portal Dashboards https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-portal/azure-portal-dashboards.md
Title: Create a dashboard in the Azure portal
description: This article describes how to create and customize a dashboard in the Azure portal. ms.assetid: ff422f36-47d2-409b-8a19-02e24b03ffe7 Previously updated : 05/06/2021 Last updated : 05/12/2021 # Create a dashboard in the Azure portal
If you select this icon, you can pin the tile to an existing private or shared d
:::image type="content" source="media/azure-portal-dashboards/dashboard-pin-pane.png" alt-text="Screenshot of Pin to dashboard options.":::
+### Copy a tile to a new dashboard
+
+If you want to reuse a tile on a different dashboard, you can copy it from one dashboard to another. To do so, select the context menu in the upper right corner and then select **Copy**.
++
+You can then select whether to copy the tile to an existing private or shared dashboard, or create a copy of the tile within the dashboard you're already working in. You can also create a new dashboard that includes a copy of the tile by selecting **Create new**.
### Resize or rearrange tiles

To change the size of a tile or to rearrange the tiles on a dashboard, follow these steps:
azure-resource-manager Create Custom Provider https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/custom-providers/create-custom-provider.md
Use the `create` command to create or update a custom resource provider. This ex
```azurecli-interactive
az custom-providers resource-provider create --resource-group $rgName --name $funcName \
--action name=ping endpoint=https://myTestSite.azurewebsites.net/api/{requestPath} routing_type=Proxy \
---resource-type name=users endpoint=https://myTestSite.azurewebsites.net/api{requestPath} routing_type="Proxy, Cache"
+--resource-type name=users endpoint=https://myTestSite.azurewebsites.net/api/{requestPath} routing_type="Proxy, Cache"
```

```json
az custom-providers resource-provider create --resource-group $rgName --name $fu
"resourceTypes": [ {
- "endpoint": "https://myTestSite.azurewebsites.net/api{requestPath}",
+ "endpoint": "https://myTestSite.azurewebsites.net/api/{requestPath}",
"name": "users", "routingType": "Proxy, Cache" }
azure-resource-manager Move Support Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/move-support-resources.md
Jump to a resource provider namespace:
> | privateendpointredirectmaps | No | No | No |
> | privateendpoints | No | No | No |
> | privatelinkservices | No | No | No |
-> | publicipaddresses | Yes - Basic SKU<br>Yes - Standard SKU | Yes - Basic SKU<br>No - Standard SKU | Yes<br/><br/> Use [Azure Resource Mover](../../resource-mover/tutorial-move-region-virtual-machines.md) to move public IP addresses. |
+> | publicipaddresses | Yes - Basic SKU<br>Yes - Standard SKU | Yes - Basic SKU<br>No - Standard SKU | Yes<br/><br/> Use [Azure Resource Mover](../../resource-mover/tutorial-move-region-virtual-machines.md) to move public IP address configurations (IP addresses are not retained). |
> | publicipprefixes | Yes | Yes | No |
> | routefilters | No | No | No |
> | routetables | Yes | Yes | No |
azure-resource-manager Define Resource Dependency https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/define-resource-dependency.md
Azure Resource Manager evaluates the dependencies between resources, and deploys
Within your Azure Resource Manager template (ARM template), the `dependsOn` element enables you to define one resource as a dependent on one or more resources. Its value is a JavaScript Object Notation (JSON) array of strings, each of which is a resource name or ID. The array can include resources that are [conditionally deployed](conditional-resource-deployment.md). When a conditional resource isn't deployed, Azure Resource Manager automatically removes it from the required dependencies.
-The following example shows a network interface that depends on a virtual network, network security group, and public IP address. For the full template, see [the quickstart template for a Linux VM](https://github.com/Azure/azure-quickstart-templates/blob/master/101-vm-simple-linux/azuredeploy.json).
+The following example shows a network interface that depends on a virtual network, network security group, and public IP address. For the full template, see [the quickstart template for a Linux VM](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.compute/vm-simple-linux/azuredeploy.json).
```json
{
azure-resource-manager Template Tutorial Create Templates With Dependent Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/template-tutorial-create-templates-with-dependent-resources.md
Azure Quickstart Templates is a repository for ARM templates. Instead of creatin
2. In **File name**, paste the following URL:

```url
- https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/101-vm-simple-windows/azuredeploy.json
+ https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.compute/vm-simple-windows/azuredeploy.json
```

3. Select **Open** to open the file.
azure-resource-manager Template Tutorial Deploy Vm Extensions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/template-tutorial-deploy-vm-extensions.md
Azure Quickstart Templates is a repository for ARM templates. Instead of creatin
1. In the **File name** box, paste the following URL:

```url
- https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/101-vm-simple-windows/azuredeploy.json
+ https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.compute/vm-simple-windows/azuredeploy.json
```

1. To open the file, select **Open**.
azure-resource-manager Template Tutorial Use Conditions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/template-tutorial-use-conditions.md
Azure Quickstart Templates is a repository for ARM templates. Instead of creatin
1. In **File name**, paste the following URL:

```url
- https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/101-vm-simple-windows/azuredeploy.json
+ https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.compute/vm-simple-windows/azuredeploy.json
```

1. Select **Open** to open the file.
azure-resource-manager Template Tutorial Use Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/template-tutorial-use-key-vault.md
Azure Quickstart Templates is a repository for ARM templates. Instead of creatin
1. In the **File name** box, paste the following URL:

```url
- https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/101-vm-simple-windows/azuredeploy.json
+ https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.compute/vm-simple-windows/azuredeploy.json
```

1. Select **Open** to open the file. The scenario is the same as the one that's used in [Tutorial: Create ARM templates with dependent resources](./template-tutorial-create-templates-with-dependent-resources.md).
Azure Quickstart Templates is a repository for ARM templates. Instead of creatin
1. Repeat steps 1-3 to open the following URL, and then save the file as *azuredeploy.parameters.json*.

```url
- https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/101-vm-simple-windows/azuredeploy.parameters.json
+ https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.compute/vm-simple-windows/azuredeploy.parameters.json
```

## Edit the parameters file
azure-signalr Signalr Quickstart Azure Signalr Service Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-signalr/signalr-quickstart-azure-signalr-service-arm-template.md
This quickstart describes how to use an Azure Resource Manager template (ARM tem
If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal once you sign in.
-[:::image type="content" source="../media/template-deployments/deploy-to-azure.svg" alt-text="Button to deploy Azure SignalR Service to Azure using an ARM template in the Azure portal.":::](https://portal.azure.com/#create/Microsoft.Template/uri/https%3a%2f%2fraw.githubusercontent.com%2fAzure%2fazure-quickstart-templates%2fmaster%2f101-signalr%2fazuredeploy.json)
+[:::image type="content" source="../media/template-deployments/deploy-to-azure.svg" alt-text="Button to deploy Azure SignalR Service to Azure using an ARM template in the Azure portal.":::](https://portal.azure.com/#create/Microsoft.Template/uri/https%3a%2f%2fraw.githubusercontent.com%2fAzure%2fazure-quickstart-templates%2fmaster%2fquickstarts%2fmicrosoft.signalrservice%2fsignalr%2fazuredeploy.json)
## Prerequisites
An Azure account with an active subscription. [Create one for free](https://azur
The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/101-signalr/). The template defines one Azure resource:
The template defines one Azure resource:
Select the following link to deploy the Azure SignalR Service using the ARM template in the Azure portal:
-[:::image type="content" source="../media/template-deployments/deploy-to-azure.svg" alt-text="Button to deploy Azure SignalR Service to Azure using the ARM template in the Azure portal.":::](https://portal.azure.com/#create/Microsoft.Template/uri/https%3a%2f%2fraw.githubusercontent.com%2fAzure%2fazure-quickstart-templates%2fmaster%2f101-signalr%2fazuredeploy.json)
+[:::image type="content" source="../media/template-deployments/deploy-to-azure.svg" alt-text="Button to deploy Azure SignalR Service to Azure using the ARM template in the Azure portal.":::](https://portal.azure.com/#create/Microsoft.Template/uri/https%3a%2f%2fraw.githubusercontent.com%2fAzure%2fazure-quickstart-templates%2fmaster%2fquickstarts%2fmicrosoft.signalrservice%2fsignalr%2fazuredeploy.json)
On the **Deploy an Azure SignalR Service** page:
$paramObjHashTable = @{
Write-Verbose "Run New-AzResourceGroupDeployment to create an Azure SignalR Service using an ARM template" -Verbose New-AzResourceGroupDeployment -ResourceGroupName $resourceGroupName ` -TemplateParameterObject $paramObjHashTable `
- -TemplateUri https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/101-signalr/azuredeploy.json
+ -TemplateUri https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.signalrservice/signalr/azuredeploy.json
Read-Host "Press [ENTER] to continue" ```
params='name='$serviceName' location='$serviceLocation' pricingTier='$priceTier'
echo "CREATE RESOURCE GROUP: az group create --name $resourceGroupName --location $resourceGroupRegion" && az group create --name $resourceGroupName --location $resourceGroupRegion && echo "RUN az deployment group create, which creates an Azure SignalR Service using an ARM template" &&
-az deployment group create --resource-group $resourceGroupName --parameters $params --template-uri https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/101-signalr/azuredeploy.json &&
+az deployment group create --resource-group $resourceGroupName --parameters $params --template-uri https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.signalrservice/signalr/azuredeploy.json &&
read -p "Press [ENTER] to continue: " ```
azure-sql Authentication Aad Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/authentication-aad-configure.md
Previously updated : 08/17/2020 Last updated : 05/11/2021

# Configure and manage Azure AD authentication with Azure SQL
For more information about CLI commands, see [az sql server](/cli/azure/sql/serv
> [!NOTE]
> You can also provision an Azure Active Directory Administrator by using the REST APIs. For more information, see [Service Management REST API Reference and Operations for Azure SQL Database](/rest/api/sql/).
+## Set or unset the Azure AD admin using service principals
+
+If you are planning to have the service principal set or unset an Azure AD admin for Azure SQL, an additional API Permission is necessary. The [Directory.Read.All](/graph/permissions-reference#application-permissions-18) Application API permission will need to be added to your application in Azure AD.
+
+> [!NOTE]
+> This section on setting the Azure AD admin only applies to using PowerShell or CLI commands, as you cannot use the Azure portal as an Azure AD service principal.
++
+The service principal will also need the [**SQL Server Contributor**](../../role-based-access-control/built-in-roles.md#sql-server-contributor) role for SQL Database, or the [**SQL Managed Instance Contributor**](../../role-based-access-control/built-in-roles.md#sql-managed-instance-contributor) role for SQL Managed Instance.
+
+For more information, see [service principals (Azure AD applications)](authentication-aad-service-principal.md).
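As a rough sketch of the end-to-end flow (names and IDs are placeholders, and the service principal is assumed to already hold the permissions above):

```powershell
# Sign in as the service principal: user name = application (client) ID, password = client secret.
$credential = Get-Credential
Connect-AzAccount -ServicePrincipal -Credential $credential -Tenant "<tenant-id>"

# Set an Azure AD admin on the logical server (Az.Sql module).
Set-AzSqlServerActiveDirectoryAdministrator -ResourceGroupName "myRG" `
    -ServerName "myserver" -DisplayName "DBAGroup"

# Unset the Azure AD admin again.
Remove-AzSqlServerActiveDirectoryAdministrator -ResourceGroupName "myRG" -ServerName "myserver"
```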
+

## Configure your client computers

On all client machines, from which your applications or users connect to SQL Database or Azure Synapse using Azure AD identities, you must install the following software:
azure-sql Authentication Aad Guest Users https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/authentication-aad-guest-users.md
--++ Previously updated : 07/27/2020 Last updated : 05/10/2021 # Create Azure AD guest users and set as an Azure AD admin > [!NOTE] > This article is in **public preview**.
azure-sql Authentication Aad Service Principal Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/authentication-aad-service-principal-tutorial.md
Title: Create Azure AD users using service principals
-description: This tutorial walks you through creating an Azure AD user with an Azure AD applications (service principals) in Azure SQL Database and Azure Synapse Analytics
+description: This tutorial walks you through creating an Azure AD user with an Azure AD application (service principal) in Azure SQL Database
- Previously updated : 02/11/2021 Last updated : 05/10/2021 # Tutorial: Create Azure AD users using Azure AD applications
-> [!NOTE]
-> This article is in **public preview**. For more information, see [Azure Active Directory service principal with Azure SQL](authentication-aad-service-principal.md). This article will use Azure SQL Database to demonstrate the necessary tutorial steps, but can be similarly applied to [Azure Synapse Analytics](../../synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-what-is.md).
-
-This article takes you through the process of creating Azure AD users in Azure SQL Database, using Azure service principals (Azure AD applications). This functionality already exists in Azure SQL Managed Instance, but is now being introduced in Azure SQL Database and Azure Synapse Analytics. To support this scenario, an Azure AD Identity must be generated and assigned to the Azure SQL logical server.
+This article takes you through the process of creating Azure AD users in Azure SQL Database, using Azure service principals (Azure AD applications). This functionality already exists in Azure SQL Managed Instance, but is now being introduced in Azure SQL Database. To support this scenario, an Azure AD Identity must be generated and assigned to the Azure SQL logical server.
For more information on Azure AD authentication for Azure SQL, see the article [Use Azure Active Directory authentication](authentication-aad-overview.md).
In this tutorial, you learn how to:
## Prerequisites -- An existing [Azure SQL Database](single-database-create-quickstart.md) or [Azure Synapse Analytics](../../synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-what-is.md) deployment. We assume you have a working SQL Database for this tutorial.
+- An existing [Azure SQL Database](single-database-create-quickstart.md) deployment. We assume you have a working SQL Database for this tutorial.
- Access to an already existing Azure Active Directory. - [Az.Sql 2.9.0](https://www.powershellgallery.com/packages/Az.Sql/2.9.0) module or higher is needed when using PowerShell to set up an individual Azure AD application as Azure AD admin for Azure SQL. Ensure you are upgraded to the latest module.
For a similar approach on how to set the **Directory Readers** permission for SQ
## Create a service principal (an Azure AD application) in Azure AD
-1. Follow the guide here to [register your app and set permissions](active-directory-interactive-connect-azure-sql-db.md#register-your-app-and-set-permissions).
-
- Make sure to add the **Application permissions** as well as the **Delegated permissions**.
-
- :::image type="content" source="media/authentication-aad-service-principals-tutorial/aad-apps.png" alt-text="Screenshot showing the App registrations page for Azure Active Directory. An app with the Display name AppSP is highlighted.":::
-
- :::image type="content" source="media/authentication-aad-service-principals-tutorial/aad-app-registration-api-permissions.png" alt-text="api-permissions":::
+1. Follow the guide here to [register your app](active-directory-interactive-connect-azure-sql-db.md#register-your-app-and-set-permissions).
2. You'll also need to create a client secret for signing in. Follow the guide here to [upload a certificate or create a secret for signing in](../../active-directory/develop/howto-create-service-principal-portal.md#authentication-two-options).
In this tutorial, we'll be using *AppSP* as our main service principal, and *mya
For more information on how to create an Azure AD application, see the article [How to: Use the portal to create an Azure AD application and service principal that can access resources](../../active-directory/develop/howto-create-service-principal-portal.md).
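If you prefer to script the registration, a one-line sketch with Az PowerShell (the display name *AppSP* matches this tutorial; everything else here is an assumption):

```powershell
# Creates both the app registration and its service principal object.
$sp = New-AzADServicePrincipal -DisplayName "AppSP"
```

Depending on your Az.Resources version, a client secret may be generated for you; otherwise create one as described in the linked guide.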
-### Permissions required to set or unset the Azure AD admin
-
-In order for the service principal to set or unset an Azure AD admin for Azure SQL, an additional API Permission is necessary. The [Directory.Read.All](/graph/permissions-reference#application-permissions-18) Application API permission will need to be added to your application in Azure AD.
--
-The service principal will also need the [**SQL Server Contributor**](../../role-based-access-control/built-in-roles.md#sql-server-contributor) role for SQL Database, or the [**SQL Managed Instance Contributor**](../../role-based-access-control/built-in-roles.md#sql-managed-instance-contributor) role for SQL Managed Instance.
-
-> [!NOTE]
-> Although Azure AD Graph API is being deprecated, the **Directory.Reader.All** permission still applies to this tutorial. The Microsoft Graph API does not apply to this tutorial.
- ## Create the service principal user in Azure SQL Database Once a service principal is created in Azure AD, create the user in SQL Database. You'll need to connect to your SQL Database with a valid login with permissions to create users in the database.
Once a service principal is created in Azure AD, create the user in SQL Database
$conn.Close() ```
- Alternatively, you can use the code sample in the blog, [Azure AD Service Principal authentication to SQL DB - Code Sample](https://techcommunity.microsoft.com/t5/azure-sql-database/azure-ad-service-principal-authentication-to-sql-db-code-sample/ba-p/481467). Modify the script to execute a DDL statement `CREATE USER [myapp] FROM EXTERNAL PROVIDER`. The same script can be used to create a regular Azure AD user a group in SQL Database.
+ Alternatively, you can use the code sample in the blog, [Azure AD Service Principal authentication to SQL DB - Code Sample](https://techcommunity.microsoft.com/t5/azure-sql-database/azure-ad-service-principal-authentication-to-sql-db-code-sample/ba-p/481467). Modify the script to execute a DDL statement `CREATE USER [myapp] FROM EXTERNAL PROVIDER`. The same script can be used to create a regular Azure AD user or a group in SQL Database.
2. Check if the user *myapp* exists in the database by executing the following command:
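As a sketch of such a check, reusing the SqlClient pattern from the tutorial's PowerShell snippets (the server, database, and token acquisition are placeholders):

```powershell
# Query sys.database_principals for the service principal user (hypothetical names).
$conn = New-Object System.Data.SqlClient.SqlConnection
$conn.ConnectionString = "Data Source=myserver.database.windows.net;Initial Catalog=mydb;"
$conn.AccessToken = $accessToken   # an Azure AD access token obtained earlier
$conn.Open()

$cmd = $conn.CreateCommand()
$cmd.CommandText = "SELECT name, type_desc FROM sys.database_principals WHERE name = 'myapp'"
$reader = $cmd.ExecuteReader()
while ($reader.Read()) {
    Write-Output "$($reader['name']) : $($reader['type_desc'])"
}
$conn.Close()
```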
Once a service principal is created in Azure AD, create the user in SQL Database
- [Azure AD Service Principal authentication to SQL DB - Code Sample](https://techcommunity.microsoft.com/t5/azure-sql-database/azure-ad-service-principal-authentication-to-sql-db-code-sample/ba-p/481467) - [Application and service principal objects in Azure Active Directory](../../active-directory/develop/app-objects-and-service-principals.md) - [Create an Azure service principal with Azure PowerShell](/powershell/azure/create-azure-service-principal-azureps)-- [Directory Readers role in Azure Active Directory for Azure SQL](authentication-aad-directory-readers-role.md)
+- [Directory Readers role in Azure Active Directory for Azure SQL](authentication-aad-directory-readers-role.md)
azure-sql Authentication Aad Service Principal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/authentication-aad-service-principal.md
Title: Azure Active Directory service principal with Azure SQL
-description: Azure AD Applications (service principals) support Azure AD user creation in Azure SQL Database, Azure SQL Managed Instance, and Azure Synapse Analytics
+description: Use Azure AD applications (service principals) to support Azure AD user creation in Azure SQL Database and Azure SQL Managed Instance
- Previously updated : 02/11/2021 Last updated : 05/11/2021 # Azure Active Directory service principal with Azure SQL
-Support for Azure Active Directory (Azure AD) user creation in Azure SQL Database (SQL DB) and [Azure Synapse Analytics](../../synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-what-is.md) on behalf of Azure AD applications (service principals) are currently in **public preview**.
+Azure Active Directory (Azure AD) supports user creation in Azure SQL Database (SQL DB) on behalf of Azure AD applications (service principals).
> [!NOTE]
> This functionality is already supported for SQL Managed Instance.

## Service principal (Azure AD applications) support
-This article applies to applications that are integrated with Azure AD, and are part of Azure AD registration. These applications often need authentication and authorization access to Azure SQL to perform various tasks. This feature in **public preview** now allows service principals to create Azure AD users in SQL Database and Azure Synapse. There was a limitation preventing Azure AD object creation on behalf of Azure AD applications that was removed.
+This article applies to applications that are integrated with Azure AD, and are part of Azure AD registration. These applications often need authentication and authorization access to Azure SQL to perform various tasks. This feature allows service principals to create Azure AD users in SQL Database. A limitation that previously prevented Azure AD object creation on behalf of Azure AD applications has been removed.
When an Azure AD application is registered using the Azure portal or a PowerShell command, two objects are created in the Azure AD tenant:
When an Azure AD application is registered using the Azure portal or a PowerShel
For more information on Azure AD applications, see [Application and service principal objects in Azure Active Directory](../../active-directory/develop/app-objects-and-service-principals.md) and [Create an Azure service principal with Azure PowerShell](/powershell/azure/create-azure-service-principal-azureps).
-SQL Database, Azure Synapse, and SQL Managed Instance support the following Azure AD objects:
+SQL Database and SQL Managed Instance support the following Azure AD objects:
- Azure AD users (managed, federated, and guest) - Azure AD groups (managed and federated) - Azure AD applications
-The T-SQL command `CREATE USER [Azure_AD_Object] FROM EXTERNAL PROVIDER` on behalf of an Azure AD application is now supported for SQL Database and Azure Synapse.
+The T-SQL command `CREATE USER [Azure_AD_Object] FROM EXTERNAL PROVIDER` on behalf of an Azure AD application is now supported for SQL Database.
## Functionality of Azure AD user creation using service principals
-Supporting this functionality is useful in Azure AD application automation processes where Azure AD objects are created and maintained in SQL Database and Azure Synapse without human interaction. Service principals can be an Azure AD admin for the SQL logical server, as part of a group or an individual user. The application can automate Azure AD object creation in SQL Database and Azure Synapse when executed as a system administrator, and does not require any additional SQL privileges. This allows for a full automation of a database user creation. This feature is also supported for system-assigned managed identity and user-assigned managed identity. For more information, see [What are managed identities for Azure resources?](../../active-directory/managed-identities-azure-resources/overview.md)
+Supporting this functionality is useful in Azure AD application automation processes where Azure AD objects are created and maintained in SQL Database without human interaction. Service principals can be an Azure AD admin for the SQL logical server, as part of a group or as an individual user. The application can automate Azure AD object creation in SQL Database when executed as a system administrator, and does not require any additional SQL privileges. This allows for full automation of database user creation. This feature also supports system-assigned and user-assigned managed identities, which can be created as users in SQL Database on behalf of service principals. For more information, see [What are managed identities for Azure resources?](../../active-directory/managed-identities-azure-resources/overview.md)
## Enable service principals to create Azure AD users
-To enable an Azure AD object creation in SQL Database and Azure Synapse on behalf of an Azure AD application, the following settings are required:
+To enable an Azure AD object creation in SQL Database on behalf of an Azure AD application, the following settings are required:
1. Assign the server identity. The assigned server identity represents the Managed Service Identity (MSI). Currently, the server identity for Azure SQL does not support User Managed Identity (UMI). - For a new Azure SQL logical server, execute the following PowerShell command:
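A sketch of both variants with Az.Sql (names are placeholders; note that `New-AzSqlServer` also requires SQL admin credentials):

```powershell
# New logical server, created with a system-assigned identity.
New-AzSqlServer -ResourceGroupName "myRG" -ServerName "myserver" -Location "eastus" `
    -SqlAdministratorCredentials (Get-Credential) -AssignIdentity

# Existing logical server: assign the identity in place.
Set-AzSqlServer -ResourceGroupName "myRG" -ServerName "myserver" -AssignIdentity

# Verify that the identity is assigned.
(Get-AzSqlServer -ResourceGroupName "myRG" -ServerName "myserver").Identity
```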
To enable an Azure AD object creation in SQL Database and Azure Synapse on behal
- To check if the server identity is assigned to the server, execute the Get-AzSqlServer command. > [!NOTE]
- > Server identity can be assigned using CLI commands as well. For more information, see [az sql server create](/cli/azure/sql/server#az_sql_server_create) and [az sql server update](/cli/azure/sql/server#az_sql_server_update).
+ > Server identity can be assigned using REST API and CLI commands as well. For more information, see [az sql server create](/cli/azure/sql/server#az_sql_server_create), [az sql server update](/cli/azure/sql/server#az_sql_server_update), and [Servers - REST API](/rest/api/sql/2020-08-01-preview/servers).
2. Grant the Azure AD [**Directory Readers**](../../active-directory/roles/permissions-reference.md#directory-readers) permission to the server identity created or assigned to the server. - To grant this permission, follow the description used for SQL Managed Instance that is available in the following article: [Provision Azure AD admin (SQL Managed Instance)](authentication-aad-configure.md?tabs=azure-powershell#provision-azure-ad-admin-sql-managed-instance)
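One way to script this grant uses the AzureAD PowerShell module. A sketch, assuming you run it with a Global Administrator or Privileged Roles Administrator account and that the server identity is discoverable by the server name:

```powershell
Connect-AzureAD

# Find the Directory Readers role (it must already be activated in the tenant)
# and the server's managed identity (hypothetical server name).
$role = Get-AzureADDirectoryRole | Where-Object { $_.DisplayName -eq "Directory Readers" }
$serverIdentity = Get-AzureADServicePrincipal -SearchString "myserver"

# Add the server identity to the role.
Add-AzureADDirectoryRoleMember -ObjectId $role.ObjectId -RefObjectId $serverIdentity.ObjectId
```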
To enable an Azure AD object creation in SQL Database and Azure Synapse on behal
> [!IMPORTANT] > Steps 1 and 2 must be executed in the above order. First, create or assign the server identity, followed by granting the [**Directory Readers**](../../active-directory/roles/permissions-reference.md#directory-readers) permission. Omitting one of these steps, or both will cause an execution error during an Azure AD object creation in Azure SQL on behalf of an Azure AD application. >
-> If you are using the service principal to set or unset the Azure AD admin, the application must also have the [Directory.Read.All](/graph/permissions-reference#application-permissions-18) Application API permission in Azure AD. For more information on [permissions required to set an Azure AD admin](authentication-aad-service-principal-tutorial.md#permissions-required-to-set-or-unset-the-azure-ad-admin), and step by step instructions to create an Azure AD user on behalf of an Azure AD application, see [Tutorial: Create Azure AD users using Azure AD applications](authentication-aad-service-principal-tutorial.md).
->
> In **public preview**, you can assign the **Directory Readers** role to a group in Azure AD. The group owners can then add the managed identity as a member of this group, which would bypass the need for a **Global Administrator** or **Privileged Roles Administrator** to grant the **Directory Readers** role. For more information on this feature, see [Directory Readers role in Azure Active Directory for Azure SQL](authentication-aad-directory-readers-role.md).
-## Troubleshooting and limitations for public preview
+## Troubleshooting and limitations
- When creating Azure AD objects in Azure SQL on behalf of an Azure AD application without enabling server identity and granting **Directory Readers** permission, the operation will fail with the following possible errors. The example error below is for a PowerShell command execution to create a SQL Database user `myapp` in the article [Tutorial: Create Azure AD users using Azure AD applications](authentication-aad-service-principal-tutorial.md). - `Exception calling "ExecuteNonQuery" with "0" argument(s): "'myapp' is not a valid login or you do not have permission. Cannot find the user 'myapp', because it does not exist, or you do not have permission."`
- - `Exception calling "ExecuteNonQuery" with "0" argument(s): "Principal 'myapp' could not be resolved.`
- - `User or server identity does not have permission to read from Azure Active Directory.`
+ - `Exception calling "ExecuteNonQuery" with "0" argument(s): "Principal 'myapp' could not be resolved. Error message:
+ 'Server identity is not configured. Please follow the steps in "Assign an Azure AD identity to your server and add
+ Directory Reader permission to your identity" (https://aka.ms/sqlaadsetup)'"`
- For the above error, follow the steps to [Assign an identity to the Azure SQL logical server](authentication-aad-service-principal-tutorial.md#assign-an-identity-to-the-azure-sql-logical-server) and [Assign Directory Readers permission to the SQL logical server identity](authentication-aad-service-principal-tutorial.md#assign-directory-readers-permission-to-the-sql-logical-server-identity).
- > [!NOTE]
- > The error messages indicated above will be changed before the feature GA to clearly identify the missing setup requirement for Azure AD application support.
-- Setting the Azure AD application as an Azure AD admin for SQL Managed Instance is only supported using the CLI command, and PowerShell command with [Az.Sql 2.9.0](https://www.powershellgallery.com/packages/Az.Sql/2.9.0) or higher. For more information, see the [az sql mi ad-admin create](/cli/azure/sql/mi/ad-admin#az_sql_mi_ad_admin_create) and [Set-AzSqlInstanceActiveDirectoryAdministrator](/powershell/module/az.sql/set-azsqlinstanceactivedirectoryadministrator) commands.
- - If you want to use the Azure portal for SQL Managed Instance to set the Azure AD admin, a possible workaround is to create an Azure AD group. Then add the service principal (Azure AD application) to this group, and set this group as an Azure AD admin for the SQL Managed Instance.
- - Setting the service principal (Azure AD application) as an Azure AD admin for SQL Database and Azure Synapse is supported using the Azure portal, [PowerShell](authentication-aad-configure.md?tabs=azure-powershell#powershell-for-sql-database-and-azure-synapse), and [CLI](authentication-aad-configure.md?tabs=azure-cli#powershell-for-sql-database-and-azure-synapse) commands.
+ - Setting the service principal (Azure AD application) as an Azure AD admin for SQL Database is supported using the Azure portal, [PowerShell](authentication-aad-configure.md?tabs=azure-powershell#powershell-for-sql-database-and-azure-synapse), [REST API](/rest/api/sql/2020-08-01-preview/servers), and [CLI](authentication-aad-configure.md?tabs=azure-cli#powershell-for-sql-database-and-azure-synapse) commands.
- Using an Azure AD application with service principal from another Azure AD tenant will fail when accessing SQL Database or SQL Managed Instance created in a different tenant. A service principal assigned to this application must be from the same tenant as the SQL logical server or Managed Instance. - [Az.Sql 2.9.0](https://www.powershellgallery.com/packages/Az.Sql/2.9.0) module or higher is needed when using PowerShell to set up an individual Azure AD application as Azure AD admin for Azure SQL. Ensure you are upgraded to the latest module.
azure-sql Database Import Export Azure Services Off https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/database-import-export-azure-services-off.md
+ms.devlang:
-+ Last updated 01/08/2020 # Import or export an Azure SQL Database without allowing Azure services to access the server
Create an Azure virtual machine by selecting the **Deploy to Azure** button.
This template lets you deploy a simple Windows virtual machine with a few different options for the Windows version, using the latest patched version. It deploys an A2 size VM in the resource group location and returns the fully qualified domain name of the VM. <br><br>
-[![Image showing a button labeled "Deploy to Azure".](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/1-CONTRIBUTION-GUIDE/images/deploytoazure.png)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2F101-vm-simple-windows%2Fazuredeploy.json)
+[![Image showing a button labeled "Deploy to Azure".](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/1-CONTRIBUTION-GUIDE/images/deploytoazure.png)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.compute%2Fvm-simple-windows%2Fazuredeploy.json)
-For more information, see [Very simple deployment of a Windows VM](https://github.com/Azure/azure-quickstart-templates/tree/master/101-vm-simple-windows).
+For more information, see [Very simple deployment of a Windows VM](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/vm-simple-windows).
## Connect to the virtual machine
The following steps show you how to connect to your virtual machine using a remo
1. After deployment completes, go to the virtual machine resource.
- ![Screenshot shows a virtual machine Overview page with a Connect button.](./media/database-import-export-azure-services-off/vm.png)
+ ![Screenshot shows a virtual machine Overview page with a Connect button.](./media/database-import-export-azure-services-off/vm.png)
2. Select **Connect**. A Remote Desktop Protocol file (.rdp file) form appears with the public IP address and port number for the virtual machine.
- ![RDP form](./media/database-import-export-azure-services-off/rdp.png)
+ ![RDP form](./media/database-import-export-azure-services-off/rdp.png)
3. Select **Download RDP File**.
azure-vmware Rotate Cloudadmin Credentials https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/rotate-cloudadmin-credentials.md
In this step, you'll verify that the HCX Connector has the updated credentials.
2. On the VMware HCX Dashboard, select **Site Pairing**.
- :::image type="content" source="media/reset-vsphere-credentials/hcx-site-pairing.png" alt-text="Screenshot of VMware HCX Dashboard with Site Pairing highlighted.":::
+ :::image type="content" source="media/rotate-cloudadmin-credentials/hcx-site-pairing.png" alt-text="Screenshot of VMware HCX Dashboard with Site Pairing highlighted.":::
3. Select the correct connection to Azure VMware Solution and select **Edit Connection**.
azure-web-pubsub Quickstart Serverless https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-web-pubsub/quickstart-serverless.md
While the service is deploying, let's switch to working with code. Clone the [sa
In *local.settings.json*, you need to make these changes and then save the file.
- Replace the placeholder *<connection-string>* with the real one copied from the **Azure portal** for the **`WebPubSubConnectionString`** setting.
- For the **`AzureWebJobsStorage`** setting, this is required because [Azure Functions requires an Azure Storage account](https://docs.microsoft.com/azure/azure-functions/storage-considerations).
- - If you have Azure storage emulator run in local, keep the original settings of "UseDevelopmentStorage=true".
+ - If you have the Azure Storage Emulator running locally, keep the original setting of "UseDevelopmentStorage=true".
- If you have an Azure storage connection string, replace the value with it. (A sketch of the finished file follows below.)

JavaScript functions are organized into folders. In each folder are two files: `function.json` defines the bindings that are used in the function, and `index.js` is the body of the function. There are several triggered functions in this function app:
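Putting those settings together, the finished *local.settings.json* might look like the following sketch (values are placeholders; `FUNCTIONS_WORKER_RUNTIME: node` is an assumption based on the JavaScript sample):

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "WebPubSubConnectionString": "<connection-string>",
    "FUNCTIONS_WORKER_RUNTIME": "node"
  }
}
```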
backup Encryption At Rest With Cmk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/encryption-at-rest-with-cmk.md
Title: Encryption of backup data using customer-managed keys description: Learn how Azure Backup allows you to encrypt your backup data using customer-managed keys (CMK). Previously updated : 04/19/2021 Last updated : 05/12/2021 # Encryption of backup data using customer-managed keys
You now need to permit the Recovery Services vault to access the Azure Key Vault
1. Select **Save** to save changes made to the access policy of the Azure Key Vault.
+>[!NOTE]
+>You can also assign an RBAC role to the Recovery Services vault that contains the above mentioned permissions, such as the _[Key Vault Crypto Officer](../key-vault/general/rbac-guide.md#azure-built-in-roles-for-key-vault-data-plane-operations)_ role.<br><br>These roles may contain additional permissions other than the ones discussed above.
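Scripted, that role assignment could look like the following sketch (hypothetical names; assumes the Recovery Services vault has a system-assigned managed identity):

```powershell
# Get the vault's managed identity and grant it the role on the Key Vault.
$rsVault = Get-AzRecoveryServicesVault -ResourceGroupName "myRG" -Name "myRecoveryServicesVault"
New-AzRoleAssignment -ObjectId $rsVault.Identity.PrincipalId `
    -RoleDefinitionName "Key Vault Crypto Officer" `
    -Scope (Get-AzKeyVault -VaultName "myKeyVault").ResourceId
```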
+

## Enable soft-delete and purge protection on the Azure Key Vault

You need to **enable soft delete and purge protection** on your Azure Key Vault that stores your encryption key. You can do this from the Azure Key Vault UI as shown below. (Alternatively, these properties can be set while creating the Key Vault.) Read more about these Key Vault properties [here](../key-vault/general/soft-delete-overview.md).
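Both properties can also be set from PowerShell. A sketch with Az.KeyVault (hypothetical names; purge protection is irreversible once enabled, and soft delete is on by default for newly created vaults):

```powershell
# Enable purge protection on an existing vault.
Update-AzKeyVault -VaultName "myKeyVault" -ResourceGroupName "myRG" -EnablePurgeProtection
```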
batch Quick Create Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/quick-create-template.md
You need a Batch account to create compute resources (pools of compute nodes) an
If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
-[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2F101-batchaccount-with-storage%2Fazuredeploy.json)
+[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.batch%2Fbatchaccount-with-storage%2Fazuredeploy.json)
## Prerequisites
You must have an active Azure subscription.
The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/101-batchaccount-with-storage/). Two Azure resources are defined in the template:
1. Select the following image to sign in to Azure and open a template. The template creates an Azure Batch account and a storage account.
- [![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2F101-batchaccount-with-storage%2Fazuredeploy.json)
+ [![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.batch%2Fbatchaccount-with-storage%2Fazuredeploy.json)
1. Select or enter the following values.
cognitive-services FAQ https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/FAQ.md
# Computer Vision API Frequently Asked Questions

> [!TIP]
-> If you can't find answers to your questions in this FAQ, try asking the Computer Vision API community on [StackOverflow](https://stackoverflow.com/questions/tagged/project-oxford+or+microsoft-cognitive) or contact Help and Support on UserVoice
+> If you can't find answers to your questions in this FAQ, try asking the Computer Vision API community on [StackOverflow](https://stackoverflow.com/questions/tagged/project-oxford+or+microsoft-cognitive) or contact Help and Support on [UserVoice](https://feedback.azure.com/forums/932041-azure-cognitive-services?category_id=395743)
cognitive-services Howtoanalyzevideo_Vision https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/Vision-API-How-to-Topics/HowtoAnalyzeVideo_Vision.md
The image-, voice-, video-, and text-understanding capabilities of VideoFrameAna
In this article, you learned how to run near real-time analysis on live video streams by using the Face and Computer Vision services. You also learned how you can use our sample code to get started.
-Feel free to provide feedback and suggestions in the [GitHub repository](https://github.com/Microsoft/Cognitive-Samples-VideoFrameAnalysis/). To provide broader API feedback, go to our UserVoice site.
+Feel free to provide feedback and suggestions in the [GitHub repository](https://github.com/Microsoft/Cognitive-Samples-VideoFrameAnalysis/). To provide broader API feedback, go to our [UserVoice](https://feedback.azure.com/forums/932041-azure-cognitive-services?category_id=395743) site.
cognitive-services Howtoanalyzevideo_Face https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Face/Face-API-How-to-Topics/HowtoAnalyzeVideo_Face.md
When you're ready to integrate, **reference the VideoFrameAnalyzer library from
In this guide, you learned how to run near-real-time analysis on live video streams using the Face, Computer Vision, and Emotion APIs, and how to use our sample code to get started.
-Feel free to provide feedback and suggestions in the [GitHub repository](https://github.com/Microsoft/Cognitive-Samples-VideoFrameAnalysis/) or, for broader API feedback, on our UserVoice site.
+Feel free to provide feedback and suggestions in the [GitHub repository](https://github.com/Microsoft/Cognitive-Samples-VideoFrameAnalysis/) or, for broader API feedback, on our [UserVoice](https://feedback.azure.com/forums/932041-azure-cognitive-services?category_id=395743) site.
## Related Topics - [How to Detect Faces in Image](HowtoDetectFacesinImage.md)
cognitive-services Call Center Transcription https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/call-center-transcription.md
Internally we are using the above technologies to support Microsoft customer cal
Some businesses are required to transcribe conversations in real-time. Real-time transcription can be used to identify keywords and trigger searches for content and resources relevant to the conversation, to monitor sentiment, to improve accessibility, or to provide translations for customers and agents who aren't native speakers.
-For scenarios that require real-time transcription, we recommend using the [Speech SDK](speech-sdk.md). Currently, speech-to-text is available in [more than 20 languages](language-support.md), and the SDK is available in C++, C#, Java, Python, Node.js, Objective-C, and JavaScript. Samples are available in each language on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk). For the latest news and updates, see [Release notes](releasenotes.md).
+For scenarios that require real-time transcription, we recommend using the [Speech SDK](speech-sdk.md). Currently, speech-to-text is available in [more than 20 languages](language-support.md), and the SDK is available in C++, C#, Java, Python, JavaScript, Objective-C, and Go. Samples are available in each language on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk). For the latest news and updates, see [Release notes](releasenotes.md).
Internally we are using the above technologies to analyze in real-time Microsoft customer calls as they happen, as illustrated in the following diagram.
Sample code is available on GitHub for each of the Speech service features. Thes
## Next steps

> [!div class="nextstepaction"]
-> [Get a Speech service subscription key for free](overview.md#try-the-speech-service-for-free)
+> [Get a Speech service subscription key for free](overview.md#try-the-speech-service-for-free)
cognitive-services Get Started Text To Speech https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/get-started-text-to-speech.md
keywords: text to speech
## Get position information
-Your project may need to know when a word is spoken by speech-to-text so that it can take specific action based on that timing. As an example, if you wanted to highlight words as they were spoken, you would need to know what to highlight, when to highlight it, and for how long to highlight it.
+Your project may need to know when a word is spoken by text-to-speech so that it can take specific action based on that timing.
+As an example, if you wanted to highlight words as they were spoken, you would need to know what to highlight, when to highlight it, and for how long to highlight it.
-You can accomplish this using the `WordBoundary` event available within `SpeechSynthesizer`. This event is raised at the beginning of each new spoken word and will provide a time offset within the spoken stream as well as a text offset within the input prompt.
+You can accomplish this using the `WordBoundary` event available within `SpeechSynthesizer`.
+This event is raised at the beginning of each new spoken word and will provide a time offset within the spoken stream and a text offset within the input prompt.
* `AudioOffset` reports the output audio's elapsed time between the beginning of synthesis and the start of the next word. This is measured in hundred-nanosecond units (HNS) with 10,000 HNS equivalent to 1 millisecond. * `WordOffset` reports the character position in the input string (original text or [SSML](speech-synthesis-markup.md)) immediately before the word that's about to be spoken.
cognitive-services How To Audio Content Creation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-audio-content-creation.md
The tool is based on [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md). It allows you to adjust text-to-speech output attributes in real time or batch synthesis, such as voice characters, voice styles, speaking speed, pronunciation, and prosody.
-You can have easy access to more than 150 pre-built voices across close to 50 different languages, including the state-of-the-art neural TTS voices, and your custom voice if you have built one.
+You have easy access to more than 150 pre-built voices across 60+ languages, including the state-of-the-art neural TTS voices, and your custom voice if you have built one.
-See the [video tutorial](https://www.youtube.com/watch?v=O1wIJ7mts_w) for Audio Content Creation.
+See the [video tutorial](https://youtu.be/ygApYuOOG6w) for Audio Content Creation.
## How to Get Started?
It takes a few moments to deploy your new Speech resource. Once the deployment i
### Step 3 - Log into the Audio Content Creation with your Azure account and Speech resource

1. After getting the Azure account and the Speech resource, you can log into [Audio Content Creation](https://aka.ms/audiocontentcreation) by clicking **Get started**.
-2. The **Speech resource** page will be shown to you. Select the Speech resource you want to work on. Click **Go to Studio** to start your audio creation. You can also create a new Speech resource here by clicking **Create new**. When you log into the Audio Content Creation tool for the Next time, we will link you directly to the audio work files under the current speech resource.
-3. You can modify your Speech resource at any time with the **Settings** option, located in the top nav.
+2. The home page lists all the products under Speech Studio. Click **Audio Content Creation** to start.
+3. The **Welcome to Speech Studio** page is shown so you can set up the Speech service. Select the Azure subscription and the Speech resource you want to work on, and click **Use resource** to complete the setup. The next time you log into the Audio Content Creation tool, we will link you directly to the audio work files under the current Speech resource. You can check your Azure subscription details and status in the [Azure portal](https://portal.azure.com/). If you do not have an available Speech resource and you are the owner or admin of an Azure subscription, you can create a new Speech resource in Speech Studio by clicking **Create a new resource**. If you have a user role for a certain Azure subscription, you may not have permission to create a new Speech resource; contact your admin to get access to the Speech resource.
+4. You can modify your Speech resource at any time with the **Settings** option, located in the top nav.
+5. If you want to switch directories, go to **Settings** or your profile.
## How to use the tool?
This diagram shows the steps it takes to fine-tune text-to-speech outputs. Use t
> [!NOTE] > Gated access is available for Custom Neural Voices, which allow you to create high-definition voices similar to natural-sounding speech. For additional details, see [Gating process](./text-to-speech.md).
-4. Click the **play** icon (a triangle) to preview the default synthesis output. Then improve the output by adjusting pronunciation, break, pitch, rate, intonation, voice style, and more. For a complete list of options, see [Speech Synthesis Markup Language](speech-synthesis-markup.md). Here is a [video](https://www.youtube.com/watch?v=O1wIJ7mts_w) to show how to fine-tune speech output with Audio Content Creation.
-5. Save and [export your tuned audio](#export-tuned-audio). When you save the tuning track in the system, you can continue to work and iterate on the output. When you're satisfied with the output, you can create an audio creation task with the export feature. You can observe the status of the export task and download the output for use with your apps and products.
+4. Select the content you want to preview and click the **play** icon (a triangle) to preview the default synthesis output. Note that if you make any changes to the text, you need to click the **Stop** icon and then the **play** icon again to regenerate the audio with the changed script.
+5. Improve the output by adjusting pronunciation, break, pitch, rate, intonation, voice style, and more. For a complete list of options, see [Speech Synthesis Markup Language](speech-synthesis-markup.md). Here is a [video](https://youtu.be/ygApYuOOG6w) to show how to fine-tune speech output with Audio Content Creation.
+6. Save and [export your tuned audio](#export-tuned-audio). When you save the tuning track in the system, you can continue to work and iterate on the output. When you're satisfied with the output, you can create an audio creation task with the export feature. You can observe the status of the export task and download the output for use with your apps and products.
## Create an audio tuning file
There are two ways to get your content into the Audio Content Creation tool.
**Option 1:**
-1. Click the **New file** icon in the upper-right to create a new audio tuning file.
+1. Click **New** > **file** to create a new audio tuning file.
2. Type or paste your content into the editing window. Each file can contain up to 20,000 characters. If your script is longer than 20,000 characters, you can use Option 2 to automatically split your content into multiple files.
3. Don't forget to save.
After you've reviewed your audio output and are satisfied with your tuning and a
If more than one user wants to use Audio Content Creation, you can grant user access to the Azure subscription and the speech resource. If you add a user to an Azure subscription, the user can access all the resources under the Azure subscription. But if you only add a user to a speech resource, the user will only have access to the speech resource, and cannot access other resources under this Azure subscription. A user with access to the speech resource can use Audio Content Creation.
+The user needs a [Microsoft account](https://account.microsoft.com/account). If the user does not have one, it can be created in just a few minutes. The user can link an existing email address as a Microsoft account, or create a new Outlook email address as a Microsoft account.
++

### Add users to a speech resource

Follow these steps to add a user to a speech resource so they can use Audio Content Creation.
Follow these steps to add a user to a speech resource so they can use Audio Cont
1. Search for **Cognitive services** in the [Azure portal](https://portal.azure.com/), select the speech resource that you want to add users to. 2. Click **Access control (IAM)**. Click the **Role assignments** tab to view all the role assignments for this subscription. :::image type="content" source="media/audio-content-creation/access-control-roles.png" alt-text="Role assignment tab":::
-1. Click **Add** > **Add role assignment** to open the Add role assignment pane. In the Role drop-down list, select the **Cognitive Service User** role. If you want to give the user ownership of this speech resource, you can select the **Owner** role.
-1. In the list, select a user. If you do not see the user in the list, you can type in the Select box to search the directory for display names and email addresses. If the user is not in this directory, you can input the user's [Microsoft account](https://account.microsoft.com/account) (which is trusted by Azure active directory).
-1. Click **Save** to assign the role. After a few moments, the user is assigned the Cognitive Service User role at the speech resource scope.
+3. Click **Add** > **Add role assignment** to open the Add role assignment pane. In the Role drop-down list, select the **Cognitive Service User** role. If you want to give the user ownership of this speech resource, you can select the **Owner** role.
+4. In the list, select a user. If you do not see the user in the list, you can type in the Select box to search the directory for display names and email addresses. If the user is not in this directory, you can input the user's [Microsoft account](https://account.microsoft.com/account) (which is trusted by Azure active directory).
+5. Click **Save** to assign the role. The user will receive an email invitation and can accept it by clicking **Accept invitation** > **Accept to join Azure** in the email, which redirects to the Azure portal. No further action is needed there.
+6. After a few moments, the user is assigned the Cognitive Service User role at the speech resource scope. The user can then visit or refresh the [Audio Content Creation](https://aka.ms/audiocontentcreation) page and choose the speech resource to get started. (The same assignment can be scripted; see the sketch after these steps.)
:::image type="content" source="media/audio-content-creation/add-role-first.png" alt-text="Add role dialog":::
-1. The users you add will receive an invitation email. After they click **Accept invitation** > **Accept to join Azure**, then they can use [Audio Content Creation](https://aka.ms/audiocontentcreation).
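The same role assignment can be scripted. A sketch with Az PowerShell, using placeholder names:

```powershell
# Grant a user the Cognitive Services User role on one speech resource only.
$speech = Get-AzCognitiveServicesAccount -ResourceGroupName "myRG" -Name "my-speech-resource"
New-AzRoleAssignment -SignInName "user@contoso.com" `
    -RoleDefinitionName "Cognitive Services User" `
    -Scope $speech.Id
```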
Users who are in the same speech resource will see each other's work in Audio Content Creation studio. If you want each individual user to have a unique and private workplace in Audio Content Creation, please [create a new speech resource](#step-2create-a-speech-resource) for each user and give each user the unique access to the speech resource.
Users who are in the same speech resource will see each other's work in Audio Co
If you want one of the users to give access to other users, you need to give the user the owner role for the speech resource and set the user as the Azure directory reader. 1. Add the user as the owner of the speech resource. See [how to add users to a speech resource](#add-users-to-a-speech-resource). :::image type="content" source="media/audio-content-creation/add-role.png" alt-text="Role Owner field":::
-1. Select the collapsed menu in the upper left. Click **Azure Active Directory**, and then Click **Users**.
+1. In the [Azure portal](https://portal.azure.com/), select the collapsed menu in the upper left. Click **Azure Active Directory**, and then Click **Users**.
1. Search the user's Microsoft account, and go to the user's detail page. Click **Assigned roles**. 1. Click **Add assignments** -> **Directory Readers**.
cognitive-services How To Speech Synthesis Viseme https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-speech-synthesis-viseme.md
Title: How to get facial pose events for lip-sync
-description: Speech SDK supports viseme events during speech synthesis, which represent key poses in observed speech, such as the position of the lips, jaw and tongue when producing a particular phoneme.
+description: Speech SDK supports viseme events during speech synthesis, which represent key poses in observed speech, such as the position of the lips, jaw, and tongue when producing a particular phoneme.
Using visemes, you can create more natural and intelligent news broadcast assist
## Get viseme events with the Speech SDK
-To make viseme events, we convert input text into a set of phoneme sequences and their corresponding viseme sequences. We estimate the start time of each viseme in the speech audio. Viseme events contain a sequence of viseme IDs, each with an offset into the audio where that viseme appears. These events can drive mouth animations that simulate a person speaking the input text.
+To make viseme events, the TTS service converts input text into a set of phoneme sequences and their corresponding viseme sequences.
+Then the start time of each viseme in the speech audio is estimated.
+Viseme events contain a sequence of viseme IDs, each with an offset into the audio where that viseme appears.
+These events can drive mouth animations that simulate a person speaking the input text.
| Parameter | Description | |--|-|
cognitive-services Speech Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-sdk.md
The Speech SDK exposes many features from the Speech service, but not all of the
- C++/Windows & Linux & macOS - C# (Framework & .NET Core)/Windows & UWP & Unity & Xamarin & Linux & macOS - Java (Jre and Android)
- - JavaScript (Brower and NodeJS)
+ - JavaScript (Browser and NodeJS)
- Python - Swift - Objective-C
communication-services Pricing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/pricing.md
Title: Pricing scenarios for Calling (Voice/Video) and Chat description: Learn about Communication Services' Pricing Model.--++ -+ Last updated 03/10/2021
communication-services Teams Interop https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/teams-interop.md
> [!IMPORTANT] > To enable/disable [Teams tenant interoperability](../concepts/teams-interop.md), complete [this form](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR21ouQM6BHtHiripswZoZsdURDQ5SUNQTElKR0VZU0VUU1hMOTBBMVhESS4u).
+> [!NOTE]
+> Interoperability between Azure Communication Services and Microsoft Teams enables your applications and users to participate in Teams calls, meetings, and chat. It is your responsibility to ensure that the users of your application are notified when recording or transcription are enabled in a Teams call or meeting. Microsoft will indicate to you via the Azure Communication Services API that recording or transcription has commenced and you must communicate this fact, in real time, to your users within your application's user interface. You agree to indemnify Microsoft for all costs and damages incurred as a result of your failure to comply with this obligation.
+++ Azure Communication Services can be used to build custom meeting experiences that interact with Microsoft Teams. Users of your Communication Services solution(s) can interact with Teams participants over voice, video, chat, and screen sharing. Teams interoperability allows you to create custom applications that connect users to Teams meetings. Users of your custom applications don't need to have Azure Active Directory identities or Teams licenses to experience this capability. This is ideal for bringing employees (who may be familiar with Teams) and external users (using a custom application experience) together into a seamless meeting experience. For example:
Azure Communication Services interoperability isn't compatible with Teams deploy
> [!div class="nextstepaction"] > [Join your calling app to a Teams meeting](../quickstarts/voice-video-calling/get-started-teams-interop.md)+
+For more information, see the following articles:
+
+- Learn about [UI Framework](./ui-framework/ui-sdk-overview.md)
+- Learn about [UI Framework capabilities](./ui-framework/ui-sdk-features.md)
communication-services Call Automation Apis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/voice-video-calling/call-automation-apis.md
Content-Type: application/json
"message": "<error-message>", } ```
+## In-Call Events
+Event notifications are sent as JSON payloads to the calling application via the `callbackUri` set during the create call request.
+
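Before looking at the individual payloads, here is a sketch of a callback receiver written as an HTTP-triggered Azure Function on the PowerShell worker; its URL would be supplied as the `callbackUri`. Everything beyond the payload fields shown below is an assumption:

```powershell
# run.ps1 -- HTTP-triggered function that receives the event payloads below.
using namespace System.Net
param($Request, $TriggerMetadata)

$event = $Request.Body   # JSON body, already deserialized by the Functions host

switch ($event.eventType) {
    "Microsoft.Communication.CallLegStateChanged" {
        Write-Host "Call leg $($event.data.CallLegId) changed state to $($event.data.CallState)"
    }
    "Microsoft.Communication.DtmfReceived" {
        Write-Host "DTMF tone received: $($event.data.ToneInfo.Tone)"
    }
    default {
        Write-Host "Unhandled event type: $($event.eventType)"
    }
}

# Acknowledge receipt so the service does not retry.
Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{
    StatusCode = [HttpStatusCode]::OK
})
```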
+### CallState Event - Establishing
+```
+{
+ "id": null,
+ "topic": null,
+ "subject": "callLeg/531f3600-481f-41c8-8a75-3e8b2e8e6200/callState",
+ "data": {
+ "ConversationId": null,
+ "CallLegId": "531f3600-481f-41c8-8a75-3e8b2e8e6200",
+ "CallState": "Establishing"
+ },
+ "eventType": "Microsoft.Communication.CallLegStateChanged",
+ "eventTime": "2021-05-05T20:08:39.0157964Z",
+ "metadataVersion": null,
+ "dataVersion": null
+}
+```
+### CallState Event - Established
+```
+{
+ "id": null,
+ "topic": null,
+ "subject": "callLeg/531f3600-481f-41c8-8a75-3e8b2e8e6200/callState",
+ "data": {
+ "ConversationId": "aHR0cHM6Ly9jb252LXVzc2MtMDIuY29udi5za3lwZS5jb20vY29udi92RFNacTFyTEIwdVotM0dQdjBabUpnP2k9OCZlPTYzNzU1NzQzNzg4NTgzMTgxMQ",
+ "CallLegId": "531f3600-481f-41c8-8a75-3e8b2e8e6200",
+ "CallState": "Established"
+ },
+ "eventType": "Microsoft.Communication.CallLegStateChanged",
+ "eventTime": "2021-05-05T20:08:59.5783985Z",
+ "metadataVersion": null,
+ "dataVersion": null
+}
+```
+
+### CallState Event - Terminating
+```
+{
+ "id": null,
+ "topic": null,
+ "subject": "callLeg/531f3600-481f-41c8-8a75-3e8b2e8e6200/callState",
+ "data": {
+ "ConversationId": "aHR0cHM6Ly9jb252LXVzc2MtMDIuY29udi5za3lwZS5jb20vY29udi92RFNacTFyTEIwdVotM0dQdjBabUpnP2k9OCZlPTYzNzU1NzQzNzg4NTgzMTgxMQ",
+ "CallLegId": "531f3600-481f-41c8-8a75-3e8b2e8e6200",
+ "CallState": "Terminating"
+ },
+ "eventType": "Microsoft.Communication.CallLegStateChanged",
+ "eventTime": "2021-05-05T20:13:45.7398707Z",
+ "metadataVersion": null,
+ "dataVersion": null
+}
+```
+
+### CallState Event - Terminated
+```
+{
+ "id": null,
+ "topic": null,
+ "subject": "callLeg/531f3600-481f-41c8-8a75-3e8b2e8e6200/callState",
+ "data": {
+ "ConversationId": "aHR0cHM6Ly9jb252LXVzc2MtMDIuY29udi5za3lwZS5jb20vY29udi92RFNacTFyTEIwdVotM0dQdjBabUpnP2k9OCZlPTYzNzU1NzQzNzg4NTgzMTgxMQ",
+ "CallLegId": "531f3600-481f-41c8-8a75-3e8b2e8e6200",
+ "CallState": "Terminated"
+ },
+ "eventType": "Microsoft.Communication.CallLegStateChanged",
+ "eventTime": "2021-05-05T20:13:46.1541814Z",
+ "metadataVersion": null,
+ "dataVersion": null
+}
+```
+
+### DTMF Received Event
+```
+{
+ "id": null,
+ "topic": null,
+ "subject": "callLeg/471f3600-4e1f-4cd4-9eec-4a484e4cbf00/dtmf",
+ "data": {
+ "ToneInfo": {
+ "SequenceId": 1,
+ "Tone": "Tone1"
+ },
+ "CallLegId": "471f3600-4e1f-4cd4-9eec-4a484e4cbf00"
+ },
+ "eventType": "Microsoft.Communication.DtmfReceived",
+ "eventTime": "2021-05-05T20:31:00.4818813Z",
+ "metadataVersion": null,
+ "dataVersion": null
+}
+```
+
+### PlayAudioResult Event
+```
+{
+ "id": null,
+ "topic": null,
+ "subject": "callLeg/511f3600-401f-4296-b6d0-b8da6f343b00/playAudio",
+ "data": {
+ "ResultInfo": {
+ "Code": 200,
+ "Subcode": 0,
+ "Message": "Action completed successfully."
+ },
+ "OperationContext": "6c6cbbc7-66b2-47a8-a29c-5e5f73aee86d",
+ "Status": "Completed",
+ "CallLegId": "511f3600-401f-4296-b6d0-b8da6f343b00"
+ },
+ "eventType": "Microsoft.Communication.PlayAudioResult",
+ "eventTime": "2021-05-05T20:38:22.0476663Z",
+ "metadataVersion": null,
+ "dataVersion": null
+}
+```
+
+### Cancel media processing Event
+```
+{
+ "id": null,
+ "topic": null,
+ "subject": "callLeg/471f3600-4e1f-4cd4-9eec-4a484e4cbf00/playAudio",
+ "data": {
+ "ResultInfo": {
+ "Code": 400,
+ "Subcode": 8508,
+ "Message": "Action falied, the operation was cancelled."
+ },
+ "OperationContext": "d8aeabf7-47a0-4803-b0cc-6059a708440d",
+ "Status": "Completed",
+ "CallLegId": "471f3600-4e1f-4cd4-9eec-4a484e4cbf00"
+ },
+ "eventType": "Microsoft.Communication.PlayAudioResult",
+ "eventTime": "2021-05-05T20:31:01.2789071Z",
+ "metadataVersion": null,
+ "dataVersion": null
+}
+```
+
+### Invite Participant result Event
+```
+{
+ "id": "52154ee2-b2ba-420f-b42f-a69c6101c516",
+ "topic": null,
+ "subject": "callLeg/421f6d00-18fc-4d11-bde6-e5e371494753/inviteParticipantResult",
+ "data": {
+ "ResultInfo": null,
+ "OperationContext": "5dbcbdd4-febf-4091-a5be-543f09b2692c",
+ "Status": "Completed",
+ "CallLegId": "421f6d00-18fc-4d11-bde6-e5e371494753",
+ "Participants": [
+ {
+ "RawId": "8:acs:016a7064-0581-40b9-be73-6dde64d69d72_00000009-de04-ee58-740a-113a0d00330d",
+ "CommunicationUser": {
+ "Id": "8:acs:016a7064-0581-40b9-be73-6dde64d69d72_00000009-de04-ee58-740a-113a0d00330d"
+ },
+ "PhoneNumber": null,
+ "MicrosoftTeamsUser": null
+ }
+ ]
+ },
+ "eventType": "Microsoft.Communication.InviteParticipantResult",
+ "eventTime": "2021-05-05T21:49:52.8138396Z",
+ "metadataVersion": null,
+ "dataVersion": null
+}
+```
+
+### Participants Updated Event
+```
+{
+ "id": null,
+ "topic": null,
+ "subject": "callLeg/411f6d00-088a-4ee4-a7bf-c064ac10afeb/participantsUpdated",
+ "data": {
+ "CallLegId": "411f6d00-088a-4ee4-a7bf-c064ac10afeb",
+ "Participants": [
+ {
+ "Identifier": {
+ "RawId": "8:acs:016a7064-0581-40b9-be73-6dde64d69d72_00000009-7904-f8c2-51b9-a43a0d0010d9",
+ "CommunicationUser": {
+ "Id": "8:acs:016a7064-0581-40b9-be73-6dde64d69d72_00000009-7904-f8c2-51b9-a43a0d0010d9"
+ },
+ "PhoneNumber": null,
+ "MicrosoftTeamsUser": null
+ },
+ "ParticipantId": "de7539f7-019e-4934-a4c9-9a770e5e07bb",
+ "IsMuted": false
+ },
+ {
+ "Identifier": {
+ "RawId": "8:acs:016a7064-0581-40b9-be73-6dde64d69d72_00000009-547e-c56e-71bf-a43a0d002dc1",
+ "CommunicationUser": {
+ "Id": "8:acs:016a7064-0581-40b9-be73-6dde64d69d72_00000009-547e-c56e-71bf-a43a0d002dc1"
+ },
+ "PhoneNumber": null,
+ "MicrosoftTeamsUser": null
+ },
+ "ParticipantId": "16c3518f-5ff5-4989-8073-39255a71fb58",
+ "IsMuted": false
+ }
+ ]
+ },
+ "eventType": "Microsoft.Communication.ParticipantsUpdated",
+ "eventTime": "2021-04-16T06:26:37.9121542Z",
+ "metadataVersion": null,
+ "dataVersion": null
+}
+```
+ ## Next steps Check out our [sample](https://github.com/Azure/communication-preview/tree/master/samples/Server-Calling/IncidentReporter) to learn more.
communication-services Create Your Own Components https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/ui-framework/create-your-own-components.md
- Title: Create your own UI Framework component-
-description: In this quickstart, you'll learn how to build a custom component compatible with the UI Framework
-- Previously updated : 03/10/2021-----
-# Quickstart: Create your Own UI Framework Component
--
-Get started with Azure Communication Services by using the UI Framework to quickly integrate communication experiences into your applications.
-
-In this quickstart, you'll learn how to create your own components using the pre-defined state interface offered by UI Framework. This approach is ideal for developers who need more customization and want to use their own design assets for the experience.
-
-## Prerequisites
-- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-- [Node.js](https://nodejs.org/) Active LTS and Maintenance LTS versions (Node 12 Recommended).
-- An active Communication Services resource. [Create a Communication Services resource](./../create-communication-resource.md).
-- A User Access Token to instantiate the call client. Learn how to [create and manage user access tokens](./../access-tokens.md).
-
-UI Framework requires a React environment, which we'll set up next. If you already have a React App, you can skip this section.
-
-### Set Up React App
-
-We'll use the create-react-app template for this quickstart. For more information, see: [Get Started with React](https://reactjs.org/docs/create-a-new-react-app.html)
-
-```console
-
-npx create-react-app my-app
-
-cd my-app
-
-```
-
-At the end of this process you should have a full application inside of the folder `my-app`. For this quickstart, we'll be modifying files inside of the `src` folder.
-
-### Install the package
-
-Use the `npm install` command to install the Azure Communication Services UI Framework SDK for JavaScript. Move the provided tarball (Private Preview) over to the my-app directory.
-
-```console
-
-//For Private Preview install tarball
-
-npm install --save ./{path for tarball}
-
-```
-
-The `--save` option lists the library as a dependency in your **package.json** file.
-
-### Run Create React App
-
-Let's test the Create React App installation by running:
-
-```console
-
-npm run start
-
-```
-
-## Object model
-
-The following classes and interfaces handle some of the major features of the Azure Communication Services UI SDK:
-
-| Name | Description |
-| - | |
-| Provider| Fluent UI provider that allows developers to modify underlying Fluent UI components|
-| CallingProvider| Calling Provider to instantiate a call. Required to add base components|
-| ChatProvider | Chat Provider to instantiate a chat thread. Required to add base components|
-| connectFuncsToContext | Method to connect UI Framework components with underlying providers using mappers |
-| MapToChatMessageProps | Chat message data layer mapper which provides components with chat message props |
--
-## Initialize Chat Providers using Azure Communication Services credentials
-
-For this quickstart we will use chat as an example, for more information on calling, see [Base Components Quickstart](./get-started-with-components.md) and [Composite Components Quickstart](./get-started-with-composites.md).
-
-Go to the `src` folder inside of `my-app` and look for the file `app.js`. Here we'll drop the following code to initialize our Chat Provider. This provider is in charge of maintaining the context of the call and chat experiences. To initialize the components, you'll need an access token retrieved from Azure Communication Services. For details on how to get an access token, see: [create and manage access tokens](./../access-tokens.md).
-
-UI Framework Components follow the same general architecture as the rest of the service. The components don't generate access tokens, group IDs, or thread IDs. These elements come from services that go through the proper steps to generate these IDs and pass them to the client application. For more information, see [Client Server Architecture](./../../concepts/client-and-server-architecture.md).
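
As a minimal sketch of that handoff (assuming a hypothetical `/api/chat-config` route on your own token service; neither the route nor the response shape is part of the SDK):

```javascript
// Hypothetical client-side fetch of the values ChatProvider needs.
// Your backend would mint the token and manage thread membership server side.
export async function fetchChatConfig() {
  const response = await fetch('/api/chat-config');
  if (!response.ok) {
    throw new Error(`Token service returned ${response.status}`);
  }
  // Assumed response shape: { token, userId, displayName, threadId, endpointUrl }
  return response.json();
}
```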
-
-`App.js`
-```javascript
-
-import {CallingProvider, ChatProvider} from "@azure/acs-ui-sdk"
-
-function App(props) {
-
- return (
- <ChatProvider
- token={/*Insert the Azure Communication Services access token*/}
- userId={/*Insert the Azure Communication Services user id*/}
- displayName={/*Insert Display Name to be used for the user*/}
- threadId={/*Insert id for group chat thread to be joined*/}
- endpointUrl={/*Insert the environment URL for the Azure Resource used*/}
- refreshTokenCallback={/*Optional, Insert refresh token call back function*/}
- >
- // Add Chat Components Here
- </ChatProvider>
- );
-}
-
-export default App;
-
-```
-
-Once initialized, this provider lets you build your own layout using UI Framework Components and any extra layout logic. The provider takes care of initializing all the underlying logic and properly connecting the different components together. Next we'll create a custom component using UI Framework mappers to connect to our chat provider.
--
-## Create a custom component using mappers
-
-Start by creating a new file called `SimpleChatThread.js` to hold the component, and import the UI Framework pieces we'll need. Here, we'll use out-of-the-box HTML and React to create a fully custom component for a simple chat thread. Using the `connectFuncsToContext` method with the `MapToChatMessageProps` mapper, we'll map props onto the `SimpleChatThread` custom component. These props give us access to the chat messages being sent and received so we can populate our simple thread.
-
-`SimpleChatThread.js`
-```javascript
-
-import {connectFuncsToContext, MapToChatMessageProps} from "@azure/acs-ui-sdk"
-
-function SimpleChatThread(props) {
-
- return (
- <div>
- {props.chatMessages?.map((message) => (
- <div key={message.id ?? message.clientMessageId}> {`${message.senderDisplayName}: ${message.content}`}</div>
- ))}
- </div>
- );
-}
-
-export default connectFuncsToContext(SimpleChatThread, MapToChatMessageProps);
-
-```
-
-## Add your custom component to your application
-
-Now that we have our custom component ready, we will import it and add it to our layout.
-
-```javascript
-
-import {CallingProvider, ChatProvider} from "@azure/acs-ui-sdk"
-import SimpleChatThread from "./SimpleChatThread"
-
-function App(props) {
-
- return (
- <ChatProvider ... >
- <SimpleChatThread />
- </ChatProvider>
- );
-}
-
-export default App;
-
-```
-
-## Run quickstart
-
-To run the code above, use the command:
-
-```console
-
-npm run start
-
-```
-
-To fully test the capabilities, you will need a second client with chat functionality to send messages that will be received by our Simple Chat Thread. See our [Calling Hero Sample](./../../samples/calling-hero-sample.md) and [Chat Hero Sample](./../../samples/chat-hero-sample.md) as potential options.
-
-## Clean up resources
-
-If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. Learn more about [cleaning up resources](../create-communication-resource.md#clean-up-resources).
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Try UI Framework Composite Components](./get-started-with-composites.md)
-
-For more information, see the following resources:
-- [UI Framework Overview](../../concepts/ui-framework/ui-sdk-overview.md)
-- [UI Framework Capabilities](./../../concepts/ui-framework/ui-sdk-features.md)
-- [UI Framework Base Components Quickstart](./get-started-with-components.md)
communication-services Get Started With Components https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/ui-framework/get-started-with-components.md
- Title: Get started with Azure Communication Services UI Framework base components-
-description: In this quickstart, you'll learn how to get started with UI Framework base components
-- Previously updated : 03/10/2021-----
-# Quickstart: Get started with UI Framework Base Components
--
-Get started with Azure Communication Services by using the UI Framework to quickly integrate communication experiences into your applications. In this quickstart, you'll learn how to integrate UI Framework base components into your application to build communication experiences.
-
-UI Framework components come in two flavors: Base and Composite.
-
-- **Base components** represent discrete communication capabilities; they're the basic building blocks that can be used to build complex communication experiences.
-- **Composite components** are turn-key experiences for common communication scenarios that have been built using **base components** as building blocks and packaged to be easily integrated into applications.
-
-## Prerequisites
-- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-- [Node.js](https://nodejs.org/) Active LTS and Maintenance LTS versions (Node 12 Recommended).
-- An active Communication Services resource. [Create a Communication Services resource](./../create-communication-resource.md).
-- A User Access Token to instantiate the call client. Learn how to [create and manage user access tokens](./../access-tokens.md).
-
-## Setting up
-
-UI Framework requires a React environment, which we'll set up next. If you already have a React App, you can skip this section.
-
-### Set Up React App
-
-We'll use the create-react-app template for this quickstart. For more information, see: [Get Started with React](https://reactjs.org/docs/create-a-new-react-app.html)
-
-```console
-
-npx create-react-app my-app
-
-cd my-app
-
-```
-
-At the end of this process, you should have a full application inside of the folder `my-app`. For this quickstart, we'll be modifying files inside of the `src` folder.
-
-### Install the package
-
-Use the `npm install` command to install the Azure Communication Services UI Framework SDK for JavaScript. Move the provided tarball (Private Preview) over to the my-app directory.
-
-```console
-
-//For Private Preview install tarball
-
-npm install --save ./{path for tarball}
-
-```
-
-The `--save` option lists the library as a dependency in your **package.json** file.
-
-### Run Create React App
-
-Let's test the Create React App installation by running:
-
-```console
-
-npm run start
-
-```
-
-## Object model
-
-The following classes and interfaces handle some of the major features of the Azure Communication Services UI SDK:
-
-| Name | Description |
-| - | |
-| Provider| Fluent UI provider that allows developers to modify underlying Fluent UI components|
-| CallingProvider| Calling Provider to instantiate a call. Required to add extra components|
-| ChatProvider | Chat Provider to instantiate a chat thread. Required to add extra components|
-| MediaGallery | Base component that shows call participants and their remote video streams |
-| MediaControls | Base component to control call including mute, video, share screen |
-| ChatThread | Base component that renders a chat thread with typing indicators, read receipts, etc. |
-| SendBox | Base component that allows user to input messages that will be sent to the joined thread|
-
-## Initialize Calling and Chat Providers using Azure Communication Services credentials
-
-Go to the `src` folder inside of `my-app` and look for the file `app.js`. Here we'll drop the following code to initialize our Calling and Chat providers. These providers are responsible for maintaining the context of the call and chat experiences. You can choose which one to use depending on the type of communication experience you're building. If needed, you can use both at the same time. To initialize the components, you'll need an access token retrieved from Azure Communication Services. For details on how to get access tokens, see: [create and manage access tokens](./../access-tokens.md).
-
-> [!NOTE]
-> The components don't generate access tokens, group IDs, or thread IDs. These elements come from services that go through the proper steps to generate these IDs and pass them to the client application. For more information, see: [Client Server Architecture](./../../concepts/client-and-server-architecture.md).
->
-> For example: The Chat Provider expects that the `userId` associated with the `token` used to initialize it has already been joined to the provided `threadId`. If the user hasn't been joined to the thread, the Chat Provider will fail; a server-side sketch of joining a user up front follows this note. For more information on chat, see: [Getting Started with Chat](./../chat/get-started.md)
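
The following is a minimal server-side sketch of satisfying that expectation before handing the IDs to the client. It assumes the GA `@azure/communication-chat` and `@azure/communication-common` packages; the function and variable names are hypothetical.

```javascript
const { ChatClient } = require('@azure/communication-chat');
const { AzureCommunicationTokenCredential } = require('@azure/communication-common');

// Creates a thread with the user already listed as a participant, so a
// ChatProvider initialized with that user's token won't fail.
async function createThreadWithUser(endpointUrl, serviceUserToken, userId) {
  const chatClient = new ChatClient(
    endpointUrl,
    new AzureCommunicationTokenCredential(serviceUserToken)
  );
  const result = await chatClient.createChatThread(
    { topic: 'Quickstart thread' },
    { participants: [{ id: { communicationUserId: userId } }] }
  );
  return result.chatThread.id; // Pass this threadId to the client app.
}
```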
-
-We'll use a Fluent UI theme to enhance the look and feel of the application:
-
-`App.js`
-```javascript
-
-import {CallingProvider, ChatProvider} from "@azure/acs-ui-sdk"
-import { mergeThemes, teamsTheme } from '@fluentui/react-northstar';
-import { Provider } from '@fluentui/react-northstar/dist/commonjs/components/Provider/Provider';
-import { svgIconStyles } from '@fluentui/react-northstar/dist/es/themes/teams/components/SvgIcon/svgIconStyles';
-import { svgIconVariables } from '@fluentui/react-northstar/dist/es/themes/teams/components/SvgIcon/svgIconVariables';
-import * as siteVariables from '@fluentui/react-northstar/dist/es/themes/teams/siteVariables';
-
-const iconTheme = {
- componentStyles: {
- SvgIcon: svgIconStyles
- },
- componentVariables: {
- SvgIcon: svgIconVariables
- },
- siteVariables
-};
-
-function App(props) {
-
- return (
- <Provider theme={mergeThemes(iconTheme, teamsTheme)}>
- <CallingProvider
- displayName={/*Insert Display Name to be used for the user*/}
- groupId={/*Insert GUID for group call to be joined*/}
- token={/*Insert the Azure Communication Services access token*/}
- refreshTokenCallback={/*Optional, Insert refresh token call back function*/}
- >
- // Add Calling Components Here
- </CallingProvider>
-
- {/*Note: Make sure that the userId associated to the token has been added to the provided threadId*/}
-
- <ChatProvider
- token={/*Insert the Azure Communication Services access token*/}
- displayName={/*Insert Display Name to be used for the user*/}
- threadId={/*Insert id for group chat thread to be joined*/}
- endpointUrl={/*Insert the environment URL for the Azure Resource used*/}
- refreshTokenCallback={/*Optional, Insert refresh token call back function*/}
- >
- // Add Chat Components Here
- </ChatProvider>
- </Provider>
- );
-}
-
-export default App;
-
-```
-
-Once initialized, this provider lets you build your own layout using UI Framework Base Components and any extra layout logic. The provider takes care of initializing all the underlying logic and properly connecting the different components together. Next we'll use various base components provided by UI Framework to build communication experiences. You can customize the layout of these components and add any other custom components that you want to render with them.
-
-## Build UI Framework Calling Component Experiences
-
-For Calling, we'll use the `MediaGallery` and `MediaControls` Components. For more information about them, see [UI Framework Capabilities](./../../concepts/ui-framework/ui-sdk-features.md). To start, in the `src` folder, create a new file called `CallingComponents.js`. Here we'll initialize a function component that will hold our base components to then import in `app.js`. You can add extra layout and styling around the components.
-
-`CallingComponents.js`
-```javascript
-
-import {MediaGallery, MediaControls, MapToCallConfigurationProps, connectFuncsToContext} from "@azure/acs-ui-sdk"
-
-function CallingComponents(props) {
-
- if (props.isCallInitialized) {props.joinCall()}
-
- return (
- <div style = {{height: '35rem', width: '30rem', float: 'left'}}>
- <MediaGallery/>
- <MediaControls/>
- </div>
- );
-}
-
-export default connectFuncsToContext(CallingComponents, MapToCallConfigurationProps);
-
-```
-
-At the bottom of this file, we exported the calling components using the `connectFuncsToContext` method from the UI Framework, which connects the calling UI components to the underlying state through the mapping function `MapToCallConfigurationProps`. This method yields the component with its props populated. We use the `isCallInitialized` prop to check whether the `CallAgent` is ready and then call the `joinCall` method to join. UI Framework also supports custom mapping functions for scenarios where developers want to control how data is pushed to the components, as sketched below.
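
As a rough sketch of what a custom mapping function could look like — assuming mappers are functions from the provider's context to props (the `context.call` field and component names here are hypothetical, not the SDK's documented surface):

```javascript
import React from 'react';
import { connectFuncsToContext } from '@azure/acs-ui-sdk';

// Hypothetical mapper: derives a display label from assumed calling state.
const MapToCallStatusProps = (context) => ({
  statusLabel: context.call ? 'In a call' : 'Not in a call',
});

function CallStatus(props) {
  return <span>{props.statusLabel}</span>;
}

export default connectFuncsToContext(CallStatus, MapToCallStatusProps);
```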
-
-## Build UI Framework Chat Component Experiences
-
-For Chat, we will use the `ChatThread` and `SendBox` components. For more information about these components, see [UI Framework Capabilities](./../../concepts/ui-framework/ui-sdk-features.md). To start, in the `src` folder, create a new file called `ChatComponents.js`. Here we'll initialize a function component that will hold our base components to then import in `app.js`.
-
-`ChatComponents.js`
-```javascript
-
-import {ChatThread, SendBox} from '@azure/acs-ui-sdk'
-
-function ChatComponents() {
-
- return (
- <div style = {{height: '35rem', width: '30rem', float: 'left'}}>
- <ChatThread />
- <SendBox />
- </div >
- );
-}
-
-export default ChatComponents;
-
-```
-
-## Add Calling and Chat Components to the main application
-
-Back in the `app.js` file, we will now add the components to the `CallingProvider` and `ChatProvider` as shown below.
-
-`App.js`
-```javascript
-
-import ChatComponents from './ChatComponents';
-import CallingComponents from './CallingComponents';
-
-<Provider ... >
- <CallingProvider .... >
- <CallingComponents/>
- </CallingProvider>
-
- <ChatProvider .... >
- <ChatComponents />
- </ChatProvider>
-</Provider>
-
-```
-
-## Run quickstart
-
-To run the code above, use the command:
-
-```console
-
-npm run start
-
-```
-
-To fully test the capabilities, you will need a second client with calling and chat functionality to join the call and chat thread. See our [Calling Hero Sample](./../../samples/calling-hero-sample.md) and [Chat Hero Sample](./../../samples/chat-hero-sample.md) as potential options.
-
-## Clean up resources
-
-If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. Learn more about [cleaning up resources](../create-communication-resource.md#clean-up-resources).
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Try UI Framework Composite Components](./get-started-with-composites.md)
-
-For more information, see the following resources:
-- [UI Framework Overview](../../concepts/ui-framework/ui-sdk-overview.md)
-- [UI Framework Capabilities](./../../concepts/ui-framework/ui-sdk-features.md)
communication-services Get Started With Composites https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/ui-framework/get-started-with-composites.md
- Title: Get started with Azure Communication Services UI Framework SDK composite components-
-description: In this quickstart, you'll learn how to get started with UI Framework Composite Components
-- Previously updated : 03/10/2021-----
-# Quickstart: Get started with UI Framework Composite Components
--
-Get started with Azure Communication Services by using the UI Framework to quickly integrate communication experiences into your applications. In this quickstart, you'll learn how to integrate UI Framework Composite Components into your application to build communication experiences.
-
-## Prerequisites
-- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-- [Node.js](https://nodejs.org/) Active LTS and Maintenance LTS versions (Node 12 Recommended).
-- An active Communication Services resource. [Create a Communication Services resource](./../create-communication-resource.md).
-- A User Access Token to instantiate the call composite. Learn how to [create and manage user access tokens](./../access-tokens.md).
-
-## Setting up
-
-UI Framework requires a React environment, which we'll set up next. If you already have a React App, you can skip this section.
-
-### Set Up React App
-
-We will use the create-react-app template for this quickstart. For more information, see: [Get Started with React](https://reactjs.org/docs/create-a-new-react-app.html)
-
-```console
-
-npx create-react-app my-app
-
-cd my-app
-
-```
-
-At the end of this process, you should have a full application inside of the folder `my-app`. For this quickstart, we'll be modifying files inside of the `src` folder.
-
-### Install the package
-
-Use the `npm install` command to install the Azure Communication Services UI Framework SDK for JavaScript. Move the provided tarball (Private Preview) over to the my-app directory.
-
-```console
-
-//Private Preview install tarball
-
-npm install --save ./{path for tarball}
-
-```
-
-The `--save` option lists the library as a dependency in your **package.json** file.
-
-### Run Create React App
-
-Let's test the Create React App installation by running:
-
-```console
-
-npm run start
-
-```
-
-## Object model
-
-The following classes and interfaces handle some of the major features of the Azure Communication Services UI SDK:
-
-| Name | Description |
-| - | |
-| GroupCall | Composite component that renders a group calling experience with participant gallery and controls. |
-| GroupChat | Composite component that renders a group chat experience with chat thread and input |
--
-## Initialize Group Call and Group Chat Composite Components
-
-Go to the `src` folder inside of `my-app` and look for the file `app.js`. Here we'll drop the following code to initialize our Composite Components for Group Chat and Calling. You can choose which one to use depending on the type of communication experience you're building. If needed, you can use both at the same time. To initialize the components, you'll need an access token retrieved from Azure Communication Services. For details on how to get access tokens, see: [create and manage user access tokens](./../access-tokens.md).
-
-> [!NOTE]
-> The components don't generate access tokens, group IDs, or thread IDs. These elements come from services that go through the proper steps to generate these IDs and pass them to the client application. For more information, see: [Client Server Architecture](./../../concepts/client-and-server-architecture.md).
->
-> For example: The Group Chat composite expects that the `userId` associated with the `token` used to initialize it has already been joined to the provided `threadId`. If the user hasn't been joined to the thread, the Group Chat composite will fail. For more information on chat, see: [Getting Started with Chat](./../chat/get-started.md)
--
-`App.js`
-```javascript
-
-import {GroupCall, GroupChat} from "@azure/acs-ui-sdk"
-
-function App(){
-
- return(<>
- {/* Example styling provided, developers can provide their own styling to position and resize components */}
- <div style={{height: "35rem", width: "50rem", float: "left"}}>
- <GroupCall
- displayName={DISPLAY_NAME} //Required, Display name for the user entering the call
- token={TOKEN} // Required, Azure Communication Services access token retrieved from authentication service
- refreshTokenCallback={CALLBACK} //Optional, Callback to refresh the token in case it expires
- groupId={GROUPID} //Required, Id for group call that will be joined. (GUID)
- onEndCall = { () => {
- //Optional, Action to be performed when the call ends
- }}
- />
- </div>
-
- {/*Note: Make sure that the userId associated to the token has been added to the provided threadId*/}
- {/* Example styling provided, developers can provide their own styling to position and resize components */}
- <div style={{height: "35rem", width: "30rem", float: "left"}}>
- <GroupChat
- displayName={DISPLAY_NAME} //Required, Display name for the user entering the call
- token={TOKEN} // Required, Azure Communication Services access token retrieved from authentication service
- threadId={THREADID} //Required, Id for group chat thread that will be joined.
- endpointUrl={ENDPOINT_URL} //Required, URL for Azure endpoint being used for Azure Communication Services
- onRenderAvatar = { (acsId) => {
-          //Optional, function to override the avatar image on the chat thread. The function receives one parameter for the Azure Communication Services identity and must return a React element.
- }}
- refreshToken = { () => {
- //Optional, function to refresh the access token in case it expires
- }}
- options = {{
- //Optional, options to define chat behavior
- sendBoxMaxLength: number | undefined //Optional, Limit the max send box length based on viewport size change.
- }}
- />
- </div>
- </>);
-}
-
-export default App;
-
-```
-
-## Run quickstart
-
-To run the code above, use the command:
-
-```console
-
-npm run start
-
-```
-
-To fully test the capabilities, you will need a second client with calling and chat functionality to join the call and chat thread. See our [Calling Hero Sample](./../../samples/calling-hero-sample.md) and [Chat Hero Sample](./../../samples/chat-hero-sample.md) as potential options.
-
-## Clean up resources
-
-If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. Learn more about [cleaning up resources](../create-communication-resource.md#clean-up-resources).
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Try UI Framework Base Components](./get-started-with-components.md)
-
-For more information, see the following resources:
-- [UI Framework Overview](../../concepts/ui-framework/ui-sdk-overview.md)
-- [UI Framework Capabilities](./../../concepts/ui-framework/ui-sdk-features.md)
connectors Connectors Create Api Servicebus https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/connectors/connectors-create-api-servicebus.md
You can use triggers that get responses from Service Bus and make the output available
* An Azure account and subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/).
-* A Service Bus namespace and messaging entity, such as a queue. These items and your logic app need to use the same Azure subscription. If you don't have these items, learn how to [create your Service Bus namespace and a queue](../service-bus-messaging/service-bus-create-namespace-portal.md).
+* A Service Bus namespace and messaging entity, such as a queue. If you don't have these items, learn how to [create your Service Bus namespace and a queue](../service-bus-messaging/service-bus-create-namespace-portal.md).
* Basic knowledge about [how to create logic apps](../logic-apps/quickstart-create-first-logic-app-workflow.md)
-* The logic app where you use the Service Bus namespace and messaging entity. Your logic app and the service bus need to use the same Azure subscription. To start your workflow with a Service Bus trigger, [create a blank logic app](../logic-apps/quickstart-create-first-logic-app-workflow.md). To use a Service Bus action in your workflow, start your logic app with another trigger, for example, the [Recurrence trigger](../connectors/connectors-native-recurrence.md).
+* The logic app where you use the Service Bus namespace and messaging entity. To start your workflow with a Service Bus trigger, [create a blank logic app](../logic-apps/quickstart-create-first-logic-app-workflow.md). To use a Service Bus action in your workflow, start your logic app with another trigger, for example, the [Recurrence trigger](../connectors/connectors-native-recurrence.md).
<a name="permissions-connection-string"></a>
data-factory Author Global Parameters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/author-global-parameters.md
Previously updated : 03/15/2021 Last updated : 05/12/2021 # Global parameters in Azure Data Factory
There are two ways to integrate global parameters in your continuous integration
* Include global parameters in the ARM template * Deploy global parameters via a PowerShell script
-For most use cases, it is recommended to include global parameters in the ARM template. This will integrate natively with the solution outlined in [the CI/CD doc](continuous-integration-deployment.md). Global parameters will be added as an ARM template parameter by default as they often change from environment to environment. You can enable the inclusion of global parameters in the ARM template from the **Manage** hub.
+In general, it's recommended to include global parameters in the ARM template, because this integrates natively with the solution outlined in [the CI/CD doc](continuous-integration-deployment.md). If you use automatic publishing or a Purview connection, the **PowerShell script** method is required instead; it's described later in this article. Global parameters are added as an ARM template parameter by default, because they often change from environment to environment. You can enable the inclusion of global parameters in the ARM template from the **Manage** hub.
+
+![Include in ARM template](media/author-global-parameters/include-arm-template.png)
> [!NOTE]
-> The **Include in ARM template** configuration is only available in "Git mode". Currently it is disabled in "live mode" or "Data Factory" mode.
+> The **Include in ARM template** configuration is only available in "Git mode". Currently it is disabled in "live mode" or "Data Factory" mode. If you use automatic publishing or a Purview connection, don't include global parameters in the ARM template; use the PowerShell script method instead.
> [!WARNING]
->You cannot use '-' in the parameter name. You will receive an error code "{"code":"BadRequest","message":"ErrorCode=InvalidTemplate,ErrorMessage=The expression >'pipeline().globalParameters.myparam-dbtest-url' is not valid: .....}". But you can use '_' in the parameter name.
+>You cannot use '-' in the parameter name. You will receive an error code "{"code":"BadRequest","message":"ErrorCode=InvalidTemplate,ErrorMessage=The expression >'pipeline().globalParameters.myparam-dbtest-url' is not valid: .....}". But you can use '_' in the parameter name.
-![Include in ARM template](media/author-global-parameters/include-arm-template.png)
+Adding global parameters to the ARM template adds a factory-level setting that will override other factory-level settings such as a customer-managed key or git configuration in other environments. If you have these settings enabled in an elevated environment such as UAT or PROD, it's better to deploy global parameters via a PowerShell script in the steps highlighted below.
-Adding global parameters to the ARM template adds a factory-level setting that will override other factory-level settings such as a customer-managed key or git configuration in other environments. If you have these settings enabled in an elevated environment such as UAT or PROD, it's better to deploy global parameters via a PowerShell script in the steps highlighted below.
### Deploying using PowerShell
data-factory Concepts Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/concepts-integration-runtime.md
The following diagram shows location settings of Data Factory and its integratio
![Integration runtime location](media/concepts-integration-runtime/integration-runtime-location.png) ## Determining which IR to use
+If a data factory activity is associated with more than one type of integration runtime, it resolves to one of them. A self-hosted integration runtime takes precedence over an Azure integration runtime in an Azure Data Factory managed virtual network, which in turn takes precedence over the public Azure integration runtime.
+For example, suppose a copy activity copies data from a source to a sink. If the public Azure integration runtime is associated with the source linked service and an Azure integration runtime in the Azure Data Factory managed virtual network is associated with the sink linked service, both linked services use the Azure integration runtime in the managed virtual network. But if a self-hosted integration runtime is associated with the source linked service, both linked services use the self-hosted integration runtime.
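+
+The precedence rule can be summarized in a few lines. The sketch below is illustrative JavaScript, not ADF code; the rank values are assumptions chosen only to express the documented ordering.
+
+```javascript
+// Illustrative only: ranks encode the documented precedence
+// (self-hosted > managed-VNet Azure IR > public Azure IR).
+const rank = { selfHosted: 3, managedVnetAzure: 2, publicAzure: 1 };
+
+// The activity resolves to the higher-precedence IR of the two linked services.
+function resolveIntegrationRuntime(sourceIr, sinkIr) {
+  return rank[sourceIr] >= rank[sinkIr] ? sourceIr : sinkIr;
+}
+
+console.log(resolveIntegrationRuntime('publicAzure', 'managedVnetAzure')); // 'managedVnetAzure'
+console.log(resolveIntegrationRuntime('selfHosted', 'managedVnetAzure')); // 'selfHosted'
+```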
### Copy activity
data-factory Data Flow Expression Functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-expression-functions.md
___
### <code>expr</code> <code><b>expr(<i>&lt;expr&gt;</i> : string) => any</b></code><br/><br/> Results in a expression from a string. This is the same as writing this expression in a non-literal form. This can be used to pass parameters as string representations.
-* expr(ΓÇÿprice * discountΓÇÖ) => any
+* expr('price * discount') => any
___ ### <code>factorial</code> <code><b>factorial(<i>&lt;value1&gt;</i> : number) => long</b></code><br/><br/>
___
### <code>normalize</code> <code><b>normalize(<i>&lt;String to normalize&gt;</i> : string) => string</b></code><br/><br/> Normalizes the string value to separate accented unicode characters.
-* ``regexReplace(normalize('bo┬▓s'), `\p{M}`, '') -> 'boys'``
+* ``regexReplace(normalize('bo┬▓s'), `\p{M}`, '') -> 'boys'``
___ ### <code>not</code> <code><b>not(<i>&lt;value1&gt;</i> : boolean) => boolean</b></code><br/><br/>
___
Based on a criteria, gets the sample covariance of two columns. * ``covarianceSampleIf(region == 'West', sales, profit)`` ___+ ### <code>first</code> <code><b>first(<i>&lt;value1&gt;</i> : any, [<i>&lt;value2&gt;</i> : boolean]) => any</b></code><br/><br/> Gets the first value of a column group. If the second parameter ignoreNulls is omitted, it is assumed false. * ``first(sales)`` * ``first(sales, false)`` ___+ ### <code>isDistinct</code> <code><b>isDistinct(<i>&lt;value1&gt;</i> : any , <i>&lt;value1&gt;</i> : any) => boolean</b></code><br/><br/> Finds if a column or set of columns is distinct. It does not count null as a distinct value
-* ``isDistinct(custId, custName) => boolean``
-* ___
+* ``isDistinct(custId, custName) => boolean``
+___
+ ### <code>kurtosis</code> <code><b>kurtosis(<i>&lt;value1&gt;</i> : number) => double</b></code><br/><br/> Gets the kurtosis of a column.
Maps each element of the array to a new element using the provided expression. M
* ``map(['a', 'b', 'c', 'd'], #item + '_processed') -> ['a_processed', 'b_processed', 'c_processed', 'd_processed']`` ___ ### <code>mapIf</code>
-<code><b>mapIf (<value1> : array, <value2> : binaryfunction, <value3>: binaryFunction) => any</b></code><br/><br/>
+<code><b>mapIf (<i>\<value1\></i> : array, <i>\<value2\></i> : binaryfunction, \<value3\>: binaryFunction) => any</b></code><br/><br/>
Conditionally maps an array to another array of same or smaller length. The values can be of any datatype including structTypes. It takes a mapping function where you can address the item in the array as #item and current index as #index. For deeply nested maps you can refer to the parent maps using the ``#item_[n](#item_1, #index_1...)`` notation.
-* ``mapIf([10, 20, 30], #item > 10, #item + 5) -> [25, 35]``
+* ``mapIf([10, 20, 30], #item > 10, #item + 5) -> [25, 35]``
* ``mapIf(['icecream', 'cake', 'soda'], length(#item) > 4, upper(#item)) -> ['ICECREAM', 'CAKE']`` ___ ### <code>mapIndex</code>
Maps each element of the array to a new element using the provided expression. M
* ``mapIndex([1, 2, 3, 4], #item + 2 + #index) -> [4, 6, 8, 10]`` ___ ### <code>mapLoop</code>
-<code><b>mapLoop(<value1> : integer, <value2> : unaryfunction) => any</b></code><br/><br/>
+<code><b>mapLoop(<i>\<value1\></i> : integer, <i>\<value2\></i> : unaryfunction) => any</b></code><br/><br/>
Loops through from 1 to length to create an array of that length. It takes a mapping function where you can address the index in the array as #index. For deeply nested maps you can refer to the parent maps using the #index_n(#index_1, #index_2...) notation.
-* ``mapLoop(3, #index * 10) -> [10, 20, 30]``
+* ``mapLoop(3, #index * 10) -> [10, 20, 30]``
___ ### <code>reduce</code> <code><b>reduce(<i>&lt;value1&gt;</i> : array, <i>&lt;value2&gt;</i> : any, <i>&lt;value3&gt;</i> : binaryfunction, <i>&lt;value4&gt;</i> : unaryfunction) => any</b></code><br/><br/>
___
Conversion functions are used to convert data and test for data types ### <code>isBitSet</code>
-<code><b>isBitSet (<value1> : array, <value2>:integer ) => boolean</b></code><br/><br/>
+<code><b>isBitSet (<i><i>\<value1\></i></i> : array, <i>\<value2\></i>:integer ) => boolean</b></code><br/><br/>
Checks if a bit position is set in this bitset * ``isBitSet(toBitSet([10, 32, 98]), 10) => true`` ___ ### <code>setBitSet</code>
-<code><b>setBitSet (<value1> : array, <value2>:array) => array</b></code><br/><br/>
+<code><b>setBitSet (<i>\<value1\></i>: array, <i>\<value2\></i>:array) => array</b></code><br/><br/>
Sets bit positions in this bitset * ``setBitSet(toBitSet([10, 32]), [98]) => [4294968320L, 17179869184L]`` ___ ### <code>isBoolean</code>
-<code><b>isBoolean(<value1> : string) => boolean</b></code><br/><br/>
+<code><b>isBoolean(<i>\<value1\></i>: string) => boolean</b></code><br/><br/>
Checks if the string value is a boolean value according to the rules of ``toBoolean()`` * ``isBoolean('true') -> true`` * ``isBoolean('no') -> true`` * ``isBoolean('microsoft') -> false`` ___ ### <code>isByte</code>
-<code><b>isByte(<value1> : string) => boolean</b></code><br/><br/>
+<code><b>isByte(<i>\<value1\></i> : string) => boolean</b></code><br/><br/>
Checks if the string value is a byte value given an optional format according to the rules of ``toByte()`` * ``isByte('123') -> true`` * ``isByte('chocolate') -> false`` ___ ### <code>isDate</code>
-<code><b>isDate (<value1> : string, [<format>: string]) => boolean</b></code><br/><br/>
+<code><b>isDate (<i>\<value1\></i> : string, [<format>: string]) => boolean</b></code><br/><br/>
Checks if the input date string is a date using an optional input date format. Refer to Java's SimpleDateFormat for available formats. If the input date format is omitted, the default format is ``yyyy-[M]M-[d]d``. Accepted formats are ``[ yyyy, yyyy-[M]M, yyyy-[M]M-[d]d, yyyy-[M]M-[d]dT* ]`` * ``isDate('2012-8-18') -> true`` * ``isDate('12/18--234234' -> 'MM/dd/yyyy') -> false`` ___ ### <code>isShort</code>
-<code><b>isShort (<value1> : string, [<format>: string]) => boolean</b></code><br/><br/>
+<code><b>isShort (<i>\<value1\></i> : string, [<format>: string]) => boolean</b></code><br/><br/>
Checks if the string value is a short value given an optional format according to the rules of ``toShort()`` * ``isShort('123') -> true`` * ``isShort('$123' -> '$###') -> true`` * ``isShort('microsoft') -> false`` ___ ### <code>isInteger</code>
-<code><b>isInteger (<value1> : string, [<format>: string]) => boolean</b></code><br/><br/>
+<code><b>isInteger (<i>\<value1\></i> : string, [<format>: string]) => boolean</b></code><br/><br/>
Checks if the string value is an integer value given an optional format according to the rules of ``toInteger()`` * ``isInteger('123') -> true`` * ``isInteger('$123' -> '$###') -> true`` * ``isInteger('microsoft') -> false`` ___ ### <code>isLong</code>
-<code><b>isLong (<value1> : string, [<format>: string]) => boolean</b></code><br/><br/>
+<code><b>isLong (<i>\<value1\></i> : string, [<format>: string]) => boolean</b></code><br/><br/>
Checks if the string value is a long value given an optional format according to the rules of ``toLong()`` * ``isLong('123') -> true`` * ``isLong('$123' -> '$###') -> true`` * ``isLong('gunchus') -> false`` ___ ### <code>isFloat</code>
-<code><b>isFloat (<value1> : string, [<format>: string]) => boolean</b></code><br/><br/>
+<code><b>isFloat (<i>\<value1\></i> : string, [<format>: string]) => boolean</b></code><br/><br/>
Checks if the string value is a float value given an optional format according to the rules of ``toFloat()`` * ``isFloat('123') -> true`` * ``isFloat('$123.45' -> '$###.00') -> true`` * ``isFloat('icecream') -> false`` ___ ### <code>isDouble</code>
-<code><b>isDouble (<value1> : string, [<format>: string]) => boolean</b></code><br/><br/>
+<code><b>isDouble (<i>\<value1\></i> : string, [<format>: string]) => boolean</b></code><br/><br/>
Checks if the string value is a double value given an optional format according to the rules of ``toDouble()`` * ``isDouble('123') -> true`` * ``isDouble('$123.45' -> '$###.00') -> true`` * ``isDouble('icecream') -> false`` ___ ### <code>isDecimal</code>
-<code><b>isDecimal (<value1> : string) => boolean</b></code><br/><br/>
+<code><b>isDecimal (<i>\<value1\></i> : string) => boolean</b></code><br/><br/>
Checks if the string value is a decimal value given an optional format according to the rules of ``toDecimal()`` * ``isDecimal('123.45') -> true`` * ``isDecimal('12/12/2000') -> false`` ___ ### <code>isTimestamp</code>
-<code><b>isTimestamp (<value1> : string, [<format>: string]) => boolean</b></code><br/><br/>
+<code><b>isTimestamp (<i>\<value1\></i> : string, [<format>: string]) => boolean</b></code><br/><br/>
Checks if the input date string is a timestamp using an optional input timestamp format. Refer to Java's SimpleDateFormat for available formats. If the timestamp format is omitted, the default pattern ``yyyy-[M]M-[d]d hh:mm:ss[.f...]`` is used. You can pass an optional timezone in the form of 'GMT', 'PST', 'UTC', 'America/Cayman'. Timestamp supports up to millisecond accuracy with a value of 999. * ``isTimestamp('2016-12-31 00:12:00') -> true`` * ``isTimestamp('2016-12-31T00:12:00' -> 'yyyy-MM-dd\\'T\\'HH:mm:ss' -> 'PST') -> true``
Checks if a certain hierarchical path exists by name in the stream. You can pass
* ``hasPath('grandpa.parent.child') => boolean`` ___ ### <code>hex</code>
-<code><b>hex(<value1>: binary) => string</b></code><br/><br/>
+<code><b>hex(<i>\<value1\></i>: binary) => string</b></code><br/><br/>
Returns a hex string representation of a binary value * ``hex(toBinary([toByte(0x1f), toByte(0xad), toByte(0xbe)])) -> '1fadbe'`` ___ ### <code>unhex</code>
-<code><b>unhex(<value1>: string) => binary</b></code><br/><br/>
+<code><b>unhex(<i>\<value1\></i>: string) => binary</b></code><br/><br/>
Unhexes a binary value from its string representation. This can be used in conjunction with sha2, md5 to convert from string to binary representation
-* ``unhex('1fadbe') -> toBinary([toByte(0x1f), toByte(0xad), toByte(0xbe)])``
-* ``unhex(md5(5, 'gunchus', 8.2, 'bojjus', true, toDate('2010-4-4'))) -> toBinary([toByte(0x4c),toByte(0xe8),toByte(0xa8),toByte(0x80),toByte(0xbd),toByte(0x62),toByte(0x1a),toByte(0x1f),toByte(0xfa),toByte(0xd0),toByte(0xbc),toByte(0xa9),toByte(0x05),toByte(0xe1),toByte(0xbc),toByte(0x5a)])``
+* ``unhex('1fadbe') -> toBinary([toByte(0x1f), toByte(0xad), toByte(0xbe)])``
+* ``unhex(md5(5, 'gunchus', 8.2, 'bojjus', true, toDate('2010-4-4'))) -> toBinary([toByte(0x4c),toByte(0xe8),toByte(0xa8),toByte(0x80),toByte(0xbd),toByte(0x62),toByte(0x1a),toByte(0x1f),toByte(0xfa),toByte(0xd0),toByte(0xbc),toByte(0xa9),toByte(0x05),toByte(0xe1),toByte(0xbc),toByte(0x5a)])``
## Window functions The following functions are only available in window transformations.
data-factory Frequently Asked Questions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/frequently-asked-questions.md
Previously updated : 04/29/2021 Last updated : 05/11/2021 # Azure Data Factory FAQ
Clusters are never shared. We guarantee isolation for each job run in production
### Is there a way to write attributes in Cosmos DB in the same order as specified in the sink in ADF data flow?
-For cosmos DB, the underlying format of each document is a JSON object which is an unordered set of name/value pairs, so the order cannot be reserved. Data flow spins up a cluster even on integration runtime with 15 min TTL configuration dataflow advisory about TTL and costs This troubleshoot document [Data flow performance.](https://docs.microsoft.com/azure/data-factory/concepts-data-flow-performance#time-to-live)
+For Cosmos DB, the underlying format of each document is a JSON object, which is an unordered set of name/value pairs, so the order can't be preserved.
-
-### Why an user is unable to use data preview in the data flows?
+### Why is a user unable to use data preview in data flows?
You should check the permissions granted to your custom role. Multiple actions are involved in data flow data preview; you can identify them by checking the network traffic in your browser while debugging. You need to grant all of these actions. For details, please refer to [Resource provider.](https://docs.microsoft.com/azure/role-based-access-control/resource-provider-operations#microsoftdatafactory)
-### Does the data flow compute engine serve multiple tenants?
-
-This troubleshooting document may help to resolve your issue:
-[Multiple tenants.](https://docs.microsoft.com/azure/data-factory/frequently-asked-questions#does-the-data-flow-compute-engine-serve-multiple-tenants)
-- ### In ADF, can I calculate a value for a new column from an existing column in mapping? You can use a derived column transformation in mapping data flow to create a new column with the logic you want. When creating a derived column, you can either generate a new column or update an existing one. In the Column textbox, enter the name of the column you're creating. To override an existing column in your schema, use the column dropdown. To build the derived column's expression, click the Enter expression textbox. You can either start typing your expression or open the expression builder to construct your logic.
Please try to use larger cluster and leverage the row limits in debug settings t
Column names can be parameterized similar to other properties. For example, in a derived column, customers can use **$ColumnNameParam = toString(byName($myColumnNameParamInData))**. These parameters can be passed from pipeline execution down to data flows.
+### Data flow advisory about TTL and costs
+
+This troubleshooting guide may help you resolve your issue: [Mapping data flows performance and tuning guide - Time to live](https://docs.microsoft.com/azure/data-factory/concepts-data-flow-performance#time-to-live).
## Wrangling data flow (Data flow power query)
Column name can be parameterized similar to other properties. Like in derived co
Data Factory is available in the following [regions](https://azure.microsoft.com/global-infrastructure/services/?products=data-factory). The Power Query feature is being rolled out to all regions. If the feature is not available in your region, please check with support.
-### What are the limitations and constraints with wrangling data flow ?
+### What are the limitations and constraints with wrangling data flow?
Dataset names can only contain alpha-numeric characters. The following data stores are supported:
data-factory How To Configure Azure Ssis Ir Custom Setup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/how-to-configure-azure-ssis-ir-custom-setup.md
If you select the **Install Azure PowerShell** type for your express custom setu
If you select the **Install licensed component** type for your express custom setup, you can then select an integrated component from our ISV partners in the **Component name** drop-down list:
- * If you select the **SentryOne's Task Factory** component, you can install the [Task Factory](https://www.sentryone.com/products/task-factory/high-performance-ssis-components) suite of components from SentryOne on your Azure-SSIS IR. To do so, enter the product license key that you purchased from them beforehand in the **License key** text box. The current integrated version is **2020.1.3**.
+* If you select the **SentryOne's Task Factory** component, you can install the [Task Factory](https://www.sentryone.com/products/task-factory/high-performance-ssis-components) suite of components from SentryOne on your Azure-SSIS IR by entering the product license key that you purchased from them in the **License key** box. The current integrated version is **2020.21.2**.
- * If you select the **oh22's HEDDA.IO** component, you can install the [HEDDA.IO](https://github.com/oh22is/HEDDA.IO/tree/master/SSIS-IR) data quality/cleansing component from oh22 on your Azure-SSIS IR. To do so, you need to purchase their service beforehand. The current integrated version is **1.0.14**.
+* If you select the **oh22's HEDDA.IO** component, you can install the [HEDDA.IO](https://github.com/oh22is/HEDDA.IO/tree/master/SSIS-IR) data quality/cleansing component from oh22 on your Azure-SSIS IR. To do so, you need to purchase their service beforehand. The current integrated version is **1.0.14**.
- * If you select the **oh22's SQLPhonetics.NET** component, you can install the [SQLPhonetics.NET](https://appsource.microsoft.com/product/web-apps/oh22.sqlphonetics-ssis) data quality/matching component from oh22 on your Azure-SSIS IR. To do so, enter the product license key that you purchased from them beforehand in the **License key** text box. The current integrated version is **1.0.45**.
-
- * If you select the **KingswaySoft's SSIS Integration Toolkit** component, you can install the [SSIS Integration Toolkit](https://www.kingswaysoft.com/products/ssis-integration-toolkit-for-microsoft-dynamics-365) suite of connectors for CRM/ERP/marketing/collaboration apps, such as Microsoft Dynamics/SharePoint/Project Server, Oracle/Salesforce Marketing Cloud, etc. from KingswaySoft on your Azure-SSIS IR. To do so, enter the product license key that you purchased from them beforehand in the **License key** text box. The current integrated version is **2020.1**.
+* If you select the **oh22's SQLPhonetics.NET** component, you can install the [SQLPhonetics.NET](https://appsource.microsoft.com/product/web-apps/oh22.sqlphonetics-ssis) data quality/matching component from oh22 on your Azure-SSIS IR. To do so, enter the product license key that you purchased from them beforehand in the **License key** text box. The current integrated version is **1.0.45**.
+
+* If you select the **KingswaySoft's SSIS Integration Toolkit** component, you can install the [SSIS Integration Toolkit](https://www.kingswaysoft.com/products/ssis-integration-toolkit-for-microsoft-dynamics-365) suite of connectors for CRM/ERP/marketing/collaboration apps, such as Microsoft Dynamics/SharePoint/Project Server, Oracle/Salesforce Marketing Cloud, etc. from KingswaySoft on your Azure-SSIS IR by entering the product license key that you purchased from them in the **License key** box. The current integrated version is **20.2**.
- * If you select the **KingswaySoft's SSIS Productivity Pack** component, you can install the [SSIS Productivity Pack](https://www.kingswaysoft.com/products/ssis-productivity-pack) suite of components from KingswaySoft on your Azure-SSIS IR. To do so, enter the product license key that you purchased from them beforehand in the **License key** text box. The current integrated version is **20.1**.
+* If you select the **KingswaySoft's SSIS Productivity Pack** component, you can install the [SSIS Productivity Pack](https://www.kingswaysoft.com/products/ssis-productivity-pack) suite of components from KingswaySoft on your Azure-SSIS IR by entering the product license key that you purchased from them in the **License key** box. The current integrated version is **20.2**.
- * If you select the **Theobald Software's Xtract IS** component, you can install the [Xtract IS](https://theobald-software.com/en/xtract-is/) suite of connectors for SAP systems (ERP, S/4HANA, BW) from Theobald Software on your Azure-SSIS IR. To do so, drag & drop/upload the product license file that you purchased from them beforehand into the **License file** input box. The current integrated version is **6.1.1.3**.
+* If you select the **Theobald Software's Xtract IS** component, you can install the [Xtract IS](https://theobald-software.com/en/xtract-is/) suite of connectors for SAP systems (ERP, S/4HANA, BW) from Theobald Software on your Azure-SSIS IR by dragging & dropping/uploading the product license file that you purchased from them into the **License file** box. The current integrated version is **6.5.13.18**.
- * If you select the **AecorSoft's Integration Service** component, you can install the [Integration Service](https://www.aecorsoft.com/en/products/integrationservice) suite of connectors for SAP and Salesforce systems from AecorSoft on your Azure-SSIS IR. To do so, enter the product license key that you purchased from them beforehand in the **License key** text box. The current integrated version is **3.0.00**.
+* If you select the **AecorSoft's Integration Service** component, you can install the [Integration Service](https://www.aecorsoft.com/en/products/integrationservice) suite of connectors for SAP and Salesforce systems from AecorSoft on your Azure-SSIS IR. To do so, enter the product license key that you purchased from them beforehand in the **License key** text box. The current integrated version is **3.0.00**.
- * If you select the **CData's SSIS Standard Package** component, you can install the [SSIS Standard Package](https://www.cdata.com/kb/entries/ssis-adf-packages.rst#standard) suite of most popular components from CData, such as Microsoft SharePoint connectors, on your Azure-SSIS IR. To do so, enter the product license key that you purchased from them beforehand in the **License key** text box. The current integrated version is **19.7354**.
+* If you select the **CData's SSIS Standard Package** component, you can install the [SSIS Standard Package](https://www.cdata.com/kb/entries/ssis-adf-packages.rst#standard) suite of most popular components from CData, such as Microsoft SharePoint connectors, on your Azure-SSIS IR. To do so, enter the product license key that you purchased from them beforehand in the **License key** text box. The current integrated version is **19.7354**.
- * If you select the **CData's SSIS Extended Package** component, you can install the [SSIS Extended Package](https://www.cdata.com/kb/entries/ssis-adf-packages.rst#extended) suite of all components from CData, such as Microsoft Dynamics 365 Business Central connectors and other components in their **SSIS Standard Package**, on your Azure-SSIS IR. To do so, enter the product license key that you purchased from them beforehand in the **License key** text box. The current integrated version is **19.7354**. Due to its large size, to avoid installation timeout, please ensure that your Azure-SSIS IR has at least 4 CPU cores per node.
+* If you select the **CData's SSIS Extended Package** component, you can install the [SSIS Extended Package](https://www.cdata.com/kb/entries/ssis-adf-packages.rst#extended) suite of all components from CData, such as Microsoft Dynamics 365 Business Central connectors and other components in their **SSIS Standard Package**, on your Azure-SSIS IR. To do so, enter the product license key that you purchased from them beforehand in the **License key** text box. The current integrated version is **19.7354**. Due to its large size, to avoid installation timeout, please ensure that your Azure-SSIS IR has at least 4 CPU cores per node.
Your added express custom setups will appear on the **Advanced settings** page. To remove them, select their check boxes, and then select **Delete**.
data-factory Managed Virtual Network Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/managed-virtual-network-private-endpoint.md
Benefits of using Managed Virtual Network:
- Managed Virtual Network along with Managed private endpoints protects against data exfiltration. > [!IMPORTANT]
->Currently, the managed VNet is only supported in the same region as Azure Data Factory region.
+>Currently, the managed virtual network is only supported in the same region as the Azure Data Factory region.
+
+> [!Note]
+>Because Azure Data Factory managed virtual network is still in public preview, there is no SLA guarantee.
+
+> [!Note]
+>An existing public Azure integration runtime can't be switched to an Azure integration runtime in the Azure Data Factory managed virtual network, and vice versa.
![ADF Managed Virtual Network architecture](./media/managed-vnet/managed-vnet-architecture-diagram.png)
databox-online Azure Stack Edge Gpu Deploy Configure Network Compute Web Proxy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md
Previously updated : 02/04/2021 Last updated : 05/11/2021 # Customer intent: As an IT admin, I need to understand how to connect and activate Azure Stack Edge Pro so I can use it to transfer data to Azure.
Follow these steps to enable compute and configure compute network.
This is an optional configuration.

> [!IMPORTANT]
-> * If you enable compute and use IoT Edge module on your Azure Stack Edge Pro device, we recommend you set web proxy authentication as **None**. NTLM is not supported.
> * Proxy-auto config (PAC) files are not supported. A PAC file defines how web browsers and other user agents can automatically choose the appropriate proxy server (access method) for fetching a given URL.
> * Transparent proxies work well with Azure Stack Edge Pro. For non-transparent proxies that intercept and read all the traffic (via their own certificates installed on the proxy server), upload the public key of the proxy's certificate as the signing chain on your Azure Stack Edge Pro device. You can then configure the proxy server settings on your Azure Stack Edge device. For more information, see [Bring your own certificates and upload through the local UI](azure-stack-edge-gpu-deploy-configure-certificates.md#bring-your-own-certificates).
This is an optional configuration.
1. On the **Web proxy settings** page, take the following steps:
+    1. In the **Web proxy URL** box, enter the URL in this format: `http://host-IP address or FQDN:Port number`. HTTPS URLs are not supported.
- 2. Under **Authentication**, select **None** or **NTLM**. If you enable compute and use IoT Edge module on your Azure Stack Edge Pro device, we recommend you set web proxy authentication to **None**. **NTLM** is not supported.
+ 2. To validate and apply the configured web proxy settings, select **Apply**.
- 3. If you're using authentication, enter a username and password.
-
- 4. To validate and apply the configured web proxy settings, select **Apply**.
-
- ![Local web UI "Web proxy settings" page 2](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/web-proxy-2.png)
+ ![Local web UI "Web proxy settings" page 2](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/web-proxy-2.png)<!--UI text update for instruction text is needed.-->
2. After the settings are applied, select **Next: Device**.
databox-online Azure Stack Edge Gpu Virtual Machine Sizes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-virtual-machine-sizes.md
Previously updated : 03/27/2021 Last updated : 05/12/2021
+#Customer intent: As an IT admin, I need to understand how to create and manage virtual machines (VMs) on my Azure Stack Edge Pro device by using APIs, so that I can efficiently manage my VMs.
# VM sizes and types for Azure Stack Edge Pro
databox-online Azure Stack Edge Mini R Deploy Configure Network Compute Web Proxy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-mini-r-deploy-configure-network-compute-web-proxy.md
Previously updated : 02/04/2021 Last updated : 05/11/2021 # Customer intent: As an IT admin, I need to understand how to connect and activate Azure Stack Edge Mini R so I can use it to transfer data to Azure.
Follow these steps to configure the network for your device.
4. In the local web UI, go to **Get started**. On the **Security** tile, select **Certificates** and then select **Configure**.
- [![Local web UI "Certificates" page](./media/azure-stack-edge-mini-r-deploy-configure-network-compute-web-proxy/get-started-1.png)](./media/azure-stack-edge-mini-r-deploy-configure-network-compute-web-proxy/get-started-1.png#lightbox)
+ [![Local web UI "Certificates" page](./media/azure-stack-edge-mini-r-deploy-configure-network-compute-web-proxy/get-started-1.png)](./media/azure-stack-edge-mini-r-deploy-configure-network-compute-web-proxy/get-started-1.png#lightbox)
- 1. Select **+ Add certificate**.
+ 1. Select **+ Add certificate**.
- [![Local web UI "Certificates" page 1](./media/azure-stack-edge-mini-r-deploy-configure-network-compute-web-proxy/add-wifi-cert-1.png)](./media/azure-stack-edge-mini-r-deploy-configure-network-compute-web-proxy/add-wifi-cert-1.png#lightbox)
+ [![Local web UI "Certificates" page 1](./media/azure-stack-edge-mini-r-deploy-configure-network-compute-web-proxy/add-wifi-cert-1.png)](./media/azure-stack-edge-mini-r-deploy-configure-network-compute-web-proxy/add-wifi-cert-1.png#lightbox)
- 2. Upload the signing chain and select **Apply**.
+ 2. Upload the signing chain and select **Apply**.
- ![Local web UI "Certificates" page 2](./media/azure-stack-edge-mini-r-deploy-configure-network-compute-web-proxy/add-wifi-cert-2.png)
+ ![Local web UI "Certificates" page 2](./media/azure-stack-edge-mini-r-deploy-configure-network-compute-web-proxy/add-wifi-cert-2.png)
- 3. Repeat the procedure with the Wi-Fi certificate.
+ 3. Repeat the procedure with the Wi-Fi certificate.
- ![Local web UI "Certificates" page 3](./media/azure-stack-edge-mini-r-deploy-configure-network-compute-web-proxy/add-wifi-cert-4.png)
+ ![Local web UI "Certificates" page 3](./media/azure-stack-edge-mini-r-deploy-configure-network-compute-web-proxy/add-wifi-cert-4.png)
+    4. The new certificates should be displayed on the **Certificates** page.
+    [![Local web UI "Certificates" page 4](./media/azure-stack-edge-mini-r-deploy-configure-network-compute-web-proxy/add-wifi-cert-5.png)](./media/azure-stack-edge-mini-r-deploy-configure-network-compute-web-proxy/add-wifi-cert-5.png#lightbox)
-3. On the **Network** tile, select **Configure**.
-
- On your physical device, there are five network interfaces. PORT 1 and PORT 2 are 1-Gbps network interfaces. PORT 3 and PORT 4 are all 10-Gbps network interfaces. The fifth port is the Wi-Fi port.
+ 5. Go back to **Get started**.
- [![Local web UI "Network settings" page 1](./media/azure-stack-edge-mini-r-deploy-configure-network-compute-web-proxy/configure-wifi-1.png)](./media/azure-stack-edge-mini-r-deploy-configure-network-compute-web-proxy/configure-wifi-1.png#lightbox)
-
- Select the Wi-Fi port and configure the port settings.
-
- > [!IMPORTANT]
- > We strongly recommend that you configure a static IP address for the Wi-Fi port.
+5. On the **Network** tile, select **Configure**.
- ![Local web UI "Network settings" page 2](./media/azure-stack-edge-mini-r-deploy-configure-network-compute-web-proxy/configure-wifi-2.png)
+    On your physical device, there are five network interfaces. PORT 1 and PORT 2 are 1-Gbps network interfaces. PORT 3 and PORT 4 are both 10-Gbps network interfaces. The fifth port is the Wi-Fi port.
- The **Network** page updates after you apply the Wi-Fi port settings.
+ [![Local web UI "Network settings" page 1](./media/azure-stack-edge-mini-r-deploy-configure-network-compute-web-proxy/configure-wifi-1.png)](./media/azure-stack-edge-mini-r-deploy-configure-network-compute-web-proxy/configure-wifi-1.png#lightbox)
- ![Local web UI "Network settings" page 3](./media/azure-stack-edge-mini-r-deploy-configure-network-compute-web-proxy/configure-wifi-4.png)
-
-4. Select **Add Wi-Fi profile** and upload your Wi-Fi profile.
+ Select the Wi-Fi port and configure the port settings.
- ![Local web UI "Port WiFi Network settings" 1](./media/azure-stack-edge-mini-r-deploy-configure-network-compute-web-proxy/add-wifi-profile-1.png)
-
- A wireless network profile contains the SSID (network name), password key, and security information to be able to connect to a wireless network. You can get the Wi-Fi profile for your environment from your network administrator.
+ > [!IMPORTANT]
+ > We strongly recommend that you configure a static IP address for the Wi-Fi port.
+
+ ![Local web UI "Network settings" page 2](./media/azure-stack-edge-mini-r-deploy-configure-network-compute-web-proxy/configure-wifi-2.png)
- ![Local web UI "Port WiFi Network settings" 2](./media/azure-stack-edge-mini-r-deploy-configure-network-compute-web-proxy/add-wifi-profile-2.png)
+ The **Network** page updates after you apply the Wi-Fi port settings.
- After the profile is added, the list of Wi-Fi profiles updates to reflect the new profile. The profile should show the **Connection status** as **Disconnected**.
+ ![Local web UI "Network settings" page 3](./media/azure-stack-edge-mini-r-deploy-configure-network-compute-web-proxy/configure-wifi-4.png)
- ![Local web UI "Port WiFi Network settings" 3](./media/azure-stack-edge-mini-r-deploy-configure-network-compute-web-proxy/add-wifi-profile-3.png)
+6. Select **Add Wi-Fi profile** and upload your Wi-Fi profile.
+ ![Local web UI "Port WiFi Network settings" 1](./media/azure-stack-edge-mini-r-deploy-configure-network-compute-web-proxy/add-wifi-profile-1.png)
-5. After the wireless network profile is successfully loaded, connect to this profile. Select **Connect to Wi-Fi profile**.
+    A wireless network profile contains the SSID (network name), password key, and security information needed to connect to a wireless network. You can get the Wi-Fi profile for your environment from your network administrator.
- ![Local web UI "Port Wi-Fi Network settings" 4](./media/azure-stack-edge-mini-r-deploy-configure-network-compute-web-proxy/add-wifi-profile-4.png)
+ ![Local web UI "Port WiFi Network settings" 2](./media/azure-stack-edge-mini-r-deploy-configure-network-compute-web-proxy/add-wifi-profile-2.png)
-6. Select the Wi-Fi profile that you added in the previous step, and select **Apply**.
+ After the profile is added, the list of Wi-Fi profiles updates to reflect the new profile. The profile should show the **Connection status** as **Disconnected**.
- ![Local web UI "Port Wi-Fi Network settings" 5](./media/azure-stack-edge-mini-r-deploy-configure-network-compute-web-proxy/add-wifi-profile-5.png)
+ ![Local web UI "Port WiFi Network settings" 3](./media/azure-stack-edge-mini-r-deploy-configure-network-compute-web-proxy/add-wifi-profile-3.png)
- The **Connection status** should update to **Connected**. The signal strength updates to indicate the quality of the signal.
+7. After the wireless network profile is successfully loaded, connect to this profile. Select **Connect to Wi-Fi profile**.
- ![Local web UI "Port Wi-Fi Network settings" 6](./media/azure-stack-edge-mini-r-deploy-configure-network-compute-web-proxy/add-wifi-profile-6.png)
+ ![Local web UI "Port Wi-Fi Network settings" 4](./media/azure-stack-edge-mini-r-deploy-configure-network-compute-web-proxy/add-wifi-profile-4.png)
- > [!NOTE]
- > To transfer large amounts of data, we recommend that you use a wired connection instead of the wireless network.
+8. Select the Wi-Fi profile that you added in the previous step, and select **Apply**.
-6. Disconnect PORT 1 on the device from the laptop.
+ ![Local web UI "Port Wi-Fi Network settings" 5](./media/azure-stack-edge-mini-r-deploy-configure-network-compute-web-proxy/add-wifi-profile-5.png)
-7. As you configure the network settings, keep in mind:
+ The **Connection status** should update to **Connected**. The signal strength updates to indicate the quality of the signal.
- - If DHCP is enabled in your environment, network interfaces are automatically configured. An IP address, subnet, gateway, and DNS are automatically assigned.
- - If DHCP isn't enabled, you can assign static IPs if needed.
- - You can configure your network interface as IPv4.
- - Network Interface Card (NIC) Teaming or link aggregation is not supported with Azure Stack Edge.
- - Serial number for any port corresponds to the node serial number. For a K-series device, only one serial number is displayed.
+ ![Local web UI "Port Wi-Fi Network settings" 6](./media/azure-stack-edge-mini-r-deploy-configure-network-compute-web-proxy/add-wifi-profile-6.png)
- >[!NOTE]
+ > [!NOTE]
+ > To transfer large amounts of data, we recommend that you use a wired connection instead of the wireless network.
+
+9. Disconnect PORT 1 on the device from the laptop.
+
+10. As you configure the network settings, keep in mind:
+
+ - If DHCP is enabled in your environment, network interfaces are automatically configured. An IP address, subnet, gateway, and DNS are automatically assigned.
+ - If DHCP isn't enabled, you can assign static IPs if needed.
+ - You can configure your network interface as IPv4.
+ - Network Interface Card (NIC) Teaming or link aggregation is not supported with Azure Stack Edge.
+    - The serial number for any port corresponds to the node serial number. For a K-series device, only one serial number is displayed.
+
+ > [!NOTE]
> We recommend that you do not switch the local IP address of the network interface from static to DHCP unless you have another IP address to connect to the device. If you use one network interface and switch to DHCP, there would be no way to determine the DHCP address. If you want to change to a DHCP address, wait until after the device has registered with the service, and then change it. You can then view the IPs of all the adapters in the **Device properties** in the Azure portal for your service.

After you have configured and applied the network settings, select **Next: Compute** to configure compute network.
Follow these steps to enable compute and configure compute network.
1. In the **Compute** page, select a network interface that you want to enable for compute.
+    ![Compute page in local UI 2](./media/azure-stack-edge-mini-r-deploy-configure-network-compute-web-proxy/compute-network-1.png)
1. In the **Network settings** dialog, select **Enable**. When you enable compute, a virtual switch is created on your device on that network interface. The virtual switch is used for the compute infrastructure on the device.

1. Assign **Kubernetes node IPs**. These static IP addresses are for the compute VM.
+    For an *n*-node device, a contiguous range of a minimum of *n+1* IPv4 addresses is provided for the compute VM by using the start and end IP addresses. Given that Azure Stack Edge is a 1-node device, a minimum of 2 contiguous IPv4 addresses is provided.
+    > [!IMPORTANT]
+    > Kubernetes on Azure Stack Edge uses the 172.27.0.0/16 subnet for pods and the 172.28.0.0/16 subnet for services. Make sure that these subnets are not in use in your network. If they are already in use, you can change them by running the `Set-HcsKubeClusterNetworkInfo` cmdlet from the PowerShell interface of the device, as shown in the sketch after these steps. For more information, see [Change Kubernetes pod and service subnets](azure-stack-edge-gpu-connect-powershell-interface.md#change-kubernetes-pod-and-service-subnets).
1. Assign **Kubernetes external service IPs**. These are also the load-balancing IP addresses. These contiguous IP addresses are for services that you want to expose outside the Kubernetes cluster, and you specify the static IP range depending on the number of services exposed.
+    > [!IMPORTANT]
+    > We strongly recommend that you specify a minimum of 1 IP address for Azure Stack Edge Mini R Hub service to access compute modules. You can then optionally specify additional IP addresses for other services/IoT Edge modules (1 per service/module) that need to be accessed from outside the cluster. The service IP addresses can be updated later.
1. Select **Apply**.
+    ![Compute page in local UI 3](./media/azure-stack-edge-mini-r-deploy-configure-network-compute-web-proxy/compute-network-3.png)
1. The configuration takes a couple of minutes to apply, and you may need to refresh the browser. You can see that the specified port is enabled for compute.
+    ![Compute page in local UI 4](./media/azure-stack-edge-mini-r-deploy-configure-network-compute-web-proxy/compute-network-4.png)
+    Select **Next: Web proxy** to configure web proxy.
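If the default Kubernetes subnets conflict with your network, you can change them from the device's PowerShell interface before configuring compute. Here's a minimal sketch, assuming you're already connected to the device's PowerShell interface; the subnet values are illustrative placeholders, not required values.

```powershell
# Minimal sketch: change the Kubernetes pod and service subnets from the
# device's PowerShell interface. The subnet values below are placeholders;
# substitute ranges that are free in your network.
Set-HcsKubeClusterNetworkInfo -PodSubnet 10.96.0.1/16 -ServiceSubnet 10.97.0.1/16

# Verify the configuration that is now in effect.
Get-HcsKubeClusterNetworkInfo
```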
## Configure web proxy
Follow these steps to enable compute and configure compute network.
This is an optional configuration.

> [!IMPORTANT]
-> * If you enable compute and use IoT Edge module on your Azure Stack Edge Mini R device, we recommend you set web proxy authentication as **None**. NTLM is not supported.
->* Proxy-auto config (PAC) files are not supported. A PAC file defines how web browsers and other user agents can automatically choose the appropriate proxy server (access method) for fetching a given URL. Proxies that try to intercept and read all the traffic (then re-sign everything with their own certification) aren't compatible since the proxy's certificate is not trusted. Typically transparent proxies work well with Azure Stack Edge Mini R. Non-transparent web proxies are not supported.
+> Proxy-auto config (PAC) files are not supported. A PAC file defines how web browsers and other user agents can automatically choose the appropriate proxy server (access method) for fetching a given URL. Proxies that try to intercept and read all the traffic (then re-sign everything with their own certification) aren't compatible since the proxy's certificate is not trusted. Typically transparent proxies work well with Azure Stack Edge Mini R. Non-transparent web proxies are not supported.
1. On the **Web proxy settings** page, take the following steps:
- 1. In the **Web proxy URL** box, enter the URL in this format: `http://host-IP address or FQDN:Port number`. HTTPS URLs are not supported.
-
- 2. Under **Authentication**, select **None** or **NTLM**. If you enable compute and use IoT Edge module on your Azure Stack Edge Mini R device, we recommend you set web proxy authentication to **None**. **NTLM** is not supported.
+ 1. In the **Web proxy URL** box, enter the URL in this format: `http://host-IP address or FQDN:Port number`. HTTPS URLs are not supported.
- 3. If you're using authentication, enter a username and password.
+ 2. To validate and apply the configured web proxy settings, select **Apply**.
- 4. To validate and apply the configured web proxy settings, select **Apply**.
-
- ![Local web UI "Web proxy settings" page](./media/azure-stack-edge-mini-r-deploy-configure-network-compute-web-proxy/web-proxy-1.png)
+ ![Local web UI "Web proxy settings" page](./media/azure-stack-edge-mini-r-deploy-configure-network-compute-web-proxy/web-proxy-1.png)<!--UI text update is needed to remove NTLM from instruction text.-->
2. After the settings are applied, select **Next: Device**.
databox-online Azure Stack Edge Pro R Deploy Configure Network Compute Web Proxy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-pro-r-deploy-configure-network-compute-web-proxy.md
Previously updated : 02/04/2021 Last updated : 05/11/2021 # Customer intent: As an IT admin, I need to understand how to connect and activate Azure Stack Edge Pro R so I can use it to transfer data to Azure.
Follow these steps to enable compute and configure compute network.
This is an optional configuration.

> [!IMPORTANT]
-> * If you enable compute and use IoT Edge module on your Azure Stack Edge Pro R device, we recommend you set web proxy authentication as **None**. NTLM is not supported.
->* Proxy-auto config (PAC) files are not supported. A PAC file defines how web browsers and other user agents can automatically choose the appropriate proxy server (access method) for fetching a given URL. Proxies that try to intercept and read all the traffic (then re-sign everything with their own certification) aren't compatible since the proxy's certificate is not trusted. Typically transparent proxies work well with Azure Stack Edge Pro R. Non-transparent web proxies are not supported.
+> Proxy-auto config (PAC) files are not supported. A PAC file defines how web browsers and other user agents can automatically choose the appropriate proxy server (access method) for fetching a given URL. Proxies that try to intercept and read all the traffic (then re-sign everything with their own certification) aren't compatible since the proxy's certificate is not trusted. Typically transparent proxies work well with Azure Stack Edge Pro R. Non-transparent web proxies are not supported.
1. On the **Web proxy settings** page, take the following steps:
+    1. In the **Web proxy URL** box, enter the URL in this format: `http://host-IP address or FQDN:Port number`. HTTPS URLs are not supported.
- 2. Under **Authentication**, select **None** or **NTLM**. If you enable compute and use IoT Edge module on your Azure Stack Edge Pro R device, we recommend you set web proxy authentication to **None**. **NTLM** is not supported.
-
- 3. If you're using authentication, enter a username and password.
-
- 4. To validate and apply the configured web proxy settings, select **Apply**.
+ 2. To validate and apply the configured web proxy settings, select **Apply**.
- ![Local web UI "Web proxy settings" page 2](./media/azure-stack-edge-pro-r-deploy-configure-network-compute-web-proxy/web-proxy-2.png)
+ ![Local web UI "Web proxy settings" page 2](./media/azure-stack-edge-pro-r-deploy-configure-network-compute-web-proxy/web-proxy-2.png)<!--UI text update for instruction text is needed.-->
2. After the settings are applied, select **Next: Device**.
firewall Quick Create Ipgroup Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/quick-create-ipgroup-template.md
Previously updated : 08/28/2020 Last updated : 05/10/2021
In this quickstart, you use an Azure Resource Manager template (ARM template) to
If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
-[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2F101-azurefirewall-create-with-ipgroups-and-linux-jumpbox%2Fazuredeploy.json)
+[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.network%2Fazurefirewall-create-with-ipgroups-and-linux-jumpbox%2Fazuredeploy.json)
## Prerequisites
This template creates an Azure Firewall and IP Groups, along with the necessary
The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/101-azurefirewall-create-with-ipgroups-and-linux-jumpbox). Multiple Azure resources are defined in the template:
Deploy the ARM template to Azure:
1. Select **Deploy to Azure** to sign in to Azure and open the template. The template creates an Azure Firewall, the network infrastructure, and two virtual machines.
- [![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2F101-azurefirewall-create-with-ipgroups-and-linux-jumpbox%2Fazuredeploy.json)
+ [![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.network%2Fazurefirewall-create-with-ipgroups-and-linux-jumpbox%2Fazuredeploy.json)
2. In the portal, on the **Create an Azure Firewall with IpGroups** page, type or select the following values:
+    - Subscription: Select from existing subscriptions
    - Resource group: Select from existing resource groups or select **Create new**, and select **OK**.
    - Location: Select a location
+    - Virtual Network Name: Type a name for the new virtual network (VNet)
+    - IP Group Name 1: Type a name for IP Group 1
+    - IP Group Name 2: Type a name for IP Group 2
+    - Admin Username: Type a username for the administrator user account
+    - Authentication: Select sshPublicKey or password
    - Admin Password: Type an administrator password or key

3. Select **I agree to the terms and conditions stated above**, and then select **Purchase**. The deployment can take 10 minutes or longer to complete. A PowerShell alternative to the portal deployment is sketched after these steps.
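If you prefer scripting over the **Deploy to Azure** button, the same template can be deployed with Azure PowerShell. Here's a minimal sketch; the resource group name and location are placeholder values, and the template URI is the decoded raw-GitHub path behind the portal link above.

```powershell
# Minimal sketch: deploy the quickstart template with Azure PowerShell.
# The resource group name and location are placeholders; adjust as needed.
$templateUri = "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.network/azurefirewall-create-with-ipgroups-and-linux-jumpbox/azuredeploy.json"

New-AzResourceGroup -Name "fw-ipgroups-rg" -Location "EastUS"

# You're prompted for any required template parameters, such as the admin password.
New-AzResourceGroupDeployment `
  -ResourceGroupName "fw-ipgroups-rg" `
  -TemplateUri $templateUri
```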
frontdoor Front Door Quickstart Template Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/front-door-quickstart-template-samples.md
The following table includes links to Azure Resource Manager deployment model te
| Template | Description |
| --- | --- |
-| [Create a basic Front Door](https://github.com/Azure/azure-quickstart-templates/tree/master/101-front-door-create-basic)| Creates a basic Front Door configuration with a single backend. |
+| [Create a basic Front Door](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/front-door-create-basic)| Creates a basic Front Door configuration with a single backend. |
| [Create a Front Door with multiple backends and backend pools and URL based routing](https://github.com/Azure/azure-quickstart-templates/tree/master/101-front-door-create-multiple-backends)| Creates a Front Door with load balancing configured for multiple backends in a backend pool and also across backend pools based on URL path. |
| [Onboard a custom domain with Front Door](https://github.com/Azure/azure-quickstart-templates/tree/master/101-front-door-custom-domain)| Add a custom domain to your Front Door. |
| [Create Front Door with geo filtering](https://github.com/Azure/azure-quickstart-templates/tree/master/101-front-door-geo-filtering)| Create a Front Door that allows/blocks traffic from certain countries/regions. |
frontdoor Quickstart Create Front Door Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/quickstart-create-front-door-template.md
The template used in this quickstart is from [Azure Quickstart Templates](https:
In this quickstart, you'll create a Front Door configuration with a single backend and a single default path matching `/*`. One Azure resource is defined in the template:
frontdoor Concept Rule Set Actions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/standard-premium/concept-rule-set-actions.md
In this example, we rewrite all requests to the path `/redirection`, and don't p
+## <a name="OriginGroupOverride"></a> Origin group override
+
+Use the **Origin group override** action to change the origin group that the request should be routed to.
+
+### Properties
+
+| Property | Supported values |
+|--|--|
+| Origin group | The origin group that the request should be routed to. This overrides the configuration specified in the Front Door endpoint route. |
+
+### Example
+
+In this example, we route all matched requests to an origin group named `SecondOriginGroup`, regardless of the configuration in the Front Door endpoint route.
+
+# [Portal](#tab/portal)
+
+# [JSON](#tab/json)
+
+```json
+{
+ "name": "OriginGroupOverride",
+ "parameters": {
+ "originGroup": {
+ "id": "/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/Microsoft.Cdn/profiles/<profile-name>/originGroups/SecondOriginGroup"
+ },
+ "@odata.type": "#Microsoft.Azure.Cdn.Models.DeliveryRuleOriginGroupOverrideActionParameters"
+ }
+}
+```
+
+# [Bicep](#tab/bicep)
+
+```bicep
+{
+ name: 'OriginGroupOverride'
+ parameters: {
+ originGroup: {
+ id: '/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/Microsoft.Cdn/profiles/<profile-name>/originGroups/SecondOriginGroup'
+ }
+ '@odata.type': '#Microsoft.Azure.Cdn.Models.DeliveryRuleOriginGroupOverrideActionParameters'
+ }
+}
+```
+++

## Server variables

Rule Set server variables provide access to structured information about the request. You can use server variables to dynamically change the request/response headers or URL rewrite paths/query strings, for example, when a new page loads or when a form is posted.
frontdoor Resource Manager Template Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/standard-premium/resource-manager-template-samples.md
Previously updated : 04/16/2021 Last updated : 05/11/2021 # Azure Resource Manager templates for Azure Front Door
The following table includes links to Azure Resource Manager templates for Azure
| [WAF policy with geo-filtering](https://github.com/Azure/azure-quickstart-templates/tree/master/201-front-door-standard-premium-geo-filtering/) | Creates a Front Door profile and WAF with a custom rule to perform geo-filtering. |
|**App Service origins**| **Description** |
| [App Service](https://github.com/Azure/azure-quickstart-templates/tree/master/201-front-door-standard-premium-app-service-public) | Creates an App Service app with a public endpoint, and a Front Door profile. |
-| [App Service with Private Link](https://github.com/Azure/azure-quickstart-templates/tree/master/201-front-door-premium-app-service-private-link) | Creates an App Service app with a private endpoint, and a Front Door profile. |
+| [App Service with Private Link](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/front-door-premium-app-service-private-link) | Creates an App Service app with a private endpoint, and a Front Door profile. |
|**Azure Functions origins**| **Description** |
| [Azure Functions](https://github.com/Azure/azure-quickstart-templates/tree/master/201-front-door-standard-premium-function-public/) | Creates an Azure Functions app with a public endpoint, and a Front Door profile. |
-| [Azure Functions with Private Link](https://github.com/Azure/azure-quickstart-templates/tree/master/201-front-door-premium-function-private-link) | Creates an Azure Functions app with a private endpoint, and a Front Door profile. |
+| [Azure Functions with Private Link](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/front-door-premium-function-private-link) | Creates an Azure Functions app with a private endpoint, and a Front Door profile. |
|**API Management origins**| **Description** |
| [API Management (external)](https://github.com/Azure/azure-quickstart-templates/tree/master/201-front-door-standard-premium-api-management-external) | Creates an API Management instance with external VNet integration, and a Front Door profile. |
|**Storage origins**| **Description** |
| [Storage static website](https://github.com/Azure/azure-quickstart-templates/tree/master/201-front-door-standard-premium-storage-static-website) | Creates an Azure Storage account and static website with a public endpoint, and a Front Door profile. |
-| [Storage blobs with Private Link](https://github.com/Azure/azure-quickstart-templates/tree/master/201-front-door-premium-storage-blobs-private-link) | Creates an Azure Storage account and blob container with a private endpoint, and a Front Door profile. |
+| [Storage blobs with Private Link](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/front-door-premium-storage-blobs-private-link) | Creates an Azure Storage account and blob container with a private endpoint, and a Front Door profile. |
|**Application Gateway origins**| **Description** |
| [Application Gateway](https://github.com/Azure/azure-quickstart-templates/tree/master/201-front-door-standard-premium-application-gateway-public) | Creates an Application Gateway, and a Front Door profile. |
|**Virtual machine origins**| **Description** |
governance Guest Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/concepts/guest-configuration.md
assignment are automatically included.
Customers designing a highly available solution should consider the redundancy planning requirements for [virtual machines](../../../virtual-machines/availability.md) because guest assignments are extensions of
-machine resources in Azure. If a physical region becomes unavailable in Azure, it's not possible
-to view historical reports for a guest assignment until the region is restored.
+machine resources in Azure. When guest assignment resources are provisioned into an Azure region that is
+[paired](../../../best-practices-availability-paired-regions.md), guest assignment reports remain available as long as
+at least one region in the pair is available. If the Azure region isn't paired and
+it becomes unavailable, it isn't possible to access reports for a guest assignment until
+the region is restored.
When considering an architecture for highly available applications, especially where virtual machines are provisioned in
healthcare-apis How To Run A Reindex https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/how-to-run-a-reindex.md
If the request is successful, a status of **201 Created** gets returned. The res
```json
HTTP/1.1 201 Created
-Content-Location: https://cv-cosmos1.azurewebsites.net/_operations/reindex/560c7c61-2c70-4c54-b86d-c53a9d29495e
+Content-Location: https://{{FHIR URL}}/_operations/reindex/560c7c61-2c70-4c54-b86d-c53a9d29495e
{ "resourceType": "Parameters",
Content-Location: https://cv-cosmos1.azurewebsites.net/_operations/reindex/560c7
```

> [!NOTE]
-> To check the status of or to cancel a reindex job, you'll need the reindex ID. This is the ID of the resulting Parameters resource (shown above) and can also be found as the GUID at the end of the Content-Location string:
-
-`https://{{FHIR URL}}/_operations/reindex/560c7c61-2c70-4c54-b86d-c53a9d29495e`
+> To check the status of or to cancel a reindex job, you'll need the reindex ID. This is the ID of the resulting Parameters resource (shown above). The reindex ID can also be found at the end of the Content-Location string. In the example above, it would be `560c7c61-2c70-4c54-b86d-c53a9d29495e`.
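For quick reference, here's a minimal PowerShell sketch of polling a reindex job by its ID. The FHIR URL, reindex ID, and token values are placeholders, and the sketch assumes you already have a valid OAuth access token for the FHIR server.

```powershell
# Minimal sketch: poll a reindex job by ID. All values below are placeholders;
# it assumes you already hold a valid access token for the FHIR server.
$fhirUrl   = "https://{{FHIR URL}}"
$reindexId = "560c7c61-2c70-4c54-b86d-c53a9d29495e"
$token     = "<access-token>"

$headers = @{ Authorization = "Bearer $token" }

# GET returns the Parameters resource describing the job's progress.
Invoke-RestMethod -Method Get -Uri "$fhirUrl/_operations/reindex/$reindexId" -Headers $headers

# Cancelling is assumed to be a DELETE to the same URL:
# Invoke-RestMethod -Method Delete -Uri "$fhirUrl/_operations/reindex/$reindexId" -Headers $headers
```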
## How to check the status of a reindex job
iot-accelerators Howto Opc Publisher Run https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-accelerators/howto-opc-publisher-run.md
To make the IoT Edge module configuration files accessible in the host file syst
{ "Hostname": "publisher", "Cmd": [
- "--pf=./pn.json",
+ "--pf=/appdata/pn.json",
"--aa" ], "HostConfig": {
It implements a number of tags, which generate random data and tags with anomali
## Next steps
+Now that you've learned how to run OPC Publisher, the recommended next steps are to learn about [OPC Twin](overview-opc-twin.md) and [OPC Vault](overview-opc-vault.md).
key-vault How To Integrate Certificate Authority https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/certificates/how-to-integrate-certificate-authority.md
Title: Integrating Key Vault with DigiCert Certificate Authority
-description: How to integrate Key Vault with DigiCert Certificate Authority
+ Title: Integrating Key Vault with DigiCert certificate authority
+description: This article describes how to integrate Key Vault with DigiCert certificate authority so you can provision, manage, and deploy certificates for your network.
tags: azure-resource-manager
Last updated 06/02/2020
-# Integrating Key Vault with DigiCert Certificate Authority
+# Integrating Key Vault with DigiCert certificate authority
-Azure Key Vault allows you to easily provision, manage, and deploy digital certificates for your network and to enable secure communications for applications. A Digital certificate is an electronic credential to establish proof of identity in an electronic transaction.
+Azure Key Vault allows you to easily provision, manage, and deploy digital certificates for your network and to enable secure communications for applications. A digital certificate is an electronic credential that establishes proof of identity in an electronic transaction.
-Azure key vault users can generate DigiCert certificates directly from their Key Vault. Key Vault would ensure end-to-end certificate lifecycle management for those certificates issued by DigiCert through Key VaultΓÇÖs trusted partnership with DigiCert Certificate Authority.
+Azure Key Vault users can generate DigiCert certificates directly from their key vaults. Key Vault has a trusted partnership with DigiCert certificate authority. This partnership ensures end-to-end certificate lifecycle management for certificates issued by DigiCert.
-For more general information about Certificates, see [Azure Key Vault Certificates](./about-certificates.md).
+For more general information about certificates, see [Azure Key Vault certificates](./about-certificates.md).
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you start.
## Prerequisites
-To complete this guide, you must have the following resources.
-* A key vault. You can use an existing key vault, or create a new one by following the steps in one of these quickstarts:
- - [Create a key vault with the Azure CLI](../general/quick-create-cli.md)
- - [Create a key vault with Azure PowerShell](../general/quick-create-powershell.md)
- - [Create a key vault with the Azure portal](../general/quick-create-portal.md).
-* You need to activate DigiCert CertCentral account. [Sign up](https://www.digicert.com/account/signup/) for your CertCentral account.
-* Administrator level permissions in your accounts.
+To complete the procedures in this article, you need to have:
+* A key vault. You can use an existing key vault or create one by completing the steps in one of these quickstarts:
+ - [Create a key vault by using the Azure CLI](../general/quick-create-cli.md)
+ - [Create a key vault by using Azure PowerShell](../general/quick-create-powershell.md)
+ - [Create a key vault by using the Azure portal](../general/quick-create-portal.md)
+* An activated DigiCert CertCentral account. [Sign up](https://www.digicert.com/account/signup/) for your CertCentral account.
+* Administrator-level permissions in your accounts.
### Before you begin
-Make sure you have the following information handy from your DigiCert CertCentral account:
-- CertCentral Account ID
+Make sure you have the following information from your DigiCert CertCentral account:
+- CertCentral account ID
- Organization ID
- API key
-## Adding Certificate Authority in Key Vault
-After gathering above information from DigiCert CertCentral account, you can now add DigiCert to Certificate Authority list in the key vault.
+## Add the certificate authority in Key Vault
+After you gather the preceding information from your DigiCert CertCentral account, you can add DigiCert to the certificate authority list in the key vault.
### Azure portal
-1. To add DigiCert certificate authority, navigate to the key vault you want to add DigiCert.
-2. On the Key Vault properties pages, select **Certificates**.
-3. Select **Certificate Authorities** tab.
-![select certificate authorities](../media/certificates/how-to-integrate-certificate-authority/select-certificate-authorities.png)
-4. Select **Add** option.
- ![add certificate authorities](../media/certificates/how-to-integrate-certificate-authority/add-certificate-authority.png)
-5. On the **Create a certificate Authority** screen choose the following values:
- - **Name**: Add an identifiable Issuer name. Example DigicertCA
- - **Provider**: Select DigiCert from the menu.
- - **Account ID**: Enter your DigiCert CertCentral Account ID
- - **Account Password**: Enter the API key you generated in your DigiCert CertCentral Account
- - **Organization ID**: Enter OrgID gathered from DigiCert CertCentral Account
- - Click **Create**.
+1. To add DigiCert certificate authority, go to the key vault you want to add it to.
+2. On the Key Vault property page, select **Certificates**.
+3. Select the **Certificate Authorities** tab:
+4. Select **Add**:
+5. Under **Create a certificate authority**, enter these values:
+ - **Name**: An identifiable issuer name. For example, **DigiCertCA**.
+ - **Provider**: **DigiCert**.
+ - **Account ID**: Your DigiCert CertCentral account ID.
+ - **Account Password**: The API key you generated in your DigiCert CertCentral account.
+ - **Organization ID**: The organization ID from your DigiCert CertCentral account.
+
+1. Select **Create**.
-6. You will see that DigicertCA has now been added in Certificate Authorities list.
+DigiCertCA is now in the certificate authority list.
### Azure PowerShell
-Azure PowerShell is used to create and manage Azure resources using commands or scripts. Azure hosts Azure Cloud Shell, an interactive shell environment that you can use through your Azure portal in the browser itself.
+You can use Azure PowerShell to create and manage Azure resources by using commands or scripts. Azure hosts Azure Cloud Shell, an interactive shell environment that you can use through the Azure portal in a browser.
-If you choose to install and use PowerShell locally, this tutorial requires Azure PowerShell module version 1.0.0 or later. Type `$PSVersionTable.PSVersion` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). If you are running PowerShell locally, you also need to run `Login-AzAccount` to create a connection with Azure.
+If you choose to install and use PowerShell locally, you need Azure AZ PowerShell module 1.0.0 or later to complete the procedures here. Type `$PSVersionTable.PSVersion` to determine the version. If you need to upgrade, see [Install Azure AZ PowerShell module](/powershell/azure/install-az-ps). If you're running PowerShell locally, you also need to run `Login-AzAccount` to create a connection with Azure:
```azurepowershell-interactive Login-AzAccount ```
-1. Create a **resource group**
+1. Create an Azure resource group by using [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup). A resource group is a logical container into which Azure resources are deployed and managed.
-Create an Azure resource group with [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup). A resource group is a logical container into which Azure resources are deployed and managed.
+ ```azurepowershell-interactive
+ New-AzResourceGroup -Name ContosoResourceGroup -Location EastUS
+ ```
-```azurepowershell-interactive
-New-AzResourceGroup -Name ContosoResourceGroup -Location EastUS
-```
+2. Create a key vault that has a unique name. Here, `Contoso-Vaultname` is the name for the key vault.
-2. Create a **Key Vault**
+ - **Vault name**: `Contoso-Vaultname`
+ - **Resource group name**: `ContosoResourceGroup`
+ - **Location**: `EastUS`
-You must use a unique name for your key vault. Here "Contoso-Vaultname" is the name for Key Vault throughout this guide.
+ ```azurepowershell-interactive
+ New-AzKeyVault -Name 'Contoso-Vaultname' -ResourceGroupName 'ContosoResourceGroup' -Location 'EastUS'
+ ```
-- **Vault name** Contoso-Vaultname.
-- **Resource group name** ContosoResourceGroup.
-- **Location** EastUS.
+3. Define variables for the following values from your DigiCert CertCentral account:
-```azurepowershell-interactive
-New-AzKeyVault -Name 'Contoso-Vaultname' -ResourceGroupName 'ContosoResourceGroup' -Location 'EastUS'
-```
+ - **Account ID**
+ - **Organization ID**
+ - **API Key**
-3. Define variables for information gathered from DigiCert CertCentral account.
+ ```azurepowershell-interactive
+ $accountId = "myDigiCertCertCentralAccountID"
+ $org = New-AzKeyVaultCertificateOrganizationDetail -Id OrganizationIDfromDigiCertAccount
+    $secureApiKey = ConvertTo-SecureString DigiCertCertCentralAPIKey -AsPlainText -Force
+ ```
-- Define **Account ID** variable
-- Define **Org ID** variable
-- Define **API Key** variable
+4. Set the issuer. Doing so will add Digicert as a certificate authority in the key vault. [Learn more about the parameters.](/powershell/module/az.keyvault/Set-AzKeyVaultCertificateIssuer)
+ ```azurepowershell-interactive
+ Set-AzKeyVaultCertificateIssuer -VaultName "Contoso-Vaultname" -Name "TestIssuer01" -IssuerProvider DigiCert -AccountId $accountId -ApiKey $secureApiKey -OrganizationDetails $org -PassThru
+ ```
-```azurepowershell-interactive
-$accountId = "myDigiCertCertCentralAccountID"
-$org = New-AzKeyVaultCertificateOrganizationDetail -Id OrganizationIDfromDigiCertAccount
-$secureApiKey = ConvertTo-SecureString DigiCertCertCentralAPIKey -AsPlainText -Force
-```
+5. Set the policy for the certificate and issuing certificate from DigiCert directly in Key Vault:
-4. Set **Issuer**. This will add Digicert as a Certificate Authority in the key vault. To learn more about the parameters, [read here](/powershell/module/az.keyvault/Set-AzKeyVaultCertificateIssuer)
-```azurepowershell-interactive
-Set-AzKeyVaultCertificateIssuer -VaultName "Contoso-Vaultname" -Name "TestIssuer01" -IssuerProvider DigiCert -AccountId $accountId -ApiKey $secureApiKey -OrganizationDetails $org -PassThru
-```
+ ```azurepowershell-interactive
+ $Policy = New-AzKeyVaultCertificatePolicy -SecretContentType "application/x-pkcs12" -SubjectName "CN=contoso.com" -IssuerName "TestIssuer01" -ValidityInMonths 12 -RenewAtNumberOfDaysBeforeExpiry 60
+ Add-AzKeyVaultCertificate -VaultName "Contoso-Vaultname" -Name "ExampleCertificate" -CertificatePolicy $Policy
+ ```
-5. **Setting Policy for the certificate and issuing certificate** from DigiCert directly inside Key Vault.
-
-```azurepowershell-interactive
-$Policy = New-AzKeyVaultCertificatePolicy -SecretContentType "application/x-pkcs12" -SubjectName "CN=contoso.com" -IssuerName "TestIssuer01" -ValidityInMonths 12 -RenewAtNumberOfDaysBeforeExpiry 60
-Add-AzKeyVaultCertificate -VaultName "Contoso-Vaultname" -Name "ExampleCertificate" -CertificatePolicy $Policy
-```
-
-Certificate has now been successfully issued by Digicert CA inside specified Key Vault through this integration.
+The certificate is now issued by the DigiCert certificate authority in the specified key vault.
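To confirm that issuance succeeded, you can retrieve the certificate from the vault. Here's a minimal sketch that reuses the placeholder vault and certificate names from the steps above.

```powershell
# Minimal sketch: retrieve the issued certificate to confirm it exists.
# Vault and certificate names are the placeholders used in the steps above.
Get-AzKeyVaultCertificate -VaultName "Contoso-Vaultname" -Name "ExampleCertificate"

# To watch a pending issuance, check the certificate operation instead.
Get-AzKeyVaultCertificateOperation -VaultName "Contoso-Vaultname" -Name "ExampleCertificate"
```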
## Troubleshoot
-If the certificate issued is in 'disabled' status in the Azure portal, proceed to view the **Certificate Operation** to review the DigiCert error message for that certificate.
+If the certificate issued is in disabled status in the Azure portal, view the certificate operation to review the DigiCert error message for the certificate:
- ![Certificate operation](../media/certificates/how-to-integrate-certificate-authority/certificate-operation-select.png)
-Error message 'Please perform a merge to complete this certificate request.'
- You would need to merge the CSR signed by the CA to complete this request. Learn more [here](./create-certificate-signing-request.md)
+Error message: "Please perform a merge to complete this certificate request."
+
+Merge the CSR signed by the certificate authority to complete the request. For information about merging a CSR, see [Create and merge a CSR](./create-certificate-signing-request.md).
-For more information, see the [Certificate operations in the Key Vault REST API reference](/rest/api/keyvault). For information on establishing permissions, see [Vaults - Create or Update](/rest/api/keyvault/vaults/createorupdate) and [Vaults - Update Access Policy](/rest/api/keyvault/vaults/updateaccesspolicy).
+For more information, see [Certificate operations in the Key Vault REST API reference](/rest/api/keyvault). For information on establishing permissions, see [Vaults - Create or update](/rest/api/keyvault/vaults/createorupdate) and [Vaults - Update access policy](/rest/api/keyvault/vaults/updateaccesspolicy).
## Frequently asked questions

-- Can I generate a digicert wildcard certificate through KeyVault?
- Yes. It would depend upon how you have configured your digicert account.
-- How can I create **OV-SSL or EV-SSL** certificate with DigiCert?
- Key vault supports creating OV and EV SSL certificates. When creating a certificate, click on Advanced Policy Configuration, then specify the Certificate type. Values supported are : OV-SSL, EV-SSL
+- **Can I generate a DigiCert wildcard certificate by using Key Vault?**
+
+ Yes, though it depends on how you configured your DigiCert account.
+- **How can I create an OV SSL or EV SSL certificate with DigiCert?**
+
+ Key Vault supports the creation of OV and EV SSL certificates. When you create a certificate, select **Advanced Policy Configuration** and then specify the certificate type. Supported values: OV SSL, EV SSL
- You would be able to create this type of certificate in key vault if your Digicert account allows. For this type of certificate, the validation is performed by DigiCert and their support team would be able to best help you with the solution, if validation fails. You can add additional information when creating a certificate by defining them in subjectName.
+ You can create this type of certificate in Key Vault if your DigiCert account allows it. For this type of certificate, validation is performed by DigiCert. If validation fails, the DigiCert support team can help. You can add information when you create a certificate by defining the information in `subjectName`.
-Example
- ```SubjectName="CN = docs.microsoft.com, OU = Microsoft Corporation, O = Microsoft Corporation, L = Redmond, S = WA, C = US"
- ```
+ For example,
+ `SubjectName="CN = docs.microsoft.com, OU = Microsoft Corporation, O = Microsoft Corporation, L = Redmond, S = WA, C = US"`.
+
+- **Does it take longer to create a DigiCert certificate via integration than it does to acquire it directly from DigiCert?**
-- Is there a time delay in creating digicert certificate through integration vs acquiring certificate through digicert directly?
- No. When creating a certificate, it is the process of verification which may take time and that verification is dependent on process DigiCert follows.
+ No. When you create a certificate, the verification process might take time. DigiCert controls that process.
## Next steps
key-vault Quick Create Node https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/certificates/quick-create-node.md
const retrievedCertificate = await client.getCertificate(certificateName);
### Delete a certificate
-Finally, let's delete and purge the certificate from your key vault with the [beginDeleteCertificate]https://docs.microsoft.com/javascript/api/@azure/keyvault-certificates/certificateclient?#beginDeleteCertificate_string__BeginDeleteCertificateOptions_) and [purgeDeletedCertificate](https://docs.microsoft.com/javascript/api/@azure/keyvault-certificates/certificateclient?#purgeDeletedCertificate_string__PurgeDeletedCertificateOptions_) methods.
+Finally, let's delete and purge the certificate from your key vault with the [beginDeleteCertificate](https://docs.microsoft.com/javascript/api/@azure/keyvault-certificates/certificateclient?#beginDeleteCertificate_string__BeginDeleteCertificateOptions_) and [purgeDeletedCertificate](https://docs.microsoft.com/javascript/api/@azure/keyvault-certificates/certificateclient?#purgeDeletedCertificate_string__PurgeDeletedCertificateOptions_) methods.
```javascript
const deletePoller = await client.beginDeleteCertificate(certificateName);
main().then(() => console.log('Done')).catch((ex) => console.log(ex.message));
Execute the following commands to run the app.
-```azurecli
+```cmd
npm install
node index.js
```
key-vault How To Azure Key Vault Network Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/general/how-to-azure-key-vault-network-security.md
+
+ Title: How to configure Azure Key Vault networking settings
+description: Step-by-step instructions to configure Key Vault firewalls and virtual networks
+++++ Last updated : 5/11/2021+++
+# Configure Azure Key Vault networking settings
+
+This article provides guidance on how to configure Azure Key Vault networking settings to work with other applications and Azure services. To learn about the different network security configurations in detail, see [Key Vault network security](network-security.md).
+
+Here are step-by-step instructions for configuring Key Vault firewalls and virtual networks by using the Azure portal, the Azure CLI, and Azure PowerShell.
+
+# [Portal](#tab/azure-portal)
++
+1. Browse to the key vault you want to secure.
+2. Select **Networking**, and then select the **Firewalls and virtual networks** tab.
+3. Under **Allow access from**, select **Selected networks**.
+4. To add existing virtual networks to firewalls and virtual network rules, select **+ Add existing virtual networks**.
+5. In the new blade that opens, select the subscription, virtual networks, and subnets that you want to allow access to this key vault. If the virtual networks and subnets you select don't have service endpoints enabled, confirm that you want to enable service endpoints, and select **Enable**. It might take up to 15 minutes to take effect.
+6. Under **IP Networks**, add IPv4 address ranges by entering them in [CIDR (Classless Inter-Domain Routing) notation](https://tools.ietf.org/html/rfc4632) or as individual IP addresses.
+7. If you want to allow Microsoft trusted services to bypass the Key Vault firewall, select **Yes**. For a full list of the current Key Vault trusted services, see [Azure Key Vault trusted services](./overview-vnet-service-endpoints.md#trusted-services).
+8. Select **Save**.
+
+You can also add new virtual networks and subnets, and then enable service endpoints for the newly created virtual networks and subnets, by selecting **+ Add new virtual network**. Then follow the prompts.
+
+# [Azure CLI](#tab/azure-cli)
+
+Here's how to configure Key Vault firewalls and virtual networks by using the Azure CLI
+
+1. [Install Azure CLI](/cli/azure/install-azure-cli) and [sign in](/cli/azure/authenticate-azure-cli).
+
+2. List available virtual network rules. If you haven't set any rules for this key vault, the list will be empty.
+ ```azurecli
+ az keyvault network-rule list --resource-group myresourcegroup --name mykeyvault
+ ```
+
+3. Enable a service endpoint for Key Vault on an existing virtual network and subnet.
+ ```azurecli
+ az network vnet subnet update --resource-group "myresourcegroup" --vnet-name "myvnet" --name "mysubnet" --service-endpoints "Microsoft.KeyVault"
+ ```
+
+4. Add a network rule for a virtual network and subnet.
+ ```azurecli
+ subnetid=$(az network vnet subnet show --resource-group "myresourcegroup" --vnet-name "myvnet" --name "mysubnet" --query id --output tsv)
+ az keyvault network-rule add --resource-group "myresourcegroup" --name "mykeyvault" --subnet $subnetid
+ ```
+
+5. Add an IP address range from which to allow traffic.
+ ```azurecli
+ az keyvault network-rule add --resource-group "myresourcegroup" --name "mykeyvault" --ip-address "191.10.18.0/24"
+ ```
+
+6. If this key vault should be accessible by any trusted services, set `bypass` to `AzureServices`.
+ ```azurecli
+ az keyvault update --resource-group "myresourcegroup" --name "mykeyvault" --bypass AzureServices
+ ```
+
+7. Turn the network rules on by setting the default action to `Deny`.
+ ```azurecli
+ az keyvault update --resource-group "myresourcegroup" --name "mykeyvault" --default-action Deny
+ ```
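+
+As a quick check, you can confirm the resulting configuration by querying the vault's network ACLs. This is a sketch that reuses the placeholder resource names from the preceding steps:
+
+```azurecli
+az keyvault show --resource-group "myresourcegroup" --name "mykeyvault" --query properties.networkAcls
+```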
+
+# [PowerShell](#tab/azure-powershell)
++
+Here's how to configure Key Vault firewalls and virtual networks by using PowerShell:
+
+1. Install the latest [Azure PowerShell](/powershell/azure/install-az-ps), and [sign in](/powershell/azure/authenticate-azureps).
+
+2. List available virtual network rules. If you have not set any rules for this key vault, the list will be empty.
+ ```powershell
+ (Get-AzKeyVault -VaultName "mykeyvault").NetworkAcls
+ ```
+
+3. Enable a service endpoint for Key Vault on an existing virtual network and subnet.
+ ```powershell
+ Get-AzVirtualNetwork -ResourceGroupName "myresourcegroup" -Name "myvnet" | Set-AzVirtualNetworkSubnetConfig -Name "mysubnet" -AddressPrefix "10.1.1.0/24" -ServiceEndpoint "Microsoft.KeyVault" | Set-AzVirtualNetwork
+ ```
+
+4. Add a network rule for a virtual network and subnet.
+ ```powershell
+ $subnet = Get-AzVirtualNetwork -ResourceGroupName "myresourcegroup" -Name "myvnet" | Get-AzVirtualNetworkSubnetConfig -Name "mysubnet"
+ Add-AzKeyVaultNetworkRule -VaultName "mykeyvault" -VirtualNetworkResourceId $subnet.Id
+ ```
+
+5. Add an IP address range from which to allow traffic.
+ ```powershell
+ Add-AzKeyVaultNetworkRule -VaultName "mykeyvault" -IpAddressRange "16.17.18.0/24"
+ ```
+
+6. If this key vault should be accessible by any trusted services, set `bypass` to `AzureServices`.
+ ```powershell
+ Update-AzKeyVaultNetworkRuleSet -VaultName "mykeyvault" -Bypass AzureServices
+ ```
+
+7. Turn the network rules on by setting the default action to `Deny`.
+ ```powershell
+ Update-AzKeyVaultNetworkRuleSet -VaultName "mykeyvault" -DefaultAction Deny
+ ```
+
+## References
+* ARM Template Reference: [Azure Key Vault ARM Template Reference](/azure/templates/Microsoft.KeyVault/vaults)
+* Azure CLI commands: [az keyvault network-rule](/cli/azure/keyvault/network-rule)
+* Azure PowerShell cmdlets: [Get-AzKeyVault](/powershell/module/az.keyvault/get-azkeyvault), [Add-AzKeyVaultNetworkRule](/powershell/module/az.KeyVault/Add-azKeyVaultNetworkRule), [Remove-AzKeyVaultNetworkRule](/powershell/module/az.KeyVault/Remove-azKeyVaultNetworkRule), [Update-AzKeyVaultNetworkRuleSet](/powershell/module/az.KeyVault/Update-azKeyVaultNetworkRuleSet)
+
+## Next steps
+
+* [Virtual network service endpoints for Key Vault](overview-vnet-service-endpoints.md)
+* [Azure Key Vault security overview](security-features.md)
key-vault Network Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/general/network-security.md
Title: Configure Azure Key Vault firewalls and virtual networks - Azure Key Vault
-description: Step-by-step instructions to configure Key Vault firewalls and virtual networks
+description: Learn about key vault networking settings
# Configure Azure Key Vault firewalls and virtual networks
-This article will provide you with guidance on how to configure the Azure Key Vault firewall. This document will cover the different configurations for the Key Vault firewall in detail, and provide step-by-step instructions on how to configure Azure Key Vault to work with other applications and Azure services.
+This document covers the different configurations for the Key Vault firewall in detail. For step-by-step instructions on how to configure these settings, see [Configure Azure Key Vault networking settings](how-to-azure-key-vault-network-security.md).
For more information, see [Virtual network service endpoints for Azure Key Vault](overview-vnet-service-endpoints.md).
By default, when you create a new key vault, the Azure Key Vault firewall is dis
When you enable the Key Vault Firewall, you will be given an option to 'Allow Trusted Microsoft Services to bypass this firewall.' The trusted services list does not cover every single Azure service. For example, Azure DevOps is not on the trusted services list. **This does not imply that services that do not appear on the trusted services list are not trusted or insecure.** The trusted services list encompasses services where Microsoft controls all of the code that runs on the service. Because users can write custom code in Azure services such as Azure DevOps, Microsoft does not provide the option to create a blanket approval for the service. Furthermore, just because a service appears on the trusted services list doesn't mean it is allowed for all scenarios. To determine whether a service you are trying to use is on the trusted services list, see [Azure Key Vault trusted services](./overview-vnet-service-endpoints.md#trusted-services).
-For how-to guide, follow the instructions here for [Portal, Azure CLI and Powershell](#use-the-azure-portal)
+For the how-to guide covering the Azure portal, Azure CLI, and PowerShell, see [Configure Azure Key Vault networking settings](how-to-azure-key-vault-network-security.md).
### Key Vault Firewall Enabled (IPv4 Addresses and Ranges - Static IPs)
To understand how to configure a private link connection on your key vault, plea
> * IP network rules are only allowed for public IP addresses. IP address ranges reserved for private networks (as defined in RFC 1918) are not allowed in IP rules. Private networks include addresses that start with **10.**, **172.16-31**, and **192.168.**. > * Only IPv4 addresses are supported at this time.
-## Use the Azure portal
-
-Here's how to configure Key Vault firewalls and virtual networks by using the Azure portal:
-
-1. Browse to the key vault you want to secure.
-2. Select **Networking**, and then select the **Firewalls and virtual networks** tab.
-3. Under **Allow access from**, select **Selected networks**.
-4. To add existing virtual networks to firewalls and virtual network rules, select **+ Add existing virtual networks**.
-5. In the new blade that opens, select the subscription, virtual networks, and subnets that you want to allow access to this key vault. If the virtual networks and subnets you select don't have service endpoints enabled, confirm that you want to enable service endpoints, and select **Enable**. It might take up to 15 minutes to take effect.
-6. Under **IP Networks**, add IPv4 address ranges by typing IPv4 address ranges in [CIDR (Classless Inter-domain Routing) notation](https://tools.ietf.org/html/rfc4632) or individual IP addresses.
-7. If you want to allow Microsoft Trusted Services to bypass the Key Vault Firewall, select 'Yes'. For a full list of the current Key Vault Trusted Services please see the following link. [Azure Key Vault Trusted Services](./overview-vnet-service-endpoints.md#trusted-services)
-7. Select **Save**.
-
-You can also add new virtual networks and subnets, and then enable service endpoints for the newly created virtual networks and subnets, by selecting **+ Add new virtual network**. Then follow the prompts.
-
-## Use the Azure CLI
-
-Here's how to configure Key Vault firewalls and virtual networks by using the Azure CLI
-
-1. [Install Azure CLI](/cli/azure/install-azure-cli) and [sign in](/cli/azure/authenticate-azure-cli).
-
-2. List available virtual network rules. If you haven't set any rules for this key vault, the list will be empty.
- ```azurecli
- az keyvault network-rule list --resource-group myresourcegroup --name mykeyvault
- ```
-
-3. Enable a service endpoint for Key Vault on an existing virtual network and subnet.
- ```azurecli
- az network vnet subnet update --resource-group "myresourcegroup" --vnet-name "myvnet" --name "mysubnet" --service-endpoints "Microsoft.KeyVault"
- ```
-
-4. Add a network rule for a virtual network and subnet.
- ```azurecli
- subnetid=$(az network vnet subnet show --resource-group "myresourcegroup" --vnet-name "myvnet" --name "mysubnet" --query id --output tsv)
- az keyvault network-rule add --resource-group "demo9311" --name "demo9311premium" --subnet $subnetid
- ```
-
-5. Add an IP address range from which to allow traffic.
- ```azurecli
- az keyvault network-rule add --resource-group "myresourcegroup" --name "mykeyvault" --ip-address "191.10.18.0/24"
- ```
-
-6. If this key vault should be accessible by any trusted services, set `bypass` to `AzureServices`.
- ```azurecli
- az keyvault update --resource-group "myresourcegroup" --name "mykeyvault" --bypass AzureServices
- ```
-
-7. Turn the network rules on by setting the default action to `Deny`.
- ```azurecli
- az keyvault update --resource-group "myresourcegroup" --name "mekeyvault" --default-action Deny
- ```
-
-## Use Azure PowerShell
--
-Here's how to configure Key Vault firewalls and virtual networks by using PowerShell:
-
-1. Install the latest [Azure PowerShell](/powershell/azure/install-az-ps), and [sign in](/powershell/azure/authenticate-azureps).
-
-2. List available virtual network rules. If you have not set any rules for this key vault, the list will be empty.
- ```powershell
- (Get-AzKeyVault -VaultName "mykeyvault").NetworkAcls
- ```
-
-3. Enable service endpoint for Key Vault on an existing virtual network and subnet.
- ```powershell
- Get-AzVirtualNetwork -ResourceGroupName "myresourcegroup" -Name "myvnet" | Set-AzVirtualNetworkSubnetConfig -Name "mysubnet" -AddressPrefix "10.1.1.0/24" -ServiceEndpoint "Microsoft.KeyVault" | Set-AzVirtualNetwork
- ```
-
-4. Add a network rule for a virtual network and subnet.
- ```powershell
- $subnet = Get-AzVirtualNetwork -ResourceGroupName "myresourcegroup" -Name "myvnet" | Get-AzVirtualNetworkSubnetConfig -Name "mysubnet"
- Add-AzKeyVaultNetworkRule -VaultName "mykeyvault" -VirtualNetworkResourceId $subnet.Id
- ```
-
-5. Add an IP address range from which to allow traffic.
- ```powershell
- Add-AzKeyVaultNetworkRule -VaultName "mykeyvault" -IpAddressRange "16.17.18.0/24"
- ```
-
-6. If this key vault should be accessible by any trusted services, set `bypass` to `AzureServices`.
- ```powershell
- Update-AzKeyVaultNetworkRuleSet -VaultName "mykeyvault" -Bypass AzureServices
- ```
-
-7. Turn the network rules on by setting the default action to `Deny`.
- ```powershell
- Update-AzKeyVaultNetworkRuleSet -VaultName "mykeyvault" -DefaultAction Deny
- ```
## References
* ARM Template Reference: [Azure Key Vault ARM Template Reference](/azure/templates/Microsoft.KeyVault/vaults)
* Azure CLI commands: [az keyvault network-rule](/cli/azure/keyvault/network-rule)
key-vault Security Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/general/security-features.md
When you create a key vault in an Azure subscription, it's automatically associa
- **Application-only**: The application represents a service principal or managed identity. This identity is the most common scenario for applications that periodically need to access certificates, keys, or secrets from the key vault. For this scenario to work, the `objectId` of the application must be specified in the access policy and the `applicationId` must _not_ be specified or must be `null`.
- **User-only**: The user accesses the key vault from any application registered in the tenant. Examples of this type of access include Azure PowerShell and the Azure portal. For this scenario to work, the `objectId` of the user must be specified in the access policy and the `applicationId` must _not_ be specified or must be `null`.
-- **Application-plus-user** (sometimes referred as _compound identity_): The user is required to access the key vault from a specific application _and_ the application must use the on-behalf-of authentication (OBO) flow to impersonate the user. For this scenario to work, both `applicationId` and `objectId` must be specified in the access policy. The `applicationId` identifies the required application and the `objectId` identifies the user. Currently, this option isn't available for data plane Azure RBAC (preview).
+- **Application-plus-user** (sometimes referred to as _compound identity_): The user is required to access the key vault from a specific application _and_ the application must use the on-behalf-of authentication (OBO) flow to impersonate the user. For this scenario to work, both `applicationId` and `objectId` must be specified in the access policy. The `applicationId` identifies the required application and the `objectId` identifies the user. Currently, this option isn't available for data plane Azure RBAC. For an example, see the CLI sketch after this list.
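+
+For example, here's a sketch of granting a compound-identity access policy with the Azure CLI; the vault name, object ID, and application ID are placeholders (`az keyvault set-policy` exposes `--application-id` for the on-behalf-of flow):
+
+```azurecli
+az keyvault set-policy --name "mykeyvault" \
+  --object-id "<user-objectId>" \
+  --application-id "<client-applicationId>" \
+  --secret-permissions get list
+```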
In all types of access, the application authenticates with Azure AD. The application uses any [supported authentication method](../../active-directory/develop/authentication-vs-authorization.md) based on the application type. The application acquires a token for a resource in the plane to grant access. The resource is an endpoint in the management or data plane, based on the Azure environment. The application uses the token and sends a REST API request to Key Vault. To learn more, review the [whole authentication flow](../../active-directory/develop/v2-oauth2-auth-code-flow.md).
You should also take regular back ups of your vault on update/delete/create of o
- [Azure Key Vault security baseline](security-baseline.md)
- [Azure Key Vault best practices](security-baseline.md)
- [Virtual network service endpoints for Azure Key Vault](overview-vnet-service-endpoints.md)
- [Azure RBAC: Built-in roles](../../role-based-access-control/built-in-roles.md)
lab-services Administrator Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lab-services/administrator-guide.md
The region specifies the datacenter where information about a resource group is
A lab account's location indicates the region that a resource exists in.
-### Lab
+### Lab
The location that a lab exists in varies, depending on the following factors:
A general rule is to set a resource's region to one that's closest to its users.
## VM sizing
-When administrators or Lab Creators create a lab, they can choose from a variety of VM sizes, depending on the needs of their classroom. Remember that the compute size availability depends on the region that your lab account is located in.
+When administrators or Lab Creators create a lab, they can choose from a variety of VM sizes, depending on the needs of their classroom. Remember that the size availability depends on the region that your lab account is located in.
-| Size | Specs | Series | Suggested use |
+In the following table, notice that several of the VM sizes map to more than one VM series. Depending on capacity availability, Lab Services may use any of the VM series that are listed for a VM size. For example, the *Small* VM size maps to using either the [Standard_A2_v2](../virtual-machines/av2-series.md) or the [Standard_A2](../virtual-machines/sizes-previous-gen.md#a-series) VM series. When you choose *Small* as the VM size for your lab, Lab Services will first attempt to use the *Standard_A2_v2* series. However, when there isn't sufficient capacity available, Lab Services will instead use the *Standard_A2* series. The pricing is determined by the VM size and is the same regardless of which VM series Lab Services uses for that specific size. For more information on pricing for each VM size, read the [Lab Services pricing guide](https://azure.microsoft.com/pricing/details/lab-services/).
++
+| Size | Minimum Specs | Series | Suggested use |
| - | -- | - | - |
-| Small| <ul><li>2&nbsp;cores</li><li>3.5 gigabytes (GB) RAM</li> | [Standard_A2_v2](../virtual-machines/av2-series.md?bc=%2fazure%2fvirtual-machines%2flinux%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json) | Best suited for command line, opening web browser, low-traffic web servers, small to medium databases. |
-| Medium | <ul><li>4&nbsp;cores</li><li>7&nbsp;GB&nbsp;RAM</li> | [Standard_A4_v2](../virtual-machines/av2-series.md?bc=%2fazure%2fvirtual-machines%2flinux%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json) | Best suited for relational databases, in-memory caching, and analytics. |
-| Medium (nested virtualization) | <ul><li>4&nbsp;cores</li><li>16&nbsp;GB&nbsp;RAM</li></ul> | [Standard_D4s_v3](../virtual-machines/dv3-dsv3-series.md?bc=%2fazure%2fvirtual-machines%2flinux%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json#dsv3-series) | Best suited for relational databases, in-memory caching, and analytics.
-| Large | <ul><li>8&nbsp;cores</li><li>16&nbsp;GB&nbsp;RAM</li></ul> | [Standard_A8_v2](../virtual-machines/av2-series.md) | Best suited for applications that need faster CPUs, better local disk performance, large databases, large memory caches. This size also supports nested virtualization. |
-| Large (nested virtualization) | <ul><li>8&nbsp;cores</li><li>32&nbsp;GB&nbsp;RAM</li></ul> | [Standard_D8s_v3](../virtual-machines/dv3-dsv3-series.md?bc=%2fazure%2fvirtual-machines%2flinux%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json#dsv3-series) | Best suited for applications that need faster CPUs, better local disk performance, large databases, large memory caches. |
+| Small| <ul><li>2&nbsp;cores</li><li>3.5 gigabytes (GB) RAM</li> | [Standard_A2_v2](../virtual-machines/av2-series.md), [Standard_A2](../virtual-machines/sizes-previous-gen.md#a-series) | Best suited for command line, opening web browser, low-traffic web servers, small to medium databases. |
+| Medium | <ul><li>4&nbsp;cores</li><li>7&nbsp;GB&nbsp;RAM</li> | [Standard_A4_v2](../virtual-machines/av2-series.md), [Standard_A3](../virtual-machines/sizes-previous-gen.md#a-series) | Best suited for relational databases, in-memory caching, and analytics. |
+| Medium (nested virtualization) | <ul><li>4&nbsp;cores</li><li>16&nbsp;GB&nbsp;RAM</li></ul> | [Standard_D4s_v3](../virtual-machines/dv3-dsv3-series.md#dsv3-series) | Best suited for relational databases, in-memory caching, and analytics. This size also supports nested virtualization.
+| Large | <ul><li>8&nbsp;cores</li><li>16&nbsp;GB&nbsp;RAM</li></ul> | [Standard_A8_v2](../virtual-machines/av2-series.md), [Standard_A7](../virtual-machines/sizes-previous-gen.md#a-series) | Best suited for applications that need faster CPUs, better local disk performance, large databases, large memory caches. |
+| Large (nested virtualization) | <ul><li>8&nbsp;cores</li><li>32&nbsp;GB&nbsp;RAM</li></ul> | [Standard_D8s_v3](../virtual-machines/dv3-dsv3-series.md#dsv3-series) | Best suited for applications that need faster CPUs, better local disk performance, large databases, large memory caches. This size also supports nested virtualization. |
| Small GPU (visualization) | <ul><li>6&nbsp;cores</li><li>56&nbsp;GB&nbsp;RAM</li> | [Standard_NV6](../virtual-machines/nv-series.md) | Best suited for remote visualization, streaming, gaming, and encoding using frameworks such as OpenGL and DirectX. |
-| Small GPU (Compute) | <ul><li>6&nbsp;cores</li><li>56&nbsp;GB&nbsp;RAM</li></ul> | [Standard_NC6](../virtual-machines/nc-series.md) |Best suited for computer-intensive applications such as AI and deep learning. |
-| Medium GPU (visualization) | <ul><li>12&nbsp;cores</li><li>112&nbsp;GB&nbsp;RAM</li></ul> | [Standard_NV12](../virtual-machines/nv-series.md?bc=%2fazure%2fvirtual-machines%2flinux%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json) | Best suited for remote visualization, streaming, gaming, and encoding using frameworks such as OpenGL and DirectX. |
+| Small GPU (Compute) | <ul><li>6&nbsp;cores</li><li>56&nbsp;GB&nbsp;RAM</li></ul> | [Standard_NC6](../virtual-machines/nc-series.md), [Standard_NC6s_v3](../virtual-machines/ncv3-series.md) | Best suited for compute-intensive applications such as AI and deep learning. |
+| Medium GPU (visualization) | <ul><li>12&nbsp;cores</li><li>112&nbsp;GB&nbsp;RAM</li></ul> | [Standard_NV12](../virtual-machines/nv-series.md), [Standard_NV12s_v3](../virtual-machines/nvv3-series.md), [Standard_NV12s_v2](../virtual-machines/sizes-previous-gen.md#nvv2-series) | Best suited for remote visualization, streaming, gaming, and encoding using frameworks such as OpenGL and DirectX. |
## Manage identity
load-balancer Load Balancer Outbound Connections https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/load-balancer-outbound-connections.md
For more information about Azure Virtual Network NAT, see [What is Azure Virtual
| Associations | Method | IP protocols |
| - | - | - |
- | Public load balancer or stand-alone | [SNAT (Source Network Address Translation)](#snat) </br> is not used. | TCP (Transmission Control Protocol) </br> UDP (User Datagram Protocol) </br> ICMP (Internet Control Message Protocol) </br> ESP (Encapsulating Security Payload) |
+ | Public IP on VM's NIC | [SNAT (Source Network Address Translation)](#snat) </br> is not used. | TCP (Transmission Control Protocol) </br> UDP (User Datagram Protocol) </br> ICMP (Internet Control Message Protocol) </br> ESP (Encapsulating Security Payload) |
All traffic will return to the requesting client from the virtual machine's public IP address (Instance Level IP).
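For example, here's a sketch of assigning an instance-level public IP to a VM's network interface by using the Azure CLI; all resource names are placeholders:

```azurecli
# Create a standard-SKU public IP, then associate it with the NIC's IP configuration
az network public-ip create --resource-group "myresourcegroup" --name "myPublicIP" --sku Standard
az network nic ip-config update --resource-group "myresourcegroup" \
  --nic-name "myVMNic" --name "ipconfig1" --public-ip-address "myPublicIP"
```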
logic-apps Create Stateful Stateless Workflows Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/create-stateful-stateless-workflows-azure-portal.md
Title: Create Logic Apps Preview workflows in the Azure portal
-description: Build and run workflows for automation and integration scenarios with Azure Logic Apps Preview in the Azure portal.
+ Title: Create workflows using single-tenant Azure Logic Apps (portal)
+description: Create automated workflows that integrate apps, data, services, and systems using single-tenant Azure Logic Apps and the Azure portal.
ms.suite: integration-+ Previously updated : 04/23/2021 Last updated : 05/10/2021
-# Create stateful and stateless workflows in the Azure portal with Azure Logic Apps Preview
+# Create an integration workflow using single-tenant Azure Logic Apps and the Azure portal (preview)
> [!IMPORTANT]
-> This capability is in public preview, is provided without a service level agreement, and is not recommended for production workloads.
-> Certain features might not be supported or might have constrained capabilities. For more information, see
-> [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+> This capability is in preview and is subject to the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-With [Azure Logic Apps Preview](logic-apps-overview-preview.md), you can build automation and integration solutions across apps, data, cloud services, and systems by creating and running logic apps that include [*stateful* and *stateless* workflows](logic-apps-overview-preview.md#stateful-stateless) in the Azure portal by starting with the new **Logic App (Preview)** resource type. With this new logic app type, you can build multiple workflows that are powered by the redesigned Azure Logic Apps Preview runtime, which provides portability, better performance, and flexibility for deploying and running in various hosting environments, not only Azure, but also Docker containers. To learn more about the new logic app type, see [Overview for Azure Logic Apps Preview](logic-apps-overview-preview.md).
+This article shows how to create an example automated integration workflow that runs in the *single-tenant Logic Apps environment* by using the new **Logic App (Preview)** resource type. While this example workflow is cloud-based and has only two steps, you can create workflows from hundreds of operations that can connect a wide range of apps, data, services, and systems across cloud, on premises, and hybrid environments. If you're new to single-tenant Logic Apps and the **Logic App (Preview)** resource type, review [Single-tenant versus multi-tenant and integration service environment](logic-apps-overview-preview.md).
-![Screenshot that shows the Azure portal with the workflow designer for the "Logic App (Preview)" resource.](./media/create-stateful-stateless-workflows-azure-portal/azure-portal-logic-apps-overview.png)
-
-In the Azure portal, you can start by creating a new **Logic App (Preview)** resource. While you can also start by [creating a project in Visual Studio Code with the Azure Logic Apps (Preview) extension](create-stateful-stateless-workflows-visual-studio-code.md), both approaches provide the capability for you to deploy and run your logic app in the same kinds of hosting environments.
+The example workflow starts with the built-in Request trigger and follows with an Office 365 Outlook action. The trigger creates a callable endpoint for the workflow and waits for an inbound HTTPS request from any caller. When the trigger receives a request and fires, the next action runs by sending email to the specified email address along with selected outputs from the trigger.
-Meanwhile, you can still create the original logic app type. Although the development experiences in the portal differ between the original and new logic app types, your Azure subscription can include both types. You can view and access all the deployed logic apps in your Azure subscription, but the apps are organized into their own categories and sections.
+> [!TIP]
+> If you don't have an Office 365 account, you can use any other available action that can send
+> messages from your email account, for example, Outlook.com.
+>
+> To create this example workflow using Visual Studio Code instead, follow the steps in
+> [Create integration workflows using single tenant Azure Logic Apps and Visual Studio Code](create-stateful-stateless-workflows-visual-studio-code.md). Both options provide the capability
+> to develop, run, and deploy logic app workflows in the same kinds of environments. However, with
+> Visual Studio Code, you can *locally* develop, test, and run workflows in your development environment.
-This article shows how to build your logic app and workflow in the Azure portal by using the **Logic App (Preview)** resource type and performing these high-level tasks:
+![Screenshot that shows the Azure portal with the workflow designer for the "Logic App (Preview)" resource.](./media/create-stateful-stateless-workflows-azure-portal/azure-portal-logic-apps-overview.png)
-* Create the new logic app resource and add a blank workflow.
+As you progress, you'll complete these high-level tasks:
+* Create the logic app resource and add a blank [*stateful*](logic-apps-overview-preview.md#stateful-stateless) workflow.
* Add a trigger and action.
* Trigger a workflow run.
* View the workflow's run and trigger history.
* Enable or open Application Insights after deployment.
* Enable run history for stateless workflows.
-> [!NOTE]
-> For information about current known issues, review the [Logic Apps Public Preview Known Issues page in GitHub](https://github.com/Azure/logicapps/blob/master/articles/logic-apps-public-preview-known-issues.md).
+For more information, review the following documentation:
+
+* [What is Azure Logic Apps?](logic-apps-overview.md)
+* [What is the single-tenant Logic Apps environment?](logic-apps-overview-preview.md)
## Prerequisites

* An Azure account and subscription. If you don't have a subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* An [Azure Storage account](../storage/common/storage-account-overview.md) because the **Logic App (Preview)** resource is powered by Azure Functions and has [storage requirements that are similar to function apps](../azure-functions/storage-considerations.md). You can use an existing storage account, or you can create a storage account in advance or during logic app creation.
+* An [Azure Storage account](../storage/common/storage-account-overview.md). If you don't have one, you can either create a storage account in advance or during logic app creation.
> [!NOTE]
- > [Stateful logic apps](logic-apps-overview-preview.md#stateful-stateless) perform storage transactions, such as
+ > The **Logic App (Preview)** resource type is powered by Azure Functions and has [storage requirements similar to function apps](../azure-functions/storage-considerations.md).
+ > [Stateful workflows](logic-apps-overview-preview.md#stateful-stateless) perform storage transactions, such as
> using queues for scheduling and storing workflow states in tables and blobs. These transactions incur
- > [Azure Storage charges](https://azure.microsoft.com/pricing/details/storage/). For more information about
- > how stateful logic apps store data in external storage, see [Stateful versus stateless](logic-apps-overview-preview.md#stateful-stateless).
+ > [storage charges](https://azure.microsoft.com/pricing/details/storage/). For more information about
+ > how stateful workflows store data in external storage, review [Stateful and stateless workflows](logic-apps-overview-preview.md#stateful-stateless).
+
+* To deploy to a Docker container, you need an existing Docker container image.
-* To deploy to a Docker container, you need an existing Docker container image. For example, you can create this image through [Azure Container Registry](../container-registry/container-registry-intro.md), [App Service](../app-service/overview.md), or [Azure Container Instance](../container-instances/container-instances-overview.md).
+ For example, you can create this image through [Azure Container Registry](../container-registry/container-registry-intro.md), [App Service](../app-service/overview.md), or [Azure Container Instance](../container-instances/container-instances-overview.md).
-* To build the same example logic app in this article, you need an Office 365 Outlook email account that uses a Microsoft work or school account to sign in.
+* To create the same example workflow in this article, you need an Office 365 Outlook email account that uses a Microsoft work or school account to sign in.
- If you choose to use a different [email connector that's supported by Azure Logic Apps](/connectors/), such as Outlook.com or [Gmail](../connectors/connectors-google-data-security-privacy-policy.md), you can still follow the example, and the general overall steps are the same, but your user interface and options might differ in some ways. For example, if you use the Outlook.com connector, use your personal Microsoft account instead to sign in.
If you choose a [different email connector](/connectors/connector-reference/connector-reference-logicapps-connectors), such as Outlook.com, you can still follow the example, and the overall steps are the same. However, your options might differ in some ways. For example, if you use the Outlook.com connector, use your personal Microsoft account instead to sign in.
-* To test the example logic app that you create in this article, you need a tool that can send calls to the Request trigger, which is the first step in example logic app. If you don't have such a tool, you can download, install, and use [Postman](https://www.postman.com/downloads/).
+* To test the example workflow in this article, you need a tool that can send calls to the endpoint created by the Request trigger. If you don't have such a tool, you can download, install, and use [Postman](https://www.postman.com/downloads/). You can also call the endpoint from the command line, as sketched after this list.
-* If you create your logic app with settings that support using [Application Insights](../azure-monitor/app/app-insights-overview.md), you can optionally enable diagnostics logging and tracing for your logic app. You can do so either when you create your logic app or after deployment. You need to have an Application Insights instance, but you can create this resource either [in advance](../azure-monitor/app/create-workspace-resource.md), when you create your logic app, or after deployment.
+* If you create your logic app resources with settings that support using [Application Insights](../azure-monitor/app/app-insights-overview.md), you can optionally enable diagnostics logging and tracing for your logic app. You can do so either when you create your logic app or after deployment. You need to have an Application Insights instance, but you can create this resource either [in advance](../azure-monitor/app/create-workspace-resource.md), when you create your logic app, or after deployment.
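+
+For example, a minimal sketch that calls the workflow endpoint by using the Azure CLI's generic `az rest` command (the URL is a placeholder; after you save the workflow, copy the real callback URL from the Request trigger):
+
+```azurecli
+az rest --method post --skip-authorization-header \
+  --url "<callback-URL-from-Request-trigger>" --body '{}'
+```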
## Create the logic app resource
-1. Sign in to the [Azure portal](https://portal.azure.com) with your Azure account credentials.
+1. In the [Azure portal](https://portal.azure.com), sign in with your Azure account credentials.
-1. In the Azure portal search box, enter `logic app preview`, and select **Logic App (Preview)**.
+1. In the Azure portal search box, enter `logic apps`, and select **Logic apps**.
![Screenshot that shows the Azure portal search box with the "logic app preview" search term and the "Logic App (Preview)" resource selected.](./media/create-stateful-stateless-workflows-azure-portal/find-logic-app-resource-template.png)
-1. On the **Logic App (Preview)** page, select **Add**.
+1. On the **Logic apps** page, select **Add** > **Preview**.
-1. On the **Create Logic App (Preview)** page, on the **Basics** tab, provide this information about your logic app.
+ This step creates a logic app resource that runs in the single-tenant Logic Apps environment and uses the [preview (single-tenant) pricing model](logic-apps-pricing.md#preview-pricing).
+
+1. On the **Create Logic App** page, on the **Basics** tab, provide the following information about your logic app resource:
| Property | Required | Value | Description |
|-|-|-|-|
| **Subscription** | Yes | <*Azure-subscription-name*> | The Azure subscription to use for your logic app. |
- | **Resource group** | Yes | <*Azure-resource-group-name*> | The Azure resource group where you create your logic app and related resources. This resource name must be unique across regions and can contain only letters, numbers, hyphens (**-**), underscores (**_**), parentheses (**()**), and periods (**.**). <p><p>This example creates a resource group named `Fabrikam-Workflows-RG`. |
- | **Logic app name** | Yes | <*logic-app-name*> | The name to use for your logic app. This resource name must be unique across regions and can contain only letters, numbers, hyphens (**-**), underscores (**_**), parentheses (**()**), and periods (**.**). <p><p>This example creates a logic app named `Fabrikam-Workflows`. <p><p>**Note**: Your logic app's name automatically gets the suffix, `.azurewebsites.net`, because the **Logic App (Preview)** resource is powered by Azure Functions, which uses the same app naming convention. |
- | **Publish** | Yes | <*deployment-environment*> | The deployment destination for your logic app. You can deploy to Azure by selecting **Workflow** or **Docker Container**. <p><p>This example uses **Workflow**, which deploys the **Logic App (Preview)** resource to the Azure portal. <p><p>**Note**: Before you select **Docker Container**, make sure that create your Docker container image. For example, you can create this image through [Azure Container Registry](../container-registry/container-registry-intro.md), [App Service](../app-service/overview.md), or [Azure Container Instance](../container-instances/container-instances-overview.md). That way, after you select **Docker Container**, you can [specify the container that you want to use in your logic app's settings](#set-docker-container). |
+ | **Resource Group** | Yes | <*Azure-resource-group-name*> | The Azure resource group where you create your logic app and related resources. This resource name must be unique across regions and can contain only letters, numbers, hyphens (**-**), underscores (**_**), parentheses (**()**), and periods (**.**). <p><p>This example creates a resource group named `Fabrikam-Workflows-RG`. |
+ | **Logic App name** | Yes | <*logic-app-name*> | The name to use for your logic app. This resource name must be unique across regions and can contain only letters, numbers, hyphens (**-**), underscores (**_**), parentheses (**()**), and periods (**.**). <p><p>This example creates a logic app named `Fabrikam-Workflows`. <p><p>**Note**: Your logic app's name automatically gets the suffix, `.azurewebsites.net`, because the **Logic App (Preview)** resource is powered by Azure Functions, which uses the same app naming convention. |
+ | **Publish** | Yes | <*deployment-environment*> | The deployment destination for your logic app. <p><p>- **Workflow**: Deploy to single-tenant Azure Logic Apps in the portal. <p><p>- **Docker Container**: Deploy to a container. If you don't have a container, first create your Docker container image. That way, after you select **Docker Container**, you can [specify the container that you want to use when creating your logic app](#set-docker-container). For example, you can create this image through [Azure Container Registry](../container-registry/container-registry-intro.md), [App Service](../app-service/overview.md), or [Azure Container Instance](../container-instances/container-instances-overview.md). <p><p>This example continues with the **Workflow** option. |
| **Region** | Yes | <*Azure-region*> | The Azure region to use when creating your resource group and resources. <p><p>This example uses **West US**. |
|||||

Here's an example:
- ![Screenshot that shows the Azure portal and "Create Logic App (Preview)" page.](./media/create-stateful-stateless-workflows-azure-portal/create-logic-app-resource-portal.png)
+ ![Screenshot that shows the Azure portal and "Create Logic App" page.](./media/create-stateful-stateless-workflows-azure-portal/create-logic-app-resource-portal.png)
-1. Next, on the **Hosting** tab, provide this information about the storage solution and hosting plan to use for your logic app.
+1. On the **Hosting** tab, provide the following information about the storage solution and hosting plan to use for your logic app.
| Property | Required | Value | Description |
|-|-|-|-|
| **Storage account** | Yes | <*Azure-storage-account-name*> | The [Azure Storage account](../storage/common/storage-account-overview.md) to use for storage transactions. This resource name must be unique across regions and have 3-24 characters with only numbers and lowercase letters. Either select an existing account or create a new account. <p><p>This example creates a storage account named `fabrikamstorageacct`. |
- | **Plan type** | Yes | <*Azure-hosting-plan*> | The [hosting plan](../app-service/overview-hosting-plans.md) to use for deploying your logic app, which is either [**Functions Premium**](../azure-functions/functions-premium-plan.md) or [**App service plan** (Dedicated)](../azure-functions/dedicated-plan.md). Your choice affects the capabilities and pricing tiers that are later available to you. <p><p>This example uses the **App service plan**. <p><p>**Note**: Similar to Azure Functions, the **Logic App (Preview)** resource type requires a hosting plan and pricing tier. Consumption plans aren't supported nor available for this resource type. For more information, review these topics: <p><p>- [Azure Functions scale and hosting](../azure-functions/functions-scale.md) <br>- [App Service pricing details](https://azure.microsoft.com/pricing/details/app-service/) <p><p>For example, the Functions Premium plan provides access to networking capabilities, such as connect and integrate privately with Azure virtual networks, similar to Azure Functions when you create and deploy your logic apps. For more information, review these topics: <p><p>- [Azure Functions networking options](../azure-functions/functions-networking-options.md) <br>- [Azure Logic Apps Running Anywhere - Networking possibilities with Azure Logic Apps Preview](https://techcommunity.microsoft.com/t5/integrations-on-azure/logic-apps-anywhere-networking-possibilities-with-logic-app/ba-p/2105047) |
+ | **Plan type** | Yes | <*Azure-hosting-plan*> | The [hosting plan](../app-service/overview-hosting-plans.md) to use for deploying your logic app, which is either [**Functions Premium**](../azure-functions/functions-premium-plan.md) or [**App service plan**](../azure-functions/dedicated-plan.md). Your choice affects the capabilities and pricing tiers that are later available to you. <p><p>This example uses the **App service plan**. <p><p>**Note**: Similar to Azure Functions, the **Logic App (Preview)** resource type requires a hosting plan and pricing tier. Consumption plans aren't supported or available for this resource type. For more information, review the following documentation: <p><p>- [Azure Functions scale and hosting](../azure-functions/functions-scale.md) <br>- [App Service pricing details](https://azure.microsoft.com/pricing/details/app-service/) <p><p>For example, the Functions Premium plan provides access to networking capabilities, such as connect and integrate privately with Azure virtual networks, similar to Azure Functions when you create and deploy your logic apps. For more information, review the following documentation: <p><p>- [Azure Functions networking options](../azure-functions/functions-networking-options.md) <br>- [Azure Logic Apps Running Anywhere - Networking possibilities with Azure Logic Apps Preview](https://techcommunity.microsoft.com/t5/integrations-on-azure/logic-apps-anywhere-networking-possibilities-with-logic-app/ba-p/2105047) |
| **Windows Plan** | Yes | <*plan-name*> | The plan name to use. Either select an existing plan or provide the name for a new plan. <p><p>This example uses the name `Fabrikam-Service-Plan`. |
| **SKU and size** | Yes | <*pricing-tier*> | The [pricing tier](../app-service/overview-hosting-plans.md) to use for hosting your logic app. Your choices are affected by the plan type that you previously chose. To change the default tier, select **Change size**. You can then select other pricing tiers, based on the workload that you need. <p><p>This example uses the free **F1 pricing tier** for **Dev / Test** workloads. For more information, review [App Service pricing details](https://azure.microsoft.com/pricing/details/app-service/). |
|||||
This article shows how to build your logic app and workflow in the Azure portal
> For example, if your selected region reaches a quota for resources that you're trying to create, > you might have to try a different region.
- After Azure finishes deployment, your logic app is automatically live and running but doesn't do anything yet because no workflows exist.
+ After Azure finishes deployment, your logic app is automatically live and running but doesn't do anything yet because the resource is empty, and no workflows exist yet.
-1. On the deployment completion page, select **Go to resource** so that you can start building your workflow. If you selected **Docker Container** for deploying your logic app, continue with the [steps to provide information about that Docker container](#set-docker-container).
+1. On the deployment completion page, select **Go to resource** so that you can add a blank workflow. If you selected **Docker Container** for deploying your logic app, continue with the [steps to provide information about that Docker container](#set-docker-container).
![Screenshot that shows the Azure portal and the finished deployment.](./media/create-stateful-stateless-workflows-azure-portal/logic-app-completed-deployment.png)
Before you start these steps, you need a Docker container image. For example, yo
![Screenshot that shows the logic app resource menu with "Workflows" selected, and then on the toolbar, "Add" selected.](./media/create-stateful-stateless-workflows-azure-portal/logic-app-add-blank-workflow.png)
-1. After the **New workflow** pane opens, provide a name for your workflow, and choose either the [**Stateful** or **Stateless**](logic-apps-overview-preview.md#stateful-stateless) workflow type. When you're done, select **Create**.
+1. After the **New workflow** pane opens, provide a name for your workflow, and choose the state type, either [**Stateful** or **Stateless**](logic-apps-overview-preview.md#stateful-stateless). When you're done, select **Create**.
This example adds a blank stateful workflow named `Fabrikam-Stateful-Workflow`. By default, the workflow is enabled but doesn't do anything until you add a trigger and actions.
This example builds a workflow that has these steps:
* The [Office 365 Outlook action](../connectors/connectors-create-api-office365-outlook.md), **Send an email**.
-* The built-in [Response action](../connectors/connectors-native-reqres.md), which you use to send a reply and return data back to the caller.
### Add the Request trigger

Before you can add a trigger to a blank workflow, make sure that the workflow designer is open and that the **Choose an operation** prompt is selected on the designer surface.
Before you can add a trigger to a blank workflow, make sure that the workflow de
### Add the Office 365 Outlook action
-1. On the designer, under the trigger that you added, select **New step**.
+1. On the designer, under the trigger that you added, select the plus sign (**+**) > **Add an action**.
The **Choose an operation** prompt appears on the designer, and the **Add an action** pane reopens so that you can select the next action.
Before you can add a trigger to a blank workflow, make sure that the workflow de
> If the **Add an action** pane shows the error message, `Cannot read property 'filter' of undefined`,
> save your workflow, reload the page, reopen your workflow, and try again.
-1. In the **Add an action** pane, under the **Choose an operation** search box, select **Azure**. This tab shows the managed connectors that are available and deployed in Azure.
+1. In the **Add an action** pane, under the **Choose an operation** search box, select **Azure**. This tab shows the managed connectors that are available and hosted in Azure.
> [!NOTE] > If the **Add an action** pane shows the error message, `The access token expiry UTC time '{token-expiration-date-time}' is earlier than current UTC time '{current-date-time}'`,
Before you can add a trigger to a blank workflow, make sure that the workflow de
![Screenshot that shows the designer and the "Send an email (V2)" details pane with "Sign in" selected.](./media/create-stateful-stateless-workflows-azure-portal/send-email-action-sign-in.png)
-1. When you're prompted for consent to access your email account, sign in with your account credentials.
+1. When you're prompted for access to your email account, sign in with your account credentials.
> [!NOTE] > If you get the error message, `Failed with error: 'The browser is closed.'. Please sign in again`,
Before you can add a trigger to a blank workflow, make sure that the workflow de
After Azure creates the connection, the **Send an email** action appears on the designer and is selected by default. If the action isn't selected, select the action so that its details pane is also open.
-1. In the action's details pane, on the **Parameters** tab, provide the required information for the action, for example:
+1. In the action details pane, on the **Parameters** tab, provide the required information for the action, for example:
![Screenshot that shows the designer and the "Send an email" details pane with the "Parameters" tab selected.](./media/create-stateful-stateless-workflows-azure-portal/send-email-action-details.png)
Before you can add a trigger to a blank workflow, make sure that the workflow de
1. Save your work. On the designer toolbar, select **Save**.
-1. If your environment has strict network requirements or firewalls that limit traffic, you have to set up permissions for any trigger or action connections that exist in your workflow. To find the fully qualified
+1. If your environment has strict network requirements or firewalls that limit traffic, you have to set up permissions for any trigger or action connections that exist in your workflow. To find the fully qualified domain names, review [Find domain names for firewall access](#firewall-setup).
Otherwise, to test your workflow, [manually trigger a run](#trigger-workflow).
To find the fully qualified domain names (FQDNs) for these connections, follow t
1. On your logic app menu, under **Workflows**, select **Connections**. On the **API Connections** tab, select the connection's resource name, for example:
- ![Screenshot that shows the Azure portal and logic app menu with the "Connections" and "offic365" connection resource name selected.](./media/create-stateful-stateless-workflows-azure-portal/logic-app-connections.png)
+ ![Screenshot that shows the Azure portal and logic app menu with the "Connections" and "office365" connection resource name selected.](./media/create-stateful-stateless-workflows-azure-portal/logic-app-connections.png)
1. Expand your browser wide enough so that when **JSON View** appears in the browser's upper right corner, select **JSON View**.
In this example, the workflow runs when the Request trigger receives an inbound
For a stateful workflow, after each workflow run, you can view the run history, including the status for the overall run, for the trigger, and for each action along with their inputs and outputs. In the Azure portal, run history and trigger histories appear at the workflow level, not the logic app level. To review the trigger histories outside the run history context, see [Review trigger histories](#view-trigger-histories).
-1. In the Azure portal, on your workflow's menu, select **Monitor**.
+1. In the Azure portal, on the workflow menu, select **Overview**.
- The **Monitor** pane shows the run history for that workflow.
+1. On the **Overview** pane, select **Run History**, which shows the run history for that workflow.
- ![Screenshot that shows the workflow's "Monitor" pane and run history.](./media/create-stateful-stateless-workflows-azure-portal/find-run-history.png)
+ ![Screenshot that shows the workflow's "Overview" pane with "Run History" selected.](./media/create-stateful-stateless-workflows-azure-portal/find-run-history.png)
> [!TIP]
- > If the most recent run status doesn't appear, on the **Monitor** pane toolbar, select **Refresh**.
+ > If the most recent run status doesn't appear, on the **Overview** pane toolbar, select **Refresh**.
> No run happens for a trigger that's skipped due to unmet criteria or finding no data.

| Run status | Description |
For a stateful workflow, after each workflow run, you can view the run history,
Here are the possible statuses that each step in the workflow can have:
- | Action status | Icon | Description |
- |||-|
- | **Aborted** | ![Icon for "Aborted" action status][aborted-icon] | The action stopped or didn't finish due to external problems, for example, a system outage or lapsed Azure subscription. |
- | **Cancelled** | ![Icon for "Cancelled" action status][cancelled-icon] | The action was running but received a cancel request. |
- | **Failed** | ![Icon for "Failed" action status][failed-icon] | The action failed. |
- | **Running** | ![Icon for "Running" action status][running-icon] | The action is currently running. |
- | **Skipped** | ![Icon for "Skipped" action status][skipped-icon] | The action was skipped because its `runAfter` conditions weren't met, for example, a preceding action failed. Each action has a `runAfter` object where you can set up conditions that must be met before the current action can run. |
- | **Succeeded** | ![Icon for "Succeeded" action status][succeeded-icon] | The action succeeded. |
- | **Succeeded with retries** | ![Icon for "Succeeded with retries" action status][succeeded-with-retries-icon] | The action succeeded but only after a single or multiple retries. To review the retry history, in the run history details view, select that action so that you can view the inputs and outputs. |
- | **Timed out** | ![Icon for "Timed out" action status][timed-out-icon] | The action stopped due to the timeout limit specified by that action's settings. |
- | **Waiting** | ![Icon for "Waiting" action status][waiting-icon] | Applies to a webhook action that's waiting for an inbound request from a caller. |
- ||||
+ | Action status | Description |
+ ||-|
+ | **Aborted** | The action stopped or didn't finish due to external problems, for example, a system outage or lapsed Azure subscription. |
+ | **Cancelled** | The action was running but received a cancel request. |
+ | **Failed** | The action failed. |
+ | **Running** | The action is currently running. |
+ | **Skipped** | The action was skipped because its `runAfter` conditions weren't met, for example, a preceding action failed. Each action has a `runAfter` object where you can set up conditions that must be met before the current action can run. |
+ | **Succeeded** | The action succeeded. |
+ | **Succeeded with retries** | The action succeeded but only after a single or multiple retries. To review the retry history, in the run history details view, select that action so that you can view the inputs and outputs. |
+ | **Timed out** | The action stopped due to the timeout limit specified by that action's settings. |
+ | **Waiting** | Applies to a webhook action that's waiting for an inbound request from a caller. |
+ |||
[aborted-icon]: ./media/create-stateful-stateless-workflows-azure-portal/aborted.png
[cancelled-icon]: ./media/create-stateful-stateless-workflows-azure-portal/cancelled.png
For a stateful workflow, after each workflow run, you can view the run history,
For a stateful workflow, you can review the trigger history for each run, including the trigger status along with inputs and outputs, separately from the [run history context](#view-run-history). In the Azure portal, trigger history and run history appear at the workflow level, not the logic app level. To find this historical data, follow these steps:
-1. In the Azure portal, on your workflow's menu, under **Developer**, select **Trigger Histories**.
+1. In the Azure portal, on the workflow menu, select **Overview**.
+
+1. On the **Overview** page, select **Trigger Histories**.
The **Trigger Histories** pane shows the trigger histories for your workflow's runs.
For a stateful workflow, you can review the trigger history for each run, includ
## Enable or open Application Insights after deployment
-During workflow execution, your logic app emits telemetry along with other events. You can use this telemetry to get better visibility into how well your workflow runs and how the Logic Apps runtime works in various ways. You can monitor your workflow by using [Application Insights](../azure-monitor/app/app-insights-overview.md), which provides near real-time telemetry (live metrics). This capability can help you investigate failures and performance problems more easily when you use this data to diagnose issues, set up alerts, and build charts.
+During a workflow run, your logic app emits telemetry along with other events. You can use this telemetry to get better visibility into how well your workflow runs and how the Logic Apps runtime works in various ways. You can monitor your workflow by using [Application Insights](../azure-monitor/app/app-insights-overview.md), which provides near real-time telemetry (live metrics). This capability can help you investigate failures and performance problems more easily when you use this data to diagnose issues, set up alerts, and build charts.
If your logic app's creation and deployment settings support using [Application Insights](../azure-monitor/app/app-insights-overview.md), you can optionally enable diagnostics logging and tracing for your logic app. You can do so either when you create your logic app in the Azure portal or after deployment. You need to have an Application Insights instance, but you can create this resource either [in advance](../azure-monitor/app/create-workspace-resource.md), when you create your logic app, or after deployment.
To enable Application Insights on a deployed logic app or open the Application I
1. On the logic app menu, under **Settings**, select **Application Insights**.
-1. If Application Insights isn't enabled, on the **Application Insights** pane, select **Turn on Application Insights**. After the pane updates, at the bottom, select **Apply**.
+1. If Application Insights isn't enabled, on the **Application Insights** pane, select **Turn on Application Insights**. After the pane updates, at the bottom, select **Apply** > **Yes**.
If Application Insights is enabled, on the **Application Insights** pane, select **View Application Insights data**.
To debug a stateless workflow more easily, you can enable the run history for th
1. On the logic app's menu, under **Settings**, select **Configuration**.
-1. On the **Application Settings** tab, select **New application setting**.
+1. On the **Application settings** tab, select **New application setting**.
1. On the **Add/Edit application setting** pane, in the **Name** box, enter this operation option name:
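For local development, the equivalent setting goes in your project's **local.settings.json** file. Here's a minimal sketch, assuming the `Workflows.<workflow-name>.OperationOptions` option name used by the preview runtime and a hypothetical workflow named `Stateless1`:

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "Workflows.Stateless1.OperationOptions": "WithStatelessRunHistory"
  }
}
```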
To stop the trigger from firing the next time when the trigger condition is met,
1. On the logic app menu, under **Workflows**, select **Workflows**. In the checkbox column, select the workflow to disable.
-1. On the Workflows pane toolbar, select **Disable**.
+1. On the **Workflows** pane toolbar, select **Disable**.
1. To confirm whether your operation succeeded or failed, on the main Azure toolbar, open the **Notifications** list (bell icon).
To stop the trigger from firing the next time when the trigger condition is met,
1. On the logic app menu, under **Workflows**, select **Workflows**. In the checkbox column, select the workflow to enable.
-1. On the Workflows pane toolbar, select **Enable**.
+1. On the **Workflows** pane toolbar, select **Enable**.
1. To confirm whether your operation succeeded or failed, on the main Azure toolbar, open the **Notifications** list (bell icon).
To fix this problem, follow these steps to delete the outdated version so that t
## Next steps
-We'd like to hear from you about your experiences with this public preview!
+We'd like to hear from you about your experiences with this scenario!
* For bugs or problems, [create your issues in GitHub](https://github.com/Azure/logicapps/issues).
* For questions, requests, comments, and other feedback, [use this feedback form](https://aka.ms/lafeedback).
logic-apps Logic Apps Add Run Inline Code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/logic-apps-add-run-inline-code.md
In this article, the example logic app triggers when a new email arrives in a wo
* If you use macOS or Linux, the Inline Code Operations actions are currently unavailable when you use the Azure Logic Apps (Preview) extension in Visual Studio Code.
- * Inline Code Operations actions have [updated limits](logic-apps-overview-preview.md#inline-code-limits).
+ * Inline Code Operations actions have [updated limits](logic-apps-limits-and-config.md#inline-code-action-limits).
You can start from either option here:
logic-apps Logic Apps Limits And Config https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/logic-apps-limits-and-config.md
Last updated 05/05/2021
> For Power Automate, see [Limits and configuration in Power Automate](/flow/limits-and-config).
-This article describes the limits and configuration information for Azure Logic Apps and related resources. Many limits are the same for both the multi-tenant and single-tenant (preview) Logic Apps service with noted differences where they exist.
+This article describes the limits and configuration information for Azure Logic Apps and related resources. To create logic app workflows, you choose the **Logic App** resource type based on your scenario, solution requirements, the capabilities that you want, and the environment where you want to run your workflows.
-The following table provides more information about the terms, *multi-tenant*, *single-tenant*, and *integration service environment*, that appear in this article:
+> [!NOTE]
+> Many limits are the same across these host environments, but differences are noted where they exist.
+> If your scenarios require different limits, [contact the Logic Apps team](mailto:logicappspm@microsoft.com)
+> to discuss your requirements.
-| Environment | Resource sharing and usage | [Pricing model](logic-apps-pricing.md) | Notes |
-|-|-|-|-|
-| Azure Logic Apps <br>(Multi-tenant) | Workflows in logic apps *across multiple tenants* share the same processing (compute), storage, network, and so on. | Consumption | Azure Logic Apps manages the default values for these limits, but you can change some of these values, if that option exists for a specific limit. |
-| Azure Logic Apps <br>(Single-tenant (preview)) | Workflows *in the same logic app and single tenant* share the same processing (compute), storage, network, and so on. | Preview, which is either the [Premium hosting plan](../azure-functions/functions-scale.md), or the [App Service hosting plan](../azure-functions/functions-scale.md) with a specific [pricing tier](../app-service/overview-hosting-plans.md) <p><p>If you have *stateful* workflows, which use [external storage](../azure-functions/storage-considerations.md#storage-account-requirements), the Azure Logic Apps runtime makes storage transactions that follow [Azure Storage pricing](https://azure.microsoft.com/pricing/details/storage/). | You can change the default values for many limits, based on your scenario's needs. <p><p>**Important**: Some limits have hard upper maximums. In Visual Studio Code, the changes you make to the default limit values in your logic app project configuration files won't appear in the designer experience. <p><p>For more information, see [Create workflows for single-tenant Azure Logic Apps using Visual Studio Code](create-stateful-stateless-workflows-visual-studio-code.md). |
-| Integration service environment | Workflows in the *same environment* share the same processing (compute), storage, network, and so on. | Fixed | Azure Logic Apps manages the default values for these limits, but you can change some of these values, if that option exists for a specific limit. |
-|||||
+The following table briefly summarizes differences between the original **Logic App (Consumption)** resource type and the new **Logic App (Preview)** resource type. You'll also learn how the *single-tenant* (preview) environment compares to the *multi-tenant* and *integration service environment (ISE)* for deploying, hosting, and running your logic app workflows.
-> [!TIP]
-> For scenarios that require different limits, [contact the Logic Apps team](mailto://logicappspm@microsoft.com) to discuss your requirements.
<a name="definition-limits"></a>
The following tables list the values for a single inbound or outbound call:
### Timeout duration
-By default, the HTTP action and APIConnection actions follow the [standard asynchronous operation pattern](https://docs.microsoft.com/azure/architecture/patterns/async-request-reply), while the Response action follows the *synchronous operation pattern*. Some managed connector operations make asynchronous calls or listen for webhook requests, so the timeout for these operations might be longer than the following limits. For more information, review [each connector's technical reference page](/connectors/connector-reference/connector-reference-logicapps-connectors) and also the [Workflow triggers and actions](../logic-apps/logic-apps-workflow-actions-triggers.md#http-action) documentation.
+By default, the HTTP action and APIConnection actions follow the [standard asynchronous operation pattern](/azure/architecture/patterns/async-request-reply), while the Response action follows the *synchronous operation pattern*. Some managed connector operations make asynchronous calls or listen for webhook requests, so the timeout for these operations might be longer than the following limits. For more information, review [each connector's technical reference page](/connectors/connector-reference/connector-reference-logicapps-connectors) and also the [Workflow triggers and actions](../logic-apps/logic-apps-workflow-actions-triggers.md#http-action) documentation.
+
+> [!NOTE]
+> For the preview logic app type in the single-tenant model, stateless workflows can only run *synchronously*.
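For example, to opt an HTTP action out of the asynchronous pattern so that the workflow waits synchronously on the response, you can set the `DisableAsyncPattern` operation option in the action's JSON definition. Here's a minimal sketch with a hypothetical endpoint:

```json
{
  "HTTP_get_status": {
    "type": "Http",
    "inputs": {
      "method": "GET",
      "uri": "https://example.com/long-running-status"
    },
    "operationOptions": "DisableAsyncPattern",
    "runAfter": {}
  }
}
```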
| Name | Multi-tenant | Single-tenant (preview) | Integration service environment | Notes |
||--|-||-|
logic-apps Logic Apps Overview Preview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/logic-apps-overview-preview.md
Title: Overview for Azure Logic Apps Preview
-description: Azure Logic Apps Preview is a cloud solution for building automated, single-tenant, stateful, and stateless workflows that integrate apps, data, services, and systems with minimal code for enterprise-level scenarios.
+ Title: Overview - single-tenant (preview) Azure Logic Apps
+description: Learn the differences between single-tenant (preview), multi-tenant, and integration service environment (ISE) for Azure Logic Apps.
ms.suite: integration-+ Previously updated : 03/24/2021 Last updated : 05/05/2021
-# Overview: Azure Logic Apps Preview
+# Single-tenant (preview) versus multi-tenant and integration service environment for Azure Logic Apps
> [!IMPORTANT]
-> This capability is in public preview, is provided without a service level agreement, and is not recommended for production
-> workloads. Certain features might not be supported or might have constrained capabilities. For more information, see
-> [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+> Currently in preview, the single-tenant Logic Apps environment and **Logic App (Preview)** resource type are subject to the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-With Azure Logic Apps Preview, you can build automation and integration solutions across apps, data, cloud services, and systems by creating and running single-tenant logic apps with the new **Logic App (Preview)** resource type. Using this single-tenant logic app type, can build multiple [*stateful* and *stateless* workflows](#stateful-stateless) that are powered by the redesigned Azure Logic Apps Preview runtime, which provides portability, better performance, and flexibility for deploying and running in various hosting environments, including not only Azure, but also Docker containers.
+Azure Logic Apps is a cloud-based platform for creating and running automated *logic app workflows* that integrate your apps, data, services, and systems. With this platform, you can quickly develop highly scalable integration solutions for your enterprise and business-to-business (B2B) scenarios. To create a logic app, you use either the original **Logic App (Consumption)** resource type or the new **Logic App (Preview)** resource type.
-How is this possible? The redesigned runtime uses the [Azure Functions extensibility model](../azure-functions/functions-bindings-register.md) and is hosted as an extension on the Azure Functions runtime. This architecture means that you can run the single-tenant logic app type anywhere that Azure Functions runs. You can host the redesigned runtime on almost any network topology, and choose any available compute size to handle the necessary workload that's required by your workflows. For more information, see [Introduction to Azure Functions](../azure-functions/functions-overview.md) and [Azure Functions triggers and bindings](../azure-functions/functions-triggers-bindings.md).
+Before you choose which resource type to use, review this article to learn how the new preview resource type compares to the original. You can then decide which type is best to use, based on your scenario's needs, solution requirements, and the environment where you want to deploy, host, and run your workflows.
-You can create the **Logic App (Preview)** resource either by [starting in the Azure portal](create-stateful-stateless-workflows-azure-portal.md) or by [creating a project in Visual Studio Code with the Azure Logic Apps (Preview) extension](create-stateful-stateless-workflows-visual-studio-code.md). Also, in Visual Studio Code, you can build *and locally run* your workflows in your development environment. Whether you use the portal or Visual Studio Code, you can deploy and run the single-tenant logic app type in the same kinds of hosting environments.
+If you're new to Azure Logic Apps, review the following documentation:
-This overview covers the following areas:
+* [What is Azure Logic Apps?](logic-apps-overview.md)
+* [What is a *logic app workflow*?](logic-apps-overview.md#logic-app-concepts)
-* [Differences between Azure Logic Apps Preview, the Azure Logic Apps multi-tenant environment, and the integration service environment](#preview-differences).
+<a name="resource-environment-differences"></a>
-* [Differences between stateful and stateless workflows](#stateful-stateless), including behavior differences between [nested stateful and stateless workflows](#nested-behavior).
+## Resource types and environments
-* [Capabilities in this public preview](#public-preview-contents).
+To create logic app workflows, you choose the **Logic App** resource type based on your scenario, solution requirements, the capabilities that you want, and the environment where you want to run your workflows.
-* [How the pricing model works](#pricing-model).
+The following table briefly summarizes differences between the new **Logic App (Preview)** resource type and the original **Logic App (Consumption)** resource type. You'll also learn how the *single-tenant* (preview) environment compares to the *multi-tenant* and *integration service environment (ISE)* for deploying, hosting, and running your logic app workflows.
-* [Changed, limited, unavailable, or unsupported capabilities](#limited-unavailable-unsupported).
-* [Limits in Azure Logic Apps Preview](#limits).
+<a name="preview-resource-type-introduction"></a>
-For more information, review these other topics:
+## Logic App (Preview) resource
+
+The **Logic App (Preview)** resource type is powered by the redesigned Azure Logic Apps (Preview) runtime, which uses the [Azure Functions extensibility model](../azure-functions/functions-bindings-register.md) and is hosted as an extension on the Azure Functions runtime. This design provides portability, flexibility, and more performance for your logic apps plus other capabilities and benefits inherited from the Azure Functions platform and Azure App Service ecosystem.
+
+For example, you can run **Logic App (Preview)** workflows anywhere that you can run Azure function apps and their functions. The preview resource type introduces a resource structure that can have multiple workflows, similar to how an Azure function app can include multiple functions. With a 1-to-many mapping, workflows in the same logic app and tenant share compute and processing resources, providing better performance due to their proximity. This structure differs from the **Logic App (Consumption)** resource where you have a 1-to-1 mapping between a logic app resource and a workflow.
+
+To learn more about portability, flexibility, and performance improvements, continue with the following sections. Or, for more information about the redesigned runtime and Azure Functions extensibility, review the following documentation:
* [Azure Logic Apps Running Anywhere - Runtime Deep Dive](https://techcommunity.microsoft.com/t5/integrations-on-azure/azure-logic-apps-running-anywhere-runtime-deep-dive/ba-p/1835564)
+* [Introduction to Azure Functions](../azure-functions/functions-overview.md)
+* [Azure Functions triggers and bindings](../azure-functions/functions-triggers-bindings.md)
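As a concrete illustration of this hosting model, a preview logic app project includes a **host.json** file whose extension bundle pulls in the workflow runtime, much like any other Azure Functions extension. Here's a minimal sketch; verify the bundle ID and version range against your generated project:

```json
{
  "version": "2.0",
  "extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle.Workflows",
    "version": "[1.*, 2.0.0)"
  }
}
```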
+
+<a name="portability"></a>
+<a name="flexibility"></a>
+
+### Portability and flexibility
+
+When you create logic apps using the **Logic App (Preview)** resource type, you can run your workflows anywhere you can run Azure function apps and their functions, not just in the single-tenant service environment.
+
+For example, when you use Visual Studio Code with the Azure Logic Apps (Preview) extension, you can *locally* develop, build, and run your workflows in your development environment without having to deploy to Azure. If your scenario requires containers, you can containerize your logic apps and deploy them as Docker containers.
-* [Logic Apps Public Preview Known Issues (GitHub)](https://github.com/Azure/logicapps/blob/master/articles/logic-apps-public-preview-known-issues.md)
+These capabilities provide substantial benefits compared to the multi-tenant model, which requires you to develop against an existing, running resource in Azure. Also, the multi-tenant model for automating **Logic App (Consumption)** resource deployment is based entirely on Azure Resource Manager (ARM) templates, which combine and handle resource provisioning for both apps and infrastructure.
-<a name="preview-differences"></a>
+With the **Logic App (Preview)** resource type, deployment becomes easier because you can separate app deployment from infrastructure deployment. You can package the redesigned runtime and workflows together as part of your logic app. You can use generic steps or tasks that build, assemble, and zip your logic app resources into ready-to-deploy artifacts. To deploy your infrastructure, you can still use ARM templates to separately provision those resources along with other processes and pipelines that you use for those purposes.
-## How does Azure Logic Apps Preview differ?
+To deploy your app, copy the artifacts to the host environment, and then start your app to run your workflows. Or, integrate your artifacts into deployment pipelines using the tools and processes that you already know and use. That way, you can deploy using your own chosen tools, no matter the technology stack that you use for development.
-The Azure Logic Apps Preview runtime uses [Azure Functions](../azure-functions/functions-overview.md) extensibility and is hosted as an extension on the Azure Functions runtime. This architecture means you can run the single-tenant logic app type anywhere that Azure Functions runs. You can host the Azure Logic Apps Preview runtime on almost any network topology that you want, and choose any available compute size to handle the necessary workload that your workflow needs. For more information about Azure Functions extensibility, see [WebJobs SDK: Creating custom input and output bindings](https://github.com/Azure/azure-webjobs-sdk/wiki/Creating-custom-input-and-output-bindings).
+By using standard build and deploy options, you can focus on app development separately from infrastructure deployment. As a result, you get a more generic project model where you can apply many similar or the same deployment options that you use for a generic app. You also benefit from a more consistent experience for building deployment pipelines around your app projects and for running the required tests and validations before publishing to production.
-With this new approach, the Azure Logic Apps Preview runtime and your workflows are both part of your app that you can package together. This capability lets you deploy and run your workflows by simply copying artifacts to the hosting environment and starting your app. This approach also provides a more standardized experience for building deployment pipelines around the workflow projects for running the required tests and validations before you deploy changes to production environments. For more information, see [Azure Logic Apps Running Anywhere - Runtime Deep Dive](https://techcommunity.microsoft.com/t5/integrations-on-azure/azure-logic-apps-running-anywhere-runtime-deep-dive/ba-p/1835564).
+<a name="performance"></a>
-The following table briefly summarizes the differences in the way that workflows share resources, based on the environment where they run. For differences in limits, see [Limits in Azure Logic Apps Preview](#limits).
+### Performance
-| Environment | Resource sharing and consumption |
-|-|-|
-| Azure Logic Apps (Multi-tenant) | Workflows *from customers across multiple tenants* share the same processing (compute), storage, network, and so on. |
-| Azure Logic Apps (Preview, single-tenant) | Workflows *in the same logic app and a single tenant* share the same processing (compute), storage, network, and so on. |
-| Integration service environment (unavailable in Preview) | Workflows in the *same environment* share the same processing (compute), storage, network, and so on. |
-|||
+Using the **Logic App (Preview)** resource type, you can create and run multiple workflows in the same single logic app and tenant. With this 1-to-many mapping, these workflows share resources, such as compute, processing, storage, and network, providing better performance due to their proximity.
+
+The preview logic app resource type and redesigned Azure Logic Apps (Preview) runtime provide another significant improvement by making the more popular managed connectors available as built-in operations. For example, you can use built-in operations for Azure Service Bus, Azure Event Hubs, SQL, and others. Meanwhile, the managed connector versions are still available and continue to work.
+
+When you use the new built-in operations, you create connections called *built-in connections* or *service provider connections*. Their managed connection counterparts are called *API connections*, which are created and run separately as Azure resources that you then also have to deploy, for example, by using ARM templates. Built-in operations and their connections run locally in the same process that runs your workflows. Both are hosted on the redesigned Logic Apps runtime. As a result, built-in operations and their connections provide better performance due to proximity with your workflows. This design also works well with deployment pipelines because the service provider connections are packaged into the same build artifact.
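To illustrate the difference, the following sketch shows a **connections.json** file with one built-in (service provider) Service Bus connection, which runs in-process, next to one Azure-hosted managed API connection. The names, IDs, and app settings here are hypothetical:

```json
{
  "serviceProviderConnections": {
    "serviceBus": {
      "displayName": "Orders queue",
      "parameterValues": {
        "connectionString": "@appsetting('serviceBus_connectionString')"
      },
      "serviceProvider": {
        "id": "/serviceProviders/serviceBus"
      }
    }
  },
  "managedApiConnections": {
    "office365": {
      "api": {
        "id": "/subscriptions/<subscription-ID>/providers/Microsoft.Web/locations/<region>/managedApis/office365"
      },
      "connection": {
        "id": "/subscriptions/<subscription-ID>/resourceGroups/<resource-group>/providers/Microsoft.Web/connections/office365"
      },
      "connectionRuntimeUrl": "<connection-runtime-URL>",
      "authentication": {
        "type": "ManagedServiceIdentity"
      }
    }
  }
}
```

Because the service provider connection is just configuration in the build artifact, it deploys with the app, while the managed API connection remains a separate Azure resource.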
+
+## Create, build, and deploy options
+
+To create a logic app based on the environment that you want, you have multiple options, for example:
+
+**Single-tenant environment**
+
+| Option | Resources and tools | More information |
+|--|||
+| Azure portal | **Logic App (Preview)** resource type | [Create integration workflows for single-tenant Logic Apps - Azure portal](create-stateful-stateless-workflows-azure-portal.md) |
+| Visual Studio Code | [**Azure Logic Apps (Preview)** extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurelogicapps) | [Create integration workflows for single-tenant Logic Apps - Visual Studio Code](create-stateful-stateless-workflows-visual-studio-code.md) |
+| Azure CLI | Logic Apps Azure CLI extension | Not yet available |
+||||
-Meanwhile, you can still create the multi-tenant logic app type in the Azure portal and in Visual Studio Code by using the multi-tenant Azure Logic Apps extension. Although the development experiences differ between the multi-tenant and single-tenant logic app types, your Azure subscription can include both types. You can view and access all the deployed logic apps in your Azure subscription, but the apps are organized in their own categories and sections.
+**Multi-tenant environment**
+
+| Option | Resources and tools | More information |
+|--|||
+| Azure portal | **Logic App (Consumption)** resource type | [Quickstart: Create integration workflows in multi-tenant Azure Logic Apps - Azure portal](quickstart-create-first-logic-app-workflow.md) |
| Visual Studio Code | [**Azure Logic Apps (Consumption)** extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-logicapps) | [Quickstart: Create integration workflows in multi-tenant Azure Logic Apps - Visual Studio Code](quickstart-create-logic-apps-visual-studio-code.md) |
+| Azure CLI | [**Logic Apps Azure CLI** extension](https://github.com/Azure/azure-cli-extensions/tree/master/src/logic) | - [Quickstart: Create and manage integration workflows in multi-tenant Azure Logic Apps - Azure CLI](quickstart-logic-apps-azure-cli.md) <p><p>- [az logic](/cli/azure/logic) |
+| Azure Resource Manager | [**Create a logic app** Azure Resource Manager (ARM) template](https://azure.microsoft.com/resources/templates/101-logic-app-create/) | [Quickstart: Create and deploy integration workflows in multi-tenant Azure Logic Apps - ARM template](quickstart-create-deploy-azure-resource-manager-template.md) |
+| Azure PowerShell | [Az.LogicApp module](/powershell/module/az.logicapp) | [Get started with Azure PowerShell](/powershell/azure/get-started-azureps) |
+| Azure REST API | [Azure Logic Apps REST API](/rest/api/logic) | [Get started with Azure REST API reference](/rest/api/azure) |
+||||
+
+**Integration service environment**
+
+| Option | Resources and tools | More information |
+|--|||
+| Azure portal | **Logic App (Consumption)** resource type with an existing ISE resource | Same as [Quickstart: Create integration workflows in multi-tenant Azure Logic Apps - Azure portal](quickstart-create-first-logic-app-workflow.md), but select an ISE, not a multi-tenant region. |
+||||
+
+Although your development experiences differ based on whether you create **Consumption** or **Preview** logic app resources, you can find and access all your deployed logic apps under your Azure subscription.
+
+For example, in the Azure portal, the **Logic apps** page shows both **Consumption** and **Preview** logic app resource types. In Visual Studio Code, deployed logic apps appear under your Azure subscription, but they are grouped by the extension that you used, namely **Azure: Logic Apps (Consumption)** and **Azure: Logic Apps (Preview)**.
<a name="stateful-stateless"></a>

## Stateful and stateless workflows
-With the single-tenant logic app type, you can create these workflow types within the same logic app:
+With the preview logic app type, you can create these workflow types within the same logic app:
* *Stateful*
- Create stateful workflows when you need to keep, review, or reference data from previous events. These workflows save the inputs and outputs for each action and their states in external storage, which makes reviewing the run details and history possible after each run finishes. Stateful workflows provide high resiliency if outages happen. After services and systems are restored, you can reconstruct interrupted runs from the saved state and rerun the workflows to completion. Stateful workflows can continue running for up to a year.
+ Create stateful workflows when you need to keep, review, or reference data from previous events. These workflows save the inputs and outputs for each action and their states in external storage, which makes reviewing the run details and history possible after each run finishes. Stateful workflows provide high resiliency if outages happen. After services and systems are restored, you can reconstruct interrupted runs from the saved state and rerun the workflows to completion. Stateful workflows can continue running for much longer than stateless workflows.
* *Stateless*
This table specifies the child workflow's behavior based on whether the parent a
| Stateless | Stateless | Trigger and wait |
||||
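In a preview logic app project, the workflow type is declared by the `kind` property in each workflow's **workflow.json** file. Here's a minimal sketch; change the value to `Stateless` for a stateless workflow:

```json
{
  "definition": {
    "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
    "contentVersion": "1.0.0.0",
    "triggers": {},
    "actions": {},
    "outputs": {}
  },
  "kind": "Stateful"
}
```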
-<a name="public-preview-contents"></a>
+<a name="other-capabilities"></a>
-## Capabilities
+## Other preview capabilities
-Azure Logic Apps Preview includes many current and additional capabilities, for example:
+The **Logic App (Preview)** resource and single-tenant model include many current and new capabilities, for example:
* Create logic apps and their workflows from [400+ connectors](/connectors/connector-reference/connector-reference-logicapps-connectors) for Software-as-a-Service (SaaS) and Platform-as-a-Service (PaaS) apps and services plus connectors for on-premises systems.
- * Some managed connectors are now available as built-in versions, which run similarly to the built-in triggers and actions, such as the Request trigger and HTTP action, that run natively on the Azure Logic Apps Preview runtime. For example, these new built-in connectors include Azure Service Bus, Azure Event Hubs, SQL Server, and MQ.
+ * More managed connectors are now available as built-in operations and run similarly to other built-in operations, such as Azure Functions. Built-in operations run natively on the redesigned Azure Logic Apps Preview runtime. For example, new built-in operations include Azure Service Bus, Azure Event Hubs, SQL Server, and MQ.
> [!NOTE]
- > For the built-in SQL Server connector , only the **Execute Query** action can directly connect to Azure
- > virtual networks without requiring the [on-premises data gateway](logic-apps-gateway-connection.md).
+ > For the built-in SQL Server version, only the **Execute Query** action can directly connect to Azure
+ > virtual networks without using the [on-premises data gateway](logic-apps-gateway-connection.md).
- * Create your own built-in connectors for any service you need by using the [preview release's extensibility framework](https://techcommunity.microsoft.com/t5/integrations-on-azure/azure-logic-apps-running-anywhere-built-in-connector/ba-p/1921272). Similar to built-in connectors such as Azure Service Bus and SQL Server, but unlike [custom connectors](../connectors/apis-list.md#custom-apis-and-connectors) that aren't currently supported for preview, these connectors provide higher throughput, low latency, local connectivity, and run natively in the same process as the preview runtime.
+ * You can create your own built-in connectors for any service that you need by using the [preview's extensibility framework](https://techcommunity.microsoft.com/t5/integrations-on-azure/azure-logic-apps-running-anywhere-built-in-connector/ba-p/1921272). Similarly to built-in operations such as Azure Service Bus and SQL Server but unlike [custom managed connectors](../connectors/apis-list.md#custom-apis-and-connectors), which aren't currently supported during preview, custom built-in connectors provide higher throughput, low latency, and local connectivity because they run in the same process as the redesigned runtime.
    The authoring capability is currently available only in Visual Studio Code, but isn't enabled by default. To create these connectors, [switch your project from extension bundle-based (Node.js) to NuGet package-based (.NET)](create-stateful-stateless-workflows-visual-studio-code.md#enable-built-in-connector-authoring). For more information, see [Azure Logic Apps Running Anywhere - Built-in connector extensibility](https://techcommunity.microsoft.com/t5/integrations-on-azure/azure-logic-apps-running-anywhere-built-in-connector/ba-p/1921272).

  * You can use the B2B actions for Liquid Operations and XML Operations without an integration account. To use these actions, you need to have Liquid maps, XML maps, or XML schemas that you can upload through the respective actions in the Azure portal or add to your Visual Studio Code project's **Artifacts** folder using the respective **Maps** and **Schemas** folders.
- * Create and deploy logic apps that can run anywhere because the Azure Logic Apps service generates Shared Access Signature (SAS) connection strings that these logic apps can use for sending requests to the cloud connection runtime endpoint. The Logic Apps service saves these connection strings with other application settings so that you can easily store these values in Azure Key Vault when you deploy in Azure.
+ * Logic app (preview) resources can run anywhere because the Azure Logic Apps service generates Shared Access Signature (SAS) connection strings that these logic apps can use for sending requests to the cloud connection runtime endpoint. The Logic Apps service saves these connection strings with other application settings so that you can easily store these values in Azure Key Vault when you deploy in Azure.
> [!NOTE]
- > By default, a **Logic App (Preview)** resource has its [system-assigned managed identity](../logic-apps/create-managed-service-identity.md)
+ > By default, a **Logic App (Preview)** resource has the [system-assigned managed identity](../logic-apps/create-managed-service-identity.md)
> automatically enabled to authenticate connections at runtime. This identity differs from the authentication > credentials or connection string that you use when you create a connection. If you disable this identity, > connections won't work at runtime. To view this setting, on your logic app's menu, under **Settings**, select **Identity**.
-* Create logic apps with stateless workflows that run only in memory so that they finish more quickly, respond faster, have higher throughput, and cost less to run because the run histories and data between actions don't persist in external storage. Optionally, you can enable run history for easier debugging. For more information, see [Stateful versus stateless logic apps](#stateful-stateless).
+* Stateless workflows run only in memory so that they finish more quickly, respond faster, have higher throughput, and cost less to run because the run histories and data between actions don't persist in external storage. Optionally, you can enable run history for easier debugging. For more information, see [Stateful versus stateless workflows](#stateful-stateless).
-* Locally run, test, and debug your logic apps and their workflows in the Visual Studio Code development environment.
+* You can locally run, test, and debug your logic apps and their workflows in the Visual Studio Code development environment.
Before you run and test your logic app, you can make debugging easier by adding and using breakpoints inside the **workflow.json** file for a workflow. However, breakpoints are supported only for actions at this time, not triggers. For more information, see [Create stateful and stateless workflows in Visual Studio Code](create-stateful-stateless-workflows-visual-studio-code.md#manage-breakpoints).
Azure Logic Apps Preview includes many current and additional capabilities, for
* Enable diagnostics logging and tracing capabilities for your logic app by using [Application Insights](../azure-monitor/app/app-insights-overview.md) when supported by your Azure subscription and logic app settings.
-* Access networking capabilities, such as connect and integrate privately with Azure virtual networks, similar to Azure Functions when you create and deploy your logic apps using the [Azure Functions Premium plan](../azure-functions/functions-premium-plan.md). For more information, review these topics:
+* Access networking capabilities, such as connect and integrate privately with Azure virtual networks, similar to Azure Functions when you create and deploy your logic apps using the [Azure Functions Premium plan](../azure-functions/functions-premium-plan.md). For more information, review the following documentation:
  * [Azure Functions networking options](../azure-functions/functions-networking-options.md)

  * [Azure Logic Apps Running Anywhere - Networking possibilities with Azure Logic Apps Preview](https://techcommunity.microsoft.com/t5/integrations-on-azure/logic-apps-anywhere-networking-possibilities-with-logic-app/ba-p/2105047)
-* Regenerate access keys for managed connections used by individual workflows in the single-tenant **Logic App (Preview)** resource. For this task, [follow the same steps for the multi-tenant **Logic Apps** resource but at the individual workflow level](logic-apps-securing-a-logic-app.md#regenerate-access-keys), not the logic app resource level.
-
-* Add parallel branches in the single-tenant designer by following the same steps as the multi-tenant designer.
+* Regenerate access keys for managed connections used by individual workflows in a **Logic App (Preview)** resource. For this task, [follow the same steps for the **Logic Apps (Consumption)** resource but at the individual workflow level](logic-apps-securing-a-logic-app.md#regenerate-access-keys), not the logic app resource level.
For more information, see [Changed, limited, unavailable, and unsupported capabilities](#limited-unavailable-unsupported) and the [Logic Apps Public Preview Known Issues page in GitHub](https://github.com/Azure/logicapps/blob/master/articles/logic-apps-public-preview-known-issues.md).
-<a name="pricing-model"></a>
-
-## Pricing model
-
-When you create the single-tenant logic app type in the Azure portal or deploy from Visual Studio Code, you must choose a hosting plan, either [App Service or Premium](../azure-functions/functions-scale.md), for your logic app to use. This plan determines the pricing model that applies to running your logic app. If you select the App Service plan, you must also choose a [pricing tier](../app-service/overview-hosting-plans.md).
-
-*Stateful* workflows use [external storage](../azure-functions/storage-considerations.md#storage-account-requirements), so the [Azure Storage pricing](https://azure.microsoft.com/pricing/details/storage/) applies to storage transactions that the Azure Logic Apps Preview runtime performs. For example, queues are used for scheduling, while tables and blobs are used for storing workflow states.
-
-> [!NOTE]
-> During public preview, running logic apps on App Service doesn't incur *additional* charges on top of your selected plan.
-
-For more information about the pricing models that apply to the single-tenant resource type, review these topics:
-
-* [Azure Functions scale and hosting](../azure-functions/functions-scale.md)
-* [Scale up an app in Azure App Service](../app-service/manage-scale-up.md)
-* [Azure Functions pricing details](https://azure.microsoft.com/pricing/details/functions/)
-* [App Service pricing details](https://azure.microsoft.com/pricing/details/app-service/)
-* [Azure Storage pricing details](https://azure.microsoft.com/pricing/details/storage/)
-
<a name="limited-unavailable-unsupported"></a>

## Changed, limited, unavailable, or unsupported capabilities
-In Azure Logic Apps Preview, these capabilities have changed, or they are currently limited, unavailable, or unsupported:
+For the **Logic App (Preview)** resource, these capabilities have changed, or they are currently limited, unavailable, or unsupported:
* **OS support**: Currently, the designer in Visual Studio Code doesn't work on Linux OS, but you can still deploy logic apps that use the Logic Apps Preview runtime to Linux-based virtual machines. For now, you can build your logic apps in Visual Studio Code on Windows or macOS and then deploy to a Linux-based virtual machine.
-* **Triggers and actions**: Built-in triggers and actions run natively in the Azure Logic Apps Preview runtime, while managed connectors are deployed in Azure. Some built-in triggers are unavailable, such as Sliding Window and Batch.
+* **Triggers and actions**: Built-in triggers and actions run natively in the Logic Apps Preview runtime, while managed connectors are deployed in Azure. Some built-in triggers are unavailable, such as Sliding Window and Batch. To start a stateful or stateless workflow, use the [built-in Recurrence, Request, HTTP, HTTP Webhook, Event Hubs, or Service Bus trigger](../connectors/apis-list.md). In the designer, built-in triggers and actions appear under the **Built-in** tab.
- To start your workflow, use the [built-in Recurrence, Request, HTTP, HTTP Webhook, Event Hubs, or Service Bus trigger](../connectors/apis-list.md). In the designer, built-in triggers and actions appear under the **Built-in** tab, while managed connector triggers and actions appear under the **Azure** tab.
+ For *stateful* workflows, [managed connector triggers and actions](../connectors/managed.md) appear under the **Azure** tab, except for the unavailable operations listed below. For *stateless* workflows, the **Azure** tab doesn't appear when you want to select a trigger. You can select only [managed connector *actions*, not triggers](../connectors/managed.md). Although you can enable Azure-hosted managed connectors for stateless workflows, the designer doesn't show any managed connector triggers for you to add.
> [!NOTE]
> To run locally in Visual Studio Code, webhook-based triggers and actions require additional setup. For more information, see
> [Create stateful and stateless workflows in Visual Studio Code](create-stateful-stateless-workflows-visual-studio-code.md#webhook-setup).
- * For *stateless workflows*, the **Azure** tab doesn't appear when you select a trigger because you can select only [managed connector *actions*, not triggers](../connectors/managed.md). Although you can enable Azure-deployed managed connectors for stateless workflows, the designer doesn't show any managed connector triggers for you to add.
-
- * For *stateful workflows*, other than the triggers and actions that are listed as unavailable below, both [managed connector triggers and actions](../connectors/managed.md) are available for you to use.
* These triggers and actions have either changed or are currently limited, unsupported, or unavailable:

  * [On-premises data gateway *triggers*](../connectors/managed.md#on-premises-connectors) are unavailable, but gateway actions *are* available.

  * The built-in action, [Azure Functions - Choose an Azure function](logic-apps-azure-functions.md) is now **Azure Function Operations - Call an Azure function**. This action currently works only for functions that are created from the **HTTP Trigger** template.
- In the Azure portal, you can select an HTTP trigger function where you have access by creating a connection through the user experience. If you inspect the function action's JSON definition in code view or the **workflow.json** file, the action refers to the function by using a `connectionName` reference. This version abstracts the function's information as a connection, which you can find in your project's **connections.json** file, which is available after you create a connection.
+ In the Azure portal, you can select an HTTP trigger function that you can access by creating a connection through the user experience. If you inspect the function action's JSON definition in code view or the **workflow.json** file, the action refers to the function by using a `connectionName` reference. This version abstracts the function's information as a connection, which you can find in your project's **connections.json** file, which is available after you create a connection.
> [!NOTE]
- > In the single-tenant version, the function action supports only query string authentication.
+ > In the single-tenant model, the function action supports only query string authentication.
> Azure Logic Apps Preview gets the default key from the function when making the connection,
> stores that key in your app's settings, and uses the key for authentication when calling the function.
>
- > As with the multi-tenant version, if you renew this key, for example, through the Azure Functions experience
+ > As in the multi-tenant model, if you renew this key, for example, through the Azure Functions experience
> in the portal, the function action no longer works due to the invalid key. To fix this problem, you need
> to recreate the connection to the function that you want to call or update your app's settings with the new key.
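As an illustrative sketch only, a function connection entry in your project's **connections.json** file might look like the following, with the default key stored in app settings and passed as the query-string `Code` value; all names and IDs here are hypothetical:

```json
{
  "functionConnections": {
    "invokeOrderFunction": {
      "displayName": "orderFunction",
      "function": {
        "id": "/subscriptions/<subscription-ID>/resourceGroups/<resource-group>/providers/Microsoft.Web/sites/<function-app>/functions/<function-name>"
      },
      "triggerUrl": "https://<function-app>.azurewebsites.net/api/<function-name>",
      "authentication": {
        "type": "QueryString",
        "name": "Code",
        "value": "@appsetting('invokeOrderFunction_functionAppKey')"
      }
    }
  }
}
```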
In Azure Logic Apps Preview, these capabilities have changed, or they are curren
* You no longer have to restart your logic app if you make changes in an **Inline Code Operations** action.
- * **Inline Code Operations** actions have [updated limits](logic-apps-overview-preview.md#inline-code-limits).
+ * **Inline Code Operations** actions have [updated limits](logic-apps-limits-and-config.md).
  * Some [built-in B2B triggers and actions for integration accounts](../connectors/managed.md#integration-account-connectors) are unavailable, for example, the **Flat File** encoding and decoding actions.

  * The built-in action, [Azure Logic Apps - Choose a Logic App workflow](logic-apps-http-endpoint.md) is now **Workflow Operations - Invoke a workflow in this workflow app**.
-* [Custom connectors](../connectors/apis-list.md#custom-apis-and-connectors) aren't currently supported for preview.
+* [Custom managed connectors](../connectors/apis-list.md#custom-apis-and-connectors) aren't currently supported.
-* **Hosting plan availability**: Whether you create the single-tenant **Logic App (Preview)** resource type in the Azure portal or deploy from Visual Studio Code, you can only use the Premium or App Service hosting plan in Azure. Consumption hosting plans are unavailable and unsupported for deploying this resource type. You can deploy from Visual Studio Code to a Docker container, but not to an [integration service environment (ISE)](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md).
+* **Hosting plan availability**: Whether you create the single-tenant **Logic App (Preview)** resource type in the Azure portal or deploy from Visual Studio Code, you can only use the Premium or App Service hosting plan in Azure. The preview resource type doesn't support Consumption hosting plans. You can deploy from Visual Studio Code to a Docker container, but not to an [integration service environment (ISE)](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md).
* **Breakpoint debugging in Visual Studio Code**: Although you can add and use breakpoints inside the **workflow.json** file for a workflow, breakpoints are supported only for actions at this time, not triggers. For more information, see [Create stateful and stateless workflows in Visual Studio Code](create-stateful-stateless-workflows-visual-studio-code.md#manage-breakpoints).
In Azure Logic Apps Preview, these capabilities have changed, or they are curren
* **Trigger history and run history**: For the **Logic App (Preview)** resource type, trigger history and run history in the Azure portal appear at the workflow level, not the logic app level. To find this historical data, follow these steps:

  * To view the run history, open the workflow in your logic app. On the workflow menu, under **Developer**, select **Monitor**.

  * To review the trigger history, open the workflow in your logic app. On the workflow menu, under **Developer**, select **Trigger Histories**.

<a name="firewall-permissions"></a>
-## Permit traffic in strict network and firewall scenarios
-
-If your environment has strict network requirements or firewalls that limit traffic, you have to allow access for any trigger or action connections in your logic app workflows.
+## Strict network and firewall traffic permissions
-To find the fully qualified domain names (FQDNs) for these connections, review the corresponding sections in these topics:
+If your environment has strict network requirements or firewalls that limit traffic, you have to allow access for any trigger or action connections in your logic app workflows. To find the fully qualified domain names (FQDNs) for these connections, review the corresponding sections in these topics:
* [Firewall permissions for single tenant logic apps - Visual Studio Code](create-stateful-stateless-workflows-visual-studio-code.md#firewall-setup)
* [Firewall permissions for single tenant logic apps - Azure portal](create-stateful-stateless-workflows-azure-portal.md#firewall-setup)
-<a name="limits"></a>
-
-## Updated limits
-
-Although many limits for Azure Logic Apps Preview stay the same as the [limits for multi-tenant Azure Logic Apps](logic-apps-limits-and-config.md), these limits have changed for Azure Logic Apps Preview.
-
-<a name="http-timeout-limits"></a>
-
-### HTTP timeout limits
-
-For a single inbound call or outbound call, the timeout limit is 230 seconds (3.9 minutes) for these triggers and actions:
-
-* Outbound request: HTTP trigger, HTTP action
-* Inbound request: Request trigger, HTTP Webhook trigger, HTTP Webhook action
-
-In comparison, here are the timeout limits for these triggers and actions in other environments where logic apps and their workflows run:
-
-* Multi-tenant Azure Logic Apps: 120 seconds (2 minutes)
-* Integration service environment: 240 seconds (4 minutes)
-
-For more information, see [HTTP limits](logic-apps-limits-and-config.md#http-limits).
-
-<a name="managed-connector-limits"></a>
-
-### Managed connectors
-
-Managed connectors are limited to 50 requests per minute per connection. To work with connector throttling issues, see [Handle throttling problems (429 - "Too many requests" error) in Azure Logic Apps](handle-throttling-problems-429-errors.md#connector-throttling).
-
-<a name="inline-code-limits"></a>
-
-### Inline Code Operations (Execute JavaScript Code)
-
-For a single logic app definition, the Inline Code Operations action, [**Execute JavaScript Code**](logic-apps-add-run-inline-code.md), has these updated limits:
-
-* The maximum number of code characters increases from 1,024 characters to 100,000 characters.
-
-* The maximum duration for running code increases from five seconds to 15 seconds.
-
-For more information, see [Logic app definition limits](logic-apps-limits-and-config.md#definition-limits).
-
## Next steps

* [Create stateful and stateless workflows in the Azure portal](create-stateful-stateless-workflows-azure-portal.md)
* [Create stateful and stateless workflows in Visual Studio Code](create-stateful-stateless-workflows-visual-studio-code.md)
* [Logic Apps Public Preview Known Issues page in GitHub](https://github.com/Azure/logicapps/blob/master/articles/logic-apps-public-preview-known-issues.md)
-Also, we'd like to hear from you about your experiences with Azure Logic Apps Preview!
+We'd also like to hear about your experiences with the preview logic app resource type and preview single-tenant model!
* For bugs or problems, [create your issues in GitHub](https://github.com/Azure/logicapps/issues).
* For questions, requests, comments, and other feedback, [use this feedback form](https://aka.ms/lafeedback).
logic-apps Logic Apps Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/logic-apps-overview.md
Title: Overview for Azure Logic Apps
-description: The Logic Apps cloud platform helps you create and run automated workflows for enterprise integration scenarios using little to no code.
+description: Azure Logic Apps is a cloud platform for automating workflows that integrate apps, data, services, and systems using little to no code. Workflows can run in a multi-tenant, single-tenant, or dedicated environment.
ms.suite: integration Previously updated : 04/26/2021 Last updated : 05/07/2021
-# What is Azure Logic Apps
+# What is Azure Logic Apps?
-[Logic Apps](https://azure.microsoft.com/services/logic-apps) is a cloud-based platform that helps you create and run automated [workflows](#logic-app-concepts) for integrating apps, data, services, and systems. Using this platform, you can more easily and quickly build highly scalable integration solutions for enterprise and business-to-business (B2B) scenarios. As a member of [Azure Integration Services](https://azure.microsoft.com/product-categories/integration/), Logic Apps provides a simpler way for you to connect legacy, modern, and cutting-edge systems across cloud, on premises, and hybrid environments.
+[Azure Logic Apps](https://azure.microsoft.com/services/logic-apps) is a cloud-based platform for creating and running automated [*workflows*](#logic-app-concepts) that integrate your apps, data, services, and systems. With this platform, you can quickly develop highly scalable integration solutions for your enterprise and business-to-business (B2B) scenarios. As a member of [Azure Integration Services](https://azure.microsoft.com/product-categories/integration/), Logic Apps simplifies the way that you connect legacy, modern, and cutting-edge systems across cloud, on premises, and hybrid environments.
-This list describes just a few example tasks, business processes, and workloads that you can automate with the Logic Apps service:
+The following list describes just a few example tasks, business processes, and workloads that you can automate using the Logic Apps service:
* Schedule and send email notifications using Office 365 when a specific event happens, for example, a new file is uploaded. * Route and process customer orders across on-premises systems and cloud services.
This list describes just a few example tasks, business processes, and workloads
> [!VIDEO https://channel9.msdn.com/Blogs/Azure/Introducing-Azure-Logic-Apps/player]
-To securely access and run operations in real time on various data sources, choose from a [constantly growing gallery](/connectors/connector-reference/connector-reference-logicapps-connectors) of [Microsoft-managed connectors](#logic-app-concepts), for example:
+Based on the logic app resource type that you choose and create, your logic apps run in either a multi-tenant, single-tenant, or dedicated integration service environment. For example, when you containerize single-tenant logic apps, you can deploy your apps as containers and run them anywhere that Azure Functions can run. For more information, review [Resource type and host environment differences for logic apps](#resource-environment-differences).
+
+To securely access and run operations in real time on various data sources, you can choose [*managed connectors*](#logic-app-concepts) from a [400+ and growing Azure connectors ecosystem](/connectors/connector-reference/connector-reference-logicapps-connectors) to use in your workflows, for example:
* Azure services such as Blob Storage and Service Bus
-* Office services such as Outlook, Excel, and SharePoint
+* Office 365 services such as Outlook, Excel, and SharePoint
* Database servers such as SQL and Oracle * Enterprise systems such as SAP and IBM MQ * File shares such as FTP and SFTP
-To communicate with any service endpoint, run your own code, organize your workflow, or manipulate data, you can use [built-in triggers and actions](#logic-app-concepts), which run natively within the Logic Apps service. For example, built-in triggers include Request, HTTP, and Recurrence. Built-in actions include Condition, For each, Execute JavaScript code, and operations that call Azure functions, web apps or API apps hosted in Azure, and other Logic Apps workflows.
+To communicate with any service endpoint, run your own code, organize your workflow, or manipulate data, you can use [*built-in*](#logic-app-concepts) triggers and actions, which run natively within the Logic Apps service. For example, built-in triggers include Request, HTTP, and Recurrence. Built-in actions include Condition, For each, Execute JavaScript code, and operations that call Azure functions, web apps or API apps hosted in Azure, and other Logic Apps workflows.
-For B2B integration scenarios, Logic Apps includes capabilities from [BizTalk Server](/biztalk/core/introducing-biztalk-server). You can create an [integration account](logic-apps-enterprise-integration-create-integration-account.md) where you define trading partners, agreements, schemas, maps, and other B2B artifacts. When you link this account to a logic app, you can build workflows that work with these artifacts and exchange messages using protocols such as AS2, EDIFACT, X12, and RosettaNet.
+For B2B integration scenarios, Logic Apps includes capabilities from [BizTalk Server](/biztalk/core/introducing-biztalk-server). To define B2B artifacts, you create an [*integration account*](#logic-app-concepts) where you store these artifacts. After you link this account to your logic app, your workflows can use these B2B artifacts and exchange messages that comply with Electronic Data Interchange (EDI) and Enterprise Application Integration (EAI) standards.
For more information about the ways workflows can access and work with apps, data, services, and systems, review the following documentation:
For more information about the ways workflows can access and work with apps, dat
## Key terms
-* **Workflow**: A series of steps that defines a task or process, starting with a single trigger and followed by one or multiple actions
-
-* **Trigger**: The first step that starts every workflow and specifies the condition to meet before running any actions in the workflow. For example, a trigger event might be getting an email in your inbox or detecting a new file in a storage account.
+* *Logic app*: The Azure resource to create when you want to develop a workflow. Based on your scenario's needs and solution's requirements, you can create logic apps that run in the multi-tenant, single-tenant (preview), or integration service environment (ISE). For more information, review [Resource type and host environment differences for logic apps](#resource-environment-differences).
-* **Action**: Each subsequent step that follows after the trigger and runs some operation in a workflow
+* *Workflow*: A series of steps that defines a task or process, starting with a single trigger and followed by one or multiple actions.
-* **Managed connector**: A Microsoft-managed REST API that provides access to a specific app, data, service, or system. Before you can use them, most managed connectors require that you first create a connection from your workflow and authenticate your identity.
+* *Trigger*: The first step that starts every workflow and specifies the condition to meet before running any actions in the workflow. For example, a trigger event might be getting an email in your inbox or detecting a new file in a storage account.
- For example, you can start a workflow with a trigger or include an action that works with Azure Blob Storage, Office 365, Salesforce, or SFTP servers. For more information, review [Managed connectors for Azure Logic Apps](../connectors/managed.md).
+* *Action*: Each subsequent step that follows after the trigger and runs some operation in a workflow.
-* **Built-in trigger or action**: A natively running Logic Apps operation that provides a way to control your workflow's schedule or structure, run your own code, manage or manipulate data, or complete other tasks in your workflow. Most built-in operations aren't associated with any service or system. Many also don't require that you first create a connection from your workflow and authenticate your identity. Built-in operations are also available for a few services, systems, and protocols, such as Azure Functions, Azure API Management, Azure App Service, and more.
+* *Built-in trigger or action*: A natively running Logic Apps operation that provides a way to control your workflow's schedule or structure, run your own code, manage or manipulate data, or complete other tasks in your workflow. Most built-in operations aren't associated with any service or system. Many also don't require that you first create a connection from your workflow and authenticate your identity. However, built-in operations are also available for some frequently used services, systems, and protocols, such as Azure Functions, Azure API Management, Azure App Service, and more.
For example, you can start almost any workflow on a schedule when you use the Recurrence trigger. Or, you can have your workflow wait until called when you use the Request trigger. For more information, review [Built-in triggers and actions for Azure Logic Apps](../connectors/built-in.md).
-* **Logic app**: The Azure resource to create for building a workflow. Based on your scenario's needs and solution's requirements, you can create logic apps that run in either the multi-tenant or single-tenant Logic Apps service environment or that run in an integration service environment. For more information, review [Host environments for logic apps](#host-environments).
+* *Managed connector*: A prebuilt proxy or wrapper around a REST API that provides prebuilt triggers and actions for your workflow to access a specific app, data, service, or system. Before you can use most managed connectors, you must first create a connection from your workflow and authenticate your identity.
+
+ For example, you can start a workflow with a trigger or add an action that works with Azure Blob Storage, Office 365, Salesforce, or SFTP servers. Managed connectors are hosted and maintained by Microsoft. For more information, review [Managed connectors for Azure Logic Apps](../connectors/managed.md).
+
+* *Integration account*: The Azure resource to create when you want to define and store B2B artifacts for use in your workflows. After you link this account to your logic app, your workflows can use these B2B artifacts and exchange messages that comply with Electronic Data Interchange (EDI) and Enterprise Application Integration (EAI) standards.
+
+ For example, you can define trading partners, agreements, schemas, maps, and other B2B artifacts. You can create workflows that use these artifacts and exchange messages over protocols such as AS2, EDIFACT, X12, and RosettaNet. For more information, review [Create and manage integration accounts for B2B enterprise integrations](logic-apps-enterprise-integration-create-integration-account.md).
<a name="how-do-logic-apps-work"></a>
For example, the following workflow starts with a Dynamics trigger that has a bu
You can visually create workflows using the Logic Apps designer in the Azure portal, Visual Studio Code, or Visual Studio. Each workflow also has an underlying definition that's described using JavaScript Object Notation (JSON). If you prefer, you can edit workflows by changing this JSON definition. For some creation and management tasks, Logic Apps provides Azure PowerShell and Azure CLI command support. For automated deployment, Logic Apps supports Azure Resource Manager templates.
-<a name="host-environments"></a>
+<a name="resource-environment-differences"></a>
-## Host environments
+## Resource type and host environment differences
-Based on your scenario and solution requirements, you can create logic apps that differ in the Logic Apps service environment where they run and how workflows use resources. The following table briefly summarizes these differences.
+To create logic app workflows, you choose the **Logic App** resource type based on your scenario, solution requirements, the capabilities that you want, and the environment where you want to run your workflows.
-| Environment | [Pricing model](logic-apps-pricing.md) | Description |
-|-|-|-|
-| Azure Logic Apps (multi-tenant) | Consumption | A logic app can have only one workflow. <p><p>Workflows from different logic apps across *multiple tenants* share the same processing (compute), storage, network, and so on. |
-| Azure Logic Apps ([single-tenant (Preview)](logic-apps-overview-preview.md)) | [Preview](logic-apps-overview-preview.md#pricing-model) | A logic app can have multiple workflows. <p><p>Workflows from the *same logic app in a single tenant* share the same processing (compute), storage, network, and so on. |
-| [Integration service environment (ISE)](connect-virtual-network-vnet-isolated-environment-overview.md) | Fixed | A logic app can have only one workflow. <p><p>Workflows from different logic apps in the *same environment* share the same processing (compute), storage, network, and so on. |
-||||
+The following table briefly summarizes differences between the original **Logic App (Consumption)** resource type and the new **Logic App (Preview)** resource type. You'll also learn how the *single-tenant* (preview) environment compares to the *multi-tenant* and *integration service environment (ISE)* for deploying, hosting, and running your logic app workflows.
-Logic apps hosted in Logic Apps service environments also have different limits. For more information, review [Limits in Logic Apps](logic-apps-limits-and-config.md) and [Limits in Logic Apps (Preview)](logic-apps-overview-preview.md#limits).
## Why use Logic Apps
When you create an ISE, Azure *injects* or deploys that ISE into your Azure virt
#### Pricing options
-Each logic app type, which differs by capabilities and where they run (multi-tenant, single-tenant, integration service environment), has a different [pricing model](../logic-apps/logic-apps-pricing.md). For example, multi-tenant logic apps use consumption-based pricing, while logic apps in an integration service environment use fixed pricing. Learn more about [pricing and metering](../logic-apps/logic-apps-pricing.md) for Logic Apps.
+Each logic app type, which differs by capabilities and where they run (multi-tenant, single-tenant, integration service environment), has a different [pricing model](../logic-apps/logic-apps-pricing.md). For example, multi-tenant logic apps use consumption pricing, while logic apps in an integration service environment use fixed pricing. Learn more about [pricing and metering](../logic-apps/logic-apps-pricing.md) for Logic Apps.
## How does Logic Apps differ from Functions, WebJobs, and Power Automate?
Learn more about the Logic Apps platform with these introductory videos:
## Next steps
-* [Quickstart: Create your first logic app workflow](../logic-apps/quickstart-create-first-logic-app-workflow.md)
-* Learn about [serverless solutions with Azure](../logic-apps/logic-apps-serverless-overview.md)
-* Learn about [B2B integration with the Enterprise Integration Pack](../logic-apps/logic-apps-enterprise-integration-overview.md)
+* [Quickstart: Create your first logic app workflow](../logic-apps/quickstart-create-first-logic-app-workflow.md)
logic-apps Logic Apps Pricing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/logic-apps-pricing.md
Last updated 03/24/2021
<a name="consumption-pricing"></a>
-## Multi-tenant pricing
+## Consumption pricing (multi-tenant)
-A pay-for-use consumption pricing model applies to logic apps that run in the public, "global", multi-tenant Logic Apps service. All successful and unsuccessful runs are metered and billed.
+A pay-for-use consumption pricing model applies to logic apps that run in the public, "global", multi-tenant Logic Apps environment. All successful and unsuccessful runs are metered and billed.
For example, a request that a polling trigger makes is still metered as an execution even if that trigger is skipped, and no logic app workflow instance is created.
To help you estimate more accurate consumption costs, review these tips:
For example, suppose you set up a trigger that checks an endpoint every day. When the trigger checks the endpoint and finds 15 events that meet the criteria, the trigger fires and runs the corresponding workflow 15 times. The Logic Apps service meters all the actions that those 15 workflows perform, including the trigger requests.
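To make that metering arithmetic concrete, here is a rough Python sketch for the polling example above. The number of actions per run and the per-execution rate are assumptions; substitute current rates from the pricing page.

```python
# Back-of-the-envelope consumption estimate: one daily trigger check
# that fires 15 workflow runs, each running an assumed 10 actions.
trigger_checks_per_day = 1
runs_per_check = 15
actions_per_run = 10            # assumption; depends on your workflow

daily_executions = trigger_checks_per_day + runs_per_check * actions_per_run
monthly_executions = 30 * daily_executions

price_per_execution = 0.000025  # placeholder rate; check current pricing
print(f"~{monthly_executions:,} metered executions/month, "
      f"~${monthly_executions * price_per_execution:.2f}/month")
```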
+<a name="preview-pricing"></a>
+
+## Preview pricing (single-tenant)
+
+When you create the **Logic App (Preview)** resource in the Azure portal or deploy from Visual Studio Code, you must choose a hosting plan, either [App Service or Functions Premium](../azure-functions/functions-scale.md), for your logic app. If you select the App Service plan, you must also choose a [pricing tier](../app-service/overview-hosting-plans.md). These choices determine the pricing that applies when running your workflows in single-tenant Logic Apps.
+
+> [!NOTE]
+> During preview, running preview logic app resources and workflows in App Service doesn't incur *extra* charges on top of your selected hosting plan.
+
+Azure Logic Apps uses [Azure Storage](/storage) for any storage operations. With multi-tenant Logic Apps, any storage usage and costs are attached to the logic app. With single-tenant Logic Apps, you can use your own Azure [storage account](../azure-functions/storage-considerations.md#storage-account-requirements). This capability gives you more control and flexibility with your Logic Apps data.
+
+When *stateful* workflows run their operations, the Azure Logic Apps runtime makes storage transactions. For example, queues are used for scheduling, while tables and blobs are used for storing workflow states. Storage costs change based on your workflow's content. Different triggers, actions, and payloads result in different storage operations and needs. Storage transactions follow the [Azure Storage pricing model](https://azure.microsoft.com/pricing/details/storage/). Storage costs are separately listed in your Azure billing invoice.
+
+### Estimate storage needs and costs
+
+To help you get some idea about the number of storage operations that a workflow might run and their cost, try using the [Logic Apps Storage calculator](https://logicapps.azure.com/calculator). You can either select a sample workflow or use an existing workflow definition. The first calculation estimates the number of operations. You can then use these numbers to estimate costs using the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/).
+
+This article describes how to estimate your storage costs when you're using your own Azure Storage account with single-tenant logic apps. First, you can estimate the number of storage operations you'll perform by using the Logic Apps Storage calculator. Then, you can estimate your possible storage costs by using these numbers in the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/).
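As a sketch of that second step, the numbers below are placeholders: the operation counts would come from the Logic Apps Storage calculator, and the per-10,000-transaction rates from the Azure Storage pricing page.

```python
# Placeholder operation counts from the Logic Apps Storage calculator.
estimated_ops = {"queue": 120_000, "table": 450_000, "blob": 80_000}

# Placeholder prices per 10,000 transactions; look up current rates for
# your region and redundancy on the Azure Storage pricing page.
price_per_10k = {"queue": 0.004, "table": 0.00036, "blob": 0.0044}

monthly_cost = sum(
    ops / 10_000 * price_per_10k[kind] for kind, ops in estimated_ops.items()
)
print(f"Estimated workflow storage cost: ~${monthly_cost:.4f}/month")
```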
+
+For more information about the pricing models that apply to preview logic apps, review the following documentation:
+
+* [Azure Functions scale and hosting](../azure-functions/functions-scale.md)
+* [Scale up an app in Azure App Service](../app-service/manage-scale-up.md)
+* [Azure Functions pricing details](https://azure.microsoft.com/pricing/details/functions/)
+* [App Service pricing details](https://azure.microsoft.com/pricing/details/app-service/)
+* [Azure Storage pricing details](https://azure.microsoft.com/pricing/details/storage/)
+ <a name="fixed-pricing"></a>
-## ISE pricing
+## ISE pricing (dedicated)
-A fixed pricing model applies to logic apps that run in an [*integration service environment* (ISE)](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md). An ISE is billed using the [Integration Service Environment price](https://azure.microsoft.com/pricing/details/logic-apps), which depends on the [ISE level or *SKU*](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md#ise-level) that you create. This pricing differs from multi-tenant pricing as you're paying for reserved capacity and dedicated resources whether or not you use them.
+A fixed pricing model applies to logic apps that run in the dedicated [*integration service environment* (ISE)](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md). An ISE is billed using the [Integration Service Environment price](https://azure.microsoft.com/pricing/details/logic-apps), which depends on the [ISE level or *SKU*](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md#ise-level) that you create. This pricing differs from multi-tenant pricing as you're paying for reserved capacity and dedicated resources whether or not you use them.
| ISE SKU | Description |
||-|
machine-learning Azure Machine Learning Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/azure-machine-learning-release-notes.md
__RSS feed__: Get notified when this page is updated by copying and pasting the
### Azure Machine Learning SDK for Python v1.28.0
+ **Bug fixes and improvements**
- + **azureml-automl-core**
- + Added support for version 2 of AutoML scoring script which handles improvements and is consistent with the Designer spec
+ **azureml-automl-runtime**
- + Added support for version 2 of AutoML scoring script which handles improvements and is consistent with the Designer spec
- + **azureml-contrib-automl-dnn-forecasting**
- + Added support for version 2 of AutoML scoring script which handles improvements and is consistent with the Designer spec
+ + Improved AutoML Scoring script to make it consistent with designer
+ + Patch bug where forecasting with the Prophet model would throw a "missing column" error if trained on an earlier version of the SDK.
+ **azureml-contrib-dataset**
  + Updated documentation description with indication that libfuse should be installed while using mount.
+ **azureml-core**
- + Updated default CPU image to mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04
- + Updated default GPU image to mcr.microsoft.com/azureml/openmpi3.1.2-cuda10.2-cudnn8-ubuntu18.04
+ + Default CPU curated image is now mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04. Default GPU image is now mcr.microsoft.com/azureml/openmpi3.1.2-cuda10.2-cudnn8-ubuntu18.04
+ Run.fail() is now deprecated, use Run.tag() to mark run as failed or use Run.cancel() to mark the run as canceled.
- + Updated documentation description with indication that libfuse should be installed while using mount.
- + Enable audience in msi authentication
- + Add experimental register_dask_dataframe() support to tabular dataset.
+ + Updated documentation with a note that libfuse should be installed when mounting a file dataset.
+ + Add experimental register_dask_dataframe() support to tabular dataset.
+ Support DatabricksStep with Azure Blob/ADLS as inputs/outputs and expose the parameter permit_cluster_restart to let customers decide whether AML can restart the cluster when the I/O access configuration needs to be added to the cluster
- + **azureml-dataprep**
- + azureml-dataset-runtime now supports versions of pyarrow < 4.0.0
+ **azureml-dataset-runtime**
  + azureml-dataset-runtime now supports versions of pyarrow < 4.0.0
+ **azureml-mlflow**
machine-learning How To Access Azureml Behind Firewall https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-access-azureml-behind-firewall.md
For more information, see [Create an Azure Batch pool in a virtual network](../b
1. To restrict access to models deployed to Azure Kubernetes Service (AKS), see [Restrict egress traffic in Azure Kubernetes Service](../aks/limit-egress-traffic.md).
+### Diagnostics for support
+
+If you need to gather diagnostics information when working with Microsoft support, use the following steps:
+
+1. Add a __Network rule__ to allow traffic to and from the `AzureMonitor` tag.
+1. Add __Application rules__ for the following hosts. Select __http, https__ for the __Protocol:Port__ for these hosts:
+
+ + **dc.applicationinsights.azure.com**
+ + **dc.applicationinsights.microsoft.com**
+ + **dc.services.visualstudio.com**
+
+ For a list of IP addresses for the Azure Monitor hosts, see [IP addresses used by Azure Monitor](../azure-monitor/app/ip-addresses.md).
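As an optional check, the following Python sketch verifies that the hosts above are reachable on ports 80 and 443 from behind the firewall:

```python
import socket

# Hosts from the application rules above; ports 80 and 443 match http, https.
HOSTS = [
    "dc.applicationinsights.azure.com",
    "dc.applicationinsights.microsoft.com",
    "dc.services.visualstudio.com",
]

for host in HOSTS:
    for port in (80, 443):
        try:
            socket.create_connection((host, port), timeout=5).close()
            print(f"{host}:{port} reachable")
        except OSError as err:
            print(f"{host}:{port} blocked: {err}")
```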
## Other firewalls

The guidance in this section is generic, as each firewall has its own terminology and specific configurations. If you have questions about how to allow communication through your firewall, please consult the documentation for the firewall you are using.
Also, use the information in [forced tunneling](how-to-secure-training-vnet.md#f
For information on restricting access to models deployed to Azure Kubernetes Service (AKS), see [Restrict egress traffic in Azure Kubernetes Service](../aks/limit-egress-traffic.md).
+> [!TIP]
+> If you are working with Microsoft Support to gather diagnostics information, you must allow outbound traffic to the IP addresses used by Azure Monitor hosts. For a list of IP addresses for the Azure Monitor hosts, see [IP addresses used by Azure Monitor](../azure-monitor/app/ip-addresses.md).
### Python hosts

The hosts in this section are used to install Python packages. They are required during development, training, and deployment.
machine-learning How To Create Manage Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-create-manage-compute-instance.md
You can also use the following environment variables in your script:
Once you store the script, specify it during creation of your compute instance:
-1. Sign into the [studio](https://ml.azureml.com) and select your workspace.
+1. Sign into the [studio](https://ml.azure.com/) and select your workspace.
1. On the left, select **Compute**.
1. Select **+New** to create a new compute instance.
1. [Fill out the form](how-to-create-attach-compute-studio.md#compute-instance).
machine-learning How To Deploy And Where https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-deploy-and-where.md
adobe-target: true
-# Deploy machine learning models to Azure
+# Deploy machine learning models to Azure
Learn how to deploy your machine learning or deep learning model as a web service in the Azure cloud.
The workflow is similar no matter where you deploy your model:
1. Prepare an entry script
1. Prepare an inference configuration
1. Deploy the model locally to ensure everything works
-1. Choose a compute target.
+1. Choose a compute target
1. Re-deploy the model to the cloud
-1. Test the resulting web service.
+1. Test the resulting web service
For more information on the concepts involved in the machine learning deployment workflow, see [Manage, deploy, and monitor models with Azure Machine Learning](concept-model-management-and-deployment.md).
The following examples demonstrate how to register a model.
### Register a model from a local file
-```azurecli-interactive
-wget https://aka.ms/bidaf-9-model -O model.onnx
-az ml model register -n bidaf_onnx -p ./model.onnx
-```
+[!notebook-python[] (~/azureml-examples-main/python-sdk/tutorials/deploy-local/2.deploy-local-cli.ipynb?name=register-model-from-local-file-code)]
Set `-p` to the path of a folder or a file that you want to register.
For more information on `az ml model register`, consult the [reference documenta
### Register a model from a local file You can register a model by providing the local path of the model. You can provide the path of either a folder or a single file on your local machine.
+<!-- python nb call -->
+[!notebook-python[] (~/azureml-examples-main/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=register-model-from-local-file-code)]
-```python
-
-import urllib.request
-from azureml.core.model import Model
-# Download model
-urllib.request.urlretrieve("https://aka.ms/bidaf-9-model", 'model.onnx')
-
-# Register model
-model = Model.register(ws, model_name='bidaf_onnx', model_path='./model.onnx')
-```
To include multiple files in the model registration, set `model_path` to the path of a folder that contains the files.
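For example, a minimal sketch that registers a folder, assuming a workspace defined in a local `config.json` and a hypothetical `./models` folder:

```python
from azureml.core import Workspace
from azureml.core.model import Model

ws = Workspace.from_config()  # assumes a config.json in the working directory

# Registering a folder captures every file inside it as one model version.
model = Model.register(
    workspace=ws,
    model_name="bidaf_onnx",  # name reused from the examples above
    model_path="./models",    # hypothetical folder containing the model files
)
print(model.name, model.version)
```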
You can use any [Azure Machine Learning curated environment](./resource-curated-
A minimal inference configuration can be written as:
-```json
-{
- "entryScript": "echo_score.py",
- "sourceDirectory": "./source_dir",
- "environment": {
- "docker": {
- "arguments": [],
- "baseDockerfile": null,
- "baseImage": "mcr.microsoft.com/azureml/base:intelmpi2018.3-ubuntu16.04",
- "enabled": false,
- "sharedVolumes": true,
- "shmSize": null
- },
- "environmentVariables": {
- "EXAMPLE_ENV_VAR": "EXAMPLE_VALUE"
- },
- "name": "my-deploy-env",
- "python": {
- "baseCondaEnvironment": null,
- "condaDependencies": {
- "channels": [],
- "dependencies": [
- "python=3.6.2",
- {
- "pip": [
- "azureml-defaults"
- ]
- }
- ],
- "name": "project_environment"
- },
- "condaDependenciesFile": null,
- "interpreterPath": "python",
- "userManagedDependencies": false
- },
- "version": "1"
- }
-}
-```
-Save this file with the name `inferenceconfig.json`.
+Save this file with the name `dummyinferenceconfig.json`.
[See this article](./reference-azure-machine-learning-cli.md#inference-configuration-schema) for a more thorough discussion of inference configurations.
Save this file with the name `inferenceconfig.json`.
The following example demonstrates how to create a minimal environment with no pip dependencies, using the dummy scoring script you defined above.
-```python
-from azureml.core import Environment
-from azureml.core.model import InferenceConfig
-
-env = Environment(name='project_environment')
-inf_config = InferenceConfig(environment=env, source_directory='./source_dir', entry_script='./echo_score.py')
-```
+[!notebook-python[] (~/azureml-examples-main/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=inference-configuration-code)]
For more information on environments, see [Create and manage environments for training and deployment](how-to-use-environments.md).
For more information, see [this reference](./reference-azure-machine-learning-cl
To create a local deployment configuration, do the following:
-```python
-from azureml.core.webservice import LocalWebservice
-
-deploy_config = LocalWebservice.deploy_configuration(port=6789)
-```
+[!notebook-python[] (~/azureml-examples-main/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=deployment-configuration-code)]
You are now ready to deploy your model.
Let's check that your echo model deployed successfully. You should be able to do a simple liveness request, as well as a scoring request:

# [Azure CLI](#tab/azcli)
+<!-- cli nb call -->
-```azurecli-interactive
-curl -v http://localhost:32267
-curl -v -X POST -H "content-type:application/json" -d '{"query": "What color is the fox", "context": "The quick brown fox jumped over the lazy dog."}' http://localhost:32267/score
-```
+[!notebook-python[] (~/azureml-examples-main/python-sdk/tutorials/deploy-local/2.deploy-local-cli.ipynb?name=call-into-model-code)]
# [Python](#tab/python)
-
-```python
-import requests
-import json
-
-uri = service.scoring_uri
-requests.get('http://localhost:6789')
-headers = {'Content-Type': 'application/json'}
-data = {"query": "What color is the fox", "context": "The quick brown fox jumped over the lazy dog."}
-data = json.dumps(data)
-response = requests.post(uri, data=data, headers=headers)
-print(response.json())
-```
+<!-- python nb call -->
+[!notebook-python[] (~/azureml-examples-main/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=call-into-model-code)]
print(response.json())
Now it's time to actually load your model. First, modify your entry script:
-```python
-import json
-import numpy as np
-import os
-import onnxruntime
-from nltk import word_tokenize
-import nltk
-
-def init():
- nltk.download('punkt')
- global sess
- sess = onnxruntime.InferenceSession(os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'model.onnx'))
-
-def run(request):
- print(request)
- text = json.loads(request)
- qw, qc = preprocess(text['query'])
- cw, cc = preprocess(text['context'])
-
- # Run inference
- test = sess.run(None, {'query_word': qw, 'query_char': qc, 'context_word': cw, 'context_char': cc})
- start = np.asscalar(test[0])
- end = np.asscalar(test[1])
- ans = [w for w in cw[start:end+1].reshape(-1)]
- print(ans)
- return ans
-
-def preprocess(word):
- tokens = word_tokenize(word)
-
- # split into lower-case word tokens, in numpy array with shape of (seq, 1)
- words = np.asarray([w.lower() for w in tokens]).reshape(-1, 1)
-
- # split words into chars, in numpy array with shape of (seq, 1, 1, 16)
- chars = [[c for c in t][:16] for t in tokens]
- chars = [cs+['']*(16-len(cs)) for cs in chars]
- chars = np.asarray(chars).reshape(-1, 1, 1, 16)
- return words, chars
-```
+
+ Save this file as `score.py` inside of `source_dir`.
-Notice the use of the `AZUREML_MODEL_DIR` environment variable to locate your registered model. Now that you've added some pip packages, you also need to update your inference configuration to add in those additional packages:
+Notice the use of the `AZUREML_MODEL_DIR` environment variable to locate your registered model. Now that you've added some pip packages, you also need to update your inference configuration to include them.
# [Azure CLI](#tab/azcli)
-```json
-{
- "entryScript": "score.py",
- "sourceDirectory": "./source_dir",
- "environment": {
- "docker": {
- "arguments": [],
- "baseDockerfile": null,
- "baseImage": "mcr.microsoft.com/azureml/base:intelmpi2018.3-ubuntu16.04",
- "enabled": false,
- "sharedVolumes": true,
- "shmSize": null
- },
- "environmentVariables": {
- "EXAMPLE_ENV_VAR": "EXAMPLE_VALUE"
- },
- "name": "my-deploy-env",
- "python": {
- "baseCondaEnvironment": null,
- "condaDependencies": {
- "channels": [],
- "dependencies": [
- "python=3.6.2",
- {
- "pip": [
- "azureml-defaults",
- "nltk",
- "numpy",
- "onnxruntime"
- ]
- }
- ],
- "name": "project_environment"
- },
- "condaDependenciesFile": null,
- "interpreterPath": "python",
- "userManagedDependencies": false
- },
- "version": "2"
- }
-}
-```
+Save this file as `inferenceconfig.json`.
# [Python](#tab/python)
For more information, see the documentation for [LocalWebservice](/python/api/az
Deploy your service again:
-
-Then ensure you can send a post request to the service:
+
# [Azure CLI](#tab/azcli)
-```bash
-curl -v -X POST -H "content-type:application/json" -d '{"query": "What color is the fox", "context": "The quick brown fox jumped over the lazy dog."}' http://localhost:32267/score
-```
+Replace `bidaf_onnx:1` with the name of your model and its version number.
+
+[!notebook-python[] (~/azureml-examples-main/python-sdk/tutorials/deploy-local/2.deploy-local-cli.ipynb?name=re-deploy-model-code)]
# [Python](#tab/python)
-```python
-import requests
-import json
+[!notebook-python[] (~/azureml-examples-main/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=re-deploy-model-code)]
-uri = service.scoring_uri
+[!notebook-python[] (~/azureml-examples-main/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=re-deploy-model-print-logs)]
-headers = {'Content-Type': 'application/json'}
-data = {"query": "What color is the fox", "context": "The quick brown fox jumped over the lazy dog."}
-data = json.dumps(data)
-response = requests.post(uri, data=data, headers=headers)
-print(response.json())
-```
+For more information, see the documentation for [Model.deploy()](/python/api/azureml-core/azureml.core.model.model#deploy-workspace--name--models--inference-config-none--deployment-config-none--deployment-target-none--overwrite-false-) and [Webservice](/python/api/azureml-core/azureml.core.webservice.webservice).
+
+
+Then ensure you can send a post request to the service:
+
+# [Azure CLI](#tab/azcli)
+
+[!notebook-python[] (~/azureml-examples-main/python-sdk/tutorials/deploy-local/2.deploy-local-cli.ipynb?name=send-post-request-code)]
+
+# [Python](#tab/python)
+
+[!notebook-python[] (~/azureml-examples-main/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=send-post-request-code)]
Change your deploy configuration to correspond to the compute target you've chos
The options available for a deployment configuration differ depending on the compute target you choose.
-```json
-{
- "computeType": "aci",
- "containerResourceRequirements":
- {
- "cpu": 0.5,
- "memoryInGB": 1.0
- },
- "authEnabled": true,
- "sslEnabled": false,
- "appInsightsEnabled": false
-}
-```
-Save this file as `deploymentconfig.json`.
+Save this file as `re-deploymentconfig.json`.
For more information, see [this reference](./reference-azure-machine-learning-cli.md#deployment-configuration-schema).

# [Python](#tab/python)
-
-```python
-from azureml.core.webservice import AciWebservice
-
-deployment_config = AciWebservice.deploy_configuration(cpu_cores = 0.5, memory_gb = 1)
-```
+[!notebook-python[] (~/azureml-examples-main/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=deploy-model-on-cloud-code)]
Deploy your service again:
+
+# [Azure CLI](#tab/azcli)
+
+Replace `bidaf_onnx:1` with the name of your model and its version number.
+
+
+
+[!notebook-python[] (~/azureml-examples-main/python-sdk/tutorials/deploy-local/2.deploy-local-cli.ipynb?name=deploy-model-on-cloud-code)]
+
+# [Python](#tab/python)
+
+
+[!notebook-python[] (~/azureml-examples-main/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=re-deploy-service-code)]
+
+[!notebook-python[] (~/azureml-examples-main/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=re-deploy-service-print-logs)]
+
+For more information, see the documentation for [Model.deploy()](/python/api/azureml-core/azureml.core.model.model#deploy-workspace--name--models--inference-config-none--deployment-config-none--deployment-target-none--overwrite-false-) and [Webservice](/python/api/azureml-core/azureml.core.webservice.webservice).
+
+
+
## Call your remote webservice

When you deploy remotely, you may have key authentication enabled. The example below shows how to get your service key with Python in order to make an inference request.
-```python
-import requests
-import json
-from azureml.core import Webservice
-
-service = Webservice(workspace=ws, name='myservice')
-scoring_uri = service.scoring_uri
-
-# If the service is authenticated, set the key or token
-primary_key, _ = service.get_keys()
-
-# Set the appropriate headers
-headers = {'Content-Type': 'application/json'}
-headers['Authorization'] = f'Bearer {primary_key}'
-
-# Make the request and display the response and logs
-data = {"query": "What color is the fox", "context": "The quick brown fox jumped over the lazy dog."}
-data = json.dumps(data)
-resp = requests.post(scoring_uri, data=data, headers=headers)
-print(resp.text)
-print(service.get_logs())
-```
+[!notebook-python[] (~/azureml-examples-main/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=call-remote-web-service-code)]
+
+[!notebook-python[] (~/azureml-examples-main/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=call-remote-webservice-print-logs)]
+
+
See the article on [client applications to consume web services](how-to-consume-web-service.md) for more example clients in other languages.
The following table describes the different service states:
# [Azure CLI](#tab/azcli)
+
+[!notebook-python[] (~/azureml-examples-main/python-sdk/tutorials/deploy-local/2.deploy-local-cli.ipynb?name=delete-resource-code)]
+
+[!notebook-python[] (~/azureml-examples-main/python-sdk/tutorials/deploy-local/2.deploy-local-cli.ipynb?name=delete-your-resource-code)]
+
To delete a deployed webservice, use `az ml service delete <name of webservice>`. To delete a registered model from your workspace, use `az ml model delete <model id>`.
Read more about [deleting a webservice](/cli/azure/ml/service#az_ml_service_dele
# [Python](#tab/python)
+[!notebook-python[] (~/azureml-examples-main/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=delete-resource-code)]
+
To delete a deployed web service, use `service.delete()`. To delete a registered model, use `model.delete()`.
For more information, see the documentation for [WebService.delete()](/python/ap
* [One click deployment for automated ML runs in the Azure Machine Learning studio](how-to-use-automated-ml-for-ml-models.md#deploy-your-model) * [Use TLS to secure a web service through Azure Machine Learning](how-to-secure-web-service.md) * [Monitor your Azure Machine Learning models with Application Insights](how-to-enable-app-insights.md)
-* [Create event alerts and triggers for model deployments](how-to-use-event-grid.md)
+* [Create event alerts and triggers for model deployments](how-to-use-event-grid.md)
machine-learning How To Secure Inferencing Vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-secure-inferencing-vnet.md
Previously updated : 10/23/2020 Last updated : 05/12/2021
There are two approaches to isolate traffic to and from the AKS cluster to the v
* __Private AKS cluster__: This approach uses Azure Private Link to secure communications with the cluster for deployment/management operations.
* __Internal AKS load balancer__: This approach configures the endpoint for your deployments to AKS to use a private IP within the virtual network.
-> [!WARNING]
-> Internal load balancer does not work with an AKS cluster that uses kubenet. If you want to use an internal load balancer and a private AKS cluster at the same time, configure your private AKS cluster with Azure Container Networking Interface (CNI). For more information, see [Configure Azure CNI networking in Azure Kubernetes Service](../aks/configure-azure-cni.md).
-
### Private AKS cluster

By default, AKS clusters have a control plane, or API server, with public IP addresses. You can configure AKS to use a private control plane by creating a private AKS cluster. For more information, see [Create a private Azure Kubernetes Service cluster](../aks/private-clusters.md).
machine-learning How To Train With Custom Image https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-train-with-custom-image.md
print(compute_target.get_status().serialize())
## Configure your training job
-For this tutorial, use the training script *train.py* on [GitHub](https://github.com/Azure/azureml-examples/blob/main/workflows/train/fastai/pets/src/train.py). In practice, you can take any custom training script and run it, as is, with Azure Machine Learning.
+For this tutorial, use the training script *train.py* on [GitHub](https://github.com/Azure/azureml-examples/blob/main/python-sdk/workflows/train/fastai/pets/src/train.py). In practice, you can take any custom training script and run it, as is, with Azure Machine Learning.
Create a `ScriptRunConfig` resource to configure your job for running on the desired [compute target](how-to-set-up-training-targets.md).
machine-learning Tutorial 1St Experiment Bring Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/tutorial-1st-experiment-bring-data.md
if __name__ == "__main__":
    print(aml_url)
```
+> [!TIP]
+> If you used a different name when you created your compute cluster, make sure to adjust the name in the code `compute_target='cpu-cluster'` as well.
+
### Understand the code changes

The control script is similar to the one from [part 3 of this series](tutorial-1st-experiment-sdk-train.md), with the following new lines:
machine-learning Tutorial 1St Experiment Hello World https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/tutorial-1st-experiment-hello-world.md
Select **Save and run script in terminal** to run your control script, which in
In the terminal, you may be asked to sign in to authenticate. Copy the code and follow the link to complete this step.
+> [!TIP]
+> If you just finished creating the compute cluster, you may see the error "UserError: Required Docker image not found..." Wait about 5 minutes or so, and try again. The compute cluster may need more time before it is ready to spin up nodes.
> [!div class="nextstepaction"]
> [I submitted code in the cloud](?success=submit-to-cloud#monitor) [I ran into an issue](https://www.research.net/r/7C2NTH7?issue=submit-to-cloud)
machine-learning Tutorial 1St Experiment Sdk Train https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/tutorial-1st-experiment-sdk-train.md
if __name__ == "__main__":
    print(aml_url)
```
+> [!TIP]
+> If you used a different name when you created your compute cluster, make sure to adjust the name in the code `compute_target='cpu-cluster'` as well.
+
### Understand the code changes

:::row:::
marketplace Azure Vm Create Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-vm-create-faq.md
New-AzResourceGroupDeployment -Name "dplisvvm$postfix" -ResourceGroupName "$rgNa
## How do I test a hidden preview image?

You can deploy hidden preview images using quickstart templates.
-To deploy a preview image,
-1. Goto the respective quick-start template for [Linux](https://github.com/Azure/azure-quickstart-templates/tree/master/101-vm-simple-linux) or [Windows](https://github.com/Azure/azure-quickstart-templates/tree/master/101-vm-simple-windows), select "Deploy to Azure". This should take you to Azure portal.
+To deploy a preview image:
+1. Go to the respective quick-start template for [Linux](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/vm-simple-linux/) or [Windows](https://github.com/Azure/azure-quickstart-templates/tree/master/101-vm-simple-windows), and select "Deploy to Azure". This should take you to the Azure portal.
2. In the Azure portal, select "Edit template".
3. In the JSON template, search for imageReference and update the publisherid, offerid, skuid, and version of the image. To test the preview image, append "-PREVIEW" to the offerid.

![image](https://user-images.githubusercontent.com/79274470/110191995-71c7d500-7de0-11eb-9f3c-6a42f55d8f03.png)
marketplace Pc Saas Fulfillment Api V2 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/partner-center-portal/pc-saas-fulfillment-api-v2.md
The publisher must implement a webhook in the SaaS service to keep the SaaS subs
"quantity": " 20", "timeStamp": "2019-04-15T20:17:31.7350641Z", "action": "Reinstate",
- "status": "In Progress"
+ "status": "InProgress"
}
```
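For illustration, a minimal sketch of a webhook receiver for this payload, assuming Flask; a production handler must also authenticate the caller and confirm the operation's state through the fulfillment APIs:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/webhook", methods=["POST"])
def handle_marketplace_event():
    event = request.get_json(force=True)
    # Field names follow the payload shown above.
    action = event.get("action")    # for example, "Reinstate"
    status = event.get("status")    # for example, "InProgress"
    print(f"action={action} status={status} quantity={event.get('quantity')}")
    return jsonify(ok=True), 200
```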
marketplace What Is New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/what-is-new.md
Learn about important updates in the commercial marketplace program of Partner C
| Category | Description | Date | | | - | - |
-| Policy | The [Microsoft Publisher Agreement](/legal/marketplace/msft-publisher-agreement) has been updated and simplified. To see what's changed, see [Change history for Microsoft Publisher Agreement](/legal/marketplace/mpa-change-history). | 2021-04-19 |
+| Policy | The [Microsoft Publisher Agreement](/legal/marketplace/msft-publisher-agreement) has been updated. To see what's changed, see [Change history for Microsoft Publisher Agreement](/legal/marketplace/mpa-change-history). | 2021-05-07 |
| Offers | Microsoft 365 independent software vendors (ISVs) can now link their software as a service (SaaS) offer to their related Teams apps, Office add-ins (WXPO), and SharePoint SPFx solutions in Partner Center. SaaS ISVs can also declare if their SaaS offer is integrated with Microsoft Graph API. To learn more, see [Test and deploy Microsoft 365 Apps by partners in the Integrated apps portal](/microsoft-365/admin/manage/test-and-deploy-microsoft-365-apps). | 2021-04-08 | | Capabilities | Updated and reorganized the account management documentation to make it easier for independent software vendors (ISVs) to manage their commercial marketplace users and accounts. To learn more, see the following:<ul><li>[Create a new commercial marketplace account](create-account.md)</li><li>[Add new publishers](add-publishers.md)</li><li>[Manage your account](manage-account.md)</li><li>[Switch accounts](switch-accounts.md)</li><li>[Manage tenants](manage-tenants.md)</li><li>[Add and manage users](add-manage-users.md)</li><li>[Assign user roles](user-roles.md)</li><li>[Manage groups](manage-groups.md)</li><li>[Add and manage Azure AD applications](manage-aad-apps.md)</li></ul> | 2021-04-06 | | Capabilities | Reorganized and clarified the [Commercial marketplace transact capabilities](marketplace-commercial-transaction-capabilities-and-considerations.md) documentation to help independent software vendors (ISVs) understand the difference between the various transactable and non-transactable options. | 2021-04-06 |
network-watcher Network Watcher Nsg Flow Logging Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/network-watcher/network-watcher-nsg-flow-logging-overview.md
Also, when a NSG is deleted, by default the associated flow log resource is dele
**Flow Logging Costs**: NSG flow logging is billed on the volume of logs produced. High traffic volume can result in large flow log volume and the associated costs. NSG Flow log pricing does not include the underlying costs of storage. Using the retention policy feature with NSG Flow Logging means incurring separate storage costs for extended periods of time. If you want to retain data forever and do not want to apply any retention policy, set retention (days) to 0. For more information, see [Network Watcher Pricing](https://azure.microsoft.com/pricing/details/network-watcher/) and [Azure Storage Pricing](https://azure.microsoft.com/pricing/details/storage/).
-**Issues with User-defined Inbound TCP rules**: [Network Security Groups (NSGs)](../virtual-network/network-security-groups-overview.md) are implemented as a [Stateful firewall](https://en.wikipedia.org/wiki/Stateful_firewall?oldformat=true). However, due to current platform limitations, user-defined rules that affect inbound TCP flows are implemented in a stateless fashion. Due to this, flows affected by user-defined inbound rules become non-terminating. Additionally byte and packet counts are not recorded for these flows. Consequently the number of bytes and packets reported in NSG Flow Logs (and Traffic Analytics) could be different from actual numbers. An opt-in flag that fixes these issues is scheduled to be available by March 2021 latest. In the interim, customers facing severe issues due to this behavior can request opting-in via Support, please raise a support request under Network Watcher > NSG Flow Logs.
+**Issues with User-defined Inbound TCP rules**: [Network Security Groups (NSGs)](../virtual-network/network-security-groups-overview.md) are implemented as a [Stateful firewall](https://en.wikipedia.org/wiki/Stateful_firewall?oldformat=true). However, due to current platform limitations, user-defined rules that affect inbound TCP flows are implemented in a stateless fashion. Due to this, flows affected by user-defined inbound rules become non-terminating. Additionally byte and packet counts are not recorded for these flows. Consequently the number of bytes and packets reported in NSG Flow Logs (and Traffic Analytics) could be different from actual numbers. An opt-in flag that fixes these issues is scheduled to be available by June 2021 latest. In the interim, customers facing severe issues due to this behavior can request opting-in via Support, please raise a support request under Network Watcher -> NSG Flow Logs.
**Inbound flows logged from internet IPs to VMs without public IPs**: VMs that don't have a public IP address assigned via a public IP address associated with the NIC as an instance-level public IP, or that are part of a basic load balancer back-end pool, use [default SNAT](../load-balancer/load-balancer-outbound-connections.md) and have an IP address assigned by Azure to facilitate outbound connectivity. As a result, you might see flow log entries for flows from internet IP addresses, if the flow is destined to a port in the range of ports assigned for SNAT. While Azure won't allow these flows to the VM, the attempt is logged and appears in Network Watcher's NSG flow log by design. We recommend that unwanted inbound internet traffic be explicitly blocked with NSG.

**Issue with Application Gateway V2 Subnet NSG**: Flow logging on the application gateway V2 subnet NSG is [not supported](../application-gateway/application-gateway-faq.yml#are-nsg-flow-logs-supported-on-nsgs-associated-to-application-gateway-v2-subnet) currently. This issue does not affect Application Gateway V1.

**Incompatible Services**: Due to current platform limitations, a small set of Azure services are not supported by NSG Flow Logs. The current list of incompatible services is
-- [Azure Kubernetes Services (AKS)](https://azure.microsoft.com/services/kubernetes-service/)
- [Azure Container Instances (ACI)](https://azure.microsoft.com/services/container-instances/)
- [Logic Apps](https://azure.microsoft.com/services/logic-apps/)
Also, when a NSG is deleted, by default the associated flow log resource is dele
A few common scenarios:
1. **Multiple NICs at a VM**: In case multiple NICs are attached to a virtual machine, flow logging must be enabled on all of them
1. **Having NSG at both NIC and Subnet Level**: In case NSG is configured at the NIC as well as the Subnet level, then flow logging must be enabled at both the NSGs.
+1. **AKS Cluster Subnet**: AKS adds a default NSG at the cluster subnet. As explained in the above point, flow logging must be enabled on this default NSG.
**Storage provisioning**: Storage should be provisioned in tune with expected Flow Log volume.
openshift Howto Create A Storageclass https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/openshift/howto-create-a-storageclass.md
The OpenShift persistent volume binder service account will require the ability
```bash
ARO_API_SERVER=$(az aro list --query "[?contains(name,'$CLUSTER')].[apiserverProfile.url]" -o tsv)
-oc login -u kubeadmin -p $(az aro list-credentials -g $ARO_RESOURCE_GROUP -n $CLUSTER --query=kubeadminPassword -o tsv) $APISERVER
+oc login -u kubeadmin -p $(az aro list-credentials -g $ARO_RESOURCE_GROUP -n $CLUSTER --query=kubeadminPassword -o tsv) $ARO_API_SERVER
oc create clusterrole azure-secret-reader \
    --verb=create,get \
private-link Create Private Endpoint Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/private-link/create-private-endpoint-template.md
You can also complete this quickstart by using the [Azure portal](create-private
If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
-[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2F101-private-endpoint-sql%2Fazuredeploy.json)
+[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.sql%2Fprivate-endpoint-sql%2Fazuredeploy.json)
## Prerequisites
This template creates a private endpoint for an instance of Azure SQL Database.
The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/101-private-endpoint-sql/). Multiple Azure resources are defined in the template:
Here's how to deploy the ARM template to Azure:
1. To sign in to Azure and open the template, select **Deploy to Azure**. The template creates the private endpoint, the instance of SQL Database, the network infrastructure, and a virtual machine to validate.
- [![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2F101-private-endpoint-sql%2Fazuredeploy.json)
+ [![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.sql%2Fprivate-endpoint-sql%2Fazuredeploy.json)
2. Select or create your resource group. 3. Type the SQL Administrator sign-in and password.
private-link Create Private Link Service Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/private-link/create-private-link-service-template.md
You can also complete this quickstart by using the [Azure portal](create-private
If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
-[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2F101-privatelink-service%2Fazuredeploy.json)
+[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.network%2Fprivatelink-service%2Fazuredeploy.json)
## Prerequisites
This template creates a private link service.
The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/101-privatelink-service/). Multiple Azure resources are defined in the template:
Here's how to deploy the ARM template to Azure:
1. To sign in to Azure and open the template, select **Deploy to Azure**. The template creates a virtual machine, standard load balancer, private link service, private endpoint, networking, and a virtual machine to validate.
- [![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2F101-privatelink-service%2Fazuredeploy.json)
+ [![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.network%2Fprivatelink-service%2Fazuredeploy.json)
2. Select or create your resource group.
3. Type the virtual machine administrator username and password.
Connect to the VM _myConsumerVm{uniqueid}_ from the internet as follows:
a. If prompted, select **Connect**.
b. Enter the username and password you specified when you created the VM.
-
+
> [!NOTE]
> You might need to select **More choices** > **Use a different account**, to specify the credentials you entered when you created the VM.
private-link Private Link Service Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/private-link/private-link-service-overview.md
Custom TLV details:
## Limitations

The following are the known limitations when using the Private Link service:
-- Supported only on Standard Load Balancer
+- Supported only on Standard Load Balancer. Not supported on Basic Load Balancer.
+- Supported only on Standard Load Balancer where backend pool is configured by NIC when using VM/VMSS.
- Supports IPv4 traffic only
- Supports TCP and UDP traffic only
+
## Next steps
- [Create a private link service using Azure PowerShell](create-private-link-service-powershell.md)
- [Create a private link service using Azure CLI](create-private-link-service-cli.md)
role-based-access-control Conditions Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/role-based-access-control/conditions-faq.md
Previously updated : 05/06/2021 Last updated : 05/13/2021 #Customer intent:
If you add three or more expressions for a targeted action, you must define the
The Azure portal does not allow you to edit or view a condition at the management group scope. The **Condition** column isn't displayed for the management group scope. Azure PowerShell and Azure CLI do allow you to add conditions at management group scope.
-**Are conditions supported via Azure AD Privileged Identity Management (PIM) for Azure resources in preview?**
+**Are conditions supported via Privileged Identity Management (PIM) for Azure resources in preview?**
-No.
+Yes. For more information, see [Assign Azure resource roles in Privileged Identity Management](../active-directory/privileged-identity-management/pim-resource-roles-assign-roles.md).
**Are conditions supported for classic administrators?**
role-based-access-control Conditions Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/role-based-access-control/conditions-overview.md
Previously updated : 05/06/2021 Last updated : 05/13/2021 #Customer intent: As a dev, devops, or it admin, I want to learn how to constrain access within a role assignment by using conditions.
If Chandra tries to read a blob without the Project=Cascade tag, access will not
![Diagram of access is not allowed with a condition.](./media/conditions-overview/condition-access-multiple.png)
+Here is what the condition looks like in the Azure portal:
+
+![Build expression section with values for blob index tags.](./media/shared/condition-expressions.png)
+ Here is what the condition looks like in code: ```
Here is what the condition looks like in code:
) OR (
- @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<$key_case_sensitive$>] StringEquals 'Cascade'
+ @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<$key_case_sensitive$>] StringEqualsIgnoreCase 'Cascade'
 )
)
```

For more information about the format of conditions, see [Azure role assignment condition format and syntax](conditions-format.md).
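For illustration, a small Python helper that assembles the full condition string (the fragment above shows only its tail), so the tag key and value live in one place; the result can be passed to, for example, `az role assignment create --condition ... --condition-version 2.0`:

```python
def blob_read_condition(tag_key: str, tag_value: str) -> str:
    """Build a blob-read ABAC condition mirroring the example above."""
    return (
        "((!(ActionMatches{'Microsoft.Storage/storageAccounts"
        "/blobServices/containers/blobs/read'})) OR ("
        "@Resource[Microsoft.Storage/storageAccounts/blobServices"
        f"/containers/blobs/tags:{tag_key}<$key_case_sensitive$>] "
        f"StringEqualsIgnoreCase '{tag_value}'))"
    )

print(blob_read_condition("Project", "Cascade"))
```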
+## Conditions and Privileged Identity Management (PIM)
+
+You can also add conditions to eligible role assignments using Privileged Identity Management (PIM). With PIM, your end users must activate an eligible role assignment to get permission to perform certain actions. Using conditions in PIM enables you not only to limit a user's access to a resource using fine-grained conditions, but also to use PIM to secure it with a time-bound setting, approval workflow, audit trail, and so on. For more information, see [Assign Azure resource roles in Privileged Identity Management](../active-directory/privileged-identity-management/pim-resource-roles-assign-roles.md).
+ ## Terminology To better understand Azure RBAC and Azure ABAC, you can refer back to the following list of terms.
role-based-access-control Conditions Role Assignments Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/role-based-access-control/conditions-role-assignments-portal.md
Once you have the Add role assignment condition page open, you can review the ba
1. In the Value box, enter a value for the right side of the expression.
- ![Build expression section with values for blob index tags.](./media/conditions-role-assignments-portal/condition-expressions.png)
+ ![Build expression section with values for blob index tags.](./media/shared/condition-expressions.png)
## Step 6: Review and add condition
search Cognitive Search Defining Skillset https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-defining-skillset.md
Content-Type: application/json
"outputs": [ { "name": "organizations",
- "targetName": "organizations"
+ "targetName": "orgs"
} ] },
Content-Type: application/json
"httpHeaders": { "Ocp-Apim-Subscription-Key": "foobar" },
- "context": "/document/organizations/*",
+ "context": "/document/orgs/*",
"inputs": [ { "name": "query",
- "source": "/document/organizations/*"
+ "source": "/document/orgs/*"
} ], "outputs": [
Let's look at the first skill, which is the built-in [entity recognition skill](
"outputs": [ { "name": "organizations",
- "targetName": "organizations"
+ "targetName": "orgs"
} ] }
Let's look at the first skill, which is the built-in [entity recognition skill](
* Every built-in skill has `odata.type`, `input`, and `output` properties. Skill-specific properties provide additional information applicable to that skill. For entity recognition, `categories` restricts extraction to a fixed set of entity types that the pretrained model can recognize.
-* Each skill should have a ```"context"```. The context represents the level at which operations take place. In the skill above, the context is the whole document, meaning that the entity recognition skill is called once per document. Outputs are also produced at that level. More specifically, ```"organizations"``` are generated as a member of ```"/document"```. In downstream skills, you can refer to this newly created information as ```"/document/organizations"```. If the ```"context"``` field is not explicitly set, the default context is the document.
+* Each skill should have a ```"context"```. The context represents the level at which operations take place. In the skill above, the context is the whole document, meaning that the entity recognition skill is called once per document. Outputs are also produced at that level. The skill returns a property called ```organizations``` that is captured as ```orgs```. More specifically, ```"orgs"``` is now added as a member of ```"/document"```. In downstream skills, you can refer to this newly created enrichment as ```"/document/orgs"```. If the ```"context"``` field is not explicitly set, the default context is the document.
+
+* Outputs from one skill can conflict with outputs from a different skill. If you have multiple skills returning a ```result``` property, you can use the ```targetName``` property of skill outputs to capture a named JSON output from a skill into a different property, as shown in the sketch after this list.
* The skill has one input called "text", with a source input set to ```"/document/content"```. The skill (entity recognition) operates on the *content* field of each document, which is a standard field created by the Azure blob indexer.
-* The skill has one output called ```"organizations"```. Outputs exist only during processing. To chain this output to a downstream skill's input, reference the output as ```"/document/organizations"```.
+* The skill has one output called ```"organizations"``` that is captured in a property ```orgs```. Outputs exist only during processing. To chain this output to a downstream skill's input, reference the output as ```"/document/orgs"```.
-* For a particular document, the value of ```"/document/organizations"``` is an array of organizations extracted from the text. For example:
+* For a particular document, the value of ```"/document/orgs"``` is an array of organizations extracted from the text. For example:
```json ["Microsoft", "LinkedIn"] ```
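To illustrate the ```targetName``` disambiguation described in the list above, here is a hypothetical sketch of two custom skills that both return a ```result``` output, captured under different names. The skill URIs and target names are placeholders, not part of this tutorial's skillset:

```json
{
  "skills": [
    {
      "@odata.type": "#Microsoft.Skills.Custom.WebApiSkill",
      "uri": "https://example.com/skill-a",
      "context": "/document",
      "inputs": [ { "name": "text", "source": "/document/content" } ],
      "outputs": [ { "name": "result", "targetName": "skillAResult" } ]
    },
    {
      "@odata.type": "#Microsoft.Skills.Custom.WebApiSkill",
      "uri": "https://example.com/skill-b",
      "context": "/document",
      "inputs": [ { "name": "text", "source": "/document/content" } ],
      "outputs": [ { "name": "result", "targetName": "skillBResult" } ]
    }
  ]
}
```

Downstream skills can then reference ```"/document/skillAResult"``` and ```"/document/skillBResult"``` without ambiguity.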
-Some situations call for referencing each element of an array separately. For example, suppose you want to pass each element of ```"/document/organizations"``` separately to another skill (such as the custom Bing entity search enricher). You can refer to each element of the array by adding an asterisk to the path: ```"/document/organizations/*"```
+Some situations call for referencing each element of an array separately. For example, suppose you want to pass each element of ```"/document/orgs"``` separately to another skill (such as the custom Bing entity search enricher). You can refer to each element of the array by adding an asterisk to the path: ```"/document/orgs/*"```
The second skill for sentiment extraction follows the same pattern as the first enricher. It takes ```"/document/content"``` as input, and returns a sentiment score for each content instance. Since you did not set the ```"context"``` field explicitly, the output (mySentiment) is now a child of ```"/document"```.
Recall the structure of the custom Bing entity search enricher:
"httpHeaders": { "Ocp-Apim-Subscription-Key": "foobar" },
- "context": "/document/organizations/*",
+ "context": "/document/orgs/*",
"inputs": [ { "name": "query",
- "source": "/document/organizations/*"
+ "source": "/document/orgs/*"
} ], "outputs": [
Recall the structure of the custom Bing entity search enricher:
This definition is a [custom skill](cognitive-search-custom-skill-web-api.md) that calls a web API as part of the enrichment process. For each organization identified by entity recognition, this skill calls a web API to find the description of that organization. The orchestration of when to call the web API and how to flow the information received is handled internally by the enrichment engine. However, the initialization necessary for calling this custom API must be provided in the JSON (such as uri, httpHeaders, and the inputs expected). For guidance in creating a custom web API for the enrichment pipeline, see [How to define a custom interface](cognitive-search-custom-skill-interface.md).
-Notice that the "context" field is set to ```"/document/organizations/*"``` with an asterisk, meaning the enrichment step is called *for each* organization under ```"/document/organizations"```.
+Notice that the "context" field is set to ```"/document/orgs/*"``` with an asterisk, meaning the enrichment step is called *for each* organization under ```"/document/orgs"```.
-Output, in this case a company description, is generated for each organization identified. When referring to the description in a downstream step (for example, in key phrase extraction), you would use the path ```"/document/organizations/*/description"``` to do so.
+Output, in this case a company description, is generated for each organization identified. When referring to the description in a downstream step (for example, in key phrase extraction), you would use the path ```"/document/orgs/*/description"``` to do so.
## Add structure
search Semantic Search Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/semantic-search-overview.md
Results are then re-scored based on the [conceptual similarity](semantic-ranking
To use semantic capabilities in queries, you'll need to make small modifications to the [search request](semantic-how-to-query-request.md), but no extra configuration or reindexing is required.
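
As a rough sketch, a semantic query adds a few parameters to an otherwise standard request. The service, index, and query values below are placeholders, and the parameter names reflect the preview REST API, so treat the details as assumptions:

```http
POST https://<service-name>.search.windows.net/indexes/<index-name>/docs/search?api-version=2020-06-30-Preview
{
    "search": "where was humphrey bogart born",
    "queryType": "semantic",
    "queryLanguage": "en-us",
    "answers": "extractive|count-3"
}
```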
+## Semantic capabilities and limitations
+
+Semantic search is a newer technology, so it's important to set expectations about what it can and cannot do.
+
+It improves the quality of search results in two ways. First, it promotes documents that are semantically closer to the intent of the original query, which is a significant benefit. Second, results are more immediately consumable when captions, and potentially answers, are present on the page. At all times, the engine is working with existing content. The language models used in semantic search are designed to extract an intact string that looks like an answer, but won't compose a new string as an answer to a query, or as a caption for a matching document.
+
+Semantic search is not a logic engine and does not infer information from different pieces of content within the document or corpus of documents. For example, given a query for "resort hotels in a desert" absent any geographical input, the engine won't produce matches for hotels located in Arizona or Nevada, even though both states have deserts. Similarly, if the query includes the clause "in the last 5 years", the engine won't calculate a time interval based on the current date to return.
+
+In Cognitive Search, mechanisms that might be helpful for the above scenarios include [synonym maps](search-synonyms.md) that allow you to build associations among terms that are outwardly different, or [date filters](search-query-odata-filter.md) specified as an OData expression.
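+
+For example, a clause like "in the last 5 years" can be stated explicitly as an OData filter on a date field. The field name here is illustrative, and the field would need to be marked as filterable in the index:
+
+```
+search=*&$filter=PublishedDate ge 2016-05-13T00:00:00Z
+```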
+ ## Availability and pricing Semantic search is available through [sign-up registration](https://aka.ms/SemanticSearchPreviewSignup). From preview launch on March 2 through early June, semantic features are offered free of charge.
search Tutorial Csharp Create First App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/tutorial-csharp-create-first-app.md
For this sample, you are using publicly available hotel data. This data is an ar
```csharp {
- "SearchServiceName": "<YOUR-SEARCH-SERVICE-URI>",
+ "SearchServiceUri": "<YOUR-SEARCH-SERVICE-URI>",
"SearchServiceQueryApiKey": "<YOUR-SEARCH-SERVICE-API-KEY>" } ```
Delete the content of Index.cshtml in its entirety, and rebuild the file in the
{ // Show the result count. <p class="sampleText">
- @Model.resultList.Count Results
+ @Model.resultList.TotalCount Results
</p>
- @for (var i = 0; i < Model.resultList.Count; i++)
+ var results = Model.resultList.GetResults().ToList();
+
+ @for (var i = 0; i < results.Count; i++)
{ // Display the hotel name and description.
- @Html.TextAreaFor(m => m.resultList[i].Document.HotelName, new { @class = "box1" })
- @Html.TextArea($"desc{i}", Model.resultList[i].Document.Description, new { @class = "box2" })
+ @Html.TextAreaFor(m => results[i].Document.HotelName, new { @class = "box1" })
+ @Html.TextArea($"desc{i}", results[i].Document.Description, new { @class = "box2" })
} } }
The Azure Cognitive Search call is encapsulated in our **RunQueryAsync** method.
{ InitSearch();
- var options = new SearchOptions() { };
+ var options = new SearchOptions()
+ {
+ IncludeTotalCount = true
+ };
// Enter Hotel property names into this list so only these values will be returned. // If Select is empty, all values will be returned, which can be inefficient.
The Azure Cognitive Search call is encapsulated in our **RunQueryAsync** method.
options.Select.Add("Description"); // For efficiency, the search call should be asynchronous, so use SearchAsync rather than Search.
- var searchResult = await _searchClient.SearchAsync<Hotel>(model.searchText, options).ConfigureAwait(false);
- model.resultList = searchResult.Value.GetResults().ToList();
+ model.resultList = await _searchClient.SearchAsync<Hotel>(model.searchText, options).ConfigureAwait(false);
// Display the results. return View("Index", model);
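Put together, the updated RunQueryAsync method looks roughly like this sketch, assuming the tutorial's Azure.Search.Documents client and Hotel model:

```csharp
public async Task<ActionResult> RunQueryAsync(SearchData model)
{
    InitSearch();

    var options = new SearchOptions
    {
        // Required so that resultList.TotalCount is populated for the view.
        IncludeTotalCount = true
    };

    // Return only the fields the view displays.
    options.Select.Add("HotelName");
    options.Select.Add("Description");

    // SearchAsync returns a Response<SearchResults<Hotel>>, which converts
    // implicitly to the SearchResults<Hotel> stored on the model.
    model.resultList = await _searchClient.SearchAsync<Hotel>(model.searchText, options).ConfigureAwait(false);

    return View("Index", model);
}
```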
To improve upon the user experience, add more features, notably paging (either u
These next steps are addressed in the remaining tutorials. Let's start with paging. > [!div class="nextstepaction"]
-> [C# Tutorial: Search results pagination - Azure Cognitive Search](tutorial-csharp-paging.md)
+> [C# Tutorial: Search results pagination - Azure Cognitive Search](tutorial-csharp-paging.md)
search Tutorial Csharp Search Query Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/tutorial-csharp-search-query-integration.md
The search suggester, `sg`, is defined in the [schema file](https://github.com/A
## Client: Suggestions from the catalog
-Th Suggest function API is called in the React app at `\src\components\SearchBar\SearchBar.js` as part of component initialization:
+The Suggest function API is called in the React app at `\src\components\SearchBar\SearchBar.js` as part of component initialization:
:::code language="javascript" source="~/azure-search-dotnet-samples/search-website/src/components/SearchBar/SearchBar.js" highlight="52-60" :::
search Tutorial Javascript Search Query Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/tutorial-javascript-search-query-integration.md
Routing for the Suggest API is contained in the [function.json](https://github.c
## Client: Suggestions from the catalog
-Th Suggest function API is called in the React app at `\src\components\SearchBar\SearchBar.js` as part of component initialization:
+The Suggest function API is called in the React app at `\src\components\SearchBar\SearchBar.js` as part of component initialization:
:::code language="javascript" source="~/azure-search-javascript-samples/search-website/src/components/SearchBar/SearchBar.js" highlight="52-60" :::
security-center Defender For Dns Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/defender-for-dns-introduction.md
Title: Azure Defender for DNS - the benefits and features
description: Learn about the benefits and features of Azure Defender for DNS Previously updated : 12/07/2020 Last updated : 05/12/2021
[Azure DNS](../dns/dns-overview.md) is a hosting service for DNS domains that provides name resolution by using Microsoft Azure infrastructure. By hosting your domains in Azure, you can manage your DNS records by using the same credentials, APIs, tools, and billing as your other Azure services.
-Azure Defender for DNS provides an additional layer of protection for your cloud resources by:
+Azure Defender for DNS provides an additional layer of protection for your resources that are connected to Azure DNS by:
- continuously monitoring all DNS queries from your Azure resources - running advanced security analytics to alert you about suspicious activity
Azure Defender for DNS provides an additional layer of protection for your cloud
|Aspect|Details| |-|:-|
-|Release state:|Preview<br>[!INCLUDE [Legalese](../../includes/security-center-preview-legal-text.md)] |
+|Release state:|General Availability (GA)|
|Pricing:|**Azure Defender for DNS** is billed as shown on [Security Center pricing](https://azure.microsoft.com/pricing/details/security-center/)| |Clouds:|![Yes](./media/icons/yes-icon.png) Commercial clouds<br>![No](./media/icons/no-icon.png) National/Sovereign (US Gov, China Gov, Other Gov)| ||| ## What are the benefits of Azure Defender for DNS?
-Azure Defender for DNS protects against issues including:
+Azure Defender for DNS protects resources that are connected to Azure DNS against issues including:
- Data exfiltration from your Azure resources using DNS tunneling - Malware communicating with C&C server
security-center Defender For Resource Manager Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/defender-for-resource-manager-introduction.md
Title: Azure Defender for Resource Manager - the benefits and features
description: Learn about the benefits and features of Azure Defender for Resource Manager Previously updated : 12/07/2020 Last updated : 05/12/2021
Azure Defender for Resource Manager automatically monitors the resource manageme
|Aspect|Details| |-|:-|
-|Release state:|Preview<br>[!INCLUDE [Legalese](../../includes/security-center-preview-legal-text.md)] |
+|Release state:|General Availability (GA)|
|Pricing:|**Azure Defender for Resource Manager** is billed as shown on [Security Center pricing](https://azure.microsoft.com/pricing/details/security-center/)| |Clouds:|![Yes](./media/icons/yes-icon.png) Commercial clouds<br>![No](./media/icons/no-icon.png) National/Sovereign (US Gov, China Gov, Other Gov)| |||
security-center Exempt Resource https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/exempt-resource.md
Title: Exempt an Azure Security Center recommendation from a resource, subscript
description: Learn how to create rules to exempt security recommendations from subscriptions or management groups and prevent them from impacting your secure score Previously updated : 04/21/2021 Last updated : 05/12/2021
Learn more in the following pages:
## FAQ - Exemption rules
+- [What happens when one recommendation is in multiple policy initiatives?](#what-happens-when-one-recommendation-is-in-multiple-policy-initiatives)
+- [Are there any recommendations that don't support exemption?](#are-there-any-recommendations-that-dont-support-exemption)
+ ### What happens when one recommendation is in multiple policy initiatives? Sometimes, a security recommendation appears in more than one policy initiative. If you've got multiple instances of the same recommendation assigned to the same subscription, and you create an exemption for the recommendation, it will affect all of the initiatives that you have permission to edit.
If you try to create an exemption for this recommendation, you'll see one of the
*You have limited permissions to apply the exemption on all the policy initiatives, the exemptions will be created only on the initiatives with sufficient permissions.*
+### Are there any recommendations that don't support exemption?
+
+These recommendations don't support exemption:
+
+- Container CPU and memory limits should be enforced
+- Privileged containers should be avoided
+- Container images should be deployed from trusted registries only
+- Containers should listen on allowed ports only
+- Services should listen on allowed ports only
+- Least privileged Linux capabilities should be enforced for containers
+- Immutable (read-only) root filesystem should be enforced for containers
+- Container with privilege escalation should be avoided
+- Running containers as root user should be avoided
+- Usage of host networking and ports should be restricted
+- Containers sharing sensitive host namespaces should be avoided
+- Usage of pod HostPath volume mounts should be restricted to a known list to restrict node access from compromised containers
+- Overriding or disabling of containers AppArmor profile should be restricted
+ ## Next steps
security-center Quickstart Automation Alert https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/quickstart-automation-alert.md
This quickstart describes how to use an Azure Resource Manager template (ARM tem
If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
-[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3a%2f%2fraw.githubusercontent.com%2fAzure%2fazure-quickstart-templates%2fmaster%2f101-securitycenter-create-automation-for-alertnamecontains%2fazuredeploy.json)
+[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3a%2f%2fraw.githubusercontent.com%2fAzure%2fazure-quickstart-templates%2fmaster%2fquickstarts%2fmicrosoft.security%2fsecuritycenter-create-automation-for-alertnamecontains%2fazuredeploy.json)
## Prerequisites
For a list of the roles and permissions required to work with Azure Security Cen
The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/101-securitycenter-create-automation-for-alertnamecontains/). ### Relevant resources
For other Security Center quickstart templates, see these [community contributed
```azurepowershell-interactive New-AzResourceGroup -Name <resource-group-name> -Location <resource-group-location> #use this command when you need to create a new resource group for your deployment
- New-AzResourceGroupDeployment -ResourceGroupName <resource-group-name> -TemplateUri https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/101-securitycenter-create-automation-for-alertnamecontains/azuredeploy.json
+ New-AzResourceGroupDeployment -ResourceGroupName <resource-group-name> -TemplateUri https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.security/securitycenter-create-automation-for-alertnamecontains/azuredeploy.json
``` - **CLI**: ```azurecli-interactive az group create --name <resource-group-name> --location <resource-group-location> #use this command when you need to create a new resource group for your deployment
- az deployment group create --resource-group <my-resource-group> --template-uri https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/101-securitycenter-create-automation-for-alertnamecontains/azuredeploy.json
+ az deployment group create --resource-group <my-resource-group> --template-uri https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.security/securitycenter-create-automation-for-alertnamecontains/azuredeploy.json
``` - **Portal**:
- [![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3a%2f%2fraw.githubusercontent.com%2fAzure%2fazure-quickstart-templates%2fmaster%2f101-securitycenter-create-automation-for-alertnamecontains%2fazuredeploy.json)
+ [![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3a%2f%2fraw.githubusercontent.com%2fAzure%2fazure-quickstart-templates%2fmaster%2fquickstarts%2fmicrosoft.security%2fsecuritycenter-create-automation-for-alertnamecontains%2fazuredeploy.json)
To find more information about this deployment option, see [Use a deployment button to deploy templates from GitHub repository](../azure-resource-manager/templates/deploy-to-azure-button.md).
security Customer Lockbox Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security/fundamentals/customer-lockbox-overview.md
Previously updated : 04/05/2021 Last updated : 05/12/2021 # Customer Lockbox for Microsoft Azure
The following services are generally available for Customer Lockbox:
- Azure API Management - Azure App Service
+- Azure Cognitive Search
- Azure Cognitive Services - Azure Container Registry - Azure Database for MySQL
security Feature Availability https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security/fundamentals/feature-availability.md
Title: Azure service cloud feature availability for US government customers description: Lists feature availability for Azure security services, such as Azure Sentinel, for US government customers Last updated 04/29/2021
sentinel Collaborate In Microsoft Teams https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/collaborate-in-microsoft-teams.md
+
+ Title: Collaborate in Microsoft Teams with an Azure Sentinel incident team | Microsoft Docs
+description: Learn how to connect to Microsoft Teams from Azure Sentinel to collaborate with others on your team using Azure Sentinel data.
+
+ Last updated : 05/03/2021
+# Collaborate in Microsoft Teams (Public preview)
+
+Azure Sentinel supports a direct integration with [Microsoft Teams](/microsoftteams/), enabling you to jump directly into teamwork on specific incidents.
+
+> [!IMPORTANT]
+> Integration with Microsoft Teams is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+## Overview
+
+Integrating with Microsoft Teams directly from Azure Sentinel enables your teams to collaborate seamlessly across the organization and with external stakeholders.
+
+Use Microsoft Teams with an Azure Sentinel *incident team* to centralize your communication and coordination across the relevant personnel. Incident teams are especially helpful when used as a dedicated conference bridge for high-severity, ongoing incidents.
+
+Organizations that already use Microsoft Teams for communication and collaboration can use the Azure Sentinel integration to bring security data directly into their conversations and daily work.
+
+An Azure Sentinel incident team always has the most recent data from Azure Sentinel, ensuring that your teams have the most relevant information right at hand.
+
+## Use an incident team to investigate
+
+Investigate together in an *incident team* in Microsoft Teams, created directly from your incident.
+
+**To create your incident team**:
+
+1. In Azure Sentinel, in the **Threat management** > **Incidents** grid, select the incident you're currently investigating.
+
+1. At the bottom of the incident pane that appears on the right, select **Actions** > **Create team**.
+
+ [ ![Create a team to collaborate in an incident team.](media/collaborate-in-microsoft-teams/create-team.png) ](media/collaborate-in-microsoft-teams/create-team.png#lightbox)
+
+ The **New team** pane opens on the right. Define the following settings for your incident team:
+
+ - **Team name**: Automatically defined as the name of your incident. Modify the name as needed so that it's easily identifiable to you.
+ - **Description**: Enter a meaningful description for your incident team.
+ - **Add groups**: Select one or more Azure AD groups to add to your incident team. Individual users aren't supported.
+
+ > [!TIP]
+ > If you regularly work with the same teams, you may want to select the star :::image type="icon" source="media/collaborate-in-microsoft-teams/save-as-favorite.png" border="false"::: to save them as favorites.
+ >
+ > Favorites are automatically selected the next time you create a team. If you want to remove a favorite group from the next team you create, either select **Delete** :::image type="icon" source="media/collaborate-in-microsoft-teams/delete-user-group.png" border="false":::, or select the star :::image type="icon" source="media/collaborate-in-microsoft-teams/save-as-favorite.png" border="false"::: again to remove the group from your favorites altogether.
+ >
+
+1. When you're done adding groups, select **Create** to create your incident team.
+
+ The incident pane refreshes, with a link to your new incident team under the **Team name** title.
+
+ [ ![Click the Teams integration link added to your incident.](media/collaborate-in-microsoft-teams/teams-link-added-to-incident.jpg) ](media/collaborate-in-microsoft-teams/teams-link-added-to-incident.jpg#lightbox)
+
+1. Select your **Teams integration** link to switch into Microsoft Teams, where all of the data about your incident is listed on the **Incident page** tab.
+
+ [ ![Incident page in Microsoft Teams.](media/collaborate-in-microsoft-teams/incident-in-teams.jpg) ](media/collaborate-in-microsoft-teams/incident-in-teams.jpg#lightbox)
+
+Continue the conversation about the investigation in Teams for as long as needed. You have the full incident details directly in Teams.
+
+> [!TIP]
+> When you [close an incident](tutorial-investigate-cases.md#closing-an-incident), the related incident team you've created in Microsoft Teams is archived.
+>
+> If the incident is ever re-opened, the related incident team is also re-opened in Microsoft Teams so that you can continue your conversation, right where you left off.
+>
+
+## Next steps
+
+For more information, see:
+
+- [Tutorial: Investigate incidents with Azure Sentinel](tutorial-investigate-cases.md)
+- [Overview of teams and channels in Microsoft Teams](/microsoftteams/teams-channels-overview/)
sentinel Customize Entity Activities https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/customize-entity-activities.md
+
+ Title: Customize activities on Azure Sentinel entity timelines | Microsoft Docs
+description: Add customized activities to those that Azure Sentinel tracks and displays on the timelines of its entity pages
+
+ Last updated : 05/05/2021
+# Customize activities on entity page timelines
+
+> [!IMPORTANT]
+>
+> - Activity customization is in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+## Introduction
+
+In addition to the activities tracked and presented in the timeline by Azure Sentinel out-of-the-box, you can create any other activities you want to keep track of and have them presented on the timeline as well. You can create customized activities based on queries of entity data from any connected data sources. The following examples show how you might use this capability:
+
+- Add new activities to the entity timeline by modifying existing out-of-the-box activity templates.
+
+- Add new activities from custom logs - for example, from a physical access-control log, you can add a user's entry and exit activities for a particular building to the user's timeline.
+
+## Getting started
+
+1. From the Azure Sentinel navigation menu, select **Entity behavior**.
+
+1. In the **Entity behavior** blade, select **Customize entity page** at the top of the screen.
+
+ :::image type="content" source="./media/customize-entity-activities/entity-behavior-blade.png" alt-text="Entity behavior page":::
+
+1. You'll see a page with a list of any activities you've created in the **My activities** tab. In the **Activity templates** tab, you'll see the collection of activities offered out-of-the-box by Microsoft security researchers. These are the activities that are already being tracked and displayed on the timelines in your entity pages.
+
+## Create an activity from a template
+
+1. Click on the **Activity templates** tab to see the various activities available by default. You can filter the list by entity type as well as by data source. Selecting an activity from the list will display the following details in the preview pane:
+
+ - A description of the activity
+
+ - The data source that provides the events that make up the activity
+
+ - The identifiers used to identify the entity in the raw data
+
+ - The query that results in the detection of this activity
+
+1. Click the **Create activity** button at the bottom of the preview pane to start the activity creation wizard.
+
+1. The **Activity wizard - Create new activity from template** will open, with its fields already populated from the template. You can make changes as you like in the **General** and **Activity configuration** tabs.
+
+1. When you are satisfied, select the **Review and create** tab. When you see the **Validation passed** message, click the **Create** button at the bottom.
+
+## Create an activity from scratch
+
+From the top of the activities page, click on **Add activity** to start the activity creation wizard.
+
+The **Activity wizard - Create new activity** will open, with its fields blank.
+
+### General tab
+1. Enter a name for your activity (example: "user added to group").
+
+1. Enter a description of the activity (example: "user group membership change based on Windows event ID 4728").
+
+1. Select the type of entity (user or host) this query will track.
+
+1. You can filter by additional parameters to help refine the query and optimize its performance. For example, you can filter for Active Directory users by choosing the **IsDomainJoined** parameter and setting the value to **True**.
+
+1. You can set the initial status of the activity to **Enabled** or **Disabled**.
+
+### Activity configuration tab
+
+#### Writing the activity query
+
+Here you will write or paste the KQL query that will be used to detect the activity for the chosen entity, and determine how it will be represented in the timeline.
+
+To correlate events and detect the custom activity, the KQL query requires several input parameters, depending on the entity type. The parameters are the various identifiers of the entity in question.
+
+Select a strong identifier to get a one-to-one mapping between the query results and the entity; a weak identifier may yield inaccurate results. [Learn more about entities and strong vs. weak identifiers](entities-in-azure-sentinel.md).
+
+The following table provides information about the entities' identifiers.
+
+**Strong identifiers for account and host entities**
+
+At least one identifier is required in a query.
+
+| Entity | Identifier | Description |
+| - | - | - |
+| **Account** | Account_Sid | The on-premises SID of the account in Active Directory |
+| | Account_AadUserId | The Azure AD object ID of the user in Azure Active Directory |
+| | Account_Name + Account_NTDomain | Similar to SamAccountName (example: Contoso\Joe) |
+| | Account_Name + Account_UPNSuffix | Similar to UserPrincipalName (example: Joe@Contoso.com) |
+| **Host** | Host_HostName + Host_NTDomain | Similar to a fully qualified domain name (FQDN) |
+| | Host_HostName + Host_DnsDomain | Similar to a fully qualified domain name (FQDN) |
+| | Host_NetBiosName + Host_NTDomain | Similar to a fully qualified domain name (FQDN) |
+| | Host_NetBiosName + Host_DnsDomain | Similar to a fully qualified domain name (FQDN) |
+| | Host_AzureID | The Azure AD object ID of the host in Azure Active Directory (if Azure AD domain joined) |
+| | Host_OMSAgentID | The OMS agent ID of the agent installed on a specific host (unique per host) |
+
+Based on the entity you selected, you'll see the available identifiers. Clicking a relevant identifier pastes it into the query at the location of the cursor.
+
+> [!NOTE]
+> - The query must contain the **TimeGenerated** field, in order to place the detected activity in the entity's timeline.
+>
+> - You can project **up to 10 fields** in the query.
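+
+For illustration, here is a hypothetical sketch of an activity query for the "user added to group" example above. It assumes the SecurityEvent table is being collected, and that the pasted identifiers appear as `{{...}}` placeholders like the dynamic parameters described below; the exact field mapping is an assumption, not a documented template:
+
+```kusto
+SecurityEvent
+// Windows event ID 4728: a member was added to a security-enabled global group.
+| where EventID == 4728
+// Entity identifiers pasted from the wizard (placeholder syntax assumed).
+| where SubjectUserName =~ '{{Account_Name}}' and SubjectDomainName =~ '{{Account_NTDomain}}'
+// TimeGenerated is required to place the activity on the entity timeline.
+| project TimeGenerated, SubjectUserName, MemberName, GroupName = TargetUserName
+```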
+
+#### Presenting the activity in the timeline
+
+You can determine how the activity is presented in the entity timeline.
+
+You can add dynamic parameters to the activity output with the following format: `{{ParameterName}}`
+
+You can add the following parameters:
+
+- Any field you projected in the query
+- Count - use this parameter to summarize the count of the KQL query output
+- StartTimeUTC - the start time of the activity, in UTC
+- EndTimeUTC - the end time of the activity, in UTC
+
+**Example**: "User `{{TargetUsername}}` was added to group `{{GroupName}}` by `{{SubjectUsername}}`"
+
+### Review and create tab
+
+1. Verify all the configuration information of your custom activity.
+
+1. When the **Validation passed** message appears, click **Create** to create the activity. You can edit or change it later in the **My Activities** tab.
+
+## Manage your activities
+
+Manage your custom activities from the **My Activities** tab. Click on the ellipsis (...) at the end of an activity's row to:
+
+- Edit the activity.
+- Duplicate the activity to create a new, slightly different one.
+- Delete the activity.
+- Disable the activity (without deleting it).
+
+## View activities in an entity page
+
+Whenever you enter an entity page, all the enabled activity queries for that entity will run, providing you with up-to-the-minute information in the entity timeline. You'll see the activities in the timeline, alongside alerts and bookmarks.
+
+You can use the **Timeline content** filter to present only activities (or any combination of activities, alerts, and bookmarks).
+
+You can also use the **Activities** filter to present or hide specific activities.
+
+## Next steps
+
+In this document, you learned how to create custom activities for your entity page timelines. To learn more about Azure Sentinel, see the following articles:
+- Get the complete picture on [entity pages](identify-threats-with-entity-behavior-analytics.md).
+- See the full list of [entities and identifiers](entities-reference.md).
sentinel Fusion https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/fusion.md
ms.devlang: na
Previously updated : 08/30/2020 Last updated : 05/05/2021
Customized for your environment, this detection technology not only reduces [fal
## Configuration for advanced multistage attack detection
+### Enable the Fusion rule
+ This detection is enabled by default in Azure Sentinel. To check the status, or to disable it in the event that you are using an alternative solution to create incidents based on multiple alerts, use the following instructions:
-1. If you haven't already done so, sign in to the [Azure portal](https://portal.azure.com).
+1. If you haven't already done so, sign in to the [Azure portal](https://portal.azure.com) and enter **Azure Sentinel**.
-1. Navigate to **Azure Sentinel** > **Configuration** > **Analytics**
+1. From the Azure Sentinel navigation menu, select **Analytics**.
-1. Select **Active rules**, and then locate **Advanced Multistage Attack Detection** in the **NAME** column by filtering the list for the **Fusion** rule type. Check the **STATUS** column to confirm whether this detection is enabled or disabled.
+1. Select the **Active rules** tab, and then locate **Advanced Multistage Attack Detection** in the **NAME** column by filtering the list for the **Fusion** rule type. Check the **STATUS** column to confirm whether this detection is enabled or disabled.
- :::image type="content" source="./media/fusion/selecting-fusion-rule-type.png" alt-text="{alt-text}":::
+ :::image type="content" source="./media/fusion/selecting-fusion-rule-type.png" alt-text="{alt-text}" lightbox="./media/fusion/selecting-fusion-rule-type.png":::
-1. To change the status, select this entry and on the **Advanced Multistage Attack Detection** blade, select **Edit**.
+1. To change the status, select this entry and on the **Advanced Multistage Attack Detection** preview pane, select **Edit**.
1. On the **Rule creation wizard** blade, the change of status is automatically selected for you, so select **Next: Review**, and then **Save**.
This detection is enabled by default in Azure Sentinel. To check the status, or
> [!NOTE] > Azure Sentinel currently uses 30 days of historical data to train the machine learning systems. This data is always encrypted using Microsoft’s keys as it passes through the machine learning pipeline. However, the training data is not encrypted using [Customer Managed Keys (CMK)](customer-managed-keys.md) if you enabled CMK in your Azure Sentinel workspace. To opt out of Fusion, navigate to **Azure Sentinel** \> **Configuration** \> **Analytics \> Active rules \> Advanced Multistage Attack Detection** and in the **Status** column, select **Disable.**
+### Configure scheduled analytics rules for fusion detections
+
+> [!IMPORTANT]
+>
+> - Fusion-based detection using analytics rule alerts is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+**Fusion** can detect multi-stage attacks using alerts generated by a set of [scheduled analytics rules](tutorial-detect-threats-custom.md). We recommend you take the following steps to configure and enable these rules, so that you can get the most out of Azure Sentinel's fusion capabilities.
+
+1. Use the following **scheduled analytics rule templates**, which can be found in the **Rule templates** tab in the **Analytics** blade, to create new rules. Click on the rule name in the templates gallery, and click **Create rule** in the preview pane:
+
+ - [Cisco - firewall block but success logon to Azure AD](https://github.com/Azure/Azure-Sentinel/blob/60e7aa065b196a6ed113c748a6e7ae3566f8c89c/Detections/MultipleDataSources/SigninFirewallCorrelation.yaml)
+ - [Fortinet - Beacon pattern detected](https://github.com/Azure/Azure-Sentinel/blob/83c6d8c7f65a5f209f39f3e06eb2f7374fd8439c/Detections/CommonSecurityLog/Fortinet-NetworkBeaconPattern.yaml)
+ - [IP with multiple failed Azure AD logins successfully logs in to Palo Alto VPN](https://github.com/Azure/Azure-Sentinel/blob/60e7aa065b196a6ed113c748a6e7ae3566f8c89c/Detections/MultipleDataSources/HostAADCorrelation.yaml)
+ - [Multiple Password Reset by user](https://github.com/Azure/Azure-Sentinel/blob/83c6d8c7f65a5f209f39f3e06eb2f7374fd8439c/Detections/MultipleDataSources/MultiplePasswordresetsbyUser.yaml)
+ - [New Admin account activity seen which was not seen historically](https://github.com/Azure/Azure-Sentinel/blob/83c6d8c7f65a5f209f39f3e06eb2f7374fd8439c/Hunting%20Queries/OfficeActivity/new_adminaccountactivity.yaml)
+ - [Rare application consent](https://github.com/Azure/Azure-Sentinel/blob/83c6d8c7f65a5f209f39f3e06eb2f7374fd8439c/Detections/AuditLogs/RareApplicationConsent.yaml)
+ - [SharePointFileOperation via previously unseen IPs](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/OfficeActivity/SharePoint_Downloads_byNewIP.yaml)
+ - [Suspicious Resource deployment](https://github.com/Azure/Azure-Sentinel/blob/83c6d8c7f65a5f209f39f3e06eb2f7374fd8439c/Detections/AzureActivity/NewResourceGroupsDeployedTo.yaml)
+
+ > [!NOTE]
+ > For the set of scheduled analytics rules used by Fusion, the ML algorithm does fuzzy matching for the KQL queries provided in the templates. Renaming the templates will not impact Fusion detections.
+
+1. Review **entity mapping** for these scheduled rules. Use the [entity mapping configuration section](map-data-fields-to-entities.md) to map parameters from your query results to Azure Sentinel-recognized entities. As Fusion correlates alerts based on entities (such as *user account* or *IP address*), the ML algorithms cannot perform alert matching without the entity information.
+
+1. Review the **tactics** in your analytics rule details. The Fusion ML algorithm uses MITRE ATT&CK tactic information for detecting multi-stage attacks, and the tactics you label the analytics rules with will show up in the resulting incidents. Fusion calculations may be affected if incoming alerts are missing tactic information.
+
+1. Adjust **alert threshold** as needed. Fusion generates incidents based on the alerts raised from your scheduled analytics rules. If you'd like to reduce the number of Fusion incidents for a specific analytics rule, adjust the alert threshold as needed. You can also disable the specific analytics rule if you do not want to receive any incidents based on that rule.
+ ## Attack detection scenarios The following section lists the types of correlation scenarios, grouped by threat classification, that Azure Sentinel looks for using Fusion technology.
This scenario is currently in **PREVIEW**.
- **Sign-in event from user with leaked credentials leading to multiple VM creation activities**
-## Credential harvesting (New threat classification)
+## Credential access
+(New threat classification)
+
+### New! Multiple passwords reset by user following suspicious sign-in
+This scenario makes use of alerts produced by **scheduled analytics rules**.
+
+This scenario is currently in **PREVIEW**.
+
+**MITRE ATT&CK tactics:** Initial Access, Credential Access
+
+**MITRE ATT&CK techniques:** Valid Account (T1078), Brute Force (T1110)
+
+**Data connector sources:** Azure Sentinel (scheduled analytics rule), Azure Active Directory Identity Protection
+
+**Description:** Fusion incidents of this type indicate that a user reset multiple passwords following a suspicious sign-in to an Azure AD account. This evidence suggests that the account noted in the Fusion incident description has been compromised and was used to perform multiple password resets in order to gain access to multiple systems and resources. Account manipulation (including password reset) may aid adversaries in maintaining access to credentials and certain permission levels within an environment. The permutations of suspicious Azure AD sign-in alerts with multiple passwords reset alerts are:
+
+- **Impossible travel to an atypical location leading to multiple passwords reset**
+
+- **Sign-in event from an unfamiliar location leading to multiple passwords reset**
+
+- **Sign-in event from an infected device leading to multiple passwords reset**
+
+- **Sign-in event from an anonymous IP leading to multiple passwords reset**
+
+- **Sign-in event from user with leaked credentials leading to multiple passwords reset**
+
+### New! Suspicious sign-in coinciding with successful sign-in to Palo Alto VPN by IP with multiple failed Azure AD sign-ins
+This scenario makes use of alerts produced by **scheduled analytics rules**.
+
+This scenario is currently in **PREVIEW**.
+
+**MITRE ATT&CK tactics:** Initial Access, Credential Access
+
+**MITRE ATT&CK techniques:** Valid Account (T1078), Brute Force (T1110)
+
+**Data connector sources:** Azure Sentinel (scheduled analytics rule), Azure Active Directory Identity Protection
+
+**Description:** Fusion incidents of this type indicate that a suspicious sign-in to an Azure AD account coincided with a successful sign-in through a Palo Alto VPN from an IP address from which multiple failed Azure AD sign-ins occurred in a similar time frame. Though not evidence of a multistage attack, the correlation of these two lower-fidelity alerts results in a high-fidelity incident suggesting malicious initial access to the organization's network. Alternatively, this could be an indication of an attacker trying to use brute force techniques to gain access to an Azure AD account. The permutations of suspicious Azure AD sign-in alerts with "IP with multiple failed Azure AD logins successfully logs in to Palo Alto VPN" alerts are:
+- **Impossible travel to an atypical location coinciding with IP with multiple failed Azure AD logins successfully logs in to Palo Alto VPN**
+
+- **Sign-in event from an unfamiliar location coinciding with IP with multiple failed Azure AD logins successfully logs in to Palo Alto VPN**
+
+- **Sign-in event from an infected device coinciding with IP with multiple failed Azure AD logins successfully logs in to Palo Alto VPN**
+
+- **Sign-in event from an anonymous IP coinciding with IP with multiple failed Azure AD logins successfully logs in to Palo Alto VPN**
+
+- **Sign-in event from user with leaked credentials coinciding with IP with multiple failed Azure AD logins successfully logs in to Palo Alto VPN**
+
+## Credential harvesting
+(New threat classification)
### Malicious credential theft tool execution following suspicious sign-in **MITRE ATT&CK tactics:** Initial Access, Credential Access
This scenario is currently in **PREVIEW**.
**Data connector sources:** Azure Active Directory Identity Protection, Microsoft Defender for Endpoint
-**Description:** Fusion incidents of this type indicate that a known credential theft tool was executed following a suspicious Azure AD sign-in. This provides a high-confidence indication that the user account noted in the alert description has been compromised and may have successfully used a tool like **Mimikatz** to harvest credentials such as keys, plaintext passwords and/or password hashes from the system. The harvested credentials may allow an attacker to access sensitive data, escalate privileges, and/or move laterally across the network. The permutations of suspicious Azure AD sign-in alerts with the malicious credential theft tool alert are:
+**Description:** Fusion incidents of this type indicate that a known credential theft tool was executed following a suspicious Azure AD sign-in. This evidence suggests with high confidence that the user account noted in the alert description has been compromised and may have successfully used a tool like **Mimikatz** to harvest credentials such as keys, plaintext passwords and/or password hashes from the system. The harvested credentials may allow an attacker to access sensitive data, escalate privileges, and/or move laterally across the network. The permutations of suspicious Azure AD sign-in alerts with the malicious credential theft tool alert are:
- **Impossible travel to atypical locations leading to malicious credential theft tool execution**
This scenario is currently in **PREVIEW**.
**Data connector sources:** Azure Active Directory Identity Protection, Microsoft Defender for Endpoint
-**Description:** Fusion incidents of this type indicate that activity associated with patterns of credential theft occurred following a suspicious Azure AD sign-in. This provides a high-confidence indication that the user account noted in the alert description has been compromised and used to steal credentials such as keys, plain-text passwords, password hashes, and so on. The stolen credentials may allow an attacker to access sensitive data, escalate privileges, and/or move laterally across the network. The permutations of suspicious Azure AD sign-in alerts with the credential theft activity alert are:
+**Description:** Fusion incidents of this type indicate that activity associated with patterns of credential theft occurred following a suspicious Azure AD sign-in. This evidence suggests with high confidence that the user account noted in the alert description has been compromised and used to steal credentials such as keys, plain-text passwords, password hashes, and so on. The stolen credentials may allow an attacker to access sensitive data, escalate privileges, and/or move laterally across the network. The permutations of suspicious Azure AD sign-in alerts with the credential theft activity alert are:
- **Impossible travel to atypical locations leading to suspected credential theft activity**
This scenario is currently in **PREVIEW**.
- **Sign-in event from user with leaked credentials leading to suspected credential theft activity**
-## Crypto-mining (New threat classification)
+## Crypto-mining
+(New threat classification)
### Crypto-mining activity following suspicious sign-in
This scenario is currently in **PREVIEW**.
**Data connector sources:** Azure Active Directory Identity Protection, Azure Defender (Azure Security Center)
-**Description:** Fusion incidents of this type indicate crypto-mining activity associated with a suspicious sign-in to an Azure AD account. This provides a high-confidence indication that the user account noted in the alert description has been compromised and was used to hijack resources in your environment to mine crypto-currency. This can starve your resources of computing power and/or result in significantly higher-than-expected cloud usage bills. The permutations of suspicious Azure AD sign-in alerts with the crypto-mining activity alert are:
+**Description:** Fusion incidents of this type indicate crypto-mining activity associated with a suspicious sign-in to an Azure AD account. This evidence suggests with high confidence that the user account noted in the alert description has been compromised and was used to hijack resources in your environment to mine crypto-currency. This can starve your resources of computing power and/or result in significantly higher-than-expected cloud usage bills. The permutations of suspicious Azure AD sign-in alerts with the crypto-mining activity alert are:
- **Impossible travel to atypical locations leading to crypto-mining activity**
This scenario is currently in **PREVIEW**.
- **Sign-in event from user with leaked credentials leading to crypto-mining activity**
-## Data exfiltration
+## Data destruction
-### Office 365 mailbox exfiltration following a suspicious Azure AD sign-in
+### Mass file deletion following suspicious Azure AD sign-in
-**MITRE ATT&CK tactics:** Initial Access, Exfiltration, Collection
+**MITRE ATT&CK tactics:** Initial Access, Impact
-**MITRE ATT&CK techniques:** Valid Account (T1078), E-mail collection (T1114), Automated Exfiltration (T1020)
+**MITRE ATT&CK techniques:** Valid Account (T1078), Data Destruction (T1485)
**Data connector sources:** Microsoft Cloud App Security, Azure Active Directory Identity Protection
-**Description:** Fusion incidents of this type indicate that a suspicious inbox forwarding rule was set on a user's inbox following a suspicious sign-in to an Azure AD account. This indication provides high confidence that the user's account (noted in the Fusion incident description) has been compromised, and that it was used to exfiltrate data from your organization's network by enabling a mailbox forwarding rule without the true user's knowledge. The permutations of suspicious Azure AD sign-in alerts with the Office 365 mailbox exfiltration alert are:
+**Description:** Fusion incidents of this type indicate that an anomalous number of unique files were deleted following a suspicious sign-in to an Azure AD account. This evidence suggests that the account noted in the Fusion incident description may have been compromised and was used to destroy data for malicious purposes. The permutations of suspicious Azure AD sign-in alerts with the mass file deletion alert are:
-- **Impossible travel to an atypical location leading to Office 365 mailbox exfiltration**
+- **Impossible travel to an atypical location leading to mass file deletion**
-- **Sign-in event from an unfamiliar location leading to Office 365 mailbox exfiltration**
+- **Sign-in event from an unfamiliar location leading to mass file deletion**
-- **Sign-in event from an infected device leading to Office 365 mailbox exfiltration**
+- **Sign-in event from an infected device leading to mass file deletion**
-- **Sign-in event from an anonymous IP address leading to Office 365 mailbox exfiltration**
+- **Sign-in event from an anonymous IP address leading to mass file deletion**
-- **Sign-in event from user with leaked credentials leading to Office 365 mailbox exfiltration**
+- **Sign-in event from user with leaked credentials leading to mass file deletion**
+
+### New! Mass file deletion following successful Azure AD sign-in from IP blocked by a Cisco firewall appliance
+This scenario makes use of alerts produced by **scheduled analytics rules**.
+
+This scenario is currently in **PREVIEW**.
+
+**MITRE ATT&CK tactics:** Initial Access, Impact
+
+**MITRE ATT&CK techniques:** Valid Account (T1078), Data Destruction (T1485)
+
+**Data connector sources:** Azure Sentinel (scheduled analytics rule), Microsoft Cloud App Security
+
+**Description:** Fusion incidents of this type indicate that an anomalous number of unique files were deleted following a successful Azure AD sign-in despite the user's IP address being blocked by a Cisco firewall appliance. This evidence suggests that the account noted in the Fusion incident description has been compromised and was used to destroy data for malicious purposes. Because the IP was blocked by the firewall, that same IP logging on successfully to Azure AD is potentially suspect and could indicate credential compromise for the user account.
+
+### New! Mass file deletion following successful sign-in to Palo Alto VPN by IP with multiple failed Azure AD sign-ins
+This scenario makes use of alerts produced by **scheduled analytics rules**.
+
+This scenario is currently in **PREVIEW**.
+
+**MITRE ATT&CK tactics:** Initial Access, Credential Access, Impact
+
+**MITRE ATT&CK techniques:** Valid Account (T1078), Brute Force (T1110), Data Destruction (T1485)
+
+**Data connector sources:** Azure Sentinel (scheduled analytics rule), Microsoft Cloud App Security
+
+**Description:** Fusion incidents of this type indicate that an anomalous number of unique files were deleted by a user who successfully signed in through a Palo Alto VPN from an IP address from which multiple failed Azure AD sign-ins occurred in a similar time frame. This evidence suggests that the user account noted in the Fusion incident may have been compromised using brute force techniques, and was used to destroy data for malicious purposes.
+
+### Suspicious email deletion activity following suspicious Azure AD sign-in
+This scenario is currently in **PREVIEW**.
+
+**MITRE ATT&CK tactics:** Initial Access, Impact
+
+**MITRE ATT&CK techniques:** Valid Account (T1078), Data Destruction (T1485)
+
+**Data connector sources:** Microsoft Cloud App Security, Azure Active Directory Identity Protection
+
+**Description:** Fusion incidents of this type indicate that an anomalous number of emails were deleted in a single session following a suspicious sign-in to an Azure AD account. This evidence suggests that the account noted in the Fusion incident description may have been compromised and was used to destroy data for malicious purposes, such as harming the organization or hiding spam-related email activity. The permutations of suspicious Azure AD sign-in alerts with the suspicious email deletion activity alert are:
+
+- **Impossible travel to an atypical location leading to suspicious email deletion activity**
+
+- **Sign-in event from an unfamiliar location leading to suspicious email deletion activity**
+
+- **Sign-in event from an infected device leading to suspicious email deletion activity**
+
+- **Sign-in event from an anonymous IP address leading to suspicious email deletion activity**
+
+- **Sign-in event from user with leaked credentials leading to suspicious email deletion activity**
+
+## Data exfiltration
+
+### New! Mail forwarding activities following new admin-account activity not seen recently
+This scenario belongs to two threat classifications in this list: **data exfiltration** and **malicious administrative activity**. For the sake of clarity, it appears in both sections.
+
+This scenario makes use of alerts produced by **scheduled analytics rules**.
+
+This scenario is currently in **PREVIEW**.
+
+**MITRE ATT&CK tactics:** Initial Access, Collection, Exfiltration
+
+**MITRE ATT&CK techniques:** Valid Account (T1078), Email Collection (T1114), Exfiltration Over Web Service (T1567)
+
+**Data connector sources:** Azure Sentinel (scheduled analytics rule), Microsoft Cloud App Security
+
+**Description:** Fusion incidents of this type indicate that either a new Exchange administrator account was created, or an existing Exchange admin account took an administrative action for the first time in the last two weeks, and that the account then performed mail-forwarding actions, which are unusual for an administrator account. This evidence suggests that the user account noted in the Fusion incident description has been compromised or manipulated, and that it was used to exfiltrate data from your organization's network.
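
The mail-forwarding half of this correlation can also be hunted for manually. The following is a rough sketch only, assuming Exchange audit data is ingested through the Office 365 connector into the `OfficeActivity` table; the operation and parameter names listed are examples to adjust to your environment.

```kusto
// Illustrative only - not the Fusion ML detection itself.
// Surface recent mail-forwarding configuration changes in Exchange audit data.
OfficeActivity
| where TimeGenerated > ago(14d)
| where Operation in ("Set-Mailbox", "New-InboxRule", "Set-InboxRule")
| where Parameters has_any ("ForwardingSmtpAddress", "ForwardTo", "RedirectTo")
| project TimeGenerated, UserId, Operation, Parameters
| sort by TimeGenerated desc
```
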
### Mass file download following suspicious Azure AD sign-in
This scenario is currently in **PREVIEW**.
- **Sign-in event from user with leaked credentials leading to mass file download**
-### Mass file sharing following suspicious Azure AD sign-in
+### New! Mass file download following successful Azure AD sign-in from IP blocked by a Cisco firewall appliance
+This scenario makes use of alerts produced by **scheduled analytics rules**.
+
+This scenario is currently in **PREVIEW**.
**MITRE ATT&CK tactics:** Initial Access, Exfiltration **MITRE ATT&CK techniques:** Valid Account (T1078), Exfiltration Over Web Service (T1567)
-**Data connector sources:** Microsoft Cloud App Security, Azure Active Directory Identity Protection
+**Data connector sources:** Azure Sentinel (scheduled analytics rule), Microsoft Cloud App Security
-**Description:** Fusion incidents of this type indicate that a number of files above a particular threshold were shared to others following a suspicious sign-in to an Azure AD account. This indication provides high confidence that the account noted in the Fusion incident description has been compromised and used to exfiltrate data from your organization's network by sharing files such as documents, spreadsheets, etc., with unauthorized users for malicious purposes. The permutations of suspicious Azure AD sign-in alerts with the mass file sharing alert are:
+**Description:** Fusion incidents of this type indicate that an anomalous number of files were downloaded by a user following a successful Azure AD sign-in despite the user's IP address being blocked by a Cisco firewall appliance. This could possibly be an attempt by an attacker to exfiltrate data from the organization's network after compromising a user account. Because the IP was blocked by the firewall, that same IP logging on successfully to Azure AD is potentially suspect and could indicate credential compromise for the user account.
-- **Impossible travel to an atypical location leading to mass file sharing**
+### New! Mass file download coinciding with SharePoint file operation from previously unseen IP
+This scenario makes use of alerts produced by **scheduled analytics rules**.
-- **Sign-in event from an unfamiliar location leading to mass file sharing**
+This scenario is currently in **PREVIEW**.
-- **Sign-in event from an infected device leading to mass file sharing**
+**MITRE ATT&CK tactics:** Exfiltration
-- **Sign-in event from an anonymous IP address leading to mass file sharing**
+**MITRE ATT&CK techniques:** Exfiltration Over Web Service (T1567), Data Transfer Size Limits (T1030)
-- **Sign-in event from user with leaked credentials leading to mass file sharing**
+**Data connector sources:** Azure Sentinel (scheduled analytics rule), Microsoft Cloud App Security
-### Suspicious inbox manipulation rules set following suspicious Azure AD sign-in
-This scenario belongs to two threat classifications in this list: **data exfiltration** and **lateral movement**. For the sake of clarity, it appears in both sections.
+**Description:** Fusion incidents of this type indicate that an anomalous number of files were downloaded by a user connected from a previously unseen IP address. Though not evidence of a multistage attack, the correlation of these two lower-fidelity alerts results in a high-fidelity incident suggesting an attempt by an attacker to exfiltrate data from the organization's network using a possibly compromised user account. In stable environments, such connections by previously unseen IPs may be unauthorized, especially when accompanied by spikes in volume that could indicate large-scale document exfiltration.
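
A simple hunting approximation of the "previously unseen IP" idea might look like the following sketch, assuming SharePoint audit events in the `OfficeActivity` table (Office 365 connector); the baseline window and the download threshold are arbitrary assumptions.

```kusto
// Illustrative only - not the Fusion ML detection itself.
// Large download volumes from client IPs not seen in the prior two weeks.
let knownIPs =
    OfficeActivity
    | where TimeGenerated between (ago(15d) .. ago(1d))
    | where Operation == "FileDownloaded"
    | distinct ClientIP;
OfficeActivity
| where TimeGenerated > ago(1d)
| where Operation == "FileDownloaded"
| where isnotempty(ClientIP) and ClientIP !in (knownIPs)
| summarize downloads = count() by UserId, ClientIP
| where downloads > 100                          // arbitrary threshold
```
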
-This scenario is currently in **PREVIEW**.
+### Mass file sharing following suspicious Azure AD sign-in
-**MITRE ATT&CK tactics:** Initial Access, Lateral Movement, Exfiltration
+**MITRE ATT&CK tactics:** Initial Access, Exfiltration
-**MITRE ATT&CK techniques:** Valid Account (T1078), Internal Spear Phishing (T1534)
+**MITRE ATT&CK techniques:** Valid Account (T1078), Exfiltration Over Web Service (T1567)
**Data connector sources:** Microsoft Cloud App Security, Azure Active Directory Identity Protection
-**Description:** Fusion incidents of this type indicate that anomalous inbox rules were set on a user's inbox following a suspicious sign-in to an Azure AD account. This provides a high-confidence indication that the account noted in the Fusion incident description has been compromised and was used to manipulate the user's email inbox rules for malicious purposes. This could possibly be an attempt by an attacker to exfiltrate data from the organization's network. Alternatively, the attacker could be trying to generate phishing emails from within the organization (bypassing phishing detection mechanisms targeted at email from external sources) for the purpose of moving laterally by gaining access to additional user and/or privileged accounts. The permutations of suspicious Azure AD sign-in alerts with the suspicious inbox manipulation rules alert are:
+**Description:** Fusion incidents of this type indicate that a number of files above a particular threshold were shared to others following a suspicious sign-in to an Azure AD account. This indication provides high confidence that the account noted in the Fusion incident description has been compromised and used to exfiltrate data from your organization's network by sharing files such as documents, spreadsheets, etc., with unauthorized users for malicious purposes. The permutations of suspicious Azure AD sign-in alerts with the mass file sharing alert are:
-- **Impossible travel to an atypical location leading to suspicious inbox manipulation rule**
+- **Impossible travel to an atypical location leading to mass file sharing**
-- **Sign-in event from an unfamiliar location leading to suspicious inbox manipulation rule**
+- **Sign-in event from an unfamiliar location leading to mass file sharing**
-- **Sign-in event from an infected device leading to suspicious inbox manipulation rule**
+- **Sign-in event from an infected device leading to mass file sharing**
-- **Sign-in event from an anonymous IP address leading to suspicious inbox manipulation rule**
+- **Sign-in event from an anonymous IP address leading to mass file sharing**
-- **Sign-in event from user with leaked credentials leading to suspicious inbox manipulation rule**
+- **Sign-in event from user with leaked credentials leading to mass file sharing**
### Multiple Power BI report sharing activities following suspicious Azure AD sign-in

This scenario is currently in **PREVIEW**.
- **Sign-in event from user with leaked credentials leading to multiple Power BI report sharing activities**
-### Suspicious Power BI report sharing following suspicious Azure AD sign-in
-This scenario is currently in **PREVIEW**.
+### Office 365 mailbox exfiltration following a suspicious Azure AD sign-in
-**MITRE ATT&CK tactics:** Initial Access, Exfiltration
+**MITRE ATT&CK tactics:** Initial Access, Exfiltration, Collection
-**MITRE ATT&CK techniques:** Valid Account (T1078), Exfiltration Over Web Service (T1567)
+**MITRE ATT&CK techniques:** Valid Account (T1078), E-mail collection (T1114), Automated Exfiltration (T1020)
**Data connector sources:** Microsoft Cloud App Security, Azure Active Directory Identity Protection
-**Description:** Fusion incidents of this type indicate that a suspicious Power BI report sharing activity occurred following a suspicious sign-in to an Azure AD account. The sharing activity was identified as suspicious because the Power BI report contained sensitive information identified using Natural language processing, and because it was shared with an external email address, published to the web, or delivered as a snapshot to an externally subscribed email address. This alert indicates with high confidence that the account noted in the Fusion incident description has been compromised and was used to exfiltrate sensitive data from your organization by sharing Power BI reports with unauthorized users for malicious purposes. The permutations of suspicious Azure AD sign-in alerts with the suspicious Power BI report sharing are:
+**Description:** Fusion incidents of this type indicate that a suspicious inbox forwarding rule was set on a user's inbox following a suspicious sign-in to an Azure AD account. This indication provides high confidence that the user's account (noted in the Fusion incident description) has been compromised, and that it was used to exfiltrate data from your organization's network by enabling a mailbox forwarding rule without the true user's knowledge. The permutations of suspicious Azure AD sign-in alerts with the Office 365 mailbox exfiltration alert are:
-- **Impossible travel to an atypical location leading to suspicious Power BI report sharing**
+- **Impossible travel to an atypical location leading to Office 365 mailbox exfiltration**
-- **Sign-in event from an unfamiliar location leading to suspicious Power BI report sharing**
+- **Sign-in event from an unfamiliar location leading to Office 365 mailbox exfiltration**
-- **Sign-in event from an infected device leading to suspicious Power BI report sharing**
+- **Sign-in event from an infected device leading to Office 365 mailbox exfiltration**
-- **Sign-in event from an anonymous IP address leading to suspicious Power BI report sharing**
+- **Sign-in event from an anonymous IP address leading to Office 365 mailbox exfiltration**
-- **Sign-in event from user with leaked credentials leading to suspicious Power BI report sharing**
+- **Sign-in event from user with leaked credentials leading to Office 365 mailbox exfiltration**
-## Data destruction
+### New! SharePoint file operation from previously unseen IP following malware detection
+This scenario makes use of alerts produced by **scheduled analytics rules**.
-### Mass file deletion following suspicious Azure AD sign-in
+This scenario is currently in **PREVIEW**.
-**MITRE ATT&CK tactics:** Initial Access, Impact
+**MITRE ATT&CK tactics:** Exfiltration, Defense Evasion
-**MITRE ATT&CK techniques:** Valid Account (T1078), Data Destruction (T1485)
+**MITRE ATT&CK techniques:** Data Transfer Size Limits (T1030)
+
+**Data connector sources:** Azure Sentinel (scheduled analytics rule), Microsoft Cloud App Security
+
+**Description:** Fusion incidents of this type indicate that an attacker used malware to attempt to exfiltrate large amounts of data by downloading or sharing files through SharePoint. In stable environments, such connections by previously unseen IPs may be unauthorized, especially when accompanied by spikes in volume that could indicate large-scale document exfiltration.
+
+### Suspicious inbox manipulation rules set following suspicious Azure AD sign-in
+This scenario belongs to two threat classifications in this list: **data exfiltration** and **lateral movement**. For the sake of clarity, it appears in both sections.
+
+This scenario is currently in **PREVIEW**.
+
+**MITRE ATT&CK tactics:** Initial Access, Lateral Movement, Exfiltration
+
+**MITRE ATT&CK techniques:** Valid Account (T1078), Internal Spear Phishing (T1534), Automated Exfiltration (T1020)
**Data connector sources:** Microsoft Cloud App Security, Azure Active Directory Identity Protection
-**Description:** Fusion incidents of this type indicate that an anomalous number of unique files were deleted following a suspicious sign-in to an Azure AD account. This provides an indication that the account noted in the Fusion incident description may have been compromised and was used to destroy data for malicious purposes. The permutations of suspicious Azure AD sign-in alerts with the mass file deletion alert are:
+**Description:** Fusion incidents of this type indicate that anomalous inbox rules were set on a user's inbox following a suspicious sign-in to an Azure AD account. This evidence provides a high-confidence indication that the account noted in the Fusion incident description has been compromised and was used to manipulate the user's email inbox rules for malicious purposes, possibly to exfiltrate data from the organization's network. Alternatively, the attacker could be trying to generate phishing emails from within the organization (bypassing phishing detection mechanisms targeted at email from external sources) for the purpose of moving laterally by gaining access to additional user and/or privileged accounts. The permutations of suspicious Azure AD sign-in alerts with the suspicious inbox manipulation rules alert are:
-- **Impossible travel to an atypical location leading to mass file deletion**
+- **Impossible travel to an atypical location leading to suspicious inbox manipulation rule**
-- **Sign-in event from an unfamiliar location leading to mass file deletion**
+- **Sign-in event from an unfamiliar location leading to suspicious inbox manipulation rule**
-- **Sign-in event from an infected device leading to mass file deletion**
+- **Sign-in event from an infected device leading to suspicious inbox manipulation rule**
-- **Sign-in event from an anonymous IP address leading to mass file deletion**
+- **Sign-in event from an anonymous IP address leading to suspicious inbox manipulation rule**
-- **Sign-in event from user with leaked credentials leading to mass file deletion**
+- **Sign-in event from user with leaked credentials leading to suspicious inbox manipulation rule**
-### Suspicious email deletion activity following suspicious Azure AD sign-in
+### Suspicious Power BI report sharing following suspicious Azure AD sign-in
This scenario is currently in **PREVIEW**.
-**MITRE ATT&CK tactics:** Initial Access, Impact
+**MITRE ATT&CK tactics:** Initial Access, Exfiltration
-**MITRE ATT&CK techniques:** Valid Account (T1078), Data Destruction (T1485)
+**MITRE ATT&CK techniques:** Valid Account (T1078), Exfiltration Over Web Service (T1567)
**Data connector sources:** Microsoft Cloud App Security, Azure Active Directory Identity Protection
-**Description:** Fusion incidents of this type indicate that an anomalous number of emails were deleted in a single session following a suspicious sign-in to an Azure AD account. This provides an indication that the account noted in the Fusion incident description may have been compromised and was used to destroy data for malicious purposes, such as harming the organization or hiding spam-related email activity. The permutations of suspicious Azure AD sign-in alerts with the suspicious email deletion activity alert are:
+**Description:** Fusion incidents of this type indicate that a suspicious Power BI report sharing activity occurred following a suspicious sign-in to an Azure AD account. The sharing activity was identified as suspicious because the Power BI report contained sensitive information identified using Natural language processing, and because it was shared with an external email address, published to the web, or delivered as a snapshot to an externally subscribed email address. This alert indicates with high confidence that the account noted in the Fusion incident description has been compromised and was used to exfiltrate sensitive data from your organization by sharing Power BI reports with unauthorized users for malicious purposes. The permutations of suspicious Azure AD sign-in alerts with the suspicious Power BI report sharing are:
-- **Impossible travel to an atypical location leading to suspicious email deletion activity**
+- **Impossible travel to an atypical location leading to suspicious Power BI report sharing**
-- **Sign-in event from an unfamiliar location leading to suspicious email deletion activity**
+- **Sign-in event from an unfamiliar location leading to suspicious Power BI report sharing**
-- **Sign-in event from an infected device leading to suspicious email deletion activity**
+- **Sign-in event from an infected device leading to suspicious Power BI report sharing**
-- **Sign-in event from an anonymous IP address leading to suspicious email deletion activity**
+- **Sign-in event from an anonymous IP address leading to suspicious Power BI report sharing**
-- **Sign-in event from user with leaked credentials leading to suspicious email deletion activity**
+- **Sign-in event from user with leaked credentials leading to suspicious Power BI report sharing**
## Denial of service
-### Multiple VM delete activities following suspicious Azure AD sign-in
+### Multiple VM deletion activities following suspicious Azure AD sign-in
This scenario is currently in **PREVIEW**.

**MITRE ATT&CK tactics:** Initial Access, Impact
**Data connector sources:** Microsoft Cloud App Security, Azure Active Directory Identity Protection
-**Description:** Fusion incidents of this type indicate that an anomalous number of VMs were deleted in a single session following a suspicious sign-in to an Azure AD account. This indication provides high confidence that the account noted in the Fusion incident description has been compromised and was used to attempt to disrupt or destroy the organization's cloud environment. The permutations of suspicious Azure AD sign-in alerts with the multiple VM delete activities alert are:
+**Description:** Fusion incidents of this type indicate that an anomalous number of VMs were deleted in a single session following a suspicious sign-in to an Azure AD account. This indication provides high confidence that the account noted in the Fusion incident description has been compromised and was used to attempt to disrupt or destroy the organization's cloud environment. The permutations of suspicious Azure AD sign-in alerts with the multiple VM deletion activities alert are:
-- **Impossible travel to an atypical location leading to multiple VM delete activities**
+- **Impossible travel to an atypical location leading to multiple VM deletion activities**
-- **Sign-in event from an unfamiliar location leading to multiple VM delete activities**
+- **Sign-in event from an unfamiliar location leading to multiple VM deletion activities**
-- **Sign-in event from an infected device leading to multiple VM delete activities**
+- **Sign-in event from an infected device leading to multiple VM deletion activities**
-- **Sign-in event from an anonymous IP address leading to multiple VM delete activities**
+- **Sign-in event from an anonymous IP address leading to multiple VM deletion activities**
-- **Sign-in event from user with leaked credentials leading to multiple VM delete activities**
+- **Sign-in event from user with leaked credentials leading to multiple VM deletion activities**
## Lateral movement
This scenario is currently in **PREVIEW**.
**Data connector sources:** Microsoft Cloud App Security, Azure Active Directory Identity Protection
-**Description:** Fusion incidents of this type indicate that anomalous inbox rules were set on a user's inbox following a suspicious sign-in to an Azure AD account. This indication provides high confidence that the account noted in the Fusion incident description has been compromised and was used to manipulate the user's email inbox rules for malicious purposes. This could possibly be an attempt by an attacker to exfiltrate data from the organization's network. Alternatively, the attacker could be trying to generate phishing emails from within the organization (bypassing phishing detection mechanisms targeted at email from external sources) for the purpose of moving laterally by gaining access to additional user and/or privileged accounts. The permutations of suspicious Azure AD sign-in alerts with the suspicious inbox manipulation rules alert are:
+**Description:** Fusion incidents of this type indicate that anomalous inbox rules were set on a user's inbox following a suspicious sign-in to an Azure AD account. This evidence provides a high-confidence indication that the account noted in the Fusion incident description has been compromised and was used to manipulate the user's email inbox rules for malicious purposes, possibly to exfiltrate data from the organization's network. Alternatively, the attacker could be trying to generate phishing emails from within the organization (bypassing phishing detection mechanisms targeted at email from external sources) for the purpose of moving laterally by gaining access to additional user and/or privileged accounts. The permutations of suspicious Azure AD sign-in alerts with the suspicious inbox manipulation rules alert are:
- **Impossible travel to an atypical location leading to suspicious inbox manipulation rule**
This scenario is currently in **PREVIEW**.
**Data connector sources:** Microsoft Cloud App Security, Azure Active Directory Identity Protection
-**Description:** Fusion incidents of this type indicate that an anomalous number of administrative activities were performed in a single session following a suspicious Azure AD sign-in from the same account. This provides an indication that the account noted in the Fusion incident description may have been compromised and was used to make any number of unauthorized administrative actions with malicious intent. This also indicates that an account with administrative privileges may have been compromised. The permutations of suspicious Azure AD sign-in alerts with the suspicious cloud app administrative activity alert are:
+**Description:** Fusion incidents of this type indicate that an anomalous number of administrative activities were performed in a single session following a suspicious Azure AD sign-in from the same account. This evidence suggests that the account noted in the Fusion incident description may have been compromised and was used to make any number of unauthorized administrative actions with malicious intent. This also indicates that an account with administrative privileges may have been compromised. The permutations of suspicious Azure AD sign-in alerts with the suspicious cloud app administrative activity alert are:
- **Impossible travel to an atypical location leading to suspicious cloud app administrative activity**
This scenario is currently in **PREVIEW**.
- **Sign-in event from user with leaked credentials leading to suspicious cloud app administrative activity**
+### New! Mail forwarding activities following new admin-account activity not seen recently
+This scenario belongs to two threat classifications in this list: **malicious administrative activity** and **data exfiltration**. For the sake of clarity, it appears in both sections.
+
+This scenario makes use of alerts produced by **scheduled analytics rules**.
+
+This scenario is currently in **PREVIEW**.
+
+**MITRE ATT&CK tactics:** Initial Access, Collection, Exfiltration
+
+**MITRE ATT&CK techniques:** Valid Account (T1078), Email Collection (T1114), Exfiltration Over Web Service (T1567)
+
+**Data connector sources:** Azure Sentinel (scheduled analytics rule), Microsoft Cloud App Security
+
+**Description:** Fusion incidents of this type indicate that either a new Exchange administrator account was created, or an existing Exchange admin account took an administrative action for the first time in the last two weeks, and that the account then performed mail-forwarding actions, which are unusual for an administrator account. This evidence suggests that the user account noted in the Fusion incident description has been compromised or manipulated, and that it was used to exfiltrate data from your organization's network.
+
## Malicious execution with legitimate process

### PowerShell made a suspicious network connection, followed by anomalous traffic flagged by Palo Alto Networks firewall.
This scenario is currently in **PREVIEW**.
**Data connector sources:** Microsoft Defender for Endpoint (formerly Microsoft Defender Advanced Threat Protection, or MDATP), Palo Alto Networks
-**Description:** Fusion incidents of this type indicate that an outbound connection request was made via a PowerShell command, and following that, anomalous inbound activity was detected by the Palo Alto Networks Firewall. This provides an indication that an attacker has likely gained access to your network and is trying to perform malicious actions. Connection attempts by PowerShell that follow this pattern could be an indication of malware command and control activity, requests for the download of additional malware, or an attacker establishing remote interactive access. As with all "living off the land" attacks, this activity could be a legitimate use of PowerShell. However, the PowerShell command execution followed by suspicious inbound Firewall activity increases the confidence that PowerShell is being used in a malicious manner and should be investigated further. In Palo Alto logs, Azure Sentinel focuses on [threat logs](https://docs.paloaltonetworks.com/pan-os/8-1/pan-os-admin/monitoring/view-and-manage-logs/log-types-and-severity-levels/threat-logs), and traffic is considered suspicious when threats are allowed (suspicious data, files, floods, packets, scans, spyware, URLs, viruses, vulnerabilities, wildfire-viruses, wildfires). Also reference the Palo Alto Threat Log corresponding to the [Threat/Content Type](https://docs.paloaltonetworks.com/pan-os/8-1/pan-os-admin/monitoring/use-syslog-for-monitoring/syslog-field-descriptions/threat-log-fields.html) listed in the Fusion incident description for additional alert details.
+**Description:** Fusion incidents of this type indicate that an outbound connection request was made via a PowerShell command, and following that, anomalous inbound activity was detected by the Palo Alto Networks Firewall. This evidence suggests that an attacker has likely gained access to your network and is trying to perform malicious actions. Connection attempts by PowerShell that follow this pattern could be an indication of malware command and control activity, requests for the download of additional malware, or an attacker establishing remote interactive access. As with all "living off the land" attacks, this activity could be a legitimate use of PowerShell. However, the PowerShell command execution followed by suspicious inbound Firewall activity increases the confidence that PowerShell is being used in a malicious manner and should be investigated further. In Palo Alto logs, Azure Sentinel focuses on [threat logs](https://docs.paloaltonetworks.com/pan-os/8-1/pan-os-admin/monitoring/view-and-manage-logs/log-types-and-severity-levels/threat-logs), and traffic is considered suspicious when threats are allowed (suspicious data, files, floods, packets, scans, spyware, URLs, viruses, vulnerabilities, wildfire-viruses, wildfires). Also reference the Palo Alto Threat Log corresponding to the [Threat/Content Type](https://docs.paloaltonetworks.com/pan-os/8-1/pan-os-admin/monitoring/use-syslog-for-monitoring/syslog-field-descriptions/threat-log-fields.html) listed in the Fusion incident description for additional alert details.
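
If you want to review this allowed-threat traffic yourself, a starting point could be a query like the following sketch. It assumes Palo Alto logs arrive through the CEF connector into the `CommonSecurityLog` table; the `Activity` and `DeviceAction` values are assumptions to adapt to your PAN-OS logging configuration.

```kusto
// Illustrative only: Palo Alto threat-log entries where the threat was allowed.
CommonSecurityLog
| where TimeGenerated > ago(1d)
| where DeviceVendor == "Palo Alto Networks"
| where Activity == "THREAT"
| where DeviceAction !in ("deny", "drop", "reset-both")   // assumption
| project TimeGenerated, SourceIP, DestinationIP, DeviceAction, DeviceEventClassID
| sort by TimeGenerated desc
```
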
### Suspicious remote WMI execution followed by anomalous traffic flagged by Palo Alto Networks firewall

This scenario is currently in **PREVIEW**.
**Data connector sources:** Microsoft Defender for Endpoint (formerly MDATP), Palo Alto Networks
-**Description:** Fusion incidents of this type indicate that Windows Management Interface (WMI) commands were remotely executed on a system, and following that, suspicious inbound activity was detected by the Palo Alto Networks Firewall. This provides an indication that an attacker may have gained access to your network and is attempting to move laterally, escalate privileges, and/or execute malicious payloads. As with all "living off the land" attacks, this activity could be a legitimate use of WMI. However, the remote WMI command execution followed by suspicious inbound Firewall activity increases the confidence that WMI is being used in a malicious manner and should be investigated further. In Palo Alto logs, Azure Sentinel focuses on [threat logs](https://docs.paloaltonetworks.com/pan-os/8-1/pan-os-admin/monitoring/view-and-manage-logs/log-types-and-severity-levels/threat-logs), and traffic is considered suspicious when threats are allowed (suspicious data, files, floods, packets, scans, spyware, URLs, viruses, vulnerabilities, wildfire-viruses, wildfires). Also reference the Palo Alto Threat Log corresponding to the [Threat/Content Type](https://docs.paloaltonetworks.com/pan-os/8-1/pan-os-admin/monitoring/use-syslog-for-monitoring/syslog-field-descriptions/threat-log-fields.html) listed in the Fusion incident description for additional alert details.
+**Description:** Fusion incidents of this type indicate that Windows Management Instrumentation (WMI) commands were remotely executed on a system, and following that, suspicious inbound activity was detected by the Palo Alto Networks Firewall. This evidence suggests that an attacker may have gained access to your network and is attempting to move laterally, escalate privileges, and/or execute malicious payloads. As with all "living off the land" attacks, this activity could be a legitimate use of WMI. However, the remote WMI command execution followed by suspicious inbound Firewall activity increases the confidence that WMI is being used in a malicious manner and should be investigated further. In Palo Alto logs, Azure Sentinel focuses on [threat logs](https://docs.paloaltonetworks.com/pan-os/8-1/pan-os-admin/monitoring/view-and-manage-logs/log-types-and-severity-levels/threat-logs), and traffic is considered suspicious when threats are allowed (suspicious data, files, floods, packets, scans, spyware, URLs, viruses, vulnerabilities, wildfire-viruses, wildfires). Also reference the Palo Alto Threat Log corresponding to the [Threat/Content Type](https://docs.paloaltonetworks.com/pan-os/8-1/pan-os-admin/monitoring/use-syslog-for-monitoring/syslog-field-descriptions/threat-log-fields.html) listed in the Fusion incident description for additional alert details.
### Suspicious PowerShell command line following suspicious sign-in
This scenario is currently in **PREVIEW**.
**Data connector sources:** Azure Active Directory Identity Protection, Microsoft Defender for Endpoint (formerly MDATP)
-**Description:** Fusion incidents of this type indicate that a user executed potentially malicious PowerShell commands following a suspicious sign-in to an Azure AD account. This provides a high-confidence indication that the account noted in the alert description has been compromised and further malicious actions were taken. Attackers often leverage PowerShell to execute malicious payloads in memory without leaving artifacts on the disk, in order to avoid detection by disk-based security mechanisms such as virus scanners. The permutations of suspicious Azure AD sign-in alerts with the suspicious PowerShell command alert are:
+**Description:** Fusion incidents of this type indicate that a user executed potentially malicious PowerShell commands following a suspicious sign-in to an Azure AD account. This evidence suggests with high confidence that the account noted in the alert description has been compromised and further malicious actions were taken. Attackers often use PowerShell to execute malicious payloads in memory without leaving artifacts on the disk, in order to avoid detection by disk-based security mechanisms such as virus scanners. The permutations of suspicious Azure AD sign-in alerts with the suspicious PowerShell command alert are:
- **Impossible travel to atypical locations leading to suspicious PowerShell command line**
This scenario is currently in **PREVIEW**.
## Malware C2 or download
+### New! Beacon pattern detected by Fortinet following multiple failed user sign-ins to a service
+
+This scenario makes use of alerts produced by **scheduled analytics rules**.
+
+This scenario is currently in **PREVIEW**.
+
+**MITRE ATT&CK tactics:** Initial Access, Command and Control
+
+**MITRE ATT&CK techniques:** Valid Account (T1078), Non-Standard Port (T1571), T1065 (retired)
+
+**Data connector sources:** Azure Sentinel (scheduled analytics rule), Microsoft Cloud App Security
+
+**Description:** Fusion incidents of this type indicate communication patterns, from an internal IP address to an external one, that are consistent with beaconing, following multiple failed user sign-ins to a service from a related internal entity. The combination of these two events could be an indication of malware infection or of a compromised host doing data exfiltration.
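
Beaconing-like traffic can also be approximated in a hunting query. The sketch below is a crude heuristic only, not the detection used by Fusion; it assumes Fortinet logs in the `CommonSecurityLog` table via the CEF connector, and all thresholds are arbitrary assumptions.

```kusto
// Illustrative only: flag internal-to-external IP pairs with many
// connections at near-constant intervals (a rough beaconing signature).
CommonSecurityLog
| where TimeGenerated > ago(1d)
| where DeviceVendor == "Fortinet"
| where ipv4_is_private(SourceIP) and not(ipv4_is_private(DestinationIP))
| order by SourceIP asc, DestinationIP asc, TimeGenerated asc
| extend samePair = (SourceIP == prev(SourceIP) and DestinationIP == prev(DestinationIP)),
         gapSeconds = datetime_diff("second", TimeGenerated, prev(TimeGenerated))
| where samePair
| summarize connections = count(), avgGap = avg(gapSeconds), gapStdev = stdev(gapSeconds)
    by SourceIP, DestinationIP
| where connections > 20 and gapStdev < 5        // arbitrary thresholds
```
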
+
+### New! Beacon pattern detected by Fortinet following suspicious Azure AD sign-in
+
+This scenario makes use of alerts produced by **scheduled analytics rules**.
+
+This scenario is currently in **PREVIEW**.
+
+**MITRE ATT&CK tactics:** Initial Access, Command and Control
+
+**MITRE ATT&CK techniques:** Valid Account (T1078), Non-Standard Port (T1571), T1065 (retired)
+
+**Data connector sources:** Azure Sentinel (scheduled analytics rule), Azure Active Directory Identity Protection
+
+**Description:** Fusion incidents of this type indicate communication patterns, from an internal IP address to an external one, that are consistent with beaconing, following a user sign-in of a suspicious nature to Azure AD. The combination of these two events could be an indication of malware infection or of a compromised host doing data exfiltration. The permutations of beacon pattern detected by Fortinet alerts with suspicious Azure AD sign-in alerts are:
+
+- **Impossible travel to an atypical location leading to beacon pattern detected by Fortinet**
+
+- **Sign-in event from an unfamiliar location leading to beacon pattern detected by Fortinet**
+
+- **Sign-in event from an infected device leading to beacon pattern detected by Fortinet**
+
+- **Sign-in event from an anonymous IP address leading to beacon pattern detected by Fortinet**
+
+- **Sign-in event from user with leaked credentials leading to beacon pattern detected by Fortinet**
+
### Network request to TOR anonymization service followed by anomalous traffic flagged by Palo Alto Networks firewall.

This scenario is currently in **PREVIEW**.
**Data connector sources:** Microsoft Defender for Endpoint (formerly MDATP), Palo Alto Networks
-**Description:** Fusion incidents of this type indicate that an outbound connection request was made to the TOR anonymization service, and following that, anomalous inbound activity was detected by the Palo Alto Networks Firewall. This provides an indication that an attacker has likely gained access to your network and is trying to conceal their actions and intent. Connections to the TOR network following this pattern could be an indication of malware command and control activity, requests for the download of additional malware, or an attacker establishing remote interactive access. In Palo Alto logs, Azure Sentinel focuses on [threat logs](https://docs.paloaltonetworks.com/pan-os/8-1/pan-os-admin/monitoring/view-and-manage-logs/log-types-and-severity-levels/threat-logs), and traffic is considered suspicious when threats are allowed (suspicious data, files, floods, packets, scans, spyware, URLs, viruses, vulnerabilities, wildfire-viruses, wildfires). Also reference the Palo Alto Threat Log corresponding to the [Threat/Content Type](https://docs.paloaltonetworks.com/pan-os/8-1/pan-os-admin/monitoring/use-syslog-for-monitoring/syslog-field-descriptions/threat-log-fields.html) listed in the Fusion incident description for additional alert details.
+**Description:** Fusion incidents of this type indicate that an outbound connection request was made to the TOR anonymization service, and following that, anomalous inbound activity was detected by the Palo Alto Networks Firewall. This evidence suggests that an attacker has likely gained access to your network and is trying to conceal their actions and intent. Connections to the TOR network following this pattern could be an indication of malware command and control activity, requests for the download of additional malware, or an attacker establishing remote interactive access. In Palo Alto logs, Azure Sentinel focuses on [threat logs](https://docs.paloaltonetworks.com/pan-os/8-1/pan-os-admin/monitoring/view-and-manage-logs/log-types-and-severity-levels/threat-logs), and traffic is considered suspicious when threats are allowed (suspicious data, files, floods, packets, scans, spyware, URLs, viruses, vulnerabilities, wildfire-viruses, wildfires). Also reference the Palo Alto Threat Log corresponding to the [Threat/Content Type](https://docs.paloaltonetworks.com/pan-os/8-1/pan-os-admin/monitoring/use-syslog-for-monitoring/syslog-field-descriptions/threat-log-fields.html) listed in the Fusion incident description for additional alert details.
### Outbound connection to IP with a history of unauthorized access attempts followed by anomalous traffic flagged by Palo Alto Networks firewall

This scenario is currently in **PREVIEW**.
**Data connector sources:** Microsoft Defender for Endpoint (formerly MDATP), Palo Alto Networks
-**Description:** Fusion incidents of this type indicate that an outbound connection to an IP address with a history of unauthorized access attempts was established, and following that, anomalous activity was detected by the Palo Alto Networks Firewall. This provides an indication that an attacker has likely gained access to your network. Connection attempts following this pattern could be an indication of malware command and control activity, requests for the download of additional malware, or an attacker establishing remote interactive access. In Palo Alto logs, Azure Sentinel focuses on [threat logs](https://docs.paloaltonetworks.com/pan-os/8-1/pan-os-admin/monitoring/view-and-manage-logs/log-types-and-severity-levels/threat-logs), and traffic is considered suspicious when threats are allowed (suspicious data, files, floods, packets, scans, spyware, URLs, viruses, vulnerabilities, wildfire-viruses, wildfires). Also reference the Palo Alto Threat Log corresponding to the [Threat/Content Type](https://docs.paloaltonetworks.com/pan-os/8-1/pan-os-admin/monitoring/use-syslog-for-monitoring/syslog-field-descriptions/threat-log-fields.html) listed in the Fusion incident description for additional alert details.
+**Description:** Fusion incidents of this type indicate that an outbound connection to an IP address with a history of unauthorized access attempts was established, and following that, anomalous activity was detected by the Palo Alto Networks Firewall. This evidence suggests that an attacker has likely gained access to your network. Connection attempts following this pattern could be an indication of malware command and control activity, requests for the download of additional malware, or an attacker establishing remote interactive access. In Palo Alto logs, Azure Sentinel focuses on [threat logs](https://docs.paloaltonetworks.com/pan-os/8-1/pan-os-admin/monitoring/view-and-manage-logs/log-types-and-severity-levels/threat-logs), and traffic is considered suspicious when threats are allowed (suspicious data, files, floods, packets, scans, spyware, URLs, viruses, vulnerabilities, wildfire-viruses, wildfires). Also reference the Palo Alto Threat Log corresponding to the [Threat/Content Type](https://docs.paloaltonetworks.com/pan-os/8-1/pan-os-admin/monitoring/use-syslog-for-monitoring/syslog-field-descriptions/threat-log-fields.html) listed in the Fusion incident description for additional alert details.
+
+## Persistence
+(New threat classification)
+
+### New! Rare application consent following suspicious sign-in
+
+This scenario makes use of alerts produced by **scheduled analytics rules**.
+
+This scenario is currently in **PREVIEW**.
+
+**MITRE ATT&CK tactics:** Persistence, Initial Access
+
+**MITRE ATT&CK techniques:** Create Account (T1136), Valid Account (T1078)
+
+**Data connector sources:** Azure Sentinel (scheduled analytics rule), Azure Active Directory Identity Protection
+
+**Description:** Fusion incidents of this type indicate that an application was granted consent by a user who has never or rarely done so, following a related suspicious sign-in to an Azure AD account. This evidence suggests that the account noted in the Fusion incident description may have been compromised and used to access or manipulate the application for malicious purposes. Consent to application, Add service principal, and Add OAuth2PermissionGrant should typically be rare events. Attackers may use this type of configuration change to establish or maintain their foothold on systems. The permutations of suspicious Azure AD sign-in alerts with the rare application consent alert are:
+
+- **Impossible travel to an atypical location leading to rare application consent**
+
+- **Sign-in event from an unfamiliar location leading to rare application consent**
+
+- **Sign-in event from an infected device leading to rare application consent**
+
+- **Sign-in event from an anonymous IP leading to rare application consent**
+
+- **Sign-in event from user with leaked credentials leading to rare application consent**
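
To hunt manually for the consent operations mentioned above, you could start from the Azure AD audit log. The following is a rough sketch against the `AuditLogs` table; the operation names are taken from the description above, and the rarity threshold is an arbitrary assumption.

```kusto
// Illustrative only - not the Fusion ML detection itself.
// Consent-related operations that are rare over a 14-day window
// but occurred in the last day.
AuditLogs
| where TimeGenerated > ago(14d)
| where OperationName in ("Consent to application", "Add service principal", "Add OAuth2PermissionGrant")
| extend initiator = tostring(InitiatedBy.user.userPrincipalName)
| summarize total = count(), last24h = countif(TimeGenerated > ago(1d)) by OperationName, initiator
| where last24h > 0 and total <= 3               // arbitrary rarity threshold
```
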
## Ransomware
This scenario is currently in **PREVIEW**.
**Description:** Fusion incidents of this type indicate that non-standard uses of protocols, resembling the use of attack frameworks such as Metasploit, were detected, and following that, suspicious inbound activity was detected by the Palo Alto Networks Firewall. This may be an initial indication that an attacker has exploited a service to gain access to your network resources or that an attacker has already gained access and is trying to further exploit available systems/services to move laterally and/or escalate privileges. In Palo Alto logs, Azure Sentinel focuses on [threat logs](https://docs.paloaltonetworks.com/pan-os/8-1/pan-os-admin/monitoring/view-and-manage-logs/log-types-and-severity-levels/threat-logs), and traffic is considered suspicious when threats are allowed (suspicious data, files, floods, packets, scans, spyware, URLs, viruses, vulnerabilities, wildfire-viruses, wildfires). Also reference the Palo Alto Threat Log corresponding to the [Threat/Content Type](https://docs.paloaltonetworks.com/pan-os/8-1/pan-os-admin/monitoring/use-syslog-for-monitoring/syslog-field-descriptions/threat-log-fields.html) listed in the Fusion incident description for additional alert details.
+## Resource hijacking
+(New threat classification)
+
+### New! Suspicious resource / resource group deployment by a previously unseen caller following suspicious Azure AD sign-in
+This scenario makes use of alerts produced by **scheduled analytics rules**.
+
+This scenario is currently in **PREVIEW**.
+
+**MITRE ATT&CK tactics:** Initial Access, Impact
+
+**MITRE ATT&CK techniques:** Valid Account (T1078), Resource Hijacking (T1496)
+
+**Data connector sources:** Azure Sentinel (scheduled analytics rule), Azure Active Directory Identity Protection
+
+**Description:** Fusion incidents of this type indicate that a user deployed an Azure resource or resource group - a rare activity - following a suspicious sign-in (one with properties not recently seen) to an Azure AD account. This could possibly be an attempt by an attacker to deploy resources or resource groups for malicious purposes after compromising the user account noted in the Fusion incident description.
+The permutations of suspicious Azure AD sign-in alerts with the suspicious resource / resource group deployment by a previously unseen caller alert are:
+
+- **Impossible travel to an atypical location leading to suspicious resource / resource group deployment by a previously unseen caller**
+
+- **Sign-in event from an unfamiliar location leading to suspicious resource / resource group deployment by a previously unseen caller**
+
+- **Sign-in event from an infected device leading to suspicious resource / resource group deployment by a previously unseen caller**
+
+- **Sign-in event from an anonymous IP leading to suspicious resource / resource group deployment by a previously unseen caller**
+
+- **Sign-in event from user with leaked credentials leading to suspicious resource / resource group deployment by a previously unseen caller**
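
A hunting-style approximation of the "previously unseen caller" idea can be built on the `AzureActivity` table (Azure Activity connector). In this sketch, the operation name and time windows are assumptions to adapt to your environment.

```kusto
// Illustrative only - not the Fusion ML detection itself.
// Deployment writes by callers not seen in the prior two weeks.
let knownCallers =
    AzureActivity
    | where TimeGenerated between (ago(15d) .. ago(1d))
    | where OperationNameValue =~ "Microsoft.Resources/deployments/write"
    | distinct Caller;
AzureActivity
| where TimeGenerated > ago(1d)
| where OperationNameValue =~ "Microsoft.Resources/deployments/write"
| where Caller !in (knownCallers)
| project TimeGenerated, Caller, CallerIpAddress, ResourceGroup, OperationNameValue
```
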
+
## Next steps

Now that you've learned more about advanced multistage attack detection, you might be interested in the following quickstart to learn how to get visibility into your data and potential threats: [Get started with Azure Sentinel](quickstart-get-visibility.md).
sentinel Hunting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/hunting.md
na Previously updated : 09/10/2019 Last updated : 04/27/2021

# Hunt for threats with Azure Sentinel
-If you're an investigator who wants to be proactive about looking for security threats, Azure Sentinel powerful hunting search and query tools to hunt for security threats across your organization's data sources. But your systems and security appliances generate mountains of data that can be difficult to parse and filter into meaningful events. To help security analysts look proactively for new anomalies that weren't detected by your security apps, Azure Sentinel' built-in hunting queries guide you into asking the right questions to find issues in the data you already have on your network.
-
-For example, one built-in query provides data about the most uncommon processes running on your infrastructure - you wouldn't want an alert about each time they are run, they could be entirely innocent, but you might want to take a look at the query on occasion to see if there's anything unusual.
+> [!IMPORTANT]
+>
+> - Upgrades to the **hunting dashboard** are currently in **PREVIEW**. Items below relating to this upgrade will be marked as "(preview)". See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+As security analysts and investigators, you want to be proactive about looking for security threats, but your various systems and security appliances generate mountains of data that can be difficult to parse and filter into meaningful events. Azure Sentinel has powerful hunting search and query tools to hunt for security threats across your organization's data sources. To help security analysts look proactively for new anomalies that weren't detected by your security apps or even by your scheduled analytics rules, Azure Sentinel's built-in hunting queries guide you into asking the right questions to find issues in the data you already have on your network.
+For example, one built-in query provides data about the most uncommon processes running on your infrastructure. You wouldn't want an alert about each time they are run - they could be entirely innocent - but you might want to take a look at the query on occasion to see if there's anything unusual.
With Azure Sentinel hunting, you can take advantage of the following capabilities:

-- Built-in queries: To get you started, a starting page provides preloaded query examples designed to get you started and get you familiar with the tables and the query language. These built-in hunting queries are developed by Microsoft security researchers on a continuous basis, adding new queries, and fine-tuning existing queries to provide you with an entry point to look for new detections and figure out where to start hunting for the beginnings of new attacks.
+- **Built-in queries**: The main hunting page, accessible from the Azure Sentinel navigation menu, provides ready-made query examples designed to get you started and get you familiar with the tables and the query language. These built-in hunting queries are developed by Microsoft security researchers on a continuous basis, both adding new queries and fine-tuning existing queries to provide you with an entry point to look for new detections and figure out where to start hunting for the beginnings of new attacks.
+
+- **Powerful query language with IntelliSense**: Hunting queries are built in [Kusto Query Language (KQL)](/azure/data-explorer/kusto/query/), a query language that gives you the power and flexibility you need to take hunting to the next level. It's the same language used by the queries in your analytics rules and elsewhere in Azure Sentinel.
+
+- **Hunting dashboard (preview)**: This upgrade of the main page lets you run all your queries, or a selected subset, in a single click. Identify where to start hunting by looking at result count, spikes, or the change in result count over a 24-hour period. You can also sort and filter by favorites, data source, MITRE ATT&CK tactic or technique, results, or results delta. View the queries that do not yet have the necessary data sources connected, and get recommendations on how to enable these queries.
-- Powerful query language with IntelliSense: Built on top of a query language that gives you the flexibility you need to take hunting to the next level.
+- **Create your own bookmarks**: During the hunting process, you may come across query results that may look unusual or suspicious. You can "bookmark" these items - saving them and putting them aside so you can refer back to them in the future. You can use your bookmarked items to create or enrich an incident for investigation. For more information about bookmarks, see [Use bookmarks in hunting](bookmarks.md).
-- Create your own bookmarks: During the hunting process, you may come across matches or findings, dashboards, or activities that look unusual or suspicious. In order to mark those items so you can come back to them in the future, use the bookmark functionality. Bookmarks let you save items for later, to be used to create an incident for investigation. For more information about bookmarks, see [Use bookmarks in hunting](hunting.md).

-- Use notebooks to automate investigation: Notebooks are like step-by-step playbooks that you can build to walk through the steps of an investigation and hunt. Notebooks encapsulate all the hunting steps in a reusable playbook that can be shared with others in your organization.

-- Query the stored data: The data is accessible in tables for you to query. For example, you can query process creation, DNS events, and many other event types.
+- **Use notebooks to power investigation**: Notebooks give you a kind of virtual sandbox environment, complete with its own kernel. You can use notebooks to enhance your hunting and investigations with machine learning, visualization, and data analysis. You can carry out a complete investigation in a notebook, encapsulating the raw data, the code you run on it, the results, and their visualizations, and save the whole thing so that it can be shared with and reused by others in your organization.
-- Links to community: Leverage the power of the greater community to find additional queries and data sources.
+- **Query the stored data**: The data is accessible in tables for you to query. For example, you can query process creation, DNS events, and many other event types.
+
+- **Links to community**: Leverage the power of the greater community to find additional queries and data sources.
## Get started hunting
-1. In the Azure Sentinel portal, click **Hunting**.
- ![Azure Sentinel starts hunting](media/tutorial-hunting/hunting-start.png)
+In the Azure Sentinel portal, click **Hunting**.
-2. When you open the **Hunting** page, all the hunting queries are displayed in a single table. The table lists all the queries written by Microsoft's team of security analysts as well as any additional query you created or modified. Each query provides a description of what it hunts for, and what kind of data it runs on. These templates are grouped by their various tactics - the icons on the right categorize the type of threat, such as initial access, persistence, and exfiltration. You can filter these hunting query templates using any of the fields. You can save any query to your favorites. By saving a query to your favorites, the query automatically runs each time the **Hunting** page is accessed. You can create your own hunting query or clone and customize an existing hunting query template.
-
-2. Click **Run query** in the hunting query details page to run any query without leaving the hunting page. The number of matches is displayed within the table. Review the list of hunting queries and their matches. Check out which stage in the kill chain the match is associated with.
+ :::image type="content" source="media/hunting/hunting-start.png" alt-text="Azure Sentinel starts hunting" lightbox="media/hunting/hunting-start.png":::
-3. Perform a quick review of the underlying query in the query details pane or click **View query result** to open the query in Log Analytics. At the bottom, review the matches for the query.
+- When you open the **Hunting** page, all the hunting queries are displayed in a single table. The table lists all the queries written by Microsoft's team of security analysts as well as any additional query you created or modified. Each query provides a description of what it hunts for, and what kind of data it runs on. These templates are grouped by their various tactics - the icons on the right categorize the type of threat, such as initial access, persistence, and exfiltration.
-4. Click on the row and select **Add bookmark** to add the rows to be investigated - you can do this for anything that looks suspicious.
+- (Preview) To see how the queries apply to your environment, click the **Run all queries (Preview)** button, or select a subset of queries using the check boxes to the left of each row and select the **Run selected queries (Preview)** button. Executing the queries can take anywhere from a few seconds to many minutes, depending on how many queries are selected, the time range, and the amount of data that is being queried.
-5. Then, go back to the main **Hunting** page and click the **Bookmarks** tab to see all the suspicious activities.
+- (Preview) Once your queries are done running, you can see which queries returned results using the **Results** filter. You can then sort to see which queries had the most or fewest results. You can also see which queries are not active in your environment by selecting *N/A* in the **Results** filter. Hover over the info icon (i) next to the *N/A* to see which data sources are required to make this query active.
-6. Select a bookmark and then click **Investigate** to open the investigation experience. You can filter the bookmarks. For example, if you're investigating a campaign, you can create a tag for the campaign and then filter all the bookmarks based on the campaign.
+- (Preview) You can identify spikes in the data by sorting or filtering on **Results delta**. This compares the results of the last 24 hours against the results of the previous 24-48 hours to make it easy to see large differences in volume.
-1. After you discovered which hunting query provides high value insights into possible attacks, you can also create custom detection rules based on your query and surface those insights as alerts to your security incident responders.
+- (Preview) The **MITRE ATT&CK tactic bar**, at the top of the table, lists how many queries are mapped to each MITRE ATT&CK tactic. The tactic bar gets dynamically updated based on the current set of filters applied. This is an easy way to see which MITRE ATT&CK tactics show up when you filter by a given result count, a high result delta, *N/A* results, or any other set of filters.
+- (Preview) Queries can also be mapped to MITRE ATT&CK techniques. You can filter or sort by MITRE ATT&CK techniques using the **Technique** filter. When you open a query, you can select the technique to see its MITRE ATT&CK description.
+
+- You can save any query to your favorites. Queries saved to your favorites automatically run each time the **Hunting** page is accessed. You can create your own hunting query or clone and customize an existing hunting query template.
+- By clicking **Run Query** in the hunting query details page, you can run any query without leaving the hunting page. The number of matches is displayed within the table, in the **Results** column. Review the list of hunting queries and their matches.
+
+- You can perform a quick review of the underlying query in the query details pane. You can see the results by clicking the **View query results** link (below the query window) or the **View Results** button (at the bottom of the pane). The query will open in the **Logs** (Log Analytics) blade, and below the query, you can review the matches for the query.
+
+- To preserve suspicious or interesting findings from a query in Log Analytics, mark the check boxes of the rows you wish to preserve and select **Add bookmark**. This creates a record - a bookmark - for each marked row, containing the row results, the query that generated them, and entity mappings that extract users, hosts, and IP addresses. You can add your own tags (see below) and notes to each bookmark.
+
+- You can see all the bookmarked findings by clicking on the **Bookmarks** tab in the main **Hunting** page. You can add tags to bookmarks to classify them for filtering. For example, if you're investigating an attack campaign, you can create a tag for the campaign, apply the tag to any relevant bookmarks, and then filter all the bookmarks based on the campaign.
+
+- You can investigate a single bookmarked finding by selecting the bookmark and then clicking **Investigate** in the details pane to open the investigation experience. You can also create an incident from one or more bookmarks, or add one or more bookmarks to an existing incident, by marking the check boxes to the left of the desired bookmarks and then selecting either **Create new incident** or **Add to existing incident** from the **Incident actions** drop-down menu near the top of the screen. You can then triage and investigate the incident like any other.
+
+- Having discovered or created a hunting query that provides high value insights into possible attacks, you can create custom detection rules based on that query and surface those insights as alerts to your security incident responders. View the query's results in Log Analytics (see above), then click the **New alert rule** button at the top of the pane and select **Create Azure Sentinel alert**. The **Analytics rule wizard** will open. Complete the required steps as explained in [Tutorial: Create custom analytics rules to detect threats](tutorial-detect-threats-custom.md).
## Query language
Hunting in Azure Sentinel is based on the Kusto query language. For more information, see the Kusto query language documentation.
## Public hunting query GitHub repository
-Check out the [Hunting query repository](https://github.com/Azure/Orion). Contribute and use example queries shared by our customers.
-
-
+Check out the [Hunting query repository](https://github.com/Azure/Azure-Sentinel/tree/master/Hunting%20Queries). Contribute and use example queries shared by our customers.
-## Sample query
+ ## Sample query
A typical query starts with a table name followed by a series of operators separated by \|.
In the example above, start with the table name SecurityEvent and add piped elements as follows:
1. Define a time filter to review only records from the previous seven days.
-2. Add a filter in the query to only show event ID 4688.
+1. Add a filter in the query to only show event ID 4688.
-3. Add a filter in the query on the CommandLine to contain only instances of cscript.exe.
+1. Add a filter in the query on the CommandLine to contain only instances of cscript.exe.
-4. Project only the columns you're interested in exploring and limit the results to 1000 and click **Run query**.
-5. Click the green triangle and run the query. You can test the query and run it to look for anomalous behavior.
+1. Project only the columns you're interested in exploring, limit the results to 1000, and click **Run query**.
+
+1. Click the green triangle to run the query. You can test the query and rerun it to look for anomalous behavior, as shown in the sketch below.
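+
+   Putting the steps together, the finished query might look like the following sketch. The projected columns are illustrative picks from the SecurityEvent schema, not values prescribed by this tutorial:
+
+   ```kusto
+   SecurityEvent
+   | where TimeGenerated >= ago(7d)                 // 1. time filter: previous seven days
+   | where EventID == 4688                          // 2. process creation events only
+   | where CommandLine contains "cscript.exe"       // 3. only instances of cscript.exe
+   | project TimeGenerated, Computer, Account, CommandLine   // 4. columns of interest
+   | limit 1000                                     // 4. cap the results at 1000
+   ```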
## Useful operators
-The query language is powerful and has many available operators, some useful operators are listed here:
+The query language is powerful and has many available operators; some useful ones are listed here:
**where** - Filter a table to the subset of rows that satisfy a predicate.
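For example, here is a short sketch that combines `where` with `summarize` and `top` - two other standard Kusto operators, shown as general Kusto usage rather than items from the list above:

```kusto
SecurityEvent
| where EventID == 4625                         // where: keep only failed sign-in events
| summarize FailedLogons = count() by Account   // aggregate the row count per account
| top 10 by FailedLogons                        // the ten accounts with the most failures
```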
You can create or modify a query and save it as your own query or share it with users who are in the same tenant.
- ![Save query](./media/tutorial-hunting/save-query.png)
+
+### Create a new hunting query
-Create a new hunting query:
+1. Click **New query**.
-1. Click **New query** and select **Save**.
-2. Fill in all the blank fields and select **Save**.
+1. Fill in all the blank fields and select **Create**.
- ![New query](./media/tutorial-hunting/new-query.png)
+ :::image type="content" source="./media/hunting/new-query.png" alt-text="New query" lightbox="./media/hunting/new-query.png":::
-Clone and modify an existing hunting query:
+### Clone and modify an existing hunting query
1. In the table, select the hunting query you want to modify.
-2. Select the ellipsis (...) in the line of the query you want to modify, and select **Clone query**.
- ![clone query](./media/tutorial-hunting/clone-query.png)
-
+1. Select the ellipsis (...) in the line of the query you want to modify, and select **Clone query**.
+
+ :::image type="content" source="./media/hunting/clone-query.png" alt-text="Clone query" lightbox="./media/hunting/clone-query.png":::
-3. Modify the query and select **Create**.
+1. Modify the query and select **Create**.
- ![custom query](./media/tutorial-hunting/custom-query.png)
+ :::image type="content" source="./media/hunting/custom-query.png" alt-text="Custom query" lightbox="./media/hunting/custom-query.png":::
## Next steps
-In this article, you learned how to run a hunting investigation with Azure Sentinel. To learn more about Azure Sentinel, see the following articles:
+In this article, you learned how to run a hunting investigation with Azure Sentinel. To learn more about Azure Sentinel, see the following articles:
- [Use notebooks to run automated hunting campaigns](notebooks.md)
-- [Use bookmarks to save interesting information while hunting](bookmarks.md)
+- [Use bookmarks to save interesting information while hunting](bookmarks.md)
sentinel Identify Threats With Entity Behavior Analytics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/identify-threats-with-entity-behavior-analytics.md
ms.devlang: na
na Previously updated : 04/19/2021 Last updated : 05/11/2021
> [!IMPORTANT]
>
> - The UEBA and Entity Pages features are now in **General Availability** in ***all*** Azure Sentinel geographies and regions.
+>
+> - The **IP address entity** is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
## What is User and Entity Behavior Analytics (UEBA)?
See how behavior analytics is used in [Microsoft Cloud App Security](https://tec
Learn more about [entities in Azure Sentinel](entities-in-azure-sentinel.md) and see the full list of [supported entities and identifiers](entities-reference.md).
-When you encounter any entity (currently limited to users and hosts) in a search, an alert, or an investigation, you can select the entity and be taken to an **entity page**, a datasheet full of useful information about that entity. The types of information you will find on this page include basic facts about the entity, a timeline of notable events related to this entity and insights about the entity's behavior.
+When you encounter a user or host entity (IP address entities are in preview) in an entity search, an alert, or an investigation, you can select the entity and be taken to an **entity page**, a datasheet full of useful information about that entity. The types of information you will find on this page include basic facts about the entity, a timeline of notable events related to this entity and insights about the entity's behavior.
Entity pages consist of three parts:
-- The left-side panel contains the entity's identifying information, collected from data sources like Azure Active Directory, Azure Monitor, Azure Security Center, and Microsoft Defender.
+- The left-side panel contains the entity's identifying information, collected from data sources like Azure Active Directory, Azure Monitor, Azure Defender, CEF/Syslog, and Microsoft 365 Defender.
-- The center panel shows a graphical and textual timeline of notable events related to the entity, such as alerts, bookmarks, and activities. Activities are aggregations of notable events from Log Analytics. The queries that detect those activities are developed by Microsoft security research teams.
+- The center panel shows a graphical and textual timeline of notable events related to the entity, such as alerts, bookmarks, and activities. Activities are aggregations of notable events from Log Analytics. The queries that detect those activities are developed by Microsoft security research teams, and you can now [add your own custom queries to detect activities](customize-entity-activities.md) of your choosing.
- The right-side panel presents behavioral insights on the entity. These insights help to quickly identify anomalies and security threats. The insights are developed by Microsoft security research teams, and are based on anomaly detection models.
+> [!NOTE]
+> The **IP address entity page** (now in preview) contains **geolocation data** supplied by the **Microsoft Threat Intelligence service**. This service combines geolocation data from Microsoft solutions and third-party vendors and partners. The data is then available for analysis and investigation in the context of a security incident.
+### The timeline
+
+:::image type="content" source="./media/identify-threats-with-entity-behavior-analytics/entity-pages-timeline.png" alt-text="Entity pages timeline":::
The following types of items are included in the timeline:
- Bookmarks - any bookmarks that include the specific entity shown on the page.
-- Activities - aggregation of notable events relating to the entity.
-
+- Activities - aggregation of notable events relating to the entity. A wide range of activities are collected automatically, and you can now [customize this section by adding activities](customize-entity-activities.md) of your own choosing.
+### Entity Insights
+
+Entity insights are queries defined by Microsoft security researchers to help your analysts investigate more efficiently and effectively. The insights are presented as part of the entity page, and provide valuable security information on hosts and users, in the form of tabular data and charts. Having the information here means you don't have to detour to Log Analytics. The insights include data regarding sign-ins, group additions, anomalous events and more, and include advanced ML algorithms to detect anomalous behavior.
The insights are based on the following data sources:
- BehaviorAnalytics (Azure Sentinel UEBA)
- Heartbeat (Azure Monitor Agent)
- CommonSecurityLog (Azure Sentinel)
+- ThreatIntelligenceIndicators (Azure Sentinel)
### How to use entity pages
Entity pages are designed to be part of multiple usage scenarios, and can be accessed from multiple places in Azure Sentinel:
:::image type="content" source="./media/identify-threats-with-entity-behavior-analytics/entity-pages-use-cases.png" alt-text="Entity page use cases":::
-For more information about the data displayed in the **Entity behavior analytics** table, see [Azure Sentinel UEBA enrichments reference](ueba-enrichments.md).
+Entity page information is stored in the **BehaviorAnalytics** table, described in detail in the [Azure Sentinel UEBA enrichments reference](ueba-enrichments.md).
## Querying behavior analytics data
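You can query the **BehaviorAnalytics** table directly in Log Analytics. The following is a minimal sketch; the column names (`UserName`, `ActivityType`, `InvestigationPriority`) are taken as assumptions from the UEBA enrichments reference rather than defined in this article:

```kusto
BehaviorAnalytics
| where TimeGenerated > ago(24h)        // enriched events from the last day
| where InvestigationPriority > 5       // keep only higher-priority anomalies
| project TimeGenerated, UserName, ActivityType, InvestigationPriority
| sort by InvestigationPriority desc
```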
sentinel Import Threat Intelligence https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/import-threat-intelligence.md
Threat Intelligence also provides useful context within other Azure Sentinel experiences.
## Azure Sentinel data connectors for threat intelligence
-Just like all the other event data in Azure Sentinel, threat indicators are imported using data connectors. There are two data connectors in Azure Sentinel provided specifically for threat indicators, **Threat Intelligence - TAXII** and **Threat Intelligence Platforms**. You can use either data connector alone, or both connectors together, depending on where your organization sources threat indicators. Let's talk about each of the data connectors.
+Just like all the other event data in Azure Sentinel, threat indicators are imported using data connectors. There are two data connectors in Azure Sentinel provided specifically for threat indicators, **Threat Intelligence - TAXII** and **Threat Intelligence Platforms**. You can use either data connector alone, or both connectors together, depending on where your organization sources threat indicators.
+
+See this catalog of [threat intelligence integrations](threat-intelligence-integration.md) available with Azure Sentinel.
### Adding threat indicators to Azure Sentinel with the Threat Intelligence Platforms data connector
There is also a rich community of [Azure Monitor workbooks on GitHub](https://gi
In this document, you learned about the threat intelligence capabilities of Azure Sentinel, and the new Threat Intelligence blade. For practical guidance on using Azure Sentinel's threat intelligence capabilities, see the following articles:
- [Connect threat intelligence data](./connect-threat-intelligence.md) to Azure Sentinel.
+- [Integrate TIP platforms, TAXII feeds, and enrichments](threat-intelligence-integration.md) with Azure Sentinel.
- Create [built-in](./tutorial-detect-threats-built-in.md) or [custom](./tutorial-detect-threats-custom.md) alerts, and [investigate](./tutorial-investigate-cases.md) incidents, in Azure Sentinel.
sentinel Multiple Workspace View https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/multiple-workspace-view.md
In **Multiple Workspace View**, only the **Incidents** screen is available for now.
- You'll need to have read and write permissions on all the workspaces from which you've selected incidents. If you have only read permissions on some workspaces, you'll see warning messages if you select incidents in those workspaces. You won't be able to modify those incidents or any others you've selected together with those (even if you do have permissions for the others).
-- If you choose a single incident and click **View full details** or **Investigate**, you will from then on be in the data context of that incident's workspace and no others.
+- If you choose a single incident and click **View full details** or **Actions** > **Investigate**, you will from then on be in the data context of that incident's workspace and no others.
## Next steps
In this document, you learned how to view and work with incidents in multiple Azure Sentinel workspaces concurrently. To learn more about Azure Sentinel, see the following articles:
sentinel Sap Deploy Solution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/sap-deploy-solution.md
+
+ Title: Deploy the Azure Sentinel solution for SAP environments | Microsoft Docs
+description: Learn how to deploy the Azure Sentinel solution for SAP environments.
+++++ Last updated : 05/12/2021++++
+# Tutorial: Deploy the Azure Sentinel solution for SAP (public preview)
+
+This tutorial takes you step by step through the process of deploying the Azure Sentinel solution for SAP.
+
+> [!IMPORTANT]
+> The Azure Sentinel SAP solution is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+>
+
+## Overview
+
+[Azure Sentinel solutions](sentinel-solutions.md) include bundled security content, such as threat detections, workbooks, and watchlists. Solutions enable you to onboard Azure Sentinel security content for a specific data connector using a single process.
+
+The Azure Sentinel SAP data connector enables SAP logs to be ingested into your Azure Sentinel workspace, where you can view your data, create custom alerts, investigate, and more.
+
+The SAP data connector provides both stateless and stateful connections for logs from the entire SAP system landscape. It collects logs from Advanced Business Application Programming (ABAP) via NetWeaver RFC calls, and file storage data via the OS-level SAP Control interface.
+
+To ingest SAP logs into Azure Sentinel, you must have the Azure Sentinel SAP data connector installed on your SAP environment. We recommend that you use a Docker container on an Azure VM for the deployment, as described in this tutorial.
+
+After the SAP data connector is deployed, deploy the SAP solution security content to smoothly gain insight into your organization's SAP environment and improve any related security operation capabilities.
+
+In this tutorial, you learn:
+
+> [!div class="checklist"]
+> * How to prepare your SAP system for the SAP data connector deployment
+> * How to use a Docker container and an Azure VM to deploy the SAP data connector
+> * How to deploy the SAP solution security content in Azure Sentinel
+
+## Prerequisites
+
+In order to deploy the Azure Sentinel SAP data connector and security content as described in this tutorial, you must have the following prerequisites:
+
+|Area |Description |
+|||
+|**Azure prerequisites** | **Access to Azure Sentinel**. Make a note of your Azure Sentinel workspace ID and key to use in this tutorial when [deploying your SAP data connector](#deploy-your-sap-data-connector). <br>To view these details from Azure Sentinel, go to **Settings** > **Workspace settings** > **Agents management**. <br><br>**Ability to create Azure resources**. For more information, see the [Azure Resource Manager documentation](/azure/azure-resource-manager/management/manage-resources-portal). <br><br>**Access to Azure Key Vault**. This tutorial describes the recommended steps for using Azure Key Vault to store your credentials. For more information, see the [Azure Key Vault documentation](/azure/key-vault/). |
+|**System prerequisites** | **Software**. The SAP data connector deployment script automatically installs software prerequisites. For more information, see [Automatically installed software](#automatically-installed-software). <br><br> **System connectivity**. Ensure that the VM serving as your SAP data connector host has access to: <br>- Azure Sentinel <br>- Azure Key Vault <br>- The SAP environment host, via the following TCP ports: *32xx*, *5xx13*, and *33xx*, where *xx* is the SAP instance number. <br><br>Make sure that you also have an SAP user account in order to access the SAP software download page.<br><br>**System architecture**. The SAP solution is deployed on a VM as a Docker container, and each SAP client requires its own container instance. <br>Your VM and the Azure Sentinel workspace can be in different Azure subscriptions, and even different Azure AD tenants.|
+|**SAP prerequisites** | **Supported SAP versions**. We recommend using [SAP_BASIS version 750 SPS13](https://support.sap.com/en/my-support/software-downloads/support-package-stacks/product-versions.html#:~:text=SAP%20NetWeaver%20%20%20%20SAP%20Product%20Version,%20%20SAPKB710%3Cxx%3E%20%207%20more%20rows) or higher. <br>Select steps in this tutorial provide alternate instructions if you are working on SAP version [SAP_BASIS 740](https://support.sap.com/en/my-support/software-downloads/support-package-stacks/product-versions.html#:~:text=SAP%20NetWeaver%20%20%20%20SAP%20Product%20Version,%20%20SAPKB710%3Cxx%3E%20%207%20more%20rows).<br><br> **SAP system details**. Make a note of the following SAP system details for use in this tutorial:<br> - SAP system IP address<br>- SAP system number, such as `00`<br> - SAP System ID, from the SAP NetWeaver system. For example, `NPL`. <br>- SAP client ID, such as `001`.<br><br>**SAP NetWeaver instance access**. Access to your SAP instances must use one of the following options: <br>- [SAP ABAP user/password](#configure-your-sap-system). <br>- A user with an X509 certificate, using SAP CRYPTOLIB PSE. This option may require expert manual steps.<br><br>**Support from your SAP team**. You'll need the support of your SAP team in order to ensure that your SAP system is [configured correctly](#configure-your-sap-system) for the solution deployment. |
+| | |
++
+### Automatically installed software
+
+The [SAP data connector deployment script](#deploy-your-sap-data-connector) installs the following software on your VM using SUDO (root) privileges:
+
+- [Unzip.](https://www.microsoft.com/en-us/p/unzip/9mt44rnlpxxt?activetab=pivot:overviewtab)
+- [NetCat](https://sectools.org/tool/netcat/)
+- [Python 3.6 or higher](https://www.python.org/downloads/)
+- [Python3-pip](https://pypi.org/project/pip/)
+- [Docker](https://www.docker.com/)
+
+## Configure your SAP system
+
+This procedure describes how to ensure that your SAP system has the correct prerequisites installed and is configured for the Azure Sentinel SAP data connector deployment.
+
+> [!IMPORTANT]
+> Perform this procedure together with your SAP team to ensure correct configurations.
+>
+
+**To configure your SAP system for the SAP data connector**:
+
+1. If you are using a version of SAP earlier than 750, ensure that the following SAP notes are deployed in your system:
+
+   - **2641084**. For systems running SAP versions earlier than SAP BASIS 750 SPS13
+ - **2502336**. For systems running SAP versions earlier than SAP BASIS 750 SPS1
+ - **2173545**. For systems running SAP versions earlier than SAP BASIS 750
+
+   Access these SAP notes at the [SAP support Launchpad site](https://support.sap.com/en/index.html), using an SAP user account.
+
+1. Download and install one of the following SAP change requests from the Azure Sentinel GitHub repository, at https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/SAP/CR:
+
+ - **SAP versions 750 or higher**: Install the SAP change request *131 (NPLK900131)*
+ - **SAP versions 740**: Install the SAP change request *132 (NPLK900132)*
+
+ When performing this step, use the **STMS_IMPORT** SAP transaction code.
+
+ > [!NOTE]
+ > In the SAP **Import Options** area, you may see the **Ignore Invalid Component Version** option displayed. If displayed, select this option before continuing.
+ >
+
+1. Create a new SAP role named **/MSFTSEN/SENTINEL_CONNECTOR** by importing the SAP change request *14 (NPLK900114)*. Use the **STMS_IMPORT** SAP transaction code.
+
+ Verify that the role is created with the required permissions, such as:
+
+ :::image type="content" source="media/sap/required-sap-role-authorizations.png" alt-text="Required SAP role permissions for the Azure Sentinel SAP data connector.":::
+
+ For more information, see [authorizations for the ABAP user](sap-solution-detailed-requirements.md#required-abap-authorizations).
+
+1. Create a non-dialog, RFC/NetWeaver user for the SAP data connector and attach the newly created **/MSFTSEN/SENTINEL_CONNECTOR** role.
+
+ - After attaching the role, verify that the role permissions are distributed to the user.
+ - This process requires that you use a username and password for the ABAP user. After the new user is created and has required permissions, make sure to change the ABAP user password.
+
+1. Download and place the **SAP NetWeaver RFC SDK 7.50 for Linux on x86_64 64 BIT** version on your VM, as it's required during the installation process.
+
+   For example, find the SDK on the [SAP software download site](https://launchpad.support.sap.com/#/softwarecenter/templateproducts/_APP=00200682500000001943&_EVENT=DISPHIER&HEADER=Y&FUNCTIONBAR=N&EVENT=TREE&NE=NAVIGATE&ENR=01200314690100002214&V=MAINT) > **SAP NW RFC SDK** > **SAP NW RFC SDK 7.50** > **nwrfc750X_X-xxxxxxx.zip**. Make sure to download the **LINUX ON X86_64 64BIT** option. Copy the file, such as by using SCP, to your VM, as sketched below.
+
+ You'll need an SAP user account to access the SAP software download page.
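+
+   For example, a minimal sketch of the SCP copy - the archive name follows the pattern above, while the user name and IP address are placeholders:
+
+   ```bash
+   # Copy the downloaded SAP NetWeaver RFC SDK archive to the data connector VM
+   scp nwrfc750X_X-xxxxxxx.zip AzureUser@<VM-IP-address>:~/
+   ```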
+
+1. (Optional) The SAP **Auditlog** file is used system-wide and supports multiple SAP clients. However, each instance of the Azure Sentinel SAP solution supports a single SAP client only.
+
+ Therefore, if you have a multi-client SAP system, we recommend that you enable the **Auditlog** file only for the client where you deploy the SAP solution to avoid data duplication.
++
+## Deploy a Linux VM for your SAP data connector
+
+This procedure describes how to use the Azure CLI to deploy an Ubuntu server 18.04 LTS VM and assign it with a [system-managed identity](/azure/active-directory/managed-identities-azure-resources/).
+
+> [!TIP]
+> You can also deploy the data connector on RHEL, versions 7.7 and higher or SUSE versions 15 and higher. Note that any OS and patch levels must be completely up to date.
+>
+
+**To deploy and prepare your Ubuntu VM**:
+
+1. Use the following command as an example, inserting the values for your resource group and VM name:
+
+ ```azurecli
+   az vm create --resource-group [resource group name] --name [VM Name] --image UbuntuLTS --admin-username AzureUser --data-disk-sizes-gb 10 --size Standard_DS2_v2 --generate-ssh-keys --assign-identity
+ ```
+
+1. On your new VM, install:
+
+ - [Venv](https://docs.python.org/3.8/library/venv.html), with Python version 3.8 or higher.
+ - The [Azure CLI](/cli/azure/), version 2.8.0 or higher.
+
+> [!IMPORTANT]
+> Make sure that you apply any security best practices for your organization, just as you would any other VM.
+>
+
+For more information, see [Quickstart: Create a Linux virtual machine with the Azure CLI](/azure/virtual-machines/linux/quick-create-cli).
+
+## Create key vault for your SAP credentials
+
+This tutorial uses a newly created or dedicated [Azure Key Vault](/azure/key-vault/) to store credentials for your SAP data connector.
+
+**To create or dedicate an Azure Key Vault**:
+
+1. Create a new Azure Key Vault, or choose an existing one to dedicate to your SAP data connector deployment.
+
+ For example, to create a new Key Vault, run the following commands, using the name of your Key Vault resource group and entering your Key Vault name:
+
+ ```azurecli
+ kvgp=<KVResourceGroup>
+
+ kvname=<keyvaultname>
+
+ #Create Key Vault
+ az keyvault create \
+ --name $kvname \
+ --resource-group $kvgp
+ ```
+
+1. Assign an access policy, including GET, LIST, and SET permissions to the VM's managed identity.
+
+   In Azure Key Vault, go to **Access Policies** > **Add Access Policy** > **Secret Permissions: Get, List, and Set** > **Select Principal**. Enter your [VM's name](#deploy-a-linux-vm-for-your-sap-data-connector), and then select **Add** > **Save**.
+
+ For more information, see the [Key Vault documentation](/azure/key-vault/general/assign-access-policy-portal).
+
+1. Run the following command to get the [VM's principal ID](#deploy-a-linux-vm-for-your-sap-data-connector), entering the name of your Azure resource group:
+
+ ```azurecli
+   az vm show -g [resource group] -n [Virtual Machine] --query identity.principalId --out tsv
+ ```
+
+ Your principal ID is displayed for you to use in the following step.
+
+1. Run the following command to assign the VM's access permissions to the Key Vault, entering the name of your resource group and the principal ID value returned from the previous step.
+
+ ```azurecli
+   az keyvault set-policy --name $kvname --resource-group [resource group] --object-id [Principal ID] --secret-permissions get list set
+ ```
+
+## Deploy your SAP data connector
+
+The Azure Sentinel SAP data connector deployment script installs [required software](#automatically-installed-software) and then installs the connector on your [newly created VM](#deploy-a-linux-vm-for-your-sap-data-connector), storing credentials in your [dedicated key vault](#create-key-vault-for-your-sap-credentials).
+
+The SAP data connector deployment script is stored in the [Azure Sentinel GitHub repository > DataConnectors > SAP](https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/SAP/) directory.
+
+To run the SAP data connector deployment script, you'll need the following details:
+
+- Your Azure Sentinel workspace details, as listed in the [Prerequisites](#prerequisites) section.
+- The SAP system details listed in the [Prerequisites](#prerequisites) section.
+- Access to a VM user with SUDO privileges.
+- The SAP user you created in [Configure your SAP system](#configure-your-sap-system), with the **/MSFTSEN/SENTINEL_CONNECTOR** role applied.
+- The help of your SAP team.
++
+**To run the SAP solution deployment script**:
+
+1. Run the following command to deploy the SAP solution on your VM:
+
+ ```azurecli
+ wget -O sapcon-sentinel-kickstart.sh https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/sapcon-sentinel-kickstart.sh && bash ./sapcon-sentinel-kickstart.sh
+ ```
+
+1. Follow the on-screen instructions to enter your SAP and Key Vault details and complete the deployment. A confirmation message appears when the deployment is complete:
+
+ ```azurecli
+ The process has been successfully completed, thank you!
+ ```
+
+   Azure Sentinel starts to retrieve SAP logs for the configured time span, going back up to 24 hours before the initialization time.
+
+1. We recommend reviewing the system logs to make sure that the data connector is transmitting data. Run:
+
+ ```bash
+ docker logs -f sapcon-[SID]
+ ```
+
+## Deploy SAP security content
+
+Deploy the [SAP security content](sap-solution-security-content.md) from the Azure Sentinel **Solutions** and **Watchlists** areas.
+
+The **Azure Sentinel - Continuous Threat Monitoring for SAP** solution enables the SAP data connector to show in the Azure Sentinel **Data connectors** area, and deploys the **SAP - System Applications and Products** workbook and SAP-related analytics rules.
+
+Add SAP-related watchlists to your Azure Sentinel workspace manually.
+
+**To deploy SAP solution security content**:
+
+1. From the Azure Sentinel navigation menu, select **Solutions (Preview)**.
+
+ The **Solutions** page displays a filtered, searchable list of solutions.
+
+1. Select **Azure Sentinel - Continuous Threat Monitoring for SAP (preview)** to open the SAP solution page.
+
+ :::image type="content" source="media/sap/sap-solution.png" alt-text="Azure Sentinel - Continuous Threat Monitoring for SAP (preview) solution.":::
+
+1. Select **Create** to launch the solution deployment wizard, and enter the details of the Azure subscription, resource group, and Log Analytics workspace where you want to deploy the solution.
+
+1. Select **Next** to cycle through the **Data Connectors**, **Analytics**, and **Workbooks** tabs, where you can learn about the components that will be deployed with this solution.
+
+    The default name for the workbook is **SAP - System Applications and Products - Preview**. Change it in the **Workbooks** tab as needed.
+
+ For more information, see [Azure Sentinel SAP solution: security content reference (public preview)](sap-solution-security-content.md).
+
+1. In the **Review + create tab**, wait for the **Validation Passed** message, then select **Create** to deploy the solution.
+
+ > [!TIP]
+ > You can also select **Download a template** for a link to deploy the solution as code.
+
+1. After the deployment is completed, a confirmation message appears at the top-right of the page.
+
+ To display the newly deployed content, go to:
+
+ - **Threat Management** > **Workbooks**, to find the [SAP - System Applications and Products - Preview](sap-solution-security-content.md#sapsystem-applications-and-products-workbook) workbook.
+ - **Configuration** > **Analytics** to find a series of [SAP-related analytics rules](sap-solution-security-content.md#built-in-analytics-rules).
+
+1. Add SAP-related watchlists to use in your search, detection rules, threat hunting, and response playbooks. These watchlists provide the configuration for the Azure Sentinel SAP Continuous Threat Monitoring solution.
+
+ 1. Download SAP watchlists from the Azure Sentinel GitHub repository at https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/SAP/Analytics/Watchlists.
+
+ 1. In the Azure Sentinel **Watchlists** area, add the watchlists to your Azure Sentinel workspace. Use the downloaded CSV files as the sources, and then customize them as needed for your environment.
+
+ [ ![SAP-related watchlists added to Azure Sentinel.](media/sap/sap-watchlists.png) ](media/sap/sap-watchlists.png#lightbox)
+
+ For more information, see [Use Azure Sentinel watchlists](watchlists.md) and [Available SAP watchlists](sap-solution-security-content.md#available-watchlists).
+
+1. In Azure Sentinel, navigate to the **Azure Sentinel Continuous Threat Monitoring for SAP** data connector to confirm the connection:
+
+ [ ![Azure Sentinel Continuous Threat Monitoring for SAP data connector page.](media/sap/sap-data-connector.png) ](media/sap/sap-data-connector.png#lightbox)
+
+ SAP ABAP logs are displayed in the Azure Sentinel **Logs** page under **Custom logs**:
+
+ [ ![SAP ABAP logs under Custom logs in Azure Sentinel.](media/sap/sap-logs-in-sentinel.png) ](media/sap/sap-logs-in-sentinel.png#lightbox)
+
+ For more information, see [Azure Sentinel SAP solution logs reference (public preview)](sap-solution-log-reference.md).
+
+## SAP solution deployment troubleshooting
+
+After deploying both the SAP data connector and security content, you may encounter the following errors or issues:
+
+|Issue |Workaround |
+|||
+|Network connectivity issues to the SAP environment or to Azure Sentinel | Check your network connectivity as needed. |
+|Incorrect SAP ABAP user credentials |Check your credentials and fix them by applying the correct values to the **ABAPUSER** and **ABAPPASS** values in Azure Key Vault. |
+|Missing permissions, such as the **/MSFTSEN/SENTINEL_CONNECTOR** role not assigned to the SAP user as needed, or inactive |Fix this error by assigning the role and ensuring that it's active in your SAP system. |
+|A missing SAP change request | Make sure that you've imported the correct SAP change request, as described in [Configure your SAP system](#configure-your-sap-system). |
+|Incorrect Azure Sentinel workspace ID or key entered in the deployment script | To fix this error, enter the correct credentials in Azure KeyVault. |
+|A corrupt or missing SAP SDK file | Fix this error by reinstalling the SAP SDK and ensuring that you are using the correct Linux 64-bit version. |
+|Missing data in your workbook or alerts | Ensure that the **Auditlog** policy is properly enabled on the SAP side, with no errors in the log file. Use the **RSAU_CONFIG_LOG** transaction for this step. |
+| | |
+
+> [!TIP]
+> We highly recommend that you review the system logs after installing the data connector. Run:
+>
+> ```bash
+> docker logs -f sapcon-[SID]
+> ```
+>
+For more information, see:
+
+- [View all Docker execution logs](#view-all-docker-execution-logs)
+- [Review and update the SAP data connector configuration](#review-and-update-the-sap-data-connector-configuration)
+- [Useful Docker commands](#useful-docker-commands)
+
+### View all Docker execution logs
+
+To view all Docker execution logs for your Azure Sentinel SAP data connector deployment, run one of the following commands:
+
+```bash
+docker exec -it sapcon-[SID] bash && cd /sapcon-app/sapcon/logs
+```
+
+or
+
+```bash
+docker exec -it sapcon-[SID] cat /sapcon-app/sapcon/logs/[FILE_LOGNAME]
+```
+
+Output similar to the following should be displayed:
+
+```bash
+Logs directory:
+root@644c46cd82a9:/sapcon-app# ls sapcon/logs/ -l
+total 508
+-rwxr-xr-x 1 root root 0 Mar 12 09:22 ' __init__.py'
+-rw-r--r-- 1 root root 282 Mar 12 16:01 ABAPAppLog.log
+-rw-r--r-- 1 root root 1056 Mar 12 16:01 ABAPAuditLog.log
+-rw-r--r-- 1 root root 465 Mar 12 16:01 ABAPCRLog.log
+-rw-r--r-- 1 root root 515 Mar 12 16:01 ABAPChangeDocsLog.log
+-rw-r--r-- 1 root root 282 Mar 12 16:01 ABAPJobLog.log
+-rw-r--r-- 1 root root 480 Mar 12 16:01 ABAPSpoolLog.log
+-rw-r--r-- 1 root root 525 Mar 12 16:01 ABAPSpoolOutputLog.log
+-rw-r--r-- 1 root root 0 Mar 12 15:51 ABAPTableDataLog.log
+-rw-r--r-- 1 root root 495 Mar 12 16:01 ABAPWorkflowLog.log
+-rw-r--r-- 1 root root 465311 Mar 14 06:54 API.log # view this log to see submits of data into Azure Sentinel
+-rw-r--r-- 1 root root 0 Mar 12 15:51 LogsDeltaManager.log
+-rw-r--r-- 1 root root 0 Mar 12 15:51 PersistenceManager.log
+-rw-r--r-- 1 root root 4830 Mar 12 16:01 RFC.log
+-rw-r--r-- 1 root root 5595 Mar 12 16:03 SystemAdmin.log
+```
+
+### Review and update the SAP data connector configuration
+
+If you want to check the SAP data connector configuration file and make manual updates, perform the following steps:
+
+1. On your VM, in the user's home directory, open the **~/sapcon/[SID]/systemconfig.ini** file.
+1. Update the configuration if needed, and then restart the container:
+
+ ```bash
+ docker restart sapcon-[SID]
+ ```
+
+### Useful Docker commands
+
+When troubleshooting your SAP data connector, you may find the following commands useful:
+
+|Function |Command |
+|||
+|**Stop the Docker container** | `docker stop sapcon-[SID]` |
+|**Start the Docker container** |`docker start sapcon-[SID]` |
+|**View Docker system logs** | `docker logs -f sapcon-[SID]` |
+|**Enter the Docker container** | `docker exec -it sapcon-[SID] bash` |
+| | |
+
+For more information, see the [Docker CLI documentation](https://docs.docker.com/engine/reference/commandline/docker/).
+
+## Update your SAP data connector
+
+If you have a Docker container already running with an earlier version of the SAP data connector, run the SAP data connector update script to get the latest features available.
+
+1. Make sure that you have the most recent versions of the relevant deployment scripts from the Azure Sentinel GitHub repository. Run:
+
+ ```azurecli
+    wget -O sapcon-instance-update.sh https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/sapcon-instance-update.sh
+ ```
+
+1. Run the following command on your SAP data connector machine:
+
+ ```azurecli
+    ./sapcon-instance-update.sh
+ ```
+
+The SAP data connector Docker container on your machine is updated.
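+
+To confirm that the updated container came back up, you can list it with a standard Docker command (generic Docker usage, not a command from the update script):
+
+```bash
+# Show running containers whose names start with "sapcon"
+docker ps --filter "name=sapcon"
+```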
+
+## Next steps
+
+Learn more about the Azure Sentinel SAP solutions:
+
+- [Deploy the Azure Sentinel SAP solution using alternate deployments](sap-solution-deploy-alternate.md)
+- [Azure Sentinel SAP solution detailed SAP requirements](sap-solution-detailed-requirements.md)
+- [Azure Sentinel SAP solution logs reference](sap-solution-log-reference.md)
+- [Azure Sentinel SAP solution: built-in security content](sap-solution-security-content.md)
+
+For more information, see [Azure Sentinel solutions](sentinel-solutions.md).
sentinel Sap Solution Deploy Alternate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/sap-solution-deploy-alternate.md
+
+ Title: Deploy the Azure Sentinel SAP data connector on-premises | Microsoft Docs
+description: Learn how to deploy the Azure Sentinel data connector for SAP environments using an on-premises machine.
+++++ Last updated : 05/12/2021++++
+# Deploy the Azure Sentinel SAP data connector on-premises
+
+This article describes how to deploy the Azure Sentinel SAP data connector in an expert or custom process, such as using an on-premises machine and an Azure Key Vault to store your credentials.
+
+> [!NOTE]
+> The default and recommended process for deploying the Azure Sentinel SAP data connector is to [use an Azure VM](sap-deploy-solution.md). This article is intended for advanced users.
+
+> [!IMPORTANT]
+> The Azure Sentinel SAP solution is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+>
+
+## Prerequisites
+
+The basic prerequisites for deploying your Azure Sentinel SAP data connector are the same regardless of your deployment method.
+
+Make sure that your system complies with the prerequisites documented in the main [SAP data connector deployment tutorial](sap-deploy-solution.md#prerequisites) before you start.
+
+For more information, see [Azure Sentinel SAP solution detailed SAP requirements (public preview)](sap-solution-detailed-requirements.md).
+
+## Create your Azure key vault
+
+Create an Azure key vault that you can dedicate to your Azure Sentinel SAP data connector.
+
+Run the following command to create your Azure key vault:
+
+``` azurecli
+kvgp=<KVResourceGroup>
+
+kvname=<keyvaultname>
+
+#Create key vault
+az keyvault create \
+ --name $kvname \
+ --resource-group $kvgp
+```
+
+For more information, see [Quickstart: Create a key vault using the Azure CLI](/azure/key-vault/general/quick-create-cli).
+
+## Add Azure Key Vault secrets
+
+To add Azure Key Vault secrets, run the following script, with your own system ID and the credentials you want to add:
+
+```azurecli
+#Add Abap username
+az keyvault secret set \
+ --name <SID>-ABAPUSER \
+ --value "<abapuser>" \
+ --description SECRET_ABAP_USER --vault-name $kvname
+
+#Add Abap Username password
+az keyvault secret set \
+ --name <SID>-ABAPPASS \
+ --value "<abapuserpass>" \
+ --description SECRET_ABAP_PASSWORD --vault-name $kvname
+
+#Add java Username
+az keyvault secret set \
+ --name <SID>-JAVAOSUSER \
+ --value "<javauser>" \
+ --description SECRET_JAVAOS_USER --vault-name $kvname
+
+#Add java Username password
+az keyvault secret set \
+ --name <SID>-JAVAOSPASS \
+ --value "<javauserpass>" \
+ --description SECRET_JAVAOS_PASSWORD --vault-name $kvname
+
+#Add abapos username
+az keyvault secret set \
+ --name <SID>-ABAPOSUSER \
+ --value "<abaposuser>" \
+ --description SECRET_ABAPOS_USER --vault-name $kvname
+
+#Add abapos username password
+az keyvault secret set \
+ --name <SID>-ABAPOSPASS \
+ --value "<abaposuserpass>" \
+ --description SECRET_ABAPOS_PASSWORD --vault-name $kvname
+
+#Add Azure Log ws ID
+az keyvault secret set \
+ --name <SID>-LOG_WS_ID \
+ --value "<logwsod>" \
+ --description SECRET_AZURE_LOG_WS_ID --vault-name $kvname
+
+#Add Azure Log ws public key
+az keyvault secret set \
+ --name <SID>-LOG_WS_PUBLICKEY \
+ --value "<loswspubkey>" \
+ --description SECRET_AZURE_LOG_WS_PUBLIC_KEY --vault-name $kvname
+```
+
+For more information, see the [az keyvault secret](/cli/azure/keyvault/secret) CLI documentation.
+
+## Perform an expert / custom installation
+
+This procedure describes how to deploy the SAP data connector using an expert or custom installation, such as when installing on-premises.
+
+We recommend that you perform this procedure after you have a key vault ready with your SAP credentials.
+
+**To deploy the SAP data connector**:
+
+1. On your on-premises machine, download the latest SAP NW RFC SDK from the [SAP Launchpad site](https://support.sap.com) > **SAP NW RFC SDK** > **SAP NW RFC SDK 7.50** > **nwrfc750X_X-xxxxxxx.zip**.
+
+ > [!NOTE]
+ > You'll need your SAP user sign-in information in order to access the SDK, and you must download the SDK that matches your operating system.
+ >
+ > Make sure to select the **LINUX ON X86_64** option.
+
+1. On your on-premises machine, create a new folder with a meaningful name, and copy the SDK zip file into your new folder.
+
+1. Clone the Azure Sentinel solution GitHub repo onto your on-premises machine, and copy the Azure Sentinel SAP solution **systemconfig.ini** file into your new folder.
+
+ For example:
+
+ ```bash
+   mkdir -p /home/$(pwd)/sapcon/<sap-sid>/
+   cd /home/$(pwd)/sapcon/<sap-sid>/
+   wget https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/template/systemconfig.ini
+   cp <nwrfc750X_X-xxxxxxx.zip> /home/$(pwd)/sapcon/<sap-sid>/
+ ```
+
+1. Edit the **systemconfig.ini** file as needed, using the embedded comments as a guide. For more information, see [Manually configure the SAP data connector](#manually-configure-the-sap-data-connector).
+
+ To test your configuration, you may want to add the user and password directly to the **systemconfig.ini** configuration file. While we recommend that you use [Azure Key vault](#add-azure-key-vault-secrets) to store your credentials, you can also use an **env.list** file, [Docker secrets](#manually-configure-the-sap-data-connector), or you can add your credentials directly to the **systemconfig.ini** file.
+
+1. Define the logs that you want to ingest into Azure Sentinel using the instructions in the **systemconfig.ini** file. For example, see [Define the SAP logs that are sent to Azure Sentinel](#define-the-sap-logs-that-are-sent-to-azure-sentinel).
+
+1. Define the following configurations using the instructions in the **systemconfig.ini** file:
+
+ - Whether to include user email addresses in audit logs
+ - Whether to retry failed API calls
+    - Whether to force the use of XAL audit logs
+ - Whether to wait an interval of time between data extractions, especially for large extractions
+
+ For more information, see [SAL logs connector configurations](#sal-logs-connector-settings).
+
+1. Save your updated **systemconfig.ini** file in the **sapcon** directory on your machine.
+
+1. If you have chosen to use an **env.list** file for your credentials, create a temporary **env.list** file with the required credentials. Once your Docker container is running correctly, make sure to delete this file.
+
+ > [!NOTE]
+ > The following script has each Docker container connecting to a specific ABAP system. Modify your script as needed for your environment.
+ >
+
+ Run:
+
+ ```bash
+ ##############################################################
+ ##############################################################
+ # env.list template
+ SAPADMUSER=<SET_SAPCONTROL_USER>
+ SAPADMPASSWORD=<SET_SAPCONTROL_PASS>
+    ABAPUSER=<SET_ABAP_USER>
+ ABAPPASS=<SET_ABAP_PASS>
+ JAVAUSER=<SET_JAVA_OS_USER>
+    JAVAPASS=<SET_JAVA_OS_PASS>
+ ##############################################################
+ ```
+
+1. Download and run the pre-defined Docker image with the SAP data connector installed. Run:
+
+ ```bash
+    docker pull mcr.microsoft.com/azure-sentinel/solutions/sapcon:latest
+    docker run --env-file=<env.list_location> -d -v /home/$(pwd)/sapcon/<sap-sid>/:/sapcon-app/sapcon/config/system --name sapcon-<sid> mcr.microsoft.com/azure-sentinel/solutions/sapcon:latest
+ rm -f <env.list_location>
+ ```
+
+1. Verify that the Docker container is running correctly. Run:
+
+ ```bash
+    docker logs -f sapcon-[SID]
+ ```
+
+1. Continue with deploying the **Azure Sentinel - Continuous Threat Monitoring for SAP** solution.
+
+ Deploying the solution enables the SAP data connector to display in Azure Sentinel and deploys the SAP workbook and analytics rules. When you're done, manually add and customize your SAP watchlists.
+
+ For more information, see [Deploy SAP security content](sap-deploy-solution.md#deploy-sap-security-content).
+
+## Manually configure the SAP data connector
+
+The Azure Sentinel SAP solution data connector is configured in the **systemconfig.ini** file, which you cloned to your SAP data connector machine as part of the [deployment procedure](#perform-an-expert--custom-installation).
+
+The following code shows a sample **systemconfig.ini** file:
+
+```Python
+[Secrets Source]
+secrets = '<DOCKER_RUNTIME/AZURE_KEY_VAULT/DOCKER_SECRETS/DOCKER_FIXED>'
+keyvault = '<SET_YOUR_AZURE_KEYVAULT>'
+intprefix = '<SET_YOUR_PREFIX>'
+
+[ABAP Central Instance]
+##############################################################
+# Define the following values according to your server configuration.
+ashost = <SET_YOUR_APPLICATION_SERVER_HOST>
+mshost = <SET_YOUR_MESSAGE_SERVER_HOST> - #In case different from the application server host
+##############################################################
+group = <SET_YOUR_LOGON_GROUP>
+msserv = <SET_YOUR_MS_SERVICE> - #Required only if the message server service is not defined as sapms<SYSID> in /etc/services
+sysnr = <SET_YOUR_SYS_NUMBER>
+user = <SET_YOUR_USER>
+##############################################################
+# Enter your password OR your X509 SNC parameters
+passwd = <SET_YOUR_PASSWORD>
+snc_partnername = <SET_YOUR_SNC_PARTNER_NAME>
+snc_lib = <SET_YOUR_SNC_LIBRARY_PATH>
+x509cert = <SET_YOUR_X509_CERTIFICATE>
+##############################################################
+sysid = <SET_YOUR_SYSTEM_ID>
+client = <SET_YOUR_CLIENT>
+
+[Azure Credentials]
+loganalyticswsid = <SET_YOUR_LOG_ANALYTICS_WORKSPACE_ID>
+publickey = <SET_YOUR_PUBLIC_KEY>
+
+[File Extraction ABAP]
+osuser = <SET_YOUR_SAPADM_LIKE_USER>
+##############################################################
+# Enter your password OR your X509 SNC parameters
+ospasswd = <SET_YOUR_SAPADM_PASS>
+x509pkicert = <SET_YOUR_X509_PKI_CERTIFICATE>
+##############################################################
+appserver = <SET_YOUR_SAPCTRL_SERVER>
+instance = <SET_YOUR_SAP_INSTANCE>
+abapseverity = <SET_ABAP_SEVERITY>
+abaptz = <SET_ABAP_TZ>
+
+[File Extraction JAVA]
+javaosuser = <SET_YOUR_JAVAADM_LIKE_USER>
+##############################################################
+# Enter your password OR your X509 SNC parameters
+javaospasswd = <SET_YOUR_JAVAADM_PASS>
+javax509pkicert = <SET_YOUR_X509_PKI_CERTIFICATE>
+##############################################################
+javaappserver = <SET_YOUR_JAVA_SAPCTRL_SERVER>
+javainstance = <SET_YOUR_JAVA_SAP_INSTANCE>
+javaseverity = <SET_JAVA_SEVERITY>
+javatz = <SET_JAVA_TZ>
+```
+
+### Define the SAP logs that are sent to Azure Sentinel
+
+Add the following code to the Azure Sentinel SAP solution **systemconfig.ini** file to define the logs that are sent to Azure Sentinel.
+
+For more information, see [Azure Sentinel SAP solution logs reference (public preview)](sap-solution-log-reference.md).
+
+```Python
+##############################################################
+# Enter True OR False for each log to send those logs to Azure Sentinel
+[Logs Activation Status]
+ABAPAuditLog = True
+ABAPJobLog = True
+ABAPSpoolLog = True
+ABAPSpoolOutputLog = True
+ABAPChangeDocsLog = True
+ABAPAppLog = True
+ABAPWorkflowLog = True
+ABAPCRLog = True
+ABAPTableDataLog = False
+# ABAP SAP Control Logs - Retrieved by using the SAP Control interface and OS login
+ABAPFilesLogs = False
+SysLog = False
+ICM = False
+WP = False
+GW = False
+# Java SAP Control Logs - Retrieved by using the SAP Control interface and OS login
+JAVAFilesLogs = False
+##############################################################
+```
+
+### SAL logs connector settings
+
+Add the following code to the Azure Sentinel SAP data connector **systemconfig.ini** file to define other settings for SAP logs ingested into Azure Sentinel.
+
+For more information, see [Perform an expert / custom SAP data connector installation](#perform-an-expert--custom-installation).
++
+```Python
+##############################################################
+[Connector Configuration]
+extractuseremail = True
+apiretry = True
+auditlogforcexal = False
+auditlogforcelegacyfiles = False
+timechunk = 60
+##############################################################
+```
+
+This section enables you to configure the following parameters:
+
+|Parameter name |Description |
+|||
+|**extractuseremail** | Determines whether user email addresses are included in audit logs. |
+|**apiretry** | Determines whether API calls are retried as a failover mechanism. |
+|**auditlogforcexal** | Determines whether the system forces the use of audit logs for non-SAL systems, such as SAP BASIS version 7.4. |
+|**auditlogforcelegacyfiles** | Determines whether the system forces the use of audit logs with legacy system capabilities, such as from SAP BASIS version 7.4 with lower patch levels.|
+|**timechunk** | Defines the number of minutes the system waits as an interval between data extractions. Use this parameter if you expect a large amount of data. <br><br>For example, during the initial data load during your first 24 hours, you might want to have the data extraction running only every 30 minutes to give each data extraction enough time. In such cases, set this value to **30**. |
+| | |
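+
+For example, here is a sketch of the **[Connector Configuration]** section tuned for a large initial data load, following the **timechunk** guidance above:
+
+```Python
+[Connector Configuration]
+extractuseremail = True
+apiretry = True
+auditlogforcexal = False
+auditlogforcelegacyfiles = False
+timechunk = 30
+```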
++
+## Next steps
+
+After you have your SAP data connector installed, you can add the SAP-related security content.
+
+For more information, see [Deploy the SAP solution](sap-deploy-solution.md#deploy-sap-security-content).
+
+For more information, see:
+
+- [Azure Sentinel SAP solution detailed SAP requirements](sap-solution-detailed-requirements.md)
+- [Azure Sentinel SAP solution logs reference](sap-solution-log-reference.md)
+- [Azure Sentinel SAP solution: security content reference](sap-solution-security-content.md)
sentinel Sap Solution Detailed Requirements https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/sap-solution-detailed-requirements.md
+
+ Title: Azure Sentinel SAP solution detailed SAP requirements | Microsoft Docs
+description: Learn about the detailed SAP system requirements for the Azure Sentinel SAP solution.
+++++ Last updated : 05/12/2021++++
+# Azure Sentinel SAP solution detailed SAP requirements (public preview)
+
+The [default procedure for deploying the Azure Sentinel SAP solution](sap-deploy-solution.md) includes the required SAP change requests and SAP notes, and provides a built-in role with all required permissions.
+
+This article lists the required SAP change requests, notes, and permissions in detail.
+
+Use this article as a reference if you're an admin, or if you're [deploying the SAP solution manually](sap-solution-deploy-alternate.md). This article is intended for advanced SAP users.
++
+> [!IMPORTANT]
+> The Azure Sentinel SAP solution is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+>
+
+## Required SAP log change requests
+
+The following SAP log change requests are required for the SAP solution, depending on your SAP Basis version:
+
+- **SAP Basis versions 7.50 and higher**, install NPLK900131
+- **SAP Basis version 7.40**, install NPLK900132
+- **To create an SAP role with the required authorizations**, for any supported SAP Basis version, install NPLK900114. For more information, see [Configure your SAP system](sap-deploy-solution.md#configure-your-sap-system) and [Required ABAP authorizations](#required-abap-authorizations).
+
+> [!NOTE]
+> The required SAP log change requests expose custom RFC FMs that are required for the connector, and do not change any standard or custom objects.
+>
+
+## Required SAP notes
+
+If you have an SAP Basis version of 7.50 or lower, install the following SAP notes:
+
+- **SAP Note 2502336**. For systems running SAP versions earlier than SAP BASIS 750 SPS1. Named `RSSCD100 - read only from archive, not from database`.
+- **SAP Note 2173545**. For systems running SAP versions earlier than SAP BASIS 750. Named `CHANGEDOCUMENT_READ_ALL`.
+- **SAP Note 2641084**. For systems running SAP versions earlier than SAP BASIS 750 SPS13. Provides standardized read access for the Security Audit log data.
+
+Access the SAP notes from the [SAP support Launchpad site](https://support.sap.com/en/index.html).
+
+## Required ABAP authorizations
+
+The following table lists the ABAP authorizations required for the backend SAP user to connect Azure Sentinel to the SAP logs. For more information, see [Configure your SAP system](sap-deploy-solution.md#configure-your-sap-system).
+
+Required authorizations are listed by log type. You only need the authorizations listed for the types of logs you plan to ingest into Azure Sentinel.
+
+> [!TIP]
+> To create the role with all required authorizations, deploy the SAP change request [NPLK900114](#required-sap-log-change-requests) on your SAP system. This change request creates the **/MSFTSEN/SENTINEL_CONNECTOR** role, and assigns the role to the ABAP user connecting to Azure Sentinel.
+>
+
+| Authorization Object | Field | Value |
+| -- | -- | -- |
+| **All RFC logs** | | |
+| S_RFC | FUGR | /OSP/SYSTEM_TIMEZONE |
+| S_RFC | FUGR | ARFC |
+| S_RFC | FUGR | STFC |
+| S_RFC | FUGR | RFC1 |
+| S_RFC | FUGR | SDIFRUNTIME |
+| S_RFC | FUGR | SMOI |
+| S_RFC | FUGR | SYST |
+| S_RFC | FUGR/FUNC | SRFC/RFC_SYSTEM_INFO |
+| S_RFC | FUGR/FUNC | THFB/TH_SERVER_LIST |
+| S_TCODE | TCD | SM51 |
+| **ABAP Application Log** | | |
+| S_APPL_LOG | ACTVT | Display |
+| S_APPL_LOG | ALG_OBJECT | * |
+| S_APPL_LOG | ALG_SUBOBJ | * |
+| S_RFC | FUGR | SXBP_EXT |
+| S_RFC | FUGR | /MSFTSEN/_APPLOG |
+| **ABAP Change Documents Log** | | |
+| S_RFC | FUGR | /MSFTSEN/_CHANGE_DOCS |
+| **ABAP CR Log** | | |
+| S_RFC | FUGR | CTS_API |
+| S_RFC | FUGR | /MSFTSEN/_CR |
+| S_TRANSPRT | ACTVT | Display |
+| S_TRANSPRT | TTYPE | * |
+| **ABAP DB Table Data Log** | | |
+| S_RFC | FUGR | /MSFTSEN/_TD |
+| S_TABU_DIS | ACTVT | Display |
+| S_TABU_DIS | DICBERCLS | &NC& |
+| S_TABU_DIS | DICBERCLS | + Any object required for logging |
+| S_TABU_NAM | ACTVT | Display |
+| S_TABU_NAM | TABLE | + Any object required for logging |
+| S_TABU_NAM | TABLE | DBTABLOG |
+| **ABAP Job Log** | | |
+| S_RFC | FUGR | SXBP |
+| S_RFC | FUGR | /MSFTSEN/_JOBLOG |
+| **ABAP Job Log, ABAP Application Log** | | |
+| S_XMI_PRD | INTERFACE | XBP |
+| **ABAP Security Audit Log - XAL** | | |
+| All RFC | S_RFC | FUGR |
+| S_ADMI_FCD | S_ADMI_FCD | AUDD |
+| S_RFC | FUGR | SALX |
+| S_USER_GRP | ACTVT | Display |
+| S_USER_GRP | CLASS | * |
+| S_XMI_PRD | INTERFACE | XAL |
+| **ABAP Security Audit Log - XAL, ABAP Job Log, ABAP Application Log** | | |
+| S_RFC | FUGR | SXMI |
+| S_XMI_PRD | EXTCOMPANY | Microsoft |
+| S_XMI_PRD | EXTPRODUCT | Azure Sentinel |
+| **ABAP Security Audit Log - SAL** | | |
+| S_RFC | FUGR | RSAU_LOG |
+| S_RFC | FUGR | /MSFTSEN/_AUDITLOG |
+| **ABAP Spool Log, ABAP Spool Output Log** | | |
+| S_RFC | FUGR | /MSFTSEN/_SPOOL |
+| **ABAP Workflow Log** | | |
+| S_RFC | FUGR | SWRR |
+| S_RFC | FUGR | /MSFTSEN/_WF |
+| | |
+
+## Next steps
+
+For more information, see:
+
+- [Tutorial: Deploy the Azure Sentinel solution for SAP](sap-deploy-solution.md)
+- [Deploy the Azure Sentinel SAP data connector on-premises](sap-solution-deploy-alternate.md)
+- [Azure Sentinel SAP solution logs reference](sap-solution-log-reference.md)
+- [Azure Sentinel SAP solution: available security content](sap-solution-security-content.md)
sentinel Sap Solution Log Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/sap-solution-log-reference.md
+
+ Title: Azure Sentinel SAP solution - Available logs reference | Microsoft Docs
+description: Learn about the SAP logs available from the Azure Sentinel SAP solution.
+ Last updated : 05/12/2021
+# Azure Sentinel SAP solution logs reference (public preview)
+
+This article describes the SAP logs available from the Azure Sentinel SAP data connector, including the table names in Azure Sentinel, the log purposes, and detailed log schemas. Schema field descriptions are based on the field descriptions in the relevant [SAP documentation](https://help.sap.com/).
+
+This article is intended for advanced SAP users.
+
+> [!NOTE]
+> When using the XBP 3.0 interface, the Azure Sentinel SAP solution uses *Not Released* services. These services do not affect the backend system or the connector's behavior.
+>
+> To "release" these services, implement the [SAP Note 2910263 - Unreleased XBP functions](https://launchpad.support.sap.com/#/notes/2910263).
+
+> [!IMPORTANT]
+> The Azure Sentinel SAP solution is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+>
+
+## ABAP Application log
+
+- **Name in Azure Sentinel**: `ABAPAppLog_CL`
+
+- **Related SAP documentation**: [SAP Help Portal](https://help.sap.com/viewer/56bf1265a92e4b4d9a72448c579887af/7.5.7/en-US/c769bcc9f36611d3a6510000e835363f.html)
+
+- **Log purpose**: Records the progress of an application execution so that you can reconstruct it later as needed.
+
+ Available by using RFC with a custom service based on standard services of the XBP interface.
++
+### ABAPAppLog_CL log schema
+
+| Field | Description |
+| | |
+| AppLogDateTime | Application log date time |
+| CallbackProgram | Callback program |
+| CallbackRoutine | Callback routine |
+| CallbackType | Callback type |
+| ClientID | ABAP client ID (MANDT) |
+| ContextDDIC | Context DDIC structure |
+| ExternalID | External log ID |
+| Host | Host |
+| Instance | ABAP instance, in the following syntax: `<HOST>_<SYSID>_<SYSNR>` |
+| InternalMessageSerial | Application log message serial |
+| LevelofDetail | Level of detail |
+| LogHandle | Application log handle |
+| LogNumber | Log number |
+| MessageClass | Message class |
+| MessageNumber | Message number |
+| MessageText | Message text |
+| MessageType | Message type |
+| Object | Application log object |
+| OperationMode | Operation mode |
+| ProblemClass | Problem class |
+| ProgramName | Program name |
+| SortCriterion | Sort criterion |
+| StandardText | Standard text |
+| SubObject | Application log sub object |
+| SystemID | System ID |
+| SystemNumber | System number |
+| TransactionCode | Transaction code |
+| User | User |
+| UserChange | User change |
+| | |
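+
+For example, the following Kusto query is a minimal sketch, using the table and field names listed above, that surfaces the programs generating the most error messages. Using `E` as the error message type is an assumption based on common SAP message types; adjust for your environment.
+
+```kusto
+// Count error-type application log messages per program over the last day.
+ABAPAppLog_CL
+| where TimeGenerated > ago(1d)
+| where MessageType == "E"   // assumption: 'E' denotes error messages
+| summarize ErrorCount = count() by ProgramName, SystemID
+| order by ErrorCount desc
+```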
+
+## ABAP Change Documents log
+
+- **Name in Azure Sentinel**: `ABAPChangeDocsLog_CL`
+
+- **Related SAP documentation**: [SAP Help Portal](https://help.sap.com/viewer/6f51f5216c4b10149358d088a0b7029c/7.01.22/en-US/b8686150ed102f1ae10000000a44176f.html)
+
+- **Log purpose**: Records:
+
+ - Changes to business data objects, logged by SAP NetWeaver Application Server (AS) ABAP in change documents.
+
+ - Changes to other entities in the SAP system, such as user data, roles, and addresses.
+
+ Available by using RFC with a custom service based on standard services.
+
+### ABAPChangeDocsLog_CL log schema
++
+| Field | Description |
+| | - |
+| ActualChangeNum | Actual change number |
+| ChangedTableKey | Changed table key |
+| ChangeNumber | Change number |
+| ClientID | ABAP client ID (MANDT) |
+| CreatedfromPlannedChange | Created from planned change, in the following syntax: `('X' , ' ')`|
+| CurrencyKeyNew | Currency key: new value |
+| CurrencyKeyOld | Currency key: old value |
+| FieldName | Field name |
+| FlagText | Flag text |
+| Host | Host |
+| Instance | ABAP instance, in the following syntax: `<HOST>_<SYSID>_<SYSNR>` |
+| Language | Language |
+| ObjectClass | Object class, such as `BELEG`, `BPAR`, `PFCG`, `IDENTITY` |
+| ObjectID | Object ID |
+| PlannedChangeNum | Planned change number |
+| SystemID | System ID |
+| SystemNumber | System number |
+| TableName | Table name |
+| TransactionCode | Transaction code |
+| TypeofChange_Header | Header type of change, including: <br>`U` = Change; `I` = Insert; `E` = Delete Single Docu; `D` = Delete; `J` = Insert Single Docu |
+| TypeofChange_Item | Item type of change, including: <br>`U` = Change; `I` = Insert; `E` = Delete Single Docu; `D` = Delete; `J` = Insert Single Docu |
+| UOMNew | Unit of measure: new value |
+| UOMOld | Unit of measure: old value |
+| User | User |
+| ValueNew | Field content: new value |
+| ValueOld | Field content: old value |
+| Version | Version |
+| | |
+
+## ABAP CR log
+
+- **Name in Azure Sentinel**: `ABAPCRLog_CL`
+
+- **Related SAP documentation**: [SAP Help Portal](https://help.sap.com/viewer/56bf1265a92e4b4d9a72448c579887af/7.5.7/en-US/c769bcd5f36611d3a6510000e835363f.html)
+
+- **Log purpose**: Includes the Change & Transport System (CTS) logs, including the directory objects and customizations where changes were made.
+
+ Available by using RFC with a custom service based on standard tables and standard services.
+
+> [!NOTE]
+> In addition to application logging, change documents, and table recording, all changes that you make to your production system using the Change & Transport System are documented in the CTS and TMS logs.
+>
++
+### ABAPCRLog_CL log schema
+
+| Field | Description |
+| | |
+| Category | Category (Workbench, Customizing) |
+| ClientID | ABAP client ID (MANDT) |
+| Description | Description |
+| Host | Host |
+| Instance | ABAP instance, in the following syntax: `<HOST>_<SYSID>_<SYSNR>` |
+| ObjectName | Object name |
+| ObjectType | Object type |
+| Owner | Owner |
+| Request | Change request |
+| Status | Status |
+| SystemID | System ID |
+| SystemNumber | System number |
+| TableKey | Table key |
+| TableName | Table name |
+| ViewName | View name |
+| | |
+
+## ABAP DB table data log
+
+- **Name in Azure Sentinel**: `ABAPTableDataLog_CL`
+
+- **Related SAP documentation**: [SAP Help Portal](https://help.sap.com/viewer/56bf1265a92e4b4d9a72448c579887af/7.5.7/en-US/c769bcd2f36611d3a6510000e835363f.html)
+
+- **Log purpose**: Provides logging for those tables that are critical or susceptible to audits.
+
+ Available by using RFC with a custom service.
+
+### ABAPTableDataLog_CL log schema
+
+| Field | Description |
+| - | - |
+| DBLogID | DB log ID |
+| Host | Host |
+| Instance | ABAP instance, in the following syntax: `<HOST>_<SYSID>_<SYSNR>` |
+| Language | Language |
+| LogKey | Log key |
+| NewValue | Field new value |
+| OldValue | Field old value |
+| OperationTypeSQL | Operation type, `Insert`, `Update`, `Delete` |
+| Program | Program name |
+| SystemID | System ID |
+| SystemNumber | System number |
+| TableField | Table field |
+| TableName | Table name |
+| TransactionCode | Transaction code |
+| UserName | User |
+| VersionNumber | Version number |
+| | |
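+
+For example, the following Kusto sketch reviews recent changes to a sensitive table. The `USR02` table (user master data) is used only as an illustration; substitute any table that you audit.
+
+```kusto
+// List recent direct changes to the USR02 table, with old and new values.
+ABAPTableDataLog_CL
+| where TimeGenerated > ago(7d)
+| where TableName == "USR02"   // illustrative sensitive table
+| project TimeGenerated, UserName, TransactionCode, OperationTypeSQL, TableField, OldValue, NewValue
+```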
+
+## ABAP Gateway log
+
+- **Name in Azure Sentinel**: `GW_CL`
+
+- **Related SAP documentation**: [SAP Help Portal](https://help.sap.com/viewer/62b4de4187cb43668d15dac48fc00732/7.5.7/en-US/48b2a710ca1c3079e10000000a42189b.html)
+
+- **Log purpose**: Monitors Gateway activities. Available by the SAP Control Web Service.
+
+### GW_CL log schema
+
+| Field | Description |
+| | - |
+| Host | Host |
+| Instance | ABAP instance, in the following syntax: `<HOST>_<SYSID>_<SYSNR>` |
+| MessageText | Message text |
+| Severity | Message severity: `Debug`, `Info`, `Warning`, `Error` |
+| SystemID | System ID |
+| SystemNumber | System number |
+| | |
+
+## ABAP ICM log
+
+- **Name in Azure Sentinel**: `ICM_CL`
+
+- **Related SAP documentation**: [SAP Help Portal](https://help.sap.com/viewer/683d6a1797a34730a6e005d1e8de6f22/7.52.4/en-US/a10ec40d01e740b58d0a5231736c434e.html)
+
+- **Log purpose**: Records inbound and outbound requests and compiles statistics of the HTTP requests.
+
+ Available by the SAP Control Web Service.
+
+### ICM_CL log schema
+
+| Field | Description |
+| | - |
+| Host | Host |
+| Instance | ABAP instance, in the following syntax: `<HOST>_<SYSID>_<SYSNR>` |
+| MessageText | Message text |
+| Severity | Message severity, including: `Debug`, `Info`, `Warning`, `Error` |
+| SystemID | System ID |
+| SystemNumber | System number |
+| | |
+
+## ABAP Job log
+
+- **Name in Azure Sentinel**: `ABAPJobLog_CL`
+
+- **Related SAP documentation**: [SAP Help Portal](https://help.sap.com/viewer/b07e7195f03f438b8e7ed273099d74f3/7.31.19/en-US/4b2bc0974c594ba2e10000000a42189c.html)
+
+- **Log purpose**: Combines all background processing job logs (SM37).
+
+ Available by using RFC with a custom service based on standard services of the XBP interface.
+
+### ABAPJobLog_CL log schema
++
+| Field | Description |
+| - | -- |
+| ABAPProgram | ABAP program |
+| BgdEventParameters | Background event parameters |
+| BgdProcessingEvent | Background processing event |
+| ClientID | ABAP client ID (MANDT) |
+| DynproNumber | Dynpro number |
+| GUIStatus | GUI status |
+| Host | Host |
+| Instance | ABAP instance, in the following syntax: `<HOST>_<SYSID>_<SYSNR>` |
+| JobClassification | Job classification |
+| JobCount | Job count |
+| JobGroup | Job group |
+| JobName | Job name |
+| JobPriority | Job priority |
+| MessageClass | Message class |
+| MessageNumber | Message number |
+| MessageText | Message text |
+| MessageType | Message type |
+| ReleaseUser | Job release user |
+| SchedulingDateTime | Scheduling date time |
+| StartDateTime | Start date time |
+| SystemID | System ID |
+| SystemNumber | System number |
+| TargetServer | Target server |
+| User | User |
+| UserReleaseInstance | ABAP instance - user release |
+| WorkProcessID | Work process ID |
+| WorkProcessNumber | Work process number |
+| | |
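+
+For example, the following Kusto sketch, using the field names listed above, summarizes recent background job activity by job name and releasing user:
+
+```kusto
+// Summarize background job runs over the last day.
+ABAPJobLog_CL
+| where TimeGenerated > ago(1d)
+| summarize JobRuns = count() by JobName, ReleaseUser, TargetServer
+| order by JobRuns desc
+```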
+
+## ABAP Security Audit log
+
+- **Name in Azure Sentinel**: `ABAPAuditLog_CL`
+
+- **Related SAP documentation**: [SAP Help Portal](https://help.sap.com/viewer/280f016edb8049e998237fcbd80558e7/7.5.7/en-US/4d41bec4aa601c86e10000000a42189b.html)
+
+- **Log purpose**: Records the following data:
+
+ - Security-related changes to the SAP system environment, such as changes to main user records
+ - Information that provides a higher level of data, such as successful and unsuccessful sign-in attempts
+ - Information that enables the reconstruction of a series of events, such as successful or unsuccessful transaction starts
+
+ Available by using the RFC XAL/SAL interfaces. SAL is available starting from SAP Basis version 7.50.
+
+### ABAPAuditLog_CL log schema
+
+| Field | Description |
+| -- | - |
+| ABAPProgramName | Program name, SAL only |
+| AlertSeverity | Alert severity |
+| AlertSeverityText | Alert severity text, SAL only |
+| AlertValue | Alert value |
+| AuditClassID | Audit class ID, SAL only |
+| ClientID | ABAP client ID (MANDT) |
+| Computer | User machine, SAL only |
+| Email | User email |
+| Host | Host |
+| Instance | ABAP instance, in the following syntax: `<HOST>_<SYSID>_<SYSNR>` |
+| MessageClass | Message class |
+| MessageContainerID | Message container ID, XAL Only |
+| MessageID | Message ID, such as `AU1`, `AU2`, and so on |
+| MessageText | Message text |
+| MonitoringObjectName | MTE Monitor object name, XAL only |
+| MonitorShortName | MTE Monitor short name, XAL only |
+| SAPProcesType | System Log: SAP process type, SAL only, such as: <br>`B*` - Background Processing <br>`D*` - Dialog Processing <br>`U*` - Update Tasks |
+| SAPWPName | System Log: Work process number, SAL only |
+| SystemID | System ID |
+| SystemNumber | System number |
+| TerminalIPv6 | User machine IP, SAL only |
+| TransactionCode | Transaction code, SAL only |
+| User | User |
+| Variable1 | Message variable 1 |
+| Variable2 | Message variable 2 |
+| Variable3 | Message variable 3 |
+| Variable4 | Message variable 4 |
+| | |
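+
+For example, the following Kusto sketch looks for users with repeated failed sign-in attempts. Using `AU2` as the failed-logon message ID is an assumption based on common Security Audit Log message IDs; verify it against your audit configuration.
+
+```kusto
+// Find users with more than five failed sign-in attempts in the last week.
+ABAPAuditLog_CL
+| where TimeGenerated > ago(7d)
+| where MessageID == "AU2"   // assumption: AU2 = failed logon
+| summarize FailedAttempts = count() by User, ClientID, SystemID
+| where FailedAttempts > 5
+```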
+
+## ABAP Spool log
+
+- **Name in Azure Sentinel**: `ABAPSpoolLog_CL`
+
+- **Related SAP documentation**: [SAP Help Portal](https://help.sap.com/viewer/290ce8983cbc4848a9d7b6f5e77491b9/7.52.1/en-US/4eae791c40f72045e10000000a421937.html)
+
+- **Log purpose**: Serves as the main log for SAP printing, with the history of spool requests (transaction code `SP01`).
+
+ Available by using RFC with a custom service based on standard tables.
+
+### ABAPSpoolLog_CL log schema
+
+| Field | Description |
+| -- | |
+| ArchiveStatus | Archive status |
+| ArchiveType | Archive type |
+| ArchivingDevice | Archiving device |
+| AutoRereoute | Auto reroute |
+| ClientID | ABAP client ID (MANDT) |
+| CountryKey | Country key |
+| DeleteSpoolRequestAuto | Delete spool request auto |
+| DelFlag | Deletion flag |
+| Department | Department |
+| DocumentType | Document type |
+| ExternalMode | External mode |
+| FormatType | Format type |
+| Host | Host |
+| Instance | ABAP instance, in the following syntax: `<HOST>_<SYSID>_<SYSNR>` |
+| NumofCopies | Number of copies |
+| OutputDevice | Output device |
+| PrinterLongName | Printer long name |
+| PrintImmediately | Print immediately |
+| PrintOSCoverPage | Print OSCover page |
+| PrintSAPCoverPage | Print SAPCover page |
+| Priority | Priority |
+| RecipientofSpoolRequest | Recipient of spool request |
+| SpoolErrorStatus | Spool error status |
+| SpoolRequestCompleted | Spool request completed |
+| SpoolRequestisALogForAnotherRequest | Spool request is a log for another request |
+| SpoolRequestName | Spool request name |
+| SpoolRequestNumber | Spool request number |
+| SpoolRequestSuffix1 | Spool request suffix1 |
+| SpoolRequestSuffix2 | Spool request suffix2 |
+| SpoolRequestTitle | Spool request title |
+| SystemID | System ID |
+| SystemNumber | System number |
+| TelecommunicationsPartner | Telecommunications partner |
+| TelecommunicationsPartnerE | Telecommunications partner E |
+| TemSeGeneralcounter | Temse counter |
+| TemseNumAddProtectionRule | Temse number add protection rule |
+| TemseNumChangeProtectionRule | Temse number change protection rule |
+| TemseNumDeleteProtectionRule | Temse number delete protection rule |
+| TemSeObjectName | Temse object name |
+| TemSeObjectPart | TemSe object part |
+| TemseReadProtectionRule | Temse read protection rule |
+| User | User |
+| ValueAuthCheck | Value auth check |
+| | |
+
+## ABAP Spool Output log
+
+- **Name in Azure Sentinel**: `ABAPSpoolOutputLog_CL`
+
+- **Related SAP documentation**: [SAP Help Portal](https://help.sap.com/viewer/290ce8983cbc4848a9d7b6f5e77491b9/7.52.1/en-US/4eae779e40f72045e10000000a421937.html)
+
+- **Log purpose**: Serves as the main log for SAP printing, with the history of spool output requests (transaction code `SP02`).
+
+ Available by using RFC with a custom service based on standard tables.
+
+### ABAPSpoolOutputLog_CL log schema
+
+| Field | Description |
+| - | -- |
+| AppServer | Application server |
+| ClientID | ABAP client ID (MANDT) |
+| Comment | Comment |
+| CopyCount | Copy count |
+| CopyCounter | Copy counter |
+| Department | Department |
+| ErrorSpoolRequestNumber | Error request number |
+| FormatType | Format type |
+| Host | Host |
+| HostName | Host name |
+| HostSpoolerID | Host spooler ID |
+| Instance | ABAP instance |
+| LastPage | Last page |
+| NumofCopies | Number of copies |
+| OutputDevice | Output device |
+| OutputRequestNumber | Output request number |
+| OutputRequestStatus | Output request status |
+| PhysicalFormatType | Physical format type |
+| PrinterLongName | Printer long name |
+| PrintRequestSize | Print request size |
+| Priority | Priority |
+| ReasonforOutputRequest | Reason for output request |
+| RecipientofSpoolRequest | Recipient of spool request |
+| SpoolNumberofOutputReqProcessed | Number of output requests - processed |
+| SpoolNumberofOutputReqWithErrors | Number of output requests - with errors |
+| SpoolNumberofOutputReqWithProblems | Number of output requests - with problems |
+| SpoolRequestNumber | Spool request number |
+| StartPage | Start page |
+| SystemID | System ID |
+| SystemNumber | System number |
+| TelecommunicationsPartner | Telecommunications partner |
+| TemSeGeneralcounter | Temse counter |
+| Title | Title |
+| User | User |
+| | |
++
+## ABAP SysLog
+
+- **Name in Azure Sentinel**: `SysLog_CL`
+
+- **Related SAP documentation**: [SAP Help Portal](https://help.sap.com/viewer/56bf1265a92e4b4d9a72448c579887af/7.5.7/en-US/c769bcbaf36611d3a6510000e835363f.html)
+
+- **Log purpose**: Records all SAP NetWeaver Application Server (SAP NetWeaver AS) ABAP system errors, warnings, user locks because of failed sign-in attempts from known users, and process messages.
+
+ Available by the SAP Control Web Service.
+
+### SysLog_CL log schema
++
+| Field | Description |
+| - | - |
+| ClientID | ABAP client ID (MANDT) |
+| Host | Host |
+| Instance | ABAP instance, in the following syntax: `<HOST>_<SYSID>_<SYSNR>` |
+| MessageNumber | Message number |
+| MessageText | Message text |
+| Severity | Message severity, one of the following values: `Debug`, `Info`, `Warning`, `Error` |
+| SystemID | System ID |
+| SystemNumber | System number |
+| TransacationCode | Transaction code |
+| Type | SAP process type |
+| User | User |
+| | |
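+
+For example, the following Kusto sketch, using the field names listed above, ranks warning- and error-severity system log messages by frequency per instance:
+
+```kusto
+// Rank warning and error system log messages by frequency.
+SysLog_CL
+| where Severity in ("Warning", "Error")
+| summarize MessageCount = count() by Instance, MessageNumber, MessageText
+| order by MessageCount desc
+```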
++
+## ABAP Workflow log
+
+- **Name in Azure Sentinel**: `ABAPWorkflowLog_CL`
+
+- **Related SAP documentation**: [SAP Help Portal](https://help.sap.com/viewer/56bf1265a92e4b4d9a72448c579887af/7.5.7/en-US/c769bcccf36611d3a6510000e835363f.html)
+
+- **Log purpose**: The SAP Business Workflow (WebFlow Engine) enables you to define business processes that aren't yet mapped in the SAP system.
+
+ For example, unmapped business processes may be simple release or approval procedures, or more complex business processes such as creating base material and then coordinating the associated departments.
+
+ Available by using RFC with a custom service based on standard tables and standard services.
+
+### ABAPWorkflowLog_CL log schema
++
+| Field | Description |
+| - | -- |
+| ActualAgent | Actual agent |
+| Address | Address |
+| ApplicationArea | Application area |
+| CallbackFunction | Callback function |
+| ClientID | ABAP client ID (MANDT) |
+| CreationDateTime | Creation date time |
+| Creator | Creator |
+| CreatorAddress | Creator address |
+| ErrorType | Error type |
+| ExceptionforMethod | Exception for method |
+| Host | Host |
+| Instance | ABAP instance, in the following syntax: `<HOST>_<SYSID>_<SYSNR>` |
+| Language | Language |
+| LogCounter | Log counter |
+| MessageNumber | Message number |
+| MessageType | Message type |
+| MethodUser | Method user |
+| Priority | Priority |
+| SimpleContainer | Simple container, packed as a list of Key-Value entities for the work item |
+| Status | Status |
+| SuperWI | Super WI |
+| SystemID | System ID |
+| SystemNumber | System number |
+| TaskID | Task ID |
+| TasksClassification | Task classifications |
+| TaskText | Task text |
+| TopTaskID | Top task ID |
+| UserCreated | User created |
+| WIText | Work item text |
+| WIType | Work item type |
+| WorkflowAction | Workflow action |
+| WorkItemID | Work item ID |
+| | |
+
+## ABAP WorkProcess log
+
+- **Name in Azure Sentinel**: `WP_CL`
+
+- **Related SAP documentation**: [SAP Help Portal](https://help.sap.com/viewer/d0739d980ecf42ae9f3b4c19e21a4b6e/7.3.15/en-US/46fb763b6d4c5515e10000000a1553f6.html)
+
+- **Log purpose**: Combines all work process logs (by default, `dev_*`).
+
+ Available by the SAP Control Web Service.
+
+### WP_CL log schema
++
+| Field | Description |
+| | - |
+| Host | Host |
+| Instance | ABAP instance, in the following syntax: `<HOST>_<SYSID>_<SYSNR>` |
+| MessageText | Message text |
+| Severity | Message severity: `Debug`, `Info`, `Warning`, `Error` |
+| SystemID | System ID |
+| SystemNumber | System number |
+| WPNumber | Work process number |
+| | |
++
+## HANA DB Audit Trail
+
+- **Name in Azure Sentinel**: `Syslog`
+
+- **Related SAP documentation**: [General](https://help.sap.com/viewer/6b94445c94ae495c83a19646e7c3fd56/2.0.03/en-US/48fd6586304c4f859bf92d64d0cd8b08.html) | [Audit Trail](https://help.sap.com/viewer/b3ee5778bc2e4a089d3299b82ec762a7/2.0.03/en-US/0a57444d217649bf94a19c0b68b470cc.html)
+
+- **Log purpose**: Records user actions, or attempted actions in the SAP HANA database. For example, enables you to log and monitor read access to sensitive data.
+
+ Available by the Sentinel Linux Agent for Syslog.
+
+### Syslog log schema
+
+| Field | Description |
+| - | |
+| Computer | Host name |
+| HostIP | Host IP |
+| HostName | Host name |
+| ProcessID | Process ID |
+| ProcessName | Process name: `HDB*` |
+| SeverityLevel | Alert |
+| SourceSystem | Source system OS, `Linux` |
+| SyslogMessage | Message, an unparsed audit trail message |
+| | |
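+
+Because the audit trail arrives unparsed in the `SyslogMessage` field, filter on the `HDB*` process name first. For example, as a minimal sketch:
+
+```kusto
+// Review recent HANA DB audit trail entries forwarded over Syslog.
+Syslog
+| where ProcessName startswith "HDB"
+| project TimeGenerated, HostName, SeverityLevel, SyslogMessage
+| take 100
+```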
+
+## JAVA files
+
+- **Name in Azure Sentinel**: `JavaFilesLogsCL`
+
+- **Related SAP documentation**: [General](https://help.sap.com/viewer/2f8b1599655d4544a3d9c6d1a9b6546b/7.5.9/en-US/485059dfe31672d4e10000000a42189c.html) | [Java Security Audit Log](https://help.sap.com/viewer/1531c8a1792f45ab95a4c49ba16dc50b/7.5.9/en-US/4b6013583840584ae10000000a42189c.html)
+
+- **Log purpose**: Combines all Java file-based logs, including the Security Audit Log, and System (cluster and server process), Performance, and Gateway logs. Also includes Developer Traces and Default Trace logs.
+
+ Available by the SAP Control Web Service.
+
+### JavaFilesLogsCL log schema
++
+| Field | Description |
+| - | -- |
+| Application | Java application |
+| ClientID | Client ID |
+| CSNComponent | CSN component, such as `BC-XI-IBD` |
+| DCComponent | DC component, such as `com.sap.xi.util.misc` |
+| DSRCounter | DSR counter |
+| DSRRootContentID | DSR context GUID |
+| DSRTransaction | DSR transaction GUID |
+| Host | Host |
+| Instance | Java instance, in the following syntax: `<HOST>_<SYSID>_<SYSNR>` |
+| Location | Java class |
+| LogName | Java log name, such as: `Available`, `defaulttrace`, `dev*`, `security`, and so on |
+| MessageText | Message text |
+| MNo | Message number |
+| Pid | Process ID |
+| Program | Program name |
+| Session | Session |
+| Severity | Message severity, including: `Debug`,`Info`,`Warning`,`Error` |
+| Solution | Solution |
+| SystemID | System ID |
+| SystemNumber | System number |
+| ThreadName | Thread name |
+| Thrown | Exception thrown |
+| TimeZone | Timezone |
+| User | User |
+| | |
+
+## Next steps
+
+For more information, see:
+
+- [Tutorial: Deploy the Azure Sentinel solution for SAP](sap-deploy-solution.md)
+- [Azure Sentinel SAP solution detailed SAP requirements](sap-solution-detailed-requirements.md)
+- [Deploy the Azure Sentinel SAP data connector on-premises](sap-solution-deploy-alternate.md)
+- [Azure Sentinel SAP solution: built-in security content](sap-solution-security-content.md)
sentinel Sap Solution Security Content https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/sap-solution-security-content.md
+
+ Title: Azure Sentinel SAP solution - security content reference | Microsoft Docs
+description: Learn about the built-in security content provided by the Azure Sentinel SAP solution.
+ Last updated : 05/12/2021
+# Azure Sentinel SAP solution: security content reference (public preview)
+
+This article details the security content available for the [Azure Sentinel SAP solution](sap-deploy-solution.md#deploy-sap-security-content).
+
+Available security content includes a built-in workbook and built-in analytics rules. You can also add SAP-related [watchlists](watchlists.md) to use in your search, detection rules, threat hunting, and response playbooks.
+
+> [!IMPORTANT]
+> The Azure Sentinel SAP solution is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+>
++
+## SAP - System Applications and Products workbook
+
+Use the [SAP - System Applications and Products](sap-deploy-solution.md#deploy-sap-security-content) workbook to visualize and monitor the data ingested via the SAP data connector.
+
+For more information, see [Tutorial: Visualize and monitor your data](tutorial-monitor-your-data.md).
+
+## Built-in analytics rules
+
+The following tables list the built-in [analytics rules](sap-deploy-solution.md#deploy-sap-security-content) that are included in the Azure Sentinel SAP solution, deployed from the Azure Sentinel Solutions marketplace.
+
+### High-level, built-in SAP solution analytics rules
+
+|Rule name |Description |Source action |Tactics |
+|||||
+|**SAP - High - Change in Sensitive privileged user** | Identifies changes of sensitive privileged users. <br> <br>Maintain privileged users in the [SAP - Privileged Users](#users) watchlist. | Change user details / authorizations using `SU01`. <br><br>**Data sources**: SAPcon - Audit Log | Privilege Escalation, Credential Access |
+|**SAP - High - Client Configuration Change** | Identifies changes for client configuration such as the client role or the change recording mode. | Perform client configuration changes using the `SCC4` transaction code. <br><br>**Data sources**: SAPcon - Audit Log | Defense Evasion, Exfiltration, Persistence |
+|**SAP - High - Data has Changed during Debugging Activity** | Identifies changes for runtime data during a debugging activity. | 1. Activate Debug ("/h"). <br>2. Select a field for change and update its value.<br><br>**Data sources**: SAPcon - Audit Log | Execution, Lateral Movement |
+|**SAP - High - Deactivation of Security Audit Log** | Identifies deactivation of the Security Audit Log. | Disable the Security Audit Log using `SM19/RSAU_CONFIG`. <br><br>**Data sources**: SAPcon - Audit Log | Exfiltration, Defense Evasion, Persistence |
+|**SAP - High - Execution of a Sensitive ABAP Program** |Identifies the direct execution of a sensitive ABAP program. <br><br>Maintain ABAP Programs in the [SAP - Sensitive ABAP Programs](#programs) watchlist. | Run a program directly using `SE38`/`SA38`/`SE80`. <br> <br>**Data sources**: SAPcon - Audit Log | Exfiltration, Lateral Movement, Execution |
+|**SAP - High - Execution of a Sensitive Transaction Code** | Identifies the execution of a sensitive Transaction Code. <br><br>Maintain transaction codes in the [SAP - Sensitive Transaction Codes](#transactions) watchlist. | Run a sensitive transaction code. <br><br>**Data sources**: SAPcon - Audit Log | Discovery, Execution |
+|**SAP - High - Function Module tested** | Identifies the testing of a function module. | Test a function module using `SE37` / `SE80`. <br><br>**Data sources**: SAPcon - Audit Log | Collection, Defense Evasion, Lateral Movement |
+|**SAP - High - HANA DB - Assign Admin Authorizations** | Identifies admin privilege or role assignment. | Assign a user with any admin role or privileges. <br><br>**Data sources**: Linux Agent - Syslog | Privilege Escalation |
+|**SAP - High - HANA DB - Audit Trail Policy Changes** | Identifies changes for HANA DB audit trail policies. | Create or update the existing audit policy in security definitions. <br> <br>**Data sources**: Linux Agent - Syslog | Lateral Movement, Defense Evasion, Persistence |
+|**SAP - High - HANA DB - Deactivation of Audit Trail** | Identifies the deactivation of the HANA DB audit log. | Deactivate the audit log in the HANA DB security definition. <br><br>**Data sources**: Linux Agent - Syslog | Persistence, Lateral Movement, Defense Evasion |
+| **SAP - High - HANA DB - User Admin actions** | Identifies user administration actions. | Create, update, or delete a database user. <br><br>**Data Sources**: Linux Agent - Syslog | Privilege Escalation |
+|**SAP - High - Login from unexpected network** | Identifies a sign-in from an unexpected network. <br><br>Maintain networks in the [SAP - Networks](#networks) watchlist. | Sign in to the backend system from an IP address that is not assigned to one of the networks. <br><br>**Data sources**: SAPcon - Audit Log | Initial Access |
+|**SAP - High - RFC Execution of a Sensitive Function Module** | Identifies the execution of a sensitive function module by RFC. <br><br>Maintain function modules in the [SAP - Sensitive Function Modules](#module) watchlist. | Run a function module using RFC. <br><br>**Data sources**: SAPcon - Audit Log | Execution, Lateral Movement, Discovery |
+|**SAP - High - Sensitive privileged user logged in** | Identifies the Dialog sign-in of a sensitive privileged user. <br><br>Maintain privileged users in the [SAP - Privileged Users](#users) watchlist. | Sign in to the backend system using `SAP*` or another privileged user. <br><br>**Data sources**: SAPcon - Audit Log | Initial Access, Credential Access |
+| **SAP - High - Sensitive privileged user makes a change in other user** | Identifies changes made by sensitive, privileged users to other users. | Change user details / authorizations using `SU01`. <br><br>**Data Sources**: SAPcon - Audit Log | Privilege Escalation, Credential Access |
+|**SAP - High - System Configuration Change** | Identifies changes for system configuration. | Adapt system change options or software component modification using the `SE06` transaction code.<br><br>**Data sources**: SAPcon - Audit Log |Exfiltration, Defense Evasion, Persistence |
+| | | | |
+
+### Medium-level, built-in SAP solution analytics rules
+
+|Rule name |Description |Source action |Tactics |
+|||||
+|**SAP - Medium - Assignment of a sensitive profile** | Identifies new assignments of a sensitive profile to a user. <br><br>Maintain sensitive profiles in the [SAP - Sensitive Profiles](#profiles) watchlist. | Assign a profile to a user using `SU01`. <br><br>**Data sources**: SAPcon - Change Documents Log | Privilege Escalation |
+|**SAP - Medium - Assignment of a sensitive role** | Identifies new assignments for a sensitive role to a user. <br><br>Maintain sensitive roles in the [SAP - Sensitive Roles](#roles) watchlist.| Assign a role to a user using `SU01` / `PFCG`. <br><br>**Data sources**: SAPcon - Change Documents Log, Audit Log | Privilege Escalation |
+|**SAP - Medium - Brute force attacks** | Identifies brute force attacks on the SAP system, according to failed sign-in attempts for the backend system. | Attempt to sign in from the same IP address to several systems/clients within the scheduled time interval. <br><br>**Data sources**: SAPcon - Audit Log | Credential Access |
+|**SAP - Medium - Critical authorizations assignment - New Authorization Value** | Identifies the assignment of a new critical authorization object value in a role. <br><br>Maintain critical authorization objects in the [SAP - Critical Authorization Objects](#objects) watchlist. | Assign a new authorization object or update an existing one in a role, using `PFCG`. <br><br>**Data sources**: SAPcon - Change Documents Log | Privilege Escalation |
+|**SAP - Medium - Critical authorizations assignment - New User Assignment** | Identifies the assignment of a critical authorization object value to a new user. <br><br>Maintain critical authorization objects in the [SAP - Critical Authorization Objects](#objects) watchlist. | Assign a new user to a role that holds critical authorization values, using `SU01`/`PFCG`. <br><br>**Data sources**: SAPcon - Change Documents Log | Privilege Escalation |
+|**SAP - Medium - Debugging Activities** | Identifies all debugging-related activities. | Activate Debug ("/h") in the system, debug an active process, add a breakpoint to source code, and so on. <br><br>**Data sources**: SAPcon - Audit Log | Discovery |
+|**SAP - Medium - Multiple Logons by IP** | Identifies the sign-in of several users from same IP address within a scheduled time interval. | Sign in using several users through the same IP address. <br><br>**Data sources**: SAPcon - Audit Log | Initial Access |
+|**SAP - Medium - Multiple Logons by User** | Identifies sign-ins of the same user from several terminals within a scheduled time interval. <br><br>Available only via the Audit SAL method, for SAP versions 7.5 and higher. | Sign in using the same user, using different IP addresses. <br><br>**Data sources**: SAPcon - Audit Log | PreAttack, Credential Access, Initial Access, Collection |
+|**SAP - Medium - Security Audit Log Configuration Change** | Identifies changes in the configuration of the Security Audit Log. | Change any Security Audit Log configuration using `SM19`/`RSAU_CONFIG`, such as the filters, status, recording mode, and so on. <br><br>**Data sources**: SAPcon - Audit Log | Persistence, Exfiltration, Defense Evasion |
+|**SAP - Medium - Transaction is unlocked** |Identifies unlocking of a transaction. | Unlock a transaction code using `SM01`/`SM01_DEV`/`SM01_CUS`. <br><br>**Data sources**: SAPcon - Audit Log | Persistence, Execution |
+| | | | |
+
+### Low-level, built-in SAP solution analytics rules
+
+|Rule name |Description |Source action |Tactics |
+|||||
+|**SAP - Low - Multiple Password Changes by User** | Identifies multiple password changes by user. | Change a user password. <br><br>**Data sources**: SAPcon - Audit Log | Credential Access |
+|**SAP - Low - Sensitive Tables Direct Access By Dialog Logon** | Identifies generic table access via dialog sign-in. | Open table contents using `SE11`/`SE16`/`SE16N`. <br><br>**Data sources**: SAPcon - Audit Log | Discovery |
+| | | | |
+
+## Available watchlists
+
+The following table lists the [watchlists](sap-deploy-solution.md#deploy-sap-security-content) available for the Azure Sentinel SAP solution, and the fields in each watchlist.
+
+These watchlists provide the configuration for the Azure Sentinel SAP Continuous Threat Monitoring solution, and are accessible in the Azure Sentinel GitHub repository at https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/SAP/Analytics/Watchlists.
++
+|Watchlist name |Description and fields |
+|||
+|<a name="objects"></a>**SAP - Critical Authorization Objects** | Critical Authorizations object, where assignments should be governed. <br><br>- **AuthorizationObject**: An SAP authorization object, such as `S_DEVELOP`, `S_TCODE`, or `Table TOBJ` <br>- **AuthorizationField**: An SAP authorization field, such as `OBJTYP` or `TCD` <br>- **AuthorizationValue**: An SAP authorization field value, such as `DEBUG` <br>- **ActivityField** : SAP activity field. For most cases, this value will be `ACTVT`. For Authorizations objects without an **Activity**, or with only an **Activity** field, filled with `NOT_IN_USE`. <br>- **Activity**: SAP activity, according to the authorization object, such as: `01`: Create; `02`: Change; `03`: Display, and so on. <br>- **Description**: A meaningful Critical Authorization Object description. |
+|**SAP - Excluded Networks** | For internal maintenance of excluded networks, such as to ignore web dispatchers, terminal servers, and so on. <br><br>- **Network**: A network IP address or range, such as `111.68.128.0/17`. <br>- **Description**: A meaningful network description.|
+|**SAP - Excluded Users** | System users that are signed in to the system and must be ignored, such as for the alert about multiple sign-ins by the same user. <br><br>- **User**: SAP user <br>- **Description**: A meaningful user description. |
+|<a name="networks"></a>**SAP - Networks** | Internal and maintenance networks for identification of unauthorized logins. <br><br>- **Network**: Network IP address or range, such as `111.68.128.0/17` <br>- **Description**: A meaningful network description.|
+|<a name="users"></a>**SAP - Privileged Users** | Privileged users that are under extra restrictions. <br><br>- **User**: the ABAP user, such as `DDIC` or `SAP` <br>- **Description**: A meaningful user description. |
+|<a name= "programs"></a>**SAP - Sensitive ABAP Programs** | Sensitive ABAP programs (reports), where execution should be governed. <br><br>- **ABAPProgram**: ABAP program or report, such as `RSPFLDOC` <br>- **Description**: A meaningful program description.|
+|<a name="module"></a>**SAP - Sensitive Function Module** | Internal and maintenance networks for identification of unauthorized logins. <br><br>- **FunctionModule**: An ABAP function module, such as `RSAU_CLEAR_AUDIT_LOG` <br>- **Description**: A meaningful module description. |
+|<a name="profiles"></a>**SAP - Sensitive Profiles** | Sensitive profiles, where assignments should be governed. <br><br>- **Profile**: SAP authorization profile, such as `SAP_ALL` or `SAP_NEW` <br>- **Description**: A meaningful profile description.|
+|<a name="tables"></a>**SAP - Sensitive Tables** | Sensitive tables, where access should be governed. <br><br>- **Table**: ABAP Dictionary Table, such as `USR02` or `PA008` <br>- **Description**: A meaningful table description. |
+|<a name="roles"></a>**SAP - Sensitive Roles** | Sensitive roles, where assignment should be governed. <br><br>- **Role**: SAP authorization role, such as `SAP_BC_BASIS_ADMIN` <br>- **Description**: A meaningful role description. |
+|<a name="transactions"></a>**SAP - Sensitive Transactions** | Sensitive transactions where execution should be governed. <br><br>- **TransactionCode**: SAP transaction code, such as `RZ11` <br>- **Description**: A meaningful code description. |
+|<a name="systems"></a>**SAP - Systems** | Describes the landscape of SAP systems according to role and usage.<br><br>- **SystemID**: the SAP system ID (SYSID) <br>- **SystemRole**: the SAP system role, one of the following values: `Sandbox`, `Development`, `Quality Assurance`, `Training`, `Production` <br>- **SystemUsage**: The SAP system usage, one of the following values: `ERP`, `BW`, `Solman`, `Gateway`, `Enterprise Portal` |
+| | |
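+
+For example, the following Kusto sketch joins audit log events against the privileged users watchlist by using the built-in `_GetWatchlist` function. The watchlist alias passed to `_GetWatchlist` is an assumption; use the alias defined in your workspace.
+
+```kusto
+// Flag audit log events generated by watchlisted privileged users.
+let privilegedUsers = _GetWatchlist('SAP - Privileged Users')   // assumption: watchlist alias
+    | project User = tostring(User);
+ABAPAuditLog_CL
+| where TimeGenerated > ago(1d)
+| join kind=inner privilegedUsers on User
+| project TimeGenerated, User, MessageID, MessageText, SystemID
+```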
++
+## Next steps
+
+For more information, see:
+
+- [Tutorial: Deploy the Azure Sentinel solution for SAP](sap-deploy-solution.md)
+- [Azure Sentinel SAP solution logs reference](sap-solution-log-reference.md)
+- [Deploy the Azure Sentinel SAP data connector on-premises](sap-solution-deploy-alternate.md)
+- [Azure Sentinel SAP solution detailed SAP requirements](sap-solution-detailed-requirements.md)
sentinel Sentinel Solutions Catalog https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/sentinel-solutions-catalog.md
+
+ Title: Azure Sentinel solutions catalog | Microsoft Docs
+description: This article lists and describes the Azure Sentinel solutions packages currently available.
+
+ Last updated : 05/05/2021
+# Azure Sentinel solutions catalog
+
+> [!IMPORTANT]
+>
+> The Azure Sentinel solutions experience is currently in **PREVIEW**, as are all individual solution packages. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+[Azure Sentinel solutions](sentinel-solutions.md) provide a consolidated way to acquire Azure Sentinel content - like data connectors, workbooks, analytics, and automations - in your workspace with a single deployment step.
+
+## Currently available solutions
+
+The following solutions are now available from Azure Sentinel. They are all free to install, though usual charges apply for data ingestion, Logic Apps use, and so on.
+
+| Solution | Description |
+| - | - |
+| **Azure Firewall Solution for Sentinel** | Azure Firewall is a managed, cloud-based network security service that protects your Azure Virtual Network resources. It's a fully stateful firewall-as-a-service with built-in high availability and unrestricted cloud scalability. This Azure Firewall solution in Sentinel provides built-in customizable threat detection on top of Azure Sentinel. The solution contains a workbook, detections, hunting queries and playbooks. |
+| **Azure Sentinel 4 Dynamics 365** | The Continuous Threat Monitoring for Dynamics 365 solution provides you with the ability to collect Dynamics 365 logs, gain visibility into activities within Dynamics 365, and analyze them to detect threats and malicious activities. The solution includes a data connector, workbooks, analytics rules, and hunting queries. |
+| **Azure Sentinel for Teams** | Teams serves a central role in both communication and data sharing in the Microsoft 365 Cloud. Because the Teams service touches on so many underlying technologies in the Cloud, it can benefit from human and automated analysis not only when it comes to hunting in logs, but also in real-time monitoring of meetings. Azure Sentinel offers admins these solutions. The solution includes analytics rules, hunting queries, and playbooks. |
+| **Box Solution** | [Box](https://www.box.com/overview) is a single, secure, easy-to-use platform built for the entire content lifecycle, from file creation and sharing, to co-editing, signature, classification, and retention. |
+| **Cisco ISE Solution** | The [Cisco Identity Services Engine (ISE)](https://www.cisco.com/c/en/us/products/collateral/security/identity-services-engine/data_sheet_c78-656174.html) is your one-stop solution to streamline security policy management and reduce operating costs. With ISE, you can see users and devices controlling access across wired, wireless, and VPN connections to the corporate network. |
+| **Cisco Umbrella Solution** | [Cisco Umbrella](https://umbrella.cisco.com/) offers flexible, cloud-delivered security when and how you need it. It combines multiple security functions into one solution, so you can extend protection to devices, remote users, and distributed locations anywhere. Umbrella is the easiest way to effectively protect your users everywhere in minutes. |
+| **Cloudflare Solution** | [Cloudflare](https://www.cloudflare.com/) secures and ensures the reliability of your external-facing resources such as websites, APIs, and applications. It protects your internal resources such as behind-the-firewall applications, teams, and devices. And it is your platform for developing globally scalable applications. |
+| **Continuous Threat Monitoring for SAP** | [Continuous Threat Monitoring for SAP](sap-deploy-solution.md) provides both stateless and stateful connections for logs from the entire SAP system landscape, and collects logs from both Advanced Business Application Programming (ABAP) and the OS. |
+| **Contrast Protect Azure Sentinel Solution** | [Contrast Protect](https://www.contrastsecurity.com/runtime-application-self-protection-rasp) empowers teams to defend their applications anywhere they run, by embedding an automated and accurate runtime protection capability within the application to continuously monitor and block attacks. By focusing on actionable and timely application layer threat intelligence, we make it easier for security and operation teams to understand and manage the severity of threats and attacks. Contrast Protect seamlessly integrates into Azure Sentinel so you can gain greater insight into security risks at the application layer. |
+| **Corelight for Azure Sentinel** | [Corelight](https://corelight.com/) for Azure Sentinel enables incident responders and threat hunters who use Azure Sentinel to work faster and more effectively. Corelight provides a network detection and response (NDR) solution, based on best-of-breed open-source technologies Zeek and Suricata, that enables network defenders to get broad visibility into their environments.<br>The data connector enables ingestion of events from Zeek and Suricata via Corelight Sensors into Azure Sentinel. Corelight for Azure Sentinel also includes workbooks and dashboards, hunting queries, and analytic rules to help organizations drive efficient investigations and incident response with the combination of Corelight and Azure Sentinel. |
+| **CrowdStrike Falcon Endpoint Protection Solution** | [CrowdStrike Falcon Endpoint Protection](https://www.crowdstrike.com/resources/data-sheets/falcon-endpoint-protection-pro/) offers the ideal antivirus replacement solution by combining the most effective prevention technologies and full attack visibility with built-in threat intelligence and response. |
+| **HYAS Insight for Azure Sentinel Solutions Gallery** | [HYAS Insight](https://www.hyas.com/hyas-insight) is a threat investigation and attribution solution that uses exclusive data sources and non-traditional mechanisms to improve visibility and productivity for analysts, researchers, and investigators while increasing the accuracy of findings. HYAS Insight connects attack instances and campaigns to billions of indicators of compromise to deliver insights and visibility. With an easy-to-use user interface, transforms, and API access, HYAS Insight combines rich threat data into a powerful research and attribution solution. HYAS Insight is complemented by the HYAS Intelligence team that helps organizations to better understand the nature of the threats they face on a daily basis. |
+| **Infoblox Cloud Data Connector Solution** | [BloxOne DDI](https://www.infoblox.com/products/bloxone-ddi/) is the industry's first DDI (DNS/DHCP/IPAM) solution that enables you to centrally manage and automate DDI from the cloud to any and all locations with unprecedented cost efficiency. Built using cloud-native principles and available as a SaaS service, BloxOne DDI greatly simplifies network management by eliminating the complexity, bottlenecks, and scalability limitations of traditional DDI implementations.<br>BloxOne Threat Defense maximizes brand protection by working with your existing defenses to protect your network and automatically extend security to your digital imperatives, including SD-WAN, IoT, and the cloud. It powers security orchestration, automation, and response (SOAR) solutions, slashes the time to investigate and remediate cyberthreats, optimizes the performance of the entire security ecosystem, and reduces the total cost of enterprise threat defense. |
+| **McAfee ePolicy Orchestrator Solution** | The [McAfee ePO](https://www.mcafee.com/enterprise/en-in/products/epolicy-orchestrator.html) is a centralized policy management and enforcement for your endpoints and enterprise security products. McAfee ePO monitors and manages your network, detecting threats and protecting endpoints against these threats. |
+| **Oracle Database Audit Solution** | The [Oracle Database](https://www.oracle.com/database/technologies/security/db-auditing.html) provides robust audit support in both the Enterprise and Standard Edition of the database. Oracle Database Unified Auditing enables selective and effective auditing inside the Oracle database using policies and conditions. |
+| **Palo Alto Prisma Solution** | [Prisma Cloud](https://www.paloaltonetworks.com/prisma/cloud) is an industry-leading comprehensive Cloud Native Security Platform that delivers full lifecycle security and full stack protection for multi- and hybrid-cloud environments. |
+| **PingFederate Solution** | [PingFederate®](https://www.pingidentity.com/en/resources/client-library/data-sheets/pingfederate-data-sheet.html) is the leading enterprise federation server for user authentication and standards-based single sign-on (SSO) for employee, partner, and customer identity types. PingFederate allows organizations to break free from expensive, inflexible legacy IAM solutions, and instead apply a modern identity and access management solution designed to meet complex enterprise demands. |
+| **Proofpoint POD Solution** | [Proofpoint on Demand Email Security](https://www.proofpoint.com/us/products/email-security-and-protection/email-protection) accurately classifies various types of email, while detecting and blocking threats that don't involve malicious payload. You can automatically tag suspicious email to raise end-user awareness, and track down any email in seconds. |
+| **Proofpoint TAP Solution** | [Proofpoint Targeted Attack Protection (TAP)](https://www.proofpoint.com/us/products/advanced-threat-protection/targeted-attack-protection) helps detect, mitigate and block advanced threats that target people through email, including attacks that use malicious attachments and URLs to install malware or trick users into sharing passwords and sensitive information. TAP also detects threats and risks in cloud apps and connects email attacks related to credential theft or other attacks. |
+| **Qualys VM Solution** | [Qualys Vulnerability Management (VM)](https://www.qualys.com/apps/vulnerability-management/) provides global visibility into where your IT assets are vulnerable and how to protect them. As enterprises adopt cloud computing, mobility, and other disruptive technologies for digital transformation, Qualys VM offers next-generation vulnerability management for these hybrid IT environments whose traditional boundaries have been blurred. This is accomplished through agent-based detection from lightweight Qualys Cloud Agents, which extend network coverage to assets that can't be scanned. |
+| **RiskIQ Security Intelligence Playbooks** | [RiskIQ](https://www.riskiq.com/) has created several Azure Sentinel playbooks that pre-package functionality in order to enrich and add context to incidents within the Azure Sentinel platform. These playbooks can be run individually or configured to run automatically within the Azure Sentinel portal. When an incident contains a known indicator such as a domain or IP address, RiskIQ will enrich that value with what else it's connected to on the Internet and whether it may pose a threat. Comments are added to the incident that link to further detailed information within RiskIQ's investigative platform. |
+| **Senserva Offer for Azure Sentinel** | [Senserva](https://www.senserva.com/product/), a Cloud Security Posture Management (CSPM) for Azure Sentinel, simplifies the management of Azure Active Directory security risks before they become problems by continually producing priority-based risk assessments. Senserva information includes a detailed security ranking for all the Azure objects Senserva manages, enabling customers to perform optimal discovery and remediation by fixing the most critical issues with the highest impact items first. All Senserva's enriched information is sent to Azure Sentinel for processing by Queries, Workbooks, and Playbooks via the Log Analytics Workspace.<br>Senserva delivers deep analysis for security user accounts, Azure Active Directory configurations (including Conditional Access), applications, and events within the Microsoft cloud environment. Senserva's patented technology saves countless hours for Microsoft 365 and Microsoft Azure administrators and security teams. Senserva is created using a patent pending combination of industry standards (NIST 800-53, MITRE ATT&CK), industry recommendations, vendor recommendations, and Senserva's own experts. |
+| **Slack Audit Solution** | The [Slack Audit](https://slack.com/) data connector lets you ingest [Slack Audit Records](https://api.slack.com/admins/audit-logs) events into Azure Sentinel through the REST API. Refer to [API documentation](https://api.slack.com/admins/audit-logs#the_audit_event) for more information. The visibility of these events helps you examine potential security risks, analyze your team's use of collaboration, diagnose configuration problems and more. |
+| **Sophos XG Firewall Solution** | [Sophos XG Firewall](https://www.sophos.com/products/next-gen-firewall.aspx) provides all-in-one protection for enterprises through visibility, synchronized security, and automated response. This includes exposing hidden risks, stopping unknown threats, and isolating infected systems. XG Firewall also makes it easy to extend your secure network to employees anywhere through its VPN Client and hardware. |
+| **Symantec Endpoint Protection Solution** | [Symantec Endpoint Protection](https://www.broadcom.com/products/cyber-security/endpoint) is a security software suite that consists of anti-malware, intrusion prevention and firewall features for server and desktop computers. It also has some features typical of data loss prevention software. It is used to prevent unapproved programs from running, and to apply firewall policies that block or allow network traffic. It attempts to identify and block malicious traffic in a corporate network or coming from a web browser. |
+| **Symantec ProxySG Solution** | Symantec protects organizations with a scalable, high-performance web proxy appliance designed to secure communications from advanced threats targeting web activity. [Symantec Secure Web Gateway](https://www.broadcom.com/products/cyber-security/network/gateway/proxy-sg-and-advanced-secure-gateway) solutions draw on a unique proxy server architecture that allows organizations to effectively monitor, control, and secure traffic to ensure a safe web and cloud experience. |
+| **TitaniumCloud File Enrichment Solution** | [ReversingLabs TitaniumCloud](https://www.reversinglabs.com/products/file-reputation-service) is a threat intelligence solution providing up-to-date file reputation services, threat classification, and rich context on over 10 billion "goodware" and malware files. Files are processed using ReversingLabs' File Decomposition Technology. A powerful set of REST API query and feed functions deliver targeted file and malware intelligence for threat identification, analysis, intelligence development, and threat hunting services. |
+| **Ubiquiti UniFi Solution** | [Ubiquiti Inc.](https://www.ui.com/) is an American technology company that manufactures and sells wireless data communication and wired products for enterprises and homes under multiple brand names. |
+| **vArmour Application Controller and Azure Sentinel Solution** | [vArmour Application Controller](https://www.varmour.com/) is an industry-leading solution for Application Relationship Management: a transformative way to visualize and secure your enterprise. When coupled with Azure Sentinel, the two seamlessly integrate to provide enhanced visibility and automated security operations. Digital-first businesses are built on millions of dynamic interconnections between users and applications across hybrid environments. Most of these interconnections can't be seen today. As environmental complexity grows, so does the risk to the organization. Application Controller is an easy-to-deploy solution that delivers comprehensive real-time visibility and control of your application relationships and dependencies, so you can improve operational decision-making, strengthen your security posture, and reduce business risk across your multi-cloud deployments, all without adding costly new agents or infrastructure. As a result, your applications will be more resilient and secure. |
+| **VMware Carbon Black Solution** | [VMware Carbon Black](https://www.carbonblack.com/products/vmware-carbon-black-cloud-endpoint/) transforms your security with cloud native endpoint protection that adapts to your needs. VMware Carbon Black provides an endpoint platform that helps you spot the minor fluctuations that hide malicious attacks and adapt prevention in response. |
+|
+
+## Next steps
+
+In this document, you learned about Azure Sentinel solutions and how to find and deploy them.
+
+- Learn more about [Azure Sentinel Solutions](sentinel-solutions.md).
+- [Find and deploy Azure Sentinel Solutions](sentinel-solutions-deploy.md).
sentinel Sentinel Solutions Deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/sentinel-solutions-deploy.md
+
+ Title: Deploy Azure Sentinel solutions | Microsoft Docs
+description: This article shows how customers can easily find and deploy data analysis tools packaged together with data connectors.
+
+ Last updated : 05/05/2021++
+# Discover and deploy Azure Sentinel solutions
+
+> [!IMPORTANT]
+>
+> The Azure Sentinel solutions experience is currently in **PREVIEW**, as are all individual solution packages. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+Azure Sentinel solutions provide in-product discoverability, single-step deployment, and enablement of end-to-end product, domain, and/or vertical scenarios in Azure Sentinel. This experience is powered by [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace) for solutions' discoverability, deployment, and enablement, and by [Microsoft Partner Center](/partner-center/overview) for solutions' authoring and publishing.
+
+Solutions can consist of any or all of the following components:
+
+- **Data connectors**, some with accompanying **parsers**
+- **Workbooks**
+- **Analytics rules**
+- **Hunting queries**
+- **Playbooks**
+
+## Find your solution
+
+1. From the Azure Sentinel navigation menu, select **Solutions (Preview)**.
+
+1. The **Solutions** blade displays a searchable list of solutions.
+
+ :::image type="content" source="./media/sentinel-solutions-deploy/solutions-list.png" alt-text="Solutions list":::
+
+ - If you scroll to the bottom of the list but don't find what you're looking for, click the **Load more** link at the bottom to expand the list.
+
+ :::image type="content" source="./media/sentinel-solutions-deploy/load-more.png" alt-text="Load more solutions":::
+
+1. To narrow down your choices and find the solution you want more easily, type any part of the solution's name in the **Search** field at the top of the list. (The search engine will only recognize whole words.)
+
+ :::image type="content" source="./media/sentinel-solutions-deploy/solutions-search-1.png" alt-text="Search solutions":::
+
+1. Select your desired solution from the list to deploy it. The solution's details page will open on the **Overview** tab, which displays essential and important information about the solution.
+
+ :::image type="content" source="./media/sentinel-solutions-deploy/proofpoint-tap-solution.png" alt-text="Proofpoint Tap solution":::
+
+1. You can view other useful information about your solution in the **Plans** and **Usage Information + Support** tabs, and you can get other customers' impressions in the **Reviews** tab.
+
+## Deploy your solution
+
+1. Select the **Create** button to launch the solution deployment wizard, which will open on the **Basics** tab.
+
+ :::image type="content" source="./media/sentinel-solutions-deploy/wizard-basics.png" alt-text="deployment wizard basics tab":::
+
+1. Enter the subscription, resource group, and workspace to which you want to deploy the solution.
+
+1. Click **Next** to cycle through the remaining tabs (corresponding to the components included in the solution), where you can learn about, and in some cases configure, each of the components.
+
+ > [!NOTE]
+ > The tabs listed below correspond with the components offered by the solution shown in the accompanying screenshots. Different solutions may have different types of components, so you may not see all the same tabs in every solution, and you may see tabs not shown below.
+
+ 1. **Analytics** tab
+ :::image type="content" source="./media/sentinel-solutions-deploy/wizard-analytics.png" alt-text="deployment wizard analytics tab":::
+
+ 1. **Workbooks** tab
+ :::image type="content" source="./media/sentinel-solutions-deploy/wizard-workbooks.png" alt-text="deployment wizard workbooks tab":::
+
+ 1. **Playbooks** tab - you'll need to enter valid Proofpoint TAP credentials here, so that the playbook can authenticate to your Proofpoint system to take any prescribed response actions.
+ :::image type="content" source="./media/sentinel-solutions-deploy/wizard-playbooks.png" alt-text="deployment wizard playbooks tab":::
+
+1. Finally, in the **Review + create** tab, wait for the "Validation Passed" message, then click **Create** to deploy the solution. You can also select the **Download a template for automation** link to deploy the solution as code.
+
+ :::image type="content" source="./media/sentinel-solutions-deploy/wizard-create.png" alt-text="deployment wizard review and create tab":::
+
+## Next steps
+
+In this document, you learned about Azure Sentinel solutions and how to find and deploy them.
+
+- Learn more about [Azure Sentinel solutions](sentinel-solutions.md).
+- See the full [Sentinel solutions catalog](sentinel-solutions-catalog.md).
sentinel Sentinel Solutions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/sentinel-solutions.md
+
+ Title: About Azure Sentinel solutions | Microsoft Docs
+description: This article describes the Azure Sentinel solutions experience, showing how customers can easily find data analysis tools packaged together with data connectors, and displays the packages currently available.
+
+ Last updated : 05/05/2021++
+# About Azure Sentinel solutions
+
+> [!IMPORTANT]
+>
+> The Azure Sentinel solutions experience is currently in **PREVIEW**, as are all individual solution packages. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+Azure Sentinel solutions provide in-product discoverability, single-step deployment, and enablement of end-to-end product, domain, and/or vertical scenarios in Azure Sentinel. This experience is powered by [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace) for solutions' discoverability, deployment, and enablement, and by [Microsoft Partner Center](/partner-center/overview) for solutions' authoring and publishing.
+
+## Why Azure Sentinel solutions?
+
+- Customers can easily discover packaged content and integrations that deliver value for a product, domain, or vertical within Azure Sentinel.
+
+- Customers can easily deploy content in a single step, and optionally enable content to get started immediately.
+
+- Providers and partners can deliver combined product, domain, or vertical value with solutions in Azure Sentinel, and can also productize their investments.
+
+## Types of Azure Sentinel solutions
+
+Azure Sentinel currently offers **packaged content** solutions. [These solutions](sentinel-solutions-catalog.md) include combinations of one or more data connectors, workbooks, analytics rules, playbooks, hunting queries, parsers, watchlists, and other components for Azure Sentinel. Learn how to [find and deploy packaged content solutions](sentinel-solutions-deploy.md).
+
+There are two other types of solutions for Azure Sentinel that can be offered at this time in the generic Azure Marketplace, though they will not appear in the Azure Sentinel solutions gallery:
+
+- **Integrations** – includes services or tools built using Azure Sentinel APIs or Azure Log Analytics APIs that enable customers to integrate their existing applications with Azure Sentinel, or to migrate data, queries, and more from existing applications to Azure Sentinel.
+
+- **Service offerings** – includes listings for specific managed services for Azure Sentinel.
+
+## Next steps
+
+In this document, you learned about Azure Sentinel solutions and how to find and deploy them.
+
+- [Find and deploy Azure Sentinel solutions](sentinel-solutions-deploy.md).
+- See the full [Sentinel solutions catalog](sentinel-solutions-catalog.md).
sentinel Soc Ml Anomalies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/soc-ml-anomalies.md
+
+ Title: Use SOC-ML anomalies to detect threats in Azure Sentinel | Microsoft Docs
+description: This article explains how to use the new SOC-ML anomaly detection capabilities in Azure Sentinel.
+
+ Last updated : 04/28/2021++
+# Use SOC-ML anomalies to detect threats in Azure Sentinel
+
+> [!IMPORTANT]
+>
+> - SOC-ML anomalies are currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+## What are SOC-ML anomalies?
+
+With attackers and defenders constantly fighting for advantage in the cybersecurity arms race, attackers are always finding ways to evade detection. Inevitably, though, attacks will still result in unusual behavior in the systems being attacked. Azure Sentinel's SOC-ML machine learning-based anomalies can identify this behavior with analytics rule templates that can be put to work right out of the box. While anomalies don't necessarily indicate malicious or even suspicious behavior by themselves, they can be used to improve detections, investigations, and threat hunting:
+
+- **Additional signals to improve detection**: Security analysts can use anomalies to detect new threats and make existing detections more effective. A single anomaly is not a strong signal of malicious behavior, but when combined with several anomalies that occur at different points on the kill chain, their cumulative effect is much stronger. Security analysts can enhance existing detections as well by making the unusual behavior identified by anomalies a condition for alerts to be fired.
+
+- **Evidence during investigations**: Security analysts also can use anomalies during investigations to help confirm a breach, find new paths for investigating it, and assess its potential impact. These efficiencies reduce the time security analysts spend on investigations.
+
+- **The start of proactive threat hunts**: Threat hunters can use anomalies as context to help determine whether their queries have uncovered suspicious behavior. When the behavior is suspicious, the anomalies also point toward potential paths for further hunting. These clues provided by anomalies reduce both the time to detect a threat and its chance to cause harm.
+
+Anomalies can be powerful tools, but they are notoriously noisy. They typically require a lot of tedious tuning for specific environments or complex post-processing. Azure Sentinel SOC-ML anomaly templates are tuned by our data science team to provide out-of-the-box value, but should you need to tune them further, the process is simple and requires no knowledge of machine learning. The thresholds and parameters for many of the anomalies can be configured and fine-tuned through the already familiar analytics rule user interface. The performance of the original threshold and parameters can be compared to the new ones within the interface and further tuned as necessary during a testing, or flighting, phase. Once the anomaly meets the performance objectives, the anomaly with the new threshold or parameters can be promoted to production with the click of a button. Azure Sentinel SOC-ML anomalies enable you to get the benefit of anomalies without the hard work.
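+
+Once anomaly rules are enabled, the anomalies they generate are stored in the **Anomalies** table in the **Logs** section of your workspace, where you can review them with a simple KQL query. A minimal sketch (the column names follow the Anomalies table schema; verify them in your own workspace):
+
+```kusto
+// Review recently generated anomalies and the reasons they fired
+Anomalies
+| where TimeGenerated > ago(7d)
+| project TimeGenerated, AnomalyTemplateName, AnomalyReasons
+| order by TimeGenerated desc
+```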
+
+## Next steps
+
+In this document, you learned how SOC-ML helps you detect anomalies in Azure Sentinel.
+
+- Learn how to [view, create, manage, and fine-tune anomaly rules](work-with-anomaly-rules.md).
+- Learn about [other types of analytics rules](tutorial-detect-threats-built-in.md).
sentinel Threat Intelligence Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/threat-intelligence-integration.md
+
+ Title: Threat intelligence integration in Azure Sentinel | Microsoft Docs
+description: Learn about the different ways threat intelligence feeds are integrated with and used by Azure Sentinel.
+ Last updated : 05/12/2021+++
+# Threat intelligence integration in Azure Sentinel
+
+> [!IMPORTANT]
+> The Threat Intelligence data connectors in Azure Sentinel are currently in public preview.
+> This feature is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+Azure Sentinel gives you a few different ways to [use threat intelligence feeds](import-threat-intelligence.md) to enhance your security analysts' ability to detect and prioritize known threats.
+
+You can use one of many available integrated threat intelligence platform (TIP) products, you can connect to TAXII servers to take advantage of any STIX-compatible threat intelligence source, and you can also make use of any custom solutions that can communicate directly with the [Microsoft Graph Security tiIndicators API](/graph/api/resources/tiindicator).
+
+You can also connect to threat intelligence sources from playbooks, in order to enrich incidents with TI information that can help direct investigation and response actions.
+
+## TAXII threat intelligence feeds
+
+To connect to TAXII threat intelligence feeds, use the [Threat intelligence - TAXII](connect-threat-intelligence.md#connect-azure-sentinel-to-taxii-servers) data connector, together with the data supplied by each vendor linked below. You may need to contact the vendor directly to obtain the necessary data to use with the connector.
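+
+Once a TAXII connector is enabled, the indicators it ingests are stored in the **ThreatIntelligenceIndicator** table in your Azure Sentinel workspace. A quick KQL sketch to confirm a feed is flowing (field names are from that table's standard schema):
+
+```kusto
+// Count threat indicators ingested over the last week, by source
+ThreatIntelligenceIndicator
+| where TimeGenerated > ago(7d)
+| summarize IndicatorCount = count() by SourceSystem
+| order by IndicatorCount desc
+```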
+
+### Anomali Limo
+
+- [See what you need to connect to Anomali Limo feed](https://www.anomali.com/resources/limo).
+
+### Cybersixgill Darkfeed
+
+- [Learn about Cybersixgill integration with Azure Sentinel @Cybersixgill](https://www.cybersixgill.com/partners/azure-sentinel/)
+- To connect Azure Sentinel to the Cybersixgill TAXII Server and get access to Darkfeed, [contact Cybersixgill](mailto:azuresentinel@cybersixgill.com) to obtain the API Root, Collection ID, Username and Password.
+
+### Financial services intelligence sharing community (FS-ISAC)
+
+- [Join the FS-ISAC](https://www.fsisac.com/intelligenceexchange) to get the credentials to access this feed.
+
+### Health intelligence sharing community (H-ISAC)
+
+- [Join the H-ISAC](https://h-isac.org/soltra/) to get the credentials to access this feed.
+
+### IBM X-Force
+
+- [Learn more about IBM X-Force integration](https://www.ibm.com/security/xforce)
+
+### IntSights
+
+- [Learn more about the IntSights integration with Azure Sentinel @IntSights](https://intsights.com/resources/intsights-microsoft-azure-sentinel)
+- To connect Azure Sentinel to the IntSights TAXII Server, obtain the API Root, Collection ID, Username and Password from the IntSights portal after you configure a policy of the data you wish to send to Azure Sentinel.
+
+### ThreatConnect
+
+- [Learn more about STIX and TAXII @ThreatConnect](https://threatconnect.com/stix-taxii/)
+- [TAXII Services documentation @ThreatConnect](https://docs.threatconnect.com/en/latest/rest_api/taxii/taxii.html)
+
+## Integrated threat intelligence platform products
+
+To connect to Threat Intelligence Platform (TIP) feeds, follow the instructions to [connect Threat Intelligence Platform](connect-threat-intelligence.md#connect-azure-sentinel-to-your-threat-intelligence-platform) feeds to Azure Sentinel. The second part of these instructions calls for you to enter information into your TIP solution. See the links below for more information.
+
+### Agari Phishing Defense and Brand Protection
+
+- To connect [Agari Phishing Defense and Brand Protection](https://agari.com/products/phishing-defense/), use the built-in [Agari data connector](connect-agari-phishing-defense.md) in Azure Sentinel.
+
+### Anomali ThreatStream
+
+- To download [ThreatStream Integrator and Extensions](https://www.anomali.com/products/threatstream), and the instructions for connecting ThreatStream intelligence to the Microsoft Graph Security API, see the [ThreatStream downloads](https://ui.threatstream.com/downloads) page.
+
+### AlienVault Open Threat Exchange (OTX) from AT&T Cybersecurity
+
+- [AlienVault OTX](https://otx.alienvault.com/) makes use of Azure Logic Apps (playbooks) to connect to Azure Sentinel. See the [specialized instructions](https://techcommunity.microsoft.com/t5/azure-sentinel/ingesting-alien-vault-otx-threat-indicators-into-azure-sentinel/ba-p/1086566) necessary to take full advantage of the complete offering.
+
+### EclecticIQ Platform
+
+- Learn more about the [EclecticIQ Platform](https://www.eclecticiq.com/platform/).
+
+### GroupIB Threat Intelligence and Attribution
+
+- To connect [GroupIB Threat Intelligence and Attribution](https://www.group-ib.com/intelligence-attribution.html) to Azure Sentinel, GroupIB makes use of Azure Logic Apps. See the [specialized instructions](https://techcommunity.microsoft.com/t5/azure-sentinel/group-ib-threat-intelligence-and-attribution-connector-azure/ba-p/2252904) necessary to take full advantage of the complete offering.
+
+### MISP Open Source Threat Intelligence Platform
+
+- For a sample script that provides clients with MISP instances to migrate threat indicators to the Microsoft Graph Security API, see the [MISP to Microsoft Graph Security Script](https://github.com/microsoftgraph/security-api-solutions/tree/master/Samples/MISP).
+- [Learn more about the MISP Project](https://www.misp-project.org/).
+
+### Palo Alto Networks MineMeld
+
+- To configure [Palo Alto MineMeld](https://www.paloaltonetworks.com/products/secure-the-network/subscriptions/minemeld) with the connection information to Azure Sentinel, see [Sending IOCs to the Microsoft Graph Security API using MineMeld](https://live.paloaltonetworks.com/t5/MineMeld-Articles/Sending-IOCs-to-the-Microsoft-Graph-Security-API-using-MineMeld/ta-p/258540) and skip to the **MineMeld Configuration** heading.
+
+### Recorded Future Security Intelligence Platform
+
+- [Recorded Future](https://www.recordedfuture.com/integrations/microsoft-azure/) makes use of Azure Logic Apps (playbooks) to connect to Azure Sentinel. See the [specialized instructions](https://go.recordedfuture.com/hubfs/partners/microsoft-azure-installation-guide.pdf) necessary to take full advantage of the complete offering.
+
+### ThreatConnect Platform
+
+- See the [Microsoft Graph Security Threat Indicators Integration Configuration Guide](https://training.threatconnect.com/learn/article/microsoft-graph-security-threat-indicators-integration-configuration-guide-kb-article) for instructions to connect [ThreatConnect](https://threatconnect.com/solution/) to Azure Sentinel.
+
+### ThreatQuotient Threat Intelligence Platform
+
+- See [Microsoft Sentinel Connector for ThreatQ integration](https://appsource.microsoft.com/product/web-apps/threatquotientinc1595345895602.microsoft-sentinel-connector-threatq?src=health&tab=DetailsAndSupport) for support information and instructions to connect [ThreatQuotient TIP](https://www.threatq.com/) to Azure Sentinel.
+
+## Incident enrichment sources
+
+Besides being used to import threat indicators, threat intelligence feeds can also serve as a source to enrich the information in your incidents and provide more context to your investigations. The following feeds serve this purpose, and provide Logic App playbooks to use in your [automated incident response](automate-responses-with-playbooks.md).
+
+### HYAS Insight
+
+- Find and enable incident enrichment playbooks for [HYAS Insight](https://www.hyas.com/hyas-insight) in the Azure Sentinel [GitHub repository](https://github.com/Azure/Azure-Sentinel/tree/master/Playbooks). Search for subfolders beginning with "Enrich-Sentinel-Incident-HYAS-Insight-".
+- See the HYAS Insight Logic App [connector documentation](/connectors/hyasinsight/).
+
+### Recorded Future Security Intelligence Platform
+
+- Find and enable incident enrichment playbooks for [Recorded Future](https://www.recordedfuture.com/integrations/microsoft-azure/) in the Azure Sentinel [GitHub repository](https://github.com/Azure/Azure-Sentinel/tree/master/Playbooks). Search for subfolders beginning with "RecordedFuture_".
+- See the Recorded Future Logic App [connector documentation](/connectors/recordedfuture/).
+
+### ReversingLabs TitaniumCloud
+
+- Find and enable incident enrichment playbooks for [ReversingLabs](https://www.reversinglabs.com/products/file-reputation-service) in the Azure Sentinel [GitHub repository](https://github.com/Azure/Azure-Sentinel/tree/master/Playbooks/Enrich-SentinelIncident-ReversingLabs-File-Information).
+- See the ReversingLabs Intelligence Logic App [connector documentation](/connectors/reversinglabsintelligence/).
+
+### RiskIQ Passive Total
+
+- Find and enable incident enrichment playbooks for [RiskIQ Passive Total](https://www.riskiq.com/products/passivetotal/) in the Azure Sentinel [GitHub repository](https://github.com/Azure/Azure-Sentinel/tree/master/Playbooks). Search for subfolders beginning with "Enrich-SentinelIncident-RiskIQ-".
+- See [more information](https://techcommunity.microsoft.com/t5/azure-sentinel/enrich-azure-sentinel-security-incidents-with-the-riskiq/ba-p/1534412) on working with RiskIQ playbooks.
+- See the RiskIQ PassiveTotal Logic App [connector documentation](/connectors/riskiqpassivetotal/).
+
+### Virus Total
+
+- Find and enable incident enrichment playbooks for [Virus Total](https://developers.virustotal.com/v3.0/reference) in the Azure Sentinel [GitHub repository](https://github.com/Azure/Azure-Sentinel/tree/master/Playbooks). Search for subfolders beginning with "Get-VirusTotal" and "Get-VTURL".
+- See the Virus Total Logic App [connector documentation](/connectors/virustotal/).
+
+## Next steps
+
+In this document, you learned how to connect your threat intelligence provider to Azure Sentinel. To learn more about Azure Sentinel, see the following articles.
+
+- Learn how to [get visibility into your data and potential threats](quickstart-get-visibility.md).
+- Get started [detecting threats with Azure Sentinel](./tutorial-detect-threats-built-in.md).
sentinel Tutorial Detect Threats Built In https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/tutorial-detect-threats-built-in.md
Previously updated : 04/12/2021 Last updated : 05/11/2021
Based on Fusion technology, advanced multistage attack detection in Azure Sentin
> > To see which detections are in preview, see [Advanced multistage attack detection in Azure Sentinel](fusion.md).
-### Machine learning behavioral analytics
+In addition, the Fusion engine can now correlate alerts produced by [scheduled analytics rules](#scheduled) with those from other systems, producing high-fidelity incidents as a result.
+
+### Machine learning (ML) behavioral analytics
These templates are based on proprietary Microsoft machine learning algorithms, so you cannot see the internal logic of how they work and when they run. Because the logic is hidden and therefore not customizable, you can only create one rule with each template of this type.
These templates are based on proprietary Microsoft machine learning algorithms,
> > - By creating and enabling any rules based on the ML behavior analytics templates, **you give Microsoft permission to copy ingested data outside of your Azure Sentinel workspace's geography** as necessary for processing by the machine learning engines and models.
+### Anomaly
+
+Anomaly rule templates use SOC-ML (machine learning) to detect specific types of anomalous behavior. Each rule has its own unique parameters and thresholds, appropriate to the behavior being analyzed. While an active rule's configuration can't be changed or fine-tuned directly, you can duplicate the rule, change and fine-tune the duplicate, and run the duplicate in **Flighting** mode concurrently with the original in **Production** mode. You can then compare the results, and switch the duplicate to **Production** if and when its fine-tuning is to your liking. Learn more about [SOC-ML](soc-ml-anomalies.md) and [working with anomaly rules](work-with-anomaly-rules.md).
+
+> [!IMPORTANT]
+> The Anomaly rule templates are currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+ ### Scheduled Scheduled analytics rules are based on built-in queries written by Microsoft security experts. You can see the query logic and make changes to it. You can use the scheduled rules template and customize the query logic and scheduling settings to create new rules.
+Several new scheduled analytics rule templates produce alerts that are correlated by the Fusion engine with alerts from other systems to produce high-fidelity incidents. See [Advanced multistage attack detection](fusion.md#configure-scheduled-analytics-rules-for-fusion-detections) for details.
+ > [!TIP] > Rule scheduling options include configuring the rule to run every specified number of minutes, hours, or days, with the clock starting when you enable the rule. >
sentinel Tutorial Investigate Cases https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/tutorial-investigate-cases.md
To use the investigation graph:
1. Select an incident, then select **Investigate**. This takes you to the investigation graph. The graph provides an illustrative map of the entities directly connected to the alert and each resource connected further. +
+ [ ![View map.](media/tutorial-investigate-cases/investigation-map.png) ](media/tutorial-investigate-cases/investigation-map.png#lightbox)
+ > [!IMPORTANT] > - You'll only be able to investigate the incident if you used the entity mapping fields when you set up your analytics rule. The investigation graph requires that your original incident includes entities. > > - Azure Sentinel currently supports investigation of **incidents up to 30 days old**.
- ![View map](media/tutorial-investigate-cases/map1.png)
1. Select an entity to open the **Entities** pane so you can review information on that entity.
sentinel Ueba Enrichments https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/ueba-enrichments.md
Previously updated : 01/04/2021 Last updated : 05/10/2021 # Azure Sentinel UEBA enrichments reference
-This article describes the **Behavior analytics** table found on the [entity details pages](identify-threats-with-entity-behavior-analytics.md#how-to-use-entity-pages), as well as other entity enrichments you can use to focus and sharpen your security incident investigations.
+This article describes the Azure Sentinel **BehaviorAnalytics** table, found in **Logs** and referenced on the [entity details pages](identify-threats-with-entity-behavior-analytics.md#how-to-use-entity-pages). It details the entity enrichment fields in that table, which you can use to focus and sharpen your security incident investigations.
-The [User insights table](#user-insights-table) and the [Device insights table](#device-insights-table) contain entity information from Active Directory / Azure AD and Microsoft Threat Intelligence sources.
+The following three dynamic fields from the BehaviorAnalytics table are described in the [tables below](#entity-enrichments-dynamic-fields).
-Other tables, described under [Activity insights tables](#activity-insights-tables), contain entity information based on the behavioral profiles built by Azure Sentinel's entity behavior analytics.
+The [UsersInsights](#usersinsights-field) and [DevicesInsights](#devicesinsights-field) fields contain entity information from Active Directory / Azure AD and Microsoft Threat Intelligence sources.
-<a name="baseline-explained"></a>User activities are analyzed against a baseline that is dynamically compiled each time it is used. Each activity has its defined lookback period from which the dynamic baseline is derived. The lookback period is specified in the [**Baseline**](#activity-insights-tables) column in this table.
+The [ActivityInsights](#activityinsights-field) field contains entity information based on the behavioral profiles built by Azure Sentinel's entity behavior analytics.
+
+<a name="baseline-explained"></a>User activities are analyzed against a baseline that is dynamically compiled each time it is used. Each activity has its defined lookback period from which the dynamic baseline is derived. The lookback period is specified in the [**Baseline**](#activityinsights-field) column in this table.
> [!NOTE]
-> The **Enrichment name** field in the [User insights table](#user-insights-table), [Device insights table](#device-insights-table), and the [Activity insights tables](#activity-insights-tables) displays two rows of information.
+> The **Enrichment name** column in all the [entity enrichment field](#entity-enrichments-dynamic-fields) tables displays two rows of information.
>
-> The first, in **bold**, is the "friendly name" of the enrichment. The second *(in italics and parentheses)* is the field name of the enrichment as stored in the [**Behavior Analytics table**](#behavior-analytics-table).
+> - The first, in **bold**, is the "friendly name" of the enrichment.
+> - The second *(in italics and parentheses)* is the field name of the enrichment as stored in the [**Behavior Analytics table**](#behavioranalytics-table).
-## Behavior analytics table
+## BehaviorAnalytics table
The following table describes the behavior analytics data displayed on each [entity details page](identify-threats-with-entity-behavior-analytics.md#how-to-use-entity-pages) in Azure Sentinel.
-| Field | Description |
-|||
-| **TenantId** | unique ID number of the tenant |
-| **SourceRecordId** | unique ID number of the EBA event |
-| **TimeGenerated** | timestamp of the activity's occurrence |
-| **TimeProcessed** | timestamp of the activity's processing by the EBA engine |
-| **ActivityType** | high-level category of the activity |
-| **ActionType** | normalized name of the activity |
-| **UserName** | username of the user that initiated the activity |
-| **UserPrincipalName** | full username of the user that initiated the activity |
-| **EventSource** | data source that provided the original event |
-| **SourceIPAddress** | IP address from which activity was initiated |
-| **SourceIPLocation** | country from which activity was initiated, enriched from IP address |
-| **SourceDevice** | hostname of the device that initiated the activity |
-| **DestinationIPAddress** | IP address of the target of the activity |
-| **DestinationIPLocation** | country of the target of the activity, enriched from IP address |
-| **DestinationDevice** | name of the target device |
-| **UsersInsights** | contextual enrichments of involved users |
-| **DevicesInsights** | contextual enrichments of involved devices |
-| **ActivityInsights** | contextual analysis of activity based on our profiling |
-| **InvestigationPriority** | anomaly score, between 0-10 (0=benign, 10=highly anomalous) |
+| Field | Type | Description |
+|||--|
+| **TenantId** | string | unique ID number of the tenant |
+| **SourceRecordId** | string | unique ID number of the EBA event |
+| **TimeGenerated** | datetime | timestamp of the activity's occurrence |
+| **TimeProcessed** | datetime | timestamp of the activity's processing by the EBA engine |
+| **ActivityType** | string | high-level category of the activity |
+| **ActionType** | string | normalized name of the activity |
+| **UserName** | string | username of the user that initiated the activity |
+| **UserPrincipalName** | string | full username of the user that initiated the activity |
+| **EventSource** | string | data source that provided the original event |
+| **SourceIPAddress** | string | IP address from which activity was initiated |
+| **SourceIPLocation** | string | country from which activity was initiated, enriched from IP address |
+| **SourceDevice** | string | hostname of the device that initiated the activity |
+| **DestinationIPAddress** | string | IP address of the target of the activity |
+| **DestinationIPLocation** | string | country of the target of the activity, enriched from IP address |
+| **DestinationDevice** | string | name of the target device |
+| **UsersInsights** | dynamic | contextual enrichments of involved users ([details below](#usersinsights-field)) |
+| **DevicesInsights** | dynamic | contextual enrichments of involved devices ([details below](#devicesinsights-field)) |
+| **ActivityInsights** | dynamic | contextual analysis of activity based on our profiling ([details below](#activityinsights-field)) |
+| **InvestigationPriority** | int | anomaly score, between 0-10 (0=benign, 10=highly anomalous) |
|
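+
+As a quick illustration of this schema, the following KQL sketch surfaces the most anomalous recent activities, using only the columns listed above:
+
+```kusto
+// Show the most anomalous UEBA activities from the last day
+BehaviorAnalytics
+| where TimeGenerated > ago(1d) and InvestigationPriority >= 7
+| project TimeGenerated, UserPrincipalName, ActionType, SourceIPAddress, InvestigationPriority
+| order by InvestigationPriority desc
+```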
-## User insights table
+## Entity enrichments dynamic fields
+
+### UsersInsights field
-The following table describes the <?> listed in the **User insights** table in Azure Sentinel (where?)
+The following table describes the enrichments featured in the **UsersInsights** dynamic field in the BehaviorAnalytics table:
| Enrichment name | Description | Sample value |
-| | | | |
+| | | |
| **Account display name**<br>*(AccountDisplayName)* | The account display name of the user. | Admin, Hayden Cook | | **Account domain**<br>*(AccountDomain)* | The account domain name of the user. | | | **Account object ID**<br>*(AccountObjectID)* | The account object ID of the user. | a58df659-5cab-446c-9dd0-5a3af20ce1c2 |
The following table describes the <?> listed in the **User insights** table in
| **On premises SID**<br>*(OnPremisesSID)* | The on-premises SID of the user related to the action. | S-1-5-21-1112946627-1321165628-2437342228-1103 | |
-## Device insights table
+### DevicesInsights field
+
+The following table describes the enrichments featured in the **DevicesInsights** dynamic field in the BehaviorAnalytics table:
| Enrichment name | Description | Sample value |
-| | | | |
+| | | |
| **Browser**<br>*(Browser)* | The browser used in the action. | Edge, Chrome | | **Device family**<br>*(DeviceFamily)* | The device family used in the action. | Windows | | **Device type**<br>*(DeviceType)* | The client device type used in the action | Desktop |
The following table describes the <?> listed in the **User insights** table in
| **User agent family**<br>*(UserAgentFamily)* | The user agent family used in the action. | Chrome, Edge, Firefox | |
-## Activity insights tables
+### ActivityInsights field
-### Action performed
+The following tables describe the enrichments featured in the **ActivityInsights** dynamic field in the BehaviorAnalytics table:
+
+#### Action performed
| Enrichment name | [Baseline](#baseline-explained) (days) | Description | Sample value | | | | | |
The following table describes the <?> listed in the **User insights** table in
| **Action uncommonly performed in tenant**<br>*(ActionUncommonlyPerformedInTenant)* | 180 | The action is not commonly performed in the organization. | True, False | |
-### App used
+#### App used
| Enrichment name | [Baseline](#baseline-explained) (days) | Description | Sample value | | | | | |
The following table describes the <?> listed in the **User insights** table in
| **App uncommonly used in tenant**<br>*(AppUncommonlyUsedInTenant)* | 180 | The app is not commonly used in the organization. | True, False | |
-### Browser used
+#### Browser used
| Enrichment name | [Baseline](#baseline-explained) (days) | Description | Sample value | | | | | |
The following table describes the <?> listed in the **User insights** table in
| **Browser uncommonly used in tenant**<br>*(BrowserUncommonlyUsedInTenant)* | 30 | The browser is not commonly used in the organization. | True, False | |
-### Country connected from
+#### Country connected from
| Enrichment name | [Baseline](#baseline-explained) (days) | Description | Sample value | | | | | |
The following table describes the <?> listed in the **User insights** table in
| **Country uncommonly connected from in tenant**<br>*(CountryUncommonlyConnectedFromInTenant)* | 90 | The geo location, as resolved from the IP address, is not commonly connected from in the organization. | True, False | |
-### Device used to connect
+#### Device used to connect
| Enrichment name | [Baseline](#baseline-explained) (days) | Description | Sample value | | | | | |
The following table describes the <?> listed in the **User insights** table in
| **Device uncommonly used in tenant**<br>*(DeviceUncommonlyUsedInTenant)* | 180 | The device is not commonly used in the organization. | True, False | |
-### Other device-related
+#### Other device-related
| Enrichment name | [Baseline](#baseline-explained) (days) | Description | Sample value | | | | | |
The following table describes the <?> listed in the **User insights** table in
| **Device family uncommonly used in tenant**<br>*(DeviceFamilyUncommonlyUsedInTenant)* | 30 | The device family is not commonly used in the organization. | True, False | |
-### Internet Service Provider used to connect
+#### Internet Service Provider used to connect
| Enrichment name | [Baseline](#baseline-explained) (days) | Description | Sample value | | | | | |
The following table describes the <?> listed in the **User insights** table in
| **ISP uncommonly used in tenant**<br>*(ISPUncommonlyUsedInTenant)* | 30 | The ISP is not commonly used in the organization. | True, False | |
-### Resource accessed
+#### Resource accessed
| Enrichment name | [Baseline](#baseline-explained) (days) | Description | Sample value | | | | | |
The following table describes the <?> listed in the **User insights** table in
| **Resource uncommonly accessed in tenant**<br>*(ResourceUncommonlyAccessedInTenant)* | 180 | The resource is not commonly accessed in the organization. | True, False | |
-### Miscellaneous
+#### Miscellaneous
| Enrichment name | [Baseline](#baseline-explained) (days) | Description | Sample value | | | | | |
The following table describes the <?> listed in the **User insights** table in
| **Unusual number of devices added**<br>*(UnusualNumberOfDevicesAdded)* | 5 | A user added an unusual number of devices. | True, False | | **Unusual number of devices deleted**<br>*(UnusualNumberOfDevicesDeleted)* | 5 | A user deleted an unusual number of devices. | True, False | | **Unusual number of users added to group**<br>*(UnusualNumberOfUsersAddedToGroup)* | 5 | A user added an unusual number of users to a group. | True, False |
-|
+|
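+
+The enrichments above can be used directly in KQL queries. A minimal sketch using the **CountryUncommonlyConnectedFromInTenant** enrichment from the tables above (the sample values are the strings "True" and "False", so the comparison below converts the dynamic field to a string; verify the value format in your workspace):
+
+```kusto
+// Find activities originating from countries that are uncommon for the organization
+BehaviorAnalytics
+| where tostring(ActivityInsights.CountryUncommonlyConnectedFromInTenant) == "True"
+| project TimeGenerated, UserPrincipalName, SourceIPLocation, InvestigationPriority
+```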
+
+## Next steps
+
+This document described the Azure Sentinel entity behavior analytics table schema.
+
+- Learn more about [entity behavior analytics](identify-threats-with-entity-behavior-analytics.md).
+- [Put UEBA to use](investigate-with-ueba.md) in your investigations.
sentinel Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/whats-new.md
Previously updated : 05/05/2021 Last updated : 05/12/2021 # What's new in Azure Sentinel
Noted features are currently in PREVIEW. The [Azure Preview Supplemental Terms](
## May 2021
+- [Sentinel solutions (Public preview)](#sentinel-solutions-public-preview)
+- [Threat intelligence integrations (Public preview)](#threat-intelligence-integrations-public-preview)
+- [Fusion over scheduled alerts (Public preview)](#fusion-over-scheduled-alerts-public-preview)
+- [SOC-ML anomalies (Public preview)](#soc-ml-anomalies-public-preview)
+- [IP Entity page (Public preview)](#ip-entity-page-public-preview)
+- [Activity customization (Public preview)](#activity-customization-public-preview)
+- [Hunting dashboard (Public preview)](#hunting-dashboard-public-preview)
+- [Incident teams - collaborate in Microsoft Teams (Public preview)](#azure-sentinel-incident-teamcollaborate-in-microsoft-teams-public-preview)
- [Zero Trust (TIC3.0) workbook](#zero-trust-tic30-workbook)
+### Sentinel solutions (Public preview)
+
+Azure Sentinel now offers **packaged content** [solutions](sentinel-solutions-catalog.md) that include combinations of one or more data connectors, workbooks, analytics rules, playbooks, hunting queries, parsers, watchlists, and other components for Azure Sentinel.
+
+Solutions provide improved in-product discoverability, single-step deployment, and end-to-end product scenarios. For more information, see [Discover and deploy Azure Sentinel solutions](sentinel-solutions-deploy.md).
+
+### Threat intelligence integrations (Public preview)
+
+Azure Sentinel gives you a few different ways to [use threat intelligence](import-threat-intelligence.md) feeds to enhance your security analysts' ability to detect and prioritize known threats.
+
+You can now use one of many newly available integrated threat intelligence platform (TIP) products, connect to TAXII servers to take advantage of any STIX-compatible threat intelligence source, and make use of any custom solutions that can communicate directly with the [Microsoft Graph Security tiIndicators API](/graph/api/resources/tiindicator).
+
+You can also connect to threat intelligence sources from playbooks, in order to enrich incidents with TI information that can help direct investigation and response actions.
+
+For more information, see [Threat intelligence integration in Azure Sentinel](threat-intelligence-integration.md).
+
+### Fusion over scheduled alerts (Public preview)
+
+The **Fusion** machine-learning correlation engine can now detect multi-stage attacks using alerts generated by a set of [scheduled analytics rules](tutorial-detect-threats-custom.md) in its correlations, in addition to the alerts imported from other data sources.
+
+For more information, see [Advanced multistage attack detection in Azure Sentinel](fusion.md).
+
+### SOC-ML anomalies (Public preview)
+
+Azure Sentinel's SOC-ML machine learning-based anomalies can identify unusual behavior that might otherwise evade detection.
+
+SOC-ML uses analytics rule templates that can be put to work right out of the box. While anomalies don't necessarily indicate malicious or even suspicious behavior by themselves, they can be used to improve the fidelity of detections, investigations, and threat hunting.
+
+For more information, see [Use SOC-ML anomalies to detect threats in Azure Sentinel](soc-ml-anomalies.md).
+
+### IP Entity page (Public preview)
+
+Azure Sentinel now supports the IP address entity, and you can now view IP entity information in the new IP entity page.
+
+Like the user and host entity pages, the IP page includes general information about the IP, a list of activities the IP has been found to be a part of, and more, giving you an ever-richer store of information to enhance your investigation of security incidents.
+
+For more information, see [Entity pages](identify-threats-with-entity-behavior-analytics.md#entity-pages).
+
+### Activity customization (Public preview)
+
+Speaking of entity pages, you can now create custom activities for your entities that will be tracked and displayed on their respective entity pages, alongside the out-of-the-box activities you've seen there until now.
+
+For more information, see [Customize activities on entity page timelines](customize-entity-activities.md).
+
+### Hunting dashboard (Public preview)
+
+The **Hunting** blade has gotten a refresh. The new dashboard lets you run all your queries, or a selected subset, in a single click.
+
+Identify where to start hunting by looking at result count, spikes, or the change in result count over a 24-hour period. You can also sort and filter by favorites, data source, MITRE ATT&CK tactic and technique, results, or results delta. View the queries that do not yet have the necessary data sources connected, and get recommendations on how to enable these queries.
+
+For more information, see [Hunt for threats with Azure Sentinel](hunting.md).
+
+### Azure Sentinel incident team - collaborate in Microsoft Teams (public preview)
+
+Azure Sentinel now supports a direct integration with Microsoft Teams, enabling you to collaborate seamlessly across the organization and with external stakeholders.
+
+Directly from the incident in Azure Sentinel, create a new *incident team* to use for central communication and coordination.
+
+Incident teams are especially helpful when used as a dedicated conference bridge for high-severity, ongoing incidents. Organizations that already use Microsoft Teams for communication and collaboration can use the Azure Sentinel integration to bring security data directly into their conversations and daily work.
+
+In Microsoft Teams, the new team's **Incident page** tab always has the most updated and recent data from Azure Sentinel, ensuring that your teams have the most relevant data right at hand.
+
+[ ![Incident page in Microsoft Teams.](media/collaborate-in-microsoft-teams/incident-in-teams.jpg) ](media/collaborate-in-microsoft-teams/incident-in-teams.jpg#lightbox)
+
+For more information, see [Collaborate in Microsoft Teams (Public preview)](collaborate-in-microsoft-teams.md).
+ ### Zero Trust (TIC3.0) workbook The new, Azure Sentinel Zero Trust (TIC3.0) workbook provides an automated visualization of [Zero Trust](/security/zero-trust/) principles, cross-walked to the [Trusted Internet Connections](https://www.cisa.gov/trusted-internet-connections) (TIC) framework.
sentinel Work With Anomaly Rules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/work-with-anomaly-rules.md
+
+ Title: Work with anomaly detection analytics rules in Azure Sentinel | Microsoft Docs
+description: This article explains how to view, create, manage, assess, and fine-tune anomaly detection analytics rules in Azure Sentinel.
+
+ Last updated : 04/28/2021++
+# Work with anomaly detection analytics rules in Azure Sentinel
+
+> [!IMPORTANT]
+>
+> - Anomaly rules are currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+## View SOC-ML anomaly rule templates
+
+Azure Sentinel's SOC-ML anomalies feature provides [built-in anomaly templates](tutorial-detect-threats-built-in.md#anomaly) for immediate value out-of-the-box. These anomaly templates were developed to be robust by using thousands of data sources and millions of events, but this feature also enables you to change thresholds and parameters for the anomalies easily within the user interface. Anomaly rules must be activated before they will generate anomalies, which you can find in the **Anomalies** table in the **Logs** section.
+
+1. From the Azure Sentinel navigation menu, select **Analytics**.
+
+1. In the **Analytics** blade, select the **Rule templates** tab.
+
+1. Filter the list for **Anomaly** templates:
+
+ 1. Click the **Rule type** filter, then the drop-down list that appears below.
+
+ 1. Unmark **Select all**, then mark **Anomaly**.
+
+ 1. If necessary, click the top of the drop-down list to retract it, then click **OK**.
+
+## Activate anomaly rules
+
+When you click on one of the rule templates, you will see the following information in the details pane, along with a **Create rule** button:
+
+- **Description** explains how the anomaly works and the data it requires.
+
+- **Data sources** indicates the type of logs that need to be ingested in order to be analyzed.
+
+- **Tactics** are the MITRE ATT&CK framework tactics covered by the anomaly.
+
+- **Parameters** are the configurable attributes for the anomaly.
+
+- **Threshold** is a configurable value that indicates the degree to which an event must be unusual before an anomaly is created.
+
+- **Rule frequency** is the time between log processing jobs that find the anomalies.
+
+- **Anomaly version** shows the version of the template that is used by a rule. If you want to change the version used by a rule that is already active, you must recreate the rule.
+
+- **Template last updated** is the date the anomaly version was changed.
+
+Complete the following steps to activate a rule:
+
+1. Choose a rule template that is not already labeled **IN USE**. Click the **Create rule** button to open the rule creation wizard.
+
+ The wizard for each rule template will be slightly different, but it has three steps or tabs: **General**, **Configuration**, **Review and create**.
+
+ You can't change any of the values in the wizard; you first have to create and activate the rule.
+
+1. Cycle through the tabs, wait for the "Validation passed" message on the **Review and create** tab, and select the **Create** button.
+
+ You can only create one active rule from each template. Once you complete the wizard, an active anomaly rule is created in the **Active rules** tab, and the template (in the **Rule templates** tab) will be marked **IN USE**.
+
+ > [!NOTE]
+    > Assuming the required data is available, the new rule may still take up to 24 hours to appear in the **Active rules** tab. To view the new rules, select the **Active rules** tab and filter it the same way you filtered the **Rule templates** list above.
+
+Once the anomaly rule is activated, detected anomalies will be stored in the **Anomalies** table in the **Logs** section of your Azure Sentinel workspace.
+
+Each anomaly rule has a training period, and anomalies will not appear in the table until after that training period. You can find the training period in the description of each anomaly rule.
+
+## Assess the quality of anomalies
+
+You can see how well an anomaly rule is performing by reviewing a sample of the anomalies created by a rule over the last 24-hour period.
+
+1. From the Azure Sentinel navigation menu, select **Analytics**.
+
+1. In the **Analytics** blade, check that the **Active rules** tab is selected.
+
+1. Filter the list for **Anomaly** rules (as above).
+
+1. Select the rule you want to assess, and copy its name from the top of the details pane to the right.
+
+1. From the Azure Sentinel navigation menu, select **Logs**.
+
+1. If a **Queries** gallery pops up over the top, close it.
+
+1. Select the **Tables** tab on the left pane of the **Logs** blade.
+
+1. Set the **Time range** filter to **Last 24 hours**.
+
+1. Enter the following in the query window (in place of "Type your query here..."):
+
+ ```kusto
+ Anomalies
+ | where AnomalyTemplateName contains "________________________________"
+ ```
+ Paste the rule name you copied above in place of the underscores between the quotation marks.
+
+1. Click **Run**.
+
+When you have some results, you can start assessing the quality of the anomalies. If you don't have results, try increasing the time range.
+
+Expand the results for each anomaly and then expand the **AnomalyReasons** field. This will tell you why the anomaly fired.
+
+The "reasonableness" or "usefulness" of an anomaly may depend on the conditions of your environment, but a common reason for an anomaly rule to produce too many anomalies is that the threshold is too low.
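+
+For a quick measure of noise across all your anomalies, a summary query such as the following sketch (using the same **Anomalies** table and **AnomalyTemplateName** column as above) shows which templates fire most often; an unusually high count can indicate a threshold that is set too low:
+
+```kusto
+// Count anomalies per template over the last 24 hours
+Anomalies
+| where TimeGenerated > ago(24h)
+| summarize AnomalyCount = count() by AnomalyTemplateName
+| order by AnomalyCount desc
+```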
+
+## Tune anomaly rules
+
+While anomaly rules are engineered for maximum effectiveness out of the box, every situation is unique and sometimes anomaly rules need to be tuned.
+
+Since you can't edit an original active rule, you must first duplicate an active anomaly rule and then customize the copy.
+
+The original anomaly rule will keep running until you either disable or delete it.
+
+This is by design, to give you the opportunity to compare the results generated by the original configuration and the new one. Duplicate rules are disabled by default. You can only make one customized copy of any given anomaly rule. Attempts to make a second copy will fail.
+
+1. To change the configuration of an anomaly rule, select the anomaly rule in the **Active rules** tab.
+
+1. Right-click anywhere on the row of the rule, or left-click the ellipsis (...) at the end of the row, then click **Duplicate**.
+
+1. The new copy of the rule will have the suffix " - Customized" in the rule name. To actually customize this rule, select this rule and click **Edit**.
+
+1. The rule opens in the Analytics rule wizard. Here you can change the parameters of the rule and its threshold. The parameters that can be changed vary with each anomaly type and algorithm.
+
+ You can preview the results of your changes in the **Results preview pane**. Click an **Anomaly ID** in the results preview to see why the ML model identifies that anomaly.
+
+1. Enable the customized rule to generate results. Some of your changes may require the rule to re-run, so you must wait for it to finish and come back to check the results on the logs page. The customized anomaly rule runs in **Flighting** (testing) mode by default. The original rule continues to run in **Production** mode by default.
+
+1. To compare the results, go back to the Anomalies table in **Logs** to [assess the new rule as before](#assess-the-quality-of-anomalies), only look for rows with the original rule name as well as the duplicate rule name with " - Customized" appended to it in the **AnomalyTemplateName** column.
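+
+    A side-by-side comparison might look like the following sketch; as in the earlier query, paste your rule name in place of the underscores, so that the filter matches both the original name and its " - Customized" variant:
+
+    ```kusto
+    // Compare anomaly counts for the original rule and its " - Customized" duplicate
+    Anomalies
+    | where AnomalyTemplateName contains "________________________________"
+    | summarize AnomalyCount = count() by AnomalyTemplateName
+    ```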
+
+    If you are satisfied with the results for the customized rule, you can go back to the **Active rules** tab, select the customized rule, click the **Edit** button, and on the **General** tab switch it from **Flighting** to **Production**. The original rule will automatically change to **Flighting**, since you can't have two versions of the same rule in production at the same time.
+
+## Next steps
+
+In this document, you learned how to work with SOC-ML anomaly detection analytics rules in Azure Sentinel.
+
+- Get some background information about [SOC-ML](soc-ml-anomalies.md).
+- Explore other [analytics rule types](tutorial-detect-threats-built-in.md).
service-bus-messaging Service Bus Resource Manager Namespace Topic https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-resource-manager-namespace-topic.md
Now that you've created and deployed resources using Azure Resource Manager, lea
[Learn more about Service Bus topics and subscriptions]: service-bus-queues-topics-subscriptions.md [Using Azure PowerShell with Azure Resource Manager]: ../azure-resource-manager/management/manage-resources-powershell.md [Using the Azure CLI for Mac, Linux, and Windows with Azure Resource Management]: ../azure-resource-manager/management/manage-resources-cli.md
-[Service Bus namespace with topic and subscription]: https://github.com/Azure/azure-quickstart-templates/blob/master/201-servicebus-create-topic-and-subscription/
+[Service Bus namespace with topic and subscription]: https://azure.microsoft.com/resources/templates/201-servicebus-create-topic-and-subscription/
service-bus-messaging Service Bus Resource Manager Namespace https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-resource-manager-namespace.md
If you don't have an Azure subscription, [create a free account](https://azure.m
In this quickstart, you use an [existing Resource Manager template](https://github.com/Azure/azure-quickstart-templates/blob/master/101-servicebus-create-namespace/azuredeploy.json) from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/):
-[!code-json[create-azure-service-bus-namespace](~/quickstart-templates/101-servicebus-create-namespace/azuredeploy.json)]
+[!code-json[create-azure-service-bus-namespace](~/quickstart-templates/quickstarts/microsoft.servicebus/servicebus-create-namespace/azuredeploy.json)]
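If you want to deploy this template from code rather than the portal, the sketch below uses the `@azure/arm-resources` and `@azure/identity` packages. The raw template URI is inferred from the repository path above, and the `serviceBusNamespaceName` parameter name is an assumption to verify against the template's parameters section.

```typescript
import { DefaultAzureCredential } from "@azure/identity";
import { ResourceManagementClient } from "@azure/arm-resources";

// Placeholders: substitute your own subscription, resource group, and namespace name.
const subscriptionId = "<subscription-id>";
const resourceGroupName = "<resource-group>";

async function main(): Promise<void> {
  const client = new ResourceManagementClient(new DefaultAzureCredential(), subscriptionId);

  // Deploy the quickstart template directly from its public URI and wait for completion.
  const deployment = await client.deployments.beginCreateOrUpdateAndWait(
    resourceGroupName,
    "servicebus-namespace-deployment",
    {
      properties: {
        mode: "Incremental",
        templateLink: {
          // Assumed raw URI for the quickstart template referenced above.
          uri: "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.servicebus/servicebus-create-namespace/azuredeploy.json",
        },
        parameters: {
          serviceBusNamespaceName: { value: "<unique-namespace-name>" },
        },
      },
    }
  );

  console.log(`Provisioning state: ${deployment.properties?.provisioningState}`);
}

main().catch(console.error);
```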
To find more template samples, see [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/?resourceType=Microsoft.Servicebus&pageNumber=1&sort=Popular).
service-health Resource Health Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-health/resource-health-overview.md
Non-platform events are triggered by user actions. Examples include stopping a v
### Unknown
-*Unknown* means that Resource Health hasn't received information about the resource for more than 10 minutes. Although this status isn't a definitive indication of the state of the resource, it's an important data point for troubleshooting.
+*Unknown* means that Resource Health hasn't received information about the resource for more than 10 minutes. This commonly occurs when virtual machines have been deallocated. Although this status isn't a definitive indication of the state of the resource, it can be an important data point for troubleshooting.
If the resource is running as expected, the status of the resource will change to *Available* after a few minutes.
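To check a resource's current availability state programmatically, you can call the Resource Health REST API. The following is a minimal sketch in TypeScript (Node 18+ for the global `fetch`), using `@azure/identity` for the token; the resource ID is a placeholder and the api-version is an assumption to verify against the Microsoft.ResourceHealth provider.

```typescript
import { DefaultAzureCredential } from "@azure/identity";

// Placeholder: the full ARM ID of the resource to check, for example a VM.
const resourceId =
  "/subscriptions/<subscription-id>/resourceGroups/<group>" +
  "/providers/Microsoft.Compute/virtualMachines/<vm-name>";

async function main(): Promise<void> {
  const credential = new DefaultAzureCredential();
  const token = await credential.getToken("https://management.azure.com/.default");

  // Assumed api-version; check the provider's supported versions before use.
  const url =
    `https://management.azure.com${resourceId}` +
    "/providers/Microsoft.ResourceHealth/availabilityStatuses/current" +
    "?api-version=2020-05-01";

  const response = await fetch(url, {
    headers: { Authorization: `Bearer ${token.token}` },
  });
  const status: any = await response.json();

  // availabilityState is Available, Unavailable, Degraded, or Unknown.
  console.log(status.properties?.availabilityState, "-", status.properties?.summary);
}

main().catch(console.error);
```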
Different resources have their own criteria for when they report that they are d
![Status of *Degraded* for a virtual machine](./media/resource-health-overview/degraded.png)
-## Reporting an incorrect status
-
-If you think that the current health status is incorrect, you can tell us by selecting **Report incorrect health status**. In cases where an Azure problem is affecting you, we encourage you to contact Support from Resource Health.
-
-![Form to submit information about an incorrect status](./media/resource-health-overview/incorrect-status.png)
-
## History information

You can access up to 30 days of history in the **Health history** section of Resource Health.

![List of Resource Health events over the last two weeks](./media/resource-health-overview/history-blade.png)
+## Root cause information
+
+If Azure has further information about the root cause of a platform-initiated unavailability, that information may be posted in Resource Health up to 72 hours after the initial unavailability. This information is currently only available for virtual machines.
+
## Get started

To open Resource Health for one resource:
spring-cloud Concepts Blue Green Deployment Strategies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/spring-cloud/concepts-blue-green-deployment-strategies.md
Previously updated : 04/02/2021 Last updated : 05/12/2021
spring-cloud How To Cicd https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/spring-cloud/how-to-cicd.md
Title: CI/CD for Azure Spring Cloud
-description: CI/CD for Azure Spring Cloud
+ Title: Automate application deployments to Azure Spring Cloud
+description: Describes how to use the Azure Spring Cloud task for Azure Pipelines.
Previously updated : 09/08/2020 Last updated : 05/12/2021 zone_pivot_groups: programming-languages-spring-cloud
static-web-apps Add Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/static-web-apps/add-api.md
-# Add an API to Azure Static Web Apps Preview with Azure Functions
+# Add an API to Azure Static Web Apps with Azure Functions
You can add serverless APIs to Azure Static Web Apps via integration with Azure Functions. This article demonstrates how to add and deploy an API to an Azure Static Web Apps site.
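As a sketch of what such an API endpoint looks like, the HTTP-triggered function below uses the `@azure/functions` package. The folder layout (an `api/message` function reachable at `/api/message`) is an illustrative assumption; your function names will differ.

```typescript
import { AzureFunction, Context, HttpRequest } from "@azure/functions";

// A minimal HTTP-triggered function. In a Static Web Apps repository this
// would live in the api/ folder (for example api/message/index.ts), and
// Static Web Apps routes /api/* requests to the linked Functions app.
const httpTrigger: AzureFunction = async function (
  context: Context,
  req: HttpRequest
): Promise<void> {
  const name = req.query.name || "world";

  // The response body is returned to the web app as JSON.
  context.res = {
    body: { text: `Hello, ${name}` },
  };
};

export default httpTrigger;
```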
Using Visual Studio Code, commit and push your changes to the remote git reposit
1. Navigate to the [Azure portal](https://portal.azure.com)
1. Click **Create a Resource**
1. Search for **Static Web App**
-1. Click **Static Web App (Preview)**
+1. Click **Static Web App**
1. Click **Create**

Next, add the app-specific settings.
static-web-apps Add Mongoose https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/static-web-apps/add-mongoose.md
This tutorial uses a GitHub template repository to help you create your applicat
5. Return to the [Azure portal](https://portal.azure.com)
6. Click **Create a resource**
7. Type **static web apps** in the search box
-8. Select **Static Web App (preview)**
+8. Select **Static Web App**
9. Click **Create**
10. Configure your Azure Static Web App with the following information
    - Subscription: Choose the same subscription as before
static-web-apps Apis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/static-web-apps/apis.md
Last updated 05/08/2020
-# API support in Azure Static Web Apps Preview with Azure Functions
+# API support in Azure Static Web Apps with Azure Functions
Azure Static Web Apps provides serverless API endpoints via [Azure Functions](../azure-functions/functions-overview.md). By leveraging Azure Functions, APIs dynamically scale based on demand, and include the following features:
static-web-apps Application Settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/static-web-apps/application-settings.md
-# Configure application settings for Azure Static Web Apps Preview
+# Configure application settings for Azure Static Web Apps
Application settings hold configuration settings for values that may change, such as database connection strings. Adding application settings allows you to modify the configuration input to your app, without having to change application code.
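Inside an API function, each application setting surfaces as an environment variable, so code can read it without embedding the value. Below is a minimal sketch; the `DATABASE_CONNECTION_STRING` setting name is hypothetical.

```typescript
// Hypothetical setting name; it must match the application setting
// configured in the portal for the Static Web App.
const connectionString = process.env.DATABASE_CONNECTION_STRING;

if (!connectionString) {
  throw new Error("The DATABASE_CONNECTION_STRING application setting is not configured.");
}

// Use connectionString to initialize a database client here.
console.log("Database connection string loaded from application settings.");
```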
static-web-apps Authentication Authorization https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/static-web-apps/authentication-authorization.md
Last updated 04/09/2021
-# Authentication and authorization for Azure Static Web Apps Preview
+# Authentication and authorization for Azure Static Web Apps
Azure Static Web Apps streamlines the authentication experience by managing authentication with the following providers:
static-web-apps