Updates from: 10/04/2022 01:07:47
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Howto Authentication Temporary Access Pass https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-temporary-access-pass.md
To configure the Temporary Access Pass authentication method policy:
After you enable a policy, you can create a Temporary Access Pass for a user in Azure AD. These roles can perform the following actions related to a Temporary Access Pass.
-- Global Administrator can create, delete, view a Temporary Access Pass on any user (except themselves)
-- Privileged Authentication Administrators can create, delete, view a Temporary Access Pass on admins and members (except themselves)
-- Authentication Administrators can create, delete, view a Temporary Access Pass on members (except themselves)
+- Global Administrators can create, delete, and view a Temporary Access Pass on any user (except themselves)
+- Privileged Authentication Administrators can create, delete, and view a Temporary Access Pass on admins and members (except themselves)
+- Authentication Administrators can create, delete, and view a Temporary Access Pass on members (except themselves)
- Global Reader can view the Temporary Access Pass details on the user (without reading the code itself).
1. Sign in to the Azure portal as either a Global administrator, Privileged Authentication administrator, or Authentication administrator.
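Besides the portal flow, the roles above can create a pass through Microsoft Graph. A minimal sketch, assuming an access token in `$GRAPH_TOKEN` with the `UserAuthenticationMethod.ReadWrite.All` permission and a hypothetical user; verify the endpoint version (beta shown here) against the current Microsoft Graph reference:

```bash
# Hedged sketch: create a one-hour Temporary Access Pass for a user via Microsoft Graph.
# $GRAPH_TOKEN and the user UPN are placeholders; confirm the endpoint version before use.
curl -X POST \
  -H "Authorization: Bearer $GRAPH_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"lifetimeInMinutes": 60, "isUsableOnce": false}' \
  "https://graph.microsoft.com/beta/users/user@contoso.com/authentication/temporaryAccessPassMethods"
```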
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/whats-new-docs.md
Previously updated : 09/01/2022 Last updated : 09/03/2022
Welcome to what's new in the Microsoft identity platform documentation. This article lists new docs that have been added and those that have had significant updates in the last three months.
+## September 2022
+
+### New articles
+
+- [Configure a user-assigned managed identity to trust an external identity provider (preview)](workload-identity-federation-create-trust-user-assigned-managed-identity.md)
+- [Important considerations and restrictions for federated identity credentials](workload-identity-federation-considerations.md)
+
+### Updated articles
+
+- [How to use Continuous Access Evaluation enabled APIs in your applications](app-resilience-continuous-access-evaluation.md)
+- [Run automated integration tests](test-automate-integration-testing.md)
+- [Tutorial: Sign in users and call the Microsoft Graph API from a JavaScript single-page application (SPA)](tutorial-v2-javascript-spa.md)
+
## August 2022

### Updated articles
Welcome to what's new in the Microsoft identity platform documentation. This art
- [Microsoft identity platform access tokens](access-tokens.md)
- [Single-page application: Sign-in and Sign-out](scenario-spa-sign-in.md)
- [Tutorial: Add sign-in to Microsoft to an ASP.NET web app](tutorial-v2-asp-webapp.md)
-
-## June 2022
-
-### Updated articles
-
-- [Add app roles to your application and receive them in the token](howto-add-app-roles-in-azure-ad-apps.md)
-- [Azure AD Authentication and authorization error codes](reference-aadsts-error-codes.md)
-- [Microsoft identity platform refresh tokens](refresh-tokens.md)
-- [Single-page application: Acquire a token to call an API](scenario-spa-acquire-token.md)
-- [Tutorial: Sign in users and call the Microsoft Graph API in an Electron desktop app](tutorial-v2-nodejs-desktop.md)
active-directory Concept Azure Ad Register https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/concept-azure-ad-register.md
Previously updated : 02/15/2022 Last updated : 09/30/2022
The goal of Azure AD registered - also known as Workplace joined - devices is to
| | Bring your own device |
| | Mobile devices |
| **Device ownership** | User or Organization |
-| **Operating Systems** | Windows 10 or newer, iOS, Android, and macOS |
+| **Operating Systems** | Windows 10 or newer, iOS, Android, macOS, Ubuntu 20.04/22.04 |
| **Provisioning** | Windows 10 or newer – Settings |
| | iOS/Android – Company Portal or Microsoft Authenticator app |
| | macOS – Company Portal |
+| | Linux - Intune Agent |
| **Device sign in options** | End-user local credentials |
| | Password |
| | Windows Hello |
active-directory Plan Device Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/plan-device-deployment.md
Previously updated : 02/15/2022 Last updated : 09/30/2022
Consider your organizational needs while you determine the strategy for this dep
### Engage the right stakeholders
-When technology projects fail, they typically do because of mismatched expectations on impact, outcomes, and responsibilities. To avoid these pitfalls, [ensure that you're engaging the right stakeholders](../fundamentals/active-directory-deployment-plans.md) and that stakeholder roles in the project are well understood.
+When technology projects fail, they typically do because of mismatched expectations on impact, outcomes, and responsibilities. To avoid these pitfalls, [ensure that you're engaging the right stakeholders](../fundamentals/active-directory-deployment-plans.md), and that stakeholder roles in the project are well understood.
For this plan, add the following stakeholders to your list:
iOS and Android devices may only be Azure AD registered. The following table pre
| **Client operating systems** | | | |
| Windows 11 or Windows 10 devices | ![Checkmark for these values.](./media/plan-device-deployment/check.png) | ![Checkmark for these values.](./media/plan-device-deployment/check.png) | ![Checkmark for these values.](./media/plan-device-deployment/check.png) |
| Windows down-level devices (Windows 8.1 or Windows 7) | | | ![Checkmark for these values.](./media/plan-device-deployment/check.png) |
+| Linux Desktop - Ubuntu 20.04/22.04 | ![Checkmark for these values.](./media/plan-device-deployment/check.png) | | |
|**Sign in options** | | | |
| End-user local credentials | ![Checkmark for these values.](./media/plan-device-deployment/check.png) | | |
| Password | ![Checkmark for these values.](./media/plan-device-deployment/check.png) | ![Checkmark for these values.](./media/plan-device-deployment/check.png) | ![Checkmark for these values.](./media/plan-device-deployment/check.png) |
BYOD and corporate owned mobile device are registered by users installing the Co
* [Android](/mem/intune/user-help/enroll-device-android-company-portal)
* [Windows 10 or newer](/mem/intune/user-help/enroll-windows-10-device)
* [macOS](/mem/intune/user-help/enroll-your-device-in-intune-macos-cp)
+* [Linux Desktop - Ubuntu 20.04/22.04](/mem/intune/user-help/enroll-device-linux)
If registering your devices is the best option for your organization, see the following resources:
active-directory Groups Dynamic Membership https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-dynamic-membership.md
The following device attributes can be used.
deviceOSVersion | any string value | device.deviceOSVersion -eq "9.1"<br>device.deviceOSVersion -startsWith "10.0.1"
deviceOwnership | Personal, Company, Unknown | device.deviceOwnership -eq "Company"
devicePhysicalIds | any string value used by Autopilot, such as all Autopilot devices, OrderID, or PurchaseOrderID | device.devicePhysicalIDs -any _ -contains "[ZTDId]"<br>(device.devicePhysicalIds -any _ -eq "[OrderID]:179887111881"<br>(device.devicePhysicalIds -any _ -eq "[PurchaseOrderId]:76222342342"
- deviceTrustType | AzureAD, ServerAD, Workplace | device.deviceOwnership -eq "AzureAD"
+ deviceTrustType | AzureAD, ServerAD, Workplace | device.deviceTrustType -eq "AzureAD"
enrollmentProfileName | Apple Device Enrollment Profile name, Android Enterprise Corporate-owned dedicated device Enrollment Profile name, or Windows Autopilot profile name | device.enrollmentProfileName -eq "DEP iPhones"
extensionAttribute1 | any string value | device.extensionAttribute1 -eq "some string value"
extensionAttribute2 | any string value | device.extensionAttribute2 -eq "some string value"
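The corrected `deviceTrustType` expression can be exercised end to end by creating a dynamic device group through Microsoft Graph. A hedged sketch, assuming `$GRAPH_TOKEN` carries `Group.ReadWrite.All`; the display name and mailNickname are hypothetical:

```bash
# Sketch: create a dynamic device group whose membership rule uses the corrected
# deviceTrustType property (placeholders: $GRAPH_TOKEN, display name, mailNickname).
curl -X POST \
  -H "Authorization: Bearer $GRAPH_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "displayName": "Azure AD joined devices",
    "mailEnabled": false,
    "mailNickname": "aadJoinedDevices",
    "securityEnabled": true,
    "groupTypes": ["DynamicMembership"],
    "membershipRule": "(device.deviceTrustType -eq \"AzureAD\")",
    "membershipRuleProcessingState": "On"
  }' \
  "https://graph.microsoft.com/v1.0/groups"
```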
active-directory How To Connect Group Writeback Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-group-writeback-enable.md
Group writeback requires enabling both the original and new versions of the feat
> [!NOTE]
> We recommend that you follow the [swing migration](how-to-upgrade-previous-version.md#swing-migration) method for rolling out the new group writeback feature in your environment. This method will provide a clear contingency plan if a major rollback is necessary.
+>
+> The enhanced group writeback feature is enabled on the tenant and not per Azure AD Connect client instance. Be sure that all Azure AD Connect client instances are updated to build version 1.6.4.0 or later.
### Enable group writeback by using PowerShell
active-directory How To Connect Group Writeback V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-group-writeback-v2.md
There are two versions of group writeback. The original version is in general av
- You can configure the common name in an Active Directory group's distinguished name to include the group's display name when it's written back. - You can use the Azure AD admin portal, Graph Explorer, and PowerShell to configure which Azure AD groups are written back.
-The new version is available only in [Azure AD Connect version 2.0.89.0 or later](https://www.microsoft.com/download/details.aspx?id=47594). It must be enabled in addition to the original version.
+The new version is enabled on the tenant and not per Azure AD Connect client instance. Make sure that all Azure AD Connect client instances are updated to [Azure AD Connect version 2.0 or later](https://www.microsoft.com/download/details.aspx?id=47594) if group writeback is currently enabled on the client instance.
This article walks you through activities that you should complete before you enable group writeback for your tenant. These activities include discovering your current configuration, verifying the prerequisites, and choosing the deployment approach.
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/whats-new-docs.md
Title: "What's new in Azure Active Directory application management" description: "New and updated documentation for the Azure Active Directory application management." Previously updated : 09/06/2022 Last updated : 10/03/2022
Welcome to what's new in Azure Active Directory (Azure AD) application management documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the application management service, see [What's new in Azure AD](../fundamentals/whats-new.md).
+## September 2022
+
+### New articles
+
+- [Tutorial: Configure Datawiza to enable Azure Active Directory Multi-Factor Authentication and single sign-on to Oracle PeopleSoft](datawiza-azure-ad-sso-oracle-peoplesoft.md)
+- [SAML Request Signature Verification (Preview)](howto-enforce-signed-saml-authentication.md)
+
+### Updated articles
+
+- [Manage app consent policies](manage-app-consent-policies.md)
+- [Unexpected consent prompt when signing in to an application](application-sign-in-unexpected-user-consent-prompt.md)
+
## August 2022

### Updated articles
Welcome to what's new in Azure Active Directory (Azure AD) application managemen
- [Configure Azure Active Directory SAML token encryption](howto-saml-token-encryption.md)
- [Review permissions granted to applications](manage-application-permissions.md)
- [Tutorial: Configure Secure Hybrid Access with Azure Active Directory and Datawiza](datawiza-with-azure-ad.md)
-
-## June 2022
-
-### Updated articles
-
-- [Protect against consent phishing](protect-against-consent-phishing.md)
-- [Request to publish your application in the Azure AD application gallery](v2-howto-app-gallery-listing.md)
aks Cluster Autoscaler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cluster-autoscaler.md
Title: Use the cluster autoscaler in Azure Kubernetes Service (AKS)
description: Learn how to use the cluster autoscaler to automatically scale your cluster to meet application demands in an Azure Kubernetes Service (AKS) cluster. Previously updated : 07/18/2019 Last updated : 10/03/2022
To configure logs to be pushed from the cluster autoscaler into Log Analytics, f
1. Select the "Logs" section on your cluster via the Azure portal. 1. Input the following example query into Log Analytics:
-```
+```kusto
AzureDiagnostics
| where Category == "cluster-autoscaler"
```
You should see logs similar to the following example as long as there are logs t
The cluster autoscaler will also write out health status to a `configmap` named `cluster-autoscaler-status`. To retrieve these logs, execute the following `kubectl` command. A health status will be reported for each node pool configured with the cluster autoscaler.
-```
+```bash
kubectl get configmap -n kube-system cluster-autoscaler-status -o yaml
```
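As a follow-up (an assumption, not part of the article's change), the health status text itself lives under the configmap's `data.status` key, so a jsonpath query can print it without the surrounding YAML:

```bash
# Print only the autoscaler health status stored in the configmap (assumes the
# "status" data key used by upstream cluster-autoscaler).
kubectl get configmap -n kube-system cluster-autoscaler-status -o jsonpath='{.data.status}'
```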
Kubernetes supports [horizontal pod autoscaling][kubernetes-hpa] to adjust the n
This article showed you how to automatically scale the number of AKS nodes. You can also use the horizontal pod autoscaler to automatically adjust the number of pods that run your application. For steps on using the horizontal pod autoscaler, see [Scale applications in AKS][aks-scale-apps].
+To further help improve cluster resource utilization and free up CPU and memory for other pods, see [Vertical Pod Autoscaler][vertical-pod-autoscaler].
+ <!-- LINKS - internal -->
-[aks-faq]: faq.md
[aks-faq-node-resource-group]: faq.md#can-i-modify-tags-and-other-properties-of-the-aks-resources-in-the-node-resource-group
[aks-multiple-node-pools]: use-multiple-node-pools.md
[aks-scale-apps]: tutorial-kubernetes-scale.md
-[aks-support-policies]: support-policies.md
-[aks-upgrade]: upgrade-cluster.md
[aks-view-master-logs]: monitor-aks.md#configure-monitoring
-[autoscaler-profile-properties]: #using-the-autoscaler-profile
[azure-cli-install]: /cli/azure/install-azure-cli
-[az-aks-show]: /cli/azure/aks#az-aks-show
-[az-extension-add]: /cli/azure/extension#az-extension-add
-[az-extension-update]: /cli/azure/extension#az-extension-update
[az-aks-create]: /cli/azure/aks#az-aks-create
[az-aks-update]: /cli/azure/aks#az-aks-update
[az-aks-scale]: /cli/azure/aks#az-aks-scale
-[az-feature-register]: /cli/azure/feature#az-feature-register
-[az-feature-list]: /cli/azure/feature#az-feature-list
-[az-provider-register]: /cli/azure/provider#az-provider-register
+[vertical-pod-autoscaler]: vertical-pod-autoscaler.md
<!-- LINKS - external -->
[az-aks-update-preview]: https://github.com/Azure/azure-cli-extensions/tree/master/src/aks-preview
[az-aks-nodepool-update]: https://github.com/Azure/azure-cli-extensions/tree/master/src/aks-preview#enable-cluster-auto-scaler-for-a-node-pool
[autoscaler-scaledown]: https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-types-of-pods-can-prevent-ca-from-removing-a-node
-[autoscaler-parameters]: https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-are-the-parameters-to-ca
[kubernetes-faq]: https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#ca-doesnt-work-but-it-used-to-work-yesterday-why
[kubernetes-hpa]: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/
[kubernetes-hpa-walkthrough]: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/
api-management Self Hosted Gateway Settings Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/self-hosted-gateway-settings-reference.md
This article provides a reference for required and optional settings that are us
|-||-|-|
| config.service.endpoint | Configuration endpoint in Azure API Management for the self-hosted gateway. Find this value in the Azure portal under **Gateways** > **Deployment**. | Yes | N/A |
| config.service.auth | Access token (authentication key) of the self-hosted gateway. Find this value in the Azure portal under **Gateways** > **Deployment**. | Yes | N/A |
+| neighborhood.host | DNS name used to resolve all instances of a self-hosted gateway deployment for cross-instance synchronization. In Kubernetes, this can be achieved by using a headless Service. | No | N/A |
+| neighborhood.heartbeat.port | UDP port used for instances of a self-hosted gateway deployment to send heartbeats to other instances. | No | 4291 |
+| policy.rate-limit.sync.port | UDP port used for self-hosted gateway instances to synchronize rate limiting across multiple instances. | No | 4290 |
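To make the `neighborhood.host` idea concrete, here's an illustrative headless Service that such a deployment could point at. This is a sketch, not from the article: the Service name and label selector are assumptions, and the UDP ports mirror the defaults above:

```bash
# Hypothetical headless Service backing neighborhood.host: clusterIP None makes the
# DNS name resolve to every gateway pod IP, enabling cross-instance synchronization.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: apim-gateway-instances
spec:
  clusterIP: None
  selector:
    app: apim-self-hosted-gateway
  ports:
    - name: heartbeat
      protocol: UDP
      port: 4291
    - name: rate-limit-sync
      protocol: UDP
      port: 4290
EOF
```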
## Metrics
app-service Web Sites Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/web-sites-monitor.md
You can increase or remove quotas from your app by upgrading your App Service pl
## Understand metrics
-> [!NOTE]
-> **File System Usage** is now available globally for apps hosted in multi-tenants and App Service Environment.
->
-
> [!IMPORTANT]
> **Average Response Time** will be deprecated to avoid confusion with metric aggregations. Use **Response Time** as a replacement.

> [!NOTE]
> Metrics for an app include the requests to the app's SCM site (Kudu). This includes requests to view the site's logstream using Kudu. Logstream requests may span several minutes, which will affect the Request Time metrics. Users should be aware of this relationship when using these metrics with autoscale logic.
>
+> **Http Server Errors** only records requests that reach the backend service (the worker(s) hosting the app). If the requests are failing at the FrontEnd, they are not recorded as Http Server Errors. The [Health Check feature](./monitor-instances-health-check.md) and Application Insights availability tests can be used for outside-in monitoring.
Metrics provide information about the app or the App Service plan's behavior.
azure-arc Cluster Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/cluster-connect.md
A conceptual overview of this feature is available in [Cluster connect - Azure A
```

```console
- TOKEN=$(kubectl get secret demo-user-secret -o jsonpath='{$.data.token}' | base64 -d | sed 's/$/\\\n/g')
+ TOKEN=$(kubectl get secret demo-user-secret -o jsonpath='{$.data.token}' | base64 -d | sed 's/$/\n/g')
```

1. Get the token to output to console
azure-functions Functions How To Use Azure Function App Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-how-to-use-azure-function-app-settings.md
The Functions editor built into the Azure portal lets you update your function c
Files in the root of the app, such as function.proj or extensions.csproj, need to be created and edited by using the [Advanced Tools (Kudu)](#kudu).

1. Select your function app, then under **Development tools** select **Advanced tools** > **Go**.
-1. If promoted, sign-in to the SCM site with your Azure credentials.
+1. If prompted, sign in to the SCM site with your Azure credentials.
1. From the **Debug console** menu, choose **CMD**.
1. Navigate to `.\site\wwwroot`, select the plus (**+**) button at the top, and select **New file**.
1. Name the file, such as `extensions.csproj`, and press Enter.
azure-monitor Azure Monitor Agent Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-migration.md
Title: Migrate from legacy agents to Azure Monitor Agent description: This article provides guidance for migrating from the existing legacy agents to the new Azure Monitor Agent (AMA) and data collection rules (DCR). -+ Last updated 9/14/2022
# Migrate to Azure Monitor Agent from Log Analytics agent
-[Azure Monitor Agent (AMA)](./agents-overview.md) replaces the Log Analytics agent (also known as MMA and OMS) for both Windows and Linux machines, in Azure and on premises. It introduces a simplified, flexible method of configuring collection configuration called [Data Collection Rules (DCRs)](../essentials/data-collection-rule-overview.md). This article outlines the benefits of migrating to Azure Monitor Agent (AMA) and provides guidance on how to implement a successful migration.
+[Azure Monitor Agent (AMA)](./agents-overview.md) replaces the Log Analytics agent (also known as MMA and OMS) for Windows and Linux machines, in both Azure and non-Azure (on-premises and third-party cloud) environments. It introduces a simplified, flexible method of configuring data collection called [Data Collection Rules (DCRs)](../essentials/data-collection-rule-overview.md). This article outlines the benefits of migrating to Azure Monitor Agent (AMA) and provides guidance on how to implement a successful migration.
> [!IMPORTANT] > The Log Analytics agent will be [retired on **31 August, 2024**](https://azure.microsoft.com/updates/were-retiring-the-log-analytics-agent-in-azure-monitor-on-31-august-2024/). If you are currently using the Log Analytics agent with Azure Monitor or other supported features and services, you should start planning your migration to Azure Monitor Agent using the information in this article.
Your migration plan to the Azure Monitor Agent should take into account:
- Running two telemetry agents on the same machine consumes double the resources, including, but not limited to, CPU, memory, storage space, and network bandwidth.

## Prerequisites
-Review the [prerequisites](./azure-monitor-agent-manage.md#prerequisites) for use Azure Monitor Agent. For on-premises servers or other cloud managed servers, [installing the Azure Arc agent](/azure/azure-arc/servers/agent-overview) is an important prerequisite that then helps to install the agent extension and other required extensions. Using Arc for this purpose comes at no added cost, and it's not mandatory to use Arc for server management overall (i.e. you can continue using your existing on premises management solutions). Once Arc agent is installed, you can follow the same guidance below across Azure and on-premise for migration.
+Review the [prerequisites](./azure-monitor-agent-manage.md#prerequisites) for using Azure Monitor Agent. For non-Azure servers, [installing the Azure Arc agent](/azure/azure-arc/servers/agent-overview) is an important prerequisite that then helps to install the agent extension and other required extensions. Using Arc for this purpose comes at no added cost, and it's not mandatory to use Arc for server management overall (that is, you can continue using your existing non-Azure management solutions). Once the Arc agent is installed, you can follow the same guidance below across Azure and non-Azure for migration.
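For reference, onboarding a non-Azure server to Arc looks roughly like the following. A hedged sketch with placeholder values; check the `azcmagent` documentation for the full flag set:

```bash
# Sketch: connect a non-Azure server to Azure Arc so agent extensions (including
# Azure Monitor Agent) can then be deployed to it. All values are placeholders.
azcmagent connect \
  --resource-group "myResourceGroup" \
  --tenant-id "<tenant-id>" \
  --location "eastus" \
  --subscription-id "<subscription-id>"
```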
## Migration testing

To ensure safe deployment during migration, begin testing with a few resources running Azure Monitor Agent in your nonproduction environment. After you validate the data collected on these test resources, roll out to production by following the same steps.
See [create new data collection rules](./data-collection-rule-azure-monitor-agen
After you **validate** that data is flowing as expected with Azure Monitor Agent, check the `Category` column in the [Heartbeat](/azure/azure-monitor/reference/tables/heartbeat) table for the value *Azure Monitor Agent* for AMA collected data. Ensure it matches data flowing through the existing Log Analytics agent.

## At-scale migration using Azure Policy
-We recommend using [Azure Policy](../../governance/policy/overview.md) to migrate a large number of agents. Start by analyzing your current monitoring setup with the Log Analytics agent using the [AMA Migration Helper](./azure-monitor-agent-migration-tools.md#using-ama-migration-helper-preview) to find sources, such as virtual machines, virtual machine scale sets, and on-premises servers.
+We recommend using [Azure Policy](../../governance/policy/overview.md) to migrate a large number of agents. Start by analyzing your current monitoring setup with the Log Analytics agent using the [AMA Migration Helper](./azure-monitor-agent-migration-tools.md#using-ama-migration-helper-preview) to find sources, such as virtual machines, virtual machine scale sets, and non-Azure servers.
Use the [DCR Config Generator](./azure-monitor-agent-migration-tools.md#installing-and-using-dcr-config-generator-preview) to migrate legacy agent configuration, including data sources and destinations, from the workspace to the new DCRs.
azure-monitor Alerts Create New Alert Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-create-new-alert-rule.md
And then defining these elements for the resulting alert actions using:
|Threshold value|If you selected a **static** threshold, enter the threshold value for the condition logic. |
|Unit|If the selected metric signal supports different units, such as bytes, KB, MB, and GB, and if you selected a **static** threshold, enter the unit for the condition logic.|
|Threshold sensitivity| If you selected a **dynamic** threshold, enter the sensitivity level. The sensitivity level affects the amount of deviation from the metric series pattern that's required to trigger an alert. |
- |Aggregation granularity| Select the interval over which data points are grouped using the aggregation type function.|
- |Frequency of evaluation|Select the frequency on how often the alert rule should be run. Selecting frequency smaller than granularity of data points grouping will result in sliding window evaluation. |
+ |Aggregation granularity| Select the interval that is used to group the data points using the aggregation type function. Choose an **Aggregation granularity** (Period) that's greater than the **Frequency of evaluation** to reduce the likelihood of missing the first evaluation period of an added time series.|
+ |Frequency of evaluation|Select how often the alert rule is run. Select a frequency that is smaller than the aggregation granularity to generate a sliding window for the evaluation.|
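A hedged Azure CLI sketch of the relationship described in the two rows above, using placeholder resource IDs: the 5-minute `--window-size` (aggregation granularity) is larger than the 1-minute `--evaluation-frequency`, which yields a sliding-window evaluation:

```bash
# Sketch: metric alert where aggregation granularity (window size) exceeds the
# evaluation frequency. Resource IDs and the threshold are hypothetical.
az monitor metrics alert create \
  --name "cpu-high" \
  --resource-group "myResourceGroup" \
  --scopes "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM" \
  --condition "avg Percentage CPU > 80" \
  --window-size 5m \
  --evaluation-frequency 1m
```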
1. Select **Done**.

### [Log alert](#tab/log)
azure-monitor Alerts Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-overview.md
If you don't have alert rules defined for the selected resource, you can [enable
## Azure role-based access control (Azure RBAC) for alerts

You can only access, create, or manage alerts for resources for which you have permissions.
-To create an alert rule, you need to have the following permissions:
+
+To create an alert rule, you need to have:
 - Read permission on the target resource of the alert rule
 - Write permission on the resource group in which the alert rule is created (if you're creating the alert rule from the Azure portal, the alert rule is created by default in the same resource group in which the target resource resides)
+ - Read permission on any action group associated with the alert rule (if applicable)
+ These built-in Azure roles, supported at all Azure Resource Manager scopes, have permissions to access alerts information and create alert rules:
+ - **Monitoring contributor**: can create alerts and use resources within their scope
+ - **Monitoring reader**: can view alerts and read resources within their scope
+
+If the target action group or rule location is in a different scope than the two built-in roles, you need to create a user with the appropriate permissions.
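As an illustration of assigning one of the built-in roles (a sketch with a placeholder principal and scope, not prescribed steps):

```bash
# Sketch: grant Monitoring Contributor at subscription scope so the assignee can
# create alert rules and use resources within that scope.
az role assignment create \
  --assignee "user@contoso.com" \
  --role "Monitoring Contributor" \
  --scope "/subscriptions/<subscription-id>"
```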
## Alerts and State
azure-monitor Itsmc Definition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-definition.md
Action groups provide a modular and reusable way to trigger actions for your Azu
### Define a template
-Certain work item types can use templates that you define in ServiceNow. When you use templates, you can define fields that will be automatically populated by using constant values defined in ServiceNow (not values from the payload). The templates are synced with Azure. You can define which template you want to use as a part of the definition of an action group. For information about how to create templates, see the [ServiceNow documentation](https://docs.servicenow.com/bundle/paris-platform-administration/page/administer/form-administration/task/t_CreateATemplateUsingTheTmplForm.html).
+Certain work item types can use templates that you define in ServiceNow. When you use templates, you can define fields that will be automatically populated by using constant values defined in ServiceNow (not values from the payload). The templates are synced with Azure. You can define which template you want to use as a part of the definition of an action group. For information about how to create templates, see the [ServiceNow documentation](https://docs.servicenow.com/en-US/bundle/tokyo-platform-administration/page/administer/form-administration/task/t_CreateATemplateUsingTheTmplForm.html).
To create an action group:
When you create or edit an Azure alert rule, use an action group, which has an I
## Next steps
-[Troubleshoot problems in ITSMC](./itsmc-resync-servicenow.md)
+[Troubleshoot problems in ITSMC](./itsmc-resync-servicenow.md)
azure-monitor Sampling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sampling.md
Sampling is a feature in [Azure Application Insights](./app-insights-overview.md). It's the recommended way to reduce telemetry traffic, data costs, and storage costs, while preserving a statistically correct analysis of application data. Sampling also helps you avoid Application Insights throttling your telemetry. The sampling filter selects items that are related, so that you can navigate between items when you're doing diagnostic investigations.
-When metric counts are presented in the portal, they're re-normalized to take into account sampling. Doing so minimizes any effect on the statistics.
+When metric counts are presented in the portal, they're renormalized to take into account sampling. Doing so minimizes any effect on the statistics.
## Brief summary
The following table summarizes the sampling types available for each SDK and typ
> [!NOTE]
> The information on most of this page applies to the current versions of the Application Insights SDKs. For information on older versions of the SDKs, [see the section below](#older-sdk-versions).
+## When to use sampling
+
+In general, for most small and medium size applications you don't need sampling. The most useful diagnostic information and most accurate statistics are obtained by collecting data on all your user activities.
+
+The main advantages of sampling are:
+
+* Application Insights service drops ("throttles") data points when your app sends a very high rate of telemetry in a short time interval. Sampling reduces the likelihood that your application will see throttling occur.
+* To keep within the [quota](../logs/daily-cap.md) of data points for your pricing tier.
+* To reduce network traffic from the collection of telemetry.
+
+## How sampling works
+
+The sampling algorithm decides which telemetry items to drop, and which ones to keep. This is true whether sampling is done by the SDK or in the Application Insights service. The sampling decision is based on several rules that aim to preserve all interrelated data points intact, maintaining a diagnostic experience in Application Insights that is actionable and reliable even with a reduced data set. For example, if your app has a failed request included in a sample, the additional telemetry items (such as exception and traces logged for this request) will be retained. Sampling either keeps or drops them all together. As a result, when you look at the request details in Application Insights, you can always see the request along with its associated telemetry items.
+
+The sampling decision is based on the operation ID of the request, which means that all telemetry items belonging to a particular operation are either preserved or dropped. For the telemetry items that do not have an operation ID set (such as telemetry items reported from asynchronous threads with no HTTP context), sampling simply captures a percentage of telemetry items of each type.
+
+When presenting telemetry back to you, the Application Insights service adjusts the metrics by the same sampling percentage that was used at the time of collection, to compensate for the missing data points. Hence, when looking at the telemetry in Application Insights, the users are seeing statistically correct approximations that are very close to the real numbers.
+
+The accuracy of the approximation largely depends on the configured sampling percentage. Also, the accuracy increases for applications that handle a large volume of generally similar requests from lots of users. On the other hand, for applications that don't work with a significant load, sampling is not needed as these applications can usually send all their telemetry while staying within the quota, without causing data loss from throttling.
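One way to see the renormalization at work (a sketch assuming the `application-insights` CLI extension and a placeholder app ID): each retained `requests` row carries an `itemCount` equal to its sampling weight, so `sum(itemCount)` approximates the true request count while `count()` returns only the stored rows:

```bash
# Sketch: compare stored (sampled) rows with the renormalized estimate.
az monitor app-insights query \
  --app "<app-id>" \
  --analytics-query "requests | summarize sampled=count(), estimated=sum(itemCount)"
```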
## Types of sampling

There are three different sampling methods:
Ingestion sampling doesn't operate while adaptive or fixed-rate sampling is in o
> [!WARNING]
> The value shown on the portal tile indicates the value that you set for ingestion sampling. It doesn't represent the actual sampling rate if any sort of SDK sampling (adaptive or fixed-rate sampling) is in operation.
-## When to use sampling
-
-In general, for most small and medium size applications you don't need sampling. The most useful diagnostic information and most accurate statistics are obtained by collecting data on all your user activities.
-
-The main advantages of sampling are:
-
-* Application Insights service drops ("throttles") data points when your app sends a very high rate of telemetry in a short time interval. Sampling reduces the likelihood that your application will see throttling occur.
-* To keep within the [quota](../logs/daily-cap.md) of data points for your pricing tier.
-* To reduce network traffic from the collection of telemetry.
### Which type of sampling should I use?

**Use ingestion sampling if:**
If you see that `RetainedPercentage` for any type is less than 100, then that ty
> [!IMPORTANT]
> Application Insights does not sample session, metrics (including custom metrics), or performance counter telemetry types in any of the sampling techniques. These types are always excluded from sampling as a reduction in precision can be highly undesirable for these telemetry types.
-## How sampling works
-
-The sampling algorithm decides which telemetry items to drop, and which ones to keep. This is true whether sampling is done by the SDK or in the Application Insights service. The sampling decision is based on several rules that aim to preserve all interrelated data points intact, maintaining a diagnostic experience in Application Insights that is actionable and reliable even with a reduced data set. For example, if your app has a failed request included in a sample, the additional telemetry items (such as exception and traces logged for this request) will be retained. Sampling either keeps or drops them all together. As a result, when you look at the request details in Application Insights, you can always see the request along with its associated telemetry items.
-
-The sampling decision is based on the operation ID of the request, which means that all telemetry items belonging to a particular operation is either preserved or dropped. For the telemetry items that do not have an operation ID set (such as telemetry items reported from asynchronous threads with no HTTP context) sampling simply captures a percentage of telemetry items of each type.
-
-When presenting telemetry back to you, the Application Insights service adjusts the metrics by the same sampling percentage that was used at the time of collection, to compensate for the missing data points. Hence, when looking at the telemetry in Application Insights, the users are seeing statistically correct approximations that are very close to the real numbers.
-
-The accuracy of the approximation largely depends on the configured sampling percentage. Also, the accuracy increases for applications that handle a large volume of generally similar requests from lots of users. On the other hand, for applications that don't work with a significant load, sampling is not needed as these applications can usually send all their telemetry while staying within the quota, without causing data loss from throttling.
- ## Log query accuracy and high sample rates As the application is scaled up, it may be processing dozens, hundreds, or thousands of work items per second. Logging an event for each of them is not resource nor cost effective. Application Insights uses sampling to adapt to growing telemetry volume in a flexible manner and to control resource usage and cost.
azure-monitor Diagnostic Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/diagnostic-settings.md
Previously updated : 03/07/2022 Last updated : 10/03/2022
The following table provides unique requirements for each destination including
| Destination | Requirements |
|:|:|
| Log Analytics workspace | The workspace doesn't need to be in the same region as the resource being monitored.|
-| Storage account | It is not recommended to use an existing storage account that has other, non-monitoring data stored in it so that you can better control access to the data. If you're archiving the activity log and resource logs together, you might choose to use the same storage account to keep all monitoring data in a central location.<br><br>To send the data to immutable storage, set the immutable policy for the storage account as described in [Set and manage immutability policies for Azure Blob Storage](../../storage/blobs/immutable-policy-configure-version-scope.md). You must follow all steps in this linked article including enabling protected append blobs writes.<br><br>The storage account needs to be in the same region as the resource being monitored if the resource is regional.|
+| Storage account | It is not recommended to use an existing storage account that has other, non-monitoring data stored in it so that you can better control access to the data. If you're archiving the activity log and resource logs together, you might choose to use the same storage account to keep all monitoring data in a central location.<br><br>To send the data to immutable storage, set the immutable policy for the storage account as described in [Set and manage immutability policies for Azure Blob Storage](../../storage/blobs/immutable-policy-configure-version-scope.md). You must follow all steps in this linked article including enabling protected append blobs writes.<br><br>The storage account needs to be in the same region as the resource being monitored if the resource is regional.<br><br> Diagnostic settings can't access storage accounts when virtual networks are enabled. You must enable **Allow trusted Microsoft services** to bypass this firewall setting in storage accounts so that the Azure Monitor diagnostic settings service is granted access to your storage account.|
| Event Hubs | The shared access policy for the namespace defines the permissions that the streaming mechanism has. Streaming to Event Hubs requires Manage, Send, and Listen permissions. To update the diagnostic setting to include streaming, you must have the ListKey permission on that Event Hubs authorization rule.<br><br>The event hub namespace needs to be in the same region as the resource being monitored if the resource is regional. <br><br> Diagnostic settings can't access Event Hubs resources when virtual networks are enabled. You must enable **Allow trusted Microsoft services** to bypass this firewall setting in Event Hubs so that the Azure Monitor diagnostic settings service is granted access to your Event Hubs resources.|
| Partner integrations | The solutions vary by partner. Check the [Azure Monitor partner integrations documentation](../../partner-solutions/overview.md) for details.
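A hedged CLI sketch of routing to a storage account under the constraints above (same region as the monitored resource; IDs are placeholders, and log category groups vary by resource type):

```bash
# Sketch: create a diagnostic setting that archives logs and metrics to storage.
az monitor diagnostic-settings create \
  --name "toStorage" \
  --resource "<resource-id>" \
  --storage-account "<storage-account-id>" \
  --logs '[{"categoryGroup": "allLogs", "enabled": true}]' \
  --metrics '[{"category": "AllMetrics", "enabled": true}]'
```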
azure-vmware Concepts Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-storage.md
Now that you've covered Azure VMware Solution storage concepts, you may want to
- [Scale clusters in the private cloud][tutorial-scale-private-cloud] - You can scale the clusters and hosts in a private cloud as required for your application workload. Performance and availability limitations for specific services should be addressed on a case by case basis.
-- [Azure NetApp Files with Azure VMware Solution](netapp-files-with-azure-vmware-solution.md) - You can use Azure NetApp Files to migrate and run the most demanding enterprise file-workloads in the cloud: databases, and general purpose computing applications, with no code changes. Azure NetApp Files volumes can be attached to virtual machines and can also be connected as data stores directly to Azure VMware Solution. This functionality is in preview.
+- [Azure NetApp Files with Azure VMware Solution](netapp-files-with-azure-vmware-solution.md) - You can use Azure NetApp Files to migrate and run the most demanding enterprise file-workloads in the cloud: databases, and general purpose computing applications, with no code changes. Azure NetApp Files volumes can be attached to virtual machines, and as [datastores](/azure/azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts) to extend the vSAN datastore capacity without adding more nodes.
- [vSphere role-based access control for Azure VMware Solution](concepts-identity.md) - You use vCenter Server to manage VM workloads and NSX-T Manager to manage and extend the private cloud. Access and identity management use the CloudAdmin role for vCenter Server and restricted administrator rights for NSX-T Manager.
azure-vmware Deploy Vsan Stretched Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-vsan-stretched-clusters.md
Title: Deploy vSAN stretched clusters
+ Title: Deploy vSAN stretched clusters (Preview)
description: Learn how to deploy vSAN stretched clusters.
Last updated 09/02/2022
-# Deploy vSAN stretched clusters
+# Deploy vSAN stretched clusters (Preview)
In this article, you'll learn how to implement a vSAN stretched cluster for an Azure VMware Solution private cloud.
Stretched clusters allow the configuration of vSAN Fault Domains across two AZs
To protect against split-brain scenarios and help measure site health, a managed vSAN Witness is created in a third AZ. With a copy of the data in each AZ, vSphere HA attempts to recover from any failure using a simple restart of the virtual machine.
-**vSAN stretched cluster**
+The following diagram depicts a vSAN cluster stretched across two AZs.
:::image type="content" source="media/stretch-clusters/diagram-1-vsan-witness-third-availability-zone.png" alt-text="Diagram shows a managed vSAN stretched cluster created in a third Availability Zone with the data being copied to all three of them.":::
In summary, stretched clusters simplify protection needs by providing the same t
It's important to understand that stretched cluster private clouds only offer an extra layer of resiliency, and they don't address all failure scenarios. For example, stretched cluster private clouds:
- Don't protect against region-level failures within Azure or data loss scenarios caused by application issues or poorly planned storage policies.
- Provide protection against a single zone failure but aren't designed to provide protection against double or progressive failures. For example:
- - Despite various layers of redundancy built into the fabric, if an inter-AZ failure results in the partitioning of the secondary site, vSphere HA starts powering off the workload VMs on the secondary site. The following diagram shows the secondary site partitioning scenario.
+ - Despite various layers of redundancy built into the fabric, if an inter-AZ failure results in the partitioning of the secondary site, vSphere HA starts powering off the workload VMs on the secondary site.
+
+ The following diagram shows the secondary site partitioning scenario.
:::image type="content" source="media/stretch-clusters/diagram-2-secondary-site-power-off-workload.png" alt-text="Diagram shows vSphere high availability powering off the workload virtual machines on the secondary site.":::
- - If the secondary site partitioning progressed into the failure of the primary site instead, or resulted in a complete partitioning, vSphere HA would attempt to restart the workload VMs on the secondary site. If vSphere HA attempted to restart the workload VMs on the secondary site, it would put the workload VMs in an unsteady state. The following diagram shows the preferred site failure or complete partitioning scenario.
+ - If the secondary site partitioning progressed into the failure of the primary site instead, or resulted in a complete partitioning, vSphere HA would attempt to restart the workload VMs on the secondary site. If vSphere HA attempted to restart the workload VMs on the secondary site, it would put the workload VMs in an unsteady state.
+
+
+ The following diagram shows the preferred site failure or complete partitioning scenario.
:::image type="content" source="media/stretch-clusters/diagram-3-restart-workload-secondary-site.png" alt-text="Diagram shows vSphere high availability trying to restart the workload virtual machines on the secondary site when preferred site failure or complete partitioning occurs.":::
-It should be noted that these types of failures, although rare, fall outside the scope of the protection offered by a stretched cluster private cloud. Because of this, a stretched cluster solution should be regarded as a multi-AZ high availability solution reliant upon vSphere HA. It's important you understand that a stretched cluster solution isn't meant to replace a comprehensive multi-region Disaster Recovery strategy that can be employed to ensure application availability. The reason is because a Disaster Recovery solution typically has separate management and control planes in separate Azure regions. Azure VMware Solution stretched clusters have a single management and control plane stretched across two availability zones within the same Azure region. For example, one vCenter Server, one NSX-T Manager cluster, one NSX-T Data Center Edge VM pair.
+It should be noted that these types of failures, although rare, fall outside the scope of the protection offered by a stretched cluster private cloud. Because of those types of rare failures, a stretched cluster solution should be regarded as a multi-AZ high availability solution reliant upon vSphere HA. It's important you understand that a stretched cluster solution isn't meant to replace a comprehensive multi-region Disaster Recovery strategy that can be employed to ensure application availability. The reason is because a Disaster Recovery solution typically has separate management and control planes in separate Azure regions. Azure VMware Solution stretched clusters have a single management and control plane stretched across two availability zones within the same Azure region. For example, one vCenter Server, one NSX-T Manager cluster, one NSX-T Data Center Edge VM pair.
## Deploy a stretched cluster private cloud
-Currently, Azure VMware Solution stretched clusters is in a limited availability phase. In the limited availability phase, you must contact Microsoft to request and qualify for support.
+Currently, Azure VMware Solution stretched clusters are in preview. While in preview, you must contact Microsoft to request and qualify for support.
## Prerequisites
Azure VMware Solution stretched clusters are available in the following regions:
Currently, only the three regions listed above are planned to support stretched clusters.
-### What kind of SLA does Azure VMware Solution provide with the stretched clusters limited availability release?
+### What kind of SLA does Azure VMware Solution provide with the stretched clusters (preview) release?
A private cloud created with a vSAN stretched cluster is designed to offer a 99.99% infrastructure availability commitment when the following conditions exist:
- A minimum of 6 nodes are deployed in the cluster (3 in each availability zone)
No. A stretched cluster is created between two availability zones, while the thi
- Scale out and scale-in of stretched clusters can only happen in pairs. A minimum of 6 nodes and a maximum of 16 nodes are supported in a stretched cluster environment.
- Customer workload VMs are restarted with a medium vSphere HA priority. Management VMs have the highest restart priority.
- The solution relies on vSphere HA and vSAN for restarts and replication. Recovery time objective (RTO) is determined by the amount of time it takes vSphere HA to restart a VM on the surviving AZ after the failure of a single AZ.
-- Preview features for standard private cloud environments aren't supported in a stretched cluster environment. For example, external storage options like disk pools and Azure NetApp Files (ANF), Customer Management Keys, Public IP via NSX-T Data Center Edge, and others.
+- Preview and recent GA features for standard private cloud environments aren't supported in a stretched cluster environment.
- Disaster recovery add-ons like VMware SRM, Zerto, and JetStream are currently not supported in a stretched cluster environment.

### What kind of latencies should I expect between the availability zones (AZs)?
-vSAN stretched clusters operate within a 5 minute round trip time (RTT) and 10 Gb/s or greater bandwidth between the AZs that host the workload VMs. The Azure VMware Solution stretched cluster deployment follows that guiding principle. Consider that information when deploying applications (with SFTT of dual site mirroring, which uses synchronous writes) that have stringent latency requirements.
+vSAN stretched clusters operate within a 5-millisecond round trip time (RTT) and 10 Gb/s or greater bandwidth between the AZs that host the workload VMs. The Azure VMware Solution stretched cluster deployment follows that guiding principle. Consider that information when deploying applications (with SFTT of dual site mirroring, which uses synchronous writes) that have stringent latency requirements.
### Can I mix stretched and standard clusters in my private cloud?
Customers will be charged based on the number of nodes deployed within the priva
### Will I be charged for the witness node and for inter-AZ traffic?
-No. While in limited availability, customers won't see a charge for the witness node and the inter-AZ traffic. The witness node is entirely service managed, and Azure VMware Solution provides the required lifecycle management of the witness node. As the entire solution is service managed, the customer only needs to identify the appropriate SPBM policy to set for the workload virtual machines. The rest is managed by Microsoft.
+No. While in preview, customers won't see a charge for the witness node and the inter-AZ traffic. The witness node is entirely service managed, and Azure VMware Solution provides the required lifecycle management of the witness node. As the entire solution is service managed, the customer only needs to identify the appropriate SPBM policy to set for the workload virtual machines. The rest is managed by Microsoft.
### Which SKUs are available?
communication-services Direct Routing Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/direct-routing-infrastructure.md
The port ranges of the Media Processors are shown in the following table:
|Traffic|From|To|Source port|Destination port|
|: |: |: |: |: |
-|UDP/SRTP|Media Processor|SBC|3478–3481 and 49152–53247|Defined on the SBC|
-|UDP/SRTP|SBC|Media Processor|Defined on the SBC|3478–3481 and 49152–53247|
+|UDP/SRTP|Media Processor|SBC|49152–53247|Defined on the SBC|
+|UDP/SRTP|SBC|Media Processor|Defined on the SBC|49152–53247|
> [!NOTE]
> Microsoft recommends at least two ports per concurrent call on the SBC.
communication-services Calling Sdk Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/calling-sdk-features.md
The Communication Services Calling SDK supports the following streaming configur
| Limit | Web | Windows/Android/iOS |
| - | | -- |
-| **Maximum # of outgoing local streams that can be sent simultaneously** | 1 video or 1 screen sharing | 1 video + 1 screen sharing |
+| **Maximum # of outgoing local streams that can be sent simultaneously** | 1 video and 1 screen sharing | 1 video + 1 screen sharing |
| **Maximum # of incoming remote streams that can be rendered simultaneously** | 4 videos + 1 screen sharing | 6 videos + 1 screen sharing |

While the Calling SDK won't enforce these limits, your users may experience performance degradation if they're exceeded.
communication-services Virtual Visits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/virtual-visits.md
And then make sure "Add online meeting" is enabled.
### Step 2 – Sample Builder
-Use the Sample Builder to customize the consumer experience. You can reach the Sampler Builder using this [link](https://aka.ms/acs-sample-builder), or navigating to the page within the Azure Communication Services resource in the Azure portal. Step through the Sample Builder wizard and select Industry template, then configure if Chat or Screen Sharing should be enabled. Change themes and text to you match your application. You can preview your configuration live from the page in both Desktop and Mobile browser form-factors.
+Use the Sample Builder to customize the consumer experience. You can reach the Sample Builder using this [link](https://aka.ms/acs-sample-builder) or by navigating to the page within the Azure Communication Services resource in the Azure portal. Step through the Sample Builder wizard: select an Industry template, configure the call experience (Chat or Screen Sharing availability), change themes and text to match your application style, and gather valuable feedback through post-call survey options. You can preview your configuration live from the page in both Desktop and Mobile browser form-factors.
[ ![Screenshot of Sample builder start page.](./media/virtual-visits/sample-builder-themes.png)](./media/virtual-visits/sample-builder-themes.png#lightbox)
Opening the App Service's URL and navigating to `https://<YOUR URL>/VISIT` all
### Step 5 - Set deployed app URL in Bookings
-Copy your application url into your calendar Business information setting by going to https://outlook.office.com/bookings/businessinformation.
-
-![Screenshot of final view of bookings business information.](./media/virtual-visits/bookings-acs-app-integration-url.png)
+Enter the application URL followed by "/visit" in the "Deployed App URL" field at https://outlook.office.com/bookings/businessinformation.
## Going to production

The Sample Builder gives you the basics of a Microsoft 365 and Azure virtual visit: consumer scheduling via Bookings, consumer joins via custom app, and the provider joins via Teams. However, there are several things to consider as you take this scenario to production.
cost-management-billing Programmatically Create Subscription Enterprise Agreement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/programmatically-create-subscription-enterprise-agreement.md
Using one of the following methods, you'll create a subscription alias name. We
- Start with a letter and end with an alphanumeric character
- Don't use periods
-An alias is used for simple substitution of a user-defined string instead of the subscription GUID. In other words, you can use it as a shortcut. You can learn more about alias at [Alias - Create](/rest/api/subscription/2020-09-01/alias/create). In the following examples, `sampleAlias` is created but you can use any string you like.
+An alias is used for simple substitution of a user-defined string instead of the subscription GUID. In other words, you can use it as a shortcut. You can learn more about alias at [Alias - Create](/rest/api/subscription/2021-10-01/alias/create). In the following examples, `sampleAlias` is created but you can use any string you like.
If you have multiple user roles in addition to the Account Owner role, then you must retrieve the account ID from the Azure portal. Then you can use the ID to programmatically create subscriptions.
If you have multiple user roles in addition to the Account Owner role, then you
Call the PUT API to create a subscription creation request/alias.

```json
-PUT https://management.azure.com/providers/Microsoft.Subscription/aliases/sampleAlias?api-version=2020-09-01
+PUT https://management.azure.com/providers/Microsoft.Subscription/aliases/sampleAlias?api-version=2021-10-01
```

In the request body, provide the `id` from one of your `enrollmentAccounts` as the `billingScope`.
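For illustration only, a minimal request body might look like the following sketch. The enrollment account segments in braces are placeholders, and `sampleAlias` and `Production` follow the examples in this article.

```json
{
  "properties": {
    "displayName": "sampleAlias",
    "billingScope": "/providers/Microsoft.Billing/billingAccounts/{billingAccountName}/enrollmentAccounts/{enrollmentAccountName}",
    "workload": "Production"
  }
}
```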
You can do a GET on the same URL to get the status of the request.
### Request

```json
-GET https://management.azure.com/providers/Microsoft.Subscription/aliases/sampleAlias?api-version=2020-09-01
+GET https://management.azure.com/providers/Microsoft.Subscription/aliases/sampleAlias?api-version=2021-10-01
```

### Response
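The response body is truncated in this digest. As a rough sketch of its general shape, a completed request typically reports the alias and its provisioning state; every value below is a placeholder:

```json
{
  "id": "/providers/Microsoft.Subscription/aliases/sampleAlias",
  "name": "sampleAlias",
  "type": "Microsoft.Subscription/aliases",
  "properties": {
    "subscriptionId": "00000000-0000-0000-0000-000000000000",
    "provisioningState": "Succeeded"
  }
}
```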
The following ARM template creates a subscription. For `billingScope`, provide t
"scope": "/", "name": "[parameters('subscriptionAliasName')]", "type": "Microsoft.Subscription/aliases",
- "apiVersion": "2020-09-01",
+ "apiVersion": "2021-10-01",
"properties": { "workLoad": "Production", "displayName": "[parameters('subscriptionAliasName')]",
param subscriptionAliasName string
@description('Provide the full resource ID of billing scope to use for subscription creation.')
param billingScope string
-resource subscriptionAlias 'Microsoft.Subscription/aliases@2020-09-01' = {
+resource subscriptionAlias 'Microsoft.Subscription/aliases@2021-10-01' = {
scope: tenant()
name: subscriptionAliasName
properties: {
cost-management-billing Programmatically Create Subscription Microsoft Customer Agreement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/programmatically-create-subscription-microsoft-customer-agreement.md
The following example creates a subscription named *Dev Team subscription* for t
### [REST](#tab/rest)

```json
-PUT https://management.azure.com/providers/Microsoft.Subscription/aliases/sampleAlias?api-version=2020-09-01
+PUT https://management.azure.com/providers/Microsoft.Subscription/aliases/sampleAlias?api-version=2021-10-01
```

### Request body
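The body itself is truncated here. As a hedged sketch: for a Microsoft Customer Agreement, the `billingScope` points at an invoice section, with the segments in braces as placeholders; the display name follows the example named earlier in this article.

```json
{
  "properties": {
    "displayName": "Dev Team subscription",
    "billingScope": "/providers/Microsoft.Billing/billingAccounts/{billingAccountName}/billingProfiles/{billingProfileName}/invoiceSections/{invoiceSectionName}",
    "workload": "Production"
  }
}
```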
You can do a GET on the same URL to get the status of the request.
### Request

```json
-GET https://management.azure.com/providers/Microsoft.Subscription/aliases/sampleAlias?api-version=2020-09-01
+GET https://management.azure.com/providers/Microsoft.Subscription/aliases/sampleAlias?api-version=2021-10-01
```

### Response
The following template creates a subscription. For `billingScope`, provide the i
"scope": "/", "name": "[parameters('subscriptionAliasName')]", "type": "Microsoft.Subscription/aliases",
- "apiVersion": "2020-09-01",
+ "apiVersion": "2021-10-01",
"properties": { "workLoad": "Production", "displayName": "[parameters('subscriptionAliasName')]",
param subscriptionAliasName string
@description('Provide the full resource ID of billing scope to use for subscription creation.')
param billingScope string
-resource subscriptionAlias 'Microsoft.Subscription/aliases@2020-09-01' = {
+resource subscriptionAlias 'Microsoft.Subscription/aliases@2021-10-01' = {
scope: tenant()
name: subscriptionAliasName
properties: {
cost-management-billing Programmatically Create Subscription Microsoft Partner Agreement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/programmatically-create-subscription-microsoft-partner-agreement.md
The following example creates a subscription named *Dev Team subscription* for
### [REST](#tab/rest)

```json
-PUT https://management.azure.com/providers/Microsoft.Subscription/aliases/sampleAlias?api-version=2020-09-01
+PUT https://management.azure.com/providers/Microsoft.Subscription/aliases/sampleAlias?api-version=2021-10-01
```

### Request body
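Again truncated in this digest. As a sketch under the same assumptions: for a Microsoft Partner Agreement, the `billingScope` points at a customer under the partner's billing account, with the segments in braces as placeholders.

```json
{
  "properties": {
    "displayName": "Dev Team subscription",
    "billingScope": "/providers/Microsoft.Billing/billingAccounts/{billingAccountName}/customers/{customerName}",
    "workload": "Production"
  }
}
```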
You can do a GET on the same URL to get the status of the request.
### Request

```json
-GET https://management.azure.com/providers/Microsoft.Subscription/aliases/sampleAlias?api-version=2020-09-01
+GET https://management.azure.com/providers/Microsoft.Subscription/aliases/sampleAlias?api-version=2021-10-01
```

### Response
The following ARM template creates a subscription. For `billingScope`, provide t
"scope": "/", "name": "[parameters('subscriptionAliasName')]", "type": "Microsoft.Subscription/aliases",
- "apiVersion": "2020-09-01",
+ "apiVersion": "2021-10-01",
"properties": { "workLoad": "Production", "displayName": "[parameters('subscriptionAliasName')]",
param subscriptionAliasName string
@description('Provide the full resource ID of billing scope to use for subscription creation.')
param billingScope string
-resource subscriptionAlias 'Microsoft.Subscription/aliases@2020-09-01' = {
+resource subscriptionAlias 'Microsoft.Subscription/aliases@2021-10-01' = {
scope: tenant()
name: subscriptionAliasName
properties: {
defender-for-iot Alert Engine Messages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/alert-engine-messages.md
See [What's new in Microsoft Defender for IoT?](release-notes.md#whats-new-in-m
Policy engine alerts describe detected deviations from learned baseline behavior.
-| Title | Description | Severity |
-|--|--|--|
-| Beckhoff Software Changed | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major |
-| Database Login Failed | A failed sign-in attempt was detected from a source device to a destination server. This might be the result of human error, but could also indicate a malicious attempt to compromise the server or data on it. | Major |
-| Emerson ROC Firmware Version Changed | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major |
-| External address within the network communicated with Internet | A source device defined as part of your network is communicating with Internet addresses. The source isn't authorized to communicate with Internet addresses. | Critical |
-| Field Device Discovered Unexpectedly | A new source device was detected on the network but hasn't been authorized. | Major |
-| Firmware Change Detected | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major |
-| Firmware Version Changed | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major |
-| Foxboro I/A Unauthorized Operation | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| FTP Login Failed | A failed sign-in attempt was detected from a source device to a destination server. This alert might be the result of human error, but could also indicate a malicious attempt to compromise the server or data on it. | Major |
-| Function Code Raised Unauthorized Exception | A source device (secondary) returned an exception to a destination device (primary). | Major |
-| GOOSE Message Type Settings | Message (identified by protocol ID) settings were changed on a source device. | Warning |
-| Honeywell Firmware Version Changed | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major |
-| * Illegal HTTP Communication | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| Internet Access Detected | A source device defined as part of your network is communicating with Internet addresses. The source isn't authorized to communicate with Internet addresses. | Major |
-| Mitsubishi Firmware Version Changed | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major |
-| Modbus Address Range Violation | A primary device requested access to a new secondary memory address. | Major |
-| Modbus Firmware Version Changed | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major |
-| New Activity Detected - CIP Class | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| New Activity Detected - CIP Class Service | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| New Activity Detected - CIP PCCC Command | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| New Activity Detected - CIP Symbol | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| New Activity Detected - EtherNet/IP I/O Connection | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| New Activity Detected - EtherNet/IP Protocol Command | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| New Activity Detected - GSM Message Code | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| New Activity Detected - LonTalk Command Codes | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| New Port Discovery | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Warning |
-| New Activity Detected - LonTalk Network Variable | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| New Activity Detected - Ovation Data Request | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| New Activity Detected - Read/Write Command (AMS Index Group) | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| New Activity Detected - Read/Write Command (AMS Index Offset) | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| New Activity Detected - Unauthorized DeltaV Message Type | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| New Activity Detected - Unauthorized DeltaV ROC Operation | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| New Activity Detected - Unauthorized RPC Message Type | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| New Activity Detected - Using AMS Protocol Command | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| New Activity Detected - Using Siemens SICAM Command | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| New Activity Detected - Using Suitelink Protocol command | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| New Activity Detected - Using Suitelink Protocol sessions | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| New Activity Detected - Using Yokogawa VNetIP Command | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| New Asset Detected | A new source device was detected on the network but hasn't been authorized. <br><br>This alert applies to devices discovered in OT subnets. New devices discovered in IT subnets don't trigger an alert.| Major |
-| New LLDP Device Configuration | A new source device was detected on the network but hasn't been authorized. | Major |
-| Omron FINS Unauthorized Command | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| S7 Plus PLC Firmware Changed | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major |
-| Sampled Values Message Type Settings | Message (identified by protocol ID) settings were changed on a source device. | Warning |
-| Suspicion of Illegal Integrity Scan | A scan was detected on a DNP3 source device (outstation). This scan wasn't authorized as learned traffic on your network. | Major |
-| Toshiba Computer Link Unauthorized Command | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Minor |
-| Unauthorized ABB Totalflow File Operation | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| Unauthorized ABB Totalflow Register Operation | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| Unauthorized Access to Siemens S7 Data Block | A source device attempted to access a resource on another device. An access attempt to this resource between these two devices hasn't been authorized as learned traffic on your network. | Warning |
-| Unauthorized Access to Siemens S7 Plus Object | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| Unauthorized Access to Wonderware Tag | A source device attempted to access a resource on another device. An access attempt to this resource between these two devices hasn't been authorized as learned traffic on your network. | Major |
-| Unauthorized BACNet Object Access | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| Unauthorized BACNet Route | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| Unauthorized Database Login | A sign-in attempt between a source client and destination server was detected. Communication between these devices hasn't been authorized as learned traffic on your network. | Major |
-| Unauthorized Database Operation | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| Unauthorized Emerson ROC Operation | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| Unauthorized GE SRTP File Access | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| Unauthorized GE SRTP Protocol Command | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| Unauthorized GE SRTP System Memory Operation | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| Unauthorized HTTP Activity | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| * Unauthorized HTTP SOAP Action | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| * Unauthorized HTTP User Agent | An unauthorized application was detected on a source device. The application hasn't been authorized as a learned application on your network. | Major |
-| Unauthorized Internet Connectivity Detected | A source device defined as part of your network is communicating with Internet addresses. The source isn't authorized to communicate with Internet addresses. | Critical |
-| Unauthorized Mitsubishi MELSEC Command | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| Unauthorized MMS Program Access | A source device attempted to access a resource on another device. An access attempt to this resource between these two devices hasn't been authorized as learned traffic on your network. | Major |
-| Unauthorized MMS Service | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| Unauthorized Multicast/Broadcast Connection | A Multicast/Broadcast connection was detected between a source device and other devices. Multicast/Broadcast communication isn't authorized. | Critical |
-| Unauthorized Name Query | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| Unauthorized OPC UA Activity | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| Unauthorized OPC UA Request/Response | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| Unauthorized Operation was detected by a User Defined Rule | Traffic was detected between two devices. This activity is unauthorized based on a Custom Alert Rule defined by a user. | Major |
-| Unauthorized PLC Configuration Read | The source device isn't defined as a programming device but performed a read/write operation on a destination controller. Programming changes should only be performed by programming devices. A programming application may have been installed on this device. | Warning |
-| Unauthorized PLC Configuration Write | The source device sent a command to read/write the program of a destination controller. This activity wasn't previously seen. | Major |
-| Unauthorized PLC Program Upload | The source device sent a command to read/write the program of a destination controller. This activity wasn't previously seen. | Major |
-| Unauthorized PLC Programming | The source device isn't defined as a programming device but performed a read/write operation on a destination controller. Programming changes should only be performed by programming devices. A programming application may have been installed on this device. | Critical |
-| Unauthorized Profinet Frame Type | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| Unauthorized SAIA S-Bus Command | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| Unauthorized Siemens S7 Execution of Control Function | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| Unauthorized Siemens S7 Execution of User Defined Function | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| Unauthorized Siemens S7 Plus Block Access | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| Unauthorized Siemens S7 Plus Operation | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| Unauthorized SMB Login | A sign-in attempt between a source client and destination server was detected. Communication between these devices hasn't been authorized as learned traffic on your network. | Major |
-| Unauthorized SNMP Operation | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| Unauthorized SSH Access | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| Unauthorized Windows Process | An unauthorized application was detected on a source device. The application hasn't been authorized as a learned application on your network. | Major |
-| Unauthorized Windows Service | An unauthorized application was detected on a source device. The application hasn't been authorized as a learned application on your network. | Major |
+| Title | Description | Severity | Category |
+|--|--|--|--|
+| Beckhoff Software Changed | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change |
+| Database Login Failed | A failed sign-in attempt was detected from a source device to a destination server. This might be the result of human error, but could also indicate a malicious attempt to compromise the server or data on it. | Major | Authentication |
+| Emerson ROC Firmware Version Changed | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change |
+| External address within the network communicated with Internet | A source device defined as part of your network is communicating with Internet addresses. The source isn't authorized to communicate with Internet addresses. | Critical | Internet Access |
+| Field Device Discovered Unexpectedly | A new source device was detected on the network but hasn't been authorized. | Major | Discovery |
+| Firmware Change Detected | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change |
+| Firmware Version Changed | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change |
+| Foxboro I/A Unauthorized Operation | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| FTP Login Failed | A failed sign-in attempt was detected from a source device to a destination server. This alert might be the result of human error, but could also indicate a malicious attempt to compromise the server or data on it. | Major | Authentication |
+| Function Code Raised Unauthorized Exception | A source device (secondary) returned an exception to a destination device (primary). | Major | Command Failures |
+| GOOSE Message Type Settings | Message (identified by protocol ID) settings were changed on a source device. | Warning | Unauthorized Communication Behavior |
+| Honeywell Firmware Version Changed | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change |
+| * Illegal HTTP Communication | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Abnormal HTTP Communication Behavior |
+| Internet Access Detected | A source device defined as part of your network is communicating with Internet addresses. The source isn't authorized to communicate with Internet addresses. | Major | Internet Access |
+| Mitsubishi Firmware Version Changed | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change |
+| Modbus Address Range Violation | A primary device requested access to a new secondary memory address. | Major | Unauthorized Communication Behavior |
+| Modbus Firmware Version Changed | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change |
+| New Activity Detected - CIP Class | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| New Activity Detected - CIP Class Service | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| New Activity Detected - CIP PCCC Command | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| New Activity Detected - CIP Symbol | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| New Activity Detected - EtherNet/IP I/O Connection | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| New Activity Detected - EtherNet/IP Protocol Command | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| New Activity Detected - GSM Message Code | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| New Activity Detected - LonTalk Command Codes | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| New Port Discovery | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Warning | Discovery |
+| New Activity Detected - LonTalk Network Variable | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| New Activity Detected - Ovation Data Request | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| New Activity Detected - Read/Write Command (AMS Index Group) | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Configuration Changes |
+| New Activity Detected - Read/Write Command (AMS Index Offset) | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Configuration Changes |
+| New Activity Detected - Unauthorized DeltaV Message Type | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| New Activity Detected - Unauthorized DeltaV ROC Operation | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| New Activity Detected - Unauthorized RPC Message Type | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| New Activity Detected - Using AMS Protocol Command | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| New Activity Detected - Using Siemens SICAM Command | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| New Activity Detected - Using Suitelink Protocol command | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| New Activity Detected - Using Suitelink Protocol sessions | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| New Activity Detected - Using Yokogawa VNetIP Command | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| New Asset Detected | A new source device was detected on the network but hasn't been authorized. <br><br>This alert applies to devices discovered in OT subnets. New devices discovered in IT subnets don't trigger an alert. | Major | Discovery |
+| New LLDP Device Configuration | A new source device was detected on the network but hasn't been authorized. | Major | Configuration Changes |
+| Omron FINS Unauthorized Command | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| S7 Plus PLC Firmware Changed | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change |
+| Sampled Values Message Type Settings | Message (identified by protocol ID) settings were changed on a source device. | Warning | Unauthorized Communication Behavior |
+| Suspicion of Illegal Integrity Scan | A scan was detected on a DNP3 source device (outstation). This scan wasn't authorized as learned traffic on your network. | Major | Scan |
+| Toshiba Computer Link Unauthorized Command | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Minor | Unauthorized Communication Behavior |
+| Unauthorized ABB Totalflow File Operation | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| Unauthorized ABB Totalflow Register Operation | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| Unauthorized Access to Siemens S7 Data Block | A source device attempted to access a resource on another device. An access attempt to this resource between these two devices hasn't been authorized as learned traffic on your network. | Warning | Unauthorized Communication Behavior |
+| Unauthorized Access to Siemens S7 Plus Object | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| Unauthorized Access to Wonderware Tag | A source device attempted to access a resource on another device. An access attempt to this resource between these two devices hasn't been authorized as learned traffic on your network. | Major | Unauthorized Communication Behavior |
+| Unauthorized BACNet Object Access | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| Unauthorized BACNet Route | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| Unauthorized Database Login | A sign-in attempt between a source client and destination server was detected. Communication between these devices hasn't been authorized as learned traffic on your network. | Major | Authentication |
+| Unauthorized Database Operation | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Abnormal Communication Behavior |
+| Unauthorized Emerson ROC Operation | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| Unauthorized GE SRTP File Access | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| Unauthorized GE SRTP Protocol Command | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| Unauthorized GE SRTP System Memory Operation | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| Unauthorized HTTP Activity | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Abnormal HTTP Communication Behavior |
+| * Unauthorized HTTP SOAP Action | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Abnormal HTTP Communication Behavior |
+| * Unauthorized HTTP User Agent | An unauthorized application was detected on a source device. The application hasn't been authorized as a learned application on your network. | Major | Abnormal HTTP Communication Behavior |
+| Unauthorized Internet Connectivity Detected | A source device defined as part of your network is communicating with Internet addresses. The source isn't authorized to communicate with Internet addresses. | Critical | Internet Access |
+| Unauthorized Mitsubishi MELSEC Command | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| Unauthorized MMS Program Access | A source device attempted to access a resource on another device. An access attempt to this resource between these two devices hasn't been authorized as learned traffic on your network. | Major | Programming |
+| Unauthorized MMS Service | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| Unauthorized Multicast/Broadcast Connection | A Multicast/Broadcast connection was detected between a source device and other devices. Multicast/Broadcast communication isn't authorized. | Critical | Abnormal Communication Behavior |
+| Unauthorized Name Query | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Abnormal Communication Behavior |
+| Unauthorized OPC UA Activity | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| Unauthorized OPC UA Request/Response | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| Unauthorized Operation was detected by a User Defined Rule | Traffic was detected between two devices. This activity is unauthorized based on a Custom Alert Rule defined by a user. | Major | Custom Alerts |
+| Unauthorized PLC Configuration Read | The source device isn't defined as a programming device but performed a read/write operation on a destination controller. Programming changes should only be performed by programming devices. A programming application may have been installed on this device. | Warning | Configuration Changes |
+| Unauthorized PLC Configuration Write | The source device sent a command to read/write the program of a destination controller. This activity wasn't previously seen. | Major | Configuration Changes |
+| Unauthorized PLC Program Upload | The source device sent a command to read/write the program of a destination controller. This activity wasn't previously seen. | Major | Programming |
+| Unauthorized PLC Programming | The source device isn't defined as a programming device but performed a read/write operation on a destination controller. Programming changes should only be performed by programming devices. A programming application may have been installed on this device. | Critical | Programming |
+| Unauthorized Profinet Frame Type | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| Unauthorized SAIA S-Bus Command | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| Unauthorized Siemens S7 Execution of Control Function | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| Unauthorized Siemens S7 Execution of User Defined Function | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| Unauthorized Siemens S7 Plus Block Access | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| Unauthorized Siemens S7 Plus Operation | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| Unauthorized SMB Login | A sign-in attempt between a source client and destination server was detected. Communication between these devices hasn't been authorized as learned traffic on your network. | Major | Authentication |
+| Unauthorized SNMP Operation | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Abnormal Communication Behavior |
+| Unauthorized SSH Access | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Remote Access |
+| Unauthorized Windows Process | An unauthorized application was detected on a source device. The application hasn't been authorized as a learned application on your network. | Major | Abnormal Communication Behavior |
+| Unauthorized Windows Service | An unauthorized application was detected on a source device. The application hasn't been authorized as a learned application on your network. | Major | Abnormal Communication Behavior |
| Unauthorized Operation was detected by a User Defined Rule | New traffic parameters were detected. This parameter combination violates a user-defined rule. | Major | Custom Alerts |
-| Unpermitted Modbus Schneider Electric Extension | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| Unpermitted Usage of ASDU Types | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| Unpermitted Usage of DNP3 Function Code | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| Unpermitted Usage of Internal Indication (IIN) | A DNP3 source device (outstation) reported an internal indication (IIN) that hasn't authorized as learned traffic on your network. | Major |
-| Unpermitted Usage of Modbus Function Code | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| Unpermitted Modbus Schneider Electric Extension | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| Unpermitted Usage of ASDU Types | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| Unpermitted Usage of DNP3 Function Code | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| Unpermitted Usage of Internal Indication (IIN) | A DNP3 source device (outstation) reported an internal indication (IIN) that hasn't been authorized as learned traffic on your network. | Major | Illegal Commands |
+| Unpermitted Usage of Modbus Function Code | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
## Anomaly engine alerts

Anomaly engine alerts describe detected anomalies in network activity.
-| Title | Description | Severity |
-|--|--|--|
-| Abnormal Exception Pattern in Slave | An excessive number of errors were detected on a source device. This alert may be the result of an operational issue. | Minor |
-| * Abnormal HTTP Header Length | The source device sent an abnormal message. This alert may indicate an attempt to attack the destination device. | Critical |
-| * Abnormal Number of Parameters in HTTP Header | The source device sent an abnormal message. This alert may indicate an attempt to attack the destination device. | Critical |
-| Abnormal Periodic Behavior In Communication Channel | A change in the frequency of communication between the source and destination devices was detected. | Minor |
-| Abnormal Termination of Applications | An excessive number of stop commands were detected on a source device. This alert may be the result of an operational issue or an attempt to manipulate the device. | Major |
-| Abnormal Traffic Bandwidth | Abnormal bandwidth was detected on a channel. Bandwidth appears to be lower/higher than previously detected. For details, work with the Total Bandwidth widget. | Warning |
-| Abnormal Traffic Bandwidth Between Devices | Abnormal bandwidth was detected on a channel. Bandwidth appears to be lower/higher than previously detected. For details, work with the Total Bandwidth widget. | Warning |
-| Address Scan Detected | A source device was detected scanning network devices. This device hasn't been authorized as a network scanning device. | Critical |
-| ARP Address Scan Detected | A source device was detected scanning network devices using Address Resolution Protocol (ARP). This device address hasn't been authorized as valid ARP scanning address. | Critical |
-| ARP Address Scan Detected | A source device was detected scanning network devices using Address Resolution Protocol (ARP). This device address hasn't been authorized as valid ARP scanning address. | Critical |
-| ARP Spoofing | An abnormal quantity of packets was detected in the network. This alert could indicate an attack, for example, an ARP spoofing or ICMP flooding attack. | Warning |
-| Excessive Login Attempts | A source device was seen performing excessive sign-in attempts to a destination server. This alert may indicate a brute force attack. The server may be compromised by a malicious actor. | Critical |
-| Excessive Number of Sessions | A source device was seen performing excessive sign-in attempts to a destination server. This may indicate a brute force attack. The server may be compromised by a malicious actor. | Critical |
-| Excessive Restart Rate of an Outstation | An excessive number of restart commands were detected on a source device. These alerts may be the result of an operational issue or an attempt to manipulate the device. | Major |
-| Excessive SMB login attempts | A source device was seen performing excessive sign-in attempts to a destination server. This may indicate a brute force attack. The server may be compromised by a malicious actor. | Critical |
-| ICMP Flooding | An abnormal quantity of packets was detected in the network. This alert could indicate an attack, for example, an ARP spoofing or ICMP flooding attack. | Warning |
-|* Illegal HTTP Header Content | The source device initiated an invalid request. | Critical |
-| Inactive Communication Channel | A communication channel between two devices was inactive during a period in which activity is usually observed. This might indicate that the program generating this traffic was changed, or the program might be unavailable. It's recommended to review the configuration of installed program and verify that it's configured properly. | Warning |
-| Long Duration Address Scan Detected | A source device was detected scanning network devices. This device hasn't been authorized as a network scanning device. | Critical |
-| Password Guessing Attempt Detected | A source device was seen performing excessive sign-in attempts to a destination server. This may indicate a brute force attack. The server may be compromised by a malicious actor. | Critical |
-| PLC Scan Detected | A source device was detected scanning network devices. This device hasn't been authorized as a network scanning device. | Critical |
-| Port Scan Detected | A source device was detected scanning network devices. This device hasn't been authorized as a network scanning device. | Critical |
-| Unexpected message length | The source device sent an abnormal message. This alert may indicate an attempt to attack the destination device. | Critical |
-| Unexpected Traffic for Standard Port | Traffic was detected on a device using a port reserved for another protocol. | Major |
+| Title | Description | Severity | Category |
+|--|--|--|--|
+| Abnormal Exception Pattern in Slave | An excessive number of errors were detected on a source device. This alert may be the result of an operational issue. | Minor | Abnormal Communication Behavior |
+| * Abnormal HTTP Header Length | The source device sent an abnormal message. This alert may indicate an attempt to attack the destination device. | Critical | Abnormal HTTP Communication Behavior |
+| * Abnormal Number of Parameters in HTTP Header | The source device sent an abnormal message. This alert may indicate an attempt to attack the destination device. | Critical | Abnormal HTTP Communication Behavior |
+| Abnormal Periodic Behavior In Communication Channel | A change in the frequency of communication between the source and destination devices was detected. | Minor | Abnormal Communication Behavior |
+| Abnormal Termination of Applications | An excessive number of stop commands were detected on a source device. This alert may be the result of an operational issue or an attempt to manipulate the device. | Major | Abnormal Communication Behavior |
+| Abnormal Traffic Bandwidth | Abnormal bandwidth was detected on a channel. Bandwidth appears to be lower/higher than previously detected. For details, work with the Total Bandwidth widget. | Warning | Bandwidth Anomalies |
+| Abnormal Traffic Bandwidth Between Devices | Abnormal bandwidth was detected on a channel. Bandwidth appears to be lower/higher than previously detected. For details, work with the Total Bandwidth widget. | Warning | Bandwidth Anomalies |
+| Address Scan Detected | A source device was detected scanning network devices. This device hasn't been authorized as a network scanning device. | Critical | Scan |
+| ARP Address Scan Detected | A source device was detected scanning network devices using Address Resolution Protocol (ARP). This device address hasn't been authorized as valid ARP scanning address. | Critical | Scan |
+| ARP Spoofing | An abnormal quantity of packets was detected in the network. This alert could indicate an attack, for example, an ARP spoofing or ICMP flooding attack. | Warning | Abnormal Communication Behavior |
+| Excessive Login Attempts | A source device was seen performing excessive sign-in attempts to a destination server. This alert may indicate a brute force attack. The server may be compromised by a malicious actor. | Critical | Authentication |
+| Excessive Number of Sessions | A source device was seen performing excessive sign-in attempts to a destination server. This may indicate a brute force attack. The server may be compromised by a malicious actor. | Critical | Abnormal Communication Behavior |
+| Excessive Restart Rate of an Outstation | An excessive number of restart commands were detected on a source device. These alerts may be the result of an operational issue or an attempt to manipulate the device. | Major | Restart/Stop Commands |
+| Excessive SMB login attempts | A source device was seen performing excessive sign-in attempts to a destination server. This may indicate a brute force attack. The server may be compromised by a malicious actor. | Critical | Authentication |
+| ICMP Flooding | An abnormal quantity of packets was detected in the network. This alert could indicate an attack, for example, an ARP spoofing or ICMP flooding attack. | Warning | Abnormal Communication Behavior |
+| * Illegal HTTP Header Content | The source device initiated an invalid request. | Critical | Abnormal HTTP Communication Behavior |
+| Inactive Communication Channel | A communication channel between two devices was inactive during a period in which activity is usually observed. This might indicate that the program generating this traffic was changed, or the program might be unavailable. It's recommended to review the configuration of the installed program and verify that it's configured properly. | Warning | Unresponsive |
+| Long Duration Address Scan Detected | A source device was detected scanning network devices. This device hasn't been authorized as a network scanning device. | Critical | Scan |
+| Password Guessing Attempt Detected | A source device was seen performing excessive sign-in attempts to a destination server. This may indicate a brute force attack. The server may be compromised by a malicious actor. | Critical | Authentication |
+| PLC Scan Detected | A source device was detected scanning network devices. This device hasn't been authorized as a network scanning device. | Critical | Scan |
+| Port Scan Detected | A source device was detected scanning network devices. This device hasn't been authorized as a network scanning device. | Critical | Scan |
+| Unexpected message length | The source device sent an abnormal message. This alert may indicate an attempt to attack the destination device. | Critical | Abnormal Communication Behavior |
+| Unexpected Traffic for Standard Port | Traffic was detected on a device using a port reserved for another protocol. | Major | Abnormal Communication Behavior |
## Protocol violation engine alerts

Protocol engine alerts describe detected deviations in the packet structure or field values compared to protocol specifications.
-| Title | Description | Severity |
-|--|--|--|
-| Excessive Malformed Packets In a Single Session | An abnormal number of malformed packets sent from the source device to the destination device. This alert might indicate erroneous communications, or an attempt to manipulate the targeted device. | Major |
-| Firmware Update | A source device sent a command to update firmware on a destination device. Verify that recent programming, configuration and firmware upgrades made to the destination device are valid. | Warning |
-| Function Code Not Supported by Outstation | The destination device received an invalid request. | Major |
-| Illegal BACNet message | The source device initiated an invalid request. | Major |
-| Illegal Connection Attempt on Port 0 | A source device attempted to connect to destination device on port number zero (0). For TCP, port 0 is reserved and can't be used. For UDP, the port is optional and a value of 0 means no port. There's usually no service on a system that listens on port 0. This event may indicate an attempt to attack the destination device, or indicate that an application was programmed incorrectly. | Minor |
-| Illegal DNP3 Operation | The source device initiated an invalid request. | Major |
-| Illegal MODBUS Operation (Exception Raised by Master) | The source device initiated an invalid request. | Major |
-| Illegal MODBUS Operation (Function Code Zero) | The source device initiated an invalid request. | Major |
-| Illegal Protocol Version | The source device initiated an invalid request. | Major |
-| Incorrect Parameter Sent to Outstation | The destination device received an invalid request. | Major |
-| Initiation of an Obsolete Function Code (Initialize Data) | The source device initiated an invalid request. | Minor |
-| Initiation of an Obsolete Function Code (Save Config) | The source device initiated an invalid request. | Minor |
-| Master Requested an Application Layer Confirmation | The source device initiated an invalid request. | Warning |
-| Modbus Exception | A source device (secondary) returned an exception to a destination device (primary). | Major |
-| Slave Device Received Illegal ASDU Type | The destination device received an invalid request. | Major |
-| Slave Device Received Illegal Command Cause of Transmission | The destination device received an invalid request. | Major |
-| Slave Device Received Illegal Common Address | The destination device received an invalid request. | Major |
-| Slave Device Received Illegal Data Address Parameter | The destination device received an invalid request. | Major |
-| Slave Device Received Illegal Data Value Parameter | The destination device received an invalid request. | Major |
-| Slave Device Received Illegal Function Code | The destination device received an invalid request. | Major |
-| Slave Device Received Illegal Information Object Address | The destination device received an invalid request. | Major |
-| Unknown Object Sent to Outstation | The destination device received an invalid request. | Major |
-| Usage of a Reserved Function Code | The source device initiated an invalid request. | Major |
-| Usage of Improper Formatting by Outstation | The source device initiated an invalid request. | Warning |
-| Usage of Reserved Status Flags (IIN) | A DNP3 source device (outstation) used the reserved Internal Indicator 2.6. It's recommended to check the device's configuration. | Warning |
+| Title | Description | Severity | Category |
+|--|--|--|--|
+| Excessive Malformed Packets In a Single Session | An abnormal number of malformed packets sent from the source device to the destination device. This alert might indicate erroneous communications, or an attempt to manipulate the targeted device. | Major | Illegal Commands |
+| Firmware Update | A source device sent a command to update firmware on a destination device. Verify that recent programming, configuration and firmware upgrades made to the destination device are valid. | Warning | Firmware Change |
+| Function Code Not Supported by Outstation | The destination device received an invalid request. | Major | Illegal Commands |
+| Illegal BACNet message | The source device initiated an invalid request. | Major | Illegal Commands |
+| Illegal Connection Attempt on Port 0 | A source device attempted to connect to a destination device on port number zero (0). For TCP, port 0 is reserved and can't be used. For UDP, the port is optional and a value of 0 means no port. There's usually no service on a system that listens on port 0. This event may indicate an attempt to attack the destination device, or indicate that an application was programmed incorrectly. | Minor | Illegal Commands |
+| Illegal DNP3 Operation | The source device initiated an invalid request. | Major | Illegal Commands |
+| Illegal MODBUS Operation (Exception Raised by Master) | The source device initiated an invalid request. | Major | Illegal Commands |
+| Illegal MODBUS Operation (Function Code Zero) | The source device initiated an invalid request. | Major | Illegal Commands |
+| Illegal Protocol Version | The source device initiated an invalid request. | Major | Illegal Commands |
+| Incorrect Parameter Sent to Outstation | The destination device received an invalid request. | Major | Illegal Commands |
+| Initiation of an Obsolete Function Code (Initialize Data) | The source device initiated an invalid request. | Minor | Illegal Commands |
+| Initiation of an Obsolete Function Code (Save Config) | The source device initiated an invalid request. | Minor | Illegal Commands |
+| Master Requested an Application Layer Confirmation | The source device initiated an invalid request. | Warning | Illegal Commands |
+| Modbus Exception | A source device (secondary) returned an exception to a destination device (primary). | Major | Illegal Commands |
+| Slave Device Received Illegal ASDU Type | The destination device received an invalid request. | Major | Illegal Commands |
+| Slave Device Received Illegal Command Cause of Transmission | The destination device received an invalid request. | Major | Illegal Commands |
+| Slave Device Received Illegal Common Address | The destination device received an invalid request. | Major | Illegal Commands |
+| Slave Device Received Illegal Data Address Parameter | The destination device received an invalid request. | Major | Illegal Commands |
+| Slave Device Received Illegal Data Value Parameter | The destination device received an invalid request. | Major | Illegal Commands |
+| Slave Device Received Illegal Function Code | The destination device received an invalid request. | Major | Illegal Commands |
+| Slave Device Received Illegal Information Object Address | The destination device received an invalid request. | Major | Illegal Commands |
+| Unknown Object Sent to Outstation | The destination device received an invalid request. | Major | Illegal Commands |
+| Usage of a Reserved Function Code | The source device initiated an invalid request. | Major | Illegal Commands |
+| Usage of Improper Formatting by Outstation | The source device initiated an invalid request. | Warning | Illegal Commands |
+| Usage of Reserved Status Flags (IIN) | A DNP3 source device (outstation) used the reserved Internal Indicator 2.6. It's recommended to check the device's configuration. | Warning | Illegal Commands |
## Malware engine alerts Malware engine alerts describe detected malicious network activity.
-| Title | Description| Severity |
-|--|--|--|
-| Connection Attempt to Known Malicious IP | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major |
-| Invalid SMB Message (DoublePulsar Backdoor Implant) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical |
-| Malicious Domain Name Request | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major |
-| Malware Test File Detected - EICAR AV Success | An EICAR AV test file was detected in traffic between two devices (over any transport - TCP or UDP). The file isn't malware. It's used to confirm that the antivirus software is installed correctly. Demonstrate what happens when a virus is found, and check internal procedures and reactions when a virus is found. Antivirus software should detect EICAR as if it were a real virus. | Major |
-| Suspicion of Conficker Malware | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major |
-| Suspicion of Denial Of Service Attack | A source device attempted to initiate an excessive number of new connections to a destination device. This may indicate a Denial Of Service (DOS) attack against the destination device, and might interrupt device functionality, affect performance and service availability, or cause unrecoverable errors. | Critical |
-| Suspicion of Malicious Activity | Suspicious network activity was detected. This activity may be associated with an attack that triggered known 'Indicators of Compromise' (IOCs). Alert metadata should be reviewed by the security team. | Major |
-| Suspicion of Malicious Activity (BlackEnergy) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical |
-| Suspicion of Malicious Activity (DarkComet) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical |
-| Suspicion of Malicious Activity (Duqu) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical |
-| Suspicion of Malicious Activity (Flame) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical |
-| Suspicion of Malicious Activity (Havex) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical |
-| Suspicion of Malicious Activity (Karagany) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical |
-| Suspicion of Malicious Activity (LightsOut) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical |
-| Suspicion of Malicious Activity (Name Queries) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major |
-| Suspicion of Malicious Activity (Poison Ivy) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical |
-| Suspicion of Malicious Activity (Regin) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical |
-| Suspicion of Malicious Activity (Stuxnet) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical |
-| Suspicion of Malicious Activity (WannaCry) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major |
-| Suspicion of NotPetya Malware - Illegal SMB Parameters Detected | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical |
-| Suspicion of NotPetya Malware - Illegal SMB Transaction Detected | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical |
-| Suspicion of Remote Code Execution with PsExec | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major |
-| Suspicion of Remote Windows Service Management | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major |
-| Suspicious Executable File Detected on Endpoint | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major |
-| Suspicious Traffic Detected | Suspicious network activity was detected. This activity may be associated with an attack that triggered known 'Indicators of Compromise' (IOCs). Alert metadata should be reviewed by the security team | Critical |
-| Backup Activity with Antivirus Signatures | Traffic detected between the source device and the destination backup server triggered this alert. The traffic includes backup of antivirus software that might contain malware signatures. This is most likely legitimate backup activity. | Warning |
+| Title | Description| Severity | Category |
+|--|--|--|--|
+| Connection Attempt to Known Malicious IP | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malicious Activity |
+| Invalid SMB Message (DoublePulsar Backdoor Implant) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
+| Malicious Domain Name Request | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malicious Activity |
+| Malware Test File Detected - EICAR AV Success | An EICAR AV test file was detected in traffic between two devices (over any transport - TCP or UDP). The file isn't malware. It's used to confirm that antivirus software is installed correctly, to demonstrate what happens when a virus is found, and to check internal procedures and reactions when one is found. Antivirus software should detect EICAR as if it were a real virus. | Major | Suspicion of Malicious Activity |
+| Suspicion of Conficker Malware | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malware |
+| Suspicion of Denial Of Service Attack | A source device attempted to initiate an excessive number of new connections to a destination device. This may indicate a Denial Of Service (DOS) attack against the destination device, and might interrupt device functionality, affect performance and service availability, or cause unrecoverable errors. | Critical | Suspicion of Malicious Activity |
+| Suspicion of Malicious Activity | Suspicious network activity was detected. This activity may be associated with an attack that triggered known 'Indicators of Compromise' (IOCs). Alert metadata should be reviewed by the security team. | Major | Suspicion of Malicious Activity |
+| Suspicion of Malicious Activity (BlackEnergy) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
+| Suspicion of Malicious Activity (DarkComet) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
+| Suspicion of Malicious Activity (Duqu) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
+| Suspicion of Malicious Activity (Flame) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
+| Suspicion of Malicious Activity (Havex) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
+| Suspicion of Malicious Activity (Karagany) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
+| Suspicion of Malicious Activity (LightsOut) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
+| Suspicion of Malicious Activity (Name Queries) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malicious Activity |
+| Suspicion of Malicious Activity (Poison Ivy) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
+| Suspicion of Malicious Activity (Regin) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
+| Suspicion of Malicious Activity (Stuxnet) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
+| Suspicion of Malicious Activity (WannaCry) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malware |
+| Suspicion of NotPetya Malware - Illegal SMB Parameters Detected | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
+| Suspicion of NotPetya Malware - Illegal SMB Transaction Detected | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
+| Suspicion of Remote Code Execution with PsExec | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malicious Activity |
+| Suspicion of Remote Windows Service Management | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malicious Activity |
+| Suspicious Executable File Detected on Endpoint | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malicious Activity |
+| Suspicious Traffic Detected | Suspicious network activity was detected. This activity may be associated with an attack that triggered known 'Indicators of Compromise' (IOCs). Alert metadata should be reviewed by the security team. | Critical | Suspicion of Malicious Activity |
+| Backup Activity with Antivirus Signatures | Traffic detected between the source device and the destination backup server triggered this alert. The traffic includes backup of antivirus software that might contain malware signatures. This is most likely legitimate backup activity. | Warning | Backup |
## Operational engine alerts Operational engine alerts describe detected operational incidents, or malfunctioning entities.
-| Title | Description | Severity |
-|--|--|--|
-| An S7 Stop PLC Command was Sent | The source device sent a stop command to a destination controller. The controller will stop operating until a start command is sent. | Warning |
-| BACNet Operation Failed | A server returned an error code. This alert indicates a server error or an invalid request by a client. | Major |
-| Bad MMS Device State | An MMS Virtual Manufacturing Device (VMD) sent a status message. The message indicates that the server may not be configured correctly, partially operational, or not operational at all. | Major |
-| Change of Device Configuration | A configuration change was detected on a source device. | Minor |
-| Continuous Event Buffer Overflow at Outstation | A buffer overflow event was detected on a source device. The event may cause data corruption, program crashes, or execution of malicious code. | Major |
-| Controller Reset | A source device sent a reset command to a destination controller. The controller stopped operating temporarily and started again automatically. | Warning |
-| Controller Stop | The source device sent a stop command to a destination controller. The controller will stop operating until a start command is sent. | Warning |
-| Device Failed to Receive a Dynamic IP Address | The source device is configured to receive a dynamic IP address from a DHCP server but didn't receive an address. This indicates a configuration error on the device, or an operational error in the DHCP server. It's recommended to notify the network administrator of the incident | Major |
-| Device is Suspected to be Disconnected (Unresponsive) | A source device didn't respond to a command sent to it. It may have been disconnected when the command was sent. | Major |
-| EtherNet/IP CIP Service Request Failed | A server returned an error code. This indicates a server error or an invalid request by a client. | Major |
-| EtherNet/IP Encapsulation Protocol Command Failed | A server returned an error code. This indicates a server error or an invalid request by a client. | Major |
-| Event Buffer Overflow in Outstation | A buffer overflow event was detected on a source device. The event may cause data corruption, program crashes, or execution of malicious code. | Major |
-| Expected Backup Operation Did Not Occur | Expected backup/file transfer activity didn't occur between two devices. This alert may indicate errors in the backup / file transfer process. | Major |
-| GE SRTP Command Failure | A server returned an error code. This alert indicates a server error or an invalid request by a client. | Major |
-| GE SRTP Stop PLC Command was Sent | The source device sent a stop command to a destination controller. The controller will stop operating until a start command is sent. | Warning |
-| GOOSE Control Block Requires Further Configuration | A source device sent a GOOSE message indicating that the device needs commissioning. This means that the GOOSE control block requires further configuration and GOOSE messages are partially or completely non-operational. | Major |
-| GOOSE Dataset Configuration was Changed | A message (identified by protocol ID) dataset was changed on a source device. This means the device will report a different dataset for this message. | Warning |
-| Honeywell Controller Unexpected Status | A Honeywell Controller sent an unexpected diagnostic message indicating a status change. | Warning |
-|* HTTP Client Error | The source device initiated an invalid request. | Warning |
-| Illegal IP Address | System detected traffic between a source device and an IP address that is an invalid address. This may indicate wrong configuration or an attempt to generate illegal traffic. | Minor |
-| Master-Slave Authentication Error | The authentication process between a DNP3 source device (primary) and a destination device (outstation) failed. | Minor |
-| MMS Service Request Failed | A server returned an error code. This indicates a server error or an invalid request by a client. | Major |
-| No Traffic Detected on Sensor Interface | A sensor stopped detecting network traffic on a network interface. | Critical |
-| OPC UA Server Raised an Event That Requires User's Attention | An OPC UA server sent an event notification to a client. This type of event requires user attention | Major |
-| OPC UA Service Request Failed | A server returned an error code. This indicates a server error or an invalid request by a client. | Major |
-| Outstation Restarted | A cold restart was detected on a source device. This means the device was physically turned off and back on again. | Warning |
-| Outstation Restarts Frequently | An excessive number of cold restarts were detected on a source device. This means the device was physically turned off and back on again an excessive number of times. | Minor |
-| Outstation's Configuration Changed | A configuration change was detected on a source device. | Major |
-| Outstation's Corrupted Configuration Detected | This DNP3 source device (outstation) reported a corrupted configuration. | Major |
-| Profinet DCP Command Failed | A server returned an error code. This indicates a server error or an invalid request by a client. | Major |
-| Profinet Device Factory Reset | A source device sent a factory reset command to a Profinet destination device. The reset command clears Profinet device configurations and stops its operation. | Warning |
-| * RPC Operation Failed | A server returned an error code. This alert indicates a server error or an invalid request by a client. | Major |
-| Sampled Values Message Dataset Configuration was Changed | A message (identified by protocol ID) dataset was changed on a source device. This means the device will report a different dataset for this message. | Warning |
-| Slave Device Unrecoverable Failure | An unrecoverable condition error was detected on a source device. This kind of error usually indicates a hardware failure or failure to perform a specific command. | Major |
-| Suspicion of Hardware Problems in Outstation | An unrecoverable condition error was detected on a source device. This kind of error usually indicates a hardware failure or failure to perform a specific command. | Major |
-| Suspicion of Unresponsive MODBUS Device | A source device didn't respond to a command sent to it. It may have been disconnected when the command was sent. | Minor |
-| Traffic Detected on Sensor Interface | A sensor resumed detecting network traffic on a network interface. | Warning |
+| Title | Description | Severity | Category |
+|--|--|--|--|
+| An S7 Stop PLC Command was Sent | The source device sent a stop command to a destination controller. The controller will stop operating until a start command is sent. | Warning | Restart/ Stop Commands |
+| BACNet Operation Failed | A server returned an error code. This alert indicates a server error or an invalid request by a client. | Major | Command Failures |
+| Bad MMS Device State | An MMS Virtual Manufacturing Device (VMD) sent a status message. The message indicates that the server may not be configured correctly, partially operational, or not operational at all. | Major | Operational Issues |
+| Change of Device Configuration | A configuration change was detected on a source device. | Minor | Configuration Changes |
+| Continuous Event Buffer Overflow at Outstation | A buffer overflow event was detected on a source device. The event may cause data corruption, program crashes, or execution of malicious code. | Major | Buffer Overflow |
+| Controller Reset | A source device sent a reset command to a destination controller. The controller stopped operating temporarily and started again automatically. | Warning | Restart/ Stop Commands |
+| Controller Stop | The source device sent a stop command to a destination controller. The controller will stop operating until a start command is sent. | Warning | Restart/ Stop Commands |
+| Device Failed to Receive a Dynamic IP Address | The source device is configured to receive a dynamic IP address from a DHCP server but didn't receive an address. This indicates a configuration error on the device, or an operational error in the DHCP server. It's recommended to notify the network administrator of the incident. | Major | Command Failures |
+| Device is Suspected to be Disconnected (Unresponsive) | A source device didn't respond to a command sent to it. It may have been disconnected when the command was sent. | Major | Unresponsive |
+| EtherNet/IP CIP Service Request Failed | A server returned an error code. This indicates a server error or an invalid request by a client. | Major | Command Failures |
+| EtherNet/IP Encapsulation Protocol Command Failed | A server returned an error code. This indicates a server error or an invalid request by a client. | Major | Command Failures |
+| Event Buffer Overflow in Outstation | A buffer overflow event was detected on a source device. The event may cause data corruption, program crashes, or execution of malicious code. | Major | Buffer Overflow |
+| Expected Backup Operation Did Not Occur | Expected backup/file transfer activity didn't occur between two devices. This alert may indicate errors in the backup / file transfer process. | Major | Backup |
+| GE SRTP Command Failure | A server returned an error code. This alert indicates a server error or an invalid request by a client. | Major | Command Failures |
+| GE SRTP Stop PLC Command was Sent | The source device sent a stop command to a destination controller. The controller will stop operating until a start command is sent. | Warning | Restart/ Stop Commands |
+| GOOSE Control Block Requires Further Configuration | A source device sent a GOOSE message indicating that the device needs commissioning. This means that the GOOSE control block requires further configuration and GOOSE messages are partially or completely non-operational. | Major | Configuration Changes |
+| GOOSE Dataset Configuration was Changed | A message (identified by protocol ID) dataset was changed on a source device. This means the device will report a different dataset for this message. | Warning | Configuration Changes |
+| Honeywell Controller Unexpected Status | A Honeywell Controller sent an unexpected diagnostic message indicating a status change. | Warning | Operational Issues |
+|* HTTP Client Error | The source device initiated an invalid request. | Warning | Abnormal HTTP Communication Behavior |
+| Illegal IP Address | The system detected traffic between a source device and an invalid IP address. This may indicate a configuration error or an attempt to generate illegal traffic. | Minor | Abnormal Communication Behavior |
+| Master-Slave Authentication Error | The authentication process between a DNP3 source device (primary) and a destination device (outstation) failed. | Minor | Authentication |
+| MMS Service Request Failed | A server returned an error code. This indicates a server error or an invalid request by a client. | Major | Command Failures |
+| No Traffic Detected on Sensor Interface | A sensor stopped detecting network traffic on a network interface. | Critical | Sensor Traffic |
+| OPC UA Server Raised an Event That Requires User's Attention | An OPC UA server sent an event notification to a client. This type of event requires user attention. | Major | Operational Issues |
+| OPC UA Service Request Failed | A server returned an error code. This indicates a server error or an invalid request by a client. | Major | Command Failures |
+| Outstation Restarted | A cold restart was detected on a source device. This means the device was physically turned off and back on again. | Warning | Restart/ Stop Commands |
+| Outstation Restarts Frequently | An excessive number of cold restarts were detected on a source device. This means the device was physically turned off and back on again an excessive number of times. | Minor | Restart/ Stop Commands |
+| Outstation's Configuration Changed | A configuration change was detected on a source device. | Major | Configuration Changes |
+| Outstation's Corrupted Configuration Detected | This DNP3 source device (outstation) reported a corrupted configuration. | Major | Configuration Changes |
+| Profinet DCP Command Failed | A server returned an error code. This indicates a server error or an invalid request by a client. | Major | Command Failures |
+| Profinet Device Factory Reset | A source device sent a factory reset command to a Profinet destination device. The reset command clears Profinet device configurations and stops its operation. | Warning | Restart/ Stop Commands |
+| * RPC Operation Failed | A server returned an error code. This alert indicates a server error or an invalid request by a client. | Major | Command Failures |
+| Sampled Values Message Dataset Configuration was Changed | A message (identified by protocol ID) dataset was changed on a source device. This means the device will report a different dataset for this message. | Warning | Configuration Changes |
+| Slave Device Unrecoverable Failure | An unrecoverable condition error was detected on a source device. This kind of error usually indicates a hardware failure or failure to perform a specific command. | Major | Command Failures |
+| Suspicion of Hardware Problems in Outstation | An unrecoverable condition error was detected on a source device. This kind of error usually indicates a hardware failure or failure to perform a specific command. | Major | Operational Issues |
+| Suspicion of Unresponsive MODBUS Device | A source device didn't respond to a command sent to it. It may have been disconnected when the command was sent. | Minor | Unresponsive |
+| Traffic Detected on Sensor Interface | A sensor resumed detecting network traffic on a network interface. | Warning | Sensor Traffic |
\* The alert is disabled by default, but can be enabled. To enable the alert, navigate to the Support page, find the alert, and select **Enable**. You need administrative-level permissions to access the Support page.
defender-for-iot Hpe Proliant Dl360 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/hpe-proliant-dl360.md
Title: HPE ProLiant DL360 OT monitoring - Microsoft Defender for IoT description: Learn about the HPE ProLiant DL360 appliance when used for OT monitoring with Microsoft Defender for IoT. Previously updated : 04/24/2022 Last updated : 10/03/2022 # HPE ProLiant DL360
-This article describes the **HPE ProLiant DL360** appliance for OT sensors.
+This article describes the **HPE ProLiant DL360** appliance for OT sensors, customized for use with Microsoft Defender for IoT.
| Appliance characteristic |Details | |||
This article describes the **HPE ProLiant DL360** appliance for OT sensors.
|**Physical specifications** | Mounting: 1U<br>Ports: 15x RJ45 or 8x SFP (OPT)| |**Status** | Supported, Available preconfigured| -
-The following image shows a view of the HPE ProLiant Dl360 front panel:
--
-The following image shows a view of the HPE ProLiant Dl360 back panel:
+The following image shows the hardware elements on the HPE ProLiant DL360 back panel that are used by Defender for IoT:
:::image type="content" source="../media/tutorial-install-components/hpe-proliant-dl360-back-panel.png" alt-text="Photo of the HPE ProLiant DL360 back panel." border="false"::: + ## Specifications |Component |Specifications|
firewall Long Running Sessions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/long-running-sessions.md
Previously updated : 09/22/2022 Last updated : 10/03/2022
Azure Firewall is designed to be available and redundant. Every effort is made t
## Scenarios that impact long running TCP sessions The following scenarios can potentially drop long running TCP sessions:-- Scale down
+- Scale in
- Firewall maintenance - Idle timeout - Auto-recovery
-### Scale down
+### Scale in
-Azure Firewall scales up\down based on throughput and CPU usage. Scale down is performed by putting the VM instance in drain mode for 90 seconds before recycling the VM instance. Any long running connections remaining on the VM instance after 90 seconds will be disconnected.
+Azure Firewall scales in/out based on throughput and CPU usage. Scale in is performed by putting the VM instance in drain mode for 90 seconds before recycling the VM instance. Any long running connections remaining on the VM instance after 90 seconds will be disconnected.
### Firewall maintenance
healthcare-apis Deploy 02 New Button https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-02-new-button.md
+
+ Title: Deploy the MedTech service with a QuickStart template - Azure Health Data Services
+description: In this article, you'll learn how to deploy the MedTech service in the Azure portal using a QuickStart template.
++++ Last updated : 09/30/2022+++
+# Deploy the MedTech service with an Azure Resource Manager QuickStart template
+
+In this article, you'll learn how to deploy the MedTech service in the Azure portal using an Azure Resource Manager (ARM) quickstart template. This template is used with the **Deploy to Azure** button, which makes it easy to provide the information needed to automatically set up the infrastructure and configuration of your deployment. For more information about ARM templates, see [What are ARM templates?](../../azure-resource-manager/templates/overview.md).
+
+If you need to see a diagram with information on the MedTech service deployment, there is an architecture overview at [Choose a deployment method](deploy-iot-connector-in-azure.md#deployment-architecture-overview). This diagram shows the data flow steps of deployment and how MedTech service processes data into a Fast Healthcare Interoperability Resource (FHIR) Observation.
+
+There are four tasks you need to complete to deploy the MedTech service with the ARM template **Deploy to Azure** button:
+
+1. Fulfill the prerequisites.
+2. Select the **Deploy to Azure** button.
+3. Provide configuration details.
+4. Complete the required post-deployment tasks.
+
+## Prerequisites
+
+In order to begin deployment, you need to have the following prerequisites:
+
+- An active Azure subscription account. If you don't have an Azure subscription, see [Subscription decision guide](/azure/cloud-adoption-framework/decision-guides/subscriptions/).
+
+- Two resource providers registered with your Azure subscription: **Microsoft.HealthcareApis** and **Microsoft.EventHub**. To learn more about registering resource providers, see [Azure resource providers and types](../../azure-resource-manager/management/resource-providers-and-types.md).
+
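+If you want to check or register these providers from the command line, here's a minimal Azure CLI sketch; the provider namespaces come from the bullet above, and the commands are standard `az provider` usage:
+
+```azurecli
+# Check the current registration state of each provider
+az provider show --namespace Microsoft.HealthcareApis --query registrationState
+az provider show --namespace Microsoft.EventHub --query registrationState
+
+# Register any provider that reports "NotRegistered"
+az provider register --namespace Microsoft.HealthcareApis
+az provider register --namespace Microsoft.EventHub
+```
+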
+When you've fulfilled these two prerequisites, you are ready to begin the second task.
+
+## Deploy to Azure button
+
+Next, you need to select the ARM template **Deploy to Azure** button here:
+
+ [![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.healthcareapis%2Fworkspaces%2Fiotconnectors%2Fazuredeploy.json)
+
+This button calls a template from the Azure Quickstart Templates library to get information from your Azure subscription environment and begin deploying the MedTech service. If you prefer to script the deployment, a CLI sketch follows the resource list below.
+
+After you select the **Deploy to Azure** button, it may take a few minutes to implement the following resources and roles:
+
+- An Azure Event Hubs Namespace and device message Azure event hub. In this example, the event hub is named **devicedata**.
+
+- An Azure event hub consumer group. In this example, the consumer group is named **$Default**.
+
+- An Azure event hub sender role. In this example, the sender role is named **devicedatasender**.
+
+- An Azure Health Data Services workspace.
+
+- An Azure Health Data Services FHIR service.
+
+- An Azure Health Data Services MedTech service instance, including the necessary [system-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md) role assignments to the device message event hub (**Azure Event Hubs Data Receiver**) and the FHIR service (**FHIR Data Writer**).
+
+After these resources and roles have finished deploying, the Azure portal will be launched.
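+
+If you prefer to script this step instead of using the button, the same quickstart template can be deployed with the Azure CLI, as sketched below. The template URI is the one behind the button above; the resource group name and the `basename` and `location` parameter values are illustrative assumptions:
+
+```azurecli
+# Deploy the MedTech service quickstart template into an existing resource group
+az deployment group create \
+  --resource-group rg-medtechdemo \
+  --template-uri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.healthcareapis/workspaces/iotconnectors/azuredeploy.json" \
+  --parameters basename=azuredocsdemo location=eastus
+```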
+
+## Provide configuration details
+
+When the Azure portal screen appears, your next task is to fill out five fields that provide specific details of your deployment configuration.
++
+### Use these values to fill out the five fields
+
+- **Subscription** - Choose the Azure subscription you want to use for the deployment.
+
+- **Resource Group** - Choose an existing Resource Group or create a new Resource Group.
+
+- **Region** - The Azure region of the Resource Group used for the deployment. This field will auto-fill, based on the Resource Group region.
+
+- **Basename** - This value will be appended to the name of the Azure resources and services to be deployed.
+
+- **Location** - Use the drop-down list to select a supported Azure region for the Azure Health Data Services (the value could be the same or different region than your Resource Group).
+
+### When completed, do the following
+
+Don't change the **Device Mapping** and **Destination Mapping** default values at this time.
+
+Select the **Review + create** button after all the fields are filled out. This will review your input and check to see if all your values are valid.
+
+When the validation is successful, select the **Create** button to begin the deployment. After a brief wait, a message will appear telling you that your deployment is complete.
+
+## Required post-deployment tasks
+
+Now that the MedTech service is successfully deployed, there are three post-deployment tasks that must be completed before the MedTech service is fully functional and ready for use:
+
+1. First, you must provide a working device mapping. For more information, see [How to use device mappings](how-to-use-device-mappings.md).
+
+2. Second, you need to ensure that you have a working FHIR destination mapping. For more information, see [How to use FHIR destination mappings](how-to-use-fhir-mappings.md).
+
+3. Third, you must use a shared access signature (SAS) key (named **devicedatasender**) to connect your device or application to the MedTech service device message event hub (named **devicedata**). For more information, see [Connection string for a specific event hub in a namespace](../../event-hubs/event-hubs-get-connection-string.md#connection-string-for-a-specific-event-hub-in-a-namespace).
+
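+You can also retrieve that connection string with the Azure CLI. A minimal sketch, using the event hub and SAS policy names above with illustrative resource group and namespace names (substitute the names from your own deployment):
+
+```azurecli
+# Print the primary connection string for the devicedatasender policy
+az eventhubs eventhub authorization-rule keys list \
+  --resource-group rg-medtechdemo \
+  --namespace-name eh-azuredocsdemo \
+  --eventhub-name devicedata \
+  --name devicedatasender \
+  --query primaryConnectionString \
+  --output tsv
+```
+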
+> [!IMPORTANT]
+>
+> If you're going to allow access from multiple services to the device message event hub, it is highly recommended that each service has its own event hub consumer group. Consumer groups enable multiple consuming applications to each have a separate view of the event stream, and to read the stream independently at their own pace and with their own offsets. For more information, see [Consumer groups](../../event-hubs/event-hubs-features.md#consumer-groups).
+>
+> **Examples:**
+>
+> - Two MedTech services accessing the same device message event hub.
+> - A MedTech service and a storage writer application accessing the same device message event hub.
+
+## Next steps
+
+In this article, you learned how to deploy the MedTech service in the Azure portal using a Quickstart ARM template with a **Deploy to Azure** button. To learn more about other methods of deployment, see
+
+>[!div class="nextstepaction"]
+>[Choosing a method of deployment for MedTech service in Azure](deploy-iot-connector-in-azure.md)
+
+>[!div class="nextstepaction"]
+>[How to manually deploy MedTech service with Azure portal](deploy-03-new-manual.md)
+
+>[!div class="nextstepaction"]
+>[How to deploy MedTech service using an ARM template and Azure PowerShell or Azure CLI](deploy-08-new-ps-cli.md)
+
+FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Deploy 03 New Manual https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-03-new-manual.md
+
+ Title: Overview of manually deploying the MedTech service using the Azure portal - Azure Health Data Services
+description: In this article, you'll see an overview of how to manually deploy the MedTech service in the Azure portal.
++++ Last updated : 09/30/2022+++
+# How to manually deploy MedTech service using the Azure portal
+
+You may prefer to manually deploy MedTech service if you need to track every step of the developmental process. This might be necessary if you have to customize or troubleshoot your deployment. Manual deployment will help you by providing all the details for implementing each task.
+
+The explanation of MedTech service manual deployment using the Azure portal is divided into three parts that cover each of the key tasks required:
+
+- Prerequisites (see Prerequisites below)
+- Configuration (see [Configure for manual deployment](./deploy-05-new-config.md))
+- Deployment and Post Deployment (see [Manual deployment and post-deployment](./deploy-06-new-deploy.md))
+
+If you need a diagram with information on the MedTech service deployment, there is an architecture overview at [Choose a deployment method](deploy-iot-connector-in-azure.md#deployment-architecture-overview). This diagram shows the data flow steps of deployment and how MedTech service processes data into a Fast Healthcare Interoperability Resource (FHIR) Observation.
+
+## Prerequisites
+
+Before you can begin configuring to deploy MedTech services, you need to have the following five prerequisites:
+
+- A valid Azure subscription
+- A resource group deployed in the Azure portal
+- A workspace deployed in Azure Health Data Services
+- An event hub deployed in a namespace
+- FHIR service deployed in Azure Health Data Services
+
+## Open your Azure account
+
+The first thing you need to do is determine if you have a valid Azure subscription. If you don't have an Azure subscription, see [Subscription decision guide](/azure/cloud-adoption-framework/decision-guides/subscriptions/).
+
+## Deploy a resource group in the Azure portal
+
+When you sign in to your Azure account, go to the Azure portal and select the **Create a resource** button. Then enter "Azure Health Data Services" in the **Search services and marketplace** box. This should take you to the Azure Health Data Services page.
+
+## Deploy a workspace in Azure Health Data Services
+
+The first resource you must create is a workspace to contain your Azure Health Data Services resources. Start by selecting Create from the Azure Health Data Services resource page. This will take you to the first page of Create Azure Health Data Services workspace, where you need to complete the following eight steps:
+
+1. Fill in the resource group you want to use or create a new one.
+
+2. Give the workspace a unique name.
+
+3. Select the region you want to use.
+
+4. Select the Networking button at the bottom to continue.
+
+5. Choose whether you want a public or private endpoint.
+
+6. Create tags if you want to use them. They are optional.
+
+7. When you are ready to continue, select the Review + create tab.
+
+8. Select the Create button to deploy your workspace.
+
+After a short delay, you will start to see information about your new workspace. Make sure you wait until all parts of the screen are displayed. If your initial deployment was successful, you should see:
+
+- "Your deployment is complete"
+- Deployment name
+- Subscription name
+- Resource group name
+
+## Deploy an event hub in the Azure portal using a namespace
+
+An event hub is the next prerequisite you need to create. It's important because it receives the data flow from a medical device and stores it until the MedTech service can pick up the data and translate it into a FHIR service Observation resource. Because Internet propagation times are indeterminate, the event hub is needed to buffer the data, storing it for as much as 24 hours before it expires.
+
+Before you can create an event hub, you must create a namespace in the Azure portal to contain it. For more information on how to create a namespace and an event hub, see [Azure Event Hubs namespace and event hub deployed in the Azure portal](../../event-hubs/event-hubs-create.md).
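+
+If you'd rather script this prerequisite, here's a minimal Azure CLI sketch. The resource group, namespace, and event hub names are illustrative assumptions:
+
+```azurecli
+# Create the Event Hubs Namespace that will contain the event hub
+az eventhubs namespace create \
+  --resource-group rg-medtechdemo \
+  --name eh-azuredocsdemo \
+  --location eastus
+
+# Create the device message event hub with one day of message retention
+az eventhubs eventhub create \
+  --resource-group rg-medtechdemo \
+  --namespace-name eh-azuredocsdemo \
+  --name devicedata \
+  --message-retention 1
+```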
+
+## Deploy the FHIR service
+
+The last prerequisite you need to complete before you can configure and deploy the MedTech service is to deploy the FHIR service.
+
+There are three ways to deploy FHIR service:
+
+1. Using portal. See [Deploy a FHIR service within Azure Health Data Services - using portal](../fhir/fhir-portal-quickstart.md).
+
+2. Using Bicep. See [Deploy a FHIR service within Azure Health Data Services using Bicep](../fhir/fhir-service-bicep.md).
+
+3. Using an ARM template. See [Deploy a FHIR service within Azure Health Data Services - using ARM template](../fhir/fhir-service-resource-manager-template.md).
+
+After you have deployed FHIR service, it will be ready to receive the data processed by MedTech and persist it as a FHIR service Observation.
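+
+In addition to those three options, the FHIR service can be created from the command line. Treat this as a hedged sketch rather than a documented path: it assumes the `healthcareapis` Azure CLI extension is installed and reuses the illustrative resource group, workspace, and FHIR service names from this deployment series:
+
+```azurecli
+# Requires the extension: az extension add --name healthcareapis
+az healthcareapis workspace fhir-service create \
+  --resource-group rg-medtechdemo \
+  --workspace-name hdsazuredocsdemo \
+  --fhir-service-name fs-azuredocsdemo \
+  --kind fhir-R4 \
+  --location eastus
+```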
+
+## Next steps
+
+In this article, you learned about the prerequisites needed to deploy the MedTech service manually. When you have completed all the prerequisite requirements and your FHIR service is deployed, you are ready for the next step of manual deployment, see
+
+>[!div class="nextstepaction"]
+>[Configure the MedTech service for manual deployment using the Azure portal](deploy-05-new-config.md)
+
+FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Deploy 04 New Prereq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-04-new-prereq.md
+
+ Title: Prerequisites for deploying the MedTech service manually using the Azure portal - Azure Health Data Services
+description: In this article, you'll learn the prerequisites for manually deploying the MedTech service in the Azure portal.
++++ Last updated : 09/27/2022+++
+# Prerequisites for manually deploying the MedTech service using the Azure portal
+
+Before you can configure or deploy MedTech services, you need to have the following five prerequisites:
+
+- A valid Azure subscription
+- A resource group deployed in the Azure portal
+- A workspace deployed in Azure Health Data Services
+- An event hub deployed in a namespace
+- FHIR service deployed in Azure Health Data Services
+
+## Open your Azure account
+
+The first thing you need to do is determine if you have a valid Azure subscription. If you don't have an Azure subscription, see [Subscription decision guide](/azure/cloud-adoption-framework/decision-guides/subscriptions/).
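+
+A quick way to confirm that you're signed in with a working subscription is the Azure CLI:
+
+```azurecli
+# Sign in, then display the active subscription
+az login
+az account show --output table
+```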
+
+## Deploy a resource group in the Azure portal
+
+When you sign in to your Azure account, go to the Azure portal and select the **Create a resource** button. Then enter "Azure Health Data Services" in the **Search services and marketplace** box. This should take you to the Azure Health Data Services page.
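+
+If you prefer the command line, creating the resource group is a single Azure CLI command; the name and region below are illustrative:
+
+```azurecli
+# Create a resource group to hold the Azure Health Data Services resources
+az group create --name rg-medtechdemo --location eastus
+```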
+
+## Deploy a workspace in Azure Health Data Services
+
+The first resource you must create is a workspace to contain your Azure Health Data Services resources.
+
+Start by selecting Create from the Azure Health Data Services resource page. This will take you to the first page of Create Azure Health Data Services workspace, where you need to complete the following eight steps:
+
+1. Fill in the resource group you want to use or create a new one.
+
+2. Give the workspace a unique name.
+
+3. Select the region you want to use.
+
+4. Select the Networking button at the bottom to continue.
+
+5. Choose whether you want a public or private endpoint.
+
+6. Create tags if you want to use them.
+
+7. When you are ready to move forward, select the Review + create tab.
+
+8. Select the Create button to deploy your workspace.
+
+After a short delay, you will start to see information about your new workspace. Make sure you wait until all parts of the screen are displayed. If your initial deployment was successful, you should see:
+
+- "Your deployment is complete"
+- Deployment name
+- Subscription name
+- Resource group name
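+
+The workspace can also be created from the command line instead of the portal. A minimal sketch, assuming the `healthcareapis` Azure CLI extension is installed and using illustrative names:
+
+```azurecli
+# Requires the extension: az extension add --name healthcareapis
+az healthcareapis workspace create \
+  --resource-group rg-medtechdemo \
+  --name hdsazuredocsdemo \
+  --location eastus
+```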
+
+## Deploy an event hub in the Azure portal using a namespace
+
+An event hub is the next prerequisite you need to create. It's important because it receives the data flow from a medical device and stores it until the MedTech service can pick up the data and translate it into a FHIR service Observation resource. Because Internet propagation times are indeterminate, the event hub is needed to buffer the data, storing it for as much as 24 hours before it expires.
+
+Before you can create an event hub, you must create a namespace in the Azure portal to contain it. For more information on how to create a namespace and an event hub, see [Azure Event Hubs namespace and event hub deployed in the Azure portal](../../event-hubs/event-hubs-create.md).
+
+## Deploy the FHIR service
+
+The last thing you need to do before you can configure and deploy MedTech service is to deploy the FHIR service.
+
+There are three ways to deploy FHIR service:
+
+1. Using portal. See [Deploy a FHIR service within Azure Health Data Services - using portal](../fhir/fhir-portal-quickstart.md).
+
+2. Using Bicep. See [Deploy a FHIR service within Azure Health Data Services using Bicep](../fhir/fhir-service-bicep.md).
+
+3. Using an ARM template. See [Deploy a FHIR service within Azure Health Data Services - using ARM template](../fhir/fhir-service-resource-manager-template.md).
+
+After you have deployed FHIR service, it will be ready to take the data processed by MedTech and persist it as a FHIR service Observation.
+
+## Next steps
+
+In this article, you learned about the prerequisites needed to deploy the MedTech service manually. When you have completed all the prerequisite requirements and your FHIR service is deployed, you are ready for the next step of manual deployment, see
+
+>[!div class="nextstepaction"]
+>[Configure the MedTech service for manual deployment using the Azure portal](deploy-05-new-config.md)
+
+FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Deploy 05 New Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-05-new-config.md
+
+ Title: Configuring the MedTech service for deployment using the Azure portal - Azure Health Data Services
+description: In this article, you'll learn how to configure the MedTech service for manual deployment using the Azure portal.
++++ Last updated : 09/30/2022+++
+# Configure the MedTech service for manual deployment using the Azure portal
+
+Before you can manually deploy the MedTech service, you must complete the configuration tasks described in the following sections.
+
+## Set up the MedTech service configuration
+
+Start with these three steps to begin configuring the MedTech service so that it's ready to accept your configuration input on each tab:
+
+1. Start by going to the Health Data Services workspace you created in the manual deployment [Prerequisites](deploy-03-new-manual.md#prerequisites) section. Select the Create MedTech service box.
+
+2. This will take you to the **Add MedTech service** button. Select it.
+
+3. This will take you to the Create MedTech service page. This page has five tabs you need to fill out:
+
+- Basics
+- Device mapping
+- Destination mapping
+- Tags (optional)
+- Review + create
+
+## Configure the Basics tab
+
+Follow these six steps to fill in the Basics tab configuration:
+
+1. Enter the **MedTech service name**.
+
+ The **MedTech service name** is a friendly, unique name for your MedTech service. For this example, we'll name the MedTech service `mt-azuredocsdemo`.
+
+2. Enter the **Event Hubs Namespace**.
+
+ The Event Hubs Namespace is the name of the **Event Hubs Namespace** that you previously deployed. For this example, we'll use `eh-azuredocsdemo` with our MedTech service device messages.
+
+ > [!TIP]
+ >
+ > For information about deploying an Azure Event Hubs Namespace, see [Create an Event Hubs Namespace](../../event-hubs/event-hubs-create.md#create-an-event-hubs-namespace).
+ >
+ > For more information about Azure Event Hubs Namespaces, see [Namespace](../../event-hubs/event-hubs-features.md?WT.mc_id=Portal-Microsoft_Healthcare_APIs#namespace) in the Features and terminology in Azure Event Hubs document.
+
+3. Enter the **Event Hubs name**.
+
+ The Event Hubs name is the name of the event hub that you previously deployed within the Event Hubs Namespace. For this example, we'll use `devicedata` with our MedTech service device messages.
+
+ > [!TIP]
+ >
+ > For information about deploying an Azure event hub, see [Create an event hub](../../event-hubs/event-hubs-create.md#create-an-event-hub).
+
+4. Enter the **Consumer group**.
+
+ The Consumer group name is located by going to the **Overview** page of the Event Hubs Namespace and selecting the event hub to be used for the MedTech service device messages. In this example, the event hub is named `devicedata`.
+
+5. When you're inside the event hub, select the **Consumer groups** button under **Entities** to display the name of the consumer group to be used by your MedTech service.
+
+6. By default, a consumer group named **$Default** is created during the deployment of an event hub. Use this consumer group for your MedTech service deployment.
+
+ > [!IMPORTANT]
+ >
+ > If you're going to allow access from multiple services to the device message event hub, it is highly recommended that each service has its own event hub consumer group.
+ >
+ > Consumer groups enable multiple consuming applications to have a separate view of the event stream, and to read the stream independently at their own pace and with their own offsets. For more information, see [Consumer groups](../../event-hubs/event-hubs-features.md#consumer-groups).
+ >
+ > **Examples:**
+ >
+ > - Two MedTech services accessing the same device message event hub.
+ > - A MedTech service and a storage writer application accessing the same device message event hub.
+
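+If you decide to create a dedicated consumer group for your MedTech service instead of using **$Default**, here's a minimal Azure CLI sketch (resource names are illustrative):
+
+```azurecli
+# Create a dedicated consumer group on the device message event hub
+az eventhubs eventhub consumer-group create \
+  --resource-group rg-medtechdemo \
+  --namespace-name eh-azuredocsdemo \
+  --eventhub-name devicedata \
+  --name medtech-consumer-group
+```
+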
+The Basics tab should now look like this after you have filled it out:
+
+ :::image type="content" source="media\iot-deploy-manual-in-portal\select-device-mapping-button.png" alt-text="Screenshot of Basics tab filled out correctly." lightbox="media\iot-deploy-manual-in-portal\select-device-mapping-button.png":::
+
+You are now ready to select the Device mapping tab and begin setting up the connection from the medical device to the MedTech service.
+
+## Configure the Device mapping tab
+
+You need to configure device mapping so that your instance of the MedTech service can connect to the device you want to receive data from. The device data is first sent to your event hub instance and then picked up by the MedTech service.
+
+The easiest way to configure the Device mapping tab is to use the Internet of Medical Things (IoMT) Connector Data Mapper tool to visualize, edit, and test your device mapping. This open source tool is available from [IoMT Connector Data Mapper](https://github.com/microsoft/iomt-fhir/tree/master/tools/data-mapper).
+
+To begin configuring the device mapping tab, go to the Create MedTech service page and select the **Device mapping** tab. Then follow these three steps:
+
+1. Go to the IoMT Connector Data Mapper and get the appropriate JSON code.
+
+2. Return to the Create MedTech service page. Enter the JSON code for the template you want to use into the **Device mapping** tab (a sample mapping is sketched after these steps). After you enter it, the device mapping code will be displayed on the screen.
+
+3. If the device mapping code is correct, select the **Next: Destination >** button to enter the destination properties you want to use with your MedTech service. Your device configuration data will be saved for this session.
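+
+Here's a minimal sketch of a device mapping that normalizes a hypothetical heart-rate message. The structure follows the mapping template format, but the JSONPath expressions and payload field names (`heartrate`, `deviceid`, `measurementdatetime`) are assumptions you'd replace to match your device's actual messages:
+
+```json
+{
+  "templateType": "CollectionContent",
+  "template": [
+    {
+      "templateType": "JsonPathContent",
+      "template": {
+        "typeName": "heartrate",
+        "typeMatchExpression": "$..[?(@heartrate)]",
+        "deviceIdExpression": "$.deviceid",
+        "timestampExpression": "$.measurementdatetime",
+        "values": [
+          {
+            "required": "true",
+            "valueExpression": "$.heartrate",
+            "valueName": "hr"
+          }
+        ]
+      }
+    }
+  ]
+}
+```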
+
+For more information regarding device mappings, see the relevant GitHub open source documentation at [Device Content Mapping](https://github.com/microsoft/iomt-fhir/blob/master/docs/Configuration.md#device-content-mapping).
+
+For Azure docs information about device mapping, see [How to use Device mappings](how-to-use-device-mappings.md).
+
+## Configure the Destination tab
+
+You need to configure destination mapping so that your instance of the MedTech service can send and receive data to and from the FHIR service. The easiest way to do this is to use the IoMT Connector Data Mapper tool to visualize, edit, and test the destination mapping. This open source tool is available from [IoMT Connector Data Mapper](https://github.com/microsoft/iomt-fhir/tree/master/tools/data-mapper).
+
+To begin configuring destination mapping, go to the Create MedTech service page and select the **Destination** tab. There are two parts of the tab you must fill out:
+
+ 1. Destination properties
+ 2. JSON template request
+
+### Destination properties
+
+Under the **Destination** tab, use these values to enter the destination properties for your MedTech service instance:
+
+- First, enter the name of your **FHIR server** using the following four steps:
+
+ 1. The **FHIR Server** name (also known as the **FHIR service**) can be located by using the **Search** bar at the top of the screen.
+   1. To connect to your FHIR service instance, enter the name of the FHIR service you used in the manual deployment configuration article at [Deploy the FHIR service](deploy-04-new-prereq.md#deploy-the-fhir-service).
+   1. Then select the **Properties** button.
+   1. Next, copy and paste the **Name** string into the **FHIR Server** text field. In this example, the **FHIR Server** name is `fs-azuredocsdemo`.
+
+- Next, enter the **Destination Name**.
+
+   The **Destination Name** is a friendly name for the destination. Enter a unique name for your destination. In this example, the **Destination Name** is `fs-azuredocsdemo`.
+
+- Then, select the **Resolution Type**.
+
+   **Resolution Type** specifies how the MedTech service will resolve missing data when reading from the FHIR service. The MedTech service reads device and patient resources from the FHIR service using [device identifiers](https://www.hl7.org/fhir/device-definitions.html#Device.identifier) and [patient identifiers](https://www.hl7.org/fhir/patient-definitions.html#Patient.identifier).
+
+   Missing data can be resolved by choosing a **Resolution Type** of **Create** or **Lookup**:
+
+   - **Create**
+
+     If you selected **Create** and a device or patient resource is missing when data is being read, a new resource will be created that contains just the identifier.
+
+   - **Lookup**
+
+     If you selected **Lookup** and a device or patient resource is missing, an error will occur and the data won't be processed. A **DeviceNotFoundException** and/or a **PatientNotFoundException** error will be generated, depending on the type of resource that's missing.
+
+For more information regarding destination mapping, see the FHIR service GitHub documentation at [FHIR mapping](https://github.com/microsoft/iomt-fhir/blob/master/docs/Configuration.md#fhir-mapping).
+
+For Azure docs information about destination mapping, see [How to use FHIR destination mappings](how-to-use-fhir-mappings.md).
+
+### JSON template request
+
+Before you can complete the FHIR destination mapping, you must get a FHIR destination mapping code. Follow these four steps:
+
+1. Go to the [IoMT Connector Data Mapper Tool](https://github.com/microsoft/iomt-fhir/tree/master/tools/data-mapper) and get the JSON template for your FHIR destination.
+1. Go back to the Destination tab of the Create MedTech service page.
+1. Go to the large box below the boxes for FHIR server name, Destination name, and Resolution type. Enter the JSON template request in that box.
+1. You'll then receive the FHIR destination mapping code, which will be saved as part of your configuration. A sample mapping is sketched after these steps.
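+
+Here's a minimal sketch of a FHIR destination mapping that pairs with the device mapping example shown earlier. The structure follows the mapping template format; the LOINC code and unit are illustrative assumptions, and the `typeName` must match the `typeName` in your device mapping:
+
+```json
+{
+  "templateType": "CollectionFhir",
+  "template": [
+    {
+      "templateType": "CodeValueFhir",
+      "template": {
+        "typeName": "heartrate",
+        "value": {
+          "valueName": "hr",
+          "valueType": "Quantity",
+          "unit": "count/min"
+        },
+        "codes": [
+          {
+            "code": "8867-4",
+            "system": "http://loinc.org",
+            "display": "Heart rate"
+          }
+        ]
+      }
+    }
+  ]
+}
+```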
+
+## Configure the Tags tab (optional)
+
+Before you complete your configuration in the **Review + create** tab, you may want to configure tags. You can do this by selecting the **Next: Tags >** button.
+
+Tags are name and value pairs used for categorizing resources. This is an optional step that's useful when you have a lot of resources and want to sort them. For more information about tags, see [Use tags to organize your Azure resources and management hierarchy](../../azure-resource-manager/management/tag-resources.md).
+
+Follow these steps if you want to create tags:
+
+1. Under the **Tags** tab, enter the tag properties associated with the MedTech service.
+
+ - Enter a **Name**.
+ - Enter a **Value**.
+
+2. Once you've entered your tag(s), you are ready to do the last step of your configuration.
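+
+If you'd rather apply tags from the command line, here's a small Azure CLI sketch. The tag names, values, and resource ID placeholders are assumptions; also note that `az resource tag` replaces any tags already on the resource:
+
+```azurecli
+az resource tag \
+  --tags Department=Cardiology Environment=Demo \
+  --ids "/subscriptions/<SubscriptionId>/resourceGroups/<ResourceGroupName>/providers/Microsoft.HealthcareApis/workspaces/<WorkspaceName>/iotconnectors/<MedTechServiceName>"
+```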
+
+## Select the Review + create tab to validate your deployment request
+
+To begin the validation process of your MedTech service deployment, select the **Review + create** tab. There will be a short delay and then you should see a screen that displays a **Validation success** message. Below the message, you should see the following values for your deployment.
+
+**Basics**
+- MedTech service name
+- Event Hubs name
+- Consumer group
+- Event Hubs namespace
+
+**Destination**
+- FHIR server
+- Destination name
+- Resolution type
+
+Your validation screen should look something like this:
+
+ :::image type="content" source="media\iot-deploy-manual-in-portal\validate-and-review-medtech-service.png" alt-text="Screenshot of validation success with details displayed." lightbox="media\iot-deploy-manual-in-portal\validate-and-review-medtech-service.png":::
+
+If your MedTech service didn't validate, review the validation failure message, and troubleshoot the issue. Check all properties under each MedTech service tab that you've configured. Go back and try again.
+
+If your deployment request was successful, you're ready to go on to the next step, where you'll deploy your MedTech service instance.
+
+## Next steps
+
+In this article, you learned how to configure the MedTech service in preparation for deployment and how to ensure that everything is validated. To learn about deploying a validated MedTech service instance, see
+
+>[!div class="nextstepaction"]
+>[Manual deployment and post-deployment of MedTech service](deploy-06-new-deploy.md)
+
+FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Deploy 06 New Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-06-new-deploy.md
+
+ Title: Manual deployment and post-deployment of the MedTech service using the Azure portal - Azure Health Data Services
+description: In this article, you'll learn how to manually create a deployment and post-deployment of the MedTech service in the Azure portal.
++++ Last updated : 08/30/2022+++
+# Manual deployment and post-deployment of the MedTech service
+
+When you are satisfied with your configuration and it has been successfully validated, you can complete the deployment and post-deployment process.
+
+## Create your manual deployment
+
+1. Select the **Create** button to begin the deployment.
+
+2. The deployment process may take several minutes. The screen will display a message saying that your deployment is in progress.
+
+3. When Azure has finished deploying, a message will appear that says "Your deployment is complete" and will also display the following information:
+
+- Deployment name
+- Subscription
+- Resource group
+- Deployment details
+
+Your screen should look something like this:
+
+ :::image type="content" source="media\iot-deploy-manual-in-portal\created-medtech-service.png" alt-text="Screenshot of the MedTech service deployment completion." lightbox="media\iot-deploy-manual-in-portal\created-medtech-service.png":::
+
+## Manual Post-deployment requirements
+
+There are two post-deployment steps you must perform; otherwise, the MedTech service can't read device data from the device message event hub and can't read from or write to the FHIR service. These steps are:
+
+1. Grant access to the device message event hub.
+2. Grant access to the FHIR service.
+
+These two additional steps are needed because MedTech service uses [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md) and a [system-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md) for extra security and control of your MedTech service assets.
+
+### Grant access to the device message event hub
+
+Follow these steps to grant access to the device message event hub:
+
+1. In the **Search** bar at the top center of the Azure portal, enter and select the name of your **Event Hubs Namespace** that was previously created for your MedTech service device messages.
+
+2. Select the **Event Hubs** button under **Entities**.
+
+3. Select the event hub that will be used for your MedTech service device messages. For this example, the device message event hub is named `devicedata`.
+
+4. Select the **Access control (IAM)** button.
+
+5. Select the **Add role assignment** button.
+
+6. On the **Add role assignment** page, select the **View** button directly across from the **Azure Event Hubs Data Receiver** role. The Azure Event Hubs Data Receiver role allows the MedTech service to receive device message data from this event hub. For more information about application roles, see [Authentication & Authorization for Azure Health Data Services](.././authentication-authorization.md).
+
+7. Select the **Select role** button.
+
+8. Select the **Next** button.
+
+9. In the **Add role assignment** page, select **Managed identity** next to **Assign access to** and **+ Select members** next to **Members**.
+
+10. When the **Select managed identities** box opens, under the **Managed identity** box, select **MedTech service,** and find your MedTech service system-assigned managed identity under the **Select** box. Once the system-assigned managed identity for your MedTech service is found, select it, and then select the **Select** button.
+
+    The system-assigned managed identity name for your MedTech service is a concatenation of the workspace name and the name of your MedTech service, using the format: **"your workspace name"/"your MedTech service name"** or **"your workspace name"/iotconnectors/"your MedTech service name"**. For example: **azuredocsdemo/mt-azuredocsdemo** or **azuredocsdemo/iotconnectors/mt-azuredocsdemo**.
+
+11. On the **Add role assignment** page, select the **Review + assign** button.
+
+12. On the **Add role assignment** confirmation page, select the **Review + assign** button.
+
+13. After the role assignment has been successfully added to the event hub, a notification will display on your screen with a green check mark. This notification indicates that your MedTech service can now read from your device message event hub. It should look like this:
+
+ :::image type="content" source="media\iot-deploy-manual-in-portal\validate-medtech-service-managed-identity-added-to-event-hub.png" alt-text="Screenshot of the MedTech service system-assigned managed identity being successfully granted access to the event hub with a red box around the message." lightbox="media\iot-deploy-manual-in-portal\validate-medtech-service-managed-identity-added-to-event-hub.png":::
+
+For more information about authorizing access to Event Hubs resources, see [Authorize access with Azure Active Directory](../../event-hubs/authorize-access-azure-active-directory.md).
+
+### Grant access to the FHIR service
+
+The process for granting your MedTech service system-assigned managed identity access to your FHIR service requires the same 13 steps that you used to grant access to your device message event hub. The only exception is step 6: select the **View** button directly across from the **FHIR Data Writer** role instead of the button across from **Azure Event Hubs Data Receiver**.
+
+The **FHIR Data Writer** role provides read and write access to your FHIR service, which your MedTech service uses to access or persist data. Because the MedTech service is deployed as a separate resource, the FHIR service will receive requests from the MedTech service. If the FHIR service doesn't know who's making the request, it will deny the request as unauthorized.
+
+For more information about assigning roles to the FHIR service, see [Configure Azure Role-based Access Control (RBAC)](.././configure-azure-rbac.md).
+
+For more information about application roles, see [Authentication & Authorization for Azure Health Data Services](.././authentication-authorization.md).
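+
+If you script your environments, both role assignments can also be made with the Azure CLI instead of the portal. Here's a sketch under assumed placeholder names; you'd replace the resource IDs with your own:
+
+```azurecli
+# Look up the principal ID of the MedTech service's system-assigned managed identity.
+principalId=$(az resource show \
+  --ids "/subscriptions/<SubscriptionId>/resourceGroups/<ResourceGroupName>/providers/Microsoft.HealthcareApis/workspaces/<WorkspaceName>/iotconnectors/<MedTechServiceName>" \
+  --query identity.principalId --output tsv)
+
+# Grant read access to the device message event hub.
+az role assignment create --assignee "$principalId" \
+  --role "Azure Event Hubs Data Receiver" \
+  --scope "/subscriptions/<SubscriptionId>/resourceGroups/<ResourceGroupName>/providers/Microsoft.EventHub/namespaces/<NamespaceName>/eventhubs/devicedata"
+
+# Grant read/write access to the FHIR service.
+az role assignment create --assignee "$principalId" \
+  --role "FHIR Data Writer" \
+  --scope "/subscriptions/<SubscriptionId>/resourceGroups/<ResourceGroupName>/providers/Microsoft.HealthcareApis/workspaces/<WorkspaceName>/fhirservices/<FhirServiceName>"
+```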
+
+Now that you have granted access to the device message event hub and the FHIR service, your manual deployment is complete and MedTech service is ready to receive data from a medical device and process it into a FHIR Observation resource.
+
+## Next steps
+
+In this article, you learned how to perform the manual deployment and post-deployment steps to implement your MedTech service. To learn more about other methods of deployment, see
+
+>[!div class="nextstepaction"]
+>[Choosing a method of deployment for MedTech service in Azure](deploy-iot-connector-in-azure.md)
+
+>[!div class="nextstepaction"]
+>[Deploy the MedTech service with a QuickStart template](deploy-02-new-button.md)
+
+>[!div class="nextstepaction"]
+>[Using Azure PowerShell and Azure CLI to deploy the MedTech service using Azure Resource Manager templates](deploy-08-new-ps-cli.md)
+
+FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Deploy 07 New Post Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-07-new-post-deploy.md
+
+ Title: Post-deployment granting MedTech service access to the device message event hub and FHIR service
+description: In this article, you'll learn how to grant the MedTech service access to the device message event hub and the FHIR service.
++++ Last updated : 09/17/2022+++
+# Post-Deployment: Granting the MedTech service access to the device message event hub and FHIR service
+
+There are two final post-deployment steps you must complete before the MedTech service can operate fully:
+
+1. Grant access to the device message event hub.
+2. Grant access to the FHIR service.
+
+## Granting the MedTech service access to the device message event hub and FHIR service
+
+MedTech service uses [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md) and a [system-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md) for extra security and control of your MedTech service assets. Once the MedTech service is deployed, it needs to use system-assigned managed identity to access your device message event hub and your instance of the FHIR service. Without these steps, MedTech service can't read device data from the device message event hub, and it also can't read or write to the FHIR service.
+
+### Granting access to the device message event hub
+
+Follow these steps to grant access to the device message event hub:
+
+1. In the **Search** bar at the top center of the Azure portal, enter and select the name of your **Event Hubs Namespace** that was previously created for your MedTech service device messages.
+
+2. Select the **Event Hubs** button under **Entities**.
+
+3. Select the event hub that will be used for your MedTech service device messages. For this example, the device message event hub is named `devicedata`.
+
+4. Select the **Access control (IAM)** button.
+
+5. Select the **Add role assignment** button.
+
+6. On the **Add role assignment** page, select the **View** button directly across from the **Azure Event Hubs Data Receiver** role.
+
+ The Azure Event Hubs Data Receiver role allows the MedTech service that's being assigned this role to receive device message data from this event hub.
+
+ > [!TIP]
+ >
+ > For more information about application roles, see [Authentication & Authorization for Azure Health Data Services](.././authentication-authorization.md).
+
+7. Select the **Select role** button.
+
+8. Select the **Next** button.
+
+9. In the **Add role assignment** page, select **Managed identity** next to **Assign access to** and **+ Select members** next to **Members**.
+
+10. When the **Select managed identities** box opens, under the **Managed identity** box, select **MedTech service,** and find your MedTech service system-assigned managed identity under the **Select** box. Once the system-assigned managed identity for your MedTech service is found, select it, and then select the **Select** button.
+
+ > [!TIP]
+ >
+    > The system-assigned managed identity name for your MedTech service is a concatenation of the workspace name and the name of your MedTech service.
+ >
+ > **"your workspace name"/"your MedTech service name"** or **"your workspace name"/iotconnectors/"your MedTech service name"**
+ >
+ > For example:
+ >
+ > **azuredocsdemo/mt-azuredocsdemo** or **azuredocsdemo/iotconnectors/mt-azuredocsdemo**
+
+11. On the **Add role assignment** page, select the **Review + assign** button.
+
+12. On the **Add role assignment** confirmation page, select the **Review + assign** button.
+
+13. After the role assignment has been successfully added to the event hub, a notification will display on your screen with a green check mark. This notification indicates that your MedTech service can now read from your device message event hub.
+
+ :::image type="content" source="media\iot-deploy-manual-in-portal\validate-medtech-service-managed-identity-added-to-event-hub.png" alt-text="Screenshot of the MedTech service system-assigned managed identity being successfully granted access to the event hub with a red box around the message." lightbox="media\iot-deploy-manual-in-portal\validate-medtech-service-managed-identity-added-to-event-hub.png":::
+
+ > [!TIP]
+ >
+ > For more information about authorizing access to Event Hubs resources, see [Authorize access with Azure Active Directory](../../event-hubs/authorize-access-azure-active-directory.md).
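+
+To double-check the role assignment from the command line, here's a small Azure CLI sketch; the principal ID and resource ID placeholders are assumptions:
+
+```azurecli
+az role assignment list \
+  --assignee <PrincipalId> \
+  --scope "/subscriptions/<SubscriptionId>/resourceGroups/<ResourceGroupName>/providers/Microsoft.EventHub/namespaces/<NamespaceName>/eventhubs/devicedata" \
+  --output table
+```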
+
+### Granting access to the FHIR service
+
+The steps for granting your MedTech service system-assigned managed identity access to your FHIR service are the same steps that you took to grant access to your device message event hub. The only exception is that your MedTech service system-assigned managed identity will require **FHIR Data Writer** access instead of **Azure Event Hubs Data Receiver**.
+
+The **FHIR Data Writer** role provides read and write access to your FHIR service, which your MedTech service uses to access or persist data. Because the MedTech service is deployed as a separate resource, the FHIR service will receive requests from the MedTech service. If the FHIR service doesn't know who's making the request, it will deny the request as unauthorized.
+
+> [!TIP]
+>
+> For more information about assigning roles to the FHIR service, see [Configure Azure Role-based Access Control (RBAC)](.././configure-azure-rbac.md)
+>
+> For more information about application roles, see [Authentication & Authorization for Azure Health Data Services](.././authentication-authorization.md).
+
+## Next steps
+
+In this article, you learned how to perform post-deployment steps to make the MedTech service work properly. To learn more about other methods of deployment, see
+
+>[!div class="nextstepaction"]
+>[How to manually deploy MedTech service with Azure portal](deploy-03-new-manual.md)
+
+>[!div class="nextstepaction"]
+>[Deploy the MedTech service with a QuickStart template](deploy-02-new-button.md)
+
+To learn about choosing a deployment method for the MedTech service, see
+
+>[!div class="nextstepaction"]
+>[Choosing a method of deployment for MedTech service in Azure](deploy-iot-connector-in-azure.md)
+
+FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Deploy 08 New Ps Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-08-new-ps-cli.md
+
+ Title: Using Azure PowerShell and Azure CLI to deploy the MedTech service with Azure Resource Manager templates - Azure Health Data Services
+description: In this article, you'll learn how to use Azure PowerShell and Azure CLI to deploy the MedTech service using an Azure Resource Manager template.
++++ Last updated : 09/30/2022+++
+# Using Azure PowerShell and Azure CLI to deploy the MedTech service with Azure Resource Manager templates
+
+In this quickstart article, you'll learn how to use Azure PowerShell and Azure CLI to deploy the MedTech service using an Azure Resource Manager (ARM) template. Calling the template from PowerShell or the CLI adds automation that lets you distribute your deployment to a large number of developers and speeds up your deployment configuration in enterprise environments. For more information about ARM templates, see [What are ARM templates?](./../../azure-resource-manager/templates/overview.md).
+
+## Resources provided by the ARM template
+
+The ARM template will help you automatically configure and deploy the following resources. Each one can be modified to meet your deployment requirements.
+
+- Azure Event Hubs namespace and device message event hub (the device message event hub is named: **devicedata**).
+- Azure event hub consumer group (named **$Default**).
+- Azure event hub sender role (named **devicedatasender**).
+- Azure Health Data Services workspace.
+- Azure Health Data Services Fast Healthcare Interoperability Resources (FHIR&#174;) service.
+- Azure Health Data Services MedTech service. This resource includes setup for:
+  - [system-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md) access roles needed to read from the device message event hub (named **Azure Event Hubs Data Receiver**)
+ - system-assigned managed identity access roles needed to read and write to the FHIR service (named **FHIR Data Writer**)
+- An output file containing the ARM template deployment results (named **medtech_service_ARM_template_deployment_results.txt**). The file is located in the directory from which you ran the script.
+
+The ARM template used in this article is available from the [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/iotconnectors/) site using the **azuredeploy.json** file located on [GitHub](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.healthcareapis/workspaces/iotconnectors/azuredeploy.json).
+
+If you need to see a diagram with information on the MedTech service deployment, there is an architecture overview at [Choose a deployment method](deploy-iot-connector-in-azure.md#deployment-architecture-overview). This diagram shows the data flow steps of deployment and how MedTech service processes data into a Fast Healthcare Interoperability Resource (FHIR) Observation.
+
+## Azure PowerShell prerequisites
+
+Before you can begin, you need to have the following prerequisites if you're using Azure PowerShell:
+
+- An Azure account with an active subscription. If you don't have an Azure subscription, see [Subscription decision guide](/azure/cloud-adoption-framework/decision-guides/subscriptions/).
+
+- If you want to run the code locally, use [Azure PowerShell](/powershell/azure/install-az-ps).
+
+## Azure CLI prerequisites
+
+Before you can begin, you need to have the following prerequisites if you're using Azure CLI:
+
+- An Azure account with an active subscription. If you don't have an Azure subscription, see [Subscription decision guide](/azure/cloud-adoption-framework/decision-guides/subscriptions/).
+
+- If you want to run the code locally:
+
+  - Use a Bash shell (such as Git Bash, which is included in [Git for Windows](https://gitforwindows.org)).
+
+ - Use [Azure CLI](/cli/azure/install-azure-cli).
+
+## Deploy MedTech service with the ARM template and Azure PowerShell
+
+Complete the following five steps to deploy the MedTech service using Azure PowerShell:
+
+1. First you need to confirm the region you want to deploy in. See the [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=health-data-services) site for the current Azure regions where the Azure Health Data Services is supported.
+
+   You can also review the **location** section of the **azuredeploy.json** file on [GitHub](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.healthcareapis/workspaces/iotconnectors/azuredeploy.json) for Azure regions where the Azure Health Data Services is publicly available. If you need a list of Azure region location names, you can use this code to display a list:
+
+ ```azurepowershell
+ Get-AzLocation | Format-Table -Property DisplayName,Location
+ ```
+
+2. If the `Microsoft.EventHub` resource provider isn't already registered with your subscription, you can use this code to register it:
+
+ ```azurepowershell
+ Register-AzResourceProvider -ProviderNamespace Microsoft.EventHub
+ ```
+
+3. If the `Microsoft.HealthcareApis` resource provider isn't already registered with your subscription, you can use this code to register it:
+
+ ```azurepowershell
+ Register-AzResourceProvider -ProviderNamespace Microsoft.HealthcareApis
+ ```
+
+4. If you don't already have a resource group created for this quickstart, you can use this code to create one:
+
+ ```azurepowershell
+ New-AzResourceGroup -name <ResourceGroupName> -location <AzureRegion>
+ ```
+
+ For example: `New-AzResourceGroup -name ArmTestDeployment -location southcentralus`
+
+ > [!IMPORTANT]
+ >
+ > For a successful deployment of the MedTech service, you'll need to use numbers and lowercase letters for the basename of your resources. The minimum basename requirement is three characters with a maximum of 16 characters.
+
+5. Use the following code to deploy the MedTech service using the ARM template:
+
+ ```azurepowershell
+ New-AzResourceGroupDeployment -ResourceGroupName <ResourceGroupName> -TemplateUri https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.healthcareapis/workspaces/iotconnectors/azuredeploy.json -basename <BaseName> -location <AzureRegion> | Out-File medtech_service_ARM_template_deployment_results.txt
+ ```
+
+ For example: `New-AzResourceGroupDeployment -ResourceGroupName ArmTestDeployment -TemplateUri https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.healthcareapis/workspaces/iotconnectors/azuredeploy.json -basename abc123 -location southcentralus | Out-File medtech_service_ARM_template_deployment_results.txt`
+
+> [!NOTE]
+> If you want to run the Azure PowerShell commands locally, first enter `Connect-AzAccount` into your PowerShell command-line shell and enter your Azure credentials.
+
+## Deploy MedTech service with the ARM template and Azure CLI
+
+Complete the following five steps to deploy the MedTech service using Azure CLI:
+
+1. First you need to confirm the region you want to deploy in. See the [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=health-data-services) site for the current Azure regions where the Azure Health Data Services is supported.
+
+   You can also review the **location** section of the **azuredeploy.json** file on [GitHub](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.healthcareapis/workspaces/iotconnectors/azuredeploy.json) for Azure regions where the Azure Health Data Services is publicly available. If you need a list of Azure region location names, you can use this code to display a list:
+
+ ```azurecli
+ az account list-locations -o table
+ ```
+
+2. If the `Microsoft.EventHub` resource provider isn't already registered with your subscription, you can use this code to register it:
+
+ ```azurecli
+ az provider register --name Microsoft.EventHub
+ ```
+
+3. If the `Microsoft.HealthcareApis` resource provider isn't already registered with your subscription, you can use this code to register it:
+
+ ```azurecli
+ az provider register --name Microsoft.HealthcareApis
+ ```
+
+4. If you don't already have a resource group created for this quickstart, you can use this code to create one:
+
+ ```azurecli
+ az group create --resource-group <ResourceGroupName> --location <AzureRegion>
+ ```
+
+ For example: `az group create --resource-group ArmTestDeployment --location southcentralus`
+
+ > [!IMPORTANT]
+ >
+ > For a successful deployment of the MedTech service, you'll need to use numbers and lowercase letters for the basename of your resources.
+
+5. Use the following code to deploy the MedTech service using the ARM template:
+
+ ```azurecli
+    az deployment group create --resource-group <ResourceGroupName> --template-uri https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.healthcareapis/workspaces/iotconnectors/azuredeploy.json --parameters basename=<YourBaseName> location=<AzureRegion> | Out-File medtech_service_ARM_template_deployment_results.txt
+ ```
+
+ For example: `az deployment group create --resource-group ArmTestDeployment --template-uri https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.healthcareapis/workspaces/iotconnectors/azuredeploy.json --parameters basename=abc123 location=southcentralus | Out-File medtech_service_ARM_template_deployment_results.txt`
+
+> [!NOTE]
+> If you want to run the Azure CLI commands locally, first enter `az login` into your command-line shell and enter your Azure credentials.
+
+## Deployment completion
+
+The deployment takes a few minutes to complete. You can check the results of the ARM template deployment by reading the **medtech_service_ARM_template_deployment_results.txt** file.
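+
+You can also query the deployment state directly. Here's an Azure CLI sketch (with Azure PowerShell, `Get-AzResourceGroupDeployment` provides similar information); the resource group placeholder is an assumption:
+
+```azurecli
+az deployment group list \
+  --resource-group <ResourceGroupName> \
+  --query "[].{Name:name, State:properties.provisioningState}" \
+  --output table
+```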
+
+## Post-deployment mapping
+
+After you successfully deploy the MedTech service, you still need to provide a valid device mapping and a valid FHIR destination mapping. The device mapping connects the device message event hub to the MedTech service, and the FHIR destination mapping connects the MedTech service to the FHIR service. You must provide a device mapping or the MedTech service can't read device data from the device message event hub. You also must provide a FHIR destination mapping or the MedTech service can't read from or write to the FHIR service.
+
+To learn more about providing device mapping, see [How to use device mappings](how-to-use-device-mappings.md).
+
+To learn more about providing FHIR destination mapping, see [How to use the FHIR destination mappings](how-to-use-fhir-mappings.md).
+
+## Clean up Azure PowerShell resources
+
+When your resource group and deployed ARM template resources are no longer needed, delete the resource group, which deletes the resources in the resource group. Delete them with this code:
+
+```azurepowershell
+Remove-AzResourceGroup -Name <ResourceGroupName>
+```
+
+For example: `Remove-AzResourceGroup -Name ArmTestDeployment`
+
+## Clean up Azure CLI resources
+
+When your resource group and deployed ARM template resources are no longer needed, delete the resource group, which deletes the resources in the resource group. Delete them with this code:
+
+```azurecli
+az group delete --name <ResourceGroupName>
+```
+
+For example: `az group delete --name ArmTestDeployment`
+
+> [!TIP]
+>
+> For a step-by-step tutorial that guides you through the process of creating an ARM template, see [Tutorial: Create and deploy your first ARM template](../../azure-resource-manager/templates/template-tutorial-create-first-template.md).
+
+## Next steps
+
+In this article, you learned how to use Azure PowerShell and Azure CLI to deploy the MedTech service using an Azure Resource Manager (ARM) template. To learn more about other methods of deployment, see
+
+>[!div class="nextstepaction"]
+>[Choosing a method of deployment for MedTech service in Azure](deploy-iot-connector-in-azure.md)
+
+>[!div class="nextstepaction"]
+>[How to deploy the MedTech service with an Azure ARM QuickStart template](deploy-03-new-manual.md)
+
+>[!div class="nextstepaction"]
+>[How to manually deploy MedTech service with Azure portal](deploy-03-new-manual.md)
+
+FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Deploy Iot Connector In Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-iot-connector-in-azure.md
Title: Deploy the MedTech service using the Azure portal - Azure Health Data Services
-description: In this article, you'll learn how to deploy the MedTech service in the Azure portal using either a quickstart template or manual steps.
-
+ Title: Choosing a method of deployment for MedTech service in Azure - Azure Health Data Services
+description: In this article, you'll learn how to choose a method to deploy the MedTech service in Azure.
+ Previously updated : 08/02/2022-- Last updated : 09/30/2022+
-# Deploy the MedTech service using the Azure portal
+# Choose a deployment method
-In this quickstart, you'll learn how to deploy the MedTech service in the Azure portal using two different methods: with a [quickstart template](#deploy-the-medtech-service-with-a-quickstart-template) or [manually](#deploy-the-medtech-service-manually). The MedTech service will enable you to ingest data from Internet of Things (IoT) into your Fast Healthcare Interoperability Resources (FHIR&#174;) service.
+MedTech service provides multiple methods for deploying it into an Azure platform as a service (PaaS) configuration. Each method has different advantages that allow you to customize your development environment to suit your needs.
-> [!IMPORTANT]
->
-> You'll want to confirm that the **Microsoft.HealthcareApis** and **Microsoft.EventHub** resource providers have been registered with your Azure subscription for a successful deployment. To learn more about registering resource providers, see [Azure resource providers and types](/azure/azure-resource-manager/management/resource-providers-and-types)
+The different deployment methods are:
-## Deploy the MedTech service with a quickstart template
+- Azure ARM QuickStart template with Deploy to Azure button
+- Azure PowerShell and Azure CLI automation
+- Manual deployment
-If you already have an active Azure account, you can use this [![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.healthcareapis%2Fworkspaces%2Fiotconnectors%2Fazuredeploy.json) button to deploy a MedTech service that will include the following resources and roles:
+## Azure ARM QuickStart template with Deploy to Azure button
- * An Azure Event Hubs Namespace and device message Azure event hub (the event hub is named: **devicedata**).
- * An Azure event hub consumer group (the consumer group is named: **$Default**).
- * An Azure event hub sender role (the sender role is named: **devicedatasender**).
- * An Azure Health Data Services workspace.
- * An Azure Health Data Services FHIR service.
- * An Azure Health Data Services MedTech service including the necessary [system-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md) roles to the device message event hub (**Azure Events Hubs Receiver**) and FHIR service (**FHIR Data Writer**).
+Using a Quickstart template with Azure portal is the easiest and fastest deployment method because it automates most of your configuration with the touch of a **Deploy to Azure** button. This button automatically generates the following configurations and resources: managed identity RBAC roles, a provisioned workspace and namespace, an Event Hubs instance, a FHIR service instance, and a MedTech service instance. All you need to add are post-deployment device mapping, destination mapping, and a shared access policy key. This method simplifies your deployment, but does not allow for much customization.
-> [!TIP]
->
-> By using the drop down menus, you can find all the values that can be selected. You can also begin to type the value to begin the search for the resource, however, selecting the resource from the drop down menu will ensure that there are no typos.
->
-> :::image type="content" source="media\iot-deploy-quickstart-in-portal\display-drop-down-box.png" alt-text="Screenshot of Azure portal page displaying drop down menu example." lightbox="media\iot-deploy-quickstart-in-portal\display-drop-down-box.png":::
+For more information about the Quickstart template and the Deploy to Azure button, see [Deploy the MedTech service with a QuickStart template](deploy-02-new-button.md).
+## Azure PowerShell and Azure CLI automation
-1. When the Azure portal launches, the following fields must be filled out:
+Azure provides Azure PowerShell and Azure CLI to speed up your configuration work in enterprise environments. Deploying the MedTech service with Azure PowerShell or Azure CLI is useful for adding automation so that you can scale your deployment to a large number of developers. This method is more detailed, but the automation it provides gives you extra speed and efficiency.
- :::image type="content" source="media\iot-deploy-quickstart-in-portal\iot-deploy-quickstart-options.png" alt-text="Screenshot of Azure portal page displaying deployment options for the Azure Health Data Service MedTech service." lightbox="media\iot-deploy-quickstart-in-portal\iot-deploy-quickstart-options.png":::
+For more information about using an ARM template with Azure PowerShell and Azure CLI, see [Using Azure PowerShell and Azure CLI to deploy the MedTech service using Azure Resource Manager templates](deploy-08-new-ps-cli.md).
- * **Subscription** - Choose the Azure subscription you would like to use for the deployment.
- * **Resource Group** - Choose an existing Resource Group or create a new Resource Group.
- * **Region** - The Azure region of the Resource Group used for the deployment. This field will auto-fill based on the Resource Group region.
- * **Basename** - Will be used to append the name the Azure resources and services to be deployed.
- * **Location** - Use the drop-down list to select a supported Azure region for the Azure Health Data Services (could be the same or different region than your Resource Group).
+## Manual deployment
-2. Leave the **Device Mapping** and **Destination Mapping** fields with their default values.
+The manual deployment method uses the Azure portal to implement each deployment task individually. There are no shortcuts. Because you can see all the details of how each task in the sequence is completed, this procedure can be beneficial if you need to customize or troubleshoot your deployment process. It's the most complex method, but it provides valuable technical information and development options that let you fine-tune your deployment precisely.
-3. Select the **Review + create** button once the fields are filled out.
+For more information about manual deployment with the Azure portal, see [Overview of how to manually deploy the MedTech service using the Azure portal](deploy-03-new-manual.md).
-4. After the validation has passed, select the **Create** button to begin the deployment.
+## Deployment architecture overview
- :::image type="content" source="media\iot-deploy-quickstart-in-portal\iot-deploy-quickstart-create.png" alt-text="Screenshot of Azure portal page displaying validation box and Create button for the Azure Health Data Service MedTech service." lightbox="media\iot-deploy-quickstart-in-portal\iot-deploy-quickstart-create.png":::
+The following data-flow diagram outlines the basic steps of MedTech service deployment and shows how these steps fit together with its data processing procedures. This may help you analyze the options and determine which deployment method is best for you.
-5. After a successful deployment, there will be remaining configurations that will need to be completed by you for a fully functional MedTech service:
- * Provide a working device mapping. For more information, see [How to use device mappings](how-to-use-device-mappings.md).
- * Provide a working FHIR destination mapping. For more information, see [How to use FHIR destination mappings](how-to-use-fhir-mappings.md).
- * Use the Shared access policies (SAS) key (**devicedatasender**) for connecting your device or application to the MedTech service device message event hub (**devicedata**). For more information, see [Connection string for a specific event hub in a namespace](../../event-hubs/event-hubs-get-connection-string.md#connection-string-for-a-specific-event-hub-in-a-namespace).
- > [!IMPORTANT]
- >
- > If you're going to allow access from multiple services to the device message event hub, it is highly recommended that each service has its own event hub consumer group.
- >
- > Consumer groups enable multiple consuming applications to each have a separate view of the event stream, and to read the stream independently at their own pace and with their own offsets. For more information, see [Consumer groups](../../event-hubs/event-hubs-features.md#consumer-groups).
- >
- > **Examples:**
- > * Two MedTech services accessing the same device message event hub.
- > * A MedTech service and a storage writer application accessing the same device message event hub.
+There are six different steps in the MedTech service PaaS data flow, and only the first four apply to deployment. Every deployment method implements each of these four steps; however, the QuickStart template method automatically implements part of step 1 and all of step 2, while the other two methods implement all of the steps individually. Here's a summary of each of the four deployment steps:
-## Deploy the MedTech service manually
+### Step 1: Prerequisites
-## Prerequisites
+- Have an Azure subscription
+- Assign the Contributor and User Access Administrator (or Owner) RBAC roles. This step is done automatically in the QuickStart template method with the Deploy to Azure button, but it isn't included in the manual or PowerShell/CLI methods and needs to be implemented individually.
-It's important that you have the following prerequisites completed before you begin the steps of creating a MedTech service instance in Azure Health Data
+### Step 2: Provision
-* [Azure account](https://azure.microsoft.com/free/search/?OCID=AID2100131_SEM_c4b0772dc7df1f075552174a854fd4bc:G:s&ef_id=c4b0772dc7df1f075552174a854fd4bc:G:s&msclkid=c4b0772dc7df1f075552174a854fd4bc)
-* [Resource group deployed in the Azure portal](../../azure-resource-manager/management/manage-resource-groups-portal.md)
-* [Azure Event Hubs namespace and event hub deployed in the Azure portal](../../event-hubs/event-hubs-create.md)
-* [Workspace deployed in Azure Health Data Services](../healthcare-apis-quickstart.md)
-* [FHIR service deployed in Azure Health Data Services](../fhir/fhir-portal-quickstart.md)
+The QuickStart template method with the Deploy to Azure button automatically provides all these steps, but they aren't included in the manual or PowerShell/CLI methods and must be completed individually.
-> [!TIP]
->
-> By using the drop down menus, you can find all the values that can be selected. You can also begin to type the value to begin the search for the resource, however, selecting the resource from the drop down menu will ensure that there are no typos.
->
-> :::image type="content" source="media\iot-deploy-quickstart-in-portal\display-drop-down-box.png" alt-text="Screenshot of Azure portal page displaying drop down menu example." lightbox="media\iot-deploy-quickstart-in-portal\display-drop-down-box.png":::
->
+- Create a resource group and workspace for Event Hubs, FHIR, and MedTech services.
+- Provision an Event Hubs instance to a namespace.
+- Provision a FHIR service instance to the same workspace.
+- Provision a MedTech service instance in the same workspace.
-1. Sign into the [Azure portal](https://portal.azure.com), and then enter your Health Data Services workspace resource name in the **Search** bar field located at the middle top of your screen. The name of the workspace you'll be deploying into will be of your own choosing. For this example deployment of the MedTech service, we'll be using a workspace named `azuredocsdemo`.
-
- :::image type="content" source="media\iot-deploy-manual-in-portal\find-workspace-in-portal.png" alt-text="Screenshot of Azure portal and entering the workspace that will be used for the MedTech service deployment." lightbox="media\iot-deploy-manual-in-portal\find-workspace-in-portal.png":::
+### Step 3: Configure
-2. Select the **Deploy MedTech service** button.
+Each method needs to provide **all** these configuration details. They include:
- :::image type="content" source="media\iot-deploy-manual-in-portal\select-deploy-medtech-service-button.png" alt-text="Screenshot of Azure Health Data Services workspace with a red box around the Deploy MedTech service button." lightbox="media\iot-deploy-manual-in-portal\select-deploy-medtech-service-button.png":::
+- Configure MedTech service to ingest data from an event hub.
+- Configure device mapping properties.
+- Configure destination mappings to an Observation resource in the FHIR service.
+- When the prerequisites, provisioning, and configuration are complete, create and deploy MedTech service.
-3. Select the **Add MedTech service** button.
+### Step 4: Post-Deployment
- :::image type="content" source="media\iot-deploy-manual-in-portal\select-add-medtech-service-button.png" alt-text="Screenshot of workspace and red box round the Add MedTech service button." lightbox="media\iot-deploy-manual-in-portal\select-add-medtech-service-button.png":::
+Each method must add **all** these post-deployment tasks:
-## Configure the MedTech service to ingest data
-
-1. Under the **Basics** tab, complete the required fields under **MedTech service details** page section.
-
- :::image type="content" source="media\iot-deploy-manual-in-portal\deploy-medtech-service-basics.png" alt-text="Screenshot of create MedTech services basics information with red boxes around the required information." lightbox="media\iot-deploy-manual-in-portal\deploy-medtech-service-basics.png":::
-
- 1. Enter the **MedTech service name**.
-
- The **MedTech service name** is a friendly, unique name for your MedTech service. For this example, we'll name the MedTech service `mt-azuredocsdemo`.
-
- 2. Enter the **Event Hubs Namespace**.
-
- The Event Hubs Namespace is the name of the **Event Hubs Namespace** that you've previously deployed. For this example, we'll use `eh-azuredocsdemo` for use with our MedTech service device messages.
-
- > [!TIP]
- >
- > For information about deploying an Azure Event Hubs Namespace, see [Create an Event Hubs Namespace](../../event-hubs/event-hubs-create.md#create-an-event-hubs-namespace).
- >
- > For more information about Azure Event Hubs Namespaces, see [Namespace](../../event-hubs/event-hubs-features.md?WT.mc_id=Portal-Microsoft_Healthcare_APIs#namespace) in the Features and terminology in Azure Event Hubs document.
-
- 3. Enter the **Events Hubs name**.
-
- The Event Hubs name is the event hub that you previously deployed within the Event Hubs Namespace. For this example, we'll use `devicedata` for use with our MedTech service device messages.
-
- > [!TIP]
- >
- > For information about deploying an Azure event hub, see [Create an event hub](../../event-hubs/event-hubs-create.md#create-an-event-hub).
-
- 4. Enter the **Consumer group**.
-
- The Consumer group name is located by going to the **Overview** page of the Event Hubs Namespace and selecting the event hub to be used for the MedTech service device messages. In this example, the event hub is named `devicedata`.
-
- :::image type="content" source="media\iot-deploy-manual-in-portal\select-medtech-service-event-hub.png" alt-text="Screenshot of Event Hubs overview and red box around the event hub to be used for the MedTech service device messages." lightbox="media\iot-deploy-manual-in-portal\select-medtech-service-event-hub.png":::
-
- 5. Once inside of the event hub, select the **Consumer groups** button under **Entities** to display the name of the consumer group to be used by your MedTech service.
-
- :::image type="content" source="media\iot-deploy-manual-in-portal\select-event-hub-consumer-groups.png" alt-text="Screenshot of event hub overview and red box around the consumer groups button." lightbox="media\iot-deploy-manual-in-portal\select-event-hub-consumer-groups.png":::
-
- 6. By default, a consumer group named **$Default** is created during the deployment of an event hub. Use this consumer group for your MedTech service deployment.
-
- :::image type="content" source="media\iot-deploy-manual-in-portal\display-event-hub-consumer-group.png" alt-text="Screenshot of event hub consumer groups with red box around the consumer group to be used with the MedTech service." lightbox="media\iot-deploy-manual-in-portal\display-event-hub-consumer-group.png":::
-
- > [!IMPORTANT]
- >
- > If you're going to allow access from multiple services to the device message event hub, it is highly recommended that each service has its own event hub consumer group.
- >
- > Consumer groups enable multiple consuming applications to each have a separate view of the event stream, and to read the stream independently at their own pace and with their own offsets. For more information, see [Consumer groups](../../event-hubs/event-hubs-features.md#consumer-groups).
- >
- > Examples:
- > * Two MedTech services accessing the same device message event hub.
- > * A MedTech service and a storage writer application accessing the same device message event hub.
-
-2. Select **Next: Device mapping** button.
-
- :::image type="content" source="media\iot-deploy-manual-in-portal\select-device-mapping-button.png" alt-text="Screenshot of MedTech services basics information filled out and a red box around the Device mapping button." lightbox="media\iot-deploy-manual-in-portal\select-device-mapping-button.png":::
-
-## Configure the Device mapping properties
-
-> [!TIP]
->
-> The IoMT Connector Data Mapper is an open source tool to visualize the mapping configuration for normalizing a device's input data, and then transforming it into FHIR resources. You can use this tool to edit and test Device and FHIR destination mappings, and to export the mappings to be uploaded to a MedTech service in the Azure portal. This tool also helps you understand your device's Device and FHIR destination mapping configurations.
->
-> For more information regarding Device mappings, see our GitHub open source and Azure Docs documentation:
->
-> [IoMT Connector Data Mapper](https://github.com/microsoft/iomt-fhir/tree/master/tools/data-mapper)
->
-> [Device Content Mapping](https://github.com/microsoft/iomt-fhir/blob/master/docs/Configuration.md#device-content-mapping)
->
-> [How to use Device mappings](how-to-use-device-mappings.md)
-
-1. Under the **Device mapping** tab, enter the Device mapping JSON code for use with your MedTech service.
-
- :::image type="content" source="media\iot-deploy-manual-in-portal\configure-device-mapping-empty.png" alt-text="Screenshot of empty Device mapping page with red box around required information." lightbox="media\iot-deploy-manual-in-portal\configure-device-mapping-empty.png":::
-
-2. Once Device mapping is configured, select the **Next: Destination >** button to configure the destination properties associated with your MedTech service.
-
- :::image type="content" source="media\iot-deploy-manual-in-portal\configure-device-mapping-completed.png" alt-text="Screenshot of Device mapping page and the Destination button with red box around both." lightbox="media\iot-deploy-manual-in-portal\configure-device-mapping-completed.png":::
-
-## Configure Destination properties
-
-1. Under the **Destination** tab, enter the destination properties associated with your MedTech service.
-
- :::image type="content" source="media\iot-deploy-manual-in-portal\configure-destination-mapping-empty.png" alt-text="Screenshot of Destination mapping page with red box around required information." lightbox="media\iot-deploy-manual-in-portal\configure-destination-mapping-empty.png":::
-
- 1. Name of your **FHIR server**.
-
- The **FHIR Server** name (also known as the **FHIR service**) is located by using the **Search** bar at the top of the screen to go to the FHIR service that you've deployed and by selecting the **Properties** button. Copy and paste the **Name** string into the **FHIR Server** text field. In this example, the **FHIR Server** name is `fs-azuredocsdemo`.
-
- :::image type="content" source="media\iot-deploy-manual-in-portal\get-fhir-service-name.png" alt-text="Screenshot of the FHIR Server properties with a red box around the Properties button and FHIR service name."lightbox="media\iot-deploy-manual-in-portal\get-fhir-service-name.png":::
-
- 2. Enter the **Destination Name**.
-
- The **Destination Name** is a friendly name for the destination. Enter a unique name for your destination. In this example, the **Destination Name** is `fs-azuredocsdemo`.
-
- 3. Select **Create** or **Lookup** for the **Resolution Type**.
-
- > [!NOTE]
- >
- > For the MedTech service destination to create a valid observation resource in the FHIR service, a device resource and patient resource **must** exist in the FHIR service, so the observation can properly reference the device that created the data, and the patient the data was measured from. There are two modes the MedTech service can use to resolve the device and patient resources.
-
- **Create**
-
- The MedTech service destination attempts to retrieve a device resource from the FHIR service using the [device identifier](https://www.hl7.org/fhir/device-definitions.html#Device.identifier) included in the normalized message. It also attempts to retrieve a patient resource from the FHIR service using the [patient identifier](https://www.hl7.org/fhir/patient-definitions.html#Patient.identifier) included in the normalized message. If either resource isn't found, new resources will be created (device, patient, or both) containing just the identifier contained in the normalized message. When you use the **Create** option, both a device identifier and a patient identifier can be configured in the device mapping. In other words, when the MedTech service destination is in **Create** mode, it can function normally **without** adding device and patient resources to the FHIR service.
-
- **Lookup**
-
- The MedTech service destination attempts to retrieve a device resource from the FHIR service using the device identifier included in the normalized message. If the device resource isn't found, an error will occur, and the data won't be processed. For **Lookup** to function properly, a device resource with an identifier matching the device identifier included in the normalized message **must** exist and the device resource **must** have a reference to a patient resource that also exists. In other words, when the MedTech service destination is in the Lookup mode, device and patient resources **must** be added to the FHIR service before data can be processed. If the MedTech service attempts to look up resources that don't exist on the FHIR service, a **DeviceNotFoundException** and/or a **PatientNotFoundException** error(s) will be generated based on which resources aren't present.
-
- > [!TIP]
- >
- > For more information regarding Destination mappings, see our GitHub and Azure Docs documentation:
- >
- > [FHIR mapping](https://github.com/microsoft/iomt-fhir/blob/master/docs/Configuration.md#fhir-mapping).
- >
-   > [How to use FHIR destination mappings](how-to-use-fhir-mappings.md)
-
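To make the **Lookup** prerequisites concrete, here's a minimal, hypothetical FHIR Device resource that would satisfy the lookup. It assumes the identifier value matches the device identifier in the normalized message and that a Patient resource with id `patient-001` already exists; the system URL and ids are placeholders, not required values.

```json
{
  "resourceType": "Device",
  "identifier": [
    {
      "system": "https://example.com/devices",
      "value": "device-001"
    }
  ],
  "patient": {
    "reference": "Patient/patient-001"
  }
}
```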
-2. Under **Destination Mapping**, enter the JSON code inside the code editor.
-
- > [!TIP]
- >
- > For information about the Mapper Tool, see [IoMT Connector Data Mapper Tool](https://github.com/microsoft/iomt-fhir/tree/master/tools/data-mapper).
-
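As an illustration of the JSON entered in the code editor, the following is a hedged sketch of a FHIR destination mapping, loosely based on samples in the open-source [iomt-fhir](https://github.com/microsoft/iomt-fhir) configuration docs. The heart-rate LOINC code, value names, and periods are assumptions for this example, not required values.

```json
{
  "templateType": "CollectionFhirTemplate",
  "template": [
    {
      "templateType": "CodeValueFhir",
      "template": {
        "typeName": "heartrate",
        "codes": [
          {
            "code": "8867-4",
            "system": "http://loinc.org",
            "display": "Heart rate"
          }
        ],
        "value": {
          "valueName": "hr",
          "valueType": "SampledData",
          "defaultPeriod": 5000,
          "unit": "count/min"
        }
      }
    }
  ]
}
```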
-3. Select the **Review + create** button, or optionally select the **Next: Tags >** button if you want to configure tags.
-
- :::image type="content" source="media\iot-deploy-manual-in-portal\configure-destination-mapping-completed.png" alt-text="Screenshot of Destination mapping page with red box around both required information." lightbox="media\iot-deploy-manual-in-portal\configure-destination-mapping-completed.png":::
-
-## (Optional) Configure Tags
-
-Tags are name and value pairs used for categorizing resources. For more information about tags, see [Use tags to organize your Azure resources and management hierarchy](../../azure-resource-manager/management/tag-resources.md).
-
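If you script your deployments, tags can also be applied outside the portal. The following Azure CLI sketch is illustrative only; the resource ID and tag names are placeholders.

```azurecli
# Illustrative only: apply tags to an existing resource by its resource ID.
az resource tag \
  --ids <medtech-service-resource-id> \
  --tags environment=demo owner=health-team
```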
-1. Under the **Tags** tab, enter the tag properties associated with the MedTech service.
-
- :::image type="content" source="media\iot-deploy-manual-in-portal\optional-create-tags.png" alt-text="Screenshot of optional tags creation page with red box around both required information." lightbox="media\iot-deploy-manual-in-portal\optional-create-tags.png":::
-
- 1. Enter a **Name**.
- 2. Enter a **Value**.
-
-2. Once you've entered your tag(s), select the **Review + create** button.
-
-3. You should notice a **Validation success** message like what's shown in the image below.
-
- :::image type="content" source="media\iot-deploy-manual-in-portal\validate-and-review-medtech-service.png" alt-text="Screenshot of validation success and a red box around the Create button." lightbox="media\iot-deploy-manual-in-portal\validate-and-review-medtech-service.png":::
-
- > [!NOTE]
- >
- > If your MedTech service didn't validate, review the validation failure message, and troubleshoot the issue. It's recommended that you review the properties under each MedTech service tab that you've configured.
-
-## Create your MedTech service
-
-1. Select the **Create** button to begin the deployment of your MedTech service.
-
- :::image type="content" source="media\iot-deploy-manual-in-portal\create-medtech-service.png" alt-text="Screenshot of a red box around the Create button for the MedTech service." lightbox="media\iot-deploy-manual-in-portal\create-medtech-service.png":::
-
-2. The deployment status of your MedTech service will be displayed.
-
- :::image type="content" source="media\iot-deploy-manual-in-portal\deploy-medtech-service-status.png" alt-text="Screenshot of the MedTech service deployment status and a red box around deployment information." lightbox="media\iot-deploy-manual-in-portal\deploy-medtech-service-status.png":::
-
-3. Once your MedTech service is successfully deployed, select the **Go to resource** button to be taken to your MedTech service.
-
- :::image type="content" source="media\iot-deploy-manual-in-portal\created-medtech-service.png" alt-text="Screenshot of the MedTech service deployment status and a red box around Go to resource button." lightbox="media\iot-deploy-manual-in-portal\created-medtech-service.png":::
-
-4. Now that your MedTech service has been deployed, we're going to walk through the steps of assigning access roles. Your MedTech service's system-assigned managed identity will require access to your device message event hub and your FHIR service.
-
- :::image type="content" source="media\iot-deploy-manual-in-portal\display-medtech-service-configurations.png" alt-text="Screenshot of the MedTech service main configuration page." lightbox="media\iot-deploy-manual-in-portal\display-medtech-service-configurations.png":::
-
-## Granting the MedTech service access to the device message event hub and FHIR service
-
-To ensure that your MedTech service works properly, its system-assigned managed identity must be granted access via role assignments to your device message event hub and FHIR service.
+- Connect to services using device and destination mapping.
+- Use managed identity to grant access to the device message event hub.
+- Use managed identity to grant access to the FHIR service, enabling FHIR to receive data from the MedTech service.
+- Note: Only the ARM QuickStart method requires a shared access key for post-deployment.
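For readers who prefer scripting these post-deployment steps, a hedged Azure CLI sketch of the two role assignments might look like the following; the resource IDs are placeholders, and this flow is illustrative rather than the documented portal procedure.

```azurecli
# Illustrative sketch: look up the MedTech service's system-assigned
# managed identity, then grant it the two roles described below.
principalId=$(az resource show --ids <medtech-service-resource-id> \
  --query identity.principalId --output tsv)

# Allow the MedTech service to read device messages from the event hub.
az role assignment create --assignee-object-id $principalId \
  --assignee-principal-type ServicePrincipal \
  --role "Azure Event Hubs Data Receiver" \
  --scope <device-message-event-hub-resource-id>

# Allow the MedTech service to read and write data in the FHIR service.
az role assignment create --assignee-object-id $principalId \
  --assignee-principal-type ServicePrincipal \
  --role "FHIR Data Writer" \
  --scope <fhir-service-resource-id>
```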
### Granting access to the device message event hub
-1. In the **Search** bar at the top center of the Azure portal, enter and select the name of your **Event Hubs Namespace** that was previously created for your MedTech service device messages.
-
- :::image type="content" source="media\iot-deploy-manual-in-portal\search-for-event-hubs-namespace.png" alt-text="Screenshot of the Azure portal search bar with red box around the search bar and Azure Event Hubs Namespace." lightbox="media\iot-deploy-manual-in-portal\search-for-event-hubs-namespace.png":::
-
-2. Select the **Event Hubs** button under **Entities**.
-
- :::image type="content" source="media\iot-deploy-manual-in-portal\select-medtech-service-event-hubs-button.png" alt-text="Screenshot of the MedTech service Azure Event Hubs Namespace with red box around the Event Hubs button." lightbox="media\iot-deploy-manual-in-portal\select-medtech-service-event-hubs-button.png":::
-
-3. Select the event hub that will be used for your MedTech service device messages. For this example, the device message event hub is named `devicedata`.
-
- :::image type="content" source="media\iot-deploy-manual-in-portal\select-event-hub-for-device-messages.png" alt-text="Screenshot of the device message event hub with red box around the Access control (IAM) button." lightbox="media\iot-deploy-manual-in-portal\select-event-hub-for-device-messages.png":::
-
-4. Select the **Access control (IAM)** button.
-
- :::image type="content" source="media\iot-deploy-manual-in-portal\select-event-hub-access-control-iam-button.png" alt-text="Screenshot of event hub landing page and a red box around the Access control (IAM) button." lightbox="media\iot-deploy-manual-in-portal\select-event-hub-access-control-iam-button.png":::
-
-5. Select the **Add role assignment** button.
-
- :::image type="content" source="media\iot-deploy-manual-in-portal\select-event-hub-add-role-assignment-button.png" alt-text="Screenshot of the Access control (IAM) page and a red box around the Add role assignment button." lightbox="media\iot-deploy-manual-in-portal\select-event-hub-add-role-assignment-button.png":::
-
-6. On the **Add role assignment** page, select the **View** button directly across from the **Azure Event Hubs Data Receiver** role.
-
- :::image type="content" source="media\iot-deploy-manual-in-portal\event-hub-add-role-assignment-available-roles.png" alt-text="Screenshot of the Access control (IAM) page and a red box around the Azure Event Hubs Data Receiver text and View button." lightbox="media\iot-deploy-manual-in-portal\event-hub-add-role-assignment-available-roles.png":::
-
- The Azure Event Hubs Data Receiver role allows the MedTech service that's being assigned this role to receive device message data from this event hub.
-
- > [!TIP]
- >
- > For more information about application roles, see [Authentication & Authorization for Azure Health Data Services](.././authentication-authorization.md).
-
-7. Select the **Select role** button.
-
- :::image type="content" source="media\iot-deploy-manual-in-portal\event-hub-select-role-button.png" alt-text="Screenshot of the Azure Events Hubs Data Receiver role with a red box around the Select role button." lightbox="media\iot-deploy-manual-in-portal\event-hub-select-role-button.png":::
-
-8. Select the **Next** button.
-
- :::image type="content" source="media\iot-deploy-manual-in-portal\select-event-hub-roles-next-button.png" alt-text="Screenshot of the Azure Events Hubs Data Receiver role with a red box around the Next button." lightbox="media\iot-deploy-manual-in-portal\select-event-hub-roles-next-button.png":::
-
-9. On the **Add role assignment** page, select **Managed identity** next to **Assign access to** and **+ Select members** next to **Members**.
-
- :::image type="content" source="media\iot-deploy-manual-in-portal\select-event-hubs-managed-identity-and-members-buttons.png" alt-text="Screenshot of the Add role assignment page with a red box around the Managed identity and + Select members buttons." lightbox="media\iot-deploy-manual-in-portal\select-event-hubs-managed-identity-and-members-buttons.png":::
-
-10. When the **Select managed identities** box opens, under the **Managed identity** box, select **MedTech service**, and find your MedTech service system-assigned managed identity under the **Select** box. Once the system-assigned managed identity for your MedTech service is found, select it, and then select the **Select** button.
-
- > [!TIP]
- >
-   > The system-assigned managed identity name for your MedTech service is a concatenation of the workspace name and the name of your MedTech service.
- >
- > **"your workspace name"/"your MedTech service name"** or **"your workspace name"/iotconnectors/"your MedTech service name"**
- >
- > For example:
- >
- > **azuredocsdemo/mt-azuredocsdemo** or **azuredocsdemo/iotconnectors/mt-azuredocsdemo**
-
- :::image type="content" source="media\iot-deploy-manual-in-portal\select-medtech-service-mi-for-event-hub-access.png" alt-text="Screenshot of the Select managed identities page with a red box around the Managed identity drop-down box, the selected managed identity and the Select button." lightbox="media\iot-deploy-manual-in-portal\select-medtech-service-mi-for-event-hub-access.png":::
-
-11. On the **Add role assignment** page, select the **Review + assign** button.
-
- :::image type="content" source="media\iot-deploy-manual-in-portal\select-review-assign-for-event-hub-managed-identity-add.png" alt-text="Screenshot of the Add role assignment page with a red box around the Review + assign button." lightbox="media\iot-deploy-manual-in-portal\select-review-assign-for-event-hub-managed-identity-add.png":::
-
-12. On the **Add role assignment** confirmation page, select the **Review + assign** button.
-
- :::image type="content" source="media\iot-deploy-manual-in-portal\select-review-assign-for-event-hub-managed-identity-confirmation.png" alt-text="Screenshot of the Add role assignment confirmation page with a red box around the Review + assign button." lightbox="media\iot-deploy-manual-in-portal\select-review-assign-for-event-hub-managed-identity-confirmation.png":::
-
-13. After the role assignment has been successfully added to the event hub, a notification will display on your screen with a green check mark. This notification indicates that your MedTech service can now read from your device message event hub.
-
- :::image type="content" source="media\iot-deploy-manual-in-portal\validate-medtech-service-managed-identity-added-to-event-hub.png" alt-text="Screenshot of the MedTech service system-assigned managed identity being successfully granted access to the event hub with a red box around the message." lightbox="media\iot-deploy-manual-in-portal\validate-medtech-service-managed-identity-added-to-event-hub.png":::
-
- > [!TIP]
- >
- > For more information about authorizing access to Event Hubs resources, see [Authorize access with Azure Active Directory](../../event-hubs/authorize-access-azure-active-directory.md).
+For information about granting access to the device message event hub, see [Granting access to the device message event hub](deploy-07-new-post-deploy.md#granting-access-to-the-device-message-event-hub).
### Granting access to the FHIR service
-The steps for granting your MedTech service system-assigned managed identity access to your FHIR service are the same steps that you took to grant access to your device message event hub. The only exception is that your MedTech service system-assigned managed identity requires **FHIR Data Writer** access instead of **Azure Event Hubs Data Receiver**.
-
-The **FHIR Data Writer** role provides read and write access to your FHIR service, which your MedTech service uses to access or persist data. Because the MedTech service is deployed as a separate resource, the FHIR service will receive requests from the MedTech service. If the FHIR service doesn't know who's making the request, it will deny the request as unauthorized.
--
-> [!TIP]
->
-> For more information about assigning roles to the FHIR service, see [Configure Azure Role-based Access Control (RBAC)](.././configure-azure-rbac.md)
->
-> For more information about application roles, see [Authentication & Authorization for Azure Health Data Services](.././authentication-authorization.md).
+For information about granting access to the FHIR service, see [Granting access to the FHIR service](deploy-07-new-post-deploy.md#granting-access-to-the-fhir-service).
## Next steps
-In this article, you've learned how to deploy a MedTech service using the Azure portal. To learn more about how to troubleshoot your MedTech service or Frequently Asked Questions (FAQs) about the MedTech service, see
+In this article, you learned about the different types of deployment for MedTech service. To learn more about MedTech service, see
-> [!div class="nextstepaction"]
->
-> [Troubleshoot the MedTech service](iot-troubleshoot-guide.md)
->
-> [Frequently asked questions (FAQs) about the MedTech service](iot-connector-faqs.md)
+>[!div class="nextstepaction"]
+>[What is MedTech service?](iot-connector-overview.md)
FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Get Started With Iot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/get-started-with-iot.md
After you have fulfilled the prerequisites and provisioned your services, the ne
### Configuring MedTech service to ingest data
-MedTech service must be configured to ingest data it will receive from an event hub. First you must begin the official deployment process at the Azure portal. For more information about configuring MedTech service using the Azure portal, see [Deployment using the Azure portal](deploy-iot-connector-in-azure.md#prerequisites).
+MedTech service must be configured to ingest data it will receive from an event hub. First, you must begin the official deployment process in the Azure portal. For more information about deploying MedTech service using the Azure portal, see [Overview of how to manually deploy the MedTech service using the Azure portal](deploy-03-new-manual.md) and [Prerequisites for manually deploying the MedTech service using the Azure portal](deploy-04-new-prereq.md).
-Once you have starting using the portal and added MedTech service to your workspace, you must then configure MedTech service to ingest data from an event hub. For more information about configuring MedTech service to ingest data, see [Configure the MedTech service to ingest data](deploy-iot-connector-in-azure.md#configure-the-medtech-service-to-ingest-data).
+Once you have started using the portal and added MedTech service to your workspace, you must then configure MedTech service to ingest data from an event hub. For more information about configuring MedTech service to ingest data, see [Configure the MedTech service to ingest data](deploy-05-new-config.md).
### Configuring device mappings
You must configure MedTech to map it to the device you want to receive data from
- Azure Health Data Services provides an open source tool you can use called [IoMT Connector Data Mapper](https://github.com/microsoft/iomt-fhir/tree/main/tools/data-mapper) that will help you map your device's data structure to a form that MedTech can use. For more information on device content mapping, see [Device Content Mapping](https://github.com/microsoft/iomt-fhir/blob/main/docs/Configuration.md#device-content-mapping).
-- When you are deploying MedTech service, you must set specific device mapping properties. For more information on device mapping properties, see [Configure the Device mapping properties](deploy-iot-connector-in-azure.md#configure-the-device-mapping-properties).
+- When you are deploying MedTech service, you must set specific device mapping properties. For more information on device mapping properties, see [Configure the Device mapping properties](deploy-05-new-config.md).
### Configuring destination mappings

Once your device's data is properly mapped to your device's data format, you must then map it to an Observation in the FHIR service. For an overview of FHIR destination mappings, see [How to use the FHIR destination mappings](how-to-use-fhir-mappings.md).
-For step-by-step destination property mapping, see [Configure destination properties](deploy-iot-connector-in-azure.md#configure-destination-properties).
+For step-by-step destination property mapping, see [Configure destination properties](deploy-05-new-config.md).

### Create and deploy the MedTech service
-If you have completed the prerequisites, provisioning, and configuration, you are now ready to deploy the MedTech service. Create and deploy your MedTech service by following deployment the procedure at [Create your MedTech service](deploy-iot-connector-in-azure.md#create-your-medtech-service).
+If you have completed the prerequisites, provisioning, and configuration, you are now ready to deploy the MedTech service. Create and deploy your MedTech service by following the procedures at [Create your MedTech service](deploy-06-new-deploy.md).
## Step 4: Connect to required services (post deployment)
-When you complete the final [deployment procedure](deploy-iot-connector-in-azure.md#create-your-medtech-service) and don't get any errors, you must link MedTech service to an Event Hubs and the FHIR service. This will enable a connection from MedTech service to an Event Hubs instance and the FHIR service, so that data can flow smoothly from device to FHIR Observation. In order to do this, the Event Hubs instance for device message flow must be granted access via role assignment, so MedTech service can receive Event Hubs data. You must also grant access to The FHIR service via role assignments in order for MedTech to receive the data. There are two parts of the process to connect to required services.
+When you complete the final [deployment procedure](deploy-06-new-deploy.md) and don't get any errors, you must link MedTech service to an Event Hubs instance and the FHIR service. This enables a connection from MedTech service to an Event Hubs instance and the FHIR service, so that data can flow smoothly from device to FHIR Observation. To do this, the Event Hubs instance for device message flow must be granted access via role assignment, so MedTech service can receive Event Hubs data. You must also grant access to the FHIR service via role assignments in order for MedTech to receive the data. There are two parts of the process to connect to required services.
-For more information about granting access via role assignments, see [Granting the MedTech service access to the device message event hub and FHIR service](deploy-iot-connector-in-azure.md#granting-the-medtech-service-access-to-the-device-message-event-hub-and-fhir-service).
+For more information about granting access via role assignments, see [Granting the MedTech service access to the device message event hub and FHIR service](deploy-07-new-post-deploy.md#granting-the-medtech-service-access-to-the-device-message-event-hub-and-fhir-service).
### Granting access to the device message event hub
healthcare-apis Iot Connector Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/iot-connector-overview.md
After the PaaS deployment is completed, high-velocity and low-velocity patient m
### MedTech service
-When device data has been loaded into Event Hubs service, MedTech service is able to pick it up and convert it into a unified FHIR format in five stages.
+When the device data has been loaded into Event Hubs service, MedTech service can then process it in five stages to convert the data into a unified FHIR format.
These stages are:

1. **Ingest** - MedTech service asynchronously loads the device data from the event hub at very high speed.
-2. **Normalize** - After the data has been ingested, MedTech service uses device mapping to streamline and process it into a normalized schema format.
+2. **Normalize** - After the data has been ingested, MedTech service uses device mapping to streamline and translate it into a normalized schema format.
3. **Group** - The normalized data is then grouped by parameters to prepare it for the next stage of processing. The parameters are: device identity, measurement type, time period, and (optionally) correlation id.
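As a rough illustration of what the **Normalize** stage produces, a normalized message in the open-source [iomt-fhir](https://github.com/microsoft/iomt-fhir) project looks roughly like the following; the field names and casing are assumptions drawn from that project, and the values are invented.

```json
{
  "Type": "heartrate",
  "OccurrenceTimeUtc": "2022-09-03T10:15:00Z",
  "DeviceId": "device-001",
  "PatientId": "patient-001",
  "Properties": [
    {
      "Name": "hr",
      "Value": "78"
    }
  ]
}
```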
iot-edge How To Vs Code Develop Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-vs-code-develop-module.md
The [IoT Edge Dev Tool](https://github.com/Azure/iotedgedev) simplifies Azure Io
mkdir c:\dev\iotedgesolution ```
-1. Use the **iotedgedev init** command to create a solution and set up your Azure IoT Hub. Use the following command to create an IoT Edge solution for a specified development language.
+1. Use the **iotedgedev solution init** command to create a solution and set up your Azure IoT Hub. Use the following command to create an IoT Edge solution for a specified development language.
# [C](#tab/c) ```bash
- iotedgedev init --template c
+ iotedgedev solution init --template c
``` # [C\#](#tab/csharp) ```bash
- iotedgedev init --template csharp
+ iotedgedev solution init --template csharp
``` The solution includes a default C# module named *filtermodule*.
The [IoT Edge Dev Tool](https://github.com/Azure/iotedgedev) simplifies Azure Io
# [Azure Functions](#tab/azfunctions) ```bash
- iotedgedev init --template csharpfunction
+ iotedgedev solution init --template csharpfunction
``` # [Java](#tab/java) ```bash
- iotedgedev init --template java
+ iotedgedev solution init --template java
``` # [Node.js](#tab/node) ```bash
- iotedgedev init --template nodejs
+ iotedgedev solution init --template nodejs
``` # [Python](#tab/python) ```bash
- iotedgedev init --template python
+ iotedgedev solution init --template python
```
-The *iotedgedev init* script prompts you to complete several steps including:
+The *iotedgedev solution init* script prompts you to complete several steps including:
* Authenticate to Azure
* Choose an Azure subscription
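A hedged end-to-end sketch of the solution workflow, assuming the Python template and the iotedgedev CLI's documented commands, might look like:

```bash
# Illustrative only: scaffold and build an IoT Edge solution.
# Prompts and defaults may differ by iotedgedev version.
mkdir ~/iotedgesolution && cd ~/iotedgesolution
iotedgedev solution init --template python   # scaffold the solution
iotedgedev build                             # build the module images
```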
load-balancer Ipv6 Dual Stack Standard Internal Load Balancer Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/ipv6-dual-stack-standard-internal-load-balancer-powershell.md
The changes that make the above an internal load balancer front-end configuratio
If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 6.9.0 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
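For example, the version check and sign-in mentioned above can be run together:

```azurepowershell
# Verify the installed Az module version, then sign in to Azure.
Get-Module -ListAvailable Az
Connect-AzAccount
```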
-## Prerequisites
-Before you deploy a dual stack application in Azure, you must configure your subscription for this preview feature using the following Azure PowerShell:
-
-Register as follows:
-```azurepowershell
-Register-AzProviderFeature -FeatureName AllowIPv6VirtualNetwork -ProviderNamespace Microsoft.Network
-Register-AzProviderFeature -FeatureName AllowIPv6CAOnStandardLB -ProviderNamespace Microsoft.Network
-```
-It takes up to 30 minutes for feature registration to complete. You can check your registration status by running the following Azure PowerShell command:
-Check on the registration as follows:
-```azurepowershell
-Get-AzProviderFeature -FeatureName AllowIPv6VirtualNetwork -ProviderNamespace Microsoft.Network
-Get-AzProviderFeature -FeatureName AllowIPv6CAOnStandardLB -ProviderNamespace Microsoft.Network
-```
-After the registration is complete, run the following command:
-
-```azurepowershell
-Register-AzResourceProvider -ProviderNamespace Microsoft.Network
-```
-
## Create a resource group

Before you can create your dual-stack virtual network, you must create a resource group with [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup). The following example creates a resource group named *dsStd_ILB_RG* in the *east us* location:
load-testing Concept Load Testing Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/concept-load-testing-concepts.md
You now know the key concepts of Azure Load Testing to start creating a load tes
- Learn how [Azure Load Testing works](./overview-what-is-azure-load-testing.md#how-does-azure-load-testing-work).
- Learn how to [Create and run a load test for a website](./quickstart-create-and-run-load-test.md).
- Learn how to [Identify a performance bottleneck in an Azure application](./tutorial-identify-bottlenecks-azure-portal.md).
-- Learn how to [Set up continuous regression testing with Azure Pipelines](./tutorial-cicd-azure-pipelines.md).
+- Learn how to [Set up automated regression testing with CI/CD](./tutorial-identify-performance-regression-with-cicd.md).
load-testing How To Appservice Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-appservice-insights.md
In this section, you use [App Service diagnostics](../app-service/overview-diagn
- Learn how to [parameterize a load test](./how-to-parameterize-load-tests.md) with secrets. -- Learn how to [configure automated performance testing](./tutorial-cicd-azure-pipelines.md).
+- Learn how to [configure automated performance testing](./tutorial-identify-performance-regression-with-cicd.md).
load-testing How To Compare Multiple Test Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-compare-multiple-test-runs.md
When there's a performance issue, you can use the server-side metrics to analyze
- Learn more about [exporting the load test results for reporting](./how-to-export-test-results.md). - Learn more about [troubleshooting load test execution errors](./how-to-find-download-logs.md).-- Learn more about [configuring automated performance testing with Azure Pipelines](./tutorial-cicd-azure-pipelines.md).
+- Learn more about [configuring automated performance testing with CI/CD](./tutorial-identify-performance-regression-with-cicd.md).
load-testing How To Configure User Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-configure-user-properties.md
To add a user properties file to your load test by using the Azure portal, follo
If you run a load test within your CI/CD workflow, you add the user properties file to the source control repository. You then specify this properties file in the [load test configuration YAML file](./reference-test-config-yaml.md).
-For more information about running a load test in a CI/CD workflow, see the [Automated regression testing tutorial](./tutorial-cicd-azure-pipelines.md).
+For more information about running a load test in a CI/CD workflow, see the [Automated regression testing tutorial](./tutorial-identify-performance-regression-with-cicd.md).
To add a user properties file to your load test, follow these steps:
load-testing How To Create Manage Test https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-create-manage-test.md
You can perform the following actions:
## Next steps

- [Identify performance bottlenecks with Azure Load Testing in the Azure portal](./quickstart-create-and-run-load-test.md)
-- [Set up automated load testing with CI/CD in Azure Pipelines](./tutorial-cicd-azure-pipelines.md)
-- [Set up automated load testing with CI/CD in GitHub Actions](./tutorial-cicd-github-actions.md)
+- [Set up automated load testing with CI/CD](./tutorial-identify-performance-regression-with-cicd.md)
load-testing How To Define Test Criteria https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-define-test-criteria.md
When the CI/CD workflow runs the load test, the workflow status reflects the sta
- To learn how to parameterize a load test by using secrets, see [Parameterize a load test](./how-to-parameterize-load-tests.md). -- To learn about performance test automation, see [Configure automated performance testing](./tutorial-cicd-azure-pipelines.md).
+- To learn about performance test automation, see [Configure automated performance testing](./tutorial-identify-performance-regression-with-cicd.md).
load-testing How To Export Test Results https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-export-test-results.md
In this section, you'll retrieve and download the Azure Load Testing results fil
- Learn more about [Troubleshooting test execution errors](./how-to-find-download-logs.md). - For information about comparing test results, see [Compare multiple test results](./how-to-compare-multiple-test-runs.md).-- To learn about performance test automation, see [Configure automated performance testing](./tutorial-cicd-azure-pipelines.md).
+- To learn about performance test automation, see [Configure automated performance testing](./tutorial-identify-performance-regression-with-cicd.md).
load-testing How To High Scale Load https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-high-scale-load.md
If one or multiple instances show a high resource usage, it could impact the tes
## Next steps - For more information about comparing test results, see [Compare multiple test results](./how-to-compare-multiple-test-runs.md).-- To learn about performance test automation, see [Configure automated performance testing](./tutorial-cicd-azure-pipelines.md).
+- To learn about performance test automation, see [Configure automated performance testing](./tutorial-identify-performance-regression-with-cicd.md).
- More information about [service limits and quotas in Azure Load Testing](./resource-limits-quotas-capacity.md).
load-testing How To Monitor Server Side Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-monitor-server-side-metrics.md
When you update the configuration of a load test, all future test runs will use
- Learn how to [set up a high-scale load test](./how-to-high-scale-load.md). -- Learn how to [configure automated performance testing](./tutorial-cicd-azure-pipelines.md).
+- Learn how to [configure automated performance testing](./tutorial-identify-performance-regression-with-cicd.md).
load-testing How To Parameterize Load Tests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-parameterize-load-tests.md
The values of the parameters aren't stored when they're passed from the CI/CD wo
- For information about high-scale load tests, see [Set up a high-scale load test](./how-to-high-scale-load.md). -- To learn about performance test automation, see [Configure automated performance testing](./tutorial-cicd-azure-pipelines.md).
+- To learn about performance test automation, see [Configure automated performance testing](./tutorial-identify-performance-regression-with-cicd.md).
load-testing How To Read Csv Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-read-csv-data.md
To add a CSV file to your load test by using the Azure portal:
::: zone pivot="experience-pipelines,experience-ghactions"
-If you run a load test within your CI/CD workflow, you can add a CSV file to the test configuration YAML file. For more information about running a load test in a CI/CD workflow, see the [Automated regression testing tutorial](./tutorial-cicd-azure-pipelines.md).
+If you run a load test within your CI/CD workflow, you can add a CSV file to the test configuration YAML file. For more information about running a load test in a CI/CD workflow, see the [Automated regression testing tutorial](./tutorial-identify-performance-regression-with-cicd.md).
To add a CSV file to your load test:
To configure your load test to split input CSV files:
- For information about high-scale load tests, see [Set up a high-scale load test](./how-to-high-scale-load.md). -- To learn about performance test automation, see [Configure automated performance testing](./tutorial-cicd-azure-pipelines.md).
+- To learn about performance test automation, see [Configure automated performance testing](./tutorial-identify-performance-regression-with-cicd.md).
load-testing How To Test Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-test-private-endpoint.md
You can specify the VNET configuration settings in the load test creation/update
:::image type="content" source="media/how-to-test-private-endpoint/edit-test.png" alt-text="Screenshot that shows the Tests page, highlighting the button for editing a test.":::
-1. Review or fill the load test information. Follow these steps to [create or manage a test](./how-to-create-manage-test.md).
- 1. On the **Load** tab, select **Private** traffic mode, and then select your virtual network and subnet. If you have multiple subnets in your virtual network, make sure to select the subnet that will host the injected test engine VMs. :::image type="content" source="media/how-to-test-private-endpoint/create-new-test-load-vnet.png" alt-text="Screenshot that shows the Load tab for creating or updating a load test.":::
+ > [!IMPORTANT]
+ > Make sure you have sufficient permissions for managing virtual networks. You require the [Network Contributor](/azure/role-based-access-control/built-in-roles#network-contributor) role.
+
+1. Review or fill the load test information. Follow these steps to [create or manage a test](./how-to-create-manage-test.md).
+ 1. Select **Review + create** and then **Create** (or **Apply**, when updating an existing test). When the load test starts, Azure Load Testing injects the test engine VMs in your virtual network and subnet. The test script can now access the privately hosted application endpoint in your VNET.
To configure the load test with your virtual network settings, update the [YAML
For more information about the YAML configuration, see [test configuration YAML reference](./reference-test-config-yaml.md).
+ > [!IMPORTANT]
+ > Make sure you have sufficient permissions for managing virtual networks. You require the [Network Contributor](/azure/role-based-access-control/built-in-roles#network-contributor) role.
+ 1. Save the YAML configuration file, and commit your changes to the source code repository. 1. After the CI/CD workflow triggers, your load test starts, and can now access the privately hosted application endpoint in your VNET. ## Troubleshooting
+### Creating or updating the load test fails with `Subnet ID passed is invalid`
+
+To configure a load test in a virtual network, you must have sufficient permissions for managing virtual networks. You require the [Network Contributor](/azure/role-based-access-control/built-in-roles#network-contributor) role, or a parent of this role, on the virtual network. See [Check access for a user to Azure resources](/azure/role-based-access-control/check-access) to verify your permissions.
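A quick, illustrative way to check for the role on the virtual network scope (the IDs are placeholders):

```azurecli
# Illustrative only: list role assignments for your identity on the VNET.
az role assignment list \
  --assignee <your-user-or-service-principal-id> \
  --scope <virtual-network-resource-id> \
  --output table
```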
### Starting the load test fails with `Test cannot be started`

To start a load test, you must have sufficient permissions to deploy Azure Load Testing to the virtual network. You require the [Network Contributor](/azure/role-based-access-control/built-in-roles#network-contributor) role, or a parent of this role, on the virtual network. See [Check access for a user to Azure resources](/azure/role-based-access-control/check-access) to verify your permissions.
load-testing Overview What Is Azure Load Testing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/overview-what-is-azure-load-testing.md
You might also [download the test results](./how-to-export-test-results.md) for
You can integrate Azure Load Testing in your CI/CD pipeline at meaningful points during the development lifecycle. For example, you could automatically run a load test at the end of each sprint or in a staging environment to validate a release candidate build.
-Get started with [adding load testing to your Azure Pipelines CI/CD workflow](./tutorial-cicd-azure-pipelines.md) or use our [Azure Load Testing GitHub action](./tutorial-cicd-github-actions.md).
+Get started with [adding load testing to your CI/CD workflow](./tutorial-identify-performance-regression-with-cicd.md) to quickly identify performance degradation of your application under load.
In the test configuration, you [specify pass/fail rules](./how-to-define-test-criteria.md) to catch performance regressions early in the development cycle. For example, when the average response time exceeds a threshold, the test should fail.
Data stored in your Azure Load Testing resource is automatically encrypted with
Start using Azure Load Testing: - [Quickstart: Load test an existing web application](./quickstart-create-and-run-load-test.md). - [Tutorial: Use a load test to identify performance bottlenecks](./tutorial-identify-bottlenecks-azure-portal.md).-- [Tutorial: Set up automated load testing](./tutorial-cicd-azure-pipelines.md).
+- [Tutorial: Set up automated load testing](./tutorial-identify-performance-regression-with-cicd.md).
- Learn about the [key concepts for Azure Load Testing](./concept-load-testing-concepts.md).
load-testing Reference Test Config Yaml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/reference-test-config-yaml.md
keyVaultReferenceIdentity: /subscriptions/abcdef01-2345-6789-0abc-def012345678/r
## Next steps -- Learn how to build [automated regression testing in your CI/CD workflow](tutorial-cicd-azure-pipelines.md).
+- Learn how to build [automated regression testing in your CI/CD workflow](./tutorial-identify-performance-regression-with-cicd.md).
- Learn how to [parameterize load tests with secrets and environment variables](./how-to-parameterize-load-tests.md). - Learn how to [load test secured endpoints](./how-to-test-secured-endpoints.md).
load-testing Resource Limits Quotas Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/resource-limits-quotas-capacity.md
To raise the limit or quota above the default limit, [open an online customer su
## Next steps - Learn how to [set up a high-scale load test](./how-to-high-scale-load.md).-- Learn how to [configure automated performance testing](./tutorial-cicd-azure-pipelines.md).
+- Learn how to [configure automated performance testing](./tutorial-identify-performance-regression-with-cicd.md).
load-testing Tutorial Cicd Azure Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/tutorial-cicd-azure-pipelines.md
- Title: 'Tutorial: Identify performance regressions with Azure Load Testing and Azure Pipelines'-
-description: 'In this tutorial, you learn how to automate performance regression testing by using Azure Load Testing and Azure Pipelines CI/CD workflows.'
---- Previously updated : 03/28/2022-
-#Customer intent: As an Azure user, I want to learn how to automatically test builds for performance regressions on every merge request and/or deployment by using Azure Pipelines.
--
-# Tutorial: Identify performance regressions with Azure Load Testing Preview and Azure Pipelines
-
-This tutorial describes how to automate performance regression testing by using Azure Load Testing Preview and Azure Pipelines. You'll set up an Azure Pipelines CI/CD workflow to deploy a sample Node.js application on Azure and trigger a load test using the [Azure Load Testing task](/azure/devops/pipelines/tasks/test/azure-load-testing). Once the load test finishes, you'll use the Azure Load Testing dashboard to identify performance issues.
-
-You'll deploy a sample Node.js web app on Azure App Service. The web app uses Azure Cosmos DB for storing the data. The sample application also contains an Apache JMeter script to load test three APIs.
-
-If you're using GitHub Actions for your CI/CD workflows, see the corresponding [GitHub Actions tutorial](./tutorial-cicd-github-actions.md).
-
-Learn more about the [key concepts for Azure Load Testing](./concept-load-testing-concepts.md).
-
-You'll learn how to:
-
-> [!div class="checklist"]
-> * Set up your repository with files required for load testing.
-> * Set up Azure Pipelines to integrate with Azure Load Testing.
-> * Run the load test and view results in the pipeline logs.
-> * Define pass/fail criteria for the load test.
-> * Parameterize the load test by using pipeline variables.
-
-> [!IMPORTANT]
-> Azure Load Testing is currently in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
-> [!NOTE]
-> Azure Pipelines has a 60-minute timeout on jobs that are running on Microsoft-hosted agents for private projects. If your load test is running for more than 60 minutes, you'll need to pay for [additional capacity](/azure/devops/pipelines/agents/hosted?tabs=yaml#capabilities-and-limitations). If not, the pipeline will time out without waiting for the test results. You can view the status of the load test in the Azure portal.
-
-## Prerequisites
-
-* An Azure account with an active subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-* An Azure DevOps organization and project. If you don't have an Azure DevOps organization, you can [create one for free](/azure/devops/pipelines/get-started/pipelines-sign-up?view=azure-devops&preserve-view=true). If you need help with getting started with Azure Pipelines, see [Create your first pipeline](/azure/devops/pipelines/create-first-pipeline?preserve-view=true&view=azure-devops&tabs=java%2Ctfs-2018-2%2Cbrowser).
-* A GitHub account, where you can create a repository. If you don't have one, you can [create one for free](https://github.com/).
-
-## Set up the sample application repository
-
-To get started with this tutorial, you first need to set up a sample Node.js web application. The sample application contains an Azure Pipelines definition to deploy the application on Azure and trigger a load test.
--
-## Set up Azure Pipelines access permissions for Azure
-
-In this section, you'll configure your Azure DevOps project to have permissions to access the Azure Load Testing resource.
-
-To access Azure resources, create a service connection in Azure DevOps and use role-based access control to assign the necessary permissions:
-
-1. Sign in to your Azure DevOps organization (`https://dev.azure.com/<yourorganization>`).
-
-1. Select **Project settings** > **Service connections**.
-
-1. Select **+ New service connection**, select the **Azure Resource Manager** service connection, and then select **Next**.
-
-1. Select the **Service Principal (automatic)** authentication method, and then select **Next**.
-
-1. Select the **Subscription** scope level, and then select the Azure subscription that contains your Azure Load Testing resource.
-
- :::image type="content" source="./media/tutorial-cicd-azure-pipelines/new-service-connection.png" alt-text="Screenshot that shows selections for creating a new service connection.":::
-
- You'll use the name of the service connection in a later step to configure the pipeline.
-
-1. Select **Save** to create the connection.
-
-1. Select the service connection from the list, and then select **Manage Service Principal**.
-
- :::image type="content" source="./media/tutorial-cicd-azure-pipelines/manage-service-principal.png" alt-text="Screenshot that shows selections for managing a service principal.":::
-
- You'll see the details of the service principal in the Azure portal. Note the service principal's **Application (Client) ID** value.
-
- :::image type="content" source="./media/tutorial-cicd-azure-pipelines/service-connection-object-id.png" alt-text="Screenshot that shows how to get the application I D for the service connection.":::
-
-1. Assign the Load Test Contributor role to the service principal to allow access to the Azure Load Testing service.
-
- First, retrieve the ID of the service principal object. Select the `objectId` result from the following Azure CLI command:
-
- ```azurecli
- az ad sp show --id "<application-client-id>"
- ```
-
- Next, assign the Load Test Contributor role to the service principal. Replace the placeholder text `<sp-object-id>` with the ID of the service principal object. Also, replace `<subscription-name-or-id>` with your Azure subscription ID.
-
- ```azurecli
- az role assignment create --assignee "<sp-object-id>" \
- --role "Load Test Contributor" \
- --scope /subscriptions/<subscription-name-or-id>/resourceGroups/<resource-group-name> \
- --subscription "<subscription-name-or-id>"
- ```
-
-## Configure the Azure Pipelines workflow to run a load test
-
-In this section, you'll set up an Azure Pipelines workflow that triggers the load test. The sample application repository already contains a pipelines definition file *azure-pipeline.yml*.
-
-The Azure Pipelines workflow performs the following steps for every update to the main branch:
-- Deploy the sample Node.js application to an Azure App Service web app.
-- Create an Azure Load Testing resource using the *ARMTemplate/template.json* Azure Resource Manager (ARM) template, if the resource doesn't exist yet. Learn more about ARM templates [here](../azure-resource-manager/templates/overview.md).
-- Trigger Azure Load Testing to create and run the load test, based on the Apache JMeter script and the test configuration YAML file in the repository.
-- Invoke Azure Load Testing by using the [Azure Load Testing task](/azure/devops/pipelines/tasks/test/azure-load-testing) and the sample Apache JMeter script *SampleApp.jmx* and the load test configuration file *SampleApp.yaml*.
-
-Follow these steps to configure the Azure Pipelines workflow for your environment:
-
-1. Install the **Azure Load Testing** task extension from the Azure DevOps Marketplace.
-
- :::image type="content" source="./media/tutorial-cicd-azure-pipelines/browse-marketplace.png" alt-text="Screenshot that shows how to browse the Visual Studio Marketplace for extensions.":::
-
- :::image type="content" source="./media/tutorial-cicd-azure-pipelines/marketplace-load-testing-extension.png" alt-text="Screenshot that shows the button for installing the Azure Load Testing extension from the Visual Studio Marketplace.":::
-
-1. In your Azure DevOps project, select **Pipelines**, and then select **Create pipeline**.
-
- :::image type="content" source="./media/tutorial-cicd-azure-pipelines/create-pipeline.png" alt-text="Screenshot that shows selections for creating an Azure pipeline.":::
-
-1. On the **Connect** tab, select **GitHub**.
-
-1. Select **Authorize Azure Pipelines** to allow Azure Pipelines to access your GitHub account for triggering workflows.
-
-1. On the **Select** tab, select the sample application's forked repository.
-
- :::image type="content" source="./media/tutorial-cicd-azure-pipelines/create-pipeline-select-repo.png" alt-text="Screenshot that shows how to select the sample application's GitHub repository.":::
-
- The repository contains an *azure-pipeline.yml* pipeline definition file. The following snippet shows how to use the [Azure Load Testing task](/azure/devops/pipelines/tasks/test/azure-load-testing) in Azure Pipelines:
-
- ```yml
- - task: AzureLoadTest@1
- inputs:
- azureSubscription: $(serviceConnection)
- loadTestConfigFile: 'SampleApp.yaml'
- resourceGroup: $(loadTestResourceGroup)
- loadTestResource: $(loadTestResource)
- env: |
- [
- {
- "name": "webapp",
- "value": "$(webAppName).azurewebsites.net"
- }
- ]
- ```
-
- You'll now modify the pipeline to connect to your Azure Load Testing service.
-
-1. On the **Review** tab, replace the following placeholder text in the YAML code:
-
- |Placeholder |Value |
- |||
- |`<Name of your webapp>` | The name of the Azure App Service web app. |
| `<Name of your ARM Service connection>` | The name of the service connection that you created in the previous section. |
- |`<Azure subscriptionId>` | Your Azure subscription ID. |
- |`<Name of your load test resource>` | The name of your Azure Load Testing resource. |
- |`<Name of your load test resource group>` | The name of the resource group that contains the Azure Load Testing resource. |
-
- :::image type="content" source="./media/tutorial-cicd-azure-pipelines/create-pipeline-review.png" alt-text="Screenshot that shows the Azure Pipelines Review tab when you're creating a pipeline.":::
-
- These variables are used to configure the Azure Pipelines tasks for deploying the sample application to Azure, and to connect to your Azure Load Testing resource.
-
-1. Select **Save and run**, enter text for **Commit message**, and then select **Save and run**.
-
- :::image type="content" source="./media/tutorial-cicd-azure-pipelines/create-pipeline-save.png" alt-text="Screenshot that shows selections for saving and running a new Azure pipeline.":::
-
- Azure Pipelines now runs the CI/CD workflow. You can monitor the status and logs by selecting the pipeline job.
-
- :::image type="content" source="./media/tutorial-cicd-azure-pipelines/create-pipeline-status.png" alt-text="Screenshot that shows how to view pipeline job details.":::
-
-## View load test results
-
-To view the results of the load test in the pipeline log:
-
-1. In your Azure DevOps project, select **Pipelines**, and then select your pipeline definition from the list.
-
-1. Select the pipeline run to view the run summary.
-
- :::image type="content" source="./media/tutorial-cicd-azure-pipelines/create-pipeline-run-summary.png" alt-text="Screenshot that shows the pipeline run summary.":::
-
-1. Select **Load Test** in the **Jobs** section to view the pipeline log.
-
- :::image type="content" source="./media/tutorial-cicd-azure-pipelines/create-pipeline-log.png" alt-text="Screenshot that shows the Azure Pipelines run log.":::
-
- After the load test finishes, you can view the test summary information and the client-side metrics in the pipeline log. The log also shows the URL to go to the Azure Load Testing dashboard for this load test.
-
-2. In the pipeline log view, select **Load Test**, and then select **1 artifact produced** to download the result files for the load test.
-
- :::image type="content" source="./media/tutorial-cicd-azure-pipelines/create-pipeline-download-results.png" alt-text="Screenshot that shows how to download the load test results.":::
-
-## Define test pass/fail criteria
-
-In this section, you'll add criteria to determine whether your load test passes or fails. If at least one of the pass/fail criteria evaluates to `true`, the load test is unsuccessful.
-
-You can specify these criteria in the test configuration YAML file:
-
-1. Edit the *SampleApp.yml* file in your GitHub repository.
-
-1. Add the following snippet at the end of the file:
-
- ```yaml
- failureCriteria:
- - avg(response_time_ms) > 100
- - percentage(error) > 20
- ```
-
- You've now specified pass/fail criteria for your load test. The test will fail if at least one of these conditions is met:
-
- - The aggregate average response time is greater than 100 ms.
- - The aggregate percentage of errors is greater than 20%.
-
-1. Commit and push the changes to the main branch of the repository.
-
- The changes will trigger the Azure Pipelines CI/CD workflow.
-
-1. On the page for pipeline runs, select the most recent entry from the list.
-
- After the load test finishes, you'll notice that the pipeline failed because the average response time was higher than the number that you specified in the pass/fail criteria.
-
- :::image type="content" source="./media/tutorial-cicd-azure-pipelines/test-criteria-failed.png" alt-text="Screenshot that shows pipeline logs after failed test criteria.":::
-
- The Azure Load Testing service evaluates the criteria during the test run. If any of these conditions fails, Azure Load Testing service returns a nonzero exit code. This code informs the CI/CD workflow that the test has failed.
-
-1. Edit the *SampleApp.yml* file and change the test's pass/fail criteria:
-
- ```yaml
- failureCriteria:
-     - avg(response_time_ms) > 5000
-     - percentage(error) > 20
- ```
-
-1. Commit the changes to trigger the Azure Pipelines CI/CD workflow.
-
- :::image type="content" source="./media/tutorial-cicd-azure-pipelines/test-criteria-passed.png" alt-text="Screenshot that shows pipeline logs after all test criteria pass.":::
-
- The load test now succeeds and the pipeline finishes successfully.
-
-## Pass parameters to your load tests from the pipeline
-
-Next, you'll parameterize your load test by using pipeline variables. These variables can be secrets, such as passwords, or non-secrets.
-
-In this tutorial, you'll reconfigure the sample application to accept only secure requests. To send a secure request, you need to pass a secret value in the HTTP request:
-
-1. Edit the *SampleApp.yaml* file in your GitHub repository.
-
- Update the `testPlan` configuration setting to use the *SampleApp_Secrets.jmx* file:
-
- ```yml
- version: v0.1
- testName: SampleApp
- testPlan: SampleApp_Secrets.jmx
- description: 'SampleApp Test with secrets'
- engineInstances: 1
- ```
-
- The *SampleApp_Secrets.jmx* Apache JMeter script uses a user-defined variable that retrieves the secret value with the custom function `${__GetSecret(secretName)}`. Apache JMeter then passes this secret value to the sample application endpoint.
-
-1. Commit the changes to the YAML file.
-
-1. Edit the *config.json* file in your GitHub repository.
-
- Update the `enableSecretsFeature` setting to `true` to reconfigure the sample application to accept only secure requests:
-
- ```json
- {
- "enableSecretsFeature": true
- }
- ```
-
-1. Commit the changes to the *config.json* file.
-
-1. Go to the **Pipelines** page, select your pipeline definition, and then select **Edit**.
-
- :::image type="content" source="./media/tutorial-cicd-azure-pipelines/edit-pipeline.png" alt-text="Screenshot that shows selections for editing a pipeline definition.":::
-
-1. Select **Variables**, and then select **New variable**.
-
-1. Enter the **Name** (**mySecret**) and **Value** (**1797669089**) information. Then select the **Keep this value secret** checkbox to store the variable securely.
-
- :::image type="content" source="./media/tutorial-cicd-azure-pipelines/new-variable.png" alt-text="Screenshot that shows selections for creating a pipeline variable.":::
-
-1. Select **OK**, and then select **Save** to save the new variable.
-
-1. Edit the *azure-pipeline.yml* file to pass the secret to the load test.
-
- Edit the Azure Load Testing task by adding the following YAML snippet:
-
- ```yml
- secrets: |
- [
- {
- "name": "appToken",
- "value": "$(mySecret)"
- }
- ]
- ```
-
-1. Save and run the pipeline.
-
- The Azure Load Testing task securely passes the secret from the pipeline to the test engine. The secret parameter is used only while you're running the load test, and then the value is discarded from memory.
-
-## Clean up resources
--
-## Next steps
-
-You've now created an Azure Pipelines CI/CD workflow that uses Azure Load Testing for automatically running load tests. By using pass/fail criteria, you can set the status of the CI/CD workflow. With parameters, you can make the running of load tests configurable.
-
-* Learn more about the [Azure Load Testing task](/azure/devops/pipelines/tasks/test/azure-load-testing).
-* Learn more about [Parameterizing a load test](./how-to-parameterize-load-tests.md).
-* Learn more [Define test pass/fail criteria](./how-to-define-test-criteria.md).
-* Learn more about [Configuring server-side monitoring](./how-to-monitor-server-side-metrics.md).
load-testing Tutorial Cicd Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/tutorial-cicd-github-actions.md
- Title: 'Tutorial: Automate regression testing with GitHub Actions'-
-description: 'In this tutorial, you learn how to automate performance regression testing by using Azure Load Testing and GitHub Actions CI/CD workflows.'
---- Previously updated : 05/30/2022-
-#Customer intent: As an Azure user, I want to learn how to automatically test builds for performance regressions on every pull request and/or deployment by using GitHub Actions.
--
-# Tutorial: Identify performance regressions with Azure Load Testing Preview and GitHub Actions
-
-This tutorial describes how to automate performance regression testing with Azure Load Testing Preview and GitHub Actions.
-
-You'll set up a GitHub Actions CI/CD workflow to deploy a sample Node.js application on Azure and trigger a load test using the [Azure Load Testing action](https://github.com/marketplace/actions/azure-load-testing).
-
-You'll then define test failure criteria to ensure the application meets your goals. When a criterion isn't met, the CI/CD pipeline will fail. For more information, see [Define load test failure criteria](./how-to-define-test-criteria.md).
-
-Finally, you'll make the load test configurable by passing parameters from the CI/CD pipeline to the JMeter script. For example, you could use a GitHub secret to pass an authentication token to the script. For more information, see [Parameterize load tests with secrets and environment variables](./how-to-parameterize-load-tests.md).
-
-If you're using Azure Pipelines for your CI/CD workflows, see the corresponding [Azure Pipelines tutorial](./tutorial-cicd-azure-pipelines.md).
-
-You'll learn how to:
-
-> [!div class="checklist"]
->
-> * Set up your repository with files required for load testing.
-> * Set up a GitHub workflow to integrate with Azure Load Testing.
-> * Run the load test and view results in the workflow.
-> * Define test criteria for the load test to pass or fail based on thresholds.
-> * Parameterize a load test by using GitHub secrets.
-
-> [!IMPORTANT]
-> Azure Load Testing is currently in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
-## Prerequisites
-
-* An Azure account with an active subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-* A GitHub account where you can create a repository. If you don't have one, you can [create one for free](https://github.com/).
-
-## Set up the sample application repository
-
-To get started with this tutorial, you first need to set up a sample Node.js web application. The sample application repository contains a GitHub Actions workflow definition that deploys the Node.js application on Azure and then triggers a load test.
--
-## Set up GitHub access permissions for Azure
-
-To grant GitHub Actions access to your Azure Load Testing resource, perform the following steps:
-
-1. Create a service principal that has the permissions to access Azure Load Testing.
-1. Configure a GitHub secret with the service principal information.
-1. Authenticate with Azure using [Azure Login](https://github.com/Azure/login).
-
-### Create a service principal
-
-First, you'll create an Azure Active Directory [service principal](../active-directory/develop/app-objects-and-service-principals.md#service-principal-object) and grant it the permissions to access your Azure Load Testing resource.
-
-1. Run the following Azure CLI command to create a service principal and assign the *Contributor* role:
-
- ```azurecli
- az ad sp create-for-rbac --name "my-load-test-cicd" --role contributor \
- --scopes /subscriptions/<subscription-id> \
- --sdk-auth
- ```
-
- In the previous command, replace the placeholder text `<subscription-id>` with the Azure subscription ID of your Azure Load Testing resource.
-
- > [!NOTE]
- > You might get a `--sdk-auth` deprecation warning when you run this command. Alternatively, you can use OpenID Connect (OIDC) based authentication for authenticating GitHub with Azure. Learn how to [use the Azure login action with OpenID Connect](/azure/developer/github/connect-from-azure#use-the-azure-login-action-with-openid-connect).
-
- The output is the role assignment credentials that provide access to your resource. The command outputs a JSON object similar to the following snippet.
-
- ```json
- {
- "clientId": "<GUID>",
- "clientSecret": "<GUID>",
- "subscriptionId": "<GUID>",
- "tenantId": "<GUID>",
- (...)
- }
- ```
-
-1. Copy this JSON object. You'll store this value as a GitHub secret in a later step.
-
-1. Assign the service principal the **Load Test Contributor** role, which grants permission to create, manage, and run tests in an Azure Load Testing resource.
-
- First, retrieve the ID of the service principal object by running this Azure CLI command:
-
- ```azurecli
- az ad sp list --filter "displayname eq 'my-load-test-cicd'" -o table
- ```
-
- Next, assign the **Load Test Contributor** role to the service principal.
-
- Replace the placeholder text `<sp-object-id>` with the `ObjectId` value from the previous Azure CLI command. Also, replace `<subscription-id>` with your Azure subscription ID.
-
- ```azurecli
- az role assignment create --assignee "<sp-object-id>" \
- --role "Load Test Contributor" \
- --scope /subscriptions/<subscription-id>/resourceGroups/<resource-group-name> \
- --subscription "<subscription-id>"
- ```
-
-You now have a service principal that has the necessary permissions to create and run a load test.
-
-### Configure the GitHub secret
-
-Next, add a GitHub secret **AZURE_CREDENTIALS** to your repository to store the service principal you created earlier. You'll pass this GitHub secret to the Azure Login action to authenticate with Azure.
-
-> [!NOTE]
-> If you're using OpenID Connect to authenticate with Azure, you don't have to pass the service principal object in the Azure login action. Learn how to [use the Azure login action with OpenID Connect](/azure/developer/github/connect-from-azure#use-the-azure-login-action-with-openid-connect).
-
-1. In [GitHub](https://github.com), browse to your forked repository, select **Settings** > **Secrets** > **New repository secret**.
-
- :::image type="content" source="./media/tutorial-cicd-github-actions/github-new-secret.png" alt-text="Screenshot that shows selections for adding a new repository secret to your GitHub repo.":::
-
-1. Paste the JSON role assignment credentials that you copied previously, as the value of secret variable **AZURE_CREDENTIALS**.
-
- :::image type="content" source="./media/tutorial-cicd-github-actions/github-new-secret-details.png" alt-text="Screenshot that shows the details of the new GitHub repository secret.":::
-
-### Authenticate with Azure
-
-You can now use the `AZURE_CREDENTIALS` secret with the Azure Login action in your CI/CD workflow. The *.github/workflows/workflow.yml* file in the sample application repository already has this configuration:
-
-```yml
-jobs:
- build-and-deploy:
- # The type of runner that the job will run on
- runs-on: ubuntu-latest
-
- # Steps represent a sequence of tasks that will be executed as part of the job
- steps:
- # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
- - name: Checkout GitHub Actions
- uses: actions/checkout@v2
-
- - name: Login to Azure
- uses: azure/login@v1
- continue-on-error: false
- with:
- creds: ${{ secrets.AZURE_CREDENTIALS }}
-```
-
-> [!NOTE]
-> If you're using OpenID Connect to authenticate with Azure, you don't have to pass the service principal object in the Azure login action. Learn how to [use the Azure login action with OpenID Connect](/azure/developer/github/connect-from-azure#use-the-azure-login-action-with-openid-connect).
-
-You've now authorized your GitHub Actions workflow to access your Azure Load Testing resource. You'll now configure the CI/CD workflow to run a load test with Azure Load Testing.
-
-## Configure the GitHub Actions workflow to run a load test
-
-In this section, you'll set up a GitHub Actions workflow that triggers the load test by using the [Azure Load Testing Action](https://github.com/marketplace/actions/azure-load-testing).
-
-The following code snippet shows an example of how to trigger a load test using the `azure/load-testing` action:
-
-```yml
-- name: 'Azure Load Testing'
-  uses: azure/load-testing@v1
-  with:
-    loadTestConfigFile: 'my-load-test-config.yaml'
-    loadTestResource: my-load-test-resource
-    resourceGroup: my-resource-group
-    env: |
-      [
-        {
-          "name": "webapp",
-          "value": "my-web-app.azurewebsites.net"
-        }
-      ]
-```
-
-The sample application repository already contains a sample workflow file *.github/workflows/workflow.yml*. The GitHub Actions workflow performs the following steps for every update to the main branch:
-
-- Deploy the sample Node.js application to an Azure App Service web app.
-- Create an Azure Load Testing resource using the *ARMTemplate/template.json* Azure Resource Manager (ARM) template, if the resource doesn't exist yet. Learn more about ARM templates [here](../azure-resource-manager/templates/overview.md).
-- Invoke Azure Load Testing by using the [Azure Load Testing Action](https://github.com/marketplace/actions/azure-load-testing) and the sample Apache JMeter script *SampleApp.jmx* and the load test configuration file *SampleApp.yaml*.
-
-Follow these steps to configure the GitHub Actions workflow for your environment:
-
-1. Open the *.github/workflows/workflow.yml* GitHub Actions workflow file in your sample application's repository.
-
-1. Edit the file and replace the following placeholder text:
-
- |Placeholder |Value |
-    |---------|---------|
- |`<your-azure-web-app>` | The name of the Azure App Service web app. |
- |`<your-azure-load-testing-resource-name>` | The name of your Azure Load Testing resource. |
- |`<your-azure-load-testing-resource-group-name>` | The name of the resource group that contains the Azure Load Testing resource. |
-
- ```yaml
- env:
- AZURE_WEBAPP_NAME: "<your-azure-web-app>"
- LOAD_TEST_RESOURCE: "<your-azure-load-testing-resource-name>"
- LOAD_TEST_RESOURCE_GROUP: "<your-azure-load-testing-resource-group-name>"
- ```
-
-    These variables are used to configure the GitHub Actions workflow for deploying the sample application to Azure, and to connect to your Azure Load Testing resource.
-
-1. Commit your changes directly to the main branch.
-
- The commit will trigger the GitHub Actions workflow in your repository. You can verify that the workflow is running by going to the **Actions** tab.
-
-## View load test results
-
-When the load test finishes, view the results in the GitHub Actions workflow log:
-
-1. Select the **Actions** tab in your GitHub repository to view the list of workflow runs.
-
- :::image type="content" source="./media/tutorial-cicd-github-actions/workflow-run-list.png" alt-text="Screenshot that shows the list of GitHub Actions workflow runs.":::
-
-1. Select the workflow run from the list to open the run details and logging information.
-
- :::image type="content" source="./media/tutorial-cicd-github-actions/github-actions-workflow-completed.png" alt-text="Screenshot that shows the workflow logging information.":::
-
- After the load test finishes, you can view the test summary information and the client-side metrics in the workflow log. The log also shows the steps to go to the Azure Load Testing dashboard for this load test.
-
-1. On the screen that shows the workflow run's details, select the **loadTestResults** artifact to download the result files for the load test.
-
- :::image type="content" source="./media/tutorial-cicd-github-actions/github-actions-artifacts.png" alt-text="Screenshot that shows artifacts of the workflow run.":::
-
-## Define test pass/fail criteria
-
-You can use test failure criteria to define thresholds for when a load test should fail. For example, a test might fail when the percentage of failed requests surpasses a specific value.
-
-When at least one of the failure criteria is met, the load test status is failed. As a result, the CI/CD workflow will also fail and the development team can be alerted.
-
-You can specify these criteria in the [test configuration YAML file](./reference-test-config-yaml.md):
-
-1. Edit the *SampleApp.yaml* file in your GitHub repository.
-
-1. Add the following snippet at the end of the file:
-
- ```yaml
- failureCriteria:
- - avg(response_time_ms) > 100
- - percentage(error) > 20
- ```
-
- You've now specified pass/fail criteria for your load test. The test will fail if at least one of these conditions is met:
-
- - The aggregate average response time is greater than 100 ms.
- - The aggregate percentage of errors is greater than 20%.
-
-1. Commit and push the changes to the main branch of the repository.
-
- The changes will trigger the GitHub Actions CI/CD workflow.
-
-1. Select the **Actions** tab, and then select the most recent workflow run to view the workflow log.
-
- :::image type="content" source="./media/tutorial-cicd-github-actions/github-actions-workflow-failed.png" alt-text="Screenshot that shows the failed workflow output log.":::
-
- After the load test finishes, you'll notice that the workflow failed because the average response time was higher than the time that you specified in the pass/fail criteria.
-
- The Azure Load Testing service evaluates the criteria during the test run. If any of these conditions fails, Azure Load Testing service returns a nonzero exit code. This code informs the CI/CD workflow that the test has failed.
-
-1. Edit the *SampleApp.yaml* file and change the test's pass/fail criteria:
-
- ```yaml
- failureCriteria:
- - avg(response_time_ms) > 5000
- - percentage(error) > 20
- ```
-
-1. Commit the changes to trigger the GitHub Actions workflow.
-
- :::image type="content" source="./media/tutorial-cicd-github-actions/github-actions-workflow-passed.png" alt-text="Screenshot that shows the succeeded workflow output log.":::
-
- The load test now succeeds and the workflow finishes successfully.
-
-## Pass parameters to your load tests from the workflow
-
-Next, you'll parameterize your load test by using workflow variables. These parameters can be secrets, such as passwords, or non-secrets. For more information, see [Parameterize load tests with secrets and environment variables](./how-to-parameterize-load-tests.md).
-
-In this tutorial, you'll now use the *SampleApp_Secrets.jmx* JMeter test script. This script invokes an application endpoint that requires a secure value to be passed as an HTTP header.
-
-1. Edit the *SampleApp.yaml* file in your GitHub repository and update the `testPlan` configuration setting to use the *SampleApp_Secrets.jmx* file.
-
- The `testPlan` setting specifies which JMeter script Azure Load Testing uses.
-
- ```yml
- version: v0.1
- testName: SampleApp
- testPlan: SampleApp_Secrets.jmx
- description: 'SampleApp Test with secrets'
- engineInstances: 1
- ```
-
- The *SampleApp_Secrets.jmx* Apache JMeter script uses a user-defined variable that retrieves the secret value with the custom function `${__GetSecret(secretName)}`. Apache JMeter then passes this secret value to the sample application endpoint.
-
-1. Commit the changes to the YAML file.
-
-1. Edit the *config.json* file in your GitHub repository.
-
- Update the `enableSecretsFeature` setting to `true` to reconfigure the sample application to accept only secure requests:
-
- ```json
- {
- "enableSecretsFeature": true
- }
- ```
-
-1. Commit the changes to the *config.json* file.
-
-1. Add a new secret to your GitHub repository by selecting **Settings** > **Secrets** > **New repository secret**.
-
-1. Enter **MY_SECRET** for **Name**, enter **1797669089** for **Value**, and then select **Add secret**.
-
- :::image type="content" source="./media/tutorial-cicd-github-actions/github-new-secret-jmx.png" alt-text="Screenshot that shows the repository secret that's used in the JMeter script.":::
-
-1. Edit the *.github/workflows/workflow.yml* file to pass the secret to the load test.
-
- Edit the Azure Load Testing action by adding the following YAML snippet:
-
- ```yaml
- secrets: |
- [
- {
- "name": "appToken",
- "value": "${{ secrets.MY_SECRET }}"
- }
- ]
- ```
-
-1. Commit the changes, which trigger the GitHub Actions workflow.
-
- The Azure Load Testing task securely passes the repository secret from the workflow to the test engine. The secret parameter is used only while you're running the load test. Then the parameter's value is discarded from memory.
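-
-    For reference, the complete `azure/load-testing` step with the `secrets` input might look like the following sketch. This only combines the snippets shown in this tutorial; the names and values are placeholders, so check your own *.github/workflows/workflow.yml* for the exact values:
-
-    ```yml
-    - name: 'Azure Load Testing'
-      uses: azure/load-testing@v1
-      with:
-        loadTestConfigFile: 'SampleApp.yaml'
-        loadTestResource: ${{ env.LOAD_TEST_RESOURCE }}
-        resourceGroup: ${{ env.LOAD_TEST_RESOURCE_GROUP }}
-        # The repository secret is passed to the test engine only for the duration of the test run
-        secrets: |
-          [
-            {
-              "name": "appToken",
-              "value": "${{ secrets.MY_SECRET }}"
-            }
-          ]
-    ```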
-
-## Clean up resources
--
-## Next steps
-
-You've now created a GitHub Actions workflow that uses Azure Load Testing for automatically running load tests. By using pass/fail criteria, you can set the status of the CI/CD workflow. With parameters, you can make the running of load tests configurable.
-
-* Learn more about the [key concepts for Azure Load Testing](./concept-load-testing-concepts.md).
-* Learn more about the [Azure Load Testing Action](https://github.com/marketplace/actions/azure-load-testing).
-* Learn how to [parameterize a load test](./how-to-parameterize-load-tests.md).
-* Learn how to [define test pass/fail criteria](./how-to-define-test-criteria.md).
load-testing Tutorial Identify Bottlenecks Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/tutorial-identify-bottlenecks-azure-portal.md
For Azure Cosmos DB, increase the database RU scale setting:
1. Select **Scale & Settings**, and update the throughput value to **1200**.
- :::image type="content" source="./media/tutorial-identify-bottlenecks-azure-portal/1200-ru-scaling-for-cosmos-db.png" alt-text="Screenshot that shows the updated Azure Cosmos D B scale settings.":::
+ :::image type="content" source="./media/tutorial-identify-bottlenecks-azure-portal/1200-ru-scaling-for-cosmos-db.png" alt-text="Screenshot that shows the updated Azure Cosmos DB scale settings.":::
1. Select **Save** to confirm the changes.
Now that you've increased the database throughput, rerun the load test and verify that the performance has improved:
1. Check the server-side metrics for Azure Cosmos DB and ensure that the performance has improved.
- :::image type="content" source="./media/tutorial-identify-bottlenecks-azure-portal/cosmos-db-metrics-post-run.png" alt-text="Screenshot that shows the Azure Cosmos D B client-side metrics after update of the scale settings.":::
+ :::image type="content" source="./media/tutorial-identify-bottlenecks-azure-portal/cosmos-db-metrics-post-run.png" alt-text="Screenshot that shows the Azure Cosmos DB client-side metrics after update of the scale settings.":::
The Azure Cosmos DB **Normalized RU Consumption** value is now well below 100%.
As a result, the overall performance of your application has improved.
Advance to the next tutorial to learn how to set up an automated regression testing workflow by using Azure Pipelines or GitHub Actions.

> [!div class="nextstepaction"]
-> [Set up automated regression testing](./tutorial-cicd-azure-pipelines.md)
+> [Set up automated regression testing](./tutorial-identify-performance-regression-with-cicd.md)
load-testing Tutorial Identify Performance Regression With Cicd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/tutorial-identify-performance-regression-with-cicd.md
+
+ Title: 'Tutorial: Automate regression tests with CI/CD'
+
+description: 'In this tutorial, you learn how to automate regression testing by using Azure Load Testing and CI/CD workflows. Quickly identify performance degradation for applications under high load.'
++++ Last updated : 09/29/2022++
+#Customer intent: As an Azure user, I want to learn how to automatically test builds for performance regressions on every merge request and/or deployment by using Azure Pipelines.
++
+# Tutorial: Identify performance regressions by automating load tests with CI/CD
+
+This tutorial describes how to quickly identify performance regressions by using Azure Load Testing Preview and CI/CD tools. By running load tests in Azure Pipelines or GitHub Actions, you can detect when your application experiences degraded performance under load.
+
+In this tutorial, you'll set up a CI/CD pipeline that runs a load test for a sample application on Azure. You'll verify the application behavior under load directly from the CI/CD dashboard. You'll then use load test fail criteria to get alerted when the application doesn't meet your quality requirements.
+
+In this tutorial, you'll use a sample Node.js application and JMeter script. The tutorial doesn't require any coding or Apache JMeter skills.
+
+You'll learn how to:
+
+> [!div class="checklist"]
+> * Set up the sample application GitHub repository.
+> * Configure service authentication for your CI/CD workflow.
+> * Configure the CI/CD workflow to run a load test.
+> * View the load test results in the CI/CD dashboard.
+> * Define load test fail criteria to identify performance regressions.
+
+> [!NOTE]
+> Azure Pipelines has a 60-minute timeout on jobs that are running on Microsoft-hosted agents for private projects. If your load test is running for more than 60 minutes, you'll need to pay for [additional capacity](/azure/devops/pipelines/agents/hosted?tabs=yaml#capabilities-and-limitations). If not, the pipeline will time out without waiting for the test results. You can view the status of the load test in the Azure portal.
+
+> [!IMPORTANT]
+> Azure Load Testing is currently in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Prerequisites
+
+* An Azure account with an active subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+* If you're using Azure Pipelines, an Azure DevOps organization and project. If you don't have an Azure DevOps organization, you can [create one for free](/azure/devops/pipelines/get-started/pipelines-sign-up?view=azure-devops&preserve-view=true). If you need help with getting started with Azure Pipelines, see [Create your first pipeline](/azure/devops/pipelines/create-first-pipeline?preserve-view=true&view=azure-devops&tabs=java%2Ctfs-2018-2%2Cbrowser).
+* A GitHub account, where you can create a repository. If you don't have one, you can [create one for free](https://github.com/).
+
+## Set up the sample application repository
+
+To get started with this tutorial, you first need to set up a sample Node.js web application. The sample application repository contains an Azure Pipelines definition to deploy the application on Azure and trigger a load test.
++
+## Configure service authentication
+
+Before you configure the CI/CD pipeline to run a load test, you'll grant the CI/CD workflow the permissions to access your Azure load testing resource.
+
+# [Azure Pipelines](#tab/pipelines)
+
+To access your Azure Load Testing resource from the Azure Pipelines workflow, you first create a service connection in your Azure DevOps project. The service connection creates an Azure Active Directory [service principal](../active-directory/develop/app-objects-and-service-principals.md#service-principal-object). This service principal represents your Azure Pipelines workflow in Azure Active Directory.
+
+Next, you grant permissions to this service principal to create and run a load test with your Azure Load Testing resource.
+
+### Create a service connection in Azure Pipelines
+
+Create a service connection in Azure Pipelines so that your CI/CD workflow has access to your Azure subscription. In a later step, you'll grant the service connection's service principal permissions to create and run load tests.
+
+1. Sign in to your Azure DevOps organization (`https://dev.azure.com/<your-organization>`), and select your project.
+
+1. Select **Project settings** > **Service connections**.
+
+1. Select **+ New service connection**, select the **Azure Resource Manager** service connection, and then select **Next**.
+
+1. Select the **Service Principal (automatic)** authentication method, and then select **Next**.
+
+1. Enter the service connection information, and then select **Save** to create the service connection.
+
+ | Field | Value |
+ | -- | -- |
+ | **Scope level** | *Subscription*. |
+ | **Subscription** | Select the Azure subscription that will host your load testing resource. |
+ | **Resource group** | Leave empty. The pipeline creates a new resource group for the Azure Load Testing resource. |
+ | **Service connection name** | Enter a unique name for the service connection. You'll use this name later, to configure the pipeline definition. |
+ | **Grant access permission to all pipelines** | Checked. |
+
+1. Select the service connection that you created from the list, and then select **Manage Service Principal**.
+
+ :::image type="content" source="./media/tutorial-identify-performance-regression-with-cicd/manage-service-principal.png" alt-text="Screenshot that shows selections for managing a service principal.":::
+
+1. In the Azure portal, copy the **Application (Client) ID** value.
++
+### Grant access to Azure Load Testing
+
+To grant access to your Azure Load Testing resource, assign the Load Test Contributor role to the service principal. This role grants the service principal access to create and run load tests with your Azure Load Testing service. Learn more about [managing users and roles in Azure Load Testing](./how-to-assign-roles.md).
+
+1. Retrieve the ID of the service principal object using the Azure CLI. Replace the text placeholder `<application-client-id>` with the value you copied.
+
+ ```azurecli-interactive
+ object_id=$(az ad sp show --id "<application-client-id>" --query "id" -o tsv)
+ echo $object_id
+ ```
+
+1. Assign the `Load Test Contributor` role to the service principal:
+
+ ```azurecli-interactive
+ subscription=$(az account show --query "id" -o tsv)
+ echo $subscription
+
+ az role assignment create --assignee $object_id \
+ --role "Load Test Contributor" \
+ --scope /subscriptions/$subscription \
+ --subscription $subscription
+ ```
+
+# [GitHub Actions](#tab/github)
+
+To access your Azure Load Testing resource from the GitHub Actions workflow, you first create an Azure Active Directory [service principal](../active-directory/develop/app-objects-and-service-principals.md#service-principal-object). This service principal represents your GitHub Actions workflow in Azure Active Directory.
+
+Next, you grant permissions to the service principal to create and run a load test with your Azure Load Testing resource.
++
+### Create a service principal
+
+Create a service principal in the Azure subscription and assign the Contributor role so that your GitHub Actions workflow has access to your Azure subscription. In a later step, you'll grant the service principal permissions to create and run load tests.
+
+1. Create a service principal and assign the `Contributor` role:
+
+ ```azurecli-interactive
+ subscription=$(az account show --query "id" -o tsv)
+ echo $subscription
+
+ az ad sp create-for-rbac --name "my-load-test-cicd" --role contributor \
+ --scopes /subscriptions/$subscription \
+ --sdk-auth
+ ```
+
+ The output is a JSON object that represents the service principal. You'll use this information to authenticate with Azure in the GitHub Actions workflow.
+
+ ```output
+ Creating 'contributor' role assignment under scope '/subscriptions/123abc45-6789-0abc-def1-234567890abc'
+ {
+ "clientId": "00000000-0000-0000-0000-000000000000",
+ "clientSecret": "00000000-0000-0000-0000-000000000000",
+ "subscriptionId": "00000000-0000-0000-0000-000000000000",
+ "tenantId": "00000000-0000-0000-0000-000000000000",
+ "activeDirectoryEndpointUrl": "https://login.microsoftonline.com",
+ "resourceManagerEndpointUrl": "https://management.azure.com/",
+ "activeDirectoryGraphResourceId": "https://graph.windows.net/",
+ "sqlManagementEndpointUrl": "https://management.core.windows.net:8443/",
+ "galleryEndpointUrl": "https://gallery.azure.com/",
+ "managementEndpointUrl": "https://management.core.windows.net/"
+ }
+ ```
+
+ > [!NOTE]
+ > You might get a `--sdk-auth` deprecation warning when you run this command. Alternatively, you can use OpenID Connect (OIDC) based authentication for authenticating GitHub with Azure. Learn how to [use the Azure login action with OpenID Connect](/azure/developer/github/connect-from-azure#use-the-azure-login-action-with-openid-connect).
+
+1. Copy the output JSON object.
+
+1. Add a GitHub secret **AZURE_CREDENTIALS** to your repository to store the service principal you created earlier. The `azure/login` action in the GitHub Actions workflow uses this secret to authenticate with Azure.
+
+ > [!NOTE]
+ > If you're using OpenID Connect to authenticate with Azure, you don't have to pass the service principal object in the Azure login action. Learn how to [use the Azure login action with OpenID Connect](/azure/developer/github/connect-from-azure#use-the-azure-login-action-with-openid-connect).
+
+ 1. In [GitHub](https://github.com), browse to your forked repository, and select **Settings** > **Secrets** > **Actions** > **New repository secret**.
+
+ 1. Enter the new secret information, and then select **Add secret** to create a new secret.
+
+ | Field | Value |
+ | -- | -- |
+ | **Name** | *AZURE_CREDENTIALS* |
+ | **Secret** | Paste the JSON role assignment credentials you copied earlier. |
+
+### Grant access to Azure Load Testing
+
+To grant access to your Azure Load Testing resource, assign the Load Test Contributor role to the service principal. This role grants the service principal access to create and run load tests with your Azure Load Testing service. Learn more about [managing users and roles in Azure Load Testing](./how-to-assign-roles.md).
+
+1. Retrieve the ID of the service principal object:
+
+ ```azurecli-interactive
+ object_id=$(az ad sp list --filter "displayname eq 'my-load-test-cicd'" --query "[0].id" -o tsv)
+ echo $object_id
+ ```
+
+1. Assign the `Load Test Contributor` role to the service principal:
+
+ ```azurecli-interactive
+ az role assignment create --assignee $object_id \
+ --role "Load Test Contributor" \
+ --scope /subscriptions/$subscription \
+ --subscription $subscription
+ ```
++
+## Configure the CI/CD workflow to run a load test
+
+You'll now create a CI/CD workflow to create and run a load test for the sample application. The sample application repository already contains a CI/CD workflow definition that first deploys the application to Azure, and then creates a load test based on the JMeter test script (*SampleApp.jmx*). You'll update the sample workflow definition file to specify the Azure subscription and application details.
+
+The first CI/CD workflow run creates a new Azure Load Testing resource in your Azure subscription by using the *ARMTemplate/template.json* Azure Resource Manager (ARM) template. Learn more about [ARM templates](../azure-resource-manager/templates/overview.md).
+
+# [Azure Pipelines](#tab/pipelines)
+
+You'll create a new Azure pipeline that is linked to your fork of the sample application repository. This repository contains the following items:
+
+- The sample application source code.
+- The *azure-pipelines.yml* pipeline definition file.
+- The *SampleApp.jmx* JMeter test script.
+- The *SampleApp.yaml* Azure Load Testing configuration file.
+
+To create and run the load test, the Azure Pipelines definition uses the [Azure Load Testing task](/azure/devops/pipelines/tasks/test/azure-load-testing) extension from the Azure DevOps Marketplace.
+
+1. Open the [Azure Load Testing task extension](https://marketplace.visualstudio.com/items?itemName=AzloadTest.AzloadTesting) in the Azure DevOps Marketplace, and select **Get it free**.
+
+1. Select your Azure DevOps organization, and then select **Install** to install the extension.
+
+ If you don't have administrator privileges for the selected Azure DevOps organization, select **Request** to request an administrator to install the extension.
+
+1. In your Azure DevOps project, select **Pipelines** in the left navigation, and then select **Create pipeline**.
+
+1. On the **Connect** tab, select **GitHub**.
+
+1. Select **Authorize Azure Pipelines** to allow Azure Pipelines to access your GitHub account for triggering workflows.
+
+1. On the **Select** tab, select the sample application's forked repository.
+
+ :::image type="content" source="./media/tutorial-identify-performance-regression-with-cicd/create-pipeline-select-repo.png" alt-text="Screenshot that shows how to select the sample application's GitHub repository.":::
+
+ Azure Pipelines automatically detects the *azure-pipelines.yml* pipeline definition file.
+
+1. Notice that the pipeline definition contains the `LoadTest` stage, which has two tasks.
+
+ The `AzureResourceManagerTemplateDeployment` task deploys a new Azure load testing resource in your Azure subscription.
+
+ Next, the `AzureLoadTest` [Azure Load Testing task](/azure/devops/pipelines/tasks/test/azure-load-testing) creates and starts a load test. This task uses the `SampleApp.yaml` [load test configuration file](./reference-test-config-yaml.md), which contains the configuration parameters for the load test, such as the number of parallel test engines.
+
+ ```yml
+ - task: AzureLoadTest@1
+ inputs:
+ azureSubscription: $(serviceConnection)
+ loadTestConfigFile: 'SampleApp.yaml'
+ resourceGroup: $(loadTestResourceGroup)
+ loadTestResource: $(loadTestResource)
+ env: |
+ [
+ {
+ "name": "webapp",
+ "value": "$(webAppName).azurewebsites.net"
+ }
+ ]
+ ```
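+
+    The *SampleApp.yaml* configuration file itself is small. As a rough sketch of its shape, based on the sample configuration used in these tutorials (your copy of the file may differ):
+
+    ```yml
+    version: v0.1
+    testName: SampleApp
+    testPlan: SampleApp.jmx        # the JMeter script to run
+    description: 'SampleApp load test'
+    engineInstances: 1             # number of parallel test engines
+    ```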
+
+ If a load test already exists, the `AzureLoadTest` task won't create a new load test, but will add a test run to this load test. To identify regressions over time, you can then [compare multiple test runs](./how-to-compare-multiple-test-runs.md).
+
+1. On the **Review** tab, replace the following placeholder text at the beginning of the pipeline definition:
+
+ These variables are used to configure the deployment of the sample application, and to create the load test.
+
+ |Placeholder |Value |
+    |---------|---------|
+ | `<Name of your webapp>` | The name of the Azure App Service web app. |
+    | `<Name of your ARM Service connection>` | The name of the service connection that you created in the previous section. |
+ | `<Azure subscriptionId>` | Your Azure subscription ID. |
+ | `<Name of your load test resource>` | The name of your Azure Load Testing resource. |
+ | `<Name of your load test resource group>` | The name of the resource group that contains the Azure Load Testing resource. |
+
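+    For orientation, the variables block at the top of *azure-pipelines.yml* might look like the following sketch. The variable names are inferred from the `AzureLoadTest` snippet above, so verify them against the actual file:
+
+    ```yml
+    variables:
+      webAppName: "<Name of your webapp>"
+      serviceConnection: "<Name of your ARM Service connection>"
+      azureSubscriptionId: "<Azure subscriptionId>"   # assumed name for the subscription variable
+      loadTestResource: "<Name of your load test resource>"
+      loadTestResourceGroup: "<Name of your load test resource group>"
+    ```
+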
+ :::image type="content" source="./media/tutorial-identify-performance-regression-with-cicd/create-pipeline-review.png" alt-text="Screenshot that shows the Azure Pipelines Review tab when you're creating a pipeline.":::
+
+1. Select **Save and run**, enter text for **Commit message**, and then select **Save and run**.
+
+ Azure Pipelines now runs the CI/CD workflow and will deploy the sample application and create the load test.
+
+1. Select **Pipelines** in the left navigation, and then select the new pipeline run from the list to monitor its status.
+
+ You can view the detailed run log by selecting the pipeline job.
+
+ :::image type="content" source="./media/tutorial-identify-performance-regression-with-cicd/create-pipeline-status.png" alt-text="Screenshot that shows how to view pipeline job details.":::
+
+# [GitHub Actions](#tab/github)
+
+You'll create a GitHub Actions workflow in your fork of the sample application repository. This repository contains the following items:
+
+- The sample application source code.
+- The *.github/workflows/workflow.yml* GitHub Actions workflow.
+- The *SampleApp.jmx* JMeter test script.
+- The *SampleApp.yaml* Azure Load Testing configuration file.
+
+To create and run the load test, the GitHub Actions workflow uses the [Azure Load Testing Action](https://github.com/marketplace/actions/azure-load-testing) from the GitHub Actions Marketplace.
+
+The sample application repository already contains a sample workflow file *.github/workflows/workflow.yml*. The GitHub Actions workflow performs the following steps for every update to the main branch:
++
+- Invoke Azure Load Testing by using the [Azure Load Testing Action](https://github.com/marketplace/actions/azure-load-testing) and the sample Apache JMeter script *SampleApp.jmx* and the load test configuration file *SampleApp.yaml*.
+
+1. Open the *.github/workflows/workflow.yml* GitHub Actions workflow file in your sample application's repository.
+
+1. Notice the `loadTest` job, which creates and runs the load test:
+
+ - The `azure/login` action authenticates with Azure, by using the `AZURE_CREDENTIALS` secret to pass the service principal credentials.
+
+ ```yml
+ - name: Login to Azure
+ uses: azure/login@v1
+ continue-on-error: false
+ with:
+ creds: ${{ secrets.AZURE_CREDENTIALS }}
+ ```
+
+ - The `azure/arm-deploy` action deploys a new Azure load testing resource in your Azure subscription.
+
+ ```yml
+ - name: Create Azure Load Testing resource
+ uses: azure/arm-deploy@v1
+ with:
+ resourceGroupName: ${{ env.LOAD_TEST_RESOURCE_GROUP }}
+ template: ./ARMTemplate/template.json
+ parameters: ./ARMTemplate/parameters.json name=${{ env.LOAD_TEST_RESOURCE }} location="${{ env.LOCATION }}"
+ ```
+
+ - The `azure/load-testing` [Azure Load Testing Action](https://github.com/marketplace/actions/azure-load-testing) creates and starts a load test. This action uses the `SampleApp.yaml` [load test configuration file](./reference-test-config-yaml.md), which contains the configuration parameters for the load test, such as the number of parallel test engines.
+
+ ```yml
+ - name: 'Azure Load Testing'
+ uses: azure/load-testing@v1
+ with:
+ loadTestConfigFile: 'SampleApp.yaml'
+ loadTestResource: ${{ env.LOAD_TEST_RESOURCE }}
+ resourceGroup: ${{ env.LOAD_TEST_RESOURCE_GROUP }}
+ env: |
+ [
+ {
+ "name": "webapp",
+ "value": "${{ env.AZURE_WEBAPP_NAME }}.azurewebsites.net"
+ }
+ ]
+ ```
+
+ If a load test already exists, the `azure/load-testing` action won't create a new load test, but will add a test run to this load test. To identify regressions over time, you can then [compare multiple test runs](./how-to-compare-multiple-test-runs.md).
+
+1. Replace the following placeholder text at the beginning of the workflow definition file:
+
+ These variables are used to configure the deployment of the sample application, and to create the load test.
+
+ |Placeholder |Value |
+    |---------|---------|
+ |`<Name of your webapp>` | The name of the Azure App Service web app. |
+ |`<Name of your load test resource>` | The name of your Azure Load Testing resource. |
+ |`<Name of your load test resource group>` | The name of the resource group that contains the Azure Load Testing resource. |
+
+ ```yaml
+ env:
+ AZURE_WEBAPP_NAME: "<Name of your webapp>" # set this to your application's name
+ LOAD_TEST_RESOURCE: "<Name of your load test resource>"
+ LOAD_TEST_RESOURCE_GROUP: "<Name of your load test resource group>"
+ ```
+
+1. Commit your changes to the main branch.
+
+ The commit will trigger the GitHub Actions workflow in your repository. Verify that the workflow is running by going to the **Actions** tab.
+++
+## View load test results
+
+Azure Load Testing enables you to view the results of the load test run directly in the CI/CD workflow output. The CI/CD log contains the following client-side metrics:
+
+- Response time metrics: average, minimum, median, maximum, and 90-95-99 percentiles.
+- Number of requests per second.
+- Total number of requests.
+- Total number of errors.
+- Error rate.
+
+In addition, the [load test results file](./how-to-export-test-results.md) is available as a workflow run artifact, which you can download for further reporting.
+
+# [Azure Pipelines](#tab/pipelines)
+
+1. In your Azure DevOps project, select **Pipelines**, and then select your pipeline definition from the list.
+
+1. Select the pipeline run to view the run summary.
+
+ :::image type="content" source="./media/tutorial-identify-performance-regression-with-cicd/create-pipeline-run-summary.png" alt-text="Screenshot that shows the pipeline run summary.":::
+
+1. Select **Load Test** in the **Jobs** section to view the pipeline log.
+
+ :::image type="content" source="./media/tutorial-identify-performance-regression-with-cicd/create-pipeline-log.png" alt-text="Screenshot that shows the Azure Pipelines run log.":::
+
+ After the load test finishes, you can view the test summary information and the client-side metrics in the pipeline log. The log also shows the URL to go to the Azure Load Testing dashboard for this load test.
+
+1. In the pipeline log view, select **Load Test**, and then select **1 artifact produced** to download the result files for the load test.
+
+ :::image type="content" source="./media/tutorial-identify-performance-regression-with-cicd/create-pipeline-download-results.png" alt-text="Screenshot that shows how to download the load test results.":::
+
+# [GitHub Actions](#tab/github)
+
+1. Select the **Actions** tab in your GitHub repository to view the list of workflow runs.
+
+ :::image type="content" source="./media/tutorial-identify-performance-regression-with-cicd/github-actions-workflow-run-list.png" alt-text="Screenshot that shows the list of GitHub Actions workflow runs.":::
+
+1. Select the workflow run from the list to open the run details and logging information.
+
+ :::image type="content" source="./media/tutorial-identify-performance-regression-with-cicd/github-actions-workflow-completed.png" alt-text="Screenshot that shows the workflow logging information.":::
+
+ After the load test finishes, you can view the test summary information and the client-side metrics in the workflow log. The log also shows the steps to go to the Azure Load Testing dashboard for this load test.
+
+1. On the screen that shows the workflow run's details, select the **loadTestResults** artifact to download the result files for the load test.
+
+ :::image type="content" source="./media/tutorial-identify-performance-regression-with-cicd/github-actions-artifacts.png" alt-text="Screenshot that shows artifacts of the workflow run.":::
+++
+## Define test fail criteria
+
+Azure Load Testing enables you to define load test fail criteria. These criteria determine when a load test should pass or fail. For example, your load test should fail when the average response time is greater than a specific value, or when too many errors occur.
+
+When you run a load test as part of a CI/CD pipeline, the status of the pipeline run will reflect the status of the load test. This approach allows you to quickly identify performance regressions, or degraded application behavior when the application is experiencing high load.
+
+In this section, you'll configure test fail criteria based on the average response time and the error rate.
+
+You can specify load test fail criteria for Azure Load Testing in the test configuration YAML file. Learn more about [configuring load test fail criteria](./how-to-define-test-criteria.md).
+
+1. Edit the *SampleApp.yaml* file in your fork of the sample application GitHub repository.
+
+1. Add the following snippet at the end of the file:
+
+ ```yaml
+ failureCriteria:
+ - avg(response_time_ms) > 100
+ - percentage(error) > 20
+ ```
+
+ You've now specified pass/fail criteria for your load test. The test will fail if at least one of these conditions is met:
+
+ - The aggregate average response time is greater than 100 ms.
+ - The aggregate percentage of errors is greater than 20%.
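+
+    With this addition, the *SampleApp.yaml* file ends with the `failureCriteria` block. A sketch of the resulting file, with top-level settings taken from the sample configuration (your file may differ):
+
+    ```yaml
+    version: v0.1
+    testName: SampleApp
+    testPlan: SampleApp.jmx
+    description: 'SampleApp load test'
+    engineInstances: 1
+    failureCriteria:
+      - avg(response_time_ms) > 100   # fail if the aggregate average response time exceeds 100 ms
+      - percentage(error) > 20        # fail if more than 20% of requests error
+    ```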
+
+1. Commit and push the changes to the main branch of the repository.
+
+ The changes will trigger the CI/CD workflow.
+
+1. After the test finishes, notice that the CI/CD pipeline run has failed.
+
+ In the CI/CD output log, you find that the test failed because one of the fail criteria was met. The load test average response time was higher than the value that you specified in the pass/fail criteria.
+
+ :::image type="content" source="./media/tutorial-identify-performance-regression-with-cicd/test-criteria-failed.png" alt-text="Screenshot that shows pipeline logs after failed test criteria.":::
+
+ The Azure Load Testing service evaluates the criteria during the test run. If any of these conditions fails, Azure Load Testing service returns a nonzero exit code. This code informs the CI/CD workflow that the test has failed.
+
+1. Edit the *SampleApp.yaml* file and change the test's pass/fail criteria to increase the criterion for average response time:
+
+ ```yaml
+ failureCriteria:
+    - avg(response_time_ms) > 5000
+    - percentage(error) > 20
+ ```
+
+1. Commit the changes to trigger the CI/CD workflow again.
+
+ After the test finishes, you notice that the load test and the CI/CD workflow run complete successfully.
+
+## Clean up resources
++
+## Next steps
+
+You've now created a CI/CD workflow that uses Azure Load Testing to automate running load tests. By using load test fail criteria, you can set the status of the CI/CD workflow and quickly identify performance and application behavior degradations.
+
+* Learn more about [Configuring server-side monitoring](./how-to-monitor-server-side-metrics.md).
+* Learn more about [Comparing results across multiple test runs](./how-to-compare-multiple-test-runs.md).
+* Learn more about [Parameterizing a load test](./how-to-parameterize-load-tests.md).
+* Learn more about [Defining test pass/fail criteria](./how-to-define-test-criteria.md).
logic-apps Edit App Settings Host Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/edit-app-settings-host-settings.md
App settings in Azure Logic Apps work similarly to app settings in Azure Functions.
| `Workflows.<workflowName>.RuntimeConfiguration.RetentionInDays` | None | Sets the operation options for <*workflowName*>. |
| `Workflows.Connection.AuthenticationAudience` | None | Sets the audience for authenticating an Azure-hosted connection. |
| `Workflows.WebhookRedirectHostUri` | None | Sets the host name to use for webhook callback URLs. |
+| `Workflows.CustomHostName` | None | Sets the host name to use for workflow and input-output URLs, for example, "logic.contoso.com". For information to configure a custom DNS name, see [Map an existing custom DNS name to Azure App Service](../app-service/app-service-web-tutorial-custom-domain.md) and [Secure a custom DNS name with a TLS/SSL binding in Azure App Service](../app-service/configure-ssl-bindings.md). |
| `WEBSITE_LOAD_ROOT_CERTIFICATES` | None | Sets the thumbprints for the root certificates to be trusted. |
| `ServiceProviders.Sql.QueryTimeout` | `00:02:00` <br>(2 min) | Sets the request timeout value for SQL service provider operations. |
||||
logic-apps Logic Apps Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-azure-functions.md
After you find the object ID for your logic app's managed identity and tenant ID
| Property | Required | Value | Description |
|-|-|-|-|
- | **Client secret** | <*client-secret*> | Recommended | The secret value that the app uses to prove its identity when requesting a token. The client secret is created and stored in your app's configuration as a slot-sticky [application setting](../app-service/configure-common.md#configure-app-settings) named **MICROSOFT_PROVIDER_AUTHENTICATION_SECRET**. To manage the secret in Azure Key Vault instead, you can update this setting later to use [Key Vault references](../app-service/app-service-key-vault-references.md). <br><br>- If you provide a client secret value, sign-in operations use the hybrid flow, returning both access and refresh tokens. <br><br>- If you don't provide a client secret, sign-in operations use the OAuth 2.0 implicit grant flow, returning only an ID token. <br><br>These tokens are sent by the provider and stored in the EasyAuth token store. |
+ | **Client secret** | Optional, but recommended | <*client-secret*> | The secret value that the app uses to prove its identity when requesting a token. The client secret is created and stored in your app's configuration as a slot-sticky [application setting](../app-service/configure-common.md#configure-app-settings) named **MICROSOFT_PROVIDER_AUTHENTICATION_SECRET**. To manage the secret in Azure Key Vault instead, you can update this setting later to use [Key Vault references](../app-service/app-service-key-vault-references.md). <br><br>- If you provide a client secret value, sign-in operations use the hybrid flow, returning both access and refresh tokens. <br><br>- If you don't provide a client secret, sign-in operations use the OAuth 2.0 implicit grant flow, returning only an ID token. <br><br>These tokens are sent by the provider and stored in the EasyAuth token store. |
| **Issuer URL** | No | **<*authentication-endpoint-URL*>/<*Azure-AD-tenant-ID*>/v2.0** | This URL redirects users to the correct Azure AD tenant and downloads the appropriate metadata to determine the appropriate token signing keys and token issuer claim value. For apps that use Azure AD v1, omit **/v2.0** from the URL. <br><br>For this scenario, use the following URL: **`https://sts.windows.net/`<*Azure-AD-tenant-ID*>** |
| **Allowed token audiences** | No | <*application-ID-URI*> | The application ID URI (resource ID) for the function app. For a cloud or server app where you want to allow authentication tokens from a web app, add the application ID URI for the web app. The configured client ID is always implicitly considered as an allowed audience. <br><br>For this scenario, the value is **`https://management.azure.com`**. Later, you can use the same URI in the **Audience** property when you [set up your function action in your workflow to use the managed identity](create-managed-service-identity.md#authenticate-access-with-identity). <p><p>**Important**: The application ID URI (resource ID) must exactly match the value that Azure AD expects, including any required trailing slashes. |
|||||
machine-learning Concept Enterprise Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-enterprise-security.md
We don't recommend that admins revoke the access of the managed identity to the
> > If your workspace has attached AKS clusters, _and they were created before May 14th, 2021_, __do not delete this Azure AD account__. In this scenario, you must first delete and recreate the AKS cluster before you can delete the Azure AD account.
-You can provision the workspace to use user-assigned managed identity, and grant the managed identity additional roles, for example to access your own Azure Container Registry for base Docker images. For more information, see [Use managed identities for access control](how-to-identity-based-service-authentication.md).
-
-You can also configure managed identities for use with Azure Machine Learning compute cluster. This managed identity is independent of workspace managed identity. With a compute cluster, the managed identity is used to access resources such as secured datastores that the user running the training job may not have access to. For more information, see [Identity-based data access to storage services on Azure](how-to-identity-based-data-access.md).
+You can provision the workspace to use user-assigned managed identity, and grant the managed identity additional roles, for example to access your own Azure Container Registry for base Docker images. You can also configure managed identities for use with Azure Machine Learning compute cluster. This managed identity is independent of workspace managed identity. With a compute cluster, the managed identity is used to access resources such as secured datastores that the user running the training job may not have access to. For more information, see [Use managed identities for access control](how-to-identity-based-service-authentication.md).
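+
+As a minimal illustration, a CLI (v2) compute cluster definition with a user-assigned managed identity might look like the following sketch. The field names follow the v2 YAML schema conventions and the resource ID is a placeholder; consult the linked article for the authoritative syntax:
+
+```yaml
+$schema: https://azuremlschemas.azureedge.net/latest/amlCompute.schema.json
+name: cpu-cluster
+type: amlcompute
+size: Standard_DS3_v2
+min_instances: 0
+max_instances: 4
+identity:
+  type: user_assigned
+  user_assigned_identities:
+    # placeholder resource ID of the user-assigned managed identity
+    - resource_id: /subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identity-name>
+```
+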
> [!TIP]
> There are some exceptions to the use of Azure AD and Azure RBAC within Azure Machine Learning:
machine-learning Concept Model Management And Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-model-management-and-deployment.md
Learn more by reading and exploring the following resources:
+ [Learning path: End-to-end MLOps with Azure Machine Learning](/training/paths/build-first-machine-operations-workflow/)
+ [How to deploy a model to an online endpoint](how-to-deploy-managed-online-endpoints.md) with Machine Learning
+ [Tutorial: Train and deploy a model](tutorial-train-deploy-notebook.md)
-+ [End-to-end MLOps examples repo](https://github.com/microsoft/MLOps)
+ [CI/CD of machine learning models with Azure Pipelines](/azure/devops/pipelines/targets/azure-machine-learning)
+ [Machine learning at scale](/azure/architecture/data-guide/big-data/machine-learning-at-scale)
+ [Azure AI reference architectures and best practices repo](https://github.com/microsoft/AI)
machine-learning Concept Train Machine Learning Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-train-machine-learning-model.md
The designer lets you train models using a drag-and-drop interface in your web browser.
The machine learning CLI is an extension for the Azure CLI. It provides cross-platform CLI commands for working with Azure Machine Learning. Typically, you use the CLI to automate tasks, such as training a machine learning model.

* [Use the CLI extension for Azure Machine Learning](how-to-configure-cli.md)
-* [MLOps on Azure](https://github.com/microsoft/MLOps)
+* [MLOps on Azure](https://github.com/Azure/mlops-v2)
* [Train models](how-to-train-model.md)

## VS Code
machine-learning How To Administrate Data Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-administrate-data-authentication.md
The following table lists what identities should be used for specific scenarios:
Data access is complex and it's important to recognize that there are many pieces to it. For example, accessing data from Azure Machine Learning studio is different than using the SDK. When using the SDK on your local development environment, you're directly accessing data in the cloud. When using studio, you aren't always directly accessing the data store from your client. Studio relies on the workspace to access data on your behalf. > [!TIP]
-> If you need to access data from outside Azure Machine Learning, such as using Azure Storage Explorer, *user* identity is probably what is used. Consult the documentation for the tool or service you are using for specific information. For more information on how Azure Machine Learning works with data, see [Identity-based data access to storage services on Azure](how-to-identity-based-data-access.md).
+> If you need to access data from outside Azure Machine Learning, such as using Azure Storage Explorer, *user* identity is probably what is used. Consult the documentation for the tool or service you are using for specific information. For more information on how Azure Machine Learning works with data, see [Setup authentication between AzureML and other services](how-to-identity-based-service-authentication.md).
## Azure Storage Account
machine-learning How To Create Attach Compute Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-attach-compute-cluster.md
Use any of these ways to specify a low-priority VM:
# [Python SDK](#tab/python)

[!notebook-python[](~/azureml-examples-v2samplesreorg/sdk/python/resources/compute/compute.ipynb?name=cluster_low_pri)]
machine-learning How To Create Manage Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-manage-compute-instance.md
The data scientist can start, stop, and restart the compute instance. They can u
Define multiple schedules for auto-shutdown and auto-start. For instance, create a schedule to start at 9 AM and stop at 6 PM from Monday-Thursday, and a second schedule to start at 9 AM and stop at 4 PM for Friday. You can create a total of four schedules per compute instance.
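
If you manage the compute instance through the CLI (v2) YAML, the Monday-Thursday example above might be expressed with cron triggers along these lines. This is a sketch; the field names follow the v2 compute instance schedule schema, so verify them against your CLI version:

```yaml
schedules:
  compute_start_stop:
    - action: start
      trigger:
        type: cron
        time_zone: Pacific Standard Time
        expression: 0 9 * * 1-4    # start at 9 AM, Monday-Thursday
    - action: stop
      trigger:
        type: cron
        time_zone: Pacific Standard Time
        expression: 0 18 * * 1-4   # stop at 6 PM, Monday-Thursday
```
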
-Schedules can also be defined for [create on behalf of](#create-on-behalf-of-preview) compute instances. You can create a schedule that creates the compute instance in a stopped state. Stopped compute instances are particularly useful when you create a compute instance on behalf of another user.
+Schedules can also be defined for [create on behalf of](#create-on-behalf-of-preview) compute instances. You can create a schedule that creates the compute instance in a stopped state. Stopped compute instances are useful when you create a compute instance on behalf of another user.
### Create a schedule in studio
Following is a sample policy to default a shutdown schedule at 10 PM PST.
## Assign managed identity (preview)
-You can assign a system- or user-assigned [managed identity](https://learn.microsoft.com/azure/active-directory/managed-identities-azure-resources/overview) to a compute instance, to autheticate against other Azure resources such as storage. Using managed identities for authentication helps improve workspace security and management. For example you can allow users to access training data only when logged in to compute instance, or use a common user-assigned managed identity to permit access to a specific storage account.
+You can assign a system- or user-assigned [managed identity](/azure/active-directory/managed-identities-azure-resources/overview) to a compute instance, to authenticate against other Azure resources such as storage. Using managed identities for authentication helps improve workspace security and management. For example, you can allow users to access training data only when logged in to a compute instance. Or use a common user-assigned managed identity to permit access to a specific storage account.
You can create a compute instance with a managed identity from Azure ML Studio:
identity:
- resource_id: identity_resource_id ```
-Once the managed identity is created, enable [identity-based data access enabled](how-to-identity-based-data-access.md) to your storage accounts for that identity. Then, when you worki on the compute instance, the managed identity is used automatically to authenticate against data stores.
+Once the managed identity is created, enable [identity-based data access](how-to-datastore.md) to your storage accounts for that identity. Then, when you work on the compute instance, the managed identity is used automatically to authenticate against data stores.
-You can also use the managed identity manually to authenticate against other Azure resources. For example, to use it to get ARM access token, use following.
+You can also use the managed identity manually to authenticate against other Azure resources. The following example shows how to use it to get an Azure Resource Manager access token:
```python
import requests
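# The rest of this snippet is a minimal sketch, not the article's original code.
# It assumes the compute instance can reach the Azure Instance Metadata Service
# (IMDS); the endpoint, api-version, and resource URI below are assumptions.
response = requests.get(
    "http://169.254.169.254/metadata/identity/oauth2/token",
    params={
        "api-version": "2018-02-01",
        "resource": "https://management.azure.com/",
        # For a user-assigned identity, a client_id parameter would also be needed.
    },
    headers={"Metadata": "true"},
)
arm_token = response.json()["access_token"]
```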
machine-learning How To Identity Based Data Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-identity-based-data-access.md
- Title: Identity-based data access to storage services on Azure-
-description: Learn how to use identity-based data access to connect to storage services on Azure with Azure Machine Learning datastores and the Machine Learning Python SDK.
------ Previously updated : 01/25/2022-
-#Customer intent: As an experienced Python developer, I need to make my data in Azure Storage available to my compute for training my machine learning models.
--
-# Connect to storage by using identity-based data access
-
-In this article, you learn how to connect to storage services on Azure by using identity-based data access and Azure Machine Learning datastores via the [Azure Machine Learning SDK for Python](/python/api/overview/azure/ml/intro).
-
-When you create a datastore that uses **identity-based data access**, your Azure account ([Azure Active Directory token](../active-directory/fundamentals/active-directory-whatis.md)) is used to confirm you have permission to access the storage service. In the **identity-based data access** scenario, no authentication credentials are saved. Only the storage account information is stored in the datastore.
-
-In contrast, datastores that use **credential-based authentication** cache connection information, like your storage account key or SAS token, in the [key vault](https://azure.microsoft.com/services/key-vault/) that's associated with the workspace. This approach has the limitation that other workspace users with sufficient permissions can retrieve those credentials, which may be a security concern for some organizations.
-
-To create datastores with **identity-based** data access via the Azure Machine Learning studio UI, see [Connect to data with the Azure Machine Learning studio](v1/how-to-connect-data-ui.md#create-datastores).
-
-To create datastores that use **credential-based** authentication, like access keys or service principals, see [Connect to storage services on Azure](how-to-access-data.md).
-
-## Identity-based data access in Azure Machine Learning
-
-There are two scenarios in which you can apply identity-based data access in Azure Machine Learning. These scenarios are a good fit for identity-based access when you're working with confidential data and need more granular data access management:
--- Accessing storage services-- Training machine learning models-
-Identity-based access allows you to use [role-based access controls (RBAC)](https://learn.microsoft.com/azure/storage/blobs/assign-azure-role-data-access) to restrict which identities, such as users or compute resources, have access to the data.
-
-### Accessing storage services
-
-You can connect to storage services via identity-based data access with Azure Machine Learning datastores or [Azure Machine Learning datasets](./v1/how-to-create-register-datasets.md).
-
-When you use identity-based data access, Azure Machine Learning prompts you for your Azure Active Directory token for data access authentication instead of keeping your credentials in the datastore. That approach allows for data access management at the storage level and keeps credentials confidential.
-
-The same behavior applies when you:
-
-* [Create a dataset directly from storage URLs](#use-data-in-storage).
-* Work with data interactively via a Jupyter Notebook on your local computer or [compute instance](concept-compute-instance.md).
-
-> [!NOTE]
-> Credentials stored via credential-based authentication include subscription IDs, shared access signature (SAS) tokens, and storage access key and service principal information, like client IDs and tenant IDs.
-
-### Working with private data
-
-Certain machine learning scenarios involve working with private data. In such cases, data scientists may not have direct access to data as Azure AD users. In this scenario, a [managed identity](how-to-identity-based-service-authentication.md) of compute can be used for data access authentication, so that data can only be accessed from a compute instance or a machine learning compute cluster executing a training job.
-
-In this approach, the admin grants the compute instance or compute cluster managed identity Storage Blob Data Reader permissions on the storage. The individual data scientists don't need to be granted access. For more information, see [Set up managed identity on a compute cluster](how-to-create-attach-compute-cluster.md#set-up-managed-identity) and [Assign managed identity to a compute instance (preview)](how-to-create-manage-compute-instance.md#assign-managed-identity-preview).
-
-## Prerequisites
--- An Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/).--- An Azure storage account with a supported storage type. These storage types are supported:
- - [Azure Blob Storage](../storage/blobs/storage-blobs-overview.md)
- - [Azure Data Lake Storage Gen1](../data-lake-store/index.yml)
- - [Azure Data Lake Storage Gen2](../storage/blobs/data-lake-storage-introduction.md)
--- The [Azure Machine Learning SDK for Python](/python/api/overview/azure/ml/install).--- An Azure Machine Learning workspace.
-
- Either [create an Azure Machine Learning workspace](how-to-manage-workspace.md) or use an [existing one via the Python SDK](how-to-manage-workspace.md#connect-to-a-workspace).
-
-## Create and register datastores
-
-When you register a storage service on Azure as a datastore, you automatically create and register that datastore to a specific workspace. See [Storage access permissions](#storage-access-permissions) for guidance on required permission types. You can also manually create the storage you want to connect to without any special permissions. You just need the name.
-
-See [Work with virtual networks](#work-with-virtual-networks) for details on how to connect to data storage behind virtual networks.
-
-In the following code, notice the absence of authentication parameters like `sas_token`, `account_key`, `subscription_id`, and the service principal `client_id`. This omission indicates that Azure Machine Learning will use identity-based data access for authentication. Creation of datastores typically happens interactively in a notebook or via the studio. So your Azure Active Directory token is used for data access authentication.
-
-> [!NOTE]
-> Datastore names should consist only of lowercase letters, numbers, and underscores.
-
-### Azure blob container
-
-To register an Azure blob container as a datastore, use [`register_azure_blob_container()`](/python/api/azureml-core/azureml.core.datastore%28class%29#register-azure-blob-container-workspace--datastore-name--container-name--account-name--sas-token-none--account-key-none--protocol-none--endpoint-none--overwrite-false--create-if-not-exists-false--skip-validation-false--blob-cache-timeout-none--grant-workspace-access-false--subscription-id-none--resource-group-none-).
-
-The following code creates the `credentialless_blob` datastore, registers it to the `ws` workspace, and assigns it to the `blob_datastore` variable. This datastore accesses the `my_container_name` blob container on the `my-account-name` storage account.
-
-```Python
-from azureml.core import Datastore
-
-# Create blob datastore without credentials. `ws` is an existing Workspace object.
-blob_datastore = Datastore.register_azure_blob_container(workspace=ws,
- datastore_name='credentialless_blob',
- container_name='my_container_name',
- account_name='my_account_name')
-```
-
-### Azure Data Lake Storage Gen1
-
-Use [register_azure_data_lake()](/python/api/azureml-core/azureml.core.datastore.datastore#register-azure-data-lake-workspace--datastore-name--store-name--tenant-id-none--client-id-none--client-secret-none--resource-url-none--authority-url-none--subscription-id-none--resource-group-none--overwrite-false--grant-workspace-access-false-) to register a datastore that connects to Azure Data Lake Storage Gen1.
-
-The following code creates the `credentialless_adls1` datastore, registers it to the `workspace` workspace, and assigns it to the `adls_dstore` variable. This datastore accesses the `adls_storage` Azure Data Lake Storage account.
-
-```Python
-# Create Azure Data Lake Storage Gen1 datastore without credentials.
-adls_dstore = Datastore.register_azure_data_lake(workspace = workspace,
- datastore_name='credentialless_adls1',
- store_name='adls_storage')
-
-```
-
-### Azure Data Lake Storage Gen2
-
-Use [register_azure_data_lake_gen2()](/python/api/azureml-core/azureml.core.datastore.datastore#register-azure-data-lake-gen2-workspace--datastore-name--filesystem--account-name--tenant-id--client-id--client-secret--resource-url-none--authority-url-none--protocol-none--endpoint-none--overwrite-false-) to register a datastore that connects to Azure Data Lake Storage Gen2.
-
-The following code creates the `credentialless_adls2` datastore, registers it to the `ws` workspace, and assigns it to the `adls2_dstore` variable. This datastore accesses the file system `tabular` in the `myadls2` storage account.
-
-```python
-# Create Azure Data Lake Storage Gen2 datastore without credentials.
-adls2_dstore = Datastore.register_azure_data_lake_gen2(workspace=ws,
- datastore_name='credentialless_adls2',
- filesystem='tabular',
- account_name='myadls2')
-```
--
-## Storage access permissions
-
-To help ensure that you securely connect to your storage service on Azure, Azure Machine Learning requires that you have permission to access the corresponding data storage.
-> [!WARNING]
-> Cross tenant access to storage accounts is not supported. If cross tenant access is needed for your scenario, please reach out to the AzureML Data Support team alias at amldatasupport@microsoft.com for assistance with a custom code solution.
-
-Identity-based data access supports connections to **only** the following storage services.
-
-* Azure Blob Storage
-* Azure Data Lake Storage Gen1
-* Azure Data Lake Storage Gen2
-
-To access these storage services, you must have at least [Storage Blob Data Reader](../role-based-access-control/built-in-roles.md#storage-blob-data-reader) access to the storage account. Only storage account owners can [change your access level via the Azure portal](../storage/blobs/assign-azure-role-data-access.md).
-
-If you prefer not to use your user identity (Azure Active Directory), you can also grant a workspace managed-system identity (MSI) permission to create the datastore. To do so, you must have Owner permissions to the storage account and add the `grant_workspace_access=True` parameter to your data register method.
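As a rough illustration only, a registration call with workspace access granted might look like the following sketch; it assumes the v1 `azureml-core` SDK used elsewhere in this article, and the datastore, container, and account names are placeholders:

```Python
from azureml.core import Workspace, Datastore

ws = Workspace.from_config()  # assumes a workspace config.json is present

# grant_workspace_access=True grants the workspace managed identity access to
# the storage account; this requires Owner permissions on the storage account.
blob_datastore = Datastore.register_azure_blob_container(
    workspace=ws,
    datastore_name='msi_blob',           # placeholder
    container_name='my_container_name',  # placeholder
    account_name='my_account_name',      # placeholder
    grant_workspace_access=True)
```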
-
-If you're training a model on a remote compute target and want to access the data for training, the compute identity must be granted at least the Storage Blob Data Reader role from the storage service. Learn how to [set up managed identity on a compute cluster](how-to-create-attach-compute-cluster.md#set-up-managed-identity).
-
-## Work with virtual networks
-
-By default, Azure Machine Learning can't communicate with a storage account that's behind a firewall or in a virtual network.
-
-You can configure storage accounts to allow access only from within specific virtual networks. This configuration requires extra steps to ensure data isn't leaked outside of the network. This behavior is the same for credential-based data access. For more information, see [How to configure virtual network scenarios](how-to-access-data.md#virtual-network).
-
-If your storage account has virtual network settings, those settings dictate which identity type and permissions are needed for access. For example, for data preview and data profile, the virtual network settings determine which type of identity is used to authenticate data access.
-
-* In scenarios where only certain IPs and subnets are allowed to access the storage, Azure Machine Learning uses the workspace MSI to accomplish data previews and profiles.
-
-* If your storage is ADLS Gen 2 or Blob and has virtual network settings, you can use either user identity or workspace MSI depending on the datastore settings defined during creation.
-
-* If the virtual network setting is “Allow Azure services on the trusted services list to access this storage account”, then Workspace MSI is used.
-
-## Use data in storage
-
-We recommend that you use [Azure Machine Learning datasets](./v1/how-to-create-register-datasets.md) when you interact with your data in storage with Azure Machine Learning.
-
-Datasets package your data into a lazily evaluated consumable object for machine learning tasks like training. Also, with datasets you can [download or mount](v1/how-to-train-with-datasets.md#mount-vs-download) files of any format from Azure storage services like Azure Blob Storage and Azure Data Lake Storage to a compute target.
-
-To create a dataset, you can reference paths from datastores that also use identity-based data access.
-
-* If your underlying storage account type is Blob or ADLS Gen 2, your user identity needs the Blob Reader role.
-* If your underlying storage is ADLS Gen 1, permissions can be set via the storage's Access Control List (ACL).
-
-In the following example, `blob_datastore` already exists and uses identity-based data access.
-
-```python
-from azureml.core import Dataset
-
-blob_dataset = Dataset.Tabular.from_delimited_files(path=(blob_datastore, 'test.csv'))
-```
-
-Another option is to skip datastore creation and create datasets directly from storage URLs. This functionality currently supports only Azure blobs and Azure Data Lake Storage Gen1 and Gen2. For creation based on storage URL, only the user identity is needed to authenticate.
-
-```python
-blob_dset = Dataset.File.from_files('https://myblob.blob.core.windows.net/may/keras-mnist-fashion/')
-```
-
-When you submit a training job that consumes a dataset created with identity-based data access, the managed identity of the training compute is used for data access authentication. Your Azure Active Directory token isn't used. For this scenario, ensure that the managed identity of the compute is granted at least the Storage Blob Data Reader role from the storage service. For more information, see [Set up managed identity on compute clusters](how-to-create-attach-compute-cluster.md#set-up-managed-identity).
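For illustration, a submission in the v1 SDK might look like the following sketch; the script name, source directory, compute target, and experiment name are hypothetical, and `blob_dataset` is the identity-based dataset created earlier:

```python
from azureml.core import Experiment, ScriptRunConfig, Workspace

ws = Workspace.from_config()

# The compute target's managed identity (not your user token) authenticates
# to storage when the job consumes the dataset.
src = ScriptRunConfig(
    source_directory='src',   # hypothetical folder containing train.py
    script='train.py',        # hypothetical training script
    arguments=['--data', blob_dataset.as_named_input('training_data')],
    compute_target='cpu-cluster')  # cluster with a managed identity assigned

run = Experiment(ws, 'identity-based-training').submit(src)
```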
-
-## User identity based data access for training jobs on compute clusters (preview)
--
-When training on [Azure Machine Learning compute clusters](how-to-create-attach-compute-cluster.md#what-is-a-compute-cluster), you can authenticate to storage with your user Azure Active Directory token.
-
-This authentication mode allows you to:
-* Set up fine-grained permissions, where different workspace users can have access to different storage accounts or folders within storage accounts.
-* Let data scientists re-use existing permissions on storage systems.
-* Audit storage access because the storage logs show which identities were used to access data.
-
-> [!IMPORTANT]
-> This functionality has the following limitations:
-> * The feature is only supported for experiments submitted via the [Azure Machine Learning CLI](how-to-configure-cli.md)
-> * Only CommandJobs, and PipelineJobs with CommandSteps and AutoMLSteps are supported
-> * User identity and compute managed identity cannot be used for authentication within same job.
-
-> [!WARNING]
-> This feature is __public preview__ and is __not secure for production workloads__. Ensure that only trusted users have permissions to access your workspace and storage accounts.
->
-> Preview features are provided without a service-level agreement, and are not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
->
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
-The following steps outline how to set up identity-based data access for training jobs on compute clusters.
-
-1. Grant the user identity access to storage resources. For example, grant Storage Blob Data Reader access to the specific storage account you want to use or grant ACL-based permission to specific folders or files in Azure Data Lake Gen 2 storage.
-
-1. Create an Azure Machine Learning datastore without cached credentials for the storage account. If a datastore has cached credentials, such as a storage account key, those credentials are used instead of the user identity.
-
-1. Submit a training job with property **identity** set to **type: user_identity**, as shown in the following job specification. During the training job, the authentication to storage happens via the identity of the user that submits the job.
-
-> [!NOTE]
-> If the **identity** property is left unspecified and the datastore does not have cached credentials, then the compute managed identity becomes the fallback option.
-
-```yaml
-command: |
- echo "--census-csv: ${{inputs.census_csv}}"
- python hello-census.py --census-csv ${{inputs.census_csv}}
-code: src
-inputs:
- census_csv:
- type: uri_file
- path: azureml://datastores/mydata/paths/census.csv
-environment: azureml:AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest
-compute: azureml:cpu-cluster
-identity:
- type: user_identity
-```
-
-## Next steps
-
-* [Create an Azure Machine Learning dataset](./v1/how-to-create-register-datasets.md)
-* [Train with datasets](v1/how-to-train-with-datasets.md)
-* [Create a datastore with key-based data access](v1/how-to-access-data.md)
machine-learning How To Identity Based Service Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-identity-based-service-authentication.md
Azure Machine Learning is composed of multiple Azure services. There are multipl
* The Azure ML compute cluster uses a __managed identity__ to retrieve connection information for datastores from Azure Key Vault and to pull Docker images from ACR. You can also configure identity-based access to datastores, which will instead use the managed identity of the compute cluster. * Data access can happen along multiple paths depending on the data storage service and your configuration. For example, authentication to the datastore may use an account key, token, security principal, managed identity, or user identity.
- For more information on how data access is authenticated, see the [Data administration](how-to-administrate-data-authentication.md) article.
+ For more information on how data access is authenticated, see the [Data administration](how-to-administrate-data-authentication.md) article. For information on configuring identity based access to data, see [Create datastores](how-to-datastore.md).
* Managed online endpoints can use a managed identity to access Azure resources when performing inference. For more information, see [Access Azure resources from an online endpoint](how-to-access-resources-from-endpoints-managed-identities.md).
In this scenario, Azure Machine Learning service builds the training or inferenc
## Next steps * Learn more about [enterprise security in Azure Machine Learning](concept-enterprise-security.md)
-* Learn about [identity-based data access](how-to-identity-based-data-access.md)
+* Learn about [data administration](how-to-administrate-data-authentication.md)
* Learn about [managed identities on compute cluster](how-to-create-attach-compute-cluster.md).
machine-learning How To Manage Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-workspace.md
Last updated 09/21/2022 -+ # Manage Azure Machine Learning workspaces in the portal or with the Python SDK (v2)
Place the file into the directory structure with your Python scripts or Jupyter
When running machine learning tasks using the SDK, you require a MLClient object that specifies the connection to your workspace. You can create an `MLClient` object from parameters, or with a configuration file. * **With a configuration file:** This code will read the contents of the configuration file to find your workspace. You'll get a prompt to sign in if you aren't already authenticated.
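A minimal sketch of the configuration-file approach, assuming the `azure-ai-ml` v2 SDK (the exact code in the referenced sample may differ):

```python
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

# Reads workspace details from a config.json found in the directory structure;
# you'll be prompted to sign in if you aren't already authenticated.
ml_client = MLClient.from_config(credential=DefaultAzureCredential())
```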
machine-learning How To Prepare Datasets For Automl Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-prepare-datasets-for-automl-images.md
my_training_data_input = Input(
```
+### Using pre-labeled training data from local machine
+If you have previously labeled data that you would like to use to train your model, you will first need to upload the images to the default Azure Blob Storage of your Azure ML Workspace and register it as a [data asset](how-to-create-data-assets.md).
-## Using pre-labeled training data
-If you have previously labeled data that you would like to use to train your model, you will first need to upload the images to the default Azure Blob Storage of your Azure ML Workspace and register it as a data asset.
+The following script uploads the image data on your local machine at path "./data/odFridgeObjects" to a datastore in Azure Blob Storage. It then creates a new data asset with the name "fridge-items-images-object-detection" in your Azure ML Workspace.
+
+If a data asset with the name "fridge-items-images-object-detection" already exists in your Azure ML Workspace, it will update the version number of the data asset and point it to the new location where the image data was uploaded.
# [Azure CLI](#tab/cli) [!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
az ml data create -f [PATH_TO_YML_FILE] --workspace-name [YOUR_AZURE_WORKSPACE]
[!Notebook-python[] (~/azureml-examples-v2samplesreorg/sdk/python/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=upload-data)]
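The referenced notebook cell is roughly equivalent to the following sketch; it assumes the `azure-ai-ml` v2 SDK and an existing `MLClient` named `ml_client`, and the actual sample may differ in detail:

```python
from azure.ai.ml.entities import Data
from azure.ai.ml.constants import AssetTypes

my_data = Data(
    path="./data/odFridgeObjects",  # local folder; uploaded to the workspace's default blob datastore
    type=AssetTypes.URI_FOLDER,
    description="Fridge-items images Object detection",
    name="fridge-items-images-object-detection",
)
ml_client.data.create_or_update(my_data)
```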
+If you already have your data present in an existing datastore and want to create a data asset out of it, you can do so by providing the path to the data in the datastore, instead of providing the path of your local machine. Update the code [above](how-to-prepare-datasets-for-automl-images.md#using-pre-labeled-training-data-from-local-machine) with the following snippet.
+
+# [Azure CLI](#tab/cli)
+
+Create a .yml file with the following configuration.
+
+```yml
+$schema: https://azuremlschemas.azureedge.net/latest/data.schema.json
+name: fridge-items-images-object-detection
+description: Fridge-items images Object detection
+path: azureml://subscriptions/<my-subscription-id>/resourcegroups/<my-resource-group>/workspaces/<my-workspace>/datastores/<my-datastore>/paths/<path_to_image_data_folder>
+type: uri_folder
+```
+
+# [Python SDK](#tab/python)
+
+
+```Python
+my_data = Data(
+ path="azureml://subscriptions/<my-subscription-id>/resourcegroups/<my-resource-group>/workspaces/<my-workspace>/datastores/<my-datastore>/paths/<path_to_image_data_folder>",
+ type=AssetTypes.URI_FOLDER,
+ description="Fridge-items images Object detection",
+ name="fridge-items-images-object-detection",
+)
+```
++ Next, you will need to get the label annotations in JSONL format. The schema of labeled data depends on the computer vision task at hand. Refer to [schemas for JSONL files for AutoML computer vision experiments](reference-automl-images-schema.md) to learn more about the required JSONL schema for each task type. If your training data is in a different format (like Pascal VOC or COCO), [helper scripts](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/v1/python-sdk/tutorials/automl-with-azureml/image-object-detection/coco2jsonl.py) to convert the data to JSONL are available in [notebook examples](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/sdk/python/jobs/automl-standalone-jobs).
+### Using pre-labeled training data from Azure Blob storage
+If you have your labeled training data present in a container in Azure Blob storage, then you can access it directly from there by [creating a datastore referring to that container](how-to-datastore.md#create-an-azure-blob-datastore).
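Under the same v2 SDK assumption, creating such a datastore might look like this sketch; the datastore, account, and container names are placeholders, and `ml_client` is an existing `MLClient`:

```python
from azure.ai.ml.entities import AzureBlobDatastore

# With no credentials supplied, the datastore uses identity-based access.
labeled_data_store = AzureBlobDatastore(
    name="labeled_images",            # placeholder
    account_name="mystorageaccount",  # placeholder
    container_name="labeled-data",    # placeholder
)
ml_client.datastores.create_or_update(labeled_data_store)
```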
-### Create MLTable
+## Create MLTable
Once you have your labeled data in JSONL format, you can use it to create `MLTable` as shown below. MLTable packages your data into a consumable object for training.
You can then pass in the `MLTable` as a [data input for your AutoML training job
* [Train computer vision models with automated machine learning](how-to-auto-train-image-models.md). * [Train a small object detection model with automated machine learning](how-to-use-automl-small-object-detect.md).
-* [Tutorial: Train an object detection model (preview) with AutoML and Python](tutorial-auto-train-image-models.md).
+* [Tutorial: Train an object detection model (preview) with AutoML and Python](tutorial-auto-train-image-models.md).
machine-learning How To Troubleshoot Data Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-data-access.md
Errors also include:
- See more information on [data concepts in Azure Machine Learning](concept-data.md) -- [Identity-based data access to storage services on Azure](how-to-identity-based-data-access.md).-
+- [AzureML authentication to other services](how-to-identity-based-service-authentication.md).
+- [Create datastores](how-to-datastore.md)
- [Read and write data in a job](how-to-read-write-data-v2.md)
machine-learning Migrate To V2 Managed Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-managed-online-endpoints.md
For private workspace and VNet scenarios, see [Use network isolation with manage
Redeploy manually with your model files and environment definition. You can find our examples on [azureml-examples](https://github.com/Azure/azureml-examples). Specifically, this is the [SDK example for managed online endpoint](https://github.com/Azure/azureml-examples/tree/main/sdk/python/endpoints/online/managed).
-### With our [upgrade tool](https://aka.ms/moeonboard) (preview)
+### With our [upgrade tool](https://github.com/Azure/azureml-examples/tree/main/cli/endpoints/online/managed/migration) (preview)
This tool will automatically create a new managed online endpoint based on your existing web services. Your original services won't be affected. You can safely route the traffic to the new endpoint and then delete the old one. Use the following steps to run the scripts:
Use the following steps to run the scripts:
1. Use a bash shell to run the scripts. For example, a terminal session on Linux or the Windows Subsystem for Linux (WSL). 2. Install [Python SDK V1](/python/api/overview/azure/ml/install) to run the python script. 3. Install [Azure CLI](/cli/azure/install-azure-cli).
-4. Clone the repository to your local env. For example, `git clone https://github.com/Azure/azureml-examples`.
+4. Clone [the repository](https://github.com/Azure/azureml-examples/tree/main/cli/endpoints/online/managed/migration) to your local env. For example, `git clone https://github.com/Azure/azureml-examples`.
5. Edit the following values in the `migrate-service.sh` file. Replace the values with ones that apply to your configuration. * `<SUBSCRIPTION_ID>` - The subscription ID of your Azure subscription that contains your workspace.
machine-learning Samples Notebooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/samples-notebooks.md
The Data Science Virtual Machine (DSVM) is a customized VM image built specifica
## Next steps
-Explore the [MachineLearningNotebooks](https://github.com/Azure/MachineLearningNotebooks) and [AzureML-Examples](https://github.com/Azure/azureml-examples) repositories to discover what Azure Machine Learning can do.
+Explore the [AzureML-Examples](https://github.com/Azure/azureml-examples) repository to discover what Azure Machine Learning can do.
-For more GitHub sample projects and examples, see these repos:
-+ [Microsoft/MLOps](https://github.com/Microsoft/MLOps)
-+ [Microsoft/MLOpsPython](https://github.com/microsoft/MLOpsPython)
+For more examples of MLOps, see [https://github.com/Azure/mlops-v2](https://github.com/Azure/mlops-v2).
Try these tutorials:
managed-grafana How To Transition Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-transition-domain.md
+
+ Title: How to transition to using the grafana.azure.com domain
+description: Learn how to verify that your Azure Managed Grafana workspace is using the correct domain for its endpoint
++++ Last updated : 09/27/2022
+
+
+# Transition to using the grafana.azure.com domain
+
+If you have an Azure Managed Grafana instance that was created on or before April 17, 2022, your workspace is accessible through two URLs: one ending in azgrafana.io and one ending in grafana.azure.com. Both links point to the same workspace.
+
+URLs ending in azgrafana.io will be deprecated in favor of the URL ending in grafana.azure.com. To avoid losing access to your Grafana workspace, you'll need to verify that you can access your workspace through the grafana.azure.com endpoint, and that any links you may have that point to your workspace are using this endpoint as well.
+
+Azure Managed Grafana workspaces created on or after April 18, 2022 only have a grafana.azure.com URL, so no action is needed to transition those workspaces.
+
+## Verify the transition
+
+Verify that you are set to use the grafana.azure.com domain:
+
+1. In the Azure portal, go to your Azure Managed Grafana resource.
+1. At the top of the **Overview** page, in **Essentials**, look for the endpoint of your Grafana workspace. Verify that the URL ends in grafana.azure.com and that clicking the link takes you to your Grafana endpoint.
+ :::image type="content" source="media/grafana-endpoint/grafana-domain-view-endpoint.png" alt-text="Screenshot of the Azure platform showing the Grafana endpoint URL.":::
+1. If you have any bookmarks or links in your own documentation to your Grafana workspace, make sure that they point to the URL ending in grafana.azure.com listed in the Azure portal.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Service reliability](./high-availability.md)
postgresql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/overview.md
One advantage of running your workload in Azure is global reach. The flexible se
| North Europe | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | Norway East | :heavy_check_mark: | :x: | :x: | | Qatar Central | :heavy_check_mark: | :x: | :x: |
-| South Africa North | :heavy_check_mark: | :heavy_check_mark: | :x: |
+| South Africa North | :heavy_check_mark: | :heavy_check_mark: ** | :x: |
| South Central US | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | South India | :x: $$ | :x: | :heavy_check_mark: | | Southeast Asia | :heavy_check_mark: | :x: $ | :heavy_check_mark: |
purview How To Policies Data Owner Azure Sql Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-policies-data-owner-azure-sql-db.md
Title: Provision access by data owner for Azure SQL DB (preview)
-description: Step-by-step guide on how data owners can configure access for Azure SQL DB through Microsoft Purview access policies.
+ Title: Provision access by data owner for Azure SQL Database (preview)
+description: Step-by-step guide on how data owners can configure access for Azure SQL Database through Microsoft Purview access policies.
Previously updated : 08/12/2022 Last updated : 10/03/2022
-# Provision access by data owner for Azure SQL DB (preview)
+# Provision access by data owner for Azure SQL Database (preview)
[!INCLUDE [feature-in-preview](includes/feature-in-preview.md)]
purview Tutorial Data Owner Policies Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/tutorial-data-owner-policies-storage.md
To register your resource and enable Data Use Management, follow these steps:
1. If you have a firewall enabled on your Storage account, follow these steps as well: 1. Go into your Azure Storage account in [Azure portal](https://portal.azure.com).
- 1. Navigate to **Security + networking > Networking**.NET
+ 1. Navigate to **Security + networking > Networking**.
1. Choose **Selected Networks** under **Allow access from**. 1. In the **Exceptions** section, select **Allow trusted Microsoft services to access this storage account** and select **Save**.
To register your resource and enable Data Use Management, follow these steps:
:::image type="content" source="./media/how-to-policies-data-owner-authoring-generic/select-data-source-type.png" alt-text="Screenshot showing the policy editor, with Data Resources selected, and Data source Type highlighted in the data resources menu.":::
-1. Select the **Continue** button and transverse the hierarchy to select and underlying data-object (for example: folder, file, etc.). Select **Recursive** to apply the policy from that point in the hierarchy down to any child data-objects. Then select the **Add** button. This will take you back to the policy editor.
+1. Select the **Continue** button and traverse the hierarchy to select an underlying data-object (for example: folder, file, etc.). Select **Recursive** to apply the policy from that point in the hierarchy down to any child data-objects. Then select the **Add** button. This will take you back to the policy editor.
:::image type="content" source="./media/how-to-policies-data-owner-authoring-generic/select-asset.png" alt-text="Screenshot showing the Select asset menu, and the Add button highlighted.":::
Check our demo and related tutorials:
> [!div class="nextstepaction"] > [Demo of access policy for Azure Storage](https://learn-video.azurefd.net/vod/player?id=caa25ad3-7927-4dcc-88dd-6b74bcae98a2) > [Concepts for Microsoft Purview data owner policies](./concept-policies-data-owner.md)
-> [Enable Microsoft Purview data owner policies on all data sources in a subscription or a resource group](./how-to-policies-data-owner-resource-group.md)
+> [Enable Microsoft Purview data owner policies on all data sources in a subscription or a resource group](./how-to-policies-data-owner-resource-group.md)
sentinel File Event Normalization Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/file-event-normalization-schema.md
For example: `JohnDoe` (**Actor**) uses `Windows File Explorer` (**Acting proces
| **SrcFileExtension**|Optional | String|The source file extension. <br><br>**Note**: A parser can provide this value if the value is available in the log source, and does not need to be extracted from the full path.| |**SrcFileMimeType** |Optional |Enumerated | The Mime or Media type of the source file. Supported values are listed in the [IANA Media Types](https://www.iana.org/assignments/media-types/media-types.xhtml) repository. | |**SrcFileName** |Optional |String | The name of the source file, without a path or a location, but with an extension if relevant. This field should be similar to the last element in the [SrcFilePath](#srcfilepath) field. <br><br>**Note**: A parser can provide this value if the value is available in the log source and does not need to be extracted from the full path.|
-| <a name="srcfilepath"></a>**SrcFilePath**| Mandatory|String |The full, normalized path of the source file, including the folder or location, the file name, and the extension. <br><br>For more information, see [Path structure](#path-structure).<br><br>Example: `/etc/init.d/networking` |
-|**SrcFilePathType** |Mandatory | Enumerated| The type of [SrcFilePath](#srcfilepath). For more information, see [Path structure](#path-structure).|
+| <a name="srcfilepath"></a>**SrcFilePath**| Recommended |String |The full, normalized path of the source file, including the folder or location, the file name, and the extension. <br><br>For more information, see [Path structure](#path-structure).<br><br>Example: `/etc/init.d/networking` |
+|**SrcFilePathType** | Recommended | Enumerated| The type of [SrcFilePath](#srcfilepath). For more information, see [Path structure](#path-structure).|
|**SrcFileMD5**|Optional |MD5 | The MD5 hash of the source file. <br><br>Example: `75a599802f1fa166cdadb360960b1dd0` | |**SrcFileSHA1**|Optional |SHA1 |The SHA-1 hash of the source file.<br><br>Example:<br>`d55c5a4df19b46db8c54`<br>`c801c4665d3338acdab0` | |**SrcFileSHA256** | Optional|SHA256 |The SHA-256 hash of the source file. <br><br>Example:<br> `e81bb824c4a09a811af17deae22f22dd`<br>`2e1ec8cbb00b22629d2899f7c68da274`|
sentinel Normalization Common Fields https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-common-fields.md
The currently supported list of vendors and products used in the [EventVendor](#
| Corelight | Zeek | | GCP | Cloud DNS | | Infoblox | NIOS |
-| Microsoft | - AAD<br> - Azure Firewall<br> - Azure File Storage<br> - Azure NSG flows<br> - DNS Server<br> - Microsoft 365 Defender for Endpoint<br> - Microsoft Defender for IoT<br> - Security Events<br> - Sharepoint 365<br>- Sysmon<br> - Sysmon for Linux<br> - VMConnection<br> - Windows Firewall<br> - WireData <br>
+| Microsoft | - AAD<br> - Azure Firewall<br> - Azure Blob Storage<br> - Azure File Storage<br> - Azure NSG flows<br> - Azure Queue Storage<br> - Azure Table Storage <br> - DNS Server<br> - Microsoft 365 Defender for Endpoint<br> - Microsoft Defender for IoT<br> - Security Events<br> - Sharepoint 365<br>- Sysmon<br> - Sysmon for Linux<br> - VMConnection<br> - Windows Firewall<br> - WireData <br>
| Okta | - Okta<BR> - Auth0<br> | | Palo Alto | - PanOS<br> - CDL<br> | | PostgreSQL | PostgreSQL |
service-connector Concept Service Connector Internals https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/concept-service-connector-internals.md
Title: Service Connector internals description: Learn about Service Connector internals, the architecture, the connections and how data is transmitted.--++
service-connector How To Troubleshoot Front End Error https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-troubleshoot-front-end-error.md
Title: Service Connector troubleshooting guidance description: This article lists error messages and suggested actions of Service Connector to use for troubleshooting issues.--++ Last updated 5/25/2022
service-connector Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/overview.md
Title: What is Service Connector? description: Understand typical use case scenarios for Service Connector, and learn the key benefits of Service Connector.--++
service-connector Tutorial Connect Web App App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/tutorial-connect-web-app-app-configuration.md
In this tutorial, use the Azure CLI to complete the following tasks:
## Prerequisites -- An Azure account with an active subscription. Your access role within the subscription must be "Contributor" or "Owner". [Create an account for free](https://azure.microsoft.com/free.
+- An Azure account with an active subscription. Your access role within the subscription must be "Contributor" or "Owner". [Create an account for free](https://azure.microsoft.com/free).
- The Azure CLI. You can use it in [Azure Cloud Shell](https://shell.azure.com/) or [install it locally](/cli/azure/install-azure-cli). - [.NET SDK](https://dotnet.microsoft.com/download) - [Git](/devops/develop/git/install-and-set-up-git)
service-connector Tutorial Csharp Webapp Storage Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/tutorial-csharp-webapp-storage-cli.md
Title: 'Tutorial: Deploy a web application connected to Azure Blob Storage with Service Connector' description: Create a web app connected to Azure Blob Storage with Service Connector.--++ Last updated 05/03/2022
service-connector Tutorial Django Webapp Postgres Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/tutorial-django-webapp-postgres-cli.md
Title: 'Tutorial: Using Service Connector to build a Django app with Postgres on
description: Create a Python web app with a PostgreSQL database and deploy it to Azure. The tutorial uses the Django framework, the app is hosted on Azure App Service on Linux, and the App Service and Database is connected with Service Connector. ms.devlang: python --++ Last updated 05/03/2022
service-connector Tutorial Java Spring Confluent Kafka https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/tutorial-java-spring-confluent-kafka.md
Title: 'Tutorial: Deploy a Spring Boot app connected to Apache Kafka on Confluen
description: Create a Spring Boot app connected to Apache Kafka on Confluent Cloud with Service Connector in Azure Spring Apps. ms.devlang: java --++ Last updated 05/03/2022
service-connector Tutorial Java Spring Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/tutorial-java-spring-mysql.md
Title: 'Tutorial: Deploy a Spring Cloud Application Connected to Azure Database for MySQL with Service Connector' description: Create a Spring Boot application connected to Azure Database for MySQL with Service Connector.--++ Last updated 05/03/2022
site-recovery Azure To Azure How To Enable Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-how-to-enable-replication.md
Use the following procedure to replicate Azure VMs to another Azure region. As a
:::image type="Storage" source="./media/azure-to-azure-how-to-enable-replication/storage.png" alt-text="Screenshot of Storage."::: - **Replica-managed disk**: Site Recovery creates new replica-managed disks in the target region to mirror the source VM's managed disks with the same storage type (Standard or premium) as the source VM's managed disk.
- - **Cache storage**: Site Recovery needs extra storage account called cache storage in the source region. All the changes happening on the source VMs are tracked and sent to cache storage account before replicating them to the target location. This storage account should be Standard.
- >[!Note]
- >Azure Site Recovery supports High churn (Public Preview) where you can choose to use **High Churn** for the VM. You can use a *Premium Block Blob* type of storage account. By default, **Normal Churn** is selected. For more information, see [Azure VM Disaster Recovery - High Churn Support](https://learn.microsoft.com/azure/site-recovery/concepts-azure-to-azure-high-churn-support).
-
- :::image type="Cache storage" source="./media/azure-to-azure-how-to-enable-replication/cache-storage.png" alt-text="Screenshot of customize target settings.":::
+ - **Cache storage**: Site Recovery needs an extra storage account called cache storage in the source region. All the changes happening on the source VMs are tracked and sent to the cache storage account before replicating them to the target location. This storage account should be Standard.
1. **Availability options**: Select the appropriate availability option for your VM in the target region. If an availability set that was created by Site Recovery already exists, it's reused. Select **View/edit availability options** to view or edit the availability options. >[!NOTE]
storage Storage Ref Azcopy Sync https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-ref-azcopy-sync.md
The last modified times are used for comparison. The file is skipped if the last
The sync command differs from the copy command in several ways:
- 1. By default, the recursive flag is true and sync copies all subdirectories. Sync only copies the top-level files inside a directory if the recursive flag is false.
+ 1. By default, the recursive flag is true and sync copies all subdirectories. Sync copies only the top-level files inside a directory if the recursive flag is false.
2. When syncing between virtual directories, add a trailing slash to the path (refer to examples) if there's a blob with the same name as one of the virtual directories. 3. If the 'delete-destination' flag is set to true or prompt, then sync will delete files and blobs at the destination that aren't present at the source.
storage Understanding Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/understanding-billing.md
The following table shows the categorization of each transaction:
| Write transactions | <ul><li>`CreateShare`</li><li>`SetFileServiceProperties`</li><li>`SetShareMetadata`</li><li>`SetShareProperties`</li><li>`SetShareACL`</li></ul> | <ul><li>`CopyFile`</li><li>`Create`</li><li>`CreateDirectory`</li><li>`CreateFile`</li><li>`PutRange`</li><li>`PutRangeFromURL`</li><li>`SetDirectoryMetadata`</li><li>`SetFileMetadata`</li><li>`SetFileProperties`</li><li>`SetInfo`</li><li>`Write`</li><li>`PutFilePermission`</li></ul> | | List transactions | <ul><li>`ListShares`</li></ul> | <ul><li>`ListFileRanges`</li><li>`ListFiles`</li><li>`ListHandles`</li></ul> | | Read transactions | <ul><li>`GetFileServiceProperties`</li><li>`GetShareAcl`</li><li>`GetShareMetadata`</li><li>`GetShareProperties`</li><li>`GetShareStats`</li></ul> | <ul><li>`FilePreflightRequest`</li><li>`GetDirectoryMetadata`</li><li>`GetDirectoryProperties`</li><li>`GetFile`</li><li>`GetFileCopyInformation`</li><li>`GetFileMetadata`</li><li>`GetFileProperties`</li><li>`QueryDirectory`</li><li>`QueryInfo`</li><li>`Read`</li><li>`GetFilePermission`</li></ul> |
-| Other/protocol transactions | | <ul><li>`AbortCopyFile`</li><li>`Cancel`</li><li>`ChangeNotify`</li><li>`Close`</li><li>`Echo`</li><li>`Ioctl`</li><li>`Lock`</li><li>`Logoff`</li><li>`Negotiate`</li><li>`OplockBreak`</li><li>`SessionSetup`</li><li>`TreeConnect`</li><li>`TreeDisconnect`</li><li>`CloseHandles`</li><li>`AcquireFileLease`</li><li>`BreakFileLease`</li><li>`ChangeFileLease`</li><li>`ReleaseFileLease`</li></ul> |
+| Other/protocol transactions | | <ul><li>`AbortCopyFile`</li><li>`Cancel`</li><li>`ChangeNotify`</li><li>`Close`</li><li>`Echo`</li><li>`Ioctl`</li><li>`Lock`</li><li>`Logoff`</li><li>`Negotiate`</li><li>`OplockBreak`</li><li>`SessionSetup`</li><li>`TreeConnect`</li><li>`TreeDisconnect`</li><li>`CloseHandles`</li><li>`AcquireFileLease`</li><li>`BreakFileLease`</li><li>`ChangeFileLease`</li><li>`ReleaseFileLease`</li><li>`BreakShareLease`</li><li>`RenewShareLease`</li><li>`ChangeShareLease`</li></ul> |
| Delete transactions | <ul><li>`DeleteShare`</li></ul> | <ul><li>`ClearRange`</li><li>`DeleteDirectory`</li></li>`DeleteFile`</li></ul> | > [!Note]
update-center Manage Update Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/manage-update-settings.md
To configure update settings on your machines on a single VM, follow these steps
- **Patch orchestration** option provides the following: - **Automatic by operating system** - When the workload running on the VM doesn't have to meet availability targets, operating system updates are automatically downloaded and installed. Machines are rebooted as needed.
- - **Azure-orchestrated (preview)** - Available *Critical* and *Security* patches are downloaded and applied automatically on the Azure VM using [automatic VM guest patching](../virtual-machines/automatic-vm-guest-patching.md). This process kicks off automatically every month when new patches are released. Patch assessment and installation are automatic, and the process includes rebooting the VM as required.
+ - **Azure-orchestrated** - Patch orchestration set to Azure-orchestrated for an Azure VM (not applicable for Arc-enabled servers) has two different implications depending on whether a customer [schedule](../update-center/scheduled-patching.md) is attached to it or not.
+
+ | Patch orchestration type | Description
+ |-|-|
+ |Azure-orchestrated with no schedule attached | Machine is enabled for [automatic VM guest patching](../virtual-machines/automatic-vm-guest-patching.md). It implies that the available Critical and Security patches are downloaded and applied automatically on the Azure VM. This process kicks off automatically every month when new patches are released. Patch assessment and installation are automatic, and the process includes rebooting the VM as required.|
+ |Azure-orchestrated with schedule attached | Patching will happen according to the schedule and [automatic VM guest patching](../virtual-machines/automatic-vm-guest-patching.md) will not take effect on the machine. Patch orchestration set to Azure-orchestrated is a necessary pre-condition for enabling schedules. You cannot enable a machine for a custom schedule unless you set Patch orchestration to Azure-orchestrated. |
+
- **Manual updates** - Configures the Windows Update agent by setting [configure automatic updates](/windows-server/administration/windows-server-update-services/deploy/4-configure-group-policy-settings-for-automatic-updates#configure-automatic-updates). - **Image Default** - Only supported for Linux Virtual Machines, this mode honors the default patching configuration in the image used to create the VM.
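As a loose illustration of setting the patch orchestration mode programmatically, the following sketch uses the `azure-mgmt-compute` Python SDK; the subscription, resource group, and VM names are placeholders, and this code isn't taken from the original article:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient
from azure.mgmt.compute.models import PatchSettings

client = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")
vm = client.virtual_machines.get("<resource-group>", "<vm-name>")

# "AutomaticByPlatform" corresponds to the Azure-orchestrated mode on a Windows VM.
vm.os_profile.windows_configuration.patch_settings = PatchSettings(
    patch_mode="AutomaticByPlatform")
client.virtual_machines.begin_create_or_update(
    "<resource-group>", "<vm-name>", vm).result()
```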
update-center Scheduled Patching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/scheduled-patching.md
Update management center (preview) uses maintenance control schedule instead of
> [!Note] > If you set the patch orchestration mode to Azure orchestrated (Automatic By Platform) but don't attach a maintenance configuration to an Azure machine, it is treated as an [automatic VM guest patching](../virtual-machines/automatic-vm-guest-patching.md)-enabled machine and the Azure platform will automatically install updates as per its own schedule.
-1. The maintenance configuration's subscription and the subscriptions of all VMs assigned to the maintenance configuration must be allowlisted with feature flag **Microsoft.Compute/InGuestScheduledPatchVMPreview**.
- ## Schedule recurring updates on single VM
update-center Updates Maintenance Schedules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/updates-maintenance-schedules.md
You can create a schedule on a daily, weekly or hourly cadence as per your requi
Update management center (preview) uses maintenance control schedule instead of creating its own schedules. Maintenance control enables customers to manage platform updates. For more information, see the [Maintenance control documentation](/azure/virtual-machines/maintenance-control). Start using [scheduled patching](scheduled-patching.md) to create and save recurring deployment schedules.
+> [!NOTE]
+> Patch orchestration set to Azure-orchestrated is a pre-condition for enabling scheduled patching on an Azure VM. For more information, see the [list of prerequisites](../update-center/scheduled-patching.md#prerequisites-for-scheduled-patching).
## Automatic VM Guest patching in Azure
virtual-desktop Fslogix Containers Azure Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/fslogix-containers-azure-files.md
Title: Azure Virtual Desktop FSLogix profile containers files - Azure
-description: This article describes FSLogix profile containers within Azure Virtual Desktop and Azure files.
+description: This article describes FSLogix profile containers within Azure Virtual Desktop and Azure Files.
Last updated 01/04/2021
virtual-desktop Rdp Shortpath https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/rdp-shortpath.md
As RDP Shortpath uses UDP to establish a data flow, if a firewall on your networ
If your users are in a scenario where RDP Shortpath for both managed network and public networks is available to them, then the first algorithm found will be used. Whichever connection gets established first is what the user will use for that session. > [!NOTE]
-> RDP Shortpath doesn't support Symmetric NAT, which is the mapping of a single private source *IP:Port* to a unique public destination *IP:Port*. This is because RDP Shortpath needs to reuse the same external port (or NAT binding) used in the initial connection. Where multiple paths are used, for example a highly available firewall pair, external port reuse cannot be guaranteed. For more information about NAT with Azure virtual networks, see [Source Network Address Translation with virtual networks](../virtual-network/nat-gateway/nat-gateway-resource.md#source-network-address-translation).
+> RDP Shortpath doesn't support Symmetric NAT, which is the mapping of a single private source *IP:Port* to a unique public destination *IP:Port*. This is because RDP Shortpath needs to reuse the same external port (or NAT binding) used in the initial connection. Where multiple paths are used, for example a highly available firewall pair, external port reuse cannot be guaranteed. Azure Firewall and Azure NAT Gateway use Symmetric NAT. For more information about NAT with Azure virtual networks, see [Source Network Address Translation with virtual networks](../virtual-network/nat-gateway/nat-gateway-resource.md#source-network-address-translation).
#### Session host virtual network
virtual-machines Run Command https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/run-command.md
Invoke-AzVMRunCommand -ResourceGroupName '<myResourceGroup>' -Name '<myVMName>'
## Limiting access to Run Command
-Listing the run commands or showing the details of a command requires the `Microsoft.Compute/locations/runCommands/read` permission. The built-in [Reader](../../role-based-access-control/built-in-roles.md#reader) role and higher levels have this permission.
+Listing the run commands or showing the details of a command requires the `Microsoft.Compute/locations/runCommands/read` permission at the subscription level. The built-in [Reader](../../role-based-access-control/built-in-roles.md#reader) role and higher levels have this permission.
Running a command requires the `Microsoft.Compute/virtualMachines/runCommand/action` permission. The [Virtual Machine Contributor](../../role-based-access-control/built-in-roles.md#virtual-machine-contributor) role and higher levels have this permission.
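For example, a minimal sketch that grants the built-in Reader role at subscription scope with Azure PowerShell; the sign-in name and subscription ID are placeholders:

```azurepowershell
# Grant the built-in Reader role at subscription scope so the user can
# list run commands and view command details. Values are placeholders.
New-AzRoleAssignment -SignInName "user@contoso.com" `
    -RoleDefinitionName "Reader" `
    -Scope "/subscriptions/<subscriptionId>"
```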
virtual-wan User Groups About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/user-groups-about.md
+
+ Title: 'About user groups and IP address pools for point-to-site User VPN'
+
+description: Learn about using user groups to assign IP addresses from specific address pools based on identity or authentication credentials.
+++ Last updated : 09/20/2022+++
+# About user groups and IP address pools for P2S User VPNs (preview)
+
+You can configure P2S User VPNs to assign users IP addresses from specific address pools based on their identity or authentication credentials by creating **User Groups**. This article describes the different configurations and parameters the Virtual WAN P2S VPN gateway uses to determine user groups and assign IP addresses.
+
+## Use cases
+
+Contoso corporation is composed of multiple functional departments, such as Finance, Human Resources, and Engineering. Contoso uses Virtual WAN to allow remote workers (users) to connect to Azure Virtual WAN and access resources hosted on-premises or in a virtual network connected to the Virtual WAN hub.
+
+However, Contoso has internal security policies: users from the Finance department can access only certain databases and virtual machines, and users from Human Resources can access other sensitive applications.
+
+Contoso can configure different user groups for each of its functional departments. This ensures that users from each department are assigned IP addresses from a pre-defined, department-level address pool.
+
+Contoso's network administrator can then configure firewall rules, network security groups (NSGs), or access control lists (ACLs) to allow or deny certain users access to resources based on their IP addresses.
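For example, a minimal sketch of such an NSG rule in Azure PowerShell; the names, address ranges, and port below are all illustrative assumptions:

```azurepowershell
# Allow inbound SQL traffic only from a (hypothetical) Finance address pool.
# All names, address ranges, and the destination port are assumptions.
$financeRule = New-AzNetworkSecurityRuleConfig -Name "AllowFinanceToSql" `
    -Access Allow -Protocol Tcp -Direction Inbound -Priority 100 `
    -SourceAddressPrefix "10.1.0.0/24" -SourcePortRange "*" `
    -DestinationAddressPrefix "10.2.0.0/24" -DestinationPortRange "1433"
New-AzNetworkSecurityGroup -ResourceGroupName "myResourceGroup" `
    -Location "eastus" -Name "finance-nsg" -SecurityRules $financeRule
```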
+
+## Server configuration concepts
+
+The following sections explain the common terms and values used for server configuration.
+
+### User Groups (policy groups)
+
+A **User Group** or policy group is a logical representation of a group of users that should be assigned IP addresses from the same address pool.
+
+### Group members (policy members)
+
+User groups consist of members. Members don't correspond to individual users but rather define the criteria used to determine which group a connecting user is a part of. A single group can have multiple members. If a connecting user matches the criteria specified for one of the group's members, the user is considered to be part of that group and can be assigned an appropriate IP address.
+The types of member parameters that are available depend on the authentication methods specified in the VPN server configuration. For a full list of available criteria, see the [Available group settings](#available-group-settings) section of this article.
+
+### Default user/policy group
+
+For every P2S VPN server configuration, one group must be selected as default. Users who present credentials that don't match any group settings are considered to be part of the default group. Once a group is created, the default setting of that group can't be changed.
+
+### Group priority
+
+Each group is also assigned a numerical priority. Groups with lower numerical priority are evaluated first. If a user presents credentials that match the settings of multiple groups, they're considered part of the matching group with the lowest numerical priority. For example, if user A presents a credential that corresponds to the IT Group (priority 3) and the Finance Group (priority 4), user A is considered part of the IT Group for purposes of assigning IP addresses.
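The selection logic can be sketched as follows; this is illustrative only, reusing the group names and priorities from the example above:

```azurepowershell
# Of the groups whose member criteria the connecting user matched,
# the group with the lowest numerical priority wins. Illustrative only.
$matchedGroups = @(
    [pscustomobject]@{ Name = "IT";      Priority = 3 },
    [pscustomobject]@{ Name = "Finance"; Priority = 4 }
)
$assignedGroup = $matchedGroups | Sort-Object Priority | Select-Object -First 1
$assignedGroup.Name   # -> IT
```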
+
+### Available group settings
+
+The following sections describe the parameters that can be used to define which group a connecting user is a member of. The available parameters vary based on the selected authentication methods.
+The following table summarizes the available setting types and acceptable values. For more detailed information on each type of member value, see the section corresponding to your authentication type.
+
+|Authentication type|Member type |Member values|Example member value|
+|||||
+|Azure Active Directory|AADGroupID|Azure Active Directory Group Object ID|0cf484f2-238e-440b-8c73-7bf232b248dc|
+|RADIUS|AzureRADIUSGroupID|Vendor-specific Attribute Value (hexadecimal) (must begin with 6ad1bd)|6ad1bd23|
+|Certificate|AzureCertificateID|Certificate Common Name domain name (CN=user@red.com)|red|
+
+#### Azure Active Directory authentication (OpenVPN only)
+
+Gateways using Azure Active Directory authentication can use **Azure Active Directory Group Object IDs** to determine which user group a user belongs to. If a user is part of multiple Azure Active Directory groups, they're considered to be part of the Virtual WAN user group that has the lowest numerical priority.
++
+#### Azure Certificate (OpenVPN and IKEv2)
+
+Gateways that use Certificate-based authentication use the **domain name** of user certificate Common Names (CN) to determine which group a connecting user is in. Common Names must be in one of the following formats:
+
+* domain/username
+* username@domain.com
+
+Make sure that the **domain** is what you enter as the group member value.
+
+#### RADIUS server (OpenVPN and IKEv2)
+
+Gateways that use RADIUS-based authentication use a new **Vendor-Specific Attribute (VSA)** to determine VPN user groups.
+When RADIUS-based authentication is configured on the P2S gateway, the gateway acts as a Network Policy Server (NPS) proxy. This means that the P2S VPN gateway serves as a client that authenticates users with your RADIUS server using the RADIUS protocol.
+
+After your RADIUS server has successfully verified the user's credentials, the RADIUS server can be configured to send a new Vendor-Specific Attribute (VSA) as part of Access-Accept packets. The P2S VPN gateway processes the VSA in the Access-Accept packets and assigns specific IP addresses to users based on the value of the VSAs.
+
+Therefore, RADIUS servers should be configured to send a VSA with the same value for all users that are part of the same group.
+
+>[!NOTE]
+> The value of the VSA must be an octet hexadecimal string on both the RADIUS server and in Azure. This octet string must begin with **6ad1bd**. The last two hexadecimal digits may be configured freely. For example, 6ad1bd98 is valid, but 6ad12323 and 6a1bd2 aren't.
+>
+
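As a quick sanity check, the rule in the note above can be expressed as a pattern. This is a minimal sketch that assumes, per the examples, a value of **6ad1bd** followed by exactly two hexadecimal digits:

```azurepowershell
# Validate a candidate VSA value against the rule in the note above.
# Assumption: "6ad1bd" followed by exactly two hex digits.
$vsaValue = "6ad1bd98"
$vsaValue -match '^6ad1bd[0-9a-fA-F]{2}$'   # True here; False for 6ad12323 or 6a1bd2
```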
+The new VSA is **MS-Azure-Policy-ID**.
+
+The RADIUS server uses the MS-Azure-Policy-ID VSA to send an identifier that the P2S VPN server matches against an authenticated RADIUS user policy configured on the Azure side. The matched policy is used to select the IP/routing configuration (assigned IP address) for the user.
+
+The fields of MS-Azure-Policy-ID MUST be set as follows:
+
+* **Vendor-Type:** An 8-bit unsigned integer that MUST be set to 0x41 (integer: 65).
+* **Vendor-Length:** An 8-bit unsigned integer that MUST be set to the length of the octet string in the Attribute-Specific Value plus 2.
+* **Attribute-Specific Value:** An octet string containing the Policy ID configured on the Azure point-to-site VPN server.
+
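To make the field layout concrete, here's a minimal sketch that assembles the attribute fields in PowerShell; the Policy ID bytes are the example value used earlier:

```azurepowershell
# Assemble the MS-Azure-Policy-ID fields described above (sketch).
$policyId     = [byte[]](0x6a, 0xd1, 0xbd, 0x98)   # Attribute-Specific Value (example)
$vendorType   = [byte]0x41                         # 65
$vendorLength = [byte]($policyId.Count + 2)        # value length + 2
$vsaFields    = [byte[]]($vendorType, $vendorLength) + $policyId
```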
+For configuration information, see [RADIUS - configure NPS for vendor-specific attributes](user-groups-radius.md).
+
+## Gateway concepts
+
+When a Virtual WAN P2S VPN gateway is assigned a VPN server configuration that uses user/policy groups, you can create multiple P2S VPN connection configurations on the gateway.
+
+Each connection configuration can contain one or more VPN server configuration user groups. Each connection configuration is then mapped to one or more IP address pools. Users who connect to this gateway are assigned an IP address based on their identity, credentials, default group, and priority.
+
+In this example, the VPN server configuration has the following groups configured:
+
+|Default|Priority|Group name|Authentication type|Member value|
+||||||
+|Yes|0|Engineering|Azure Active Directory|groupObjectId1|
+|No|1|Finance|Azure Active Directory|groupObjectId2|
+|No|2|PM|Azure Active Directory|groupObjectId3|
+
+This VPN server configuration can be assigned to a P2S VPN gateway in Virtual WAN with:
+
+|Configuration|Groups|Address pools|
+||||
+|Config0|Engineering, PM|x.x.x.x/yy|
+|Config1|Finance|a.a.a.a/bb|
+
+The result is as follows:
+
+* Users connecting to this P2S VPN gateway are assigned an address from x.x.x.x/yy if they're part of the Engineering or PM Azure Active Directory groups.
+* Users who are part of the Finance Azure Active Directory group are assigned IP addresses from a.a.a.a/bb.
+* Because Engineering is the default group, users who aren't part of any configured group are assumed to be part of Engineering and are assigned an IP address from x.x.x.x/yy.
+
+## Configuration considerations
++
+## Next steps
+
+* To create User Groups, see [Create User Groups for P2S User VPN](user-groups-create.md).
virtual-wan User Groups Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/user-groups-create.md
+
+ Title: 'Configure user groups and IP address pools for point-to-site User VPNs'
+
+description: Learn how to configure user groups and assign IP addresses from specific address pools based on identity or authentication credentials.
+++ Last updated : 09/22/2022++++
+# Configure user groups and IP address pools for P2S User VPNs (preview)
+
+You can configure P2S User VPNs to assign users IP addresses from specific address pools based on their identity or authentication credentials by creating **User Groups**. This article helps you configure user groups, group members, and prioritize groups. For more information about working with user groups, see [About user groups](user-groups-about.md).
+
+## Configuration considerations
++
+### Additional configuration information
+
+#### Azure Active Directory groups
+
+To create and manage Active Directory groups, see [Manage Azure Active Directory groups and group membership](../active-directory/fundamentals/how-to-manage-groups.md).
+
+* The Azure Active Directory group object ID (and not the group name) needs to be specified as part of the Virtual WAN point-to-site User VPN configuration. You can look up the object ID as shown in the sketch after this list.
+* Azure Active Directory users can be assigned to be part of multiple Active Directory groups, but Virtual WAN considers users to be part of the Virtual WAN user/policy group that has the lowest numerical priority.
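For example, you can look up a group's object ID with Azure PowerShell; the display name below is a placeholder:

```azurepowershell
# Return the object ID for an Azure AD group. The display name is a placeholder.
(Get-AzADGroup -DisplayName "Finance").Id
```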
+
+#### RADIUS - NPS vendor-specific attributes
+
+For Network Policy Server (NPS) vendor-specific attributes configuration information, see [RADIUS - configure NPS for vendor-specific attributes](user-groups-radius.md).
+
+#### Generating self-signed certificates
+
+To generate self-signed certificates, see [Generate and export certificates for User VPN P2S connections: PowerShell](certificates-point-to-site.md). To generate a certificate with a specific Common Name, change the **Subject** parameter to the appropriate value (for example, xx@domain.com) when running the `New-SelfSignedCertificate` PowerShell command.
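For example, here's a minimal sketch; it assumes `$cert` holds the self-signed root certificate created by following the linked article, and `user@red.com` is a placeholder Common Name:

```azurepowershell
# Generate a client certificate whose Common Name carries the domain used for
# group matching. $cert is the root certificate from the linked article;
# the Subject value is a placeholder.
New-SelfSignedCertificate -Type Custom -DnsName "P2SChildCert" -KeySpec Signature `
    -Subject "CN=user@red.com" -KeyExportPolicy Exportable `
    -HashAlgorithm sha256 -KeyLength 2048 `
    -CertStoreLocation "Cert:\CurrentUser\My" `
    -Signer $cert -TextExtension @("2.5.29.37={text}1.3.6.1.5.5.7.3.2")
```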
+
+## Prerequisites
+
+Before beginning, make sure you've configured a virtual WAN that uses one or more authentication methods. For steps, see [Tutorial: Create a Virtual WAN User VPN P2S connection](virtual-wan-point-to-site-portal.md).
+
+## Create a user group
+
+1. In the Azure portal, go to your **Virtual WAN -> User VPN configurations** page.
+
+1. On the **User VPN configurations** page, select the User VPN Configuration that you want to edit, then click **Edit configuration**.
+
+1. On the **Edit User VPN configuration** page, open the **User Groups** tab.
+
+ :::image type="content" source="./media/user-groups-create/enable-user-groups.png" alt-text="Screenshot of enabling User Groups." lightbox="./media/user-groups-create/enable-user-groups.png":::
+
+1. Select **Yes** to enable user groups. When this server configuration is assigned to a P2S VPN gateway, users who are part of the same user group are assigned IP addresses from the same address pool. Users who are part of different groups are assigned IP addresses from different address pools. When you use this feature, you must set one of the groups that you create as the **Default** group.
+
+1. To begin creating a new user group, fill in the name of the first group in the **Group Name** field.
+
+1. Next to the **Group Name**, click **Configure Group** to open the **Configure Group Settings** page.
+
+ :::image type="content" source="./media/user-groups-create/new-group.png" alt-text="Screenshot of creating a new group." lightbox="./media/user-groups-create/new-group.png":::
+
+1. On the **Configure Group Settings** page, fill in the values for each member that you want to include in this group. A group can contain multiple group members.
+
+ * Create a new member by filling in the **Name** field.
+
+ * Select the **Authentication: Setting Type** from the dropdown. The dropdown is automatically populated based on the selected authentication methods for the User VPN configuration.
+
+ * Type the **Value**. For valid values, see [About user groups](user-groups-about.md).
+
+ :::image type="content" source="./media/user-groups-create/group-members.png" alt-text="Screenshot of configuring values for User Group members." lightbox="./media/user-groups-create/group-members.png":::
+
+1. When you're finished creating the settings for the group, click **Add** and **Okay**.
+
+1. Create any additional groups.
+
+1. Select at least one group as default. Users who aren't part of any group specified on a gateway will be assigned to the default group on the gateway. Also note that you can't modify the "default" status of a group after the group has been created.
+
+ :::image type="content" source="./media/user-groups-create/select-default.png" alt-text="Screenshot of selecting the default group." lightbox="./media/user-groups-create/select-default.png":::
+
+1. Click the arrows to adjust the group priority order.
+
+ :::image type="content" source="./media/user-groups-create/adjust-order.png" alt-text="Screenshot of adjusting the priority order." lightbox="./media/user-groups-create/adjust-order.png":::
+
+1. Click **Review + create** to create and configure. After you create the User VPN configuration, configure the gateway server configuration settings to use the user groups feature.
+
+## Configure gateway settings
+
+1. In the portal, go to your virtual hub and click **User VPN (Point to site)**.
+
+1. On the point to site page, click the **Gateway scale units** link to open the **Edit User VPN gateway** page. Adjust the **Gateway scale units** value from the dropdown to determine gateway throughput.
+
+1. For **Point to site server configuration**, select the User VPN configuration that you configured for user groups. If you haven't yet configured these settings, see [Create user groups](user-groups-create.md).
+
+1. Create a new point to site configuration by typing a new **Configuration Name**.
+1. Select one or more groups to be associated with this configuration. All the users who are part of groups that are associated with this configuration will be assigned IP addresses from the same IP address pools.
+
+ Across all configurations for this gateway, you must have exactly one default user group selected.
+
+ :::image type="content" source="./media/user-groups-create/select-groups.png" alt-text="Screenshot of Edit User VPN gateway page with groups selected." lightbox="./media/user-groups-create/select-groups.png":::
+
+1. For **Address Pools**, click **Configure** to open the **Specify Address Pools** page. On this page, associate new address pools with this configuration. Users who are members of groups associated to this configuration will be assigned IP addresses from the specified pools. Based on the number of **Gateway Scale Units** associated to the gateway, you may need to specify more than one address pool. Click **Add** and **Okay** to save your address pools.
+
+ :::image type="content" source="./media/user-groups-create/address-pools.png" alt-text="Screenshot of Specify Address Pools page." lightbox="./media/user-groups-create/address-pools.png":::
+
+1. You'll need one configuration for each set of groups that should be assigned IP addresses from different address pools. Repeat the steps to create more configurations. See [Configuration considerations](#configuration-considerations) for requirements and limitations regarding address pools and groups.
+
+1. After you've created the configurations that you need, click **Edit**, and then **Confirm** to save your settings.
+
+ :::image type="content" source="./media/user-groups-create/confirm.png" alt-text="Screenshot of Confirm settings." lightbox="./media/user-groups-create/confirm.png":::
+
+## Troubleshooting
+
+1. Wireshark or another packet capture tool can be run on the NPS server to decrypt packets using the shared key. You can validate that packets sent from your RADIUS server to the point-to-site VPN gateway carry the right RADIUS VSA.
+1. Set up NPS event logging and check whether users are authenticating and matching the expected policies.
+1. Check every address pool specified on the gateway. Each address pool is split into two address pools, one assigned to each active-active instance in a point-to-site VPN gateway pair. These split addresses should show up in the effective route table. For example, if you specify 10.0.0.0/24, you should see two /25 routes in the effective route table. If this isn't the case, try changing the address pools defined on the gateway.
+1. Make sure all point-to-site VPN connection configurations are associated to the defaultRouteTable and propagate to the same set of route tables. This should be configured automatically if you're using the portal, but if you're using REST, PowerShell, or CLI, make sure all propagations and associations are set appropriately.
+1. If you're using the Azure VPN client, make sure the Azure VPN client installed on user devices is the latest version.
+
+## Next steps
+
+* For more information about user groups, see [About user groups and IP address pools for P2S User VPNs](user-groups-about.md).
virtual-wan User Groups Radius https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/user-groups-radius.md
+
+ Title: 'Configure vendor-specific attributes for P2S User Groups - RADIUS'
+
+description: Learn how to configure RADIUS/NPS for user groups to assign IP addresses from specific address pools based on identity or authentication credentials.
+++ Last updated : 09/22/2022+++
+# RADIUS - Configure NPS for vendor-specific attributes - P2S user groups (preview)
+
+The following sections describe how to configure Windows Server Network Policy Server (NPS) to authenticate users and respond to Access-Request messages with the Vendor-Specific Attribute (VSA) used for user group support in Virtual WAN point-to-site VPN. The steps assume that your Network Policy Server is already registered to Active Directory. The steps may vary depending on the vendor/version of your NPS server.
+
+The following steps describe setting up a single network policy on the NPS server. The NPS server will reply with the specified VSA for all users who match this policy, and the value of this VSA can be used on your point-to-site VPN gateway in Virtual WAN.
+
+## Configure
+
+1. Open the **Network Policy Server** management console, right-click **Network Policies**, and then select **New** to create a new network policy.
+
+ :::image type="content" source="./media/user-groups-radius/network-policy-server.png" alt-text="Screenshot of new network policy." lightbox="./media/user-groups-radius/network-policy-server.png":::
+
+1. In the wizard, select **Access granted** to ensure your RADIUS server can send Access-Accept messages after authenticating users. Then, click **Next**.
+
+1. Name the policy and select **Remote Access Server (VPN-Dial up)** as the network access server type. Then, click **Next**.
+
+ :::image type="content" source="./media/user-groups-radius/policy-name.png" alt-text="Screenshot of policy name field." lightbox="./media/user-groups-radius/policy-name.png":::
+
+1. On the **Specify Conditions** page, click **Add** to select a condition. Then, select **User Groups** as the condition and click **Add**. You may also use other Network Policy conditions that are supported by your RADIUS server vendor.
+
+ :::image type="content" source="./media/user-groups-radius/specify.png" alt-text="Screenshot of specifying conditions for User Groups." lightbox="./media/user-groups-radius/specify.png":::
+
+1. On the **User Groups** page, click **Add Groups** and select the Active Directory groups that will use this policy. Then, click **OK** and **OK** again. You'll see the groups you've added in the **User Groups** window. Click **OK** to return to the **Specify Conditions** page and click **Next**.
+
+1. On the **Specify Access Permission** page, select **Access granted** to ensure your RADIUS server can send Access-Accept messages after authenticating users. Then, click **Next**.
+
+ :::image type="content" source="./media/user-groups-radius/specify-access.png" alt-text="Screenshot of the Specify Access Permission page." lightbox="./media/user-groups-radius/specify-access.png":::
+
+1. For **Configure Authentication Methods**, make any necessary changes, then click **Next**.
+1. For **Configure Constraints** select any necessary settings. Then, click **Next**.
+1. On the **Configure Settings** page, for **RADIUS Attributes**, highlight **Vendor Specific** and click **Add**.
+
+ :::image type="content" source="./media/user-groups-radius/configure-settings.png" alt-text="Screenshot of the Configure Settings page." lightbox="./media/user-groups-radius/configure-settings.png":::
+
+ 1. On the **Add Vendor Specific Attribute** page, scroll to select **Vendor-Specific**.
+
+ :::image type="content" source="./media/user-groups-radius/vendor-specific.png" alt-text="Screenshot of the Add Vendor Specific Attribute page with Vendor-Specific selected." lightbox="./media/user-groups-radius/vendor-specific.png":::
+
+1. Click **Add** to open the **Attribute Information** page. Then, click **Add** to open the **Vendor-Specific Attribute Information** page. Select **Select from list** and select **Microsoft**. Select **Yes. It conforms**. Then, click **Configure Attribute**.
+
+ :::image type="content" source="./media/user-groups-radius/attribute-information.png" alt-text="Screenshot of the Attribute Information page." lightbox="./media/user-groups-radius/attribute-information.png":::
+
+1. On the **Configure VSA (RFC Compliant)** page, select the following values:
+
+ * **Vendor-assigned attribute number**: 65
+ * **Attribute format**: Hexadecimal
+ * **Attribute value**: Set this to the VSA value you've configured in your VPN server configuration, such as 6ad1bd08. The VSA value must begin with **6ad1bd**.
+
+1. Click **OK** and **OK** again to close the windows. On the **Attribute Information** page, you'll see the Vendor and Value listed that you just input. Click **OK** to close the window. Then, click **Close** to return to the **Configure Settings** page.
+
+1. The **Configure Settings** page now looks similar to the following screenshot:
+
+ :::image type="content" source="./media/user-groups-radius/vendor-value.png" alt-text="Screenshot of the Configure Settings page with Vendor Specific attributes." lightbox="./media/user-groups-radius/vendor-value.png":::
+
+1. Click **Next** and then **Finish**. You can create multiple network policies on your RADIUS server to send different Access-Accept messages to the Virtual WAN point-to-site VPN gateway based on Active Directory group membership or any other mechanism you would like to support.
+
+## Next steps
+
+* For more information about user groups, see [About user groups and IP address pools for P2S User VPNs](user-groups-about.md).
+
+* To configure user groups, see [Configure user groups and IP address pools for P2S User VPNs](user-groups-create.md).
virtual-wan Virtual Wan Expressroute Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-expressroute-portal.md
Verify that you've met the following criteria before beginning your configuratio
* Obtain an IP address range for your hub region. The hub is a virtual network that is created and used by Virtual WAN. The address range that you specify for the hub can't overlap with any of your existing virtual networks that you connect to. It also can't overlap with your address ranges that you connect to on-premises. If you're unfamiliar with the IP address ranges located in your on-premises network configuration, coordinate with someone who can provide those details for you.
-* The ExpressRoute circuit must be a Premium or Standard circuit in order to connect to the hub gateway.
+* The following ExpressRoute circuit SKUs can be connected to the hub gateway: Local, Standard, and Premium.
* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
In this section, you create the peering connection between your hub and a VNet.
* **Connection name** - Name your connection. * **Hubs** - Select the hub you want to associate with this connection. * **Subscription** - Verify the subscription.
- * **Virtual network** - Select the virtual network you want to connect to this hub. The virtual network can't have an already existing virtual network gateway (neither VPN, nor ExpressRoute).
+ * **Virtual network** - Select the virtual network you want to connect to this hub. The virtual network can't have an already existing virtual network gateway (neither VPN nor ExpressRoute).
## <a name="connectcircuit"></a>Connect your circuit to the hub gateway Once the gateway is created, you can connect an [ExpressRoute circuit](../expressroute/expressroute-howto-circuit-portal-resource-manager.md) to it.
-* ExpressRoute Standard or Premium circuits that are in ExpressRoute Global Reach-supported locations can connect to a Virtual WAN ExpressRoute gateway and enjoy all Virtual WAN transit capabilities (VPN-to-VPN, VPN, and ExpressRoute transit).
+* ExpressRoute Standard or Premium circuits that are in ExpressRoute Global Reach-supported locations can connect to a Virtual WAN ExpressRoute gateway and enjoy all Virtual WAN transit capabilities (VPN-to-VPN, VPN-to-ExpressRoute, and ExpressRoute-to-ExpressRoute transit).
* ExpressRoute Standard and Premium circuits that are in non-Global Reach locations can connect to Azure resources, but won't be able to use Virtual WAN transit capabilities. ExpressRoute Local is also supported with Azure Virtual WAN virtual hubs. + ### To connect the circuit to the hub gateway In the portal, go to the **Virtual hub -> Connectivity -> ExpressRoute** page. If you have access in your subscription to an ExpressRoute circuit, you'll see the circuit you want to use in the list of circuits. If you don't see any circuits, but have been provided with an authorization key and peer circuit URI, you can redeem and connect a circuit. See [To connect by redeeming an authorization key](#authkey).
virtual-wan Virtual Wan Point To Site Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-point-to-site-portal.md
Title: 'Tutorial: Create a User VPN connection to Azure using Azure Virtual WAN'
description: In this tutorial, learn how to use Azure Virtual WAN to create a User VPN (point-to-site) connection to Azure. - Previously updated : 08/24/2022 Last updated : 09/15/2022