Updates from: 02/23/2022 02:08:04
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Partner F5 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-f5.md
The next attempt at connecting to the application should take you straight to th
Failure to access the protected application could be down to any number of potential factors, including a misconfiguration. -- BIG-IP logs are a great source of information for isolating all authentication and SSO issues. If troubleshooting you should increase the log verbosity level.
+BIG-IP logs are a great source of information for isolating authentication and SSO issues. If you're troubleshooting, increase the log verbosity level.
1. Go to **Access Policy** > **Overview** > **Event Logs** > **Settings**.
Exact root cause is still being investigated by F5 engineering, but issue appear
You should now see the Key (JWT) field populated with the key ID (KID) of the token signing certificate provided through the OpenID URI metadata.
- 5. Finally, select the yellow **Apply Access Policy** option in the top left-hand corner, located next to the F5 logo. Apply those settings and select **Apply** again to refresh the access profile list.
+ 5. Finally, select the yellow **Apply Access Policy** option in the top left-hand corner, located next to the F5 logo. Then select **Apply** again to refresh the access profile list.
See F5's guidance for more [OAuth client and resource server troubleshooting tips](https://techdocs.f5.com/kb/en-us/products/big-ip_apm/manuals/product/apm-authentication-sso-13-0-0/37.html#GUID-774384BC-CF63-469D-A589-1595D0DDFBA2).
active-directory How To Certificate Based Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-certificate-based-authentication.md
Only one CRL Distribution Point (CDP) for a trusted CA is supported. The CDP can
[!INCLUDE [New-AzureAD](../../../includes/active-directory-authentication-new-trusted-azuread.md)]
+**AuthorityType**
+- Use 0 to indicate that this is a Root Certificate Authority
+- Use 1 to indicate that this is an Intermediate or Issuing Certificate Authority
+
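+As a sketch of how these values are used (illustrative only, not the include's exact example; the file path and CRL URL are hypothetical), a root CA can be registered with the AzureAD PowerShell module like this:
+
+```powershell
+# Illustrative only: register a trusted root CA for certificate-based authentication
+Connect-AzureAD
+$cert = Get-Content -Encoding Byte "C:\certs\rootCA.cer"          # hypothetical path
+$new = New-Object -TypeName Microsoft.Open.AzureAD.Model.CertificateAuthorityInformation
+$new.AuthorityType = 0                                            # 0 = root, 1 = intermediate/issuing
+$new.TrustedCertificate = $cert
+$new.crlDistributionPoint = "http://crl.contoso.com/rootCA.crl"   # hypothetical CRL URL
+New-AzureADTrustedCertificateAuthority -CertificateAuthorityInformation $new
+```
+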
+**crlDistributionPoint**
+
+You can validate that the crlDistributionPoint value you provide in the preceding PowerShell example is valid for the certificate authority being added by downloading the CRL and comparing the CA certificate and CRL information.
+
+The following table and graphic show how to map information from the CA certificate to the attributes of the downloaded CRL.
+
+| CA Certificate Info | |Downloaded CRL Info|
+|-|:-:|-|
+|Subject |=|Issuer |
+|Subject Key Identifier |=|Authority Key Identifier (KeyID) |
+
+>[!TIP]
+>The value for crlDistributionPoint in the preceding example is the HTTP location of the CA's certificate revocation list (CRL). This value can be found in a few places:
+>
+>- In the CRL Distribution Point (CDP) attribute of a certificate issued from the CA
+>
+>If Issuing CA is Windows Server
+>
+>- On the [Properties](/windows-server/networking/core-network-guide/cncg/server-certs/configure-the-cdp-and-aia-extensions-on-ca1#to-configure-the-cdp-and-aia-extensions-on-ca1) of the CA in the Certificate Authority Microsoft Management Console (MMC)
+>- On the CA running [certutil](/windows-server/administration/windows-commands/certutil#-cainfo) -cainfo cdp
+
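+One way to perform the comparison described in the table above on a Windows machine (the CRL URL and file names are hypothetical):
+
+```console
+REM Download the CRL from the crlDistributionPoint URL and dump its fields
+curl -o rootCA.crl http://crl.contoso.com/rootCA.crl
+certutil -dump rootCA.crl
+
+REM Dump the CA certificate, then compare its Subject and Subject Key Identifier
+REM against the CRL's Issuer and Authority Key Identifier (KeyID)
+certutil -dump rootCA.cer
+```
+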
+For additional details see: [Understanding the certificate revocation process](./concept-certificate-based-authentication-technical-deep-dive.md#understanding-the-certificate-revocation-process).
+### Remove
+
+[!INCLUDE [Remove-AzureAD](../../../includes/active-directory-authentication-remove-trusted-azuread.md)]
active-directory How To Mfa Microsoft Managed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-microsoft-managed.md
Previously updated : 11/11/2021 Last updated : 02/22/2022
The option to let Azure AD manage the setting is a convenient way for an organiz
## Settings that can be Microsoft managed
-The following table lists settings that can be set to Microsoft managed and whether it is enabled or disabled.
+The following table lists each setting that can be set to Microsoft managed and whether that setting is enabled or disabled by default.
| Setting | Configuration |
|---|---|
active-directory Howto Authentication Passwordless Security Key On Premises https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-passwordless-security-key-on-premises.md
Previously updated : 02/22/2021 Last updated : 02/22/2022
Azure AD can issue Kerberos ticket-granting tickets (TGTs) for one or more of yo
An Azure AD Kerberos Server object is created in your on-premises Active Directory instance and then securely published to Azure Active Directory. The object isn't associated with any physical servers. It's simply a resource that can be used by Azure Active Directory to generate Kerberos TGTs for your Active Directory domain.

1. A user signs in to a Windows 10 device with a FIDO2 security key and authenticates to Azure AD.
1. Azure AD checks the directory for a Kerberos Server key that matches the user's on-premises Active Directory domain.
Run the following steps in each domain and forest in your organization that cont
$domain = "contoso.corp.com"

# Enter an Azure Active Directory global administrator username and password.
- $cloudCred = Get-Credential -Message 'An Active Directory user who is a member of the Domain Admins group for a domain and a member of the Enterprise Admins group for a forest.'
+ $cloudCred = Get-Credential -Message 'An Active Directory user who is a member of the Global Administrators group for Azure AD.'
# Enter a domain administrator username and password.
$domainCred = Get-Credential -Message 'An Active Directory user who is a member of the Domain Admins group.'
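With the credentials gathered above, the Azure AD Kerberos Server object is typically created with the `Set-AzureADKerberosServer` cmdlet from the `AzureADHybridAuthenticationManagement` module. A sketch; confirm the exact parameters against the full example in the article:

```powershell
# Create (or update) the Azure AD Kerberos Server object for the domain
Set-AzureADKerberosServer -Domain $domain -CloudCredential $cloudCred -DomainCredential $domainCred

# Verify that the object was created and view its properties
Get-AzureADKerberosServer -Domain $domain -CloudCredential $cloudCred -DomainCredential $domainCred
```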
active-directory Troubleshoot Sspr Writeback https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/troubleshoot-sspr-writeback.md
Previously updated : 08/25/2021 Last updated : 02/22/2022
# Troubleshoot self-service password reset writeback in Azure Active Directory
-Azure Active Directory (Azure AD) self-service password reset (SSPR) lets users reset their passwords in the cloud. Password writeback is a feature enabled with [Azure AD Connect](../hybrid/whatis-hybrid-identity.md) that allows password changes in the cloud to be written back to an existing on-premises directory in real time.
+Azure Active Directory (Azure AD) self-service password reset (SSPR) lets users reset their passwords in the cloud. Password writeback is a feature enabled with [Azure AD Connect](../hybrid/whatis-hybrid-identity.md) or [cloud sync](tutorial-enable-cloud-sync-sspr-writeback.md) that allows password changes in the cloud to be written back to an existing on-premises directory in real time.
If you have problems with SSPR writeback, the following troubleshooting steps and common errors may help. If you can't find the answer to your problem, [our support teams are always available](#contact-microsoft-support) to assist you further.
active-directory Zero Trust For Developers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/zero-trust-for-developers.md
The Microsoft identity platform app registration portal is the primary entry poi
## Next steps

-- Zero Trust [Guidance Center](/security/zero-trust/)
-- Zero Trust for the Microsoft identity platform developer [whitepaper](https://www.microsoft.com/security/content-library/Search?SearchDataFor=OJZgGWbHnB3Ll5hblDBugaEMQAchNfvkzk5X5AmPM4tK43NHpbF5%2Bky%2Fnuivl7plZz89b%2FuLMMZsMqKeYbhPPw%3D%3D&IsKeywordSearch=evXIpssXVY6lIm6X2K9ieA%3D%3D) (downloadable PDF).
+- Zero Trust [Guidance Center](/security/zero-trust/)
- Microsoft identity platform [best practices and recommendations](./identity-platform-integration-checklist.md).
active-directory Entitlement Management Catalog Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-catalog-create.md
To require attributes for access requests:
### Add a resource to a catalog programmatically
-You can also add a resource to a catalog by using Microsoft Graph. A user in an appropriate role, or a catalog and resource owner, with an application that has the delegated `EntitlementManagement.ReadWrite.All` permission can call the API to [create an accessPackageResourceRequest](/graph/api/entitlementmanagement-post-accesspackageresourcerequests?view=graph-rest-beta&preserve-view=true). An application with application permissions can't yet programmatically add a resource without a user context at the time of the request, however.
+You can also add a resource to a catalog by using Microsoft Graph. A user in an appropriate role, or a catalog and resource owner, with an application that has the delegated `EntitlementManagement.ReadWrite.All` permission can call the API to [create an accessPackageResourceRequest](/graph/api/entitlementmanagement-post-accesspackageresourcerequests?view=graph-rest-beta&preserve-view=true). An application with the application permission `EntitlementManagement.ReadWrite.All` and permissions to change resources, such as `Group.ReadWrite.All`, can also add resources to the catalog.
+
+### Add a resource to a catalog with PowerShell
+
+You can also add a resource to a catalog in PowerShell with the `New-MgEntitlementManagementAccessPackageResourceRequest` cmdlet from the [Microsoft Graph PowerShell cmdlets for Identity Governance](https://www.powershellgallery.com/packages/Microsoft.Graph.Identity.Governance/) module version 1.6.0 or later. The following example shows how to add a group to a catalog as a resource.
+
+```powershell
+Connect-MgGraph -Scopes "EntitlementManagement.ReadWrite.All,Group.ReadWrite.All"
+Select-MgProfile -Name "beta"
+$g = Get-MgGroup -Filter "displayName eq 'Marketing'"
+Import-Module Microsoft.Graph.Identity.Governance
+$catalog = Get-MgEntitlementManagementAccessPackageCatalog -Filter "displayName eq 'Marketing'"
+$nr = New-Object Microsoft.Graph.PowerShell.Models.MicrosoftGraphAccessPackageResource
+$nr.OriginId = $g.Id
+$nr.OriginSystem = "AadGroup"
+$rr = New-MgEntitlementManagementAccessPackageResourceRequest -CatalogId $catalog.Id -AccessPackageResource $nr
+$ar = Get-MgEntitlementManagementAccessPackageCatalog -AccessPackageCatalogId $catalog.Id -ExpandProperty accessPackageResources
+$ar.AccessPackageResources
+```
## Remove resources from a catalog
active-directory Workbook Authentication Prompts Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/workbook-authentication-prompts-analysis.md
+
+ Title: Authentication prompts analysis workbook in Azure AD | Microsoft Docs
+description: Learn how to use the authentication prompts analysis workbook.
+
+documentationcenter: ''
+
+editor: ''
+ Last updated : 02/22/2022
+# Authentication prompts analysis workbook
+
+As an IT Pro, you want the right information about authentication prompts in your environment so that you can detect unexpected prompts and investigate further. Providing you with this type of information is the goal of the **authentication prompts analysis workbook**.
+
+This article provides you with an overview of this workbook.
+
+## Description
+
+![Workbook category](./media/workbook-authentication-prompts-analysis/workbook-category.png)
+
+Have you recently heard complaints from your users about getting too many authentication prompts?
+
+Overprompting impacts your users' productivity and often leads to users getting phished for MFA. To be clear, MFA is essential! The question isn't whether you should require MFA, but how frequently you should prompt your users.
+
+Typically, this scenario is caused by:
+
+- Misconfigured applications
+- Overly aggressive prompt policies
+- Cyber-attacks
+
+The authentication prompts analysis workbook identifies various types of authentication prompts. The types are based on different pivots, including users, applications, operating systems, processes, and more.
+
+You can use this workbook in the following scenarios:
+
+- You've received aggregated feedback about too many prompts.
+- To detect overprompting attributed to one specific authentication method, policy, application, or device.
+- To view authentication prompt counts of high-profile users.
+- To track legacy TLS and other authentication process details.
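+
+The workbook is built on Azure AD sign-in data in Log Analytics. As a rough sketch of the kind of query behind these views (assuming the `SigninLogs` table is flowing into your workspace; this is not the workbook's exact query), you can count prompts per user with Kusto:
+
+```kusto
+// Top 10 users by sign-in prompt count over the last 7 days
+SigninLogs
+| where TimeGenerated > ago(7d)
+| summarize PromptCount = count() by UserPrincipalName
+| top 10 by PromptCount
+```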
+
+
+
+
+## Sections
+
+This workbook breaks down authentication prompts by:
+
+- Method
+- Device state
+- Application
+- User
+- Status
+- Operating System
+- Process detail
+- Policy
+
+![Authentication prompts by authentication method](./media/workbook-authentication-prompts-analysis/authentication-prompts-by-authentication-method.png)
+
+In many environments, the most used apps are business productivity apps. Anything that isn't expected should be investigated. The following charts show authentication prompts by application.
+
+![Authentication prompts by application](./media/workbook-authentication-prompts-analysis/authentication-prompts-by-application.png)
+
+The prompts by application list view shows additional information, such as timestamps and request IDs, that helps with investigations.
+
+Additionally, you get a summary of the average and median prompt counts for your tenant.
+
+![Prompts by application](./media/workbook-authentication-prompts-analysis/prompts-by-authentication-method.png)
+
+This workbook also helps you track impactful ways to improve your users' experience, reduce prompts, and see the relative percentage of each improvement.
+
+![Recommendations for reducing prompts](./media/workbook-authentication-prompts-analysis/recommendations-for-reducing-prompts.png)
+
+
+
+## Filters
+
+Take advantage of the filters for more granular views of the data:
+
+![Filter](./media/workbook-authentication-prompts-analysis/filters.png)
+
+Filtering for a specific user who has many authentication requests, or showing only applications with sign-in failures, can also surface interesting findings to remediate.
+
+## Best practices
+
+If data isn't showing up or seems incorrect, confirm that you have set the **Log Analytics Workspace** and **Subscriptions** parameters to the proper resources.
+
+![Set workspace and subscriptions](./media/workbook-authentication-prompts-analysis/workspace-and-subscriptions.png)
+
+If the visuals are taking too much time to load, try reducing the Time filter to 24 hours or less.
+
+![Set filter](./media/workbook-authentication-prompts-analysis/set-filter.png)
+
+## Next steps
+
+- To understand more about the different policies that impact MFA prompts, see [Optimize reauthentication prompts and understand session lifetime for Azure AD Multi-Factor Authentication](../authentication/concepts-azure-multi-factor-authentication-prompts-session-lifetime.md).
+
+- To learn more about the different vulnerabilities of different MFA methods, see [All your creds are belong to us!](https://aka.ms/allyourcreds).
+
+- To learn how to move users from telecom-based methods to the Authenticator app, see [How to run a registration campaign to set up Microsoft Authenticator - Microsoft Authenticator app](../authentication/how-to-mfa-registration-campaign.md).
+
aks Cluster Autoscaler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cluster-autoscaler.md
az aks update \
The above example updates the cluster autoscaler on the single node pool in *myAKSCluster* to a minimum of *1* and a maximum of *5* nodes.

> [!NOTE]
-> The cluster autoscaler makes scaling decisions based on the minimum and maximum counts set on each node pool, but it does not enforce them after updating the min or max counts. For example, setting a min count of 5 when the current node count is 3 will not immediately scale the pool up to 5. If the minimum count on the node pool has a value higher than the current number of nodes, the new min or max settings will be respected when there are enough unschedulable pods present that would require 2 new additional nodes and trigger an autoscaler event. After the scale event, the new count limits are respected.
+> The cluster autoscaler will enforce the minimum count in cases where the actual count drops below the minimum due to external factors, such as during a spot eviction or when changing the minimum count value from the AKS API.
Monitor the performance of your applications and services, and adjust the cluster autoscaler node counts to match the required performance.
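As a sketch of how the counts above are adjusted (resource and cluster names are the article's examples), the full command looks like:

```azurecli
az aks update \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --update-cluster-autoscaler \
  --min-count 1 \
  --max-count 5
```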
aks Csi Secrets Store Identity Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/csi-secrets-store-identity-access.md
Azure Active Directory (Azure AD) pod-managed identities use AKS primitives to a
kubectl apply -f secretproviderclass.yaml
```
-1. Create a pod by using the following YAML, using the name of your identity:
+1. Create a pod by using the following YAML:
```yml
# This is a sample pod definition for using SecretProviderClass and the user-assigned identity to access your key vault
Azure Active Directory (Azure AD) pod-managed identities use AKS primitives to a
kubectl apply -f secretproviderclass.yaml
```
-1. Create a pod by using the following YAML, using the name of your identity:
+1. Create a pod by using the following YAML:
```yml
# This is a sample pod definition for using SecretProviderClass and system-assigned identity to access your key vault
aks Kubernetes Walkthrough Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/kubernetes-walkthrough-portal.md
description: Learn how to quickly create a Kubernetes cluster, deploy an application, and monitor performance in Azure Kubernetes Service (AKS) using the Azure portal. Previously updated : 07/01/2021 Last updated : 1/13/2022 #Customer intent: As a developer or cluster operator, I want to quickly create an AKS cluster and deploy an application so that I can see how to run and monitor applications using the managed Kubernetes service in Azure.
Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you qui
* Run a multi-container application with a web front-end and a Redis instance in the cluster. * Monitor the health of the cluster and pods that run your application.
-![Image of browsing to Azure Vote sample application](media/container-service-kubernetes-walkthrough/azure-voting-application.png)
This quickstart assumes a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)][kubernetes-concepts].
Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.c
- **Primary node pool**:
  * Leave the default values selected.
- ![Create AKS cluster - provide basic information](media/kubernetes-walkthrough-portal/create-cluster-basics.png)
+ :::image type="content" source="media/kubernetes-walkthrough-portal/create-cluster-basics.png" alt-text="Create AKS cluster - provide basic information":::
> [!NOTE]
> You can change the preset configuration when creating your cluster by selecting *View all preset configurations* and choosing a different option.
- > ![Create AKS cluster - portal preset options](media/kubernetes-walkthrough-portal/cluster-preset-options.png)
+ > :::image type="content" source="media/kubernetes-walkthrough-portal/cluster-preset-options.png" alt-text="Create AKS cluster - portal preset options":::
4. Select **Next: Node pools** when complete.
Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.c
* Browsing to the AKS cluster resource group and selecting the AKS resource.
* Per the example cluster dashboard below: browsing for *myResourceGroup* and selecting the *myAKSCluster* resource.
- ![Example AKS dashboard in the Azure portal](media/kubernetes-walkthrough-portal/aks-portal-dashboard.png)
+ :::image type="content" source="media/kubernetes-walkthrough-portal/aks-portal-dashboard.png" alt-text="Example AKS dashboard in the Azure portal":::
## Connect to the cluster
Once the **EXTERNAL-IP** address changes from *pending* to an actual public IP a
azure-vote-front   LoadBalancer   10.0.37.27   52.179.23.131   80:30572/TCP   2m
```
-To see the Azure Vote app in action, open a web browser to the external IP address of your service.
+To see the Azure Vote app in action, open a web browser to the external IP address of your service.
-![Image of browsing to Azure Vote sample application](media/container-service-kubernetes-walkthrough/azure-voting-application.png)
## Monitor health and logs
Metric data takes a few minutes to populate in the Azure portal. To see current
The `azure-vote-back` and `azure-vote-front` containers will display, as shown in the following example:
-![View the health of running containers in AKS](media/kubernetes-walkthrough-portal/monitor-containers.png)
To view logs for the `azure-vote-front` pod, select **View in Log Analytics** from the top of the *azure-vote-front | Overview* area on the right side. These logs include the *stdout* and *stderr* streams from the container.
-![View the containers logs in AKS](media/kubernetes-walkthrough-portal/monitor-container-logs.png)
## Delete cluster

To avoid Azure charges, clean up your unnecessary resources. Select the **Delete** button on the AKS cluster dashboard. You can also use the [az aks delete][az-aks-delete] command in the Cloud Shell:

```azurecli
-az aks delete --resource-group myResourceGroup --name myAKSCluster --no-wait
+az aks delete --resource-group myResourceGroup --name myAKSCluster --yes --no-wait
```

> [!NOTE]
> When you delete the cluster, the Azure Active Directory service principal used by the AKS cluster is not removed. For steps on how to remove the service principal, see [AKS service principal considerations and deletion][sp-delete].
Pre-existing container images were used in this quickstart to create a Kubernete
In this quickstart, you deployed a Kubernetes cluster and then deployed a multi-container application to it. Access the Kubernetes web dashboard for your AKS cluster.

To learn more about AKS by walking through a complete example, including building an application, deploying from Azure Container Registry, updating a running application, and scaling and upgrading your cluster, continue to the Kubernetes cluster tutorial.

> [!div class="nextstepaction"]
aks Open Service Mesh About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-service-mesh-about.md
The OSM project was originated by Microsoft and has since been donated and is go
OSM can be added to your Azure Kubernetes Service (AKS) cluster by enabling the OSM add-on using the [Azure CLI][osm-azure-cli] or a [Bicep template][osm-bicep]. The OSM add-on provides a fully supported installation of OSM that is integrated with AKS.

> [!IMPORTANT]
-> The OSM add-on installs version *0.11.1* of OSM on your cluster.
+> The OSM add-on installs version *1.0.0* of OSM on your cluster.
## Capabilities and features
aks Open Service Mesh Deploy Addon Az Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-service-mesh-deploy-addon-az-cli.md
This article shows you how to install the OSM add-on on an AKS cluster and verify it is installed and running.

> [!IMPORTANT]
-> The OSM add-on installs version *0.11.1* of OSM on your cluster.
+> The OSM add-on installs version *1.0.0* of OSM on your cluster.
## Prerequisites
az aks show --resource-group myResourceGroup --name myAKSCluster --query 'addon
## Verify the OSM mesh is running on your cluster
-In addition to verifying the OSM add-on has been enabled on you cluster, you can also verify the version, status, and configuration of the OSM mesh running on your cluster.
+In addition to verifying the OSM add-on has been enabled on your cluster, you can also verify the version, status, and configuration of the OSM mesh running on your cluster.
To verify the version of the OSM mesh running on your cluster, use `kubectl` to display the image version of the *osm-controller* deployment. For example:
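A sketch of such a command (the OSM add-on runs *osm-controller* in the `kube-system` namespace by default; adjust the namespace if your installation differs):

```console
kubectl get deployment -n kube-system osm-controller -o=jsonpath='{$.spec.template.spec.containers[:1].image}'
```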
Alternatively, you can uninstall the OSM add-on and the related resources from y
## Next steps
-This article showed you how to install the OSM add-on on an AKS cluster and verify it is installed an running. With the the OSM add-on on your cluster you can [Deploy a sample application][osm-deploy-sample-app] or [Onboard an existing application][osm-onboard-app] to work with your OSM mesh.
+This article showed you how to install the OSM add-on on an AKS cluster and verify it is installed and running. With the OSM add-on on your cluster you can [Deploy a sample application][osm-deploy-sample-app] or [Onboard an existing application][osm-onboard-app] to work with your OSM mesh.
[aks-ephemeral]: cluster-configuration.md#ephemeral-os [osm-sample]: open-service-mesh-deploy-new-application.md [osm-uninstall]: open-service-mesh-uninstall-add-on.md [smi]: https://smi-spec.io/ [osm-deploy-sample-app]: https://release-v1-0.docs.openservicemesh.io/docs/getting_started/install_apps/
-[osm-onboard-app]: https://release-v1-0.docs.openservicemesh.io/docs/guides/app_onboarding/
+[osm-onboard-app]: https://release-v1-0.docs.openservicemesh.io/docs/guides/app_onboarding/
aks Open Service Mesh Deploy Addon Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-service-mesh-deploy-addon-bicep.md
This article will discuss how to deploy the OSM add-on to AKS using a [Bicep](../azure-resource-manager/bicep/index.yml) template.

> [!IMPORTANT]
-> The OSM add-on installs version *0.11.1* of OSM on your cluster.
+> The OSM add-on installs version *1.0.0* of OSM on your cluster.
[Bicep](../azure-resource-manager/bicep/overview.md) is a domain-specific language (DSL) that uses declarative syntax to deploy Azure resources. Bicep can be used in place of creating Azure [ARM](../azure-resource-manager/templates/overview.md) templates for deploying your infrastructure-as-code Azure resources.
Alternatively, you can uninstall the OSM add-on and the related resources from y
## Next steps
-This article showed you how to install the OSM add-on on an AKS cluster and verify it is installed an running. With the the OSM add-on on your cluster you can [Deploy a sample application][osm-deploy-sample-app] or [Onboard an existing application][osm-onboard-app] to work with your OSM mesh.
+This article showed you how to install the OSM add-on on an AKS cluster and verify it is installed and running. With the OSM add-on on your cluster you can [Deploy a sample application][osm-deploy-sample-app] or [Onboard an existing application][osm-onboard-app] to work with your OSM mesh.
<!-- Links --> <!-- Internal -->
This article showed you how to install the OSM add-on on an AKS cluster and veri
[az-extension-update]: /cli/azure/extension#az_extension_update [osm-uninstall]: open-service-mesh-uninstall-add-on.md [osm-deploy-sample-app]: https://release-v1-0.docs.openservicemesh.io/docs/getting_started/install_apps/
-[osm-onboard-app]: https://release-v1-0.docs.openservicemesh.io/docs/guides/app_onboarding/
+[osm-onboard-app]: https://release-v1-0.docs.openservicemesh.io/docs/guides/app_onboarding/
aks Open Service Mesh Uninstall Add On https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-service-mesh-uninstall-add-on.md
# Uninstall the Open Service Mesh (OSM) add-on from your Azure Kubernetes Service (AKS) cluster
-This article shows you how to uninstall the OMS add-on and related resources from you AKS cluster.
+This article shows you how to uninstall the OSM add-on and related resources from your AKS cluster.
## Disable the OSM add-on from your cluster
The above example removes the OSM add-on from the *myAKSCluster* in *myResourceG
## Remove additional OSM resources
-After the OSM add-on is disabled, the following resources remain on the cluster:
+After the OSM add-on is disabled, use `osm uninstall cluster-wide-resources` to uninstall the remaining resources on the cluster. For example:
-1. OSM meshconfig custom resource
-2. OSM control plane secrets
-3. OSM mutating webhook configuration
-4. OSM validating webhook configuration
-5. OSM CRDs
+```console
+osm uninstall cluster-wide-resources
+```
> [!IMPORTANT]
> You must remove these additional resources after you disable the OSM add-on. Leaving these resources on your cluster may cause issues if you enable the OSM add-on again in the future.
-To remove these remaining resources:
-
-1. Delete the meshconfig config resource
-```azurecli-interactive
-kubectl delete --ignore-not-found meshconfig -n kube-system osm-mesh-config
-```
-
-2. Delete the OSM control plane secrets
-```azurecli-interactive
-kubectl delete --ignore-not-found secret -n kube-system osm-ca-bundle mutating-webhook-cert-secret validating-webhook-cert-secret crd-converter-cert-secret
-```
-
-3. Delete the OSM mutating webhook configuration
-```azurecli-interactive
-kubectl delete mutatingwebhookconfiguration -l app.kubernetes.io/name=openservicemesh.io,app.kubernetes.io/instance=osm,app=osm-injector --ignore-not-found
-```
-
-4. Delete the OSM validating webhook configuration
-```azurecli-interactive
-kubectl delete validatingwebhookconfiguration -l app.kubernetes.io/name=openservicemesh.io,app.kubernetes.io/instance=osm,app=osm-controller --ignore-not-found
-```
-
-5. Delete the OSM CRDs: For guidance on OSM's CRDs and how to delete them, refer to [this documentation](https://release-v0-11.docs.openservicemesh.io/docs/getting_started/uninstall/#removal-of-osm-cluster-wide-resources).
app-service Webjobs Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/webjobs-create.md
To learn more, see [Scheduling a triggered WebJob](webjobs-dotnet-deploy-vs.md#s
You can manage the running state of individual WebJobs in your site in the [Azure portal](https://portal.azure.com). Just go to **Settings** > **WebJobs**, choose the WebJob, and you can start and stop the WebJob. You can also view and modify the password of the webhook that runs the WebJob.
-You can also [add an application setting](configure-common.md#configure-app-settings) named `WEBJOB_STOPPED` with a value of `1` to stop all WebJobs running on your site. This can be handy as a way to prevent conflicting WebJobs from running both in staging and production slots. You can similarly use a value of `1` for the `WEBJOBS_DISABLE_SCHEDULE` setting to disable triggered WebJobs in the site or a staging slot. For slots, remember to enable the **Deployment slot setting** option so that the setting itself doesn't get swapped.
+You can also [add an application setting](configure-common.md#configure-app-settings) named `WEBJOBS_STOPPED` with a value of `1` to stop all WebJobs running on your site. This can be handy as a way to prevent conflicting WebJobs from running both in staging and production slots. You can similarly use a value of `1` for the `WEBJOBS_DISABLE_SCHEDULE` setting to disable triggered WebJobs in the site or a staging slot. For slots, remember to enable the **Deployment slot setting** option so that the setting itself doesn't get swapped.
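+As a sketch using the Azure CLI (the app name is a placeholder):
+
+```azurecli
+# Stop all WebJobs on the production site
+az webapp config appsettings set --resource-group myResourceGroup --name <app-name> \
+  --settings WEBJOBS_STOPPED=1
+
+# Disable triggered WebJobs in a staging slot, marked as a slot setting so it doesn't swap
+az webapp config appsettings set --resource-group myResourceGroup --name <app-name> --slot staging \
+  --slot-settings WEBJOBS_DISABLE_SCHEDULE=1
+```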
## <a name="ViewJobHistory"></a> View the job history
automation Delete Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/delete-account.md
Title: Delete your Azure Automation account
-description: This article tells how to delete your Automation account across the different configuration scenarios.
+ Title: Manage your Azure Automation account
+description: This article tells how to delete your Automation account across the different configuration scenarios, and how to restore a deleted Automation account.
-# How to delete your Azure Automation account
+# Manage your Azure Automation account
After you enable an Azure Automation account to help automate IT or business process, or enable its other features to support operations management of your Azure and non-Azure machines such as Update Management, you may decide to stop using the Automation account. If you have enabled features that depend on integration with an Azure Monitor Log Analytics workspace, there are more steps required to complete this action.
+This article tells you how to completely remove your Automation account through the Azure portal, Azure PowerShell, the Azure CLI, or the REST API, and how to restore a deleted Automation account.
+
+## Delete your Azure Automation account
+ Removing your Automation account can be done using one of the following methods based on the supported deployment models: * Delete the resource group containing the Automation account.
Removing your Automation account can be done using one of the following methods
* Unlink the Log Analytics workspace from the Automation account and delete the Automation account. * Delete the feature from your linked workspace, unlink the account from the workspace, and then delete the Automation account.
-This article tells you how to completely remove your Automation account through the Azure portal, using Azure PowerShell, the Azure CLI, or the REST API.
-## Prerequisite
+### Prerequisite
+ Verify there aren't any [Resource Manager locks](../azure-resource-manager/management/lock-resources.md) applied at the subscription, resource group, or resource level, because locks prevent accidental deletion or modification of critical resources. If you've deployed the Start/Stop VMs during off-hours solution, it sets the lock level to **CanNotDelete** against several dependent resources in the Automation account (specifically its runbooks and variables). Remove any locks before deleting the Automation account. > [!NOTE]
While it attempts to unlink the Automation account, you can track the progress u
After the Automation account is successfully unlinked from the workspace, perform the steps in the [standalone Automation account](#delete-a-standalone-automation-account) section to delete the account.
+## Restore a deleted Automation account
+
+You can recover a deleted Automation account from the Azure portal.
+
+To recover an Automation account, ensure that the following conditions are met:
+- You created the Automation account with the Azure Resource Manager deployment model, and it was deleted within the past 30 days.
+- Before you attempt to recover a deleted Automation account, ensure that the resource group for that account exists.
+
+> [!NOTE]
+> You can't recover your Automation account if the resource group is deleted.
+
+### Recover a deleted Automation account
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Go to your Automation account.
+1. On the **Automation Accounts** page, select **Manage deleted accounts**.
+
+ :::image type="content" source="media/restore-deleted-account/automation-accounts-main-page-inline.png" alt-text="Screenshot showing the selection of Manage deleted accounts option." lightbox="media/restore-deleted-account/automation-accounts-main-page-expanded.png":::
+
+1. In the **Manage deleted automation accounts** pane, select **Subscription** from the drop-down list.
+
+ :::image type="content" source="media/restore-deleted-account/select-subscription-inline.png" alt-text="Screenshot showing the selection of subscription." lightbox="media/restore-deleted-account/select-subscription-expanded.png":::
+
+ The list of deleted accounts in that subscription is displayed.
+
+1. Select the checkboxes for the accounts you want to restore, and then select **Recover**.
+
+ :::image type="content" source="media/restore-deleted-account/recover-automation-account-inline.png" alt-text="Screenshot showing the recovery of deleted Automation account." lightbox="media/restore-deleted-account/recover-automation-account-expanded.png":::
+
+ A notification appears confirming that the account is restored.
+
+ :::image type="content" source="media/restore-deleted-account/notification-inline.png" alt-text="Screenshot showing the notification of restoring the deleted Automation account." lightbox="media/restore-deleted-account/notification-expanded.png":::
## Next steps To create an Automation account from the Azure portal, see [Create a standalone Azure Automation account](automation-create-standalone-account.md). If you prefer to create your account using a template, see [Create an Automation account using an Azure Resource Manager template](quickstart-create-automation-account-template.md).
automation Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/whats-new-archive.md
Automation account and State Configuration availability in Japan West region. Fo
**Type :** New feature
-You can use the new Azure Policy compliance rule to allow creation of jobs, webhooks, and job schedules to run only on Hybrid Worker groups.
+You can use the new Azure Policy compliance rule to allow creation of jobs, webhooks, and job schedules to run only on Hybrid Worker groups. Read the details in [Use Azure Policy to enforce job execution on Hybrid Runbook Worker](/azure/automation/enforce-job-execution-hybrid-worker?tabs=azure-cli).
### Update Management availability in East US, France Central, and North Europe regions
azure-app-configuration Quickstart Feature Flag Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-feature-flag-dotnet.md
The .NET Feature Management libraries extend the framework with feature flag sup
## Connect to an App Configuration store
-1. Right-click your project, and select **Manage NuGet Packages**. On the **Browse** tab, search and add the following NuGet packages to your project. If you can't find them, select the **Include prerelease** check box.
+1. Right-click your project, and select **Manage NuGet Packages**. On the **Browse** tab, search and add the following NuGet packages to your project.
``` Microsoft.Extensions.DependencyInjection
azure-arc Configure Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/configure-managed-instance.md
Previously updated : 11/03/2021 Last updated : 02/22/2022
This article explains how to configure Azure Arc-enabled SQL managed instance.
### Configure using CLI
-You can edit the configuration of Azure Arc-enabled SQL Managed Instances with the CLI. Run the following command to see configuration options.
+You can update the configuration of Azure Arc-enabled SQL Managed Instances with the CLI. Run the following command to see configuration options.
```azurecli
-az sql mi-arc edit --help
+az sql mi-arc update --help
``` You can update the available memory and cores for an Azure Arc-enabled SQL managed instance using the following command: ```azurecli
-az sql mi-arc edit --cores-limit 4 --cores-request 2 --memory-limit 4Gi --memory-request 2Gi -n <NAME_OF_SQL_MI> --k8s-namespace <namespace> --use-k8s
+az sql mi-arc update --cores-limit 4 --cores-request 2 --memory-limit 4Gi --memory-request 2Gi -n <NAME_OF_SQL_MI> --k8s-namespace <namespace> --use-k8s
``` The following example sets the cpu core and memory requests and limits. ```azurecli
-az sql mi-arc edit --cores-limit 4 --cores-request 2 --memory-limit 4Gi --memory-request 2Gi -n sqlinstance1 --k8s-namespace arc --use-k8s
+az sql mi-arc update --cores-limit 4 --cores-request 2 --memory-limit 4Gi --memory-request 2Gi -n sqlinstance1 --k8s-namespace arc --use-k8s
``` To view the changes made to the Azure Arc-enabled SQL managed instance, you can use the following commands to view the configuration yaml file:
You can configure server configuration settings for Azure Arc-enabled SQL manage
SQL Server agent is disabled by default. It can be enabled by running the following command: ```azurecli
-az sql mi-arc edit -n <NAME_OF_SQL_MI> --k8s-namespace <namespace> --use-k8s --agent-enabled true
+az sql mi-arc update -n <NAME_OF_SQL_MI> --k8s-namespace <namespace> --use-k8s --agent-enabled true
``` As an example: ```azurecli
-az sql mi-arc edit -n sqlinstance1 --k8s-namespace arc --use-k8s --agent-enabled true
+az sql mi-arc update -n sqlinstance1 --k8s-namespace arc --use-k8s --agent-enabled true
``` ### Enable Trace flags Trace flags can be enabled as follows: ```azurecli
-az sql mi-arc edit -n <NAME_OF_SQL_MI> --k8s-namespace <namespace> --use-k8s --trace-flags "3614,1234"
+az sql mi-arc update -n <NAME_OF_SQL_MI> --k8s-namespace <namespace> --use-k8s --trace-flags "3614,1234"
```
azure-functions Functions Reference Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-python.md
First, the function.json file must be updated to include a `route` in the HTTP t
```json {
-"scriptFile": "__init__.py",
-"bindings": [
- {
- "authLevel": "anonymous",
- "type": "httpTrigger",
- "direction": "in",
- "name": "req",
- "methods": [
- "get",
- "post"
- ],
- "route": "/{*route}"
- },
- {
- "type": "http",
- "direction": "out",
- "name": "$return"
- }
-]
+ "scriptFile": "__init__.py",
+ "bindings": [
+ {
+ "authLevel": "anonymous",
+ "type": "httpTrigger",
+ "direction": "in",
+ "name": "req",
+ "methods": [
+ "get",
+ "post"
+ ],
+ "route": "/{*route}"
+ },
+ {
+ "type": "http",
+ "direction": "out",
+ "name": "$return"
+ }
+ ]
} ```
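To make the effect of the `"route": "/{*route}"` catch-all concrete, here is a hypothetical stdlib-only Python sketch. The real matching is done by the Functions host, not a regex; this model only shows that everything after the base path ends up in a single `route` parameter the handler can inspect.

```python
import re

# Hypothetical model of the "/{*route}" catch-all: capture the remainder
# of the request path after the /api base into a single "route" value.
CATCH_ALL = re.compile(r"^/api/(?P<route>.*)$")

def extract_route(path):
    """Return the catch-all 'route' value for a request path, or None."""
    m = CATCH_ALL.match(path)
    return m.group("route") if m else None

print(extract_route("/api/orders/42/items"))  # orders/42/items
print(extract_route("/health"))               # None (outside the base path)
```

In a real Python function, the handler would read the same value from the request's route parameters rather than parsing the path itself.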
azure-government Compare Azure Government Global Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compare-azure-government-global-azure.md
recommendations: false Previously updated : 02/11/2022 Last updated : 02/22/2022 # Compare Azure Government and global Azure
Table below lists API endpoints in Azure vs. Azure Government for accessing and
||Azure SQL Database|database.windows.net|database.usgovcloudapi.net|| |**Identity**|Azure AD|login.microsoftonline.com|login.microsoftonline.us|| |||certauth.login.microsoftonline.com|certauth.login.microsoftonline.us||
+|||passwordreset.microsoftonline.com|passwordreset.microsoftonline.us||
|**Integration**|Service Bus|servicebus.windows.net|servicebus.usgovcloudapi.net|| |**Internet of Things**|Azure IoT Hub|azure-devices.net|azure-devices.us|| ||Azure Maps|atlas.microsoft.com|atlas.azure.us||
azure-government Azure Services In Fedramp Auditscope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compliance/azure-services-in-fedramp-auditscope.md
recommendations: false Previously updated : 01/26/2022 Last updated : 02/18/2022 # Azure, Dynamics 365, Microsoft 365, and Power Platform services compliance scope
For current Azure Government regions and available services, see [Products avail
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and Power Platform cloud services in scope for FedRAMP High, DoD IL2, DoD IL4, DoD IL5, and DoD IL6 authorizations across Azure, Azure Government, and Azure Government Secret cloud environments. For other authorization details in Azure Government Secret and Azure Government Top Secret, contact your Microsoft account representative. ## Azure public services by audit scope
-*Last updated: January 2022*
+*Last updated: February 2022*
### Terminology used
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure Service Health](https://azure.microsoft.com/features/service-health/) | &#x2705; | &#x2705; | | [Azure Service Manager (RDFE)](/previous-versions/azure/ee460799(v=azure.100)) | &#x2705; | &#x2705; | | **Service** | **FedRAMP High** | **DoD IL2** |
+| [Azure Sign-up portal](https://signup.azure.com/) | &#x2705; | &#x2705; |
| [Azure SignalR Service](https://azure.microsoft.com/services/signalr-service/) | &#x2705; | &#x2705; | | [Azure Site Recovery](https://azure.microsoft.com/services/site-recovery/) | &#x2705; | &#x2705; | | [Azure Sphere](https://azure.microsoft.com/services/azure-sphere/) | &#x2705; | &#x2705; |
+| [Azure Spring Cloud](https://azure.microsoft.com/services/spring-cloud/) | &#x2705; | &#x2705; |
| [Azure SQL Database](https://azure.microsoft.com/services/sql-database/) | &#x2705; | &#x2705; | | [Azure Stack Edge](https://azure.microsoft.com/products/azure-stack/edge/) (formerly Data Box Edge) **&ast;** | &#x2705; | &#x2705; | | [Azure Stream Analytics](https://azure.microsoft.com/services/stream-analytics/) | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
**&ast;&ast;** FedRAMP High authorization for Azure Databricks is applicable to limited regions in Azure. To configure Azure Databricks for FedRAMP High use, contact your Microsoft or Databricks representative. ## Azure Government services by audit scope
-*Last updated: January 2022*
+*Last updated: February 2022*
### Terminology used
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure AD Multi-Factor Authentication](../../active-directory/authentication/concept-mfa-howitworks.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Azure Advisor](https://azure.microsoft.com/services/advisor/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Azure Analysis Services](https://azure.microsoft.com/services/analysis-services/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Azure Arc-enabled Kubernetes](../../azure-arc/kubernetes/overview.md) | &#x2705; | &#x2705; | | | |
+| [Azure Arc-enabled Servers](../../azure-arc/servers/overview.md) | &#x2705; | &#x2705; | | | |
| [Azure Archive Storage](https://azure.microsoft.com/services/storage/archive/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Azure Backup](https://azure.microsoft.com/services/backup/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Azure Bastion](https://azure.microsoft.com/services/azure-bastion/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** | | [Azure Resource Graph](../../governance/resource-graph/overview.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Azure Resource Manager](https://azure.microsoft.com/features/resource-manager/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Azure Resource Mover](https://azure.microsoft.com/services/resource-mover/) | &#x2705; | &#x2705; | | | |
+| [Azure Route Server](https://azure.microsoft.com/services/route-server/) | &#x2705; | &#x2705; | | | |
| [Azure Scheduler](../../scheduler/index.yml) (replaced by [Azure Logic Apps](https://azure.microsoft.com/services/logic-apps/)) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Azure Service Fabric](https://azure.microsoft.com/services/service-fabric/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Azure Service Health](https://azure.microsoft.com/features/service-health/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
azure-monitor Alerts Common Schema Definitions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-common-schema-definitions.md
Any alert instance describes the resource that was affected and the cause of the
```json { "alertContext": {
- "properties": null,
+ "properties": {
+ "name1": "value1",
+ "name2": "value2"
+ },
"conditionType": "LogQueryCriteria", "condition": { "windowSize": "PT10M", "allOf": [ { "searchQuery": "Heartbeat",
- "metricMeasure": null,
+ "metricMeasureColumn": "CounterValue",
"targetResourceTypes": "['Microsoft.Compute/virtualMachines']", "operator": "LowerThan", "threshold": "1",
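To show how a consumer might read a payload shaped like the sample above, here is a small stdlib-only Python sketch. The field names come from the sample itself; the surrounding code is illustrative, not part of any Azure SDK.

```python
import json

# Sketch: extract the fired criteria from a common-alert-schema payload
# (field names taken from the LogQueryCriteria sample above).
payload = json.loads("""
{
  "alertContext": {
    "properties": {"name1": "value1", "name2": "value2"},
    "conditionType": "LogQueryCriteria",
    "condition": {
      "windowSize": "PT10M",
      "allOf": [
        {
          "searchQuery": "Heartbeat",
          "metricMeasureColumn": "CounterValue",
          "operator": "LowerThan",
          "threshold": "1"
        }
      ]
    }
  }
}
""")

for criterion in payload["alertContext"]["condition"]["allOf"]:
    print(criterion["searchQuery"], criterion["metricMeasureColumn"],
          criterion["operator"], criterion["threshold"])
```

Note that `properties` is now a populated name/value bag and `metricMeasureColumn` replaces the old `metricMeasure` field, which is exactly what the diff above changes.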
azure-monitor Alerts Log Api Switch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-log-api-switch.md
description: Learn how to switch to the log alerts management to ScheduledQueryR
Previously updated : 01/25/2022 Last updated : 02/22/2022 # Upgrade legacy rules management to the current Log Alerts API from legacy Log Analytics Alert API
$switchJSON = '{"scheduledQueryRulesEnabled": true}'
armclient PUT /subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.OperationalInsights/workspaces/<workspaceName>/alertsversion?api-version=2017-04-26-preview $switchJSON ```
+You can also use the [Azure CLI](/cli/azure/reference-index#az-rest) tool:
+
+```bash
+az rest --method put --url /subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.OperationalInsights/workspaces/<workspaceName>/alertsversion?api-version=2017-04-26-preview --body '{"scheduledQueryRulesEnabled": true}'
+```
+ If the switch is successful, the response is: ```json
You can also use [ARMClient](https://github.com/projectkudu/ARMClient) tool:
armclient GET /subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.OperationalInsights/workspaces/<workspaceName>/alertsversion?api-version=2017-04-26-preview ```
+You can also use the [Azure CLI](/cli/azure/reference-index#az-rest) tool:
+
+```bash
+az rest --method get --url /subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.OperationalInsights/workspaces/<workspaceName>/alertsversion?api-version=2017-04-26-preview
+```
+ If the Log Analytics workspace was switched to [scheduledQueryRules API](/rest/api/monitor/scheduledqueryrule-2021-08-01/scheduled-query-rules), the response is: ```json
If the Log Analytics workspace wasn't switched, the response is:
- Learn about the [Azure Monitor - Log Alerts](./alerts-unified-log.md). - Learn how to [manage your log alerts using the API](alerts-log-create-templates.md). - Learn how to [manage log alerts using PowerShell](./alerts-manage-alerts-previous-version.md#manage-log-alerts-using-powershell).-- Learn more about the [Azure Alerts experience](./alerts-overview.md).
+- Learn more about the [Azure Alerts experience](./alerts-overview.md).
azure-monitor Change Analysis Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/change-analysis-troubleshoot.md
description: Learn how to troubleshoot problems in Application Change Analysis.
Previously updated : 01/07/2022 Last updated : 02/17/2022
This general unauthorized error message occurs when the current user does not ha
* To view infrastructure changes returned by Azure Resource Graph and Azure Resource Manager, reader access is required. * For web app in-guest file changes and configuration changes, contributor role is required.
-You may not immediately see web app in-guest file changes and configuration changes. While we work on providing the option to restart the app in the Azure portal, the current procedure is:
+## Cannot see in-guest changes for a newly enabled web app
-1. User adds the hidden tracking tag, notifying the scheduled worker.
-2. Scheduled worker scans the web app within a few hours.
-3. While scanning, scheduled worker creates a handshake file via AST.
-4. The Web App team checks that handshake file when it restarts.
+You may not immediately see web app in-guest file changes and configuration changes. Restart your web app and you should be able to view changes within 30 minutes. If not, contact the [Change Analysis help team](mailto:changeanalysishelp@microsoft.com).
## Diagnose and solve problems tool for virtual machines
azure-percept Create People Counting Solution With Azure Percept Devkit Vision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/create-people-counting-solution-with-azure-percept-devkit-vision.md
Title: Create a people counting solution with Azure Percept Vision description: This guide will focus on detecting and counting people using the Azure Percept DK hardware, Azure IoT Hub, Azure Stream Analytics, and Power BI dashboard. -+
azure-percept Voice Control Your Inventory Then Visualize With Power Bi Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/voice-control-your-inventory-then-visualize-with-power-bi-dashboard.md
Title: Voice control your inventory and visualize with Power BI dashboard
+ Title: Voice control your inventory with Azure Percept Audio
description: This article will give detailed instructions for building the main components of the solution and deploying the edge speech AI.-+
-# Tutorial: Voice control your inventory and visualize with Power BI Dashboard
-This article will give detailed instructions for building the main components of the solution and deploying the edge speech AI. The solution uses Azure Percept DK device and the Audio SoM, Azure Speech Services -Custom Commands, Azure Function App, SQL Database, and Power BI. Users can learn how to manage their inventory with voice using Azure Percept Audio and visualize results with Power BI. The goal of this article is to empower users to create a basic inventory management solution.
+# Voice control your inventory with Azure Percept Audio
+This article will give detailed instructions for building the main components of the solution and deploying the edge speech AI. The solution uses the Azure Percept DK device and the Audio SoM, Azure Speech Services Custom Commands, Azure Function App, SQL Database, and Power BI. Users can learn how to manage their inventory with voice using Azure Percept Audio and visualize results with Power BI. The goal of this article is to empower users to create a basic inventory management solution.
Users who want to take their solution further can add an additional edge module for visual inventory inspection or expand on the inventory visualizations within Power BI.
In this tutorial, you learn how to:
- Create an [Azure SQL server](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/azure-sql/database/single-database-create-quickstart.md)
-## Software Architecture
+## Software architecture
![Solution Architecture](./media/voice-control-your-inventory-images/voice-control-solution-architect.png)
-## Section 1: Create an Azure SQL Server and SQL Database
+## Step 1: Create an Azure SQL Server and SQL Database
In this section, you will learn how to create the table for this lab. This table will be the main source of truth for your current inventory and the basis of data visualized in Power BI. 1. Set SQL server firewall
In this section, you will learn how to create the table for this lab. This table
```
- ![create the table](./media/voice-control-your-inventory-images/create-sql-table.png)
+ :::image type="content" source="./media/voice-control-your-inventory-images/create-sql-table.png" alt-text="Create SQL table.":::
-## Section 2: Create an Azure function project and publish to Azure
+## Step 2: Create an Azure Functions project and publish to Azure
In this section, you will use Visual Studio Code to create a local Azure Functions project in Python. Later in this article, you'll publish your function code to Azure. 1. Go to the [GitHub link](https://github.com/microsoft/Azure-Percept-Reference-Solutions/tree/main/voice-control-inventory-management) and clone the repository 1. Click Code and HTTPS tab
- ![Code and HTTPS tab](./media/voice-control-your-inventory-images/clone-git.png)
+ :::image type="content" source="./media/voice-control-your-inventory-images/clone-git.png" alt-text="Code and HTTPS tab.":::
2. Copy the command below in your terminal to clone the repository ![clone the repository](./media/voice-control-your-inventory-images/clone-git-command.png)
In this section, you will use Visual Studio Code to create a local Azure Functio
git clone https://github.com/microsoft/Azure-Percept-Reference-Solutions/tree/main/voice-control-inventory-management ```
-2. Enable Azure Function
+2. Enable Azure Functions.
+ 1. Click Azure Logo in the task bar ![Azure Logo in the task bar](./media/voice-control-your-inventory-images/select-azure-icon.png)
In this section, you will use Visual Studio Code to create a local Azure Functio
3. Create your local project 1. Create a folder (ex: airlift_az_func) for your project workspace ![Create a folder](./media/voice-control-your-inventory-images/create-new-folder.png)
- 2. Choose the Azure icon in the Activity bar, then in the Azure: Functions area, select the <strong>Create new project...</strong> icon
+ 2. Choose the Azure icon in the Activity bar, then in Functions, select the <strong>Create new project...</strong> icon.
![select Azure icon](./media/voice-control-your-inventory-images/select-function-visio-studio.png) 3. Choose the directory location you just created for your project workspace and choose **Select**. ![the directory location](./media/voice-control-your-inventory-images/select-airlift-folder.png)
In this section, you will use Visual Studio Code to create a local Azure Functio
![Provide a function name](./media/voice-control-your-inventory-images/http-example.png) 8. <strong>Authorization level</strong>: Choose <strong>Anonymous</strong>, which enables anyone to call your function endpoint. To learn about authorization level, see [Authorization keys](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/azure-functions/functions-bindings-http-webhook-trigger.md). ![power pi dashboard](./media/voice-control-your-inventory-images/create-http-trigger.png)
- 9. <strong>Select how you would like to open your project</strong>: Choose Add to workspace. Trust folder and enable all features
+ 9. <strong>Select how you would like to open your project</strong>: Choose Add to workspace. Trust folder and enable all features.
+
![Authorization keys](./media/voice-control-your-inventory-images/trust-authorize.png)
- 10. You will see the HTTPExample function has been initiated
+ 1. You will see the HTTPExample function has been initiated
![ HTTPExample function](./media/voice-control-your-inventory-images/modify-init-py.png) 4. Develop CRUD.py to update Azure SQL on Azure Function 1. Replace the content of the <strong>__init__.py</strong> in [here](https://github.com/microsoft/Azure-Percept-Reference-Solutions/blob/main/voice-control-inventory-management/azure-functions/__init__.py) by copying the raw content of <strong>__init__.py</strong>
- [ ![copying the raw content](./media/voice-control-your-inventory-images/copy-raw-content-mini.png) ](./media/voice-control-your-inventory-images/copy-raw-content.png#lightbox)
+ :::image type="content" source="./media/voice-control-your-inventory-images/copy-raw-content-mini.png" alt-text="Copy raw contents." lightbox="./media/voice-control-your-inventory-images/copy-raw-content.png":::
2. Drag and drop the <strong>CRUD.py</strong> to the same layer of <strong>init.py</strong> ![Drag and drop-1](./media/voice-control-your-inventory-images/crud-file.png) ![Drag and drop-2](./media/voice-control-your-inventory-images/show-crud-file.png) 3. Update the value of the <strong>sql server full address</strong>, <strong>database</strong>, <strong>username</strong>, <strong>password</strong> you created in section 1 in <strong>CRUD.py</strong>
- [ ![Update the value-1](./media/voice-control-your-inventory-images/server-name-mini.png) ](./media/voice-control-your-inventory-images/server-name.png#lightbox)
+ :::image type="content" source="./media/voice-control-your-inventory-images/server-name-mini.png" alt-text="Update the values." lightbox="./media/voice-control-your-inventory-images/server-name.png":::
![Update the value-2](./media/voice-control-your-inventory-images/server-parameter.png) 4. Replace the content of the <strong>requirements.txt</strong> in here by copying the raw content of requirements.txt ![Replace the content-1](./media/voice-control-your-inventory-images/select-requirements-u.png)
- [ ![Replace the content-2](./media/voice-control-your-inventory-images/view-requirement-file-mini.png) ](./media/voice-control-your-inventory-images/view-requirement-file.png#lightbox)
+ :::image type="content" source="./media/voice-control-your-inventory-images/view-requirement-file-mini.png" alt-text="Replace the content." lightbox="./media/voice-control-your-inventory-images/view-requirement-file.png":::
 5. Press "Ctrl + S" to save the content 5. Sign in to Azure
In this section, you will use Visual Studio Code to create a local Azure Functio
1. When the function executes in Azure and returns a response, a notification is raised in Visual Studio Code. ![a notification](./media/voice-control-your-inventory-images/example-output.png)
-## Section 3: Import an Inventory Speech template to Custom Commands
+## Step 3: Import an inventory speech template to Custom Commands
In this section, you will import an existing application config json file to Custom Commands. 1. Create an Azure Speech resource in a region that supports Custom Commands.
- 1. Click [Create Speech Services portal](https://portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices) to create an Azure Speech resource
+ 1. Click [Create Speech Services portal](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices) to create an Azure Speech resource
1. Select your Subscription 2. Use the Resource group you just created in exercise 1 3. Select the Region(Please check here to see the support region in custom commands)
In this section, you will import an existing application config json file to Cus
1. In a web browser, go to [Speech Studio](https://speech.microsoft.com/portal). 2. Select <strong>Custom Commands</strong>. The default view is a list of the Custom Commands applications you have under your selected subscription.
- ![Custom Commands applications](./media/voice-control-your-inventory-images/cognitive-service.png)
+ :::image type="content" source="./media/voice-control-your-inventory-images/cognitive-service.png" alt-text="Custom Commands applications.":::
3. Select your Speech <strong>subscription</strong> and <strong>resource group</strong>, and then select <strong>Use resource</strong>. ![Select your Speech](./media/voice-control-your-inventory-images/speech-studio.png) 3. Import an existing application config as a new Custom Commands project
In this section, you will import an existing application config json file to Cus
10. Next, select <strong>Create</strong> to create your project. After the project is created, select your project. You should now see overview of your new Custom Commands application.
-## Section 4: Train, test, and publish the Custom Command
+## Step 4: Train, test, and publish the Custom Commands
In this section, you will train, test, and publish your Custom Commands 1. Replace the web endpoints URL 1. Click Web endpoints and replace the URL 2. Replace the value in the URL to the <strong>HTTP Trigger Url</strong> you noted down in section 2 (ex: `https://xxx.azurewebsites.net/api/httpexample`)
- ![Replace the value in the URL](./media/voice-control-your-inventory-images/web-point-url.png)
+ :::image type="content" source="./media/voice-control-your-inventory-images/web-point-url.png" alt-text="Replace the value in the URL.":::
2. Create LUIS prediction resource 1. Click <strong>settings</strong> and create a <strong>S0</strong> prediction resource under LUIS <strong>prediction resource</strong>.
- ![prediction resource-1](./media/voice-control-your-inventory-images/predict-source.png)
+ :::image type="content" source="./media/voice-control-your-inventory-images/predict-source.png" alt-text="Prediction resource-1.":::
![prediction resource-2](./media/voice-control-your-inventory-images/tier-s0.png) 3. Train and Test with your custom command 1. Click <strong>Save</strong> to save the Custom Commands Project 2. Click <strong>Train</strong> to Train your custom commands service
- ![custom commands service-1](./media/voice-control-your-inventory-images/train-model.png)
+ :::image type="content" source="./media/voice-control-your-inventory-images/train-model.png" alt-text="Custom commands train model.":::
3. Click <strong>Test</strong> to test your custom commands service
- ![custom commands service-2](./media/voice-control-your-inventory-images/test-model.png)
+ :::image type="content" source="./media/voice-control-your-inventory-images/test-model.png" alt-text="Custom commands test model.":::
 4. Type "Add 2 green boxes" in the pop-up window to see if it can respond correctly ![pop-up window](./media/voice-control-your-inventory-images/outcome.png) 4. Publish your custom command 1. Click Publish to publish the custom commands
- ![publish the custom commands](./media/voice-control-your-inventory-images/publish.png)
+ :::image type="content" source="./media/voice-control-your-inventory-images/publish.png" alt-text="Publish the custom commands.":::
5. Note down your application ID and speech key in the settings for further use
- ![application id](./media/voice-control-your-inventory-images/application-id.png)
+ :::image type="content" source="./media/voice-control-your-inventory-images/application-id.png" alt-text="Application ID.":::
-## Section 5: Deploy modules to your Devkit
+## Step 5: Deploy modules to your Devkit
In this section, you will learn how to use a deployment manifest to deploy modules to your device.
1. Set IoT Hub Connection String
    1. Go to your IoT Hub service in the Azure portal. Click <strong>Shared access policies</strong> -> <strong>Iothubowner</strong>
    2. Click <strong>Copy</strong> to get the <strong>primary connection string</strong>
- ![primary connection string](./media/voice-control-your-inventory-images/iot-hub-owner.png)
+ :::image type="content" source="./media/voice-control-your-inventory-images/iot-hub-owner.png" alt-text="Primary connection string.":::
    3. In the Explorer of VS Code, click "Azure IoT Hub".

        ![click on hub](./media/voice-control-your-inventory-images/azure-iot-hub-studio.png)
    4. Click "Set IoT Hub Connection String" in the context menu
In this section, you will learn how to use deployment manifest to deploy modules
![choose device](./media/voice-control-your-inventory-images/iot-hub-device.png)
4. Check your log of the azurespeechclient module
    1. Go to the Azure portal and click your Azure IoT Hub
- ![select hub](./media/voice-control-your-inventory-images/voice-iothub.png)
+ :::image type="content" source="./media/voice-control-your-inventory-images/voice-iothub.png" alt-text="Select IoT hub.":::
2. Click IoT Edge
- ![go to edge](./media/voice-control-your-inventory-images/portal-iotedge.png)
+ :::image type="content" source="./media/voice-control-your-inventory-images/portal-iotedge.png" alt-text="Go to IoT edge.":::
3. Click your Edge device to see if the modules run well
- ![confirm module](./media/voice-control-your-inventory-images/device-id.png)
+ :::image type="content" source="./media/voice-control-your-inventory-images/device-id.png" alt-text="Confirm modules.":::
4. Click <strong>azureearspeechclientmodule</strong> module
- ![select ear mod](./media/voice-control-your-inventory-images/azure-ear-module.png)
+ :::image type="content" source="./media/voice-control-your-inventory-images/azure-ear-module.png" alt-text="Select ear module.":::
5. Click the <strong>Troubleshooting</strong> tab of the azurespeechclientmodule
- ![selct client mod](./media/voice-control-your-inventory-images/troubleshoot.png)
+ ![select client mod](./media/voice-control-your-inventory-images/troubleshoot.png)
5. Check your log of the azurespeechclient module
    1. Change the Time range to 3 minutes to check the latest log
In this section, you will learn how to use deployment manifest to deploy modules
2. Speak <strong>"Computer, remove 2 red boxes"</strong> to your Azure Percept Audio ("Computer" is the wake word to wake Azure Percept DK, and "remove 2 red boxes" is the command). Check the speech log to see if it shows <strong>"sure, remove 2 red boxes. 2 red boxes have been removed."</strong>
- ![verify log](./media/voice-control-your-inventory-images/speech-regconizing.png)
+ :::image type="content" source="./media/voice-control-your-inventory-images/speech-regconizing.png" alt-text="Verify log.":::
> [!NOTE]
> If you have set up the wake word before, please use the wake word you set up to wake your DK.
-## Section 6: Import dataset from Azure SQL to Power BI
+## Step 6: Import dataset from Azure SQL to Power BI
In this section, you will create a Power BI report and check if the report has been updated after you speak commands to your Azure Percept Audio.
1. Open the Power BI Desktop application and import data from Azure SQL Server
    1. Close the pop-up window
In this section, you will create a Power BI report and check if the report has b
2. Click "Refresh". You will see the number of green boxes has been updated.

    ![Power BI report-9](./media/voice-control-your-inventory-images/refresh-power-bi.png)
-Congratulations! You finally know how to develop your own voice assistant. It is not easy to configure such a lot of configurations and set up the custom commands for the first time. But you did it! You can start trying more complex scenarios after this tutorial. Looking forward to seeing you design more interesting scenarios and let voice assistant help in the future.
+Congratulations! You now know how to develop your own voice assistant. You worked through a lot of configuration and set up custom commands for the first time. Great job! After this tutorial, you can start trying more complex scenarios. We look forward to seeing you design more interesting scenarios where a voice assistant can help.
<!-- 6. Clean up resources Required. If resources were created during the tutorial. If no resources were created,
resources with the following steps:
1. Log in to the [Azure portal](https://portal.azure.com), go to the `Resource Group` you have been using for this tutorial. Delete the SQL DB, Azure Function, and Speech Service resources.
-2. Go into [Azure Percept Studio](https://portal.azure.com/#blade/AzureEdgeDevices/Main/overview), select your device from the `Device` blade, click the `Speech` tab within your device, and under `Configuration` remove reference to your custom command.
+2. Go into [Azure Percept Studio](https://ms.portal.azure.com/#blade/AzureEdgeDevices/Main/overview), select your device from the `Device` blade, click the `Speech` tab within your device, and under `Configuration` remove reference to your custom command.
3. Go in to [Speech Studio](https://speech.microsoft.com/portal) and delete project created for this tutorial.
azure-resource-manager Outputs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/outputs.md
Title: Outputs in Bicep description: Describes how to define output values in Bicep Previously updated : 11/12/2021 Last updated : 02/20/2022 # Outputs in Bicep
To get an output value from a module, use the following syntax:
```bicep
<module-name>.outputs.<property-name>
```
-The following example shows how to set the IP address on a load balancer by retrieving a value from a module. The name of the module is `publicIP`.
+The following example shows how to set the IP address on a load balancer by retrieving a value from a module.
-```bicep
-publicIPAddress: {
- id: publicIP.outputs.resourceID
-}
-```
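For reference, a minimal sketch of what such a module-output reference can look like, based on the example removed above (the module name `publicIP` and its `resourceID` output are illustrative and assume the module declares a matching `output`):

```bicep
// Sketch: assumes a module declared as "module publicIP '...' = {...}"
// that exposes "output resourceID string = ...".
publicIPAddress: {
  id: publicIP.outputs.resourceID
}
```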
## Get output values
azure-resource-manager Private Module Registry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/private-module-registry.md
Title: Create private registry for Bicep module description: Learn how to set up an Azure container registry for private Bicep modules Previously updated : 01/03/2022 Last updated : 02/21/2022 # Create private registry for Bicep modules
To share [modules](modules.md) within your organization, you can create a privat
To work with module registries, you must have [Bicep CLI](./install.md) version **0.4.1008 or later**. To use with Azure CLI, you must also have version **2.31.0 or later**; to use with Azure PowerShell, you must also have version **7.0.0 or later**.
+### Microsoft Learn
+
+If you would rather learn about private module registries through step-by-step guidance, see [Share Bicep modules by using private registries](/learn/modules/share-bicep-modules-using-private-registries) on **Microsoft Learn**.
+
## Configure private registry

A Bicep registry is hosted on [Azure Container Registry (ACR)](../../container-registry/container-registry-intro.md). Use the following steps to configure your registry for modules.
azure-resource-manager Move Support Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-support-resources.md
Title: Move operation support by resource type description: Lists the Azure resource types that can be moved to a new resource group, subscription, or region. Previously updated : 12/30/2021 Last updated : 02/22/2022 # Move operation support for resources
Jump to a resource provider namespace:
> - [Microsoft.CognitiveServices](#microsoftcognitiveservices)
> - [Microsoft.Commerce](#microsoftcommerce)
> - [Microsoft.Compute](#microsoftcompute)
+> - [Microsoft.Confluent](#microsoftconfluent)
> - [Microsoft.Consumption](#microsoftconsumption)
> - [Microsoft.ContainerInstance](#microsoftcontainerinstance)
> - [Microsoft.ContainerRegistry](#microsoftcontainerregistry)
Jump to a resource provider namespace:
> | virtualmachines / extensions | Yes | Yes | No |
> | virtualmachinescalesets | Yes | Yes | No |
+## Microsoft.Confluent
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Resource group | Subscription | Region move |
+> | - | -- | - | -- |
+> | organizations | No | No | No |
+
## Microsoft.Consumption

> [!div class="mx-tableFixed"]
azure-resource-manager Template Functions Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-functions-resource.md
The possible uses of `list*` are shown in the following table.
| Microsoft.AppConfiguration/configurationStores | [ListKeys](/rest/api/appconfiguration/configurationstores/listkeys) |
| Microsoft.AppPlatform/Spring | [listTestKeys](/rest/api/azurespringcloud/services/listtestkeys) |
| Microsoft.Automation/automationAccounts | [listKeys](/rest/api/automation/keys/listbyautomationaccount) |
-| Microsoft.Batch/batchAccounts | [listkeys](/rest/api/batchmanagement/batchaccount/getkeys) |
+| Microsoft.Batch/batchAccounts | [listKeys](/rest/api/batchmanagement/batchaccount/getkeys) |
| Microsoft.BatchAI/workspaces/experiments/jobs | listoutputfiles |
| Microsoft.Blockchain/blockchainMembers | [listApiKeys](/rest/api/blockchain/2019-06-01-preview/blockchainmembers/listapikeys) |
| Microsoft.Blockchain/blockchainMembers/transactionNodes | [listApiKeys](/rest/api/blockchain/2019-06-01-preview/transactionnodes/listapikeys) |
The possible uses of `list*` are shown in the following table.
| Microsoft.DataShare/accounts/shareSubscriptions | [listSourceShareSynchronizationSettings](/rest/api/datashare/2020-09-01/sharesubscriptions/listsourcesharesynchronizationsettings) |
| Microsoft.DataShare/accounts/shareSubscriptions | [listSynchronizationDetails](/rest/api/datashare/2020-09-01/sharesubscriptions/listsynchronizationdetails) |
| Microsoft.DataShare/accounts/shareSubscriptions | [listSynchronizations](/rest/api/datashare/2020-09-01/sharesubscriptions/listsynchronizations) |
-| Microsoft.Devices/iotHubs | [listkeys](/rest/api/iothub/iothubresource/listkeys) |
-| Microsoft.Devices/iotHubs/iotHubKeys | [listkeys](/rest/api/iothub/iothubresource/getkeysforkeyname) |
-| Microsoft.Devices/provisioningServices/keys | [listkeys](/rest/api/iot-dps/iotdpsresource/listkeysforkeyname) |
-| Microsoft.Devices/provisioningServices | [listkeys](/rest/api/iot-dps/iotdpsresource/listkeys) |
+| Microsoft.Devices/iotHubs | [listKeys](/rest/api/iothub/iothubresource/listkeys) |
+| Microsoft.Devices/iotHubs/iotHubKeys | [listKeys](/rest/api/iothub/iothubresource/getkeysforkeyname) |
+| Microsoft.Devices/provisioningServices/keys | [listKeys](/rest/api/iot-dps/iotdpsresource/listkeysforkeyname) |
+| Microsoft.Devices/provisioningServices | [listKeys](/rest/api/iot-dps/iotdpsresource/listkeys) |
| Microsoft.DevTestLab/labs | [ListVhds](/rest/api/dtl/labs/listvhds) |
| Microsoft.DevTestLab/labs/schedules | [ListApplicable](/rest/api/dtl/schedules/listapplicable) |
| Microsoft.DevTestLab/labs/users/serviceFabrics | [ListApplicableSchedules](/rest/api/dtl/servicefabrics/listapplicableschedules) |
The possible uses of `list*` are shown in the following table.
| Microsoft.DomainRegistration/topLevelDomains | [listAgreements](/rest/api/appservice/topleveldomains/listagreements) |
| Microsoft.EventGrid/domains | [listKeys](/rest/api/eventgrid/controlplane-version2021-12-01/domains/list-shared-access-keys) |
| Microsoft.EventGrid/topics | [listKeys](/rest/api/eventgrid/controlplane-version2021-12-01/topics/list-shared-access-keys) |
-| Microsoft.EventHub/namespaces/authorizationRules | [listkeys](/rest/api/eventhub) |
-| Microsoft.EventHub/namespaces/disasterRecoveryConfigs/authorizationRules | [listkeys](/rest/api/eventhub) |
-| Microsoft.EventHub/namespaces/eventhubs/authorizationRules | [listkeys](/rest/api/eventhub) |
+| Microsoft.EventHub/namespaces/authorizationRules | [listKeys](/rest/api/eventhub) |
+| Microsoft.EventHub/namespaces/disasterRecoveryConfigs/authorizationRules | [listKeys](/rest/api/eventhub) |
+| Microsoft.EventHub/namespaces/eventhubs/authorizationRules | [listKeys](/rest/api/eventhub) |
| Microsoft.ImportExport/jobs | [listBitLockerKeys](/rest/api/storageimportexport/bitlockerkeys/list) |
| Microsoft.Kusto/Clusters/Databases | [ListPrincipals](/rest/api/azurerekusto/databases/listprincipals) |
| Microsoft.LabServices/users | [ListEnvironments](/rest/api/labservices/globalusers/listenvironments) |
The possible uses of `list*` are shown in the following table.
| Microsoft.OperationalInsights/workspaces | listKeys |
| Microsoft.PolicyInsights/remediations | [listDeployments](/rest/api/policy/remediations/listdeploymentsatresourcegroup) |
| Microsoft.RedHatOpenShift/openShiftClusters | [listCredentials](/rest/api/openshift/openshiftclusters/listcredentials) |
-| Microsoft.Relay/namespaces/authorizationRules | [listkeys](/rest/api/relay/namespaces/listkeys) |
-| Microsoft.Relay/namespaces/disasterRecoveryConfigs/authorizationRules | listkeys |
-| Microsoft.Relay/namespaces/HybridConnections/authorizationRules | [listkeys](/rest/api/relay/hybridconnections/listkeys) |
+| Microsoft.Relay/namespaces/authorizationRules | [listKeys](/rest/api/relay/namespaces/listkeys) |
+| Microsoft.Relay/namespaces/disasterRecoveryConfigs/authorizationRules | listKeys |
+| Microsoft.Relay/namespaces/HybridConnections/authorizationRules | [listKeys](/rest/api/relay/hybridconnections/listkeys) |
| Microsoft.Relay/namespaces/WcfRelays/authorizationRules | [listKeys](/rest/api/relay/wcfrelays/listkeys) |
| Microsoft.Search/searchServices | [listAdminKeys](/rest/api/searchmanagement/2021-04-01-preview/admin-keys/get) |
| Microsoft.Search/searchServices | [listQueryKeys](/rest/api/searchmanagement/2021-04-01-preview/query-keys/list-by-search-service) |
-| Microsoft.ServiceBus/namespaces/authorizationRules | [listkeys](/rest/api/servicebus/stable/namespaces-authorization-rules/list-keys) |
-| Microsoft.ServiceBus/namespaces/disasterRecoveryConfigs/authorizationRules | [listkeys](/rest/api/servicebus/stable/disasterrecoveryconfigs/listkeys) |
-| Microsoft.ServiceBus/namespaces/queues/authorizationRules | [listkeys](/rest/api/servicebus/stable/queues-authorization-rules/list-keys) |
-| Microsoft.ServiceBus/namespaces/topics/authorizationRules | [listkeys](/rest/api/servicebus/stable/topics%20%E2%80%93%20authorization%20rules/list-keys) |
-| Microsoft.SignalRService/SignalR | [listkeys](/rest/api/signalr/signalr/listkeys) |
+| Microsoft.ServiceBus/namespaces/authorizationRules | [listKeys](/rest/api/servicebus/stable/namespaces-authorization-rules/list-keys) |
+| Microsoft.ServiceBus/namespaces/disasterRecoveryConfigs/authorizationRules | [listKeys](/rest/api/servicebus/stable/disasterrecoveryconfigs/listkeys) |
+| Microsoft.ServiceBus/namespaces/queues/authorizationRules | [listKeys](/rest/api/servicebus/stable/queues-authorization-rules/list-keys) |
+| Microsoft.ServiceBus/namespaces/topics/authorizationRules | [listKeys](/rest/api/servicebus/stable/topics%20%E2%80%93%20authorization%20rules/list-keys) |
+| Microsoft.SignalRService/SignalR | [listKeys](/rest/api/signalr/signalr/listkeys) |
| Microsoft.Storage/storageAccounts | [listAccountSas](/rest/api/storagerp/storageaccounts/listaccountsas) |
-| Microsoft.Storage/storageAccounts | [listkeys](/rest/api/storagerp/storageaccounts/listkeys) |
+| Microsoft.Storage/storageAccounts | [listKeys](/rest/api/storagerp/storageaccounts/listkeys) |
| Microsoft.Storage/storageAccounts | [listServiceSas](/rest/api/storagerp/storageaccounts/listservicesas) |
| Microsoft.StorSimple/managers/devices | [listFailoverSets](/rest/api/storsimple/devices/listfailoversets) |
| Microsoft.StorSimple/managers/devices | [listFailoverTargets](/rest/api/storsimple/devices/listfailovertargets) |
The possible uses of `list*` are shown in the following table.
| microsoft.web/apimanagementaccounts/apis/connections | `listSecrets` |
| microsoft.web/sites/backups | [list](/rest/api/appservice/webapps/listbackups) |
| Microsoft.Web/sites/config | [list](/rest/api/appservice/webapps/listconfigurations) |
-| microsoft.web/sites/functions | [listkeys](/rest/api/appservice/webapps/listfunctionkeys)
+| microsoft.web/sites/functions | [listKeys](/rest/api/appservice/webapps/listfunctionkeys) |
| microsoft.web/sites/functions | [listSecrets](/rest/api/appservice/webapps/listfunctionsecrets) |
-| microsoft.web/sites/hybridconnectionnamespaces/relays | [listkeys](/rest/api/appservice/appserviceplans/listhybridconnectionkeys) |
+| microsoft.web/sites/hybridconnectionnamespaces/relays | [listKeys](/rest/api/appservice/appserviceplans/listhybridconnectionkeys) |
| microsoft.web/sites | [listsyncfunctiontriggerstatus](/rest/api/appservice/webapps/listsyncfunctiontriggers) |
| microsoft.web/sites/slots/functions | [listSecrets](/rest/api/appservice/webapps/listfunctionsecretsslot) |
| microsoft.web/sites/slots/backups | [list](/rest/api/appservice/webapps/listbackupsslot) |
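As a hedged illustration of how one of these `list*` functions is typically invoked in a template, the following fragment returns a storage account key with `listKeys` (the `storageAccountName` parameter and the API version are assumptions for this sketch, not part of the table above):

```json
{
  "outputs": {
    "storageKey": {
      "type": "string",
      "value": "[listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName')), '2021-09-01').keys[0].value]"
    }
  }
}
```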
azure-sql Data Discovery And Classification Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/data-discovery-and-classification-overview.md
ms.devlang: --++ Previously updated : 02/16/2022 Last updated : 02/22/2022 tags: azure-synapse # Data Discovery & Classification
azure-sql Vnet Service Endpoint Rule Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/vnet-service-endpoint-rule-overview.md
You must already have a subnet that's tagged with the particular virtual network
:::image type="content" source="../../sql-database/media/sql-database-vnet-service-endpoint-rule-overview/portal-firewall-vnet-firewalls-and-virtual-networks.png" alt-text="Azure SQL logical server properties, Firewalls and Virtual Networks highlighted" lightbox="../../sql-database/media/sql-database-vnet-service-endpoint-rule-overview/portal-firewall-vnet-firewalls-and-virtual-networks.png":::
-1. Set **Allow access to Azure services** to **OFF**.
+1. Set **Allow Azure services and resources to access this server** to **No**.
> [!IMPORTANT] > If you leave the control set to **ON**, your server accepts communication from any subnet inside the Azure boundary. That is communication that originates from one of the IP addresses that's recognized as those within ranges defined for Azure datacenters. Leaving the control set to **ON** might be excessive access from a security point of view. The Microsoft Azure Virtual Network service endpoint feature in coordination with the virtual network rules feature of SQL Database together can reduce your security surface area.
Error 40615 is similar, except it relates to *IP address rules* on the firewall.
- REST API Reference, including JSON - Azure CLI - ARM templates>
+-->
azure-sql Resource Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/resource-limits.md
Support for the premium-series hardware generations (public preview) is currentl
| Central US | Yes | |
| East US | Yes | |
| East US 2 | Yes | |
-| France Central | | Yes |
| Germany West Central | | Yes |
| Japan East | Yes | |
| Korea Central | Yes | |
azure-video-analyzer Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-for-media-docs/release-notes.md
Title: Azure Video Analyzer for Media (formerly Video Indexer) release notes | M
description: To stay up-to-date with the most recent developments, this article provides you with the latest updates on Azure Video Analyzer for Media (formerly Video Indexer). Previously updated : 01/04/2022 Last updated : 02/18/2022
To stay up-to-date with the most recent Azure Video Analyzer for Media (former V
* Bug fixes
* Deprecated functionality
+## February 2022
+
+### Leverage open-source code to create ARM based account
+
+Added new code samples, including HTTP calls to use the Video Analyzer for Media create, read, update and delete (CRUD) ARM API for solution developers. See [this sample](https://github.com/Azure-Samples/media-services-video-indexer/tree/master/ARM-Samples/Create-Account).
## January 2022

### Improved audio effects detection
When indexing a video through our advanced video settings, you can view the new
### Public preview of Video Analyzer for Media account management based on ARM
-Azure Video Analyzer for Media introduces a public preview of Azure Resource Manager (ARM) based account management. You can leverage ARM-based APIs to create, edit, and delete an account from the Azure portal.
+Azure Video Analyzer for Media introduces a public preview of Azure Resource Manager (ARM) based account management. You can leverage ARM-based Video Analyzer for Media APIs to create, edit, and delete an account from the [Azure portal](https://portal.azure.com/#home).
+
+> [!NOTE]
+> The Government cloud includes support for CRUD ARM based accounts from Video Analyzer for Media API and from the Azure portal.
+>
+> There is currently no support from the Video Analyzer for Media [website](https://www.videoindexer.ai).
For more information go to [create a Video Analyzer for Media account](https://techcommunity.microsoft.com/t5/azure-ai/azure-video-analyzer-for-media-is-now-available-as-an-azure/ba-p/2912422).
azure-vmware Configure Identity Source Vcenter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-identity-source-vcenter.md
You'll run the `Get-ExternalIdentitySources` cmdlet to list all external identit
## Add Active Directory over LDAP with SSL
-You'll run the `New-AvsLDAPSIdentitySource` cmdlet to add an AD over LDAP with SSL as an external identity source to use with SSO into vCenter.
+You'll run the `New-LDAPSIdentitySource` cmdlet to add an AD over LDAP with SSL as an external identity source to use with SSO into vCenter.
1. Download the certificate for AD authentication and upload it to an Azure Storage account as blob storage. If multiple certificates are required, upload each certificate individually.
You'll run the `New-AvsLDAPSIdentitySource` cmdlet to add an AD over LDAP with S
>[!IMPORTANT]
>Make sure to copy each SAS string, because they will no longer be available once you leave this page.
-1. Select **Run command** > **Packages** > **New-AvsLDAPSIdentitySource**.
+1. Select **Run command** > **Packages** > **New-LDAPSIdentitySource**.
1. Provide the required values or change the default values, and then select **Run**.

   | **Field** | **Value** |
   | | |
- | **Name** | User-friendly name of the external identity source, for example, **avslap.local**. |
+ | **Name** | User-friendly name of the external identity source, for example, **avslab.local**. |
   | **DomainName** | The FQDN of the domain. |
   | **DomainAlias** | For Active Directory identity sources, the domain's NetBIOS name. Add the NetBIOS name of the AD domain as an alias of the identity source if you're using SSPI authentications. |
   | **PrimaryUrl** | Primary URL of the external identity source, for example, **ldaps://yourserver:636**. |
You'll run the `New-AvsLDAPSIdentitySource` cmdlet to add an AD over LDAP with S
>[!NOTE]
>We don't recommend this method. Instead, use the [Add Active Directory over LDAP with SSL](#add-active-directory-over-ldap-with-ssl) method.
-You'll run the `New-AvsLDAPIdentitySource` cmdlet to add AD over LDAP as an external identity source to use with SSO into vCenter.
+You'll run the `New-LDAPIdentitySource` cmdlet to add AD over LDAP as an external identity source to use with SSO into vCenter.
-1. Select **Run command** > **Packages** > **New-AvsLDAPIdentitySource**.
+1. Select **Run command** > **Packages** > **New-LDAPIdentitySource**.
1. Provide the required values or change the default values, and then select **Run**.
azure-vmware Configure Storage Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-storage-policy.md
You can assign a VM storage policy in an initial deployment of a VM or when you
The Run command lets authorized users change the default or existing VM storage policy to an available policy for a VM post-deployment. There are no changes made on the disk-level VM storage policy. You can always change the disk level VM storage policy as per your requirements.
->[!NOTE]
->Run commands are executed one at a time in the order submitted.
+> [!NOTE]
+> Run commands are executed one at a time in the order submitted.
In this how-to, you learn how to:
You'll run the `Get-StoragePolicy` cmdlet to list the vSAN based storage policie
## Set storage policy on VM
-You'll run the `Set-AvsVMStoragePolicy` cmdlet to Modify vSAN based storage policies on an individual VM.
+You'll run the `Set-VMStoragePolicy` cmdlet to modify vSAN-based storage policies on an individual VM or on a group of VMs sharing a similar VM name. For example, if you have three VMs named "MyVM1", "MyVM2", and "MyVM3", supplying "MyVM*" to the VMName parameter would change the storage policy on all three VMs.
->[!NOTE]
->You cannot use the vSphere Client to change the default storage policy or any existing storage policies for a VM.
+> [!NOTE]
+> You cannot use the vSphere Client to change the default storage policy or any existing storage policies for a VM.
-1. Select **Run command** > **Packages** > **Set-AvsVMStoragePolicy**.
+1. Select **Run command** > **Packages** > **Set-VMStoragePolicy**.
1. Provide the required values or change the default values, and then select **Run**.
You'll run the `Set-AvsVMStoragePolicy` cmdlet to Modify vSAN based storage poli
1. Check **Notifications** to see the progress.
+## Set storage policy on all VMs in a location
+
+You'll run the `Set-LocationStoragePolicy` cmdlet to modify vSAN-based storage policies on all VMs in a location, where a location is the name of a cluster, resource pool, or folder. For example, if you have three VMs in Cluster-3, supplying "Cluster-3" would change the storage policy on all three VMs.
+
+> [!NOTE]
+> You cannot use the vSphere Client to change the default storage policy or any existing storage policies for a VM.
+
+1. Select **Run command** > **Packages** > **Set-LocationStoragePolicy**.
+
+1. Provide the required values or change the default values, and then select **Run**.
+
+ | **Field** | **Value** |
+ | | |
+ | **Location** | Name of the target location (a cluster, resource pool, or folder). For example, **Cluster-3**. |
+ | **StoragePolicyName** | Name of the storage policy to set. For example, **RAID-FTT-1**. |
+ | **Retain up to** | Retention period of the cmdlet output. The default value is 60. |
+ | **Specify name for execution** | Alphanumeric name, for example, **changeVMStoragePolicy**. |
+ | **Timeout** | The period after which a cmdlet exits if taking too long to finish. |
+
+1. Check **Notifications** to see the progress.
+
## Specify storage policy for a cluster

You'll run the `Set-ClusterDefaultStoragePolicy` cmdlet to specify the default storage policy for a cluster.
->[!NOTE]
->Changing the storage policy of the default management cluster (Cluster-1) isn't allowed.
+> [!NOTE]
+> Changing the storage policy of the default management cluster (Cluster-1) isn't allowed.
1. Select **Run command** > **Packages** > **Set-ClusterDefaultStoragePolicy**.
backup About Azure Vm Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/about-azure-vm-restore.md
This article describes how the [Azure Backup service](./backup-overview.md) rest
- **Item Level Restore (ILR):** Restoring individual files or folders inside the VM from the recovery point
-- **Availability (Replication types)**: Azure Backup offers two types of replication to keep your storage/data highly available:
+- **Availability (Replication types)**: Azure Backup offers three types of replication to keep your storage/data highly available:
    - [Locally redundant storage (LRS)](../storage/common/storage-redundancy.md#locally-redundant-storage) replicates your data three times (it creates three copies of your data) in a storage scale unit in a datacenter. All copies of the data exist within the same region. LRS is a low-cost option for protecting your data from local hardware failures.
    - [Geo-redundant storage (GRS)](../storage/common/storage-redundancy.md#geo-redundant-storage) is the default and recommended replication option. GRS replicates your data to a secondary region (hundreds of miles away from the primary location of the source data). GRS costs more than LRS, but GRS provides a higher level of durability for your data, even if there's a regional outage.
    - [Zone-redundant storage (ZRS)](../storage/common/storage-redundancy.md#zone-redundant-storage) replicates your data in [availability zones](../availability-zones/az-overview.md#availability-zones), guaranteeing data residency and resiliency in the same region. ZRS has no downtime. So your critical workloads that require [data residency](https://azure.microsoft.com/resources/achieving-compliant-data-residency-and-security-with-azure/), and must have no downtime, can be backed up in ZRS.
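The replication type for backup data is a property of the Recovery Services vault. A minimal PowerShell sketch of selecting one of these replication types before protecting any items (the vault and resource group names are assumptions for illustration; `Set-AzRecoveryServicesBackupProperty` also accepts `LocallyRedundant` and `ZoneRedundant`):

```powershell
# Sketch: set the vault's storage redundancy before configuring backup.
$vault = Get-AzRecoveryServicesVault -Name "myRSvault" -ResourceGroupName "myResourceGroup"
Set-AzRecoveryServicesBackupProperty -Vault $vault -BackupStorageRedundancy GeoRedundant
```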
This article describes how the [Azure Backup service](./backup-overview.md) rest
- [Frequently asked questions about VM restore](./backup-azure-vm-backup-faq.yml) - [Supported restore methods](./backup-support-matrix-iaas.md#supported-restore-methods)-- [Troubleshoot restore issues](./backup-azure-vms-troubleshoot.md#restore)
+- [Troubleshoot restore issues](./backup-azure-vms-troubleshoot.md#restore)
backup Backup Azure Database Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-database-postgresql.md
You can configure backup on multiple databases across multiple Azure PostgreSQL
:::image type="content" source="./media/backup-azure-database-postgresql/choose-an-azure-postgresql-server-inline.png" alt-text="Screenshot showing how to choose an Azure PostgreSQL server." lightbox="./media/backup-azure-database-postgresql/choose-an-azure-postgresql-server-expanded.png":::
-1. **Assign Azure key vault** that stores the credentials to connect to the selected database. To assign the key vault at the individual row level, click **Select a key vault and secret**. You can also assign the key vault by multi-selecting the rows and click Assign key vault in the top menu of the grid.
+1. **Assign Azure key vault** that stores the credentials to connect to the selected database. You should have already [created the relevant secrets](#create-secrets-in-the-key-vault) in the key vault. To assign the key vault at the individual row level, click **Select a key vault and secret**. You can also assign the key vault by multi-selecting the rows and clicking **Assign key vault** in the top menu of the grid.
:::image type="content" source="./media/backup-azure-database-postgresql/assign-azure-key-vault-inline.png" alt-text="Screenshot showing how to assign Azure key vault." lightbox="./media/backup-azure-database-postgresql/assign-azure-key-vault-expanded.png":::
backup Backup Azure Restore Files From Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-restore-files-from-vm.md
Title: Recover files and folders from Azure VM backup description: In this article, learn how to recover files and folders from an Azure virtual machine recovery point. Previously updated : 03/12/2020 Last updated : 02/22/2022 # Recover files from Azure virtual machine backup
To restore files or folders from the recovery point, go to the virtual machine a
![File recovery menu](./media/backup-azure-restore-files-from-vm/file-recovery-blade.png)
+> [!IMPORTANT]
+> Note the performance limits of this feature. As called out in the footnote of the blade above, use this feature only when the total size of the recovery is 10 GB or less, and expect data transfer speeds of around 1 GB per hour.
+ 4. From the **Select recovery point** drop-down menu, select the recovery point that holds the files you want. By default, the latest recovery point is already selected.
+ 5. Select **Download Executable** (for Windows Azure VMs) or **Download Script** (for Linux Azure VMs; a Python script is generated) to download the software used to copy files from the recovery point.
backup Tutorial Backup Azure Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/tutorial-backup-azure-vm.md
A [Recovery Services vault](backup-azure-recovery-services-vault-overview.md) is
Create the vault as follows:
-1. Use the [New-AzRecoveryServicesVault](/powershell/module/az.recoveryservices/new-azrecoveryservicesvault)to create the vault. Specify the resource group name and location of the VM you want to back up.
+1. Use the [New-AzRecoveryServicesVault](/powershell/module/az.recoveryservices/new-azrecoveryservicesvault) to create the vault. Specify the resource group name and location of the VM you want to back up.
```powershell
New-AzRecoveryServicesVault -Name myRSvault -ResourceGroupName "myResourceGroup" -Location "EastUS"
```
To enable and back up the Azure VM in this tutorial, we do the following:
1. Specify a container in the vault that holds your backup data with [Get-AzRecoveryServicesBackupContainer](/powershell/module/az.recoveryservices/get-Azrecoveryservicesbackupcontainer).
2. Each VM for backup is an item. To start a backup job, you obtain information about the VM with [Get-AzRecoveryServicesBackupItem](/powershell/module/az.recoveryservices/Get-AzRecoveryServicesBackupItem).
-3. Run an on-demand backup with[Backup-AzRecoveryServicesBackupItem](/powershell/module/az.recoveryservices/backup-Azrecoveryservicesbackupitem).
+3. Run an on-demand backup with [Backup-AzRecoveryServicesBackupItem](/powershell/module/az.recoveryservices/backup-Azrecoveryservicesbackupitem).
* The initial backup job creates a full recovery point.
* After the initial backup, each backup job creates incremental recovery points.
* Incremental recovery points are storage- and time-efficient, as they transfer only the changes made since the last backup.
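The three steps above can be sketched with the Az PowerShell cmdlets linked in this tutorial. This is a minimal sketch that assumes the `myRSvault` vault and `myResourceGroup` resource group created earlier; the VM name `myVM` is a hypothetical placeholder.

```powershell
# Sketch only: "myVM" is a hypothetical placeholder for a VM already enabled for backup.
$vault = Get-AzRecoveryServicesVault -ResourceGroupName "myResourceGroup" -Name "myRSvault"

# 1. Get the container in the vault that holds the VM's backup data.
$container = Get-AzRecoveryServicesBackupContainer -ContainerType "AzureVM" `
    -FriendlyName "myVM" -VaultId $vault.ID

# 2. Get the backup item for the VM.
$item = Get-AzRecoveryServicesBackupItem -Container $container `
    -WorkloadType "AzureVM" -VaultId $vault.ID

# 3. Run an on-demand backup job.
Backup-AzRecoveryServicesBackupItem -Item $item -VaultId $vault.ID
```

`Backup-AzRecoveryServicesBackupItem` returns a job object; the cmdlet requires an authenticated Azure session, so this runs only against a real subscription.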
cognitive-services How To Configure Rhel Centos 7 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-configure-rhel-centos-7.md
export LD_LIBRARY_PATH=/usr/local/lib64:$LD_LIBRARY_PATH
# (note, use the actual path to extracted files!)
export PATH=/usr/local/bin:$PATH
hash -r # reset cached paths in the current shell session just in case
-export LD_LIBRARY_PATH=/path/to/extracted/SpeechSDK-Linux-<version>/lib/x64:$LD_LIBRARY_PATH
+export LD_LIBRARY_PATH=/path/to/extracted/SpeechSDK-Linux-<version>/lib/centos7-x64:$LD_LIBRARY_PATH
```
+> [!NOTE]
+> Starting with the Speech SDK 1.19.0 release, the Linux .tar package contains specific libraries for RHEL/CentOS 7. These are in `lib/centos7-x64` as shown in the environment setting example for `LD_LIBRARY_PATH` above. Speech SDK libraries in `lib/x64` are for all the other supported Linux x64 distributions (including RHEL/CentOS 8) and don't work on RHEL/CentOS 7.
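The distro-dependent choice described in the note can be sketched in the shell. This is a sketch under the stated assumptions; `SPEECHSDK_ROOT` is a placeholder, and the `<version>` part stays as in your extracted folder name.

```shell
# Sketch: choose the right Speech SDK library directory for this machine.
# SPEECHSDK_ROOT is a placeholder; keep <version> as in your extracted folder name.
SPEECHSDK_ROOT='/path/to/extracted/SpeechSDK-Linux-<version>'
if grep -qE 'release 7' /etc/redhat-release 2>/dev/null; then
  LIBDIR="$SPEECHSDK_ROOT/lib/centos7-x64"   # RHEL/CentOS 7-specific libraries
else
  LIBDIR="$SPEECHSDK_ROOT/lib/x64"           # all other supported x64 distributions
fi
export LD_LIBRARY_PATH="$LIBDIR:$LD_LIBRARY_PATH"
```

On a machine without `/etc/redhat-release`, the `grep` fails silently and the generic `lib/x64` directory is used.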
+ ## Next steps
+ > [!div class="nextstepaction"]
cognitive-services Speech Ssml Phonetic Sets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-ssml-phonetic-sets.md
Previously updated : 01/13/2022 Last updated : 02/17/2022
Phonetic alphabets are used with the [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md) to improve the pronunciation of text-to-speech voices. To learn when and how to use each alphabet, see [Use phonemes to improve pronunciation](speech-synthesis-markup.md#use-phonemes-to-improve-pronunciation).
-## Speech service phonetic alphabet
-
-For some locales, Speech service defines its own phonetic alphabets, which ordinarily map to the [International Phonetic Alphabet (IPA)](https://en.wikipedia.org/wiki/International_Phonetic_Alphabet). The seven locales that support the Microsoft Speech API (SAPI, or `sapi`) are en-US, fr-FR, de-DE, es-ES, ja-JP, zh-CN, and zh-TW.
-
-You set `sapi` or `ipa` as the `alphabet` in [SSML](speech-synthesis-markup.md#use-phonemes-to-improve-pronunciation).
-
-### [en-US](#tab/en-US)
-
-#### English suprasegmentals
-
-|Example&nbsp;1 (onset for consonant, word-initial for vowel)|Example&nbsp;2 (intervocalic for consonant, word-medial nucleus for vowel)|Example&nbsp;3 (coda for consonant, word-final for vowel)|Comments|
-|--|--|--|--|
-| burger /b er **1** r - g ax r/ | falafel /f ax - l aa **1** - f ax l/ | guitar /g ih - t aa **1** r/ | The Speech service phone set puts stress after the vowel of the stressed syllable. |
-| inopportune /ih **2** - n aa - p ax r - t uw 1 n/ | dissimilarity /d ih - s ih **2**- m ax - l eh 1 - r ax - t iy/ | workforce /w er 1 r k - f ao **2** r s/ | The Speech service phone set puts stress after the vowel of the sub-stressed syllable. |
-
-#### English vowels
-
-| `sapi` | `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3 |
-|--|-||--|--|
-| iy | `i` | **ea**t | f**ee**l | vall**ey** |
-| ih | `ɪ` | **i**f | f**i**ll | |
-| ey | `eɪ` | **a**te | g**a**te | d**ay** |
-| eh | `ɛ` | **e**very | p**e**t | m**eh** (rare word-final) |
-| ae | `æ` | **a**ctive | c**a**t | n**ah** (rare word-final) |
-| aa | `ɑ` | **o**bstinate | p**o**ppy | r**ah** (rare word-final) |
-| ao | `ɔ` | **o**range | c**au**se | Ut**ah** |
-| uh | `ʊ` | b**oo**k | | |
-| ow | `oʊ` | **o**ld | cl**o**ne | g**o** |
-| uw | `u` | **U**ber | b**oo**st | t**oo** |
-| ah | `ʌ` | **u**ncle | c**u**t | |
-| ay | `aɪ` | **i**ce | b**i**te | fl**y** |
-| aw | `aʊ` | **ou**t | s**ou**th | c**ow** |
-| oy | `ɔɪ` | **oi**l | j**oi**n | t**oy** |
-| y uw | `ju` | **Yu**ma | h**u**man | f**ew** |
-| ax | `ə` | **a**go | wom**a**n | are**a** |
-
-#### English R-colored vowels
-
-| `sapi` | `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3 |
-|--|-|--|-||
-| ih r | `ɪɹ` | **ear**s | t**ir**amisu | n**ear** |
-| eh r | `ɛɹ` | **air**plane | app**ar**ently | sc**ar**e |
-| uh r | `ʊɹ` | | | c**ur**e |
-| ay r | `aɪɹ` | **Ire**land | f**ir**eplace | ch**oir** |
-| aw r | `aʊɹ` | **hour**s | p**ower**ful | s**our** |
-| ao r | `ɔɹ` | **or**ange | m**or**al | s**oar** |
-| aa r | `ɑɹ` | **ar**tist | st**ar**t | c**ar** |
-| er r | `ɝ` | **ear**th | b**ir**d | f**ur** |
-| ax r | `ɚ` | | all**er**gy | supp**er** |
-
-#### English semivowels
-
-| `sapi` | `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3 |
-|--|-|||--|
-| w | `w` | **w**ith, s**ue**de | al**w**ays | |
-| y | `j` | **y**ard, f**e**w | on**i**on | |
-
-#### English aspirated oral stops
-
-| `sapi` | `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3 |
-|--|-|--|-||
-| p | `p` | **p**ut | ha**pp**en | fla**p** |
-| b | `b` | **b**ig | num**b**er | cra**b** |
-| t | `t` | **t**alk | capi**t**al | sough**t** |
-| d | `d` | **d**ig | ran**d**om | ro**d** |
-| k | `k` | **c**ut | sla**ck**er | Ira**q** |
-| g | `g` | **g**o | a**g**o | dra**g** |
-
-#### English nasal stops
-
-| `sapi` | `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3 |
-|--|-|||-|
-| m | `m` | **m**at, smash | ca**m**era | roo**m** |
-| n | `n` | **n**o, s**n**ow | te**n**t | chicke**n** |
-| ng | `ŋ` | | li**n**k | s**ing** |
-
-#### English fricatives
-
-| `sapi` | `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3 |
-|--|-|-|||
-| f | `f` | **f**ork | le**f**t | hal**f** |
-| v | `v` | **v**alue | e**v**ent | lo**v**e |
-| th | `θ` | **th**in | empa**th**y | mon**th** |
-| dh | `ð` | **th**en | mo**th**er | smoo**th** |
-| s | `s` | **s**it | ri**s**k | fact**s** |
-| z | `z` | **z**ap | bu**s**y | kid**s** |
-| sh | `ʃ` | **sh**e | abbrevia**ti**on | ru**sh** |
-| zh | `ʒ` | **J**acques | plea**s**ure | gara**g**e |
-| h | `h` | **h**elp | en**h**ance | a-**h**a! |
-
-#### English affricates
-
-| `sapi` | `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3 |
-|--|-|--|--||
-| ch | `tʃ` | **ch**in | fu**t**ure | atta**ch** |
-| jh | `dʒ` | **j**oy | ori**g**inal | oran**g**e |
-
-#### English approximants
-
-| `sapi` | `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3 |
-|--|-|--||--|
-| l | `l` | **l**id, g**l**ad | pa**l**ace | chi**ll** |
-| r | `ɹ` | **r**ed, b**r**ing | bo**rr**ow | ta**r** |
-
-### [fr-FR](#tab/fr-FR)
-
-#### French suprasegmentals
-
-The Speech service phone set puts stress after the vowel of the stressed syllable. However, the `fr-FR` Speech service phone set doesn't support the IPA substress 'ˌ'. If the IPA substress is needed, you should use the IPA directly.
-
-#### French vowels
-
-| `sapi` | `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3 |
-|--|-||--|--|
-| a | `a` | **a**rbre | p**a**tte | ir**a** |
-| aa | `ɑ` | | p**â**te | p**a**s |
-| aa ~ | `ɑ̃` | **en**fant | enf**an**t | t**em**ps |
-| ax | `ə` | | p**e**tite | l**e** |
-| eh | `ɛ` | **e**lle | p**e**rdu | ét**ai**t |
-| eu | `ø` | **œu**fs | cr**eu**ser | qu**eu**e |
-| ey | `e` | **é**mu | cr**é**tin | ôt**é** |
-| eh ~ | `ɛ̃` | **im**portant | p**ein**ture | mat**in** |
-| iy | `i` | **i**dée | pet**i**te | am**i** |
-| oe | `œ` | **œu**f | p**eu**r | |
-| oh | `ɔ` | **o**bstacle | c**o**rps | |
-| oh ~ | `ɔ̃` | **on**ze | r**on**deur | b**on** |
-| ow | `o` | **au**diteur | b**eau**coup | p**ô** |
-| oe ~ | `œ̃` | **un** | l**un**di | br**un** |
-| uw | `u` | **ou**trage | intr**ou**vable | **ou** |
-| uy | `y` | **u**ne | p**u**nir | él**u** |
-
-#### French consonants
-
-| `sapi` | `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3 |
-|--|-|-||-|
-| b | `b` | **b**ête | ha**b**ille | ro**b**e |
-| d | `d` | **d**ire | ron**d**eur | chau**d**e |
-| f | `f` | **f**emme | su**ff**ixe | bo**f** |
-| g | `g` | **g**auche | é**g**ale | ba**gu**e |
-| ng | `ŋ` | | | park**ing**[<sup>1</sup>](#fr-1) |
-| hy | `ɥ` | h**u**ile | n**u**ire | |
-| k | `k` | **c**arte | é**c**aille | be**c** |
-| l | `l` | **l**ong | é**l**ire | ba**l** |
-| m | `m` | **m**adame | ai**m**er | po**mm**e |
-| n | `n` | **n**ous | te**n**ir | bo**nn**e |
-| nj | `ɲ` | | | pei**gn**e |
-| p | `p` | **p**atte | re**p**as | ca**p** |
-| r | `ʁ` | **r**at | cha**r**iot | senti**r** |
-| s | `s` | **s**ourir | a**ss**ez | pa**ss**e |
-| sh | `ʃ` | **ch**anter | ma**ch**ine | po**ch**e |
-| t | `t` | **t**ête | ô**t**er | ne**t** |
-| v | `v` | **v**ent | in**v**enter | rê**v**e |
-| w | `w` | **ou**i | f**ou**ine | |
-| y | `j` | **y**od | p**i**étiner | Marse**ille** |
-| z | `z` | **z**éro | rai**s**onner | ro**s**e |
-| zh | `ʒ` | **j**ardin | man**g**er | piè**g**e |
-| | `n‿` | | | u**n** arbre |
-| | `t‿` | | | quan**d** |
-| | `z‿` | | | di**x** |
-
-<a id="fr-1"></a>
-**1** *Only for some foreign words*.
-
-> [!TIP]
-> The `fr-FR` Speech service phone set doesn't support the following French liaisons: `n‿`, `t‿`, and `z‿`. If they're needed, consider using the IPA directly.
-
-### [de-DE](#tab/de-DE)
-
-#### German suprasegmentals
-
-| Example&nbsp;1 (Onset for consonant, word-initial for vowel) | Example&nbsp;2 (Intervocalic for consonant, word-medial nucleus for vowel) | Example&nbsp;3 (Coda for consonant, word-final for vowel) | Comments |
-|--|--|--|--|
-| anders /a **1** n - d ax r s/ | Multiplikationszeichen /m uh l - t iy - p l iy - k a - ts y ow **1** n s - ts ay - c n/ | Biologie /b iy - ow - l ow - g iy **1**/ | The Speech service phone set puts stress after the vowel of the stressed syllable. |
-| Allgemeinwissen /a **2** l - g ax - m ay 1 n - v ih - s n/ | Abfallentsorgungsfirma /a 1 p - f a l - ^ eh n t - z oh **2** ax r - g uh ng s - f ih ax r - m a/ | Computertomographie /k oh m - p y uw 1 - t ax r - t ow - m ow - g r a - f iy **2**/ | The Speech service phone set puts stress after the vowel of the sub-stressed syllable |
-
-#### German vowels
-
-| `sapi` | `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3 |
-|--|--||||
-| a: | `aː` | **A**ber | Maßst**a**b | Schem**a** |
-| a | `a` | **A**bfall | B**a**ch | Agath**a** |
-| oh | `ɔ` | **O**sten | Pf**o**sten | |
-| eh: | `ɛː` | **Ä**hnlichkeit | B**ä**r | Fasci**ae**[<sup>1</sup>](#de-v-1) |
-| eh | `ɛ` | **ä**ndern | Proz**e**nt | Amygdal**ae** |
-| ax | `ə` | 'v**e**rstauen[<sup>2</sup>](#de-v-2) | Aach**e**n | Frag**e** |
-| iy | `iː` | **I**ran | abb**ie**gt | Relativitätstheor**ie** |
-| ih | `ɪ` | **I**nnung | s**i**ngen | Wood**y** |
-| eu | `øː` | **Ö**sen | abl**ö**sten | Malm**ö** |
-| ow | `o`, `oː` | **o**hne | Balk**o**n | Trept**ow** |
-| oe | `œ` | **Ö**ffnung | bef**ö**rdern | |
-| ey | `e`, `eː` | **E**berhard | abf**e**gt | b |
-| uw | `uː` | **U**do | H**u**t | Akk**u** |
-| uh | `ʊ` | **U**nterschiedes | b**u**nt | |
-| ue | `yː` | **Ü**bermut | pfl**ü**gt | Men**ü** |
-| uy | `ʏ` | **ü**ppig | S**y**stem | |
-
-<a id="de-v-1"></a>
-**1** *Only in words of foreign origin, such as Fasci**ae***.<br>
-<a id="de-v-2"></a>
-**2** *Word-initial only in words of foreign origin, such as **A**ppointment. Syllable-initial in 'v**e**rstauen*.
-
-#### German diphthong
-
-| `sapi` | `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3 |
-|--|-|--|--|--|
-| ay | `ai` | **ei**nsam | Unabhängigk**ei**t | Abt**ei** |
-| aw | `au` | **au**ßen | abb**au**st | St**au** |
-| oy | `ɔy`, `ɔʏ̯` | **Eu**phorie | tr**äu**mt | sch**eu** |
-
-#### German semivowels
-
-| `sapi` | `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3 |
-|--|-|--|--||
-| ax r | `ɐ` | | abänd**er**n | lock**er** |
-
-#### German consonants
-
-| `sapi` | `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3 |
-|--|--|--|--|--|
-| b | `b` | **B**ank | | Pu**b**[<sup>1</sup>](#de-c-1) |
-| c | `ç` | **Ch**emie | mögli**ch**st | i**ch**[<sup>2</sup>](#de-c-2) |
-| d | `d` | **d**anken | Len**d**l[<sup>3</sup>](#de-c-3) | Clau**d**e[<sup>4</sup>](#de-c-4) |
-| jh | `ʤ` | **J**eff | gemana**g**t | Chan**g**e[<sup>5</sup>](#de-c-5) |
-| f | `f` | **F**ahrtdauer | angri**ff**slustig | abbruchrei**f** |
-| g | `g` | **g**ut | Gre**g**[<sup>6</sup>](#de-c-6) | |
-| h | `h` | **H**ausanbau | | |
-| y | `j` | **J**od | Reakt**i**on | hu**i** |
-| k | `k` | **K**oma | Aspe**k**t | Flec**k** |
-| l | `l` | **l**au | ähne**l**n | zuvie**l** |
-| m | `m` | **M**ut | A**m**t | Leh**m** |
-| n | `n` | **n**un | u**n**d | Huh**n** |
-| ng | `ŋ` | **Ng**uyen[<sup>7</sup>](#de-c-7) | Schwa**nk** | R**ing** |
-| p | `p` | **P**artner | abru**p**t | Ti**p** |
-| pf | `pf` | **Pf**erd | dam**pf**t | To**pf** |
-| r | `ʀ`, `r`, `ʁ` | **R**eise | knu**rr**t | Haa**r** |
-| s | `s` | **S**taccato[<sup>8</sup>](#de-c-8) | bi**s**t | mie**s** |
-| sh | `ʃ` | **Sch**ule | mi**sch**t | lappi**sch** |
-| t | `t` | **T**raum | S**t**raße | Mu**t** |
-| ts | `ts` | **Z**ug | Ar**z**t | Wit**z** |
-| ch | `tʃ` | **Tsch**echien | aufgepu**tsch**t | bundesdeu**tsch** |
-| v | `v` | **w**inken | Q**u**alle | Gr**oo**ve[<sup>9</sup>](#de-c-9) |
-| x | `x`[<sup>10</sup>](#de-c-10), `ç`[<sup>11</sup>](#de-c-11) | Ba**ch**erach[<sup>12</sup>](#de-c-12) | Ma**ch**t mögli**ch**st | Schma**ch** 'i**ch** |
-| z | `z` | **s**uper | | |
-| zh | `ʒ` | **G**enre | B**re**ezinski | Edvi**g**e |
-
-<a id="de-c-1"></a>
-**1** *Only in words of foreign origin, such as Pu**b***.<br>
-<a id="de-c-2"></a>
-**2** *Soft "ch" after "e" and "i"*.<br>
-<a id="de-c-3"></a>
-**3** *Only in words of foreign origin, such as Len**d**l*.<br>
-<a id="de-c-4"></a>
-**4** *Only in words of foreign origin, such as Clau**d**e*.<br>
-<a id="de-c-5"></a>
-**5** *Only in words of foreign origin, such as Chan**g**e*.<br>
-<a id="de-c-6"></a>
-**6** *Word-terminally only in words of foreign origin, such as Gre**g***.<br>
-<a id="de-c-7"></a>
-**7** *Only in words of foreign origin, such as **Ng**uyen*.<br>
-<a id="de-c-8"></a>
-**8** *Only in words of foreign origin, such as **S**taccato*.<br>
-<a id="de-c-9"></a>
-**9** *Only in words of foreign origin, such as Gr**oo**ve*.<br>
-<a id="de-c-10"></a>
-**10** *The IPA `x` is a hard "ch" after all non-front vowels (a, aa, oh, ow, uh, uw, and the diphthong aw)*.<br>
-<a id="de-c-11"></a>
-**11** *The IPA `ç` is a soft "ch" after front vowels (ih, iy, eh, ae, uy, ue, oe, eu, and diphthongs ay, oy) and consonants*.<br>
-<a id="de-c-12"></a>
-**12** *Word-initial only in words of foreign origin, such as **J**uan. Syllable-initial also in words such as Ba**ch**erach*.<br>
-
-#### German oral consonants
-
-| `sapi` | `ipa` | Example |
-|--|-|--|
-| ^ | `ʔ` | beachtlich /b ax - ^ a 1 x t - l ih c/ |
-
-> [!NOTE]
-> We need to add a [gs\] phone between two distinct vowels, except when the two vowels are a genuine diphthong. This oral consonant is a glottal stop. For more information, see [glottal stop](http://en.wikipedia.org/wiki/Glottal_stop).
-
-### [es-ES](#tab/es-ES)
-
-#### Spanish vowels
-
-| `sapi` | `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3 |
-|--|-|--||--|
-| a | `a` | **a**lto | c**a**ntar | cas**a** |
-| i | `i` | **i**bérica | av**i**spa | tax**i** |
-| e | `e` | **e**lefante | at**e**nto | elefant**e** |
-| o | `o` | **o**caso | enc**o**ntrar | ocasenc**o** |
-| u | `u` | **u**sted | p**u**nta | Juanl**u** |
-
-#### Spanish consonants
-
-| `sapi` | `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3 |
-|--|||-|-|
-| b | `b` | **b**aobab | | am**b** |
-| | `β` | | bao**b**ab | baoba**b** |
-| ch | `tʃ` | **ch**eque | co**ch**e | Marraque**ch** |
-| d | `d` | **d**edo | | portlan**d** |
-| | `ð` | | de**d**o | verda**d** |
-| f | `f` | **f**ácil | ele**f**ante | pu**f** |
-| g | `g` | **g**anga | | dópin**g** |
-| | `ɣ` | | a**g**ua | tuare**g** |
-| j | `j` | **i**odo | cal**i**ente | re**y** |
-| jj | `j.j` `jj` | | vi**ll**a | |
-| k | `k` | **c**oche | bo**c**a | titáni**c** |
-| l | `l` | **l**ápiz | a**l**a | corde**l** |
-| ll | `ʎ` | **ll**ave | desarro**ll**o | |
-| m | `m` | **m**order | a**m**ar | álbu**m** |
-| n | `n` | **n**ada | ce**n**a | rató**n** |
-| nj | `ɲ` | **ñ**aña | ara**ñ**azo | |
-| p | `p` | **p**oca | to**p**o | sto**p** |
-| r | `ɾ` | | ca**r**a | abri**r** |
-| rr | `r` | **r**adio | co**rr**e | pu**rr** |
-| s | `s` | **s**aco | va**s**o | pelo**s** |
-| t | `t` | **t**oldo | a**t**ar | disque**t** |
-| th | `θ` | **z**ebra | a**z**ul | lápi**z** |
-| w | `w` | h**u**eso | ag**u**a | gua**u** |
-| x | `x` | **j**ota | a**j**o | relo**j** |
-
-> [!TIP]
-> The `es-ES` Speech service phone set doesn't support the following Spanish IPA: `β`, `ð`, and `ɣ`. If they're needed, consider using the IPA directly.
-
-### [zh-CN](#tab/zh-CN)
-
-The Speech service phone set for `zh-CN` is based on the native phone [Pinyin](https://en.wikipedia.org/wiki/Pinyin).
-
-#### Tone
-
-| Pinyin tone | `sapi` | Character example |
-|-|--|-|
-| mā | ma 1 | 妈 |
-| má | ma 2 | 麻 |
-| mǎ | ma 3 | 马 |
-| mà | ma 4 | 骂 |
-| ma | ma 5 | 嘛 |
-
-#### Example
-
-| Character | Speech service |
-|--|-|
-| 组织关系 | zu 3 - zhi 1 - guan 1 - xi 5 |
-| 累进 | lei 3 -jin 4 |
-| 西宅巷 | xi 1 - zhai 2 - xiang 4 |
-
-### [zh-TW](#tab/zh-TW)
-
-The Speech service phone set for `zh-TW` is based on the native phone [Bopomofo](https://en.wikipedia.org/wiki/Bopomofo).
-
-#### Tone
-
-| Speech service tone | Bopomofo tone | Example (word) | Speech service phones | Bopomofo | Pinyin (拼音) |
-|||-|--|-|-|
-| ˉ | empty | 偵 | ㄓㄣˉ | ㄓㄣ | zhēn |
-| ˊ | ˊ | 察 | ㄔㄚˊ | ㄔㄚˊ | chá |
-| ˇ | ˇ | 打 | ㄉㄚˇ | ㄉㄚˇ | dǎ |
-| ˋ | ˋ | 望 | ㄨㄤˋ | ㄨㄤˋ | wàng |
-| ˙ | ˙ | 影子 | 一ㄥˇ ㄗ˙ | 一ㄥˇ ㄗ˙ | yǐng zi |
-
-#### Examples
-
-| Character | `sapi` |
-|--|-|
-| 狗 | ㄍㄡˇ |
-| 然后 | ㄖㄢˊㄏㄡˋ |
-| 剪掉 | ㄐㄧㄢˇㄉㄧㄠˋ |
-
-### [ja-JP](#tab/ja-JP)
-
-The Speech service phone set for `ja-JP` is based on the native phone [Kana](https://en.wikipedia.org/wiki/Kana) set.
-
-#### Stress
-
-| `sapi` | `ipa` |
-|--|-|
-| `ˈ` | `ˈ` mainstress |
-| `+` | `ˌ` substress |
-
-#### Examples
-
-| Character | `sapi` | `ipa` |
-|--||-|
-| 合成 | ゴ'ウセ | goˈwɯseji |
-| 所有者 | ショュ'ウ?ャ | ɕjojɯˈwɯɕja |
-| 最適化 | サィテキカ+ | sajitecikaˌ |
--
-***
-
-## International Phonetic Alphabet
-
-For the following locales, Speech service uses the [International Phonetic Alphabet (IPA)](https://en.wikipedia.org/wiki/International_Phonetic_Alphabet).
-
-You set `ipa` as the `alphabet` in [SSML](speech-synthesis-markup.md#use-phonemes-to-improve-pronunciation).
-
-These locales all use the IPA stress and syllable symbols that are listed here:
+Speech service supports the [International Phonetic Alphabet (IPA)](https://en.wikipedia.org/wiki/International_Phonetic_Alphabet) stress and syllable symbols that are listed here. You set `ipa` as the `alphabet` in [SSML](speech-synthesis-markup.md#use-phonemes-to-improve-pronunciation).
|`ipa` | Symbol |
|-|-|
These locales all use the IPA stress and syllable symbols that are listed here:
| `ˌ` | Secondary stress |
| `.` | Syllable boundary |
+For some locales, Speech service defines its own phonetic alphabets, which ordinarily map to the [International Phonetic Alphabet (IPA)](https://en.wikipedia.org/wiki/International_Phonetic_Alphabet). The seven locales that support the Microsoft Speech API (SAPI, or `sapi`) are en-US, fr-FR, de-DE, es-ES, ja-JP, zh-CN, and zh-TW. For those seven locales, you set `sapi` or `ipa` as the `alphabet` in [SSML](speech-synthesis-markup.md#use-phonemes-to-improve-pronunciation).
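In SSML, the `phoneme` element carries the chosen alphabet and the pronunciation string. This hedged sketch uses the en-US `sapi` pronunciation of "guitar" from the table in this article, plus an IPA rendering assembled from the same tables; the voice name is illustrative.

```xml
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
  <voice name="en-US-JennyNeural"> <!-- illustrative voice name -->
    <!-- sapi is supported only for the seven SAPI locales, such as en-US -->
    <phoneme alphabet="sapi" ph="g ih - t aa 1 r">guitar</phoneme>
    <!-- ipa works for all supported locales -->
    <phoneme alphabet="ipa" ph="gɪˈtɑɹ">guitar</phoneme>
  </voice>
</speak>
```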
-Select a tab to view the IPA phonemes that are specific to each locale.
-
-### [ca-ES](#tab/ca-ES)
-
-| `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3 |
-|-|-||-|
-| `a` | **a**men | am**a**ro | est**à** |
-| `ɔ` | **o**dre | ofert**o**ri | microt**ò** |
-| `ə` | **e**stan | s**e**ré | aigu**a** |
-| `b` | **b**aba | do**b**la | |
-| `β` | **v**ià | ba**b**a | |
-| `t͡ʃ` | **tx**adià | ma**tx**ucs | fa**ig** |
-| `d̪` | **d**edicada | con**d**uïa | navida**d** |
-| `ð` | **Th**e_Sun | de**d**icada | trinida**d** |
-| `e` | **é**rem | f**e**ta | ser**é** |
-| `ɛ` | **e**cosistema | incorr**e**cta | hav**er** |
-| `f` | **f**acilitades | a**f**ectarà | àgra**f** |
-| `g` | **g**racia | con**g**ratula | |
-| `ɣ` | | ai**g**ua | |
-| `i` | **i**tinerants | it**i**nerants | zomb**i** |
-| `j` | **hi**ena | espla**i**a | cofo**i** |
-| `d͡ʒ` | **dj**akarta | composta**tg**e | geor**ge** |
-| `k` | **c**urós | dode**c**à | doble**c** |
-| `l` | **l**aberint | mio**l**ar | preva**l** |
-| `ʎ` | **ll**igada | mi**ll**orarà | perbu**ll** |
-| `m` | **m**acadàmies | fe**m**ar | subli**m** |
-| `n` | **n**ecessaris | sa**n**itaris | alterame**nt** |
-| `ŋ` | | algo**n**quí | albe**nc** |
-| `ɲ` | **ny**asa | reme**n**jar | alema**ny** |
-| `o` | **o**mbra | ret**o**ndre | omissi**ó** |
-| `p` | **p**egues | este**p**a | ca**p** |
-| `ɾ` | | ca**r**o | càrte**r** |
-| `r` | **r**abada | ca**rr**o | lofòfo**r** |
-| `s` | **c**eri | cur**s**ar | cu**s** |
-| `ʃ` | **x**acar | micro**x**ip | midra**ix** |
-| `t̪` | **t**abacaires | es**t**ratifica | debatu**t** |
-| `θ` | **c**eará | ve**c**inos | Álvare**z** |
-| `u` | **u**niversitaris | candidat**u**res | cron**o** |
-| `w` | **w**estfalià | ina**u**gurar | inscri**u** |
-| `x` | **j**uanita | mu**j**eres | heinri**ch** |
-| `z` | **z**elar | bra**s**ils | alian**ze** |
--
-### [en-GB](#tab/en-GB)
-
-#### Vowels
-
-| `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3 |
-|-||--|-|
-| `ɑː` | | f**a**st | br**a** |
-| `æ` | | f**a**t | |
-| `ʌ` | | b**u**g | |
-| `ɛə` | | | h**air** |
-| `aʊ` | **ou**t | m**ou**th | h**ow** |
-| `ə` | **a** | | driv**er** |
-| `aɪ` | | f**i**ve | |
-| `ɛ` | **e**gg | dr**e**ss | |
-| `ɜː` | **er**nest | sh**ir**t | f**ur** |
-| `eɪ` | **ai**lment | l**a**ke | p**ay** |
-| `ɪ` | | add**i**ng | |
-| `ɪə` | | b**ear**d | h**ear** |
-| `iː` | **ea**t | s**ee**d | s**ee** |
-| `ɒ` | | p**o**d | |
-| `ɔː` | | d**aw**n | |
-| `əʊ` | | c**o**de | pill**ow** |
-| `ɔɪ` | | p**oi**nt | b**oy** |
-| `ʊ` | | l**oo**k | |
-| `ʊə` | | | t**our** |
-| `uː` | | f**oo**d | t**wo** |
-
-#### Consonants
-
-| `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3 |
-|-||--|-|
-| `b ` | **b**ike | ri**bb**on | ri**b** |
-| `tʃ ` | **ch**allenge | na**t**ure | ri**ch** |
-| `d ` | **d**ate | ca**dd**y | sli**d** |
-| `ð` | **th**is | fa**th**er | brea**the** |
-| `f ` | **f**ace | lau**gh**ing | enou**gh** |
-| `g ` | **g**old | bra**gg**ing | be**g** |
-| `h ` | **h**urry | a**h**ead | |
-| `j` | **y**es | | |
-| `dʒ` | **g**in | ba**dg**er | bri**dge** |
-| `k ` | **c**at | lu**ck**y | tru**ck** |
-| `l ` | **l**eft | ga**ll**on | fi**ll** |
-| `m ` | **m**ile | li**m**it | ha**m** |
-| `n ` | **n**ose | pho**n**etic | ti**n** |
-| `ŋ ` | | si**ng**er | lo**ng** |
-| `p ` | **p**rice | su**p**er | ti**p** |
-| `ɹ` | **r**ate | ve**r**y | |
-| `s ` | **s**ay | si**ss**y | pa**ss** |
-| `ʃ ` | **sh**op | ca**sh**ier | lea**sh** |
-| `t ` | **t**op | ki**tt**en | be**t** |
-| `θ` | **th**eatre | ma**the**matics | brea**th** |
-| `v` | **v**ery | li**v**er | ha**ve** |
-| `w ` | **w**ill | | |
-| `z ` | **z**ero | bli**zz**ard | ro**se** |
--
-### [es-MX](#tab/es-MX)
-
-#### Vowels
-
-| `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3|
-|-||-|-|
-| `ɑ` | **a**zúcar | tom**a**te | rop**a** |
-| `e` | **e**so | rem**e**ro | am**é** |
-| `i` | h**i**lo | liqu**i**do | ol**í** |
-| `o` | h**o**gar | ol**o**te | cas**o** |
-| `u` | **u**no | ning**u**no | tab**ú** |
-
-#### Consonants
-
-| `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3|
-|-||-|-|
-| `b` | **b**ote | | |
-| `β` | ór**b**ita | envol**v**ente | |
-| `t͡ʃ` | **ch**ico | ha**ch**a | |
-| `d` | **d**átil | | |
-| `ð` | or**d**en | o**d**a | |
-| `f` | **f**oco | o**f**icina | |
-| `g` | **g**ajo | | |
-| `ɣ` | a**g**ua | ho**gu**era | |
-| `j` | **i**odo | cal**i**ente | re**y** |
-| `j͡j` | | o**ll**a | |
-| `k` | **c**asa | á**c**aro | |
-| `l` | **l**oco | a**l**a | |
-| `ʎ` | **ll**ave | en**y**ugo | |
-| `m` | **m**ata | a**m**ar | |
-| `n` | **n**ada | a**n**o | |
-| `ɲ` | **ñ**oño | a**ñ**o | |
-| `p` | **p**apa | pa**p**a | |
-| `ɾ` | | a**r**o | |
-| `r` | **r**ojo | pe**rr**o | |
-| `s` | **s**illa | a**s**a | |
-| `t` | **t**omate | | sof**t** |
-| `w` | h**u**evo | | |
-| `x` | **j**arra | ho**j**a | |
--
-### [it-IT](#tab/it-IT)
-
-#### Vowels
-
-| `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3 |
-|-||--|--|
-| `a` | **a**mo | s**a**no | scort**a** |
-| `ai` | **ai**cs | abb**ai**no | m**ai** |
-| `aʊ` | **au**dio | r**au**co | b**au** |
-| `e` | **e**roico | v**e**nti / numb**e**r | sapor**e** |
-| `ɛ` | **e**lle | avv**e**nto | lacch**è** |
-| `ej` | **ei**ra | em**ai**l | l**ei** |
-| `ɛu` | **eu**ro | n**eu**ro | |
-| `ei` | | as**ei**tà | scultor**ei** |
-| `eu` | **eu**ropeo | f**eu**dale | |
-| `i` | **i**taliano | v**i**no | sol**i** |
-| `u` | **u**nico | l**u**na | zeb**ù** |
-| `o` | **o**besità | stra**o**rdinari | amic**o** |
-| `ɔ` | **o**tto | b**o**tte / str**o**kes | per**ò** |
-| `oj` | | oppi**oi**di | |
-| `oi` | **oi**bò | intellettual**oi**de | Gameb**oy** |
-| `ou` | | sh**ow** | talksh**ow** |
-
-#### Consonants
+See the sections in this article for the phonemes that are specific to each locale.
-| `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3 |
-|-||--|--|
-| `b` | **b**ene | e**b**anista | Euroclu**b** |
-| `bː` | | go**bb**a | |
-| `ʧ` | **c**enare | a**c**ido | fren**ch** |
-| `tʃː` | | bra**cc**io | |
-| `kː` | | pa**cc**o | Innsbru**ck** |
-| `d` | **d**ente | a**d**orare | interlan**d** |
-| `dː` | | ca**dd**e | |
-| `ʣ` | **z**ero | or**z**o | |
-| `ʣː` | | me**zz**o | |
-| `f` | **f**ame | a**f**a | ale**f** |
-| `fː` | | be**ff**a | blu**ff** |
-| `ʤ` | **g**ente | a**g**ire | bei**ge** |
-| `ʤː` | | o**gg**i | |
-| `g` | **g**ara | al**gh**e | smo**g** |
-| `gː` | | fu**gg**a | Zue**gg** |
-| `ʎ` | **gl**i | ammira**gl**i | |
-| `ʎː` | | fo**gl**ia | |
-| `ɲː` | | ba**gn**o | |
-| `ɲ` | **gn**occo | padri**gn**o | Montai**gne** |
-| `j` | **i**eri | p**i**ede | freewif**i** |
-| `k` | **c**aro | an**ch**e | ti**c** ta**c** |
-| `l` | **l**ana | a**l**ato | co**l** |
-| `lː` | | co**ll**a | fu**ll** |
-| `m` | **m**ano | a**m**are | Ada**m** |
-| `mː` | | gra**mm**o | |
-| `n` | **n**aso | la**n**a | no**n** |
-| `nː` | | pa**nn**a | |
-| `p` | **p**ane | e**p**ico | sto**p** |
-| `pː` | | co**pp**a | |
-| `ɾ` | **r**ana | moto**r**e | pe**r** |
-| `r.r` | | ca**rr**o | Sta**rr** |
-| `s` | **s**ano | ca**s**cata | lapi**s** |
-| `sː` | | ca**ss**a | cordle**ss** |
-| `ʃ` | **sc**emo | Gram**sc**i | sla**sh** |
-| `ʃː` | | a**sc**ia | fich**es** |
-| `t` | **t**ana | e**t**erno | al**t** |
-| `tː` | | zi**tt**o | |
-| `ʦ` | **ts**unami | turbolen**z**a | subtes**ts** |
-| `ʦː` | | bo**zz**a | |
-| `v` | **v**ento | a**v**aro | Asimo**v** |
-| `vː` | | be**vv**i | |
-| `w` | **u**ovo | d**u**omo | Marlo**we** |
+## ca-ES
-### [pt-BR](#tab/pt-BR)
+## de-DE
-#### Vowels
+## en-GB
-| `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3 |
-|-|--||--|
-| `i` | **i**lha | f**i**car | com**i** |
-| `ĩ` | **in**tacto | p**in**tar | aberd**een** |
-| `ɑ` | **á**gua | d**a**da | m**á** |
-| `ɔ` | **o**ra | p**o**rta | cip**ó** |
-| `u` | **u**fanista | m**u**la | per**u** |
-| `ũ` | **un**s | p**un**gente | k**uhn** |
-| `o` | **o**rtopedista | f**o**fo | av**ô** |
-| `e` | **e**lefante | el**e**fante | voc**ê** |
-| `ɐ̃` | **an**ta | c**an**ta | amanh**ã** |
-| `ɐ` | **a**qui | am**a**ciar | dad**a** |
-| `ɛ` | **e**la | s**e**rra | at**é** |
-| `ẽ` | **en**dorfina | p**en**der | |
-| `õ` | **on**tologia | c**on**to | |
+## en-US
-#### Consonants
+## es-ES
-| `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3 |
-|-|--||--|
-| `w̃` | | | atualizaçã**o** |
-| `w` | **w**ashington | ág**u**a | uso**u** |
-| `p` | **p**ato | ca**p**ital | |
-| `b` | **b**ola | ca**b**eça | |
-| `t` | **t**ato | ra**t**o | |
-| `d` | **d**ado | ama**d**o | |
-| `g` | **g**ato | mara**g**ato | |
-| `m` | **m**ato | co**m**er | |
-| `n` | **n**o | a**n**o | |
-| `ŋ` | **nh**oque | ni**nh**o | |
-| `f` | **f**aca | a**f**ago | |
-| `v` | **v**aca | ca**v**ar | |
-| `ɹ` | | pa**r**a | ama**r** |
-| `s` | **s**atisfeito | amas**s**ado | casado**s** |
-| `z` | **z**ebra | a**z**ar | |
-| `ʃ` | **ch**eirar | ma**ch**ado | |
-| `ʒ` | **j**aca | in**j**usta | |
-| `x` | **r**ota | ca**rr**eta | |
-| `tʃ` | **t**irar | a**t**irar | |
-| `dʒ` | **d**ia | a**d**iar | |
-| `l` | **l**ata | a**l**eto | |
-| `ʎ` | **lh**ama | ma**lh**ado | |
-| `j̃` | | inabalavelme**n**te | hífe**n** |
-| `j` | | ca**i**xa | sa**i** |
-| `k` | **c**asa | ensa**c**ado | |
+## es-MX
+## fr-FR
-### [pt-PT](#tab/pt-PT)
+## it-IT
-| `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3 |
-|-|-|--||
-| `a` | **á**bdito | consul**a**r | medir**á** |
-| `ɐ` | **a**bacaxi | dom**a**ção | long**a** |
-| `ɐ͡j` | **ei**dético | dir**ei**ta | detect**ei** |
-| `ɐ̃` | **an**verso | viaj**an**te | af**ã** |
-| `ɐ͡j̃`| **an**gels | viag**en**s | tamb**ém** |
-| `ɐ͡w̃`| **hão** | significaç**ão**zinha | gab**ão** |
-| `ɐ͡w` | | s**au**dar | hell**o** |
-| `a͡j` | **ai**rosa | cultur**ai**s | v**ai** |
-| `ɔ` | **ho**ra | dep**ó**sito | l**ó** |
-| `ɔ͡j` | **ói**s | her**ói**co | d**ói** |
-| `a͡w` | **ou**tlook | inc**au**to | p**au** |
-| `ə` | **e**xtremo | sapr**e**mar | noit**e** |
-| `b` | **b**acalhau | ta**b**aco | clu**b** |
-| `d` | **d**ado | da**d**o | ban**d** |
-| `ɾ` | **r**ename | ve**r**ás | chuta**r** |
-| `e` | **e**clipse | hav**e**r | buff**et** |
-| `ɛ` | **e**co | hib**é**rnios | pat**é** |
-| `ɛ͡w` | | pirin**éu**s | escarc**éu** |
-| `ẽ` | **em**baçado | dirim**en**te | ám**en** |
-| `e͡w` | **eu** | d**eu**s | beb**eu** |
-| `f` | **f**im | e**f**icácia | gol**f** |
-| `g` | **g**adinho | ape**g**o | blo**g** |
-| `i` | **i**greja | aplaud**i**do | escrev**i** |
-| `ĩ` | **im**paciente | esp**in**çar | manequ**im** |
-| `i͡w` | | n**iu**e | garant**iu** |
-| `j` | **i**ode | desassoc**i**ado | substitu**i** |
-| `k` | **k**iwi | trafi**c**ado | sna**ck** |
-| `l` | **l**aborar | pe**l**ada | fu**ll** |
-| `ɫ` | | po**l**vo | brasi**l** |
-| `ʎ` | **lh**anamente | anti**lh**as | |
-| `m` | **m**aça | ama**nh**ã | mode**m** |
-| `n` | **n**utritivo | campa**n**a | sca**n** |
-| `ɲ` | **nh**ambu-grande | toalhi**nh**a | pe**nh** |
-| `o` | **o**fir | consumad**o**r | stacatt**o** |
-| `o͡j` | **oi**rar | n**oi**te | f**oi** |
-| `õ` | **om**brão | barr**on**da | d**om** |
-| `o͡j̃`| | ocupaç**õe**s | exp**õe** |
-| `p` | **p**ai | crá**p**ula | lapto**p** |
-| `ʀ` | **r**ecordar | gue**rr**a | chauffeu**r** |
-| `s` | **s**eco | gro**ss**eira | bo**ss** |
-| `ʃ` | **ch**uva | du**ch**ar | médio**s** |
-| `t` | **t**abaco | pelo**t**a | inpu**t** |
-| `u` | **u**bi | fac**u**ltativo | fad**o** |
-| `u͡j` | **ui**var | arr**ui**vado | f**ui** |
-| `ũ` | **um**bilical | f**un**cionar | fór**um** |
-| `u͡j̃`| | m**ui**to | |
-| `v` | **v**aca | combatí**v**el | pavlo**v** |
-| `w` | **w**affle | restit**u**ir | katofi**o** |
-| `z` | **z**âmbia | pra**z**er | ja**zz** |
+## ja-JP
+## pt-BR
-### [ru-RU](#tab/ru-RU)
+## pt-PT
-#### Vowels
+## ru-RU
-| `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3 |
-|-||-|-|
-| `a` | **а**дрес | р**а**дость | бед**а** |
-| `ʌ` | **о**блаков | з**а**стенчивость | внучк**а** |
-| `ə` | | ябл**о**чн**о**го | |
-| `ɛ` | **э**пос | б**е**лка | каф**е** |
-| `i` | **и**ней | л**и**ст | соловь**и** |
-| `ɪ` | **и**гра | м**е**дведь | мгновень**е** |
-| `ɨ` | **э**нергия | л**ы**с**ы**й | вес**ы** |
-| `ɔ` | **о**крик | м**о**т | весл**о** |
-| `u` | **у**жин | к**у**ст | пойд**у** |
+## zh-CN
-#### Consonants
+## zh-TW
-| `ipa` | Example&nbsp;1 | Example&nbsp;2 | Example&nbsp;3 |
-|-||-|-|
-| `p` | **п**рофессор | по**п**лавок | укро**п** |
-| `pʲ` | **П**етербург | осле**п**ительно | сте**пь** |
-| `b` | **б**ольшой | со**б**ака | |
-| `bʲ` | **б**елый | у**б**едить | |
-| `t` | **т**айна | с**т**аренький | тви**д** |
-| `tʲ` | **т**епло | учи**т**ель | сине**ть** |
-| `d` | **д**оверчиво | не**д**алеко | |
-| `dʲ` | **д**ядя | е**д**иница | |
-| `k` | **к**рыло | ку**к**уруза | кустарни**к** |
-| `kʲ` | **к**ипяток | неяр**к**ий | |
-| `g` | **г**роза | немно**г**о | |
-| `gʲ` | **г**ерань | помо**г**ите | |
-| `x` | **х**ороший | по**х**од | ду**х** |
-| `xʲ` | **х**илый | хи**х**иканье | |
-| `f` | **ф**антазия | шка**ф**ах | кро**в** |
-| `fʲ` | **ф**естиваль | ко**ф**е | вер**фь** |
-| `v` | **в**нучка | сине**в**а | |
-| `vʲ` | **в**ертеть | с**в**ет | |
-| `s` | **с**казочник | ле**с**ной | карапу**з** |
-| `sʲ` | **с**еять | по**с**ередине | зажгли**сь** |
-| `z` | **з**аяц | зве**з**да | |
-| `zʲ` | **з**емляника | со**з**ерцал | |
-| `ʂ` | **ш**уметь | п**ш**ено | мы**шь** |
-| `ʐ` | **ж**илище | кру**ж**евной | |
-| `t͡s` | **ц**елитель | Вене**ц**ия | незнакоме**ц** |
-| `t͡ɕ` | **ч**асы | о**ч**арование | мя**ч** |
-| `ɕː` | **щ**елчок | о**щ**у**щ**ать | ле**щ** |
-| `m` | **м**олодежь | нес**м**отря | то**м** |
-| `mʲ` | **м**еч | ды**м**ить | се**мь** |
-| `n` | **н**ачало | око**н**це | со**н** |
-| `nʲ` | **н**ебо | ли**н**ялый | тюле**нь** |
-| `l` | **л**ужа | до**л**гожитель | ме**л** |
-| `lʲ` | **л**ицо | неда**л**еко | со**ль** |
-| `r` | **р**адость | со**р**ока | дво**р** |
-| `rʲ` | **р**ябина | набе**р**ежная | две**рь** |
-| `j` | **е**сть | ма**я**к | игрушечны**й** |
***
cognitive-services Speech Synthesis Markup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-synthesis-markup.md
Phonetic alphabets are composed of phones, which are made up of letters, numbers
| Attribute | Description | Required or optional | |--|-||
-| `alphabet` | Specifies the phonetic alphabet to use when you synthesize the pronunciation of the string in the `ph` attribute. The string that specifies the alphabet must be specified in lowercase letters. The following options are the possible alphabets that you can specify:<ul><li>`ipa` &ndash; [International Phonetic Alphabet (IPA)](speech-ssml-phonetic-sets.md#speech-service-phonetic-alphabet)</li><li>`sapi` &ndash; [Speech service phonetic alphabet ](speech-ssml-phonetic-sets.md#speech-service-phonetic-alphabet)</li><li>`ups` &ndash; [Universal Phone Set](https://documentation.help/Microsoft-Speech-Platform-SDK-11/17509a49-cae7-41f5-b61d-07beaae872ea.htm)</li></ul><br>The alphabet applies only to the `phoneme` in the element.| Optional |
+| `alphabet` | Specifies the phonetic alphabet to use when you synthesize the pronunciation of the string in the `ph` attribute. The string that specifies the alphabet must be specified in lowercase letters. The following options are the possible alphabets that you can specify:<ul><li>`ipa` &ndash; See [SSML phonetic alphabets](speech-ssml-phonetic-sets.md)</li><li>`sapi` &ndash; See [SSML phonetic alphabets](speech-ssml-phonetic-sets.md)</li><li>`ups` &ndash; See [Universal Phone Set](https://documentation.help/Microsoft-Speech-Platform-SDK-11/17509a49-cae7-41f5-b61d-07beaae872ea.htm)</li></ul><br>The alphabet applies only to the `phoneme` in the element.| Optional |
| `ph` | A string containing phones that specify the pronunciation of the word in the `phoneme` element. If the specified string contains unrecognized phones, text-to-speech rejects the entire SSML document and produces none of the speech output specified in the document. | Required if using phonemes | **Examples**
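The two attributes above combine in a `phoneme` element like the following minimal SSML sketch; the voice name and the word being pronounced are illustrative, not from the source article:

```xml
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
  <voice name="en-US-JennyNeural">
    <!-- alphabet selects the phonetic alphabet; ph carries the phones -->
    <phoneme alphabet="ipa" ph="təˈmeɪtoʊ">tomato</phoneme>
  </voice>
</speak>
```

If the `ph` string contained a phone the service does not recognize, the whole SSML document would be rejected and no speech produced.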
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/v2-preview/quickstart.md
Translator is a cloud-based neural machine translation service that is part of t
## Prerequisites
-To use the [Custom Translator](https://preview.portal.customtranslator.azure.ai/) preview portal, you will need the following:
+To use the [Custom Translator](https://portal.customtranslator.azure.ai/) preview portal, you will need the following:
* A [Microsoft account](https://signup.live.com).
To use the [Custom Translator](https://preview.portal.customtranslator.azure.ai/
See [how to create a Translator resource](../../translator-how-to-signup.md).
-Once you have the above prerequisites, sign in to the [Custom Translator](https://preview.portal.customtranslator.azure.ai/) preview portal to create workspaces, build projects, upload files, train models, and publish your custom solution.
+Once you have the above prerequisites, sign in to the [Custom Translator](https://portal.customtranslator.azure.ai/) preview portal to create workspaces, build projects, upload files, train models, and publish your custom solution.
You can read an overview of translation and custom translation, learn some tips, and watch a getting started video in the [Azure AI technical blog](https://techcommunity.microsoft.com/t5/azure-ai/customize-a-translation-to-make-sense-in-a-specific-context/ba-p/2811956).
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/language-support.md
| Turkish | `tr` |✔|✔|✔|✔|✔| | Turkmen | `tk` |✔|||| | Ukrainian | `uk` |✔|✔|✔|✔|✔|
+| 🆕 </br> Upper Sorbian | `hsb` |✔|||||
| Urdu | `ur` |✔|✔|✔|✔|✔| | Uyghur | `ug` |✔|||| | Uzbek (Latin) | `uz` |✔|||✔||
cognitive-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/how-to/call-api.md
Previously updated : 01/07/2022 Last updated : 02/16/2022
See the [application development lifecycle](../overview.md#application-developme
## Deploy your model
-1. Go to your project in [Language studio](https://aka.ms/custom-extraction)
+Go to your project in [Language studio](https://aka.ms/custom-extraction).
-2. Select **Deploy model** from the left side menu.
-3. Select the model you want to deploy, then select **Deploy model**. If you deploy your model through the Language Studio, your `deployment-name` is `prod`.
+If you deploy your model through the Language Studio, your `deployment-name` is `prod`.
> [!TIP] > You can test your model in Language Studio by sending samples of text for it to classify.
cognitive-services Create Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/how-to/create-project.md
For information on authorizing access to your Azure blob storage account and dat
## Create a custom named entity recognition project
+Once your resource and storage container are configured, create a new custom NER project. A project is a work area for building your custom AI models based on your data. Your project can only be accessed by you and others who have contributor access to the Azure resource being used.
+ [!INCLUDE [Create custom NER project](../includes/create-project.md)]
+Review the data you entered and select **Create Project**.
+ ## Next steps After your project is created, you can start [tagging your data](tag-data.md), which will inform your entity extraction model how to interpret text, and is used for training and evaluation.
cognitive-services Cognitive Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/tutorials/cognitive-search.md
Previously updated : 02/02/2022 Last updated : 02/04/2022
In this tutorial, you learn how to:
## Create a custom NER project through Language studio
-1. Sign in to [Language Studio](https://aka.ms/languageStudio). A window will appear to let you select your subscription and Language resource. Select the resource you created in the above step.
-2. Under the **Extract information** section of Language Studio, select **custom named entity recognition** from the available services, and select it.
-
-3. Select **Create new project** from the top menu in your projects page. Creating a project will let you tag data, train, evaluate, improve, and deploy your models.
-
-4. If you've created your resource using the steps above in this [guide](../how-to/create-project.md#azure-resources), the **Connect storage** step will be completed already. If not, you need to assign [roles for your storage account](../how-to/create-project.md#required-roles-for-your-storage-account) before connecting it to your resource
-
-5. Enter project information, including a name, description, and the language of the files in your project. You won't be able to change the name of your project later.
-    >[!TIP]
-    > Your dataset doesn't have to be entirely in the same language. You can have multiple files, each with different supported languages. If your dataset contains files of different languages or if you expect different languages during runtime, select **enable multi-lingual dataset** when you enter the basic information for your project.
-
-6. Select the container where you've uploaded your data. For this tutorial we'll use the tags file you downloaded from the sample data.
-
-7. Review the data you entered and select **Create Project**.
+Select the container where you've uploaded your data. For this tutorial we'll use the tags file you downloaded from the sample data. Review the data you entered and select **Create Project**.
## Train your model
In this tutorial, you learn how to:
## Deploy your model
-1. Select **Deploy model** from the left side menu.
-2. Select the model you want to deploy and from the top menu click on **Deploy model**. If you deploy your model through Language Studio, your `deployment-name` will be `prod`.
+If you deploy your model through Language Studio, your `deployment-name` will be `prod`.
## Use CogSvc language utilities tool for Cognitive search integration
communication-services Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/authentication.md
To set up a service principal, [create a registered application from the Azure C
Communication Services supports Azure AD authentication but does not support managed identity for Communication Services resources. You can find more details about managed identity support in the [Azure Active Directory documentation](../../active-directory/managed-identities-azure-resources/services-support-managed-identities.md).
+Use our [Trusted authentication service hero sample](../samples/trusted-auth-sample.md) to map Azure Communication Services access tokens with your Azure Active Directory.
+ ### User Access Tokens User access tokens are generated using the Identity SDK and are associated with users created in the Identity SDK. See an example of how to [create users and generate tokens](../quickstarts/access-tokens.md). Then, user access tokens are used to authenticate participants added to conversations in the Chat or Calling SDK. For more information, see [add chat to your app](../quickstarts/chat/get-started.md). User access token authentication is different compared to access key and Azure AD authentication in that it is used to authenticate a user rather than a secured Azure resource.
The user identity is intended to act as a primary key for logs and metrics colle
> [!div class="nextstepaction"] > [Create and manage Communication Services resources](../quickstarts/create-communication-resource.md)+
+> [!div class="nextstepaction"]
> [Create an Azure Active Directory service principal application from the Azure CLI](../quickstarts/identity/service-principal-from-cli.md)
-> [Create User Access Tokens](../quickstarts/access-tokens.md)
+
+> [!div class="nextstepaction"]
+> [Create user access tokens](../quickstarts/access-tokens.md)
+
+> [!div class="nextstepaction"]
+> [Trusted authentication service hero sample](../samples/trusted-auth-sample.md)
For more information, see the following articles:-- [Learn about client and server architecture](../concepts/client-and-server-architecture.md)
+- [Learn about client and server architecture](../concepts/client-and-server-architecture.md)
communication-services Client And Server Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/client-and-server-architecture.md
Azure Communication Services clients must present `user access tokens` to access
- **Concept:** [User Identity](identity-model.md) - **Quickstart:** [Create and manage access tokens](../quickstarts/access-tokens.md) - **Tutorial:** [Build an identity management service using Azure Functions](../tutorials/trusted-service-tutorial.md)
+- **Sample:** [Trusted authentication service hero sample](../samples/trusted-auth-sample.md)
> [!IMPORTANT] > For simplicity, we do not show user access management and token distribution in subsequent architecture flows.
communication-services Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/known-issues.md
Chrome version 98 introduced a regression with abnormal generation of video keyfr
### Some Android devices failing to join calls and meetings.
-A number of specific Android devices fail to start, join or accept calls and meetings. The devices that run into this issue, won't recover and will fail on every attempt. These are mostly Samsung moodel A devices, particularly models A326U, A125U and A215U.
+A number of specific Android devices fail to start, join or accept calls and meetings. The devices that run into this issue, won't recover and will fail on every attempt. These are mostly Samsung model A devices, particularly models A326U, A125U and A215U.
- This is a known regression introduced on [Chromium](https://bugs.chromium.org/p/webrtc/issues/detail?id=13223). ### iOS 15.1 users joining group calls or Microsoft Teams meetings.
communication-services Sub Eligibility Number Capability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/sub-eligibility-number-capability.md
The tables below summarize current availability:
|:|:|:|:|:|:| |Denmark |Toll-Free | | |Public Preview |Public Preview* | |Denmark |Local | | |Public Preview |Public Preview* |
-|USA (includes PR) |Toll-Free |GA |GA |Public Preview |Public Preview* |
-|USA (includes PR) |Local | | |Public Preview |Public Preview* |
*** \* Available through Azure Bot Framework and Dynamics only
communication-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/overview.md
After creating a Communication Services resource you can start building client s
|**[Join your calling app to a Teams meeting](./quickstarts/voice-video-calling/get-started-teams-interop.md)**|Azure Communication Services can be used to build custom meeting experiences that interact with Microsoft Teams. Users of your Communication Services solution(s) can interact with Teams participants over voice, video, chat, and screen sharing.| |**[Get started with chat](./quickstarts/chat/get-started.md)**|The Azure Communication Services Chat SDK is used to add rich real-time text chat into your applications.| |**[Connect a Microsoft Bot to a phone number](https://github.com/microsoft/botframework-telephony)**|Telephony channel is a channel in Microsoft Bot Framework that enables the bot to interact with users over the phone. It leverages the power of Microsoft Bot Framework combined with the Azure Communication Services and the Azure Speech Services. |
+| **[Add visual communication experiences](https://aka.ms/acsstorybook)** | The UI Library for Azure Communication Services enables you to easily add rich, visual communication experiences to your applications for both calling and chat. |
## Samples
Learn more about the Azure Communication Services SDKs with the resources below.
|**[Calling SDK overview](./concepts/voice-video-calling/calling-sdk-features.md)**|Review the Communication Services Calling SDK overview.| |**[Chat SDK overview](./concepts/chat/sdk-features.md)**|Review the Communication Services Chat SDK overview.| |**[SMS SDK overview](./concepts/sms/sdk-features.md)**|Review the Communication Services SMS SDK overview.|
|**[UI Library overview](https://aka.ms/acsstorybook)**| Review the UI Library for Communication Services. |
## Other Microsoft Communication Services
communication-services Access Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/access-tokens.md
You might also want to:
- [Learn about authentication](../concepts/authentication.md) - [Add chat to your app](./chat/get-started.md) - [Learn about client and server architecture](../concepts/client-and-server-architecture.md)
+ - [Deploy trusted authentication service hero sample](../samples/trusted-auth-sample.md)
communication-services Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/chat/get-started.md
In this quickstart you learned how to:
You may also want to:
+ - Get started with the [UI Library](https://aka.ms/acsstorybook)
- Learn about [chat concepts](../../concepts/chat/concepts.md) - Familiarize yourself with [Chat SDK](../../concepts/chat/sdk-features.md) - Using [Chat SDK in your React Native](./react-native.md) application.
communication-services Quick Create Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/identity/quick-create-identity.md
You may also want to:
- [Learn about authentication](../../concepts/authentication.md) - [Learn about client and server architecture](../../concepts/client-and-server-architecture.md)
+ - [Deploy trusted authentication service hero sample](../../samples/trusted-auth-sample.md)
communication-services Get Started Teams Interop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/get-started-teams-interop.md
Title: Quickstart - Teams interop on Azure Communication Services
-description: In this quickstart, you'll learn how to join an Teams meeting with the Azure Communication Calling SDK.
+description: In this quickstart, you'll learn how to join a Teams meeting with the Azure Communication Calling SDK.
Last updated 06/30/2021
If you want to clean up and remove a Communication Services subscription, you ca
For more information, see the following articles: - Check out our [calling hero sample](../../samples/calling-hero-sample.md)
+- Get started with the [UI Library](https://aka.ms/acsstorybook)
- Learn about [Calling SDK capabilities](./getting-started-with-calling.md) - Learn more about [how calling works](../../concepts/voice-video-calling/about-call-types.md)
communication-services Get Started With Video Calling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/get-started-with-video-calling.md
If you want to clean up and remove a Communication Services subscription, you ca
## Next steps For more information, see the following articles: -- Check out our [web calling sample](../../samples/web-calling-sample.md)
+- Check out our [calling hero sample](../../samples/calling-hero-sample.md)
+- Get started with the [UI Library](https://aka.ms/acsstorybook)
- Learn about [Calling SDK capabilities](./getting-started-with-calling.md?pivots=platform-web) - Learn more about [how calling works](../../concepts/voice-video-calling/about-call-types.md)
communication-services Getting Started With Calling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/getting-started-with-calling.md
If you want to clean up and remove a Communication Services subscription, you ca
For more information, see the following articles: - Check out our [calling hero sample](../../samples/calling-hero-sample.md)
+- Get started with the [UI Library](https://aka.ms/acsstorybook)
- Learn about [Calling SDK capabilities]() - Learn more about [how calling works](../../concepts/voice-video-calling/about-call-types.md)
communication-services Calling Hero Sample https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/samples/calling-hero-sample.md
Title: Group calling hero sample
+ Title: Calling hero sample
description: Overview of calling hero sample using Azure Communication Services to enable developers to learn more about the inner workings of the sample. -+
zone_pivot_groups: acs-web-ios-android
-# Get started with the group calling hero sample
+# Get started with the calling hero sample
::: zone pivot="platform-web" [!INCLUDE [Web Calling Hero Sample](./includes/web-calling-hero.md)]
communication-services Chat Hero Sample https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/samples/chat-hero-sample.md
Title: Group Chat Hero Sample
+ Title: Chat Hero Sample
description: Overview of chat hero sample using Azure Communication Services to enable developers to learn more about the inner workings of the sample and learn how to modify it. -+
-# Get started with the group chat hero sample
+# Get started with the chat hero sample
> [!IMPORTANT] > [This sample is available **on GitHub**.](https://github.com/Azure-Samples/communication-services-web-chat-hero)
When you press the "Start a Chat" button, the web application fetches a user acc
:::image type="content" source="./media/chat/pre-chat.png" alt-text="Screenshot showing the application's pre-chat screen.":::
-Once your configure your display name and emoji, you can join the chat session. Now you will see the main chat canvas where the core chat experience lives.
+Once you configure your display name and emoji, you can join the chat session. Now you will see the main chat canvas where the core chat experience lives.
Components of the main chat screen:
You can test the sample locally by opening multiple browser sessions with the UR
1. Open an instance of PowerShell, Windows Terminal, Command Prompt or equivalent and navigate to the directory that you'd like to clone the sample to. 2. `git clone https://github.com/Azure-Samples/communication-services-web-chat-hero.git`
-3. Get the `Connection String` from the Azure portal. For more information on connection strings, see [Create an Azure Communication Services resources](../quickstarts/create-communication-resource.md)
-4. Once you get the `Connection String`, Add the connection string to the **Chat/appsettings.json** file found under the Chat folder. Input your connection string in the variable: `ResourceConnectionString`.
+3. Get the `Connection String` and `Endpoint URL` from the Azure portal. For more information on connection strings, see [Create an Azure Communication Services resource](../quickstarts/create-communication-resource.md)
+4. Once you get the `Connection String` and `Endpoint URL`, add both values to the **Server/appsettings.json** file found under the Chat Hero Sample folder. Input your connection string in the variable `ResourceConnectionString` and the endpoint URL in the variable `EndpointUrl`.
### Local run
For more information, see the following articles:
- Learn about [chat concepts](../concepts/chat/concepts.md) - Familiarize yourself with our [Chat SDK](../concepts/chat/sdk-features.md)-- Review the [Contoso Med App](https://github.com/Azure-Samples/communication-services-contoso-med-app) sample
+- Check out the chat components in the [UI Library](https://azure.github.io/communication-ui-library/)
## Additional reading
communication-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/samples/overview.md
Azure Communication Services has many samples available, which you can use to te
| Sample Name | Description | Languages/Platforms Available | | : | : | : |
-| [Group Calling Hero Sample](./calling-hero-sample.md) | Provides a sample of creating a group calling application. | [Web](https://github.com/Azure-Samples/communication-services-web-calling-hero), [iOS](https://github.com/Azure-Samples/communication-services-ios-calling-hero), [Android](https://github.com/Azure-Samples/communication-services-android-calling-hero) |
+| [Calling Hero Sample](./calling-hero-sample.md) | Provides a sample of creating a calling application. | [Web](https://github.com/Azure-Samples/communication-services-web-calling-hero), [iOS](https://github.com/Azure-Samples/communication-services-ios-calling-hero), [Android](https://github.com/Azure-Samples/communication-services-android-calling-hero) |
+| [Chat Hero Sample](./chat-hero-sample.md) | Provides a sample of creating a chat application. | [Web](https://github.com/Azure-Samples/communication-services-web-chat-hero) |
+| [Trusted Authentication Server Sample](./trusted-auth-sample.md) | Provides a sample implementation of a trusted authentication service used to generate user and access tokens for Azure Communication Services. By default, the service maps generated identities to Azure Active Directory. | [Node.js](https://github.com/Azure-Samples/communication-services-authentication-hero-nodejs), [C#](https://github.com/Azure-Samples/communication-services-authentication-hero-csharp) |
| [Web Calling Sample](./web-calling-sample.md) | A step by step walk-through of ACS Calling features, including PSTN, within the Web. | [Web](https://github.com/Azure-Samples/communication-services-web-calling-tutorial/) |
-| [Chat Hero Sample](./chat-hero-sample.md) | Provides a sample of creating a chat application. | [Web](https://github.com/Azure-Samples/communication-services-web-chat-hero) |
-| [Contoso Medical App](https://github.com/Azure-Samples/communication-services-contoso-med-app) | Sample app demonstrating a patient-doctor flow. | Web & Node.js |
-| [Contoso Retail App](https://github.com/Azure-Samples/communication-services-contoso-retail-app) | Sample app demonstrating a retail support flow. | ASP.NET, .NET Core, JavaScript/Web |
-| [WPF Calling Sample](https://github.com/Azure-Samples/communication-services-web-calling-wpf-sample) | Sample app for Windows demonstrating calling functionality | WPF / Node.js |
| [Network Traversal Sample]( https://github.com/Azure-Samples/communication-services-network-traversal-hero) | Sample app demonstrating network traversal functionality | Node.js ## Quickstart samples
communication-services Trusted Auth Sample https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/samples/trusted-auth-sample.md
+
+ Title: Trusted Authentication Service Hero Sample
+
+description: Overview of trusted authentication services hero sample using Azure Communication Services.
+++++ Last updated : 06/30/2021+++
+zone_pivot_groups: acs-js-csharp
++
+# Get started with the trusted authentication service hero sample
+
+> [!IMPORTANT]
+> This sample is available **on GitHub** for [Node.js](https://github.com/Azure-Samples/communication-services-authentication-hero-nodejs) and [C#](https://github.com/Azure-Samples/communication-services-authentication-hero-csharp).
+
+## Overview
+
+Azure Communication Services requires developers to generate user and access token credentials inside of a trusted authentication service. Azure Communication Services is identity-agnostic; to learn more, check out our [conceptual documentation](../concepts/identity-model.md).
+
+This repository provides a sample of a server implementation of an authentication service for Azure Communication Services. It uses best practices to build a trusted backend service that issues Azure Communication Services credentials and maps them to Azure Active Directory identities.
+
+This sample can help you in the following scenarios:
+- As a developer, you need to enable an authentication flow to generate Azure Communication Services user identities mapped to an Azure Active Directory identity. Using this identity, you then provision access tokens to be used in calling and chat experiences.
+- As a developer, you need to enable an authentication flow for a Custom Teams Endpoint, which is done by using a Microsoft 365 Azure Active Directory identity of a Teams user to fetch an Azure Communication Services token and join Teams calling/chat.
+
+> [!NOTE]
+> If you are looking to get started with Azure Communication Services, but are still in the learning or prototyping phase, check out our [quickstarts for getting started with Azure Communication Services users and access tokens](../quickstarts/access-tokens.md?pivots=programming-language-csharp).
+
+![Screenshot of the Azure Communication Services Authentication Server Sample Architecture](./media/auth/acs-authentication-server-sample-overview-flow.png)
+
+Since this sample focuses only on the server APIs, the client application is not part of it. If you want to add a client application that signs users in with Azure Active Directory, follow the MSAL samples [here](https://github.com/AzureAD/microsoft-authentication-library-for-js).
+
+## Prerequisites
+
+To be able to run this sample, you will need to:
+
+- Register client and server (Web API) applications in Azure Active Directory as part of the [On-Behalf-Of workflow](https://docs.microsoft.com/azure/active-directory/develop/v2-oauth2-on-behalf-of-flow). Follow the instructions in the [app registrations setup guide](https://github.com/Azure-Samples/communication-services-authentication-hero-csharp/blob/main/docs/deployment-guides/set-up-app-registrations.md).
+- A deployed Azure Communication Services resource. [Create an Azure Communication Services resource](../quickstarts/create-communication-resource.md?tabs=linux&pivots=platform-azp).
+- Update the Server (Web API) application with information from the app registrations.
+
+
cosmos-db Emulator Command Line Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/emulator-command-line-parameters.md
To view the list of options, type `Microsoft.Azure.Cosmos.Emulator.exe /?` at th
| ComputePort | Specifies the port number to use for the Compute Interop Gateway service. The Gateway's HTTP endpoint probe port is calculated as ComputePort + 79. Hence, ComputePort and ComputePort + 79 must be open and available. The default value is 8900. | Microsoft.Azure.Cosmos.Emulator.exe /ComputePort=\<computeport\> | \<computeport\>: Single port number | | EnableMongoDbEndpoint=3.2 | Enables MongoDB API 3.2 | Microsoft.Azure.Cosmos.Emulator.exe /EnableMongoDbEndpoint=3.2 | | | EnableMongoDbEndpoint=3.6 | Enables MongoDB API 3.6 | Microsoft.Azure.Cosmos.Emulator.exe /EnableMongoDbEndpoint=3.6 | |
+| EnableMongoDbEndpoint=4.0 | Enables MongoDB API 4.0 | Microsoft.Azure.Cosmos.Emulator.exe /EnableMongoDbEndpoint=4.0 | |
| MongoPort | Specifies the port number to use for MongoDB compatibility API. Default value is 10255. |Microsoft.Azure.Cosmos.Emulator.exe /MongoPort=\<mongoport\>|\<mongoport\>: Single port number| | EnableCassandraEndpoint | Enables Cassandra API | Microsoft.Azure.Cosmos.Emulator.exe /EnableCassandraEndpoint | | | CassandraPort | Specifies the port number to use for the Cassandra endpoint. Default value is 10350. | Microsoft.Azure.Cosmos.Emulator.exe /CassandraPort=\<cassandraport\> | \<cassandraport\>: Single port number |
cosmos-db Partial Document Update Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/partial-document-update-getting-started.md
if (response.isSuccessStatusCode()) {
Support for Partial document update (Patch API) in the [Azure Cosmos DB JavaScript SDK](sql/sql-api-sdk-node.md) is available from version *3.15.0* onwards. You can download it from the [NPM Registry](https://www.npmjs.com/package/@azure/cosmos/v/3.15.0) > [!NOTE]
-> A complete partial document update sample can be found in the [.js v3 samples repository](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/cosmosdb/cosmos/samples/v3/typescript/src/ItemManagement.ts#L167) on GitHub.
+> A complete partial document update sample can be found in the [.js v3 samples repository](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/cosmosdb/cosmos/samples/v3/typescript/src/ItemManagement.ts#L167) on GitHub. In the sample, because the container is created without a partition key specified, the JavaScript SDK resolves the partition key values from the items through the container's partition key definition.
**Executing a single patch operation**
cosmos-db Partial Document Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/partial-document-update.md
# Partial document update in Azure Cosmos DB+ [!INCLUDE[appliesto-sql-api](includes/appliesto-sql-api.md)]
-Azure Cosmos DB Partial Document Update feature (also known as Patch API) provides a convenient way to modify a document in a container. Currently, to update a document the client needs to read it, execute Optimistic Concurrency Control checks (if necessary), update the document locally and then send it over the wire as a whole document Replace API call.
+Azure Cosmos DB Partial Document Update feature (also known as Patch API) provides a convenient way to modify a document in a container. Currently, to update a document the client needs to read it, execute Optimistic Concurrency Control checks (if necessary), update the document locally and then send it over the wire as a whole document Replace API call.
Partial document update feature improves this experience significantly. The client can only send the modified properties/fields in a document without doing a full document replace operation. Key benefits of this feature include: -- **Improved developer productivity**: Provides a convenient API for ease of use and the ability to conditionally update the document.
+- **Improved developer productivity**: Provides a convenient API for ease of use and the ability to conditionally update the document.
- **Performance improvements**: Avoids extra CPU cycles on the client side, reduces end-to-end latency and network bandwidth. - **Multi-region writes**: Supports automatic and transparent conflict resolution with partial updates on discrete paths within the same document.
-
+
+> [!NOTE]
+> *Partial document update* operation is based on the [RFC spec](https://www.rfc-editor.org/rfc/rfc6902#appendix-A.14). Per the JSON Pointer syntax it uses, escape a `~` character in a path as `~0` and a `/` character as `~1`.
+
+An example target JSON document:
+
+```json
+{
+ "/": 9,
+ "~1": 10
+}
+```
+
+A JSON Patch document:
+
+```json
+[{ "op": "test", "path": "/~01", "value": 10 }]
+```
+
+The resulting JSON document:
+
+```json
+{
+ "/": 9,
+ "~1": 10
+}
+```
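To make the escaping rule above concrete, here is a small illustrative Python sketch (not part of any Cosmos DB SDK) that decodes a JSON Pointer reference token:

```python
def decode_pointer_token(token: str) -> str:
    """Decode one JSON Pointer reference token (RFC 6901).

    Order matters: "~1" must be decoded before "~0", otherwise
    "~01" would wrongly decode to "/" instead of "~1".
    """
    return token.replace("~1", "/").replace("~0", "~")

# The patch path "/~01" from the example above addresses the key "~1":
print(decode_pointer_token("~01"))  # -> ~1
```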
+ ## Supported operations The table below summarizes the operations supported by this feature.
-> [!NOTE]
-> *target path* refers to a location within the JSON document
+> [!NOTE]
+> _target path_ refers to a location within the JSON document
-| **Operation type** | **Description** |
-| | -- |
-| **Add** | `Add` performs one of the following, depending on the target path: <br/><ul><li>If the target path specifies an element that does not exist, it is added.</li><li>If the target path specifies an element that already exists, its value is replaced.</li><li>If the target path is a valid array index, a new element will be inserted into the array at the specified index. It shifts existing elements to the right.</li><li>If the index specified is equal to the length of the array, it will append an element to the array. Instead of specifying an index, you can also use the `-` character. It will also result in the element being appended to the array.</li></ul> <br/> **Note**: Specifying an index greater than the array length will result in an error.|
-| **Set** | `Set` operation is similar to `Add` except in the case of Array data type - if the target path is a valid array index, the existing element at that index is updated.|
-| **Replace** | `Replace` operation is similar to `Set` except it follows _strict_ replace only semantics. In case the target path specifies an element or an array that does not exist, it results in an error. |
-| **Remove** | `Remove` performs one of the following, depending on the target path: <br/><ul><li>If the target path specifies an element that does not exist, it results in an error. </li><li> If the target path specifies an element that already exists, it is removed. </li><li> If the target path is an array index, it will be deleted and any elements above the specified index are shifted one position to the left.</li></ul> <br/> **Note**: Specifying an index equal to or greater than the array length would result in an error. |
-| **Increment** | This operator increments a field by the specified value. It can accept both positive and negative values. If the field does not exist, it creates the field and sets it to the specified value. |
+| **Operation type** | **Description** |
+| --- | --- |
+| **Add** | `Add` performs one of the following, depending on the target path: <br/><ul><li>If the target path specifies an element that does not exist, it is added.</li><li>If the target path specifies an element that already exists, its value is replaced.</li><li>If the target path is a valid array index, a new element will be inserted into the array at the specified index. It shifts existing elements to the right.</li><li>If the index specified is equal to the length of the array, it will append an element to the array. Instead of specifying an index, you can also use the `-` character. It will also result in the element being appended to the array.</li></ul> <br/> **Note**: Specifying an index greater than the array length will result in an error. |
+| **Set** | `Set` operation is similar to `Add` except in the case of Array data type - if the target path is a valid array index, the existing element at that index is updated. |
+| **Replace** | `Replace` operation is similar to `Set` except it follows _strict_ replace only semantics. In case the target path specifies an element or an array that does not exist, it results in an error. |
+| **Remove** | `Remove` performs one of the following, depending on the target path: <br/><ul><li>If the target path specifies an element that does not exist, it results in an error. </li><li> If the target path specifies an element that already exists, it is removed. </li><li> If the target path is an array index, it will be deleted and any elements above the specified index are shifted one position to the left.</li></ul> <br/> **Note**: Specifying an index equal to or greater than the array length would result in an error. |
+| **Increment** | This operator increments a field by the specified value. It can accept both positive and negative values. If the field does not exist, it creates the field and sets it to the specified value. |
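The `Increment` semantics in the table can be sketched as follows (an illustrative model of the behavior, not the server implementation):

```python
def patch_increment(doc: dict, field: str, delta) -> None:
    # If the field doesn't exist, it is created and set to delta;
    # otherwise its value is incremented by delta (positive or negative).
    doc[field] = doc.get(field, 0) + delta

profile = {"score": 10}
patch_increment(profile, "score", -3)   # existing field: 10 -> 7
patch_increment(profile, "visits", 1)   # missing field: created as 1
print(profile)  # -> {'score': 7, 'visits': 1}
```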
## Supported modes Partial document update feature supports the following modes of operation. Refer to the [Getting Started](partial-document-update-getting-started.md) document for code examples.
-* **Single document patch**: You can patch a single document based on its ID and the partition key. It is possible to execute multiple patch operations on a single document. The [maximum limit is 10 operations](partial-document-update-faq.yml#is-there-a-limit-to-the-number-of-partial-document-update-operations-).
-
-* **Multi-document patch**: Multiple documents within the same partition key can be patched as a [part of a transaction](transactional-batch.md). This transaction will be committed only if all the operations succeed in the order they are described. If any operation fails, the entire transaction is rolled back.
+- **Single document patch**: You can patch a single document based on its ID and the partition key. It is possible to execute multiple patch operations on a single document. The maximum limit is 10 operations.
-* **Conditional Update** For the aforementioned modes, it is also possible to add a SQL-like filter predicate (for example, *from c where c.taskNum = 3*) such that the operation fails if the pre-condition specified in the predicate is not satisfied.
+- **Multi-document patch**: Multiple documents within the same partition key can be patched as a [part of a transaction](transactional-batch.md). This transaction will be committed only if all the operations succeed in the order they are described. If any operation fails, the entire transaction is rolled back.
-* You can also use the bulk APIs of supported SDKs to execute one or more patch operations on multiple documents.
+- **Conditional Update**: For the modes above, it is also possible to add a SQL-like filter predicate (for example, _from c where c.taskNum = 3_) such that the operation fails if the pre-condition specified in the predicate is not satisfied.
+- You can also use the bulk APIs of supported SDKs to execute one or more patch operations on multiple documents.
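As an illustration of a conditional update, a patch request body might pair the filter predicate with the operations list. This is a sketch only: the `condition` field name is an assumption based on the SDKs' filter predicate option, so check the REST reference for the authoritative shape:

```json
{
  "condition": "from c where c.taskNum = 3",
  "operations": [
    { "op": "set", "path": "/status", "value": "done" }
  ]
}
```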
## Similarities and differences ### Add vs Set
-`Set` operation is similar to `Add` for all data types except `Array`. An `Add` operation at any (valid) index, results in the addition of an element at the specified index and any existing elements in array end up shifting to the right. This is in contrast to `Set` operation that updates the existing element at the specified index.
+`Set` operation is similar to `Add` for all data types except `Array`. An `Add` operation at any (valid) index, results in the addition of an element at the specified index and any existing elements in array end up shifting to the right. This is in contrast to `Set` operation that updates the existing element at the specified index.
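A minimal illustrative sketch (not SDK code) of the array difference described above:

```python
def patch_add(arr: list, index: int, value) -> None:
    # Add inserts at index and shifts existing elements right.
    arr.insert(index, value)

def patch_set(arr: list, index: int, value) -> None:
    # Set replaces the existing element at index.
    arr[index] = value

a = [1, 2, 3]
patch_add(a, 1, 9)
print(a)  # -> [1, 9, 2, 3]

b = [1, 2, 3]
patch_set(b, 1, 9)
print(b)  # -> [1, 9, 3]
```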
### Add vs Replace
Partial document update feature supports the following modes of operation. Refer
`Set` operation adds a property if it doesn't already exist (except if there was an `Array`). `Replace` operation will fail if the property does not exist (applies to `Array` data type as well).
-> [!NOTE]
+> [!NOTE]
> `Replace` is a good candidate where the user expects some of the properties to be always present and allows you to assert/enforce that. ## REST API reference for Partial document update
The [Azure Cosmos DB REST API](/rest/api/cosmos-db/) provides programmatic acces
For example, here is what the request looks like for a `set` operation using Partial document update. ```json
-PATCH https://querydemo.documents.azure.com/dbs/FamilyDatabase/colls/FamilyContainer/docs/Andersen.1 HTTP/1.1
-x-ms-documentdb-partitionkey: ["Andersen"]
-x-ms-date: Tue, 29 Mar 2016 02:28:29 GMT
+PATCH https://querydemo.documents.azure.com/dbs/FamilyDatabase/colls/FamilyContainer/docs/Andersen.1 HTTP/1.1
+x-ms-documentdb-partitionkey: ["Andersen"]
+x-ms-date: Tue, 29 Mar 2016 02:28:29 GMT
Authorization: type%3dmaster%26ver%3d1.0%26sig%3d92WMAkQv0Zu35zpKZD%2bcGSH%2b2SXd8HGxHIvJgxhO6%2fs%3d Content-Type:application/json_patch+json
-Cache-Control: no-cache
-User-Agent: Microsoft.Azure.DocumentDB/2.16.12
-x-ms-version: 2015-12-16
-Accept: application/json
-Host: querydemo.documents.azure.com
-Cookie: x-ms-session-token#0=602; x-ms-session-token=602
-Content-Length: calculated when request is sent
+Cache-Control: no-cache
+User-Agent: Microsoft.Azure.DocumentDB/2.16.12
+x-ms-version: 2015-12-16
+Accept: application/json
+Host: querydemo.documents.azure.com
+Cookie: x-ms-session-token#0=602; x-ms-session-token=602
+Content-Length: calculated when request is sent
Connection: keep-alive
-
+ {"operations":[{ "op" :"set", "path":"/Parents/0/FamilyName","value":"Bob" }]} ```
-## Document level vs path level conflict resolution
+## Document level vs path level conflict resolution
If your Azure Cosmos DB account is configured with multiple write regions, [conflicts and conflict resolution policies](conflict-resolution-policies.md) are applicable at the document level, with Last Write Wins (`LWW`) being the default conflict resolution policy. For partial document updates, patch operations across multiple regions detect and resolve conflicts at a more granular path level.
This can be better understood with an example.
Assume that you have following document in Azure Cosmos DB: ```json
-{ΓÇ»
-   "id":1,
-   "name":"John Doe",
-   "email":"jdoe@contoso.com",
-   "phone":[ 
-      "12345",
- "67890"
-   ],
-   "level":"gold"
-}
+{
+ "id": 1,
+ "name": "John Doe",
+ "email": "jdoe@contoso.com",
+ "phone": ["12345", "67890"],
+ "level": "gold"
+}
``` The below Patch operations are issued concurrently by different clients in different regions: -- `Set` attribute `/level` to platinum
+- `Set` attribute `/level` to platinum
- `Remove` 67890 from `/phone` :::image type="content" source="./media/partial-document-update/patch-multi-region-conflict-resolution.png" alt-text="An image that shows conflict resolution in concurrent multi-region partial update operations." border="false" lightbox="./media/partial-document-update/patch-multi-region-conflict-resolution.png":::
-Since Patch requests were made to non-conflicting paths within the document, these will be conflict resolved automatically and transparently (as opposed to Last Writer Wins at a document level).
+Since Patch requests were made to non-conflicting paths within the document, these will be conflict resolved automatically and transparently (as opposed to Last Writer Wins at a document level).
The client will see the following document after conflict resolution: ```json
-{ΓÇ»
-   "id":1,
-   "name":"John Doe",
-   "email":"jdoe@contoso.com",
-   "phone":[ 
-      "12345"
-   ],
-   "level":"platinum",
-}
+{
+ "id": 1,
+ "name": "John Doe",
+ "email": "jdoe@contoso.com",
+ "phone": ["12345"],
+ "level": "platinum"
+}
``` > [!NOTE]
cosmos-db Modeling Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/modeling-data.md
Now, if I had an author, I immediately know which books they've written, and con
## Hybrid data models
-We've now looked embedding (or denormalizing) and referencing (or normalizing) data, each have their upsides and each have compromises as we've seen.
+We've now looked at embedding (or denormalizing) and referencing (or normalizing) data; each has its upsides and compromises, as we've seen.
It doesn't always have to be either or, don't be scared to mix things up a little.
cost-management-billing Reservation Renew https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/reservation-renew.md
You can renew reservations to automatically purchase a replacement when an exist
Renewing a reservation creates a new reservation when the existing reservation expires. It doesn't extend the term of the existing reservation.
-Opt in to automatically renew at any time. The renewal price is available 30 days before the expiry of existing reservation. When you enable renewal more than 30 days before the reservation expiration, you're sent an email detailing renewal costs 30 days before expiration. The reservation price might change between the time that you lock the renewal price and the renewal time. If so, your renewal cost is the lower of the two costs. You can make changes to the reservation quantity. If you do, the renewal is updated to use the in-market price set at the time of the quantity change.
+Opt in to automatically renew at any time. The renewal price is available 30 days before the expiry of existing reservation. When you enable renewal more than 30 days before the reservation expiration, you're sent an email detailing renewal costs 30 days before expiration. The reservation price might change between the time that you lock the renewal price and the renewal time. If so, your renewal will not be processed and you can purchase a new reservation in order to continue getting the benefit.
There's no obligation to renew and you can opt out of the renewal at any time before the existing reservation expires.
data-factory Continuous Integration Delivery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/continuous-integration-delivery.md
In Azure Data Factory, continuous integration and delivery (CI/CD) means moving
## CI/CD lifecycle
-[!NOTE] See also: [Continuous deployment improvements](continuous-integration-delivery-improvements.md#continuous-deployment-improvements)
+> [!NOTE]
+> For more information, see [Continuous deployment improvements](continuous-integration-delivery-improvements.md#continuous-deployment-improvements).
Below is a sample overview of the CI/CD lifecycle in an Azure data factory that's configured with Azure Repos Git. For more information on how to configure a Git repository, see [Source control in Azure Data Factory](source-control.md).
data-lake-store Data Lake Store Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-overview.md
Data Lake Storage Gen1 also provides enterprise-grade security for the stored da
Data Lake Storage Gen1 can store any data in its native format, without requiring any prior transformations. Data Lake Storage Gen1 does not require a schema to be defined before the data is loaded, leaving it up to the individual analytic framework to interpret the data and define a schema at the time of the analysis. The ability to store files of arbitrary sizes and formats makes it possible for Data Lake Storage Gen1 to handle structured, semi-structured, and unstructured data.
-Data Lake Storage Gen1 containers for data are essentially folders and files. You operate on the stored data using SDKs, the Azure portal, and Azure Powershell. If you put your data into the store using these interfaces and using the appropriate containers, you can store any type of data. Data Lake Storage Gen1 does not perform any special handling of data based on the type of data it stores.
+Data Lake Storage Gen1 containers for data are essentially folders and files. You operate on the stored data using SDKs, the Azure portal, and Azure PowerShell. If you put your data into the store using these interfaces and using the appropriate containers, you can store any type of data. Data Lake Storage Gen1 does not perform any special handling of data based on the type of data it stores.
## <a name="DataLakeStoreSecurity"></a>Securing data
data-lake-store Data Lake Store Performance Tuning Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-performance-tuning-guidance.md
After you've addressed the source hardware and network connectivity bottlenecks,
| Tool | Settings | More Details | |--|||
-| Powershell | PerFileThreadCount, ConcurrentFileCount | [Link](./data-lake-store-get-started-powershell.md) |
+| PowerShell | PerFileThreadCount, ConcurrentFileCount | [Link](./data-lake-store-get-started-powershell.md) |
| AdlCopy | Azure Data Lake Analytics units | [Link](./data-lake-store-copy-data-azure-storage-blob.md#performance-considerations-for-using-adlcopy) | | DistCp | -m (mapper) | [Link](./data-lake-store-copy-data-wasb-distcp.md#performance-considerations-while-using-distcp) | | Azure Data Factory| parallelCopies | [Link](../data-factory/copy-activity-performance.md) |
data-share Samples Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-share/samples-powershell.md
Last updated 01/03/2022
The following table includes links to sample Azure PowerShell scripts for Azure Data Share.
-|Powershell Samples|Description|
+|PowerShell Samples|Description|
||| |[Create a new data share account](scripts/powershell/create-new-share-account-powershell.md)| This PowerShell script creates a new data share account. | |[Create a new data share](scripts/powershell/create-new-share-powershell.md)| This PowerShell script creates a new data share. |
databox-online Azure Stack Edge Gpu Manage Edge Resource Groups Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-manage-edge-resource-groups-portal.md
Follow these steps to view the Edge resource groups for the current subscription
![Screenshot of the Resources view for virtual machines on an Azure Stack Edge device. The Edge Resource groups tab is shown and highlighted.](media/azure-stack-edge-gpu-manage-edge-resource-groups-portal/edge-resource-groups-01.png) > [!NOTE]
- > You can get the same listing by using [Get-AzResource](/powershell/module/az.resources/get-azresource?view=azps-6.1.0&preserve-view=true) in Azure Powershell after you set up the Azure Resource Manager environment on your device. For more information, see [Connect to Azure Resource Manager](azure-stack-edge-gpu-connect-resource-manager.md).
+ > You can get the same listing by using [Get-AzResource](/powershell/module/az.resources/get-azresource?view=azps-6.1.0&preserve-view=true) in Azure PowerShell after you set up the Azure Resource Manager environment on your device. For more information, see [Connect to Azure Resource Manager](azure-stack-edge-gpu-connect-resource-manager.md).
## Delete an Edge resource group
defender-for-cloud Quickstart Onboard Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-aws.md
Title: Connect your AWS account to Microsoft Defender for Cloud description: Defend your AWS resources with Microsoft Defender for Cloud Previously updated : 12/14/2021 Last updated : 02/21/2022 zone_pivot_groups: connect-aws-accounts
Microsoft Defender for Cloud protects workloads in Azure, Amazon Web Services (A
To protect your AWS-based resources, you can connect an account with one of two mechanisms: -- **Classic cloud connectors experience** - As part of the initial multi-cloud offering, we introduced these cloud connectors as a way to connect your AWS and GCP accounts. If you've already configured an AWS connector through the classic cloud connectors experience, we recommend deleting these connectors (as explained in [Remove classic connectors](#remove-classic-connectors)), and connecting the account again using the newer mechanism. If you don't do this before creating the new connector through the environment settings page, do so afterwards to avoid seeing duplicate recommendations.
+- **Classic cloud connectors experience** - As part of the initial multi-cloud offering, we introduced these cloud connectors as a way to connect your AWS accounts and GCP projects. If you've already configured an AWS connector through the classic cloud connectors experience, we recommend deleting these connectors (as explained in [Remove classic connectors](#remove-classic-connectors)), and connecting the account again using the newer mechanism. If you don't do this before creating the new connector through the environment settings page, do so afterwards to avoid seeing duplicate recommendations.
- **Environment settings page (in preview)** (recommended) - This preview page provides a greatly improved, simpler, onboarding experience (including auto provisioning). This mechanism also extends Defender for Cloud's enhanced security features to your AWS resources: - **Defender for Cloud's CSPM features** extend to your AWS resources. This agentless plan assesses your AWS resources according to AWS-specific security recommendations and these are included in your secure score. The resources will also be assessed for compliance with built-in standards specific to AWS (AWS CIS, AWS PCI DSS, and AWS Foundational Security Best Practices). Defender for Cloud's [asset inventory page](asset-inventory.md) is a multi-cloud enabled feature helping you manage your AWS resources alongside your Azure resources. - **Microsoft Defender for Containers** extends Defender for Cloud's container threat detection and advanced defenses to your **Amazon EKS clusters**.
- - **Microsoft Defender for servers** brings threat detection and advanced defenses to your Windows and Linux EC2 instances. This plan includes the integrated license for Microsoft Defender for Endpoint, security baselines and OS level assessments, vulnerability assessment scanning, adaptive application controls (AAC), file integrity monitoring (FIM), and more.
+ - **Microsoft Defender for servers** brings threat detection and advanced defenses to your Windows and Linux EC2 instances. This plan includes the integrated license for Microsoft Defender for Endpoint, security baselines and OS level assessments, vulnerability assessment scanning, adaptive application controls (AAC), file integrity monitoring (FIM), and more. You can view the full list of available features in the [Supported features for virtual machines and servers](supported-machines-endpoint-solutions-clouds.md?tabs=tab/features-multi-cloud) table.
For a reference list of all the recommendations Defender for Cloud can provide for AWS resources, see [Security recommendations for AWS resources - a reference guide](recommendations-reference-aws.md).
This screenshot shows AWS accounts displayed in Defender for Cloud's [overview d
## Prerequisites -- To connect an AWS account to your Azure subscription, you'll obviously need access to an AWS account.
+- Access to an AWS account.
- **To enable the Defender for Kubernetes plan**, you'll need: - At least one Amazon EKS cluster with permission to access to the EKS K8s API server. If you need to create a new EKS cluster, follow the instructions in [Getting started with Amazon EKS ΓÇô eksctl](https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html). - The resource capacity to create a new SQS queue, Kinesis Fire Hose delivery stream, and S3 bucket in the cluster's region. - **To enable the Defender for servers plan**, you'll need:
- - Microsoft Defender for servers enabled (see [Quickstart: Enable enhanced security features](enable-enhanced-security.md).
- - An active AWS account with EC2 instances managed by AWS Systems Manager (SSM) and using SSM agent. Some Amazon Machine Images (AMIs) have the SSM agent pre-installed, their AMIs are listed in [AMIs with SSM Agent preinstalled](https://docs.aws.amazon.com/systems-manager/latest/userguide/ssm-agent-technical-details.html#ami-preinstalled-agent). If your EC2 instances don't have the SSM Agent, follow the relevant instructions from Amazon:
- - [Install SSM Agent for a hybrid environment (Windows)](https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-install-managed-win.html)
- - [Install SSM Agent for a hybrid environment (Linux)](https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-install-managed-linux.html)
-
+ - Microsoft Defender for servers enabled on your subscription. Learn how to enable plans in the [Enable enhanced security features](enable-enhanced-security.md) article.
+ - An active AWS account, with EC2 instances.
+ - Azure Arc for servers installed on your EC2 instances.
+ - (Recommended) Use the auto provisioning process to install Azure Arc on all of your existing, and future EC2 instances managed by AWS Systems Manager (SSM) and using the SSM agent. Some Amazon Machine Images (AMIs) already have the SSM agent pre-installed. If that is the case, their AMI's are listed in [AMIs with SSM Agent preinstalled](https://docs.aws.amazon.com/systems-manager/latest/userguide/ssm-agent-technical-details.html#ami-preinstalled-agent). If your EC2 instances don't have the SSM Agent, you will need to install it using either of the following relevant instructions from Amazon:
+ - [Install SSM Agent for a hybrid environment (Windows)](https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-install-managed-win.html)
+ - [Install SSM Agent for a hybrid environment (Linux)](https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-install-managed-linux.html)
+ - To manually install Azure Arc on your existing and future EC2 instances, follow the instructions in the [EC2 instances should be connected to Azure Arc](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/231dee23-84db-44d2-bd9d-c32fbcfb42a3) recommendation.
+ - Additional extensions should be enabled on the Arc-connected machines. These extensions are currently configured at the subscription level, so all multicloud accounts and projects (from both AWS and GCP) under the same subscription inherit the subscription settings for these components.
+ - Microsoft Defender for Endpoint
+ - VA solution (TVM/ Qualys)
+ - Log Analytics (LA) agent on Arc machines. Ensure the selected workspace has a security solution installed.
+
+> [!NOTE]
+> Without the Arc agent, you will be unable to take advantage of the full value of Defender for Servers. The Arc agent can also be installed manually, instead of by the auto-provisioning process.
## Connect your AWS account
Follow the steps below to create your AWS cloud connector.
If you have any existing connectors created with the classic cloud connectors experience, remove them first:
-1. From Defender for Cloud's menu, open **Environment settings** and select the option to switch back to the classic connectors experience.
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Navigate to **Defender for Cloud** > **Environment settings**.
- :::image type="content" source="media/quickstart-onboard-gcp/classic-connectors-experience.png" alt-text="Switching back to the classic cloud connectors experience in Defender for Cloud.":::From Defender for Cloud's menu, open **Environment settings**.
+1. Select the option to switch back to the classic connectors experience.
+
+ :::image type="content" source="media/quickstart-onboard-gcp/classic-connectors-experience.png" alt-text="Switching back to the classic cloud connectors experience in Defender for Cloud.":::
-1. For each connector, select the “…” at the end of the row, and select **Delete**.
+1. For each connector, select the three dot button **…** at the end of the row, and select **Delete**.
-1. On AWS, delete the role ARN or the credentials created for the integration.
+1. On AWS, delete the role ARN or the credentials created for the integration.
### Create a new connector
-1. From Defender for Cloud's menu, open **Environment settings**.
+Ensure that all relevant prerequisites are met in order to use all of the available capabilities of Defender for Servers on AWS.
+The Defender for Servers plan must also be enabled on the subscription.
+
+Deploy Azure Arc on your EC2 instances to use as the vehicle to Azure. You can deploy Azure Arc on your EC2 instances in three different ways:
+- (Recommended) Use the Defender for Servers Arc auto-provisioning process. Azure Arc is enabled by default in the onboarding process. The process requires owner permissions on the subscription.
+- Manual installation through Arc for servers.
+- Through a recommendation, which will appear on the Microsoft Defender for Cloud's Recommendations page.
+
+Additional extensions should be enabled on Arc-connected machines. These extensions are currently configured at the subscription level and are applied to all multicloud accounts and projects (from both AWS and GCP) under that subscription:
+ - Microsoft Defender for Endpoint
+ - VA solution (TVM/ Qualys)
+ - LA agent on Arc machines (Ensure that the selected workspace has the security solution installed)
+
+**To create a new connector**:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Navigate to **Defender for Cloud** > **Environment settings**.
 1. Select **Add environment** > **Amazon Web Services**.

    :::image type="content" source="media/quickstart-onboard-aws/add-aws-account-environment-settings.png" alt-text="Connecting an AWS account to an Azure subscription.":::
-1. Enter the details of the AWS account, including the location where you'll store the connector resource, and select **Next: Select plans**.
+1. Enter the details of the AWS account, including the location where you'll store the connector resource.
:::image type="content" source="media/quickstart-onboard-aws/add-aws-account-details.png" alt-text="Step 1 of the add AWS account wizard: Enter the account details.":::
-1. The select plans tab is where you choose which Defender for Cloud capabilities to enable for this AWS account.
+1. Select **Next: Select plans**.
> [!NOTE]
- > Each capability has its own requirements for permissions and might incur charges.
+ > Each plan has its own requirements for permissions and might incur charges.
:::image type="content" source="media/quickstart-onboard-aws/add-aws-account-plans-selection.png" alt-text="The select plans tab is where you choose which Defender for Cloud capabilities to enable for this AWS account.":::

> [!IMPORTANT]
- > To present the current status of your recommendations, the CSPM plan queries the AWS resource APIs several times a day. These read-only API calls incur no charges, but they *are* registered in CloudTrail if you've enabled a trail for read events. As explained in [the AWS documentation](https://aws.amazon.com/cloudtrail/pricing/), there are no additional charges for keeping one trail. If you're exporting the data out of AWS (for example, to an external SIEM), this increased volume of calls might also increase ingestion costs. In such cases, We recommend filtering out the read-only calls from the Defender for Cloud user or role ARN: arn:aws:iam::[accountId]:role/CspmMonitorAws (this is the default role name, confirm the role name configured on your account).
+ > To present the current status of your recommendations, the CSPM plan queries the AWS resource APIs several times a day. These read-only API calls incur no charges, but they *are* registered in CloudTrail if you've enabled a trail for read events. As explained in [the AWS documentation](https://aws.amazon.com/cloudtrail/pricing/), there are no additional charges for keeping one trail. If you're exporting the data out of AWS (for example, to an external SIEM), this increased volume of calls might also increase ingestion costs. In such cases, we recommend filtering out the read-only calls from the Defender for Cloud user or role ARN: `arn:aws:iam::[accountId]:role/CspmMonitorAws` (this is the default role name; confirm the role name configured on your account).
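The read-only filtering described in the note above can be sketched as a small filter over CloudTrail event records. This is an illustrative Python sketch, not an official tool; `readOnly` and `userIdentity.arn` are standard CloudTrail event attributes, and the role-name fragment assumes the default `CspmMonitorAws` name.

```python
# Illustrative sketch: decide whether a CloudTrail event should be exported
# to an external SIEM, filtering out read-only calls made by the Defender
# for Cloud CSPM role. "CspmMonitorAws" is the default role name -- confirm
# the role name configured on your account.

CSPM_ROLE_NAME = "CspmMonitorAws"

def should_export(event: dict) -> bool:
    """Return False for read-only calls made by the CSPM monitoring role."""
    is_read_only = str(event.get("readOnly", "")).lower() == "true"
    caller_arn = event.get("userIdentity", {}).get("arn", "")
    if is_read_only and CSPM_ROLE_NAME in caller_arn:
        return False
    return True

sample_events = [
    {"readOnly": "true",
     "userIdentity": {"arn": "arn:aws:sts::123456789012:assumed-role/CspmMonitorAws/session"}},
    {"readOnly": "false",
     "userIdentity": {"arn": "arn:aws:iam::123456789012:user/admin"}},
]
exported = [e for e in sample_events if should_export(e)]
```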
+
+1. By default, the **Servers** plan is set to **On**. This is necessary to extend Defender for Servers' coverage to your AWS EC2 instances.
+
+ - (Optional) Select **Configure**, to edit the configuration as required.
- - To extend Defender for Servers coverage to your AWS EC2, set the **Servers** plan to **On** and edit the configuration as required.
+1. By default, the **Containers** plan is set to **On**. This is necessary for Defender for Kubernetes to protect your AWS EKS clusters.
- - For Defender for Kubernetes to protect your AWS EKS clusters, Azure Arc-enabled Kubernetes and the Defender extension should be installed. Set the **Containers** plan to **On**, and use the dedicated Defender for Cloud recommendation to deploy the extension (and Arc, if necessary) as explained in [Protect Amazon Elastic Kubernetes Service clusters](defender-for-kubernetes-introduction.md#protect-amazon-elastic-kubernetes-service-clusters).
+ > [!Note]
+ > Azure Arc-enabled Kubernetes and the Defender extension should be installed. Use the dedicated Defender for Cloud recommendation to deploy the extension (and Arc, if necessary) as explained in [Protect Amazon Elastic Kubernetes Service clusters](defender-for-kubernetes-introduction.md#protect-amazon-elastic-kubernetes-service-clusters).
-1. Complete the setup:
- 1. Select **Next: Configure access**.
- 1. Download the CloudFormation template.
- 1. Using the downloaded CloudFormation template, create the stack in AWS as instructed on screen.
- 1. Select **Next: Review and generate**.
- 1. Select **Create**.
+1. Select **Next: Configure access**.
+
+1. Download the CloudFormation template.
+
+1. Using the downloaded CloudFormation template, create the stack in AWS as instructed on screen.
+
+1. Select **Next: Review and generate**.
+
+1. Select **Create**.
Defender for Cloud will immediately start scanning your AWS resources and you'll see security recommendations within a few hours. For a reference list of all the recommendations Defender for Cloud can provide for AWS resources, see [Security recommendations for AWS resources - a reference guide](recommendations-reference-aws.md).
For other operating systems, the SSM Agent should be installed manually using th
Connecting your AWS account is part of the multi-cloud experience available in Microsoft Defender for Cloud. For related information, see the following page:

- [Security recommendations for AWS resources - a reference guide](recommendations-reference-aws.md).
-- [Connect your GCP accounts to Microsoft Defender for Cloud](quickstart-onboard-gcp.md)
+- [Connect your GCP projects to Microsoft Defender for Cloud](quickstart-onboard-gcp.md)
defender-for-cloud Quickstart Onboard Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-gcp.md
Title: Connect your GCP account to Microsoft Defender for Cloud
+ Title: Connect your GCP project to Microsoft Defender for Cloud
description: Monitoring your GCP resources from Microsoft Defender for Cloud Previously updated : 11/09/2021 Last updated : 02/22/2022
+zone_pivot_groups: connect-gcp-accounts
-# Connect your GCP accounts to Microsoft Defender for Cloud
+# Connect your GCP projects to Microsoft Defender for Cloud
[!INCLUDE [Banner for top of topics](./includes/banner.md)]
With cloud workloads commonly spanning multiple cloud platforms, cloud security
Microsoft Defender for Cloud protects workloads in Azure, Amazon Web Services (AWS), and Google Cloud Platform (GCP).
-Adding a GCP account to an Azure subscription connects Defender for Cloud with GCP Security Command. Defender for Cloud can then protect your resources across both of these cloud environments and provide:
+To protect your GCP-based resources, you can connect an account in two different ways:
-- Detection of security misconfigurations
-- A single view showing Defender for Cloud recommendations and GCP Security Command Center findings
-- Incorporation of your GCP resources into Defender for Cloud's secure score calculations
-- Integration of GCP Security Command Center recommendations based on the CIS standard into the Defender for Cloud's regulatory compliance dashboard
+- **Classic cloud connectors experience** - As part of the initial multi-cloud offering, we introduced these cloud connectors as a way to connect your AWS and GCP projects.
-> [!IMPORTANT]
-> At Ignite Fall 2021, we announced an updated way of connecting your accounts from other cloud providers. This uses the new **Environment settings** page. GCP accounts aren't supported from that page. To connect a GCP account to your Azure subscription, you'll need to use the classic cloud connectors experience as described below.
+- **Environment settings page** (Recommended) - This page provides the onboarding experience (including auto provisioning). This mechanism also extends Defender for Cloud's enhanced security features to your GCP resources:
+
+ - **Defender for Cloud's CSPM features** extend to your GCP resources. This agentless plan assesses your GCP resources according to GCP-specific security recommendations, and these are included in your secure score. The resources are also assessed for compliance with built-in standards specific to GCP. Defender for Cloud's [asset inventory page](asset-inventory.md) is a multi-cloud enabled feature that helps you manage your GCP resources alongside your Azure resources.
+ - **Microsoft Defender for servers** brings threat detection and advanced defenses to your GCP VM instances. This plan includes the integrated license for Microsoft Defender for Endpoint, security baselines and OS-level assessments, vulnerability assessment scanning, adaptive application controls (AAC), file integrity monitoring (FIM), and more. You can view the full list of available features in the [Supported features for virtual machines and servers table](supported-machines-endpoint-solutions-clouds.md).
:::image type="content" source="./media/quickstart-onboard-gcp/gcp-account-in-overview.png" alt-text="Screenshot of GCP projects shown in Microsoft Defender for Cloud's overview dashboard." lightbox="./media/quickstart-onboard-gcp/gcp-account-in-overview.png":::
+## Availability
+
+|Aspect|Details|
+|-|:-|
+| Release state: | Preview <br> The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to the Azure features that are in beta, preview, or otherwise not yet released into general availability. |
+|Pricing:|The **CSPM plan** is free.<br> The **Defender for servers** plan is billed at the same price as the [Microsoft Defender for servers](defender-for-servers-introduction.md) plan for Azure machines. If a GCP VM instance doesn't have the Azure Arc agent deployed, you won't be charged for that machine.|
+|Required roles and permissions:| **Contributor** on the relevant Azure Subscription|
+|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet, Other Gov)|
+|||
+
+## Remove 'classic' connectors
+
+If you have any existing connectors created with the classic cloud connectors experience, remove them first:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Navigate to **Defender for Cloud** > **Environment settings**.
+
+1. Select the option to switch back to the classic connectors experience.
+
+ :::image type="content" source="media/quickstart-onboard-gcp/classic-connectors-experience.png" alt-text="Switching back to the classic cloud connectors experience in Defender for Cloud.":::
+
+1. For each connector, select the three-dot button (**…**) at the end of the row, and select **Delete**.
+
+## Connect your GCP projects
+
+When connecting your GCP projects to specific Azure subscriptions, consider the [Google Cloud resource hierarchy](https://cloud.google.com/resource-manager/docs/cloud-platform-resource-hierarchy#resource-hierarchy-detail) and these guidelines:
+
+- You can connect your GCP projects to Microsoft Defender for Cloud on the project level.
+- You can connect multiple projects to one Azure subscription.
+- You can connect multiple projects to multiple Azure subscriptions.
+
+Follow the steps below to create your GCP cloud connector.
+
+**To connect your GCP project**:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Navigate to **Defender for Cloud** > **Environment settings**.
+
+1. Select **+ Add environment**.
+
+1. Select **Google Cloud Platform**.
+
+ :::image type="content" source="media/quickstart-onboard-gcp/google-cloud.png" border="false" alt-text="Screenshot of the location of the Google cloud environment button.":::
+
+1. Enter all relevant information.
+
+ :::image type="content" source="media/quickstart-onboard-gcp/create-connector.png" alt-text="Screenshot of the Create GCP connector page where you need to enter all relevant information.":::
+
+1. Select **Next: Select plans**.
+
+1. Toggle the plans you want to connect to **On**. By default, all necessary prerequisites and components will be provisioned. (Optional) Learn how to [configure each plan](#optional-configure-selected-plans).
+
+1. Select **Next: Configure access**.
+
+1. Select **Copy**.
+
+ :::image type="content" source="media/quickstart-onboard-gcp/copy-button.png" alt-text="Screenshot showing the location of the copy button.":::
+
+1. Select **GCP Cloud Shell >**.
+
+1. The GCP Cloud Shell will open.
+
+1. Paste the script into the Cloud Shell terminal and run it.
+
+1. Ensure that the following resources were created:
+
+ - CSPM service account reader role
+ - MDFC identity federation
+ - CSPM identity pool
+ - *Microsoft Defender for Servers* service account (when the servers plan is enabled)
+ - *Azure-Arc for servers onboarding* service account (when the Arc for servers auto-provisioning is enabled)
+
+1. (**Servers only**) When Arc auto-provisioning is enabled, copy the unique numeric ID presented at the end of the Cloud Shell script.
+
+ :::image type="content" source="media/quickstart-onboard-gcp/powershell-unique-id.png" alt-text="Screenshot showing the unique numeric id to be copied." lightbox="media/quickstart-onboard-gcp/powershell-unique-id-expanded.png":::
+
+ To locate the unique numeric ID in the GCP portal, navigate to **IAM & Admin** > **Service Accounts**. In the Name column, locate `Azure-Arc for servers onboarding` and copy the unique numeric ID (OAuth 2 Client ID).
+
+1. Navigate back to the Microsoft Defender for Cloud portal.
+
+1. (Optional) If you changed the names of any of the resources, update the names in the appropriate fields.
+
+1. (**Servers only**) Select **Azure-Arc for servers onboarding**.
+
+ :::image type="content" source="media/quickstart-onboard-gcp/unique-numeric-id.png" alt-text="Screenshot showing the Azure-Arc for servers onboarding section of the screen.":::
+
+ Enter the service account unique ID, which is generated automatically after running the GCP Cloud Shell.
+
+1. Select **Next: Review and generate >**.
+
+1. Ensure the information presented is correct.
+
+1. Select **Create**.
+
+After creating a connector, a scan will start on your GCP environment. New recommendations will appear in Defender for Cloud after up to 6 hours. If you enabled agent auto-provisioning, Arc agent installation will occur automatically for each new resource detected.
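The resource checklist from the steps above can be expressed as a quick sanity check. This is a hypothetical Python sketch; the resource names follow the defaults listed in the steps, so adjust them if you renamed anything.

```python
# Hypothetical sketch: verify the Cloud Shell script created the expected
# resources. Names follow the defaults listed in the onboarding steps;
# adjust them if you renamed any resource.

BASE_RESOURCES = {
    "CSPM service account reader role",
    "MDFC identity federation",
    "CSPM identity pool",
}

SERVERS_RESOURCES = {
    "Microsoft Defender for Servers service account",
    "Azure-Arc for servers onboarding service account",
}

def missing_resources(created, servers_plan_enabled=False):
    """Return the expected resources that are not present in `created`."""
    required = set(BASE_RESOURCES)
    if servers_plan_enabled:
        required |= SERVERS_RESOURCES
    return required - set(created)
```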
+
+## (Optional) Configure selected plans
+
+By default, all plans are toggled to **On** on the Select plans screen.
++
+### Configure the servers plan
+
+Connect your GCP VM instances to Azure Arc to get full visibility into Microsoft Defender for Servers security content.
+
+Microsoft Defender for Servers brings threat detection and advanced defenses to your GCP VM instances.
+To get full visibility into the Defender for Servers security content, ensure you have the following requirements configured:
+
+- Microsoft Defender for servers enabled on your subscription. Learn how to enable plans in the [Enable enhanced security features](enable-enhanced-security.md) article.
+
+- Azure Arc for servers installed on your VM instances.
+ - **(Recommended) Auto-provisioning** - Auto-provisioning is enabled by default in the onboarding process and requires owner permissions on the subscription. The Arc auto-provisioning process uses the OS Config agent on the GCP side. For more information on Arc auto-provisioning, see [OS Config agent on GCP machines](https://cloud.google.com/compute/docs/images/os-details#vm-manager).
+ - **Manual installation** - You can manually connect your VM instances to Azure Arc for servers. Instances in projects with the Defender for Servers plan enabled that aren't connected to Arc are surfaced by the recommendation "GCP VM instances should be connected to Azure Arc". Use the **Fix** option offered in this recommendation to install Azure Arc on the selected machines.
+
+- The following extensions should be enabled on the Arc-connected machines according to your needs:
+ - Microsoft Defender for Endpoint
+ - VA solution (TVM/ Qualys)
+ - Log Analytics (LA) agent on Arc machines. Ensure the selected workspace has a security solution installed.
+
+ These extensions are currently configured as auto-provisioning settings on the subscription level. All GCP projects and AWS accounts under this subscription will inherit the subscription settings. Learn how to [configure auto-provisioning on your subscription](enable-data-collection.md#configure-auto-provisioning-for-agents-and-extensions-from-microsoft-defender-for-cloud).
+
+**To configure the Servers plan**:
+
+1. Follow the steps to [Connect your GCP projects](#connect-your-gcp-projects).
+
+1. On the Select plans screen, select **View configuration**.
+
+ :::image type="content" source="media/quickstart-onboard-gcp/view-configuration.png" alt-text="Screenshot showing where to click to configure the Servers plan.":::
+
+1. On the Auto provisioning screen, toggle the switches **On** or **Off** depending on your needs.
+
+ :::image type="content" source="media/quickstart-onboard-gcp/auto-provision-screen.png" alt-text="Screenshot showing the toggle switches for the Servers plan.":::
+
+ > [!Note]
+ > If Azure Arc is toggled **Off**, you will need to follow the manual installation process mentioned above.
+
+1. Select **Save**.
## Availability

|Aspect|Details|
Adding a GCP account to an Azure subscription connects Defender for Cloud with G
|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet)| |||
-## Connect your GCP account
+## Connect your GCP project
Create a connector for every organization you want to monitor from Defender for Cloud.
-When connecting your GCP accounts to specific Azure subscriptions, consider the [Google Cloud resource hierarchy](https://cloud.google.com/resource-manager/docs/cloud-platform-resource-hierarchy#resource-hierarchy-detail) and these guidelines:
+When connecting your GCP projects to specific Azure subscriptions, consider the [Google Cloud resource hierarchy](https://cloud.google.com/resource-manager/docs/cloud-platform-resource-hierarchy#resource-hierarchy-detail) and these guidelines:
-- You can connect your GCP accounts to Defender for Cloud in the *organization* level
+- You can connect your GCP projects to Defender for Cloud in the *organization* level
- You can connect multiple organizations to one Azure subscription
- You can connect multiple organizations to multiple Azure subscriptions
- When you connect an organization, all *projects* within that organization are added to Defender for Cloud
Learn more about the [Security Command Center API](https://cloud.google.com/secu
:::image type="content" source="media/quickstart-onboard-gcp/classic-connectors-experience.png" alt-text="Switching back to the classic cloud connectors experience in Defender for Cloud.":::
-1. Select add GCP account.
+1. Select add GCP project.
1. In the onboarding page, do the following and then select **Next**.
    1. Validate the chosen subscription.
    1. In the **Display name** field, enter a display name for the connector.
When the connector is successfully created and GCP Security Command Center has b
- Security recommendations for your GCP resources will appear in the Defender for Cloud portal and the regulatory compliance dashboard 5-10 minutes after onboarding completes:

    :::image type="content" source="./media/quickstart-onboard-gcp/gcp-resources-in-recommendations.png" alt-text="GCP resources and recommendations in Defender for Cloud's recommendations page":::
-## Monitoring your GCP resources
+## Monitor your GCP resources
As shown above, Microsoft Defender for Cloud's security recommendations page displays your GCP resources together with your Azure and AWS resources for a true multi-cloud view.
To view all the active recommendations for your resources by resource type, use
:::image type="content" source="./media/quickstart-onboard-gcp/gcp-resource-types-in-inventory.png" alt-text="Asset inventory page's resource type filter showing the GCP options":::
-## FAQ - Connecting GCP accounts to Microsoft Defender for Cloud
-
-### Can I connect multiple GCP organizations to Defender for Cloud?
-Yes. Defender for Cloud's GCP connector connects your Google Cloud resources at the *organization* level.
-
-Create a connector for every GCP organization you want to monitor from Defender for Cloud. When you connect an organization, all projects within that organization are added to Defender for Cloud.
-
-Learn about the Google Cloud resource hierarchy in [Google's online docs](https://cloud.google.com/resource-manager/docs/cloud-platform-resource-hierarchy).
-
+## FAQ - Connecting GCP projects to Microsoft Defender for Cloud
### Is there an API for connecting my GCP resources to Defender for Cloud?
-Yes. To create, edit, or delete Defender for Cloud cloud connectors with a REST API, see the details of the [Connectors API](/rest/api/securitycenter/connectors).
+Yes. To create, edit, or delete Defender for Cloud cloud connectors with a REST API, see the details of the [Connectors API](/rest/api/securitycenter/security-connectors).
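As a rough illustration of calling that API, the sketch below composes a management-plane request URL for a security connector resource. The resource path and `api-version` shown are assumptions; verify them against the Connectors API reference before use.

```python
# Assumption-laden sketch: build a management-plane URL for a Defender for
# Cloud security connector. The resource path and api-version below are
# assumptions -- verify them against the Connectors API reference.

def connector_url(subscription_id: str, resource_group: str,
                  connector_name: str,
                  api_version: str = "2021-07-01-preview") -> str:
    """Compose the REST URL for a securityConnectors resource."""
    return (
        "https://management.azure.com"
        f"/subscriptions/{subscription_id}"
        f"/resourceGroups/{resource_group}"
        "/providers/Microsoft.Security"
        f"/securityConnectors/{connector_name}"
        f"?api-version={api_version}"
    )
```

A GET, PUT, or DELETE request against this URL (with a bearer token for Azure Resource Manager) would then read, create, or remove the connector.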
## Next steps
-Connecting your GCP account is part of the multi-cloud experience available in Microsoft Defender for Cloud. For related information, see the following page:
+Connecting your GCP project is part of the multi-cloud experience available in Microsoft Defender for Cloud. For related information, see the following page:
- [Connect your AWS accounts to Microsoft Defender for Cloud](quickstart-onboard-aws.md)
- [Google Cloud resource hierarchy](https://cloud.google.com/resource-manager/docs/cloud-platform-resource-hierarchy)
-Learn about the Google Cloud resource hierarchy in Google's online docs
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Title: Release notes for Microsoft Defender for Cloud description: A description of what's new and changed in Microsoft Defender for Cloud Previously updated : 02/17/2022 Last updated : 02/22/2022 # What's new in Microsoft Defender for Cloud?
To learn about *planned* changes that are coming soon to Defender for Cloud, see
Updates in February include: - [Kubernetes workload protection for Arc enabled K8s clusters](#kubernetes-workload-protection-for-arc-enabled-k8s-clusters)
+- [Native CSPM for GCP and threat protection for GCP compute instances](#native-cspm-for-gcp-and-threat-protection-for-gcp-compute-instances)
### Kubernetes workload protection for Arc enabled K8s clusters
Defender for Containers for Kubernetes workloads previously only protected AKS.
Learn how to [set up your Kubernetes workload protection](kubernetes-workload-protections.md#set-up-your-workload-protection) for AKS and Azure Arc enabled Kubernetes clusters.
+### Native CSPM for GCP and threat protection for GCP compute instances
+
+The new automated onboarding of GCP environments allows you to protect GCP workloads with Microsoft Defender for Cloud. Defender for Cloud protects your resources with the following plans:
+
+- **Defender for Cloud's CSPM** features extend to your GCP resources. This agentless plan assesses your GCP resources according to the GCP-specific security recommendations that are provided with Defender for Cloud. GCP recommendations are included in your secure score, and the resources are assessed for compliance with the built-in GCP CIS standard. Defender for Cloud's asset inventory page is a multi-cloud enabled feature that helps you manage your resources across Azure, AWS, and GCP.
+
+- **Microsoft Defender for servers** brings threat detection and advanced defenses to your GCP compute instances. This plan includes the integrated license for Microsoft Defender for Endpoint, vulnerability assessment scanning, and more.
+
+ For a full list of available features, see [Supported features for virtual machines and servers](supported-machines-endpoint-solutions-clouds.md). Automatic onboarding capabilities allow you to easily connect any existing and new compute instances discovered in your environment.
+
+Learn how to protect and [connect your GCP projects](quickstart-onboard-gcp.md) with Microsoft Defender for Cloud.
+ ## January 2022 Updates in January include:
defender-for-cloud Supported Machines Endpoint Solutions Clouds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/supported-machines-endpoint-solutions-clouds.md
Title: Microsoft Defender for Cloud's features according to OS, machine type, and cloud description: Learn about the availability of Microsoft Defender for Cloud features according to OS, machine type, and cloud deployment. Previously updated : 02/01/2022 Last updated : 02/10/2022
The **tabs** below show the features of Microsoft Defender for Cloud that are av
### [**Multi-cloud machines**](#tab/features-multi-cloud)
-| **Feature** | **Availability in AWS** |
-|--|::|
-| [Microsoft Defender for Endpoint integration](integration-defender-for-endpoint.md) | ✔ |
-| [Virtual machine behavioral analytics (and security alerts)](alerts-reference.md) | ✔ |
-| [Fileless security alerts](alerts-reference.md#alerts-windows) | ✔ |
-| [Network-based security alerts](other-threat-protections.md#network-layer) | - |
-| [Just-in-time VM access](just-in-time-access-usage.md) | - |
-| [Integrated Qualys vulnerability scanner](deploy-vulnerability-assessment-vm.md#overview-of-the-integrated-vulnerability-scanner) | ✔ |
-| [File integrity monitoring](file-integrity-monitoring-overview.md) | ✔ |
-| [Adaptive application controls](adaptive-application-controls.md) | ✔ |
-| [Network map](protect-network-resources.md#network-map) | - |
-| [Adaptive network hardening](adaptive-network-hardening.md) | - |
-| [Regulatory compliance dashboard & reports](regulatory-compliance-dashboard.md) | ✔ |
-| [Docker host hardening](harden-docker-hosts.md) | ✔ |
-| Missing OS patches assessment | ✔ |
-| Security misconfigurations assessment | ✔ |
-| [Endpoint protection assessment](supported-machines-endpoint-solutions-clouds.md#supported-endpoint-protection-solutions-) | ✔ |
-| Disk encryption assessment | ✔</br>(for [supported scenarios](../virtual-machines/windows/disk-encryption-windows.md#unsupported-scenarios)) |
-| Third-party vulnerability assessment | - |
-| [Network security assessment](protect-network-resources.md) | - |
-| | |
+| **Feature** | **Availability in AWS** | **Availability in GCP** |
+|--|:-:|:-:|
+| [Microsoft Defender for Endpoint integration](integration-defender-for-endpoint.md) | ✔ | ✔ |
+| [Virtual machine behavioral analytics (and security alerts)](alerts-reference.md) | ✔ | ✔ |
+| [Fileless security alerts](alerts-reference.md#alerts-windows) | ✔ | ✔ |
+| [Network-based security alerts](other-threat-protections.md#network-layer) | - | - |
+| [Just-in-time VM access](just-in-time-access-usage.md) | - | - |
+| [Integrated Qualys vulnerability scanner](deploy-vulnerability-assessment-vm.md#overview-of-the-integrated-vulnerability-scanner) | ✔ | ✔ |
+| [File integrity monitoring](file-integrity-monitoring-overview.md) | ✔ | ✔ |
+| [Adaptive application controls](adaptive-application-controls.md) | ✔ | ✔ |
+| [Network map](protect-network-resources.md#network-map) | - | - |
+| [Adaptive network hardening](adaptive-network-hardening.md) | - | - |
+| [Regulatory compliance dashboard & reports](regulatory-compliance-dashboard.md) | ✔ | ✔ |
+| [Docker host hardening](harden-docker-hosts.md) | ✔ | ✔ |
+| Missing OS patches assessment | ✔ | ✔ |
+| Security misconfigurations assessment | ✔ | ✔ |
+| [Endpoint protection assessment](supported-machines-endpoint-solutions-clouds.md#supported-endpoint-protection-solutions-) | ✔ | ✔ |
+| Disk encryption assessment | ✔</br>(for [supported scenarios](../virtual-machines/windows/disk-encryption-windows.md#unsupported-scenarios)) | ✔</br>(for [supported scenarios](../virtual-machines/windows/disk-encryption-windows.md#unsupported-scenarios)) |
+| Third-party vulnerability assessment | - | - |
+| [Network security assessment](protect-network-resources.md) | - | - |
+| | | |
defender-for-cloud Tutorial Security Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/tutorial-security-policy.md
To enable a disabled policy and ensure it's assessed for your resources:
1. Open the **Security policy** page.
-1. From the **Default initiative**, **Industry & regulatory standards**, or **Your custom initiatives** sections, select the relevant initiative with the policy you want to enable.
+1. From the **Default initiative** or **Your custom initiatives** sections, select the relevant initiative with the policy you want to enable.
1. Open the **Parameters** section and search for the policy that invokes the recommendation that you want to enable.
defender-for-iot Concept Agent Portfolio Overview Os Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/concept-agent-portfolio-overview-os-support.md
Title: Agent portfolio overview and OS support (Preview) description: Microsoft Defender for IoT provides a large portfolio of agents based on the device type. Previously updated : 11/09/2021 Last updated : 01/09/2022
Microsoft Defender for IoT provides a large portfolio of agents based on the device type.
-## Standalone agent
+## Standalone and Edge agent
-The standalone agent covers most of the Linux Operating Systems (OS), which can be deployed as a binary package or as a source code that can be incorporated as part of the firmware and allow modification and customization based on customer needs. The following are some examples of supported OS:
+Most Linux operating systems (OS) are covered by both agents. The agents can be deployed as a binary package or as source code that can be incorporated as part of the firmware, allowing you to modify and customize the agents as needed. The following are some examples of supported OS:
| Operating system | AMD64 | ARM32v7 | ARM64 |
-|--|--|--|--|
+|--|--|--|--|
| Debian 9 | ✓ | ✓ | |
-| Ubuntu 18.04 | ✓ | | ✓ |
-| Ubuntu 20.04 | ✓ | | |
+| Debian 10| ✓ | ✓ | |
+| Debian 11| ✓ | ✓ | |
+| Ubuntu 18.04 | ✓ | ✓ | ✓ |
+| Ubuntu 20.04 | ✓ | ✓ | ✓ |
-For a more granular view of the micro agent operating system dependencies, see [Linux dependencies](concept-micro-agent-linux-dependencies.md#linux-dependencies).
+For a more granular view of the micro agent's operating system dependencies, see [Linux dependencies](concept-micro-agent-linux-dependencies.md#linux-dependencies).
-For additional information, supported operating systems, or to request access to the source code so you can incorporate it as a part of the device's firmware, contact your account manager, or send an email to <defender_micro_agent@microsoft.com>.
+For additional information on supported operating systems, or to request access to the source code so you can incorporate it as a part of the device's firmware, contact your account manager, or send an email to <defender_micro_agent@microsoft.com>.
## Azure RTOS micro agent
defender-for-iot Concept Event Aggregation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/concept-event-aggregation.md
Defender for IoT security agents collect data and system events from your local device, and send the data to the Azure cloud for processing and analytics. The Defender for IoT micro agent collects many types of device events, including new processes and new connection events. Both new process and new connection events may occur frequently on a device. This capability is important for comprehensive security; however, the number of messages the security agents send may quickly meet or exceed your IoT Hub quota and cost limits. These messages and events contain highly valuable security information that is crucial to protecting your device.
-To reduce the number of messages, and costs while maintaining your device's security, Defender for IoT agents aggregates the following types of events:
+To reduce the number of messages and costs while maintaining your device's security, Defender for IoT agents aggregate the following types of events:
- ProcessCreate (Linux only)

- Login activity (Linux only)

- Network ConnectionCreate

- System information

- Baseline
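As a rough illustration of why aggregation helps (this sketch is not the agent's implementation), identical event records can be collapsed into a single record with a hit count before they are sent:

```bash
# Illustrative only: collapse duplicate event records into one line with a
# hit count, the way aggregation reduces the number of messages sent upstream.
printf '%s\n' \
  "ProcessCreate:/bin/bash" \
  "ProcessCreate:/bin/bash" \
  "ConnectionCreate:10.0.0.9:443" \
  | sort | uniq -c | sort -rn
```

Here the two identical `ProcessCreate` records are reported once with a count of 2; the agent applies a similar idea per collector when aggregation mode is enabled.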
-Event-based collectors are collectors that are triggered based on corresponding activity from within the device. For example, ``a process was started in the device``.
+Event-based collectors are collectors that are triggered based on corresponding activity from within the device. For example, ``a process was started in the device``.
Trigger-based collectors are collectors that are triggered in a scheduled manner, based on the customer's configuration.
The data collected for each event is:
| **Transport_protocol** | Can be TCP, UDP, or ICMP. |
| **Application protocol** | The application protocol associated with the connection. |
| **Extended properties** | Additional details of the connection. For example, `host name`. |
+| **DNS hit count** | Total hit count of DNS requests |
+
## Login collector (event-based collector)
-The Login collector, collects user logins, logouts, and failed login attempts.
+The Login collector collects user sign-ins, sign-outs, and failed sign-in attempts.
+
+The Login collector supports the following types of collection methods:
+
+- **Syslog**. If syslog is running on the device, the Login collector collects SSH sign-in events via the syslog file named **auth.log**.
-Currently SSH, and Telnet are fully supported.
+- **Pluggable Authentication Modules (PAM)**. Collects SSH, telnet, and local sign-in events. For more information, see [Configure Pluggable Authentication Modules (PAM) to audit sign-in events](configure-pam-to-audit-sign-in-events.md).
The following data is collected:

| Parameter | Description|
|--|--|
-| **operation** | The Login, Logout, or LoginFailed. |
+| **operation** | One of the following: `Login`, `Logout`, `LoginFailed` |
| **process_id** | The Linux PID. | | **user_name** | The Linux user. |
-| **executable** | The terminal device. For example, tty1..6, or pts/n. |
-| **remote_address** | The source of connection, either remote IP in IPv6 or IPv4 format, or 127.0.0.1/0.0.0.0 to indicate local connection. |
+| **executable** | The terminal device. For example, `tty1..6` or `pts/n`. |
+| **remote_address** | The source of connection, either a remote IP address in IPv6 or IPv4 format, or `127.0.0.1/0.0.0.0` to indicate local connection. |
-> [!Note]
-> A login event is captured when a terminal is opened on a device, before the user actually logs in. This event has a TTY process, login event type, and a username. For example, `LOGIN`.
## System information (trigger based collector)
The data collected for each event is:
| **Remediation** | The recommendation for remediation from CIS. |
| **Severity** | The severity level. |
+## SBoM (trigger based)
+
+The SBoM (Software Bill of Materials) collector collects the packages installed on the device periodically.
+
+The data collected on each package includes:
+
+|Parameter |Description |
+|||
+|**Name** | The package name |
+|**Version** | The package version |
+|**Vendor** | The package's vendor, which is the **Maintainer** field in deb packages |
+| | |
+
## Next steps

Check your [Defender for IoT security alerts](concept-security-alerts.md).
defender-for-iot Concept Micro Agent Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/concept-micro-agent-configuration.md
After any change in configuration, the collector will immediately send all unsen
> [!Note]
> Aggregation mode is supported, but it is not configurable.
-## Event-based collectors configurations
+## General configuration
-These configurations include process, and network activity collectors.
+Define the frequency at which messages are sent for each priority level. All values are required.
-| Setting Name | Setting option | Description | Default setting |
-|--|--|--|--|
-| **Interval** | High, Medium, or Low | Define frequency of sending. | Medium |
-| **Aggregation mode** | True, or False | Whether to process event aggregation for an identical event. | True |
-| **Cache size** | cycle FIFO | The number of events collected in between the data is sent. | 256 |
-| **Disable collector** | True, or False | Whether or not the collector is operational. | False |
+Default values are as follows:
-## Trigger-based collectors configurations
+| Frequency | Time period (in minutes) |
+|--|--|
+| **Low** | 1440 (24 hours) |
+| **Medium** | 120 (2 hours) |
+| **High** | 30 (.5 hours) |
+| | |
-These configurations include system information, and baseline collectors.
+To reduce the number of messages sent to the cloud, each priority should be set as a multiple of the one below it. For example, High: 60 minutes, Medium: 120 minutes, Low: 480 minutes.
+
+The syntax for configuring the frequencies is as follows:
-| Setting Name | Setting option | Description | Default setting |
+`"CollectorsCore_PriorityIntervals"` : `"<High>,<Medium>,<Low>"`
+
+For example:
+
+`"CollectorsCore_PriorityIntervals"` : `"30,120,1440"`
+
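A quick way to sanity-check the values before setting them (a sketch, assuming the `High,Medium,Low` ordering shown above) is to verify that each slower interval is a multiple of the faster one:

```bash
# Sketch: verify Medium is a multiple of High, and Low a multiple of Medium.
# The interval string mirrors the CollectorsCore_PriorityIntervals example above.
intervals="30,120,1440"
IFS=',' read -r high med low <<< "$intervals"
if (( med % high == 0 && low % med == 0 )); then
  echo "intervals OK"
else
  echo "warning: each interval should be a multiple of the next-higher priority" >&2
fi
```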
+## Baseline collector-specific settings
+
+| Setting Name | Setting options | Description | Default |
|--|--|--|--|
-| **Interval** | High, Medium, or Low | The frequency in which data is sent. | Low |
-| **Disable collector** | True, or False | Whether or not the collector is operational. | False |
+| **Baseline_GroupsDisabled** | A list of Baseline group names, separated by a comma. <br><br>For example: `Time Synchronization, Network Parameters Host` | Defines the full list of Baseline group names that should be disabled. | Null |
+| **Baseline_ChecksDisabled** |A list of Baseline check IDs, separated by a comma. <br><br>For example: `3.3.5,2.2.1.1` | Defines the full list of Baseline check IDs that should be disabled. | Null |
+| | | | |
+
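For example, a module twin desired-properties fragment that disables two Baseline groups and two checks might look like the following (the values are illustrative, taken from the examples in the table above):

```json
"Baseline_GroupsDisabled": "Time Synchronization,Network Parameters Host",
"Baseline_ChecksDisabled": "3.3.5,2.2.1.1"
```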
-## Network activity collector specific settings
+## Event-based collector configurations
-| Setting Name | Setting option | Description | Default setting |
+These configurations include process, and network activity collectors.
+
+| Setting Name | Setting options | Description | Default |
|--|--|--|--|
-| Devices | A list of the network devices separated by a comma. For example, "eth0,eth1" | The list of network devices (interfaces) that the agent will use to monitor the traffic. If a network device is not listed, the Network Raw events will not be recorded for the missing device.| "eth0" |
+| **Interval** | `High` <br>`Medium`<br>`Low` | Determines the sending frequency. | `Medium` |
+| **Aggregation mode** | `True` <br>`False` | Determines whether to process event aggregation for an identical event. | `True` |
+| **Cache size** | cycle FIFO | Defines the number of events collected in between the times that data is sent. | `256` |
+| **Disable collector** | `True` <br> `False` | Determines whether or not the collector is operational. | `False` |
+| | | | |
-## General configuration
+## IoT Hub Module-specific settings
-Define the frequency in which messages are sent for each priority level. The default values are listed below:
+| Setting Name | Setting options | Description | Default |
+|--|--|--|--|
+| **IothubModule_MessageTimeout** | Positive integer, including limits | Defines the number of minutes to retain messages in the outbound queue to the IoT Hub, after which point the messages are dropped. | `2880` (=2 days) |
+| | | | |
+## Network activity collector-specific settings
-| Frequency | Time period (in minutes) |
-|--|--|
-| Low | 1440 (24 hours) |
-| Medium | 120 (2 hours) |
-| High | 30 (.5 hours) |
+| Setting Name | Setting options | Description | Default |
+|--|--|--|--|
+| **Devices** | A list of the network devices separated by a comma. <br><br>For example `eth0,eth1` | Defines the list of network devices (interfaces) that the agent will use to monitor the traffic. <br><br>If a network device is not listed, the Network Raw events will not be recorded for the missing device.| `eth0` |
+| | | | |
-To reduce the number of messages sent to cloud, each priority should be set as a multiple of the one below it. For example, High: 60 minutes, Medium: 120 minutes, Low: 480 minutes.
+## Process collector specific-settings
-The syntax for configuring the frequencies is as follows:
-`"CollectorsCore_PriorityIntervals"` : `"<High>,<Medium>,<Low>"`
+| Setting Name | Setting options | Description | Default |
+|--|--|--|--|
+| **Process_Mode** | `1` = Auto <br>`2` = Netlink <br>`3` = Polling | Determines the process collector mode. In `Auto` mode, the agent first tries to enable the Netlink mode. <br><br>If that fails, it automatically falls back to the Polling mode.| `1` |
+|**Process_PollingInterval** |Integer |Defines the polling interval in microseconds. This value is used when the **Process_Mode** is in `Polling` mode. | `100000` (=0.1 second) |
+| | | | |
-For example:
+## Trigger-based collector configurations
-`"CollectorsCore_PriorityIntervals"` : `"30,120,1440"`
+These configurations include system information, and baseline collectors.
+
+| Setting Name | Setting options | Description | Default |
+|--|--|--|--|
+| **Interval** | `High` <br>`Medium`<br>`Low` | The frequency in which data is sent. | `Low` |
+| **Disable collector** | `True` <br> `False` | Whether or not the collector is operational. | `False` |
+| | | | |
-Please note, that you must specify all 3 values.
## Next steps
-Learn about the [Micro agent event collection (Preview)](concept-event-aggregation.md).
+For more information, see [Configure a micro agent twin](how-to-configure-micro-agent-twin.md).
defender-for-iot Concept Security Agent Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/concept-security-agent-authentication.md
- Title: Security agent authentication (Preview)
-description: Perform micro agent authentication with two possible methods.
Previously updated : 11/09/2021---
-# Micro agent authentication methods (Preview)
-
-There are two options for authentication with the Defender for IoT micro agent:
--- Connection string --- Certificate -
-## Authentication using a connection string
-
-In order to use a connection string, you need to add a file that uses the connection string encoded in utf-8 in the Defender for Cloud agent directory in a file named `connection_string.txt`. For example,
-
-```azurecli
-echo "<connection string>" > connection_string.txt
-```
-
-Once you have done that, you should then restart the service using this command.
-
-```azurecli
-sudo systemctl restart defender-iot-micro-agent.service
-```
-
-## Authentication using a certificate
--
-To perform authentication using a certificate:
-
-1. Place the PEM-encoded public part of a certificate into the Defender for Cloud agent directory, in a file called `certificate_public.pem`.
-1. Place the PEM-encoded private key, into the Defender for Cloud agent directory, in a file called `certificate_private.pem`.
-1. Place the appropriate connection string in a file named `connection_string.txt`. For example,
-
- ```azurecli
- HostName=<the host name of the iot hub>;DeviceId=<the id of the device>;ModuleId=<the id of the module>;x509=true
- ```
-
- This action indicates that the Defender for Cloud agent will expect a certificate to be provided for authentication.
-
-1. restart the service using the following code:
-
- ```azurecli
- sudo systemctl restart defender-iot-micro-agent.service
- ```
-
-## Ensure the micro agent is running correctly
-
-1. Run the following command:
- ```azurecli
- systemctl status defender-iot-micro-agent.service
- ```
-1. Check that the service is stable by making sure it is **active** and that the uptime of the process is appropriate:
-
- :::image type="content" source="media/concept-security-agent-authentication/active.png" alt-text="Ensure your service is stable by making sure it is active.":::
-
-## Next steps
-
-Check your [Security posture ΓÇô CIS benchmark](concept-security-posture.md).
defender-for-iot Configure Pam To Audit Sign In Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/configure-pam-to-audit-sign-in-events.md
+
+ Title: Configure Pluggable Authentication Modules (PAM) to audit sign-in events (Preview)
+description: Learn how to configure Pluggable Authentication Modules (PAM) to audit sign-in events when syslog is not configured for your device.
Last updated : 02/20/2022+++
+# Configure Pluggable Authentication Modules (PAM) to audit sign-in events
+
+This article provides a sample process for configuring Pluggable Authentication Modules (PAM) to audit SSH, Telnet, and terminal sign-in events on an unmodified Ubuntu 20.04 or 18.04 installation.
+
+PAM configurations may vary between devices and Linux distributions.
+
+For more information, see [Login collector (event-based collector)](concept-event-aggregation.md#login-collector-event-based-collector).
+
+## Prerequisites
+
+Before you get started, make sure that you have a Defender for IoT Micro Agent.
+
+Configuring PAM requires technical knowledge.
+
+For more information, see [Tutorial: Install the Defender for IoT micro agent (Preview)](tutorial-standalone-agent-binary-installation.md).
+
+## Modify PAM configuration to report sign-in and sign-out events
+
+This procedure provides a sample process for configuring the collection of successful sign-in events.
+
+Our example is based on an unmodified Ubuntu 20.04 or 18.04 installation, and the steps in this process may differ for your system.
+
+1. Locate the following files:
+
+ - `/etc/pam.d/sshd`
+ - `/etc/pam.d/login`
+
+1. Append the following lines to the end of each file:
+
+ ```txt
+    # report login
+    session [default=ignore] pam_exec.so type=open_session /usr/libexec/defender_iot_micro_agent/pam/pam_audit.sh 0
+
+    # report logout
+    session [default=ignore] pam_exec.so type=close_session /usr/libexec/defender_iot_micro_agent/pam/pam_audit.sh 1
+ ```
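If you prefer to stage the change with a script, a minimal sketch is shown below. It assumes the `pam_audit.sh` path used in this article; it appends the hooks to copies of the service files so you can review them before copying them back into `/etc/pam.d`:

```bash
# Sketch: append the audit hooks to a staged copy of each PAM service file,
# rather than editing /etc/pam.d in place. Review before installing the copies.
mkdir -p /tmp/pam-staging
for svc in sshd login; do
  cp "/etc/pam.d/$svc" "/tmp/pam-staging/$svc" 2>/dev/null || touch "/tmp/pam-staging/$svc"
  cat >> "/tmp/pam-staging/$svc" <<'EOF'
# report login
session [default=ignore] pam_exec.so type=open_session /usr/libexec/defender_iot_micro_agent/pam/pam_audit.sh 0
# report logout
session [default=ignore] pam_exec.so type=close_session /usr/libexec/defender_iot_micro_agent/pam/pam_audit.sh 1
EOF
done
```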
+
+## Modify the PAM configuration to report sign-in failures
+
+This procedure provides a sample process for configuring the collection of failed sign-in attempts.
+
+This example in this procedure is based on an unmodified Ubuntu 18.04 or 20.04 installation. The files and commands listed below may differ per configuration or as a result of modifications.
+
+1. Locate the `/etc/pam.d/common-auth` file and look for the following lines:
+
+ ```txt
+ # here are the per-package modules (the "Primary" block)
+ auth    [success=1 default=ignore]  pam_unix.so nullok_secure
+ # here's the fallback if no module succeeds
+ auth    requisite           pam_deny.so
+ ```
+
+ This section authenticates via the `pam_unix.so` module. In case of authentication failure, this section continues to the `pam_deny.so` module to prevent access.
+
+1. Replace the indicated lines of code with the following:
+
+ ```txt
+ # here are the per-package modules (the "Primary" block)
+ auth [success=1 default=ignore] pam_unix.so nullok_secure
+ auth [success=1 default=ignore] pam_exec.so quiet /usr/libexec/defender_iot_micro_agent/pam/pam_audit.sh 2
+ auth [success=1 default=ignore] pam_echo.so
+ # here's the fallback if no module succeeds
+ auth requisite pam_deny.so
+ ```
+
+    In this modified section, when authentication succeeds, PAM skips one module to reach the `pam_echo.so` module, which in turn skips the `pam_deny.so` module, so authentication succeeds.
+
+ In case of failure, PAM continues to report the sign-in failure to the agent log file, and then skips one module to the `pam_deny.so` module, which blocks access.
+
+## Validate your configuration
+
+This procedure describes how to verify that you've configured PAM correctly to audit sign-in events.
+
+1. Sign in to the device using SSH, and then sign out.
+
+1. Attempt to sign in to the device over SSH using incorrect credentials, to create a failed sign-in event.
+
+1. Access your device and run the following command:
+
+ ```bash
+ cat /var/lib/defender_iot_micro_agent/pam.log
+ ```
+
+1. Verify that lines similar to the following are logged, for a successful sign-in (`open_session`), sign-out (`close_session`), and a sign-in failure (`auth`):
+
+ ```txt
+ 2021-10-31T18:10:31+02:00,16356631,2589842,open_session,sshd,user,192.168.0.101,ssh,0
+    2021-10-31T18:26:19+02:00,16356719,199164,close_session,sshd,user,192.168.0.201,ssh,1
+ 2021-10-28T17:44:13+03:00,163543223,3572596,auth,sshd,user,143.24.20.36,ssh,2
+ ```
+
+1. Repeat the verification procedure with Telnet and terminal connections.
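To sanity-check a captured line programmatically, you can split it on commas. The field names below are inferred from the sample lines (timestamp, counters, operation, service, user, remote address, terminal, and event code) and are an assumption, not a documented schema:

```bash
# Parse one pam.log line from the sample above. Field meanings are inferred.
line='2021-10-31T18:10:31+02:00,16356631,2589842,open_session,sshd,user,192.168.0.101,ssh,0'
IFS=',' read -r ts c1 c2 op svc usr addr term code <<< "$line"
echo "op=$op user=$usr from=$addr code=$code"
```

For the sample line this prints `op=open_session user=user from=192.168.0.101 code=0`.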
+
+## Next steps
+
+For more information, see [Micro agent event collection (Preview)](concept-event-aggregation.md).
defender-for-iot How To Configure Micro Agent Twin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/how-to-configure-micro-agent-twin.md
Learn how to configure a micro agent twin.
:::image type="content" source="media/tutorial-micro-agent-configuration/desired.png" alt-text="Screenshot of the sample output of the module identity twin.":::
+ For example:
+
+    ```json
+ "desired": {
+ "Baseline_Disabled": false,
+ "Baseline_MessageFrequency": "Low",
+ "Baseline_GroupsDisabled": "",
+ "Baseline_ChecksDisabled": "",
+ "SystemInformation_Disabled": false,
+ "SystemInformation_MessageFrequency": "Low",
+ "SBoM_Disabled": false,
+ "SBoM_MessageFrequency": "Low",
+ "NetworkActivity_Disabled": false,
+ "NetworkActivity_MessageFrequency": "Medium",
+ "NetworkActivity_Devices": "eth0",
+ "NetworkActivity_CacheSize": 256,
+ "Process_Disabled": false,
+ "Process_MessageFrequency": "Medium",
+ "Process_PollingInterval": 100000,
+ "Process_Mode": 1,
+ "Process_CacheSize": 256,
+ "LogCollector_Disabled": false,
+ "LogCollector_MessageFrequency": "Low",
+ "Heartbeat_Disabled": false,
+ "Heartbeat_MessageFrequency": "Low",
+ "Login_Disabled": false,
+ "Login_MessageFrequency": "Medium",
+ "IothubModule_MessageTimeout": 2880,
+ "CollectorsCore_PriorityIntervals": "30,120,1440"
+ }
+ ```
+ The agent has successfully set the new configuration if the value of `"latest_state"`, under the `"reported"` section, shows `"success"`.

:::image type="content" source="media/tutorial-micro-agent-configuration/reported-success.png" alt-text="Screenshot of a successful configuration change.":::
defender-for-iot How To Configure With Sentinel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/how-to-configure-with-sentinel.md
- Title: Configure Microsoft Sentinel with Defender for IoT for device builders
-description: This article explains how to configure Microsoft Sentinel to receive data from your Defender for IoT for device builders solution.
- Previously updated : 11/09/2021--
-# Connect your data from Defender for IoT for device builders to Microsoft Sentinel (Public preview)
-
-Use the Defender for IoT connector to stream all your Defender for IoT events into Microsoft Sentinel.
-
-This integration enables organizations to quickly detect multistage attacks that often cross IT and OT boundaries. Additionally, Defender for IoT's integration with Microsoft Sentinel's security orchestration, automation, and response (SOAR) capabilities enables automated response and prevention using built-in OT-optimized playbooks.
-
-## Prerequisites
--- **Read** and **Write** permissions on the Workspace onto which Microsoft Sentinel is deployed-- **Defender for IoT** must be **enabled** on your relevant IoT Hub(s)-- You must have **Contributor** permissions on the **Subscription** you want to connect-
-## Connect to Defender for IoT
-
-1. In Microsoft Sentinel, select **Data connectors** and then select the **Defender for IoT** (formerly Azure Security Center for IoT) from the gallery.
-
-1. From the bottom of the right pane, click **Open connector page**.
-
-1. Click **Connect**, next to each IoT Hub subscription whose alerts and device alerts you want to stream into Microsoft Sentinel.
- - You will receive an error message if Defender for IoT is not enabled on at least one IoT Hub within a subscription. Enable Defender for IoT within the IoT Hub to remove the error.
-
-1. You can decide whether you want the alerts from Defender for IoT to automatically generate incidents in Microsoft Sentinel. Under **Create incidents**, select **Enable** to enable the default analytics rule to automatically create incidents from the generated alerts. This rule can be changed or edited under **Analytics** > **Active rules**.
-
-> [!NOTE]
-> It can take 10 seconds or more for the **Subscription** list to refresh after making connection changes.
-
-## Log Analytics alert view
-
-To use the relevant schema in Log Analytics to display the Defender for IoT alerts:
-
-1. Open **Logs** > **SecurityInsights** > **SecurityAlert**, or search for **SecurityAlert**.
-
-1. Filter to see only Defender for IoT generated alerts using the following kql filter:
-
-```kusto
-SecurityAlert | where ProductName == "Azure Security Center for IoT"
-```
-
-### Service notes
-
-After connecting a **Subscription**, the hub data is available in Microsoft Sentinel approximately 15 minutes later.
-
-## Next steps
-
-In this document, you learned how to connect Defender for IoT to Microsoft Sentinel. To learn more about threat detection and security data access, see the following articles:
--- Learn how to use Microsoft Sentinel to [Quickstart: Get started with Microsoft Sentinel](../../sentinel/get-visibility.md).-- Learn how to [Access your IoT security data](how-to-security-data-access.md)
defender-for-iot Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/release-notes.md
Title: What's new in Microsoft Defender for IoT for device builders description: Learn about the latest updates for Defender for IoT device builders. Previously updated : 01/10/2022 Last updated : 02/20/2022
-# What's new
+# What's new
[!INCLUDE [Banner for top of topics](../includes/banner.md)]
Noted features are in PREVIEW. The [Azure Preview Supplemental Terms](https://az
Listed below are the support and breaking change policies for Defender for IoT, and the versions of Defender for IoT that are currently available.
+## February 2022
+
+**Version 4.1.1**:
+
+- **Micro agent for Edge is now in Public Preview**: The micro-agent supports IoT Edge devices, with an easy installation and identity provisioning process that uses an automatically provisioned module identity to authenticate Edge devices without the need to perform any manual authentication.
+
+ For more information, see [Install Defender for IoT micro agent for Edge (Preview)](how-to-install-micro-agent-for-edge.md).
+
+- **New directory structure**: Now aligned with the standard Linux installation directory structure.
+
+ Due to this change, updates to version 4.1.1 require you to reauthenticate the micro agent and save your connection string in the new location. For more information, see [Upgrade the Microsoft Defender for IoT micro agent](upgrade-micro-agent.md).
+
+- **SBoM collector**: The SBoM collector now collects the packages installed on the device periodically. For more information, see [Micro agent event collection (Preview)](concept-event-aggregation.md).
+
+- **CIS benchmarks**:
+
+  - CIS Linux distribution independent benchmarks now support recommendations based on CIS Distribution Independent Linux Benchmarks, version 2.0.0.
+
+ - CIS benchmark recommendations now support the ability to disable specific CIS Benchmarks checks or groups through twin configurations. For more information, see [Micro agent configurations (Preview)](concept-micro-agent-configuration.md).
+
+- **Micro agent supported devices list expands**: The micro agent now supports Debian 11 AMD64 and ARM32v7 devices, as well as Ubuntu Server 18.04 ARM32 Linux devices, and Ubuntu Server 20.04 ARM32 and ARM64 Linux devices.
+
+ For more information, see [Agent portfolio overview and OS support (Preview)](concept-agent-portfolio-overview-os-support.md).
+
+- **DNS hit count**: The network collector now includes a DNS hit count field, visible through Log Analytics, which can help indicate whether a DNS request was part of an automatic query.
+
+ For more information, see [Network Connection events (event-based collector)](concept-event-aggregation.md#network-connection-events-event-based-collector).
+
+- **Login collector**: The login collector is now supported using either syslog, which collects SSH sign-in events, or PAM, which collects SSH, Telnet, and local sign-in events using the Pluggable Authentication Modules stack. For more information, see [Micro agent event collection (Preview)](concept-event-aggregation.md).
+
## November 2021

**Version 3.13.1**:
defender-for-iot Troubleshoot Defender Micro Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/troubleshoot-defender-micro-agent.md
If an issue occurs when the micro agent is run, you can run the micro agent in a
```bash
sudo systemctl stop defender-iot-micro-agent
-cd /var/defender_iot_micro_agent/
+cd /etc/defender_iot_micro_agent/
sudo ./defender_iot_micro_agent
```
defender-for-iot Tutorial Investigate Security Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/tutorial-investigate-security-alerts.md
There are no resources to clean up.
## Next steps > [!div class="nextstepaction"]
-> Learn how to [Connect your data from Defender for IoT for device builders to Microsoft Sentinel (Public preview)](how-to-configure-with-sentinel.md)
+> Learn how to [integrate Microsoft Sentinel and Microsoft Defender for IoT](/azure/sentinel/iot-solution?toc=%2Fazure%2Fdefender-for-iot%2Forganizations%2Ftoc.json&bc=%2Fazure%2Fdefender-for-iot%2Fbreadcrumb%2Ftoc.json&tabs=use-out-of-the-box-analytics-rules-recommended)
defender-for-iot Tutorial Standalone Agent Binary Installation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/tutorial-standalone-agent-binary-installation.md
Title: Install the Microsoft Defender for IoT micro agent (Preview) description: Learn how to install and authenticate the Defender for IoT micro agent. Previously updated : 01/13/2022 Last updated : 02/20/2022 #Customer intent: As an Azure admin I want to install the Defender for IoT agent on devices connected to an Azure IoT Hub
You will need to copy the module identity connection string from the DefenderIoT
:::image type="content" source="media/quickstart-standalone-agent-binary-installation/copy-button.png" alt-text="Select the copy button to copy the Connection string (primary key).":::
-### Configure authentication using the module identity connection string
-
-**To configure the agent to authenticate using a module identity connection string**:
-
-1. Create a file named `connection_string.txt` containing the copied connection string encoded in utf-8 in the Defender for Cloud agent directory `/var/defender_iot_micro_agent` path by entering the following command:
+1. Create a file named `connection_string.txt` containing the copied connection string, encoded in UTF-8, in the Defender for Cloud agent directory `/etc/defender_iot_micro_agent` by entering the following command:
```bash
- sudo bash -c 'echo "<connection string>" > /var/defender_iot_micro_agent/connection_string.txt'
+ sudo bash -c 'echo "<connection string>" > /etc/defender_iot_micro_agent/connection_string.txt'
```
- The `connection_string.txt` will now be located in the following path location `/var/defender_iot_micro_agent/connection_string.txt`.
+    The `connection_string.txt` file will now be located at `/etc/defender_iot_micro_agent/connection_string.txt`.
1. Restart the service using this command:
You will need to copy the module identity connection string from the DefenderIoT
1. Procure a certificate by following [these instructions](../../iot-hub/tutorial-x509-scripts.md).
-1. Place the PEM-encoded public part of the certificate, and the private key, in to the Defender for Cloud Agent Directory in to the file called `certificate_public.pem`, and `certificate_private.pem`.
+1. Place the PEM-encoded public part of the certificate and the private key into `/etc/defender_iot_micro_agent`, in files called `certificate_public.pem` and `certificate_private.pem`.
1. Place the appropriate connection string into the `connection_string.txt` file. The connection string should look like this:
You can test the system by creating a trigger file on the device. The trigger fi
1. Create a file on the file system with the following command: ```bash
- sudo touch /tmp/DefenderForIoTOSBaselineTrigger.txt
+ sudo touch /tmp/DefenderForIoTOSBaselineTrigger.txt
```
+1. Make sure that your Log Analytics workspace is attached to your IoT hub. For more information, see [Create a log analytics workspace](tutorial-configure-agent-based-solution.md#create-a-log-analytics-workspace).
+ 1. Restart the agent using the command: ```bash
You can test the system by creating a trigger file on the device. The trigger fi
Allow up to one hour for the recommendation to appear in the hub.
-A baseline validation failure recommendation will occur in the hub, with a `CceId` of CIS-debian-9-DEFENDER_FOR_IOT_TEST_CHECKS-0.0:
+A baseline recommendation called 'IoT_CISBenchmarks_DIoTTest' is created. You can query this recommendation from Log Analytics as follows:
+
+```kusto
+SecurityRecommendation
+| where RecommendationName contains "IoT_CISBenchmarks_DIoTTest"
+| where DeviceId contains "<device-id>"
+| top 1 by TimeGenerated desc
+```
+
+For example:
## Install a specific micro agent version
defender-for-iot Upgrade Micro Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/upgrade-micro-agent.md
+
+ Title: Upgrade the Microsoft Defender for IoT micro agent
+description: Learn how to upgrade your Defender for IoT micro agent for device builders.
Last updated : 02/20/2022+++
+# Upgrade the Microsoft Defender for IoT micro agent
+
+This article describes how to upgrade a Microsoft Defender for IoT micro agent with the latest software version.
+
+For more information, see our [release notes for device builders](release-notes.md).
+
+## Upgrade a standalone micro agent
+
+1. Update the apt package lists. Run:
+
+ ```bash
+ sudo apt-get update
+ ```
+
+1. Install the Defender for IoT micro agent on Debian or Ubuntu-based Linux distributions. Run:
+
+ ```bash
+ sudo apt-get install defender-iot-micro-agent
+ ```
+
+## Upgrade a micro agent for Edge
+
+1. Update the apt package lists. Run:
+
+ ```bash
+ sudo apt-get update
+ ```
+
+1. Install the Defender for IoT micro agent on Debian or Ubuntu-based Linux distributions for Edge. Run:
+
+ ```bash
+ sudo apt-get install defender-iot-micro-agent-edge
+ ```
+
+## Upgrade a standalone micro agent from a legacy version
+
+This section is relevant specifically when upgrading a micro agent from version 3.13.1 or lower to version 4.1.1 or higher.
+
+In version 4.1.1, the standalone micro agent directory changed to align with standard Linux installation directory structures. This change requires customers to reauthenticate the micro agent and modify the connection string location.
+
+1. Upgrade your micro agent as described [above](#upgrade-a-standalone-micro-agent).
+
+1. Reauthenticate your micro agent. For more information, see [Authenticate the micro agent](tutorial-standalone-agent-binary-installation.md#authenticate-the-micro-agent).
+
+## Install a specific version of the micro agent
+
+Specify a version number in your command to install that version of the micro agent.
+
+Use the following command syntax:
+
+```bash
+sudo apt-get install defender-iot-micro-agent=<version>
+```
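+After installing a pinned version, you may also want to keep routine `apt-get upgrade` runs from replacing it. The article doesn't cover this; as an illustrative sketch (the version number `4.1.1` below is hypothetical), an apt preferences pin holds the package at that version:
+
+```text
+# /etc/apt/preferences.d/defender-iot-micro-agent -- hypothetical pin file
+Package: defender-iot-micro-agent
+Pin: version 4.1.1
+Pin-Priority: 1001
+```
+
+Running `sudo apt-mark hold defender-iot-micro-agent` is a simpler alternative when you don't need priority control over specific versions.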
+
+## Next steps
+
+For more information, see:
+
+- [Install Defender for IoT micro agent for Edge (Preview)](how-to-install-micro-agent-for-edge.md)
+
+- [Tutorial: Create a DefenderIotMicroAgent module twin (Preview)](tutorial-create-micro-agent-module-twin.md)
defender-for-iot Alert Engine Messages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/alert-engine-messages.md
# Alert types and descriptions
-This article provides information on the alert types, descriptions, and severity that may be generated from the Defender for IoT engines. This information can be used to help map alerts into playbooks, define Forwarding rules, Exclusion rules, and custom alerts as well as define the appropriate rules within a SIEM. Alerts appear in the Alerts window, which allows you to manage the alert event.
-
-> [!NOTE]
-> This article contains references to the term *slave*, a term that Microsoft no longer uses. When the term is removed from the software, it will be removed from this article.
+This article provides information on the alert types, descriptions, and severity that may be generated from the Defender for IoT engines. This information can be used to help map alerts into playbooks, define Forwarding rules, Exclusion rules, and custom alerts and define the appropriate rules within a SIEM. Alerts appear in the Alerts window, which allows you to manage the alert event.
### Alert news

New alerts may be added and existing alerts may be updated or disabled. Certain disabled alerts can be re-enabled from the Support page of the sensor console. Alerts that can be re-enabled are marked with an asterisk (*) in the tables below.
-You may have configured newly disabled alerts in your Forwarding rules. If this is the case, you may need to update related Defender for IoT Exclusion rules, or update SIEM rules and playbooks where relevant.
+You may have configured newly disabled alerts in your Forwarding rules. If so, you may need to update related Defender for IoT Exclusion rules, or update SIEM rules and playbooks where relevant.
See [What's new in Microsoft Defender for IoT?](release-notes.md#whats-new-in-microsoft-defender-for-iot) for detailed information about changes made to alerts.
Policy engine alerts describe detected deviations from learned baseline behavior
| Title | Description | Severity |
|--|--|--|
| Beckhoff Software Changed | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major |
-| Database Login Failed | A failed login attempt was detected from a source device to a destination server. This might be the result of human error, but could also indicate a malicious attempt to compromise the server or data on it. | Major |
+| Database Login Failed | A failed sign in attempt was detected from a source device to a destination server. This might be the result of human error, but could also indicate a malicious attempt to compromise the server or data on it. | Major |
| Emerson ROC Firmware Version Changed | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major |
-| External address within the network communicated with Internet | A source device defined as part of your network is communicating with Internet addresses. The source is not authorized to communicate with Internet addresses. | Critical |
-| Field Device Discovered Unexpectedly | A new source device was detected on the network but has not been authorized. | Major |
+| External address within the network communicated with Internet | A source device defined as part of your network is communicating with Internet addresses. The source isn't authorized to communicate with Internet addresses. | Critical |
+| Field Device Discovered Unexpectedly | A new source device was detected on the network but hasn't been authorized. | Major |
| Firmware Change Detected | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major |
| Firmware Version Changed | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major |
-| Foxboro I/A Unauthorized Operation | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| FTP Login Failed | A failed login attempt was detected from a source device to a destination server. This might be the result of human error, but could also indicate a malicious attempt to compromise the server or data on it. | Major |
-| Function Code Raised Unauthorized Exception | A source device (slave) returned an exception to a destination device (master). | Major |
+| Foxboro I/A Unauthorized Operation | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| FTP Login Failed | A failed sign in attempt was detected from a source device to a destination server. This might be the result of human error, but could also indicate a malicious attempt to compromise the server or data on it. | Major |
+| Function Code Raised Unauthorized Exception | A source device (secondary) returned an exception to a destination device (primary). | Major |
| GOOSE Message Type Settings | Message (identified by protocol ID) settings were changed on a source device. | Warning |
| Honeywell Firmware Version Changed | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major |
-| * Illegal HTTP Communication | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| Internet Access Detected | A source device defined as part of your network is communicating with Internet addresses. The source is not authorized to communicate with Internet addresses. | Major |
+| * Illegal HTTP Communication | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| Internet Access Detected | A source device defined as part of your network is communicating with Internet addresses. The source isn't authorized to communicate with Internet addresses. | Major |
| Mitsubishi Firmware Version Changed | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major |
-| Modbus Address Range Violation | A master device requested access to a new slave memory address. | Major |
+| Modbus Address Range Violation | A primary device requested access to a new secondary memory address. | Major |
| Modbus Firmware Version Changed | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major |
-| New Activity Detected - CIP Class | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| New Activity Detected - CIP Class Service | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| New Activity Detected - CIP PCCC Command | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| New Activity Detected - CIP Symbol | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| New Activity Detected - EtherNet/IP I/O Connection | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| New Activity Detected - EtherNet/IP Protocol Command | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| New Activity Detected - GSM Message Code | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| New Activity Detected - LonTalk Command Codes | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| New Port Discovery | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Warning |
-| New Activity Detected - LonTalk Network Variable | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| New Activity Detected - Ovation Data Request | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| New Activity Detected - Read/Write Command (AMS Index Group) | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| New Activity Detected - Read/Write Command (AMS Index Offset) | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| New Activity Detected - Unauthorized DeltaV Message Type | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| New Activity Detected - Unauthorized DeltaV ROC Operation | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| New Activity Detected - Unauthorized RPC Message Type | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| New Activity Detected - Using AMS Protocol Command | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| New Activity Detected - Using Siemens SICAM Command | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| New Activity Detected - Using Suitelink Protocol command | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| New Activity Detected - Using Suitelink Protocol sessions | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| New Activity Detected - Using Yokogawa VNetIP Command | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| New Asset Detected | A new source device was detected on the network but has not been authorized. (Note that this alert applies to devices discovered in OT subnets. New devices discoverd in IT subnets do not trigger an alert.) | Major |
-| New LLDP Device Configuration | A new source device was detected on the network but has not been authorized. | Major |
-| Omron FINS Unauthorized Command | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| New Activity Detected - CIP Class | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| New Activity Detected - CIP Class Service | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| New Activity Detected - CIP PCCC Command | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| New Activity Detected - CIP Symbol | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| New Activity Detected - EtherNet/IP I/O Connection | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| New Activity Detected - EtherNet/IP Protocol Command | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| New Activity Detected - GSM Message Code | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| New Activity Detected - LonTalk Command Codes | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| New Port Discovery | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Warning |
+| New Activity Detected - LonTalk Network Variable | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| New Activity Detected - Ovation Data Request | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| New Activity Detected - Read/Write Command (AMS Index Group) | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| New Activity Detected - Read/Write Command (AMS Index Offset) | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| New Activity Detected - Unauthorized DeltaV Message Type | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| New Activity Detected - Unauthorized DeltaV ROC Operation | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| New Activity Detected - Unauthorized RPC Message Type | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| New Activity Detected - Using AMS Protocol Command | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| New Activity Detected - Using Siemens SICAM Command | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| New Activity Detected - Using Suitelink Protocol command | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| New Activity Detected - Using Suitelink Protocol sessions | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| New Activity Detected - Using Yokogawa VNetIP Command | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| New Asset Detected | A new source device was detected on the network but hasn't been authorized. <br><br>This alert applies to devices discovered in OT subnets. New devices discovered in IT subnets don't trigger an alert.| Major |
+| New LLDP Device Configuration | A new source device was detected on the network but hasn't been authorized. | Major |
+| Omron FINS Unauthorized Command | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
| S7 Plus PLC Firmware Changed | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major |
| Sampled Values Message Type Settings | Message (identified by protocol ID) settings were changed on a source device. | Warning |
-| Suspicion of Illegal Integrity Scan | A scan was detected on a DNP3 source device (outstation). This scan was not authorized as learned traffic on your network. | Major |
-| Toshiba Computer Link Unauthorized Command | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Minor |
-| Unauthorized ABB Totalflow File Operation | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| Unauthorized ABB Totalflow Register Operation | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| Unauthorized Access to Siemens S7 Data Block | A source device attempted to access a resource on another device. An access attempt to this resource between these two devices has not been authorized as learned traffic on your network. | Warning |
-| Unauthorized Access to Siemens S7 Plus Object | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| Unauthorized Access to Wonderware Tag | A source device attempted to access a resource on another device. An access attempt to this resource between these two devices has not been authorized as learned traffic on your network. | Major |
-| Unauthorized BACNet Object Access | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| Unauthorized BACNet Route | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| Unauthorized Database Login | A login attempt between a source client and destination server was detected. Communication between these devices has not been authorized as learned traffic on your network. | Major |
-| Unauthorized Database Operation | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| Unauthorized Emerson ROC Operation | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| Unauthorized GE SRTP File Access | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| Unauthorized GE SRTP Protocol Command | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| Unauthorized GE SRTP System Memory Operation | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| Unauthorized HTTP Activity | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| * Unauthorized HTTP SOAP Action | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| * Unauthorized HTTP User Agent | An unauthorized application was detected on a source device. The application has not been authorized as a learned application on your network. | Major |
-| Unauthorized Internet Connectivity Detected | A source device defined as part of your network is communicating with Internet addresses. The source is not authorized to communicate with Internet addresses. | Critical |
-| Unauthorized Mitsubishi MELSEC Command | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| Unauthorized MMS Program Access | A source device attempted to access a resource on another device. An access attempt to this resource between these two devices has not been authorized as learned traffic on your network. | Major |
-| Unauthorized MMS Service | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| Unauthorized Multicast/Broadcast Connection | A Multicast/Broadcast connection was detected between a source device and other devices. Multicast/Broadcast communication is not authorized. | Critical |
-| Unauthorized Name Query | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| Unauthorized OPC UA Activity | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| Unauthorized OPC UA Request/Response | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| Suspicion of Illegal Integrity Scan | A scan was detected on a DNP3 source device (outstation). This scan wasn't authorized as learned traffic on your network. | Major |
+| Toshiba Computer Link Unauthorized Command | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Minor |
+| Unauthorized ABB Totalflow File Operation | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| Unauthorized ABB Totalflow Register Operation | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| Unauthorized Access to Siemens S7 Data Block | A source device attempted to access a resource on another device. An access attempt to this resource between these two devices hasn't been authorized as learned traffic on your network. | Warning |
+| Unauthorized Access to Siemens S7 Plus Object | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| Unauthorized Access to Wonderware Tag | A source device attempted to access a resource on another device. An access attempt to this resource between these two devices hasn't been authorized as learned traffic on your network. | Major |
+| Unauthorized BACNet Object Access | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| Unauthorized BACNet Route | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| Unauthorized Database Login | A sign in attempt between a source client and destination server was detected. Communication between these devices hasn't been authorized as learned traffic on your network. | Major |
+| Unauthorized Database Operation | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| Unauthorized Emerson ROC Operation | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| Unauthorized GE SRTP File Access | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| Unauthorized GE SRTP Protocol Command | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| Unauthorized GE SRTP System Memory Operation | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| Unauthorized HTTP Activity | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| * Unauthorized HTTP SOAP Action | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| * Unauthorized HTTP User Agent | An unauthorized application was detected on a source device. The application hasn't been authorized as a learned application on your network. | Major |
+| Unauthorized Internet Connectivity Detected | A source device defined as part of your network is communicating with Internet addresses. The source isn't authorized to communicate with Internet addresses. | Critical |
+| Unauthorized Mitsubishi MELSEC Command | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| Unauthorized MMS Program Access | A source device attempted to access a resource on another device. An access attempt to this resource between these two devices hasn't been authorized as learned traffic on your network. | Major |
+| Unauthorized MMS Service | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| Unauthorized Multicast/Broadcast Connection | A Multicast/Broadcast connection was detected between a source device and other devices. Multicast/Broadcast communication isn't authorized. | Critical |
+| Unauthorized Name Query | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| Unauthorized OPC UA Activity | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| Unauthorized OPC UA Request/Response | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
| Unauthorized Operation was detected by a User Defined Rule | Traffic was detected between two devices. This activity is unauthorized based on a Custom Alert Rule defined by a user. | Major |
-| Unauthorized PLC Configuration Read | The source device is not defined as a programming device but performed a read/write operation on a destination controller. Programming changes should only be performed by programming devices. A programming application may have been installed on this device. | Warning |
-| Unauthorized PLC Configuration Write | The source device sent a command to read/write the program of a destination controller. This activity was not previously seen. | Major |
-| Unauthorized PLC Program Upload | The source device sent a command to read/write the program of a destination controller. This activity was not previously seen. | Major |
-| Unauthorized PLC Programming | The source device is not defined as a programming device but performed a read/write operation on a destination controller. Programming changes should only be performed by programming devices. A programming application may have been installed on this device. | Critical |
-| Unauthorized Profinet Frame Type | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| Unauthorized SAIA S-Bus Command | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| Unauthorized Siemens S7 Execution of Control Function | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| Unauthorized Siemens S7 Execution of User Defined Function | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| Unauthorized Siemens S7 Plus Block Access | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| Unauthorized Siemens S7 Plus Operation | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| Unauthorized SMB Login | A login attempt between a source client and destination server was detected. Communication between these devices has not been authorized as learned traffic on your network. | Major |
-| Unauthorized SNMP Operation | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| Unauthorized SSH Access | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| Unauthorized Windows Process | An unauthorized application was detected on a source device. The application has not been authorized as a learned application on your network. | Major |
-| Unauthorized Windows Service | An unauthorized application was detected on a source device. The application has not been authorized as a learned application on your network. | Major |
+| Unauthorized PLC Configuration Read | The source device isn't defined as a programming device but performed a read/write operation on a destination controller. Programming changes should only be performed by programming devices. A programming application may have been installed on this device. | Warning |
+| Unauthorized PLC Configuration Write | The source device sent a command to read/write the program of a destination controller. This activity wasn't previously seen. | Major |
+| Unauthorized PLC Program Upload | The source device sent a command to read/write the program of a destination controller. This activity wasn't previously seen. | Major |
+| Unauthorized PLC Programming | The source device isn't defined as a programming device but performed a read/write operation on a destination controller. Programming changes should only be performed by programming devices. A programming application may have been installed on this device. | Critical |
+| Unauthorized Profinet Frame Type | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| Unauthorized SAIA S-Bus Command | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| Unauthorized Siemens S7 Execution of Control Function | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| Unauthorized Siemens S7 Execution of User Defined Function | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| Unauthorized Siemens S7 Plus Block Access | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| Unauthorized Siemens S7 Plus Operation | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| Unauthorized SMB Login | A sign-in attempt between a source client and destination server was detected. Communication between these devices hasn't been authorized as learned traffic on your network. | Major |
+| Unauthorized SNMP Operation | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| Unauthorized SSH Access | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| Unauthorized Windows Process | An unauthorized application was detected on a source device. The application hasn't been authorized as a learned application on your network. | Major |
+| Unauthorized Windows Service | An unauthorized application was detected on a source device. The application hasn't been authorized as a learned application on your network. | Major |
| Unauthorized Operation was detected by a User Defined Rule | New traffic parameters were detected. This parameter combination violates a user-defined rule. | Major |
-| Unpermitted Modbus Schneider Electric Extension | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| Unpermitted Usage of ASDU Types | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| Unpermitted Usage of DNP3 Function Code | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
-| Unpermitted Usage of Internal Indication (IIN) | A DNP3 source device (outstation) reported an internal indication (IIN) that has not authorized as learned traffic on your network. | Major |
-| Unpermitted Usage of Modbus Function Code | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| Unpermitted Modbus Schneider Electric Extension | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| Unpermitted Usage of ASDU Types | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| Unpermitted Usage of DNP3 Function Code | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
+| Unpermitted Usage of Internal Indication (IIN) | A DNP3 source device (outstation) reported an internal indication (IIN) that hasn't been authorized as learned traffic on your network. | Major |
+| Unpermitted Usage of Modbus Function Code | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major |
## Anomaly engine alerts
Anomaly engine alerts describe detected anomalies in network activity.
| * Abnormal Number of Parameters in HTTP Header | The source device sent an abnormal message. This may indicate an attempt to attack the destination device. | Critical |
| Abnormal Periodic Behavior In Communication Channel | A change in the frequency of communication between the source and destination devices was detected. | Minor |
| Abnormal Termination of Applications | An excessive number of stop commands were detected on a source device. This may be the result of an operational issue or an attempt to manipulate the device. | Major |
-| Abnormal Traffic Bandwidth | Abnormal bandwidth was detected on a channel. Bandwidth appears to be significantly lower/higher than previously detected. For details, work with the Total Bandwidth widget. | Warning |
-| Abnormal Traffic Bandwidth Between Devices | Abnormal bandwidth was detected on a channel. Bandwidth appears to be significantly lower/higher than previously detected. For details, work with the Total Bandwidth widget. | Warning |
-| Address Scan Detected | A source device was detected scanning network devices. This device has not been authorized as a network scanning device. | Critical |
-| ARP Address Scan Detected | A source device was detected scanning network devices using Address Resolution Protocol (ARP). This device address has not been authorized as valid ARP scanning address. | Critical |
+| Abnormal Traffic Bandwidth | Abnormal bandwidth was detected on a channel. Bandwidth appears to be lower/higher than previously detected. For details, work with the Total Bandwidth widget. | Warning |
+| Abnormal Traffic Bandwidth Between Devices | Abnormal bandwidth was detected on a channel. Bandwidth appears to be lower/higher than previously detected. For details, work with the Total Bandwidth widget. | Warning |
+| Address Scan Detected | A source device was detected scanning network devices. This device hasn't been authorized as a network scanning device. | Critical |
+| ARP Address Scan Detected | A source device was detected scanning network devices using Address Resolution Protocol (ARP). This device address hasn't been authorized as a valid ARP scanning address. | Critical |
| ARP Spoofing | An abnormal quantity of packets was detected in the network. This could indicate an attack, for example, an ARP spoofing or ICMP flooding attack. | Warning |
-| Excessive Login Attempts | A source device was seen performing excessive login attempts to a destination server. This may be a brute force attack. The server may be compromised by a malicious actor. | Critical |
-| Excessive Number of Sessions | A source device was seen performing excessive login attempts to a destination server. This may be a brute force attack. The server may be compromised by a malicious actor. | Critical |
+| Excessive Login Attempts | A source device was seen performing excessive sign-in attempts to a destination server. This may be a brute force attack. The server may be compromised by a malicious actor. | Critical |
+| Excessive Number of Sessions | A source device was seen performing excessive sign-in attempts to a destination server. This may be a brute force attack. The server may be compromised by a malicious actor. | Critical |
| Excessive Restart Rate of an Outstation | An excessive number of restart commands were detected on a source device. This may be the result of an operational issue or an attempt to manipulate the device. | Major |
-| Excessive SMB login attempts | A source device was seen performing excessive login attempts to a destination server. This may be a brute force attack. The server may be compromised by a malicious actor. | Critical |
+| Excessive SMB login attempts | A source device was seen performing excessive sign-in attempts to a destination server. This may be a brute force attack. The server may be compromised by a malicious actor. | Critical |
| ICMP Flooding | An abnormal quantity of packets was detected in the network. This could indicate an attack, for example, an ARP spoofing or ICMP flooding attack. | Warning |
| * Illegal HTTP Header Content | The source device initiated an invalid request. | Critical |
-| Inactive Communication Channel | A communication channel between two devices was inactive during a period in which activity is usually seen. This might indicate that the program generating this traffic was changed, or the program might be unavailable. It is recommended to review the configuration of installed program and verify it is configured properly. | Warning |
-| Long Duration Address Scan Detected | A source device was detected scanning network devices. This device has not been authorized as a network scanning device. | Critical |
-| Password Guessing Attempt Detected | A source device was seen performing excessive login attempts to a destination server. This may be a brute force attack. The server may be compromised by a malicious actor. | Critical |
-| PLC Scan Detected | A source device was detected scanning network devices. This device has not been authorized as a network scanning device. | Critical |
-| Port Scan Detected | A source device was detected scanning network devices. This device has not been authorized as a network scanning device. | Critical |
+| Inactive Communication Channel | A communication channel between two devices was inactive during a period in which activity is usually seen. This might indicate that the program generating this traffic was changed, or the program might be unavailable. It's recommended to review the configuration of the installed program and verify that it's configured properly. | Warning |
+| Long Duration Address Scan Detected | A source device was detected scanning network devices. This device hasn't been authorized as a network scanning device. | Critical |
+| Password Guessing Attempt Detected | A source device was seen performing excessive sign-in attempts to a destination server. This may be a brute force attack. The server may be compromised by a malicious actor. | Critical |
+| PLC Scan Detected | A source device was detected scanning network devices. This device hasn't been authorized as a network scanning device. | Critical |
+| Port Scan Detected | A source device was detected scanning network devices. This device hasn't been authorized as a network scanning device. | Critical |
| Unexpected message length | The source device sent an abnormal message. This may indicate an attempt to attack the destination device. | Critical |
| Unexpected Traffic for Standard Port | Traffic was detected on a device using a port reserved for another protocol. | Major |
Protocol engine alerts describe detected deviations in the packet structure, or
| Firmware Update | A source device sent a command to update firmware on a destination device. Verify that recent programming, configuration and firmware upgrades made to the destination device are valid. | Warning |
| Function Code Not Supported by Outstation | The destination device received an invalid request. | Major |
| Illegal BACNet message | The source device initiated an invalid request. | Major |
-| Illegal Connection Attempt on Port 0 | A source device attempted to connect to destination device on port number zero (0). For TCP, port 0 is reserved and cannot be used. For UDP, the port is optional and a value of 0 means no port. There is usually no service on a system that listens on port 0. This event may indicate an attempt to attack the destination device, or indicate that an application was programmed incorrectly. | Minor |
+| Illegal Connection Attempt on Port 0 | A source device attempted to connect to destination device on port number zero (0). For TCP, port 0 is reserved and can't be used. For UDP, the port is optional and a value of 0 means no port. There's usually no service on a system that listens on port 0. This event may indicate an attempt to attack the destination device, or indicate that an application was programmed incorrectly. | Minor |
| Illegal DNP3 Operation | The source device initiated an invalid request. | Major |
| Illegal MODBUS Operation (Exception Raised by Master) | The source device initiated an invalid request. | Major |
| Illegal MODBUS Operation (Function Code Zero) | The source device initiated an invalid request. | Major |
| Initiation of an Obsolete Function Code (Initialize Data) | The source device initiated an invalid request. | Minor |
| Initiation of an Obsolete Function Code (Save Config) | The source device initiated an invalid request. | Minor |
| Master Requested an Application Layer Confirmation | The source device initiated an invalid request. | Warning |
-| Modbus Exception | A source device (slave) returned an exception to a destination device (master). | Major |
+| Modbus Exception | A source device (secondary) returned an exception to a destination device (primary). | Major |
| Slave Device Received Illegal ASDU Type | The destination device received an invalid request. | Major |
| Slave Device Received Illegal Command Cause of Transmission | The destination device received an invalid request. | Major |
| Slave Device Received Illegal Common Address | The destination device received an invalid request. | Major |
| Unknown Object Sent to Outstation | The destination device received an invalid request. | Major |
| Usage of a Reserved Function Code | The source device initiated an invalid request. | Major |
| Usage of Improper Formatting by Outstation | The source device initiated an invalid request. | Warning |
-| Usage of Reserved Status Flags (IIN) | A DNP3 source device (outstation) used the reserved Internal Indicator 2.6. It is recommended to check the device's configuration. | Warning |
+| Usage of Reserved Status Flags (IIN) | A DNP3 source device (outstation) used the reserved Internal Indicator 2.6. It's recommended to check the device's configuration. | Warning |
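The "Illegal Connection Attempt on Port 0" row in the table above notes that TCP port 0 is reserved and that a port value of 0 conventionally means "no port." A minimal Python sketch of that convention, using the standard `socket` module (the loopback address is illustrative):

```python
import socket

# Port 0 is never a real service port: asking the OS to bind to
# port 0 is a request for *any* free ephemeral port, which is why
# no legitimate client should ever try to connect *to* port 0.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(("127.0.0.1", 0))            # 0 = "pick a free port for me"
assigned_port = sock.getsockname()[1]  # the port the OS actually chose
sock.close()
print(assigned_port)                   # an ephemeral port, never 0
```

Because the OS always substitutes a real ephemeral port, traffic genuinely addressed to port 0 on the wire is almost certainly a misconfigured application or a probe, which is why the engine raises this alert.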
## Malware engine alerts
Malware engine alerts describe detected malicious network activity.
| Connection Attempt to Known Malicious IP | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major |
| Invalid SMB Message (DoublePulsar Backdoor Implant) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical |
| Malicious Domain Name Request | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major |
-| Malware Test File Detected - EICAR AV Success | An EICAR AV test file was detected in traffic between two devices. The file is not malware. It is used to confirm that the antivirus software is installed correctly; demonstrate what happens when a virus is found, and check internal procedures and reactions when a virus is found. Antivirus software should detect EICAR as if it were a real virus. | Major |
+| Malware Test File Detected - EICAR AV Success | An EICAR AV test file was detected in traffic between two devices. The file isn't malware. It's used to confirm that the antivirus software is installed correctly; demonstrate what happens when a virus is found, and check internal procedures and reactions when a virus is found. Antivirus software should detect EICAR as if it were a real virus. | Major |
| Suspicion of Conficker Malware | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major |
-| Suspicion of Denial Of Service Attack | A source device attempted to initiate an excessive number of new connections to a destination device. This may be a Denial Of Service (DOS) attack against the destination device, and might interrupt device functionality, impact performance and service availability, or cause unrecoverable errors. | Critical |
+| Suspicion of Denial Of Service Attack | A source device attempted to initiate an excessive number of new connections to a destination device. This may be a denial-of-service (DoS) attack against the destination device, and might interrupt device functionality, affect performance and service availability, or cause unrecoverable errors. | Critical |
| Suspicion of Malicious Activity | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major |
| Suspicion of Malicious Activity (BlackEnergy) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical |
| Suspicion of Malicious Activity (DarkComet) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical |
| Suspicion of Remote Windows Service Management | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major |
| Suspicious Executable File Detected on Endpoint | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major |
| Suspicious Traffic Detected | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical |
+| Backup Activity with Antivirus Signatures | Traffic detected between the source device and the destination backup server triggered this alert. The traffic includes backup of antivirus software that might contain malware signatures. This is most likely legitimate backup activity. | Warning |
## Operational engine alerts
Operational engine alerts describe detected operational incidents, or malfunctioning entities.
| Continuous Event Buffer Overflow at Outstation | A buffer overflow event was detected on a source device. The event may cause data corruption, program crashes, or execution of malicious code. | Major |
| Controller Reset | A source device sent a reset command to a destination controller. The controller stopped operating temporarily and started again automatically. | Warning |
| Controller Stop | The source device sent a stop command to a destination controller. The controller will stop operating until a start command is sent. | Warning |
-| Device Failed to Receive a Dynamic IP Address | The source device is configured to receive a dynamic IP address from a DHCP server but did not receive an address. This indicates a configuration error on the device, or an operational error in the DHCP server. It is recommended to notify the network administrator of the incident | Major |
-| Device is Suspected to be Disconnected (Unresponsive) | A source device did not respond to a command sent to it. It may have been disconnected when the command was sent. | Major |
+| Device Failed to Receive a Dynamic IP Address | The source device is configured to receive a dynamic IP address from a DHCP server but didn't receive an address. This indicates a configuration error on the device, or an operational error in the DHCP server. It's recommended to notify the network administrator of the incident. | Major |
+| Device is Suspected to be Disconnected (Unresponsive) | A source device didn't respond to a command sent to it. It may have been disconnected when the command was sent. | Major |
| EtherNet/IP CIP Service Request Failed | A server returned an error code. This indicates a server error or an invalid request by a client. | Major |
| EtherNet/IP Encapsulation Protocol Command Failed | A server returned an error code. This indicates a server error or an invalid request by a client. | Major |
| Event Buffer Overflow in Outstation | A buffer overflow event was detected on a source device. The event may cause data corruption, program crashes, or execution of malicious code. | Major |
-| Expected Backup Operation Did Not Occur | Expected backup/file transfer activity did not occur between two devices. This may indicate errors in the backup / file transfer process. | Major |
+| Expected Backup Operation Did Not Occur | Expected backup/file transfer activity didn't occur between two devices. This may indicate errors in the backup / file transfer process. | Major |
| GE SRTP Command Failure | A server returned an error code. This indicates a server error or an invalid request by a client. | Major |
| GE SRTP Stop PLC Command was Sent | The source device sent a stop command to a destination controller. The controller will stop operating until a start command is sent. | Warning |
| GOOSE Control Block Requires Further Configuration | A source device sent a GOOSE message indicating that the device needs commissioning. This means the GOOSE control block requires further configuration and GOOSE messages are partially or completely non-operational. | Major |
| GOOSE Dataset Configuration was Changed | A message (identified by protocol ID) dataset was changed on a source device. This means the device will report a different dataset for this message. | Warning |
| Honeywell Controller Unexpected Status | A Honeywell Controller sent an unexpected diagnostic message indicating a status change. | Warning |
| * HTTP Client Error | The source device initiated an invalid request. | Warning |
-| Illegal IP Address | System detected traffic between a source device and IP address which is an invalid address. This may indicate wrong configuration or an attempt to generate illegal traffic. | Minor |
-| Master-Slave Authentication Error | The authentication process between a DNP3 source device (master) and a destination device (outstation) failed. | Minor |
+| Illegal IP Address | System detected traffic between a source device and an IP address that is an invalid address. This may indicate wrong configuration or an attempt to generate illegal traffic. | Minor |
+| Master-Slave Authentication Error | The authentication process between a DNP3 source device (primary) and a destination device (outstation) failed. | Minor |
| MMS Service Request Failed | A server returned an error code. This indicates a server error or an invalid request by a client. | Major |
| No Traffic Detected on Sensor Interface | A sensor stopped detecting network traffic on a network interface. | Critical |
| OPC UA Server Raised an Event That Requires User's Attention | An OPC UA server sent an event notification to a client. This type of event requires user attention. | Major |
| Sampled Values Message Dataset Configuration was Changed | A message (identified by protocol ID) dataset was changed on a source device. This means the device will report a different dataset for this message. | Warning |
| Slave Device Unrecoverable Failure | An unrecoverable condition error was detected on a source device. This kind of error usually indicates a hardware failure or failure to perform a specific command. | Major |
| Suspicion of Hardware Problems in Outstation | An unrecoverable condition error was detected on a source device. This kind of error usually indicates a hardware failure or failure to perform a specific command. | Major |
-| Suspicion of Unresponsive MODBUS Device | A source device did not respond to a command sent to it. It may have been disconnected when the command was sent. | Minor |
+| Suspicion of Unresponsive MODBUS Device | A source device didn't respond to a command sent to it. It may have been disconnected when the command was sent. | Minor |
| Traffic Detected on Sensor Interface | A sensor resumed detecting network traffic on a network interface. | Warning |
-\* The alert is disabled by default, but can be enabled again. To enable the alert, navigate to the Support page, find the alert and select **Enable**.You need administrative level permissions to access the Support page.
+\* The alert is disabled by default, but can be enabled again. To enable the alert, navigate to the Support page, find the alert and select **Enable**. You need administrative level permissions to access the Support page.
## Next steps
defender-for-iot Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/architecture.md
Title: What is agentless solution architecture
Description: Learn about Microsoft Defender for IoT agentless architecture and information flow.
Previously updated: 11/09/2021
Last updated: 02/06/2022
# Microsoft Defender for IoT architecture
Tightly integrated with your SOC workflows and run books, it enables easy priori
- Control all sensors – configure and monitor all sensors from a single location.
- :::image type="content" source="media/updates/alerts-and-site-management-v2.png" alt-text="Manage all of your alerts and information.":::
+ :::image type="content" source="media/architecture/initial-dashboard.png" alt-text="Screen shot of dashboard." lightbox="media/architecture/initial-dashboard.png":::
### Azure portal
defender-for-iot Concept Supported Protocols https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/concept-supported-protocols.md
# Support for IoT, OT, ICS, and SCADA protocols
-Microsoft Defender for IoT provides an open and interoperable Operation Technology (OT) cybersecurity platform. Defender for IoT is deployed in many different locations and reduces IoT, IT, and ICS risk with deployments in demanding, and complex OT environments across all industry verticals and geographies.
+Microsoft Defender for IoT provides an open and interoperable Operation Technology (OT) cybersecurity platform. Defender for IoT reduces IoT, IT, and ICS risk with deployments in demanding and complex OT environments across all industry verticals and geographies.
## Supported protocols
-Microsoft Defender for IoT supports a broad range of protocols across a diverse enterprise, and includes industrial automation equipment across all industrial sectors, enterprise networks, and building management system (BMS) environments. For custom or proprietary protocols, Microsoft offers an SDK that makes it easy to develop, test, and deploy custom protocol dissectors as plugins. The SDK does all this without divulging proprietary information, such as how the protocols are designed, or by sharing PCAPs that may contain sensitive information. Supported protocols are listed below.
+Defender for IoT supports a broad range of protocols across a diverse enterprise. Supported protocols include industrial automation equipment across all industrial sectors, enterprise networks, and building management system (BMS) environments.
+
+For custom or proprietary protocols, Microsoft offers an SDK that makes it easy to develop, test, and deploy custom protocol dissectors as plugins. The SDK does all this without divulging proprietary information, such as how the protocols are designed, or by sharing PCAPs that may contain sensitive information. Supported protocols are listed below.
### Supported protocols (passive monitoring)
We invite you to join our community here: <horizon-community@microsoft.com>
## Next steps
-[Customize alert rules](how-to-accelerate-alert-incident-response.md#customize-alert-rules).
+[Customize alert rules](how-to-accelerate-alert-incident-response.md#customize-alert-rules)
+[About forwarded alert information](how-to-forward-alert-information-to-partners.md#about-forwarded-alert-information)
defender-for-iot Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/getting-started.md
We recommend that you group multiple sensors monitoring the same networks in one
For more information, see [Connect sensors to the on-premises management console](how-to-activate-and-set-up-your-on-premises-management-console.md#connect-sensors-to-the-on-premises-management-console).
-## Populate Microsoft Sentinel with alert information (optional)
-
-Send alert information to Microsoft Sentinel by configuring Microsoft Sentinel. See [Connect your data from Defender for IoT to Microsoft Sentinel](how-to-configure-with-sentinel.md).
## Next steps ##
defender-for-iot How To Activate And Set Up Your Sensor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-activate-and-set-up-your-sensor.md
You can access console tools from the side menu. Tools help you:
| Tools | Description |
|--|--|
-| Event timeline | View a timeline with information about alerts, network events, and user operations. For more information, see [Event timeline](how-to-track-sensor-activity.md#event-timeline).|
-| Data mining | Generate comprehensive and granular information about your network's devices at various layers. For more information, see [Sensor data mining queries](how-to-create-data-mining-queries.md#sensor-data-mining-queries).|
-| Trends and Statistics | View trends and statistics about an extensive range of network traffic and activity. As a small example, display charts and graphs showing top traffic by port, connectivity drops by hours, S7 traffic by control function, number of devices per VLAN, SRTP errors by day, or Modbus traffic by function. For more information, see [Sensor trends and statistics reports](how-to-create-trends-and-statistics-reports.md#sensor-trends-and-statistics-reports).
+| Event timeline | View a timeline with information about alerts, network events, and user operations. For more information, see [Track sensor activity](how-to-track-sensor-activity.md).|
+| Data mining | Generate comprehensive and granular information about your network's devices at various layers. For more information, see [Sensor data mining queries](how-to-create-data-mining-queries.md).|
+| Trends and Statistics | View trends and statistics about an extensive range of network traffic and activity. As a small example, display charts and graphs showing top traffic by port, connectivity drops by hours, S7 traffic by control function, number of devices per VLAN, SRTP errors by day, or Modbus traffic by function. For more information, see [Sensor trends and statistics reports](how-to-create-trends-and-statistics-reports.md).
| Risk Assessment | Proactively address vulnerabilities, identify risks such as missing patches or unauthorized applications. Detect changes to device configurations, controller logic, and firmware. Prioritize fixes based on risk scoring and automated threat modeling. For more information, see [Risk assessment reporting](how-to-create-risk-assessment-reports.md#risk-assessment-reporting).|
| Attack Vector | Display a graphical representation of a vulnerability chain of exploitable devices. These vulnerabilities can give an attacker access to key network devices. The Attack Vector Simulator calculates attack vectors in real time and analyzes all attack vectors for a specific target. For more information, see [Attack vector reporting](how-to-create-attack-vector-reports.md#attack-vector-reporting).|
defender-for-iot How To Analyze Programming Details Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-analyze-programming-details-changes.md
The Programming Analysis window displays both authorized and unauthorized progra
Access the Programming Analysis window from the:
-- [Event Timeline](#event-timeline)
+- [Event Timeline](how-to-track-sensor-activity.md)
- [Unauthorized Programming Alerts](#unauthorized-programming-alerts)
defender-for-iot How To Configure Windows Endpoint Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-configure-windows-endpoint-monitoring.md
Title: Configure Windows endpoint monitoring
-description: Enrich data resolved on devices by working with Windows endpoint monitoring (WMI).
Previously updated : 11/09/2021
+ Title: Configure Windows endpoint monitoring for Defender for IoT devices
+description: Set up Windows endpoint monitoring (WMI) for Windows information on devices.
Last updated : 02/01/2022

# Configure Windows endpoint monitoring (WMI)
-With the Windows endpoint monitoring capability, you can configure Microsoft Defender for IoT to selectively probe Windows systems. This provides you with more focused and accurate information about your devices, such as service pack levels.
+Use WMI to scan Windows systems for focused and accurate device information, such as service pack levels. You can scan specific IP address ranges and hosts. You can perform scheduled or manual scans. When a scan is finished, you can view the results in a CSV log file. The log contains all the IP addresses that were probed, and success and failure information for each address. There's also an error code, which is a free string derived from the exception. Note that:
-You can configure probing with specific ranges and hosts, and configure it to be performed only as often as desired. You accomplish selective probing by using the Windows Management Instrumentation (WMI), which is Microsoft's standard scripting language for managing Windows systems.
+- You can run only one scan at a time.
+- You get the best results with users who have domain or local administrator privileges.
+- Only the scan of the last log is kept in the system.
-> [!NOTE]
-> - You can run only one scan at a time.
-> - You get the best results with users who have domain or local administrator privileges.
-> - Before you begin the WMI configuration, configure a firewall rule that opens outgoing traffic from the sensor to the scanned subnet by using UDP port 135 and all TCP ports above 1024.
-When the probe is finished, a log file with all the probing attempts is available from the option to export a log. The log contains all the IP addresses that were probed. For each IP address, the log shows success and failure information. There's also an error code, which is a free string derived from the exception. The scan of the last log only is kept in the system.
+## Set up a firewall rule
-You can perform scheduled scans or manual scans. When a scan is finished, you can view the results in a CSV file.
+Before you begin scanning, create a firewall rule that allows outgoing traffic from the sensor to the scanned subnet by using UDP port 135 and all TCP ports above 1024.
-**Prerequisites**
-Configure a firewall rule that opens outgoing traffic from the sensor to the scanned subnet by using UDP port 135 and all TCP ports above 1024.
+## Set up scanning
-## Perform an automatic scan
+1. In Defender for IoT, select **System Settings**.
+1. Under **Network monitoring**, select **Windows Endpoint Monitoring (WMI)**.
+1. In the **Windows Endpoint Monitoring (WMI)** dialog, select **Add ranges**. You can also import and export ranges.
+1. Specify the IP address range you want to scan. You can add multiple ranges.
+1. Add your user name and password, and ensure that **Enable** is toggled on.
+1. In **Scan will run**, specify when you want the automatic scan to run. You can set an hourly interval between scans, or a specific scan time.
+1. If you want to run a scan immediately with the configured settings, select **Manually scan**.
+1. Select **Save** to save the automatic scan settings.
+1. When the scan is finished, select the option to view or export the scan results.
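Once downloaded, the scan log can be post-processed. The following is a minimal sketch of summarizing success and failure per probed address; the column names in the sample are hypothetical, so check the header of your actual exported CSV:

```python
import csv
import io

# Hypothetical export: the log lists each probed IP address with
# success/failure information and a free-string error code.
# Real exports may use different column names.
sample_log = """ip,status,error
10.0.0.4,success,
10.0.0.5,failure,Access is denied
10.0.0.6,success,
"""

def summarize(log_text: str) -> dict:
    """Count successes and failures in a WMI scan log."""
    counts = {"success": 0, "failure": 0}
    for row in csv.DictReader(io.StringIO(log_text)):
        counts[row["status"]] += 1
    return counts

print(summarize(sample_log))  # {'success': 2, 'failure': 1}
```

A summary like this can help spot ranges where most probes fail, which often points at missing administrator privileges or a blocked firewall rule.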
-This section describes how to perform an automatic scan
-
-**To perform an automatic scan:**
-
-1. On the side menu, select **System Settings**.
-
-2. Select **Windows Endpoint Monitoring** :::image type="icon" source="media/how-to-control-what-traffic-is-monitored/windows-endpoint-monitoring-icon-v2.png" border="false":::.
-
- :::image type="content" source="media/how-to-control-what-traffic-is-monitored/windows-endpoint-monitoring-screen-v2.png" alt-text="Screenshot that shows the selection of Windows Endpoint Monitoring.":::
-
-3. On the **Scan Schedule** pane, configure options as follows:
-
- - **By fixed intervals (in hours)**: Set the scan schedule according to intervals in hours.
-
- - **By specific times**: Set the scan schedule according to specific times and select **Save Scan**.
-
- :::image type="content" source="media/how-to-control-what-traffic-is-monitored/schedule-a-scan-screen-v2.png" alt-text="Screenshot that shows the Save Scan button.":::
-
-4. To define the scan range, select **Set scan ranges**.
-
-5. Set the IP address range and add your user and password.
-
- :::image type="content" source="media/how-to-control-what-traffic-is-monitored/edit-scan-range-screen.png" alt-text="Screenshot that shows adding a user and password.":::
-
-6. To exclude an IP range from a scan, select **Disable** next to the range.
-
-7. To remove a range, select :::image type="icon" source="media/how-to-control-what-traffic-is-monitored/remove-scan-icon.png" border="false"::: next to the range.
-
-8. Select **Save**. The **Edit Scan Ranges Configuration** dialog box closes, and the number of ranges appears in the **Scan Ranges** pane.
-
-## Perform a manual scan
-
-**To perform a manual scan:**
-
-1. On the side menu, select **System Settings**.
-
-2. Select **Windows Endpoint Monitoring** :::image type="icon" source="media/how-to-control-what-traffic-is-monitored/windows-endpoint-monitoring-icon-v2.png" border="false":::.
-
- :::image type="content" source="media/how-to-control-what-traffic-is-monitored/windows-endpoint-monitoring-screen-v2.png" alt-text="Screenshot that shows the Windows Endpoint Monitoring setup screen.":::
-
-3. In the **Actions** pane, select **Start scan**. A status bar appears on the **Actions** pane and shows the progress of the scanning process.
-
- :::image type="content" source="media/how-to-control-what-traffic-is-monitored/started-scan-screen-v2.png" alt-text="Screenshot that shows the Start scan button.":::
-
-## View scan results
-
-**To view scan results:**
-
-1. When the scan is finished, on the **Actions** pane, select **View Scan Results**. The CSV file with the scan results is downloaded to your computer.
defender-for-iot How To Control What Traffic Is Monitored https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-control-what-traffic-is-monitored.md
Title: Control what traffic is monitored
description: Sensors automatically perform deep packet detection for IT and OT traffic and resolve information about network devices, such as device attributes and network behavior. Several tools are available to control the type of traffic that each sensor detects.
Previously updated : 11/09/2021
Last updated : 02/03/2022
After the learning period is complete and the Learning mode is disabled, the sen
When Smart IT Learning is enabled, the sensor tracks network traffic that generates nondeterministic IT behavior based on specific alert scenarios.
-The sensor monitors this traffic for seven days. If it detects the same nondeterministic IT traffic within the seven days, it continues to monitor the traffic for another seven days. When the traffic is not detected for a full seven days, Smart IT Learning is disabled for that scenario. New traffic detected for that scenario will only then generate alerts and notifications.
+The sensor monitors this traffic for seven days. If it detects the same nondeterministic IT traffic within the seven days, it continues to monitor the traffic for another seven days. When the traffic isn't detected for a full seven days, Smart IT Learning is disabled for that scenario. New traffic detected for that scenario will only then generate alerts and notifications.
Working with Smart IT Learning helps you reduce the number of unnecessary alerts and notifications caused by noisy IT scenarios.
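The seven-day window described above can be sketched as a small state tracker. This is an illustration of the documented behavior only, not the sensor's actual implementation:

```python
from datetime import datetime, timedelta

WINDOW = timedelta(days=7)

class SmartItLearning:
    """Tracks one nondeterministic IT scenario, illustrating the
    seven-day Smart IT Learning window (a sketch, not the sensor's
    actual implementation)."""

    def __init__(self, start: datetime):
        self.window_end = start + WINDOW  # monitor for seven days

    def observe(self, when: datetime) -> bool:
        """Record traffic for this scenario; return True while the
        scenario is still being learned (no alert raised)."""
        if when <= self.window_end:
            # Same traffic seen within the window: keep monitoring
            # for another seven days.
            self.window_end = when + WINDOW
            return True
        # No traffic for a full seven days: learning is disabled for
        # this scenario, so new traffic now generates alerts.
        return False

t0 = datetime(2022, 2, 1)
scenario = SmartItLearning(t0)
assert scenario.observe(t0 + timedelta(days=5))       # still learning
assert scenario.observe(t0 + timedelta(days=11))      # window was extended
assert not scenario.observe(t0 + timedelta(days=30))  # learning disabled
```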
If your sensor is controlled by the on-premises management console, you can't di
The learning capabilities (Learning and Smart IT Learning) are enabled by default.
-To enable or disable learning:
+**To enable or disable learning:**
-- Select **System Settings** and toggle the **Learning** and **Smart IT Learning** options.
+1. Select **System settings** > **Network monitoring** > **Detection Engines and Network Modelling**.
+1. Enable or disable the **Learning** and **Smart IT Learning** options.
## Configure subnets
In some cases, such as environments that use public ranges as internal ranges, y
- No alerts will be sent about unauthorized internet activity, which reduces notifications and alerts received on external addresses.
-To configure subnets:
+**To configure subnets:**
1. On the side menu, select **System Settings**.
-2. In the **System Setting** window, select **Subnets**.
+1. Select **Basic**, and then select **Subnets**.
- :::image type="content" source="media/how-to-control-what-traffic-is-monitored/edit-subnets-configuration-screen.png" alt-text="Screenshot that shows the subnet configuration screen.":::
+3. To add subnets automatically when new devices are discovered, keep the **Auto Subnets Learning** checkbox selected.
-3. To add subnets automatically when new devices are discovered, keep **Auto Subnets Learning** selected.
+4. To resolve all subnets as internal subnets, select **Resolve all internet traffic as internal/private**. Public IPs will be treated as private local addresses. No alerts will be sent about unauthorized internet activity.
-4. To resolve all subnets as internal subnets, select **Don't Detect Internet Activity**.
-
-5. Select **Add network** and define the following parameters for each subnet:
+5. Select **Add subnet** and define the following parameters for each subnet:
   - The subnet IP address.
   - The subnet mask address.
To configure subnets:
7. To present the subnet separately when you're arranging the map according to the Purdue level, select **Segregated**.
-8. To delete a subnet, select :::image type="icon" source="media/how-to-control-what-traffic-is-monitored/delete-icon.png" border="false":::.
-
9. To delete all subnets, select **Clear All**.

10. To export configured subnets, select **Export**. The subnet table is downloaded to your workstation.

11. Select **Save**.
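The effect of resolving public ranges as internal can be illustrated with Python's standard `ipaddress` module. The subnet values below are hypothetical examples, not defaults:

```python
import ipaddress

# Hypothetical configured subnets, including a public range that this
# environment uses internally (the scenario described above).
internal_subnets = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("198.51.100.0/24"),  # public range used internally
]

def is_internal(addr: str) -> bool:
    """Return True if the address falls in a configured internal subnet,
    so no unauthorized-internet-activity alert would apply to it."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in internal_subnets)

assert is_internal("10.1.2.3")
assert is_internal("198.51.100.7")    # public IP treated as internal
assert not is_internal("203.0.113.9")
```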
-### Importing information
+### Importing information
-To import subnet information, select **Import** and select a CSV file to import. The subnet information is updated with the information that you imported. If you important an empty field, you'll lose your data.
+To import subnet information, select **Import** and select a CSV file to import. The subnet information is updated with the information that you imported. If you import an empty field, you'll lose your data.
## Detection engines
Self-learning analytics engines eliminate the need for updating signatures or de
When an engine detects a deviation, an alert is triggered. You can view and manage alerts from the alert screen or from a partner system.
- The name of the engine that triggered the alert appears under the alert title.

### Protocol violation engine
The name of the engine that triggered the alert appears under the alert title.
A protocol violation occurs when the packet structure or field values don't comply with the protocol specification. Example scenario:
-*"Illegal MODBUS Operation (Function Code Zero)"* alert. This alert indicates that a primary device sent a request with function code 0 to a secondary device. This action is not allowed according to the protocol specification, and the secondary device might not handle the input correctly.
+*"Illegal MODBUS Operation (Function Code Zero)"* alert. This alert indicates that a primary device sent a request with function code 0 to a secondary device. This action isn't allowed according to the protocol specification, and the secondary device might not handle the input correctly.
### Policy violation engine

A policy violation occurs with a deviation from baseline behavior defined in learned or configured settings. Example scenario:
-*"Unauthorized HTTP User Agent"* alert. This alert indicates that an application that was not learned or approved by policy is used as an HTTP client on a device. This might be a new web browser or application on that device.
+*"Unauthorized HTTP User Agent"* alert. This alert indicates that an application that wasn't learned or approved by policy is used as an HTTP client on a device. This might be a new web browser or application on that device.
### Malware engine
Example scenario:
The Operational engine detects operational incidents or malfunctioning entities. Example scenario:
-*"Device is Suspected to be Disconnected (Unresponsive)"* alert. This alert is raised when a device is not responding to any kind of request for a predefined period. This alert might indicate a device shutdown, disconnection, or malfunction.
+*"Device is Suspected to be Disconnected (Unresponsive)"* alert. This alert is raised when a device isn't responding to any kind of request for a predefined period. This alert might indicate a device shutdown, disconnection, or malfunction.
### Enable and disable engines

When you disable a policy engine, information that the engine generates won't be available to the sensor. For example, if you disable the Anomaly engine, you won't receive alerts on network anomalies. If you created a forwarding rule, anomalies that the engine learns won't be sent. To enable or disable a policy engine, select **Enabled** or **Disabled** for the specific engine.
-The overall score is displayed in the lower-right corner of the **System Settings** screen. The score indicates the percentage of available protection enabled through the threat protection engines. Each engine is 20 percent of available protection.
- ## Configure DHCP address ranges
The sensor console presents the most current IP address associated with the devi
- The **Device Properties** window indicates if the device was defined as a DHCP device.
-To set a DHCP address range:
-
-1. On the side menu, select **DHCP Ranges** from the **System Settings** window.
+**To set a DHCP address range:**
- :::image type="content" source="media/how-to-control-what-traffic-is-monitored/dhcp-address-range-screen.png" alt-text="Screenshot that shows the selection of DHCP Ranges.":::
+1. On the side menu, select **System Settings** > **Network monitoring** > **DHCP Ranges**.
2. Define a new range by setting **From** and **To** values.
The host name appears in the device inventory, and device map, and in reports.
You can schedule reverse lookup resolution schedules for specific hourly intervals, such as every 12 hours. Or you can schedule a specific time.
-To define DNS servers:
+**To define DNS servers:**
-1. Select **System Settings** and then select **DNS Settings**.
+1. Select **System settings** > **Network monitoring**, then select **Reverse DNS Lookup**.
2. Select **Add DNS Server**.
- :::image type="content" source="media/how-to-enrich-asset-information/dns-reverse-lookup-configuration-screen.png" alt-text="Screenshot that shows the selection of Add DNS Server.":::
-
-3. In the **Schedule reverse DNS lookup** field, choose either:
+3. In the **Schedule Reverse lookup** field, choose either:
   - Intervals (per hour).
   - A specific time. Use European formatting. For example, use **14:30** and not **2:30 PM**.
-4. In the **DNS Server Address** field, enter the DNS IP address.
+4. In the **DNS server address** field, enter the DNS IP address.
-5. In the **DNS Server Port** field, enter the DNS port.
+5. In the **DNS server port** field, enter the DNS port.
-6. Resolve the network IP addresses to device FQDNs. In the **Number of Labels** field, add the number of domain labels to display. Up to 30 characters are displayed from left to right.
+6. Resolve the network IP addresses to device FQDNs. In the **Number of labels** field, add the number of domain labels to display. Up to 30 characters are displayed from left to right.
7. In the **Subnets** field, enter the subnets that you want the DNS server to query.

8. Select the **Enable** toggle if you want to initiate the reverse lookup.
+1. Select **Save**.
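The **Number of labels** field trims the resolved FQDN from left to right. A sketch of that behavior; the FQDN below is a hypothetical example, where a real value would come from the DNS server's reverse lookup (for example, via `socket.gethostbyaddr`):

```python
def truncate_labels(fqdn: str, number_of_labels: int) -> str:
    """Keep only the leftmost domain labels of a resolved FQDN,
    as the Number of labels field does. Up to 30 characters are
    displayed from left to right."""
    labels = fqdn.split(".")[:number_of_labels]
    return ".".join(labels)[:30]

# Hypothetical reverse-lookup result; a live query would look like:
#   fqdn, _, _ = socket.gethostbyaddr("10.0.0.4")
fqdn = "plc01.floor2.contoso.com"
assert truncate_labels(fqdn, 2) == "plc01.floor2"
assert truncate_labels(fqdn, 1) == "plc01"
```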
+
### Test the DNS configuration
By using a test device, verify that the settings you defined work properly:
3. Enter an address in **Lookup Address** for the **DNS reverse lookup test for server** dialog box.
- :::image type="content" source="media/how-to-enrich-asset-information/dns-reverse-lookup-test-screen.png" alt-text="Screenshot that shows the Lookup Address area.":::
4. Select **Test**.

## Configure Windows Endpoint Monitoring
You can perform scheduled scans or manual scans. When a scan is finished, you ca
Configure a firewall rule that opens outgoing traffic from the sensor to the scanned subnet by using UDP port 135 and all TCP ports above 1024.
-To configure an automatic scan:
-
-1. On the side menu, select **System Settings**.
+**To configure an automatic scan:**
-2. Select **Windows Endpoint Monitoring** :::image type="icon" source="media/how-to-control-what-traffic-is-monitored/windows-endpoint-monitoring-icon-v2.png" border="false":::.
+1. Select **System settings** > **Network monitoring**, then select **Windows Endpoint Monitoring (WMI)**.
- :::image type="content" source="media/how-to-control-what-traffic-is-monitored/windows-endpoint-monitoring-screen-v2.png" alt-text="Screenshot that shows the selection of Windows Endpoint Monitoring.":::
+1. In the **Edit scan ranges configuration** section, enter the ranges you want to scan and add your username and password.
-3. On the **Scan Schedule** pane, configure options as follows:
+3. Define how you want to run the scan:
   - **By fixed intervals (in hours)**: Set the scan schedule according to intervals in hours.
   - **By specific times**: Set the scan schedule according to specific times and select **Save Scan**.
- :::image type="content" source="media/how-to-control-what-traffic-is-monitored/schedule-a-scan-screen-v2.png" alt-text="Screenshot that shows the Save Scan button.":::
-
-4. To define the scan range, select **Set scan ranges**.
-
-5. Set the IP address range and add your user and password.
-
- :::image type="content" source="media/how-to-control-what-traffic-is-monitored/edit-scan-range-screen.png" alt-text="Screenshot that shows adding a user and password.":::
-
-6. To exclude an IP range from a scan, select **Disable** next to the range.
-
-7. To remove a range, select :::image type="icon" source="media/how-to-control-what-traffic-is-monitored/remove-scan-icon.png" border="false"::: next to the range.
-
-8. Select **Save**. The **Edit Scan Ranges Configuration** dialog box closes, and the number of ranges appears in the **Scan Ranges** pane.
-
-To perform a manual scan:
-
-1. On the side menu, select **System Settings**.
-
-2. Select **Windows Endpoint Monitoring** :::image type="icon" source="media/how-to-control-what-traffic-is-monitored/windows-endpoint-monitoring-icon-v2.png" border="false":::.
+8. Select **Save**. The dialog box closes.
- :::image type="content" source="media/how-to-control-what-traffic-is-monitored/windows-endpoint-monitoring-screen-v2.png" alt-text="Screenshot that shows the Windows Endpoint Monitoring setup screen.":::
+**To perform a manual scan:**
-3. In the **Actions** pane, select **Start scan**. A status bar appears on the **Actions** pane and shows the progress of the scanning process.
+1. Define the scan ranges.
- :::image type="content" source="media/how-to-control-what-traffic-is-monitored/started-scan-screen-v2.png" alt-text="Screenshot that shows the Start scan button.":::
+3. Select **Save** and **Apply changes**, and then select **Manually scan**.
-To view scan results:
+**To view scan results:**
-1. When the scan is finished, on the **Actions** pane, select **View Scan Results**. The CSV file with the scan results is downloaded to your computer.
+1. When the scan is finished, select **View Scan Results**. A .csv file with the scan results is downloaded to your computer.
## See also
defender-for-iot How To Create And Manage Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-create-and-manage-users.md
This article describes how to create and manage users of sensors and the on-prem
Features are also available to track user activity and enable Active Directory sign in.
-By default, each sensor and on-premises management console is installed with a *cyberx and support* user. These users have access to advanced tools for troubleshooting and setup. Administrator users should sign in with these user credentials, create an admin user, and then create extra users for Security Analysts and Read-only users.
+By default, each sensor and on-premises management console is installed with the *cyberx*, *support*, and *cyberx_host* users. These users have access to advanced tools for troubleshooting and setup. Administrator users should sign in with these user credentials, create an admin user, and then create extra users for security analysts and read-only users.
## Role-based permissions The following user roles are available:
defender-for-iot How To Create Data Mining Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-create-data-mining-queries.md
Title: Create data mining reports
-description: generate comprehensive and granular information about your network devices at various layers, such as protocols, firmware versions, or programming commands.
Previously updated : 11/09/2021
+ Title: Create data mining queries and reports in Defender for IoT
+description: Learn how to create granular reports about network devices.
Last updated : 02/02/2022
-# Sensor data mining queries
+# Run data mining queries
-## About Sensor data mining queries
-
-Data mining tools let you generate comprehensive and granular information about your network devices at various layers. For example, you can create a query based on:
-
-- Time periods
-- Connections to the internet
-- Ports
-- Protocols
-- Firmware versions
-- Programming commands
-- Inactivity of the device
-
-You can fine-tune the report based on filters. For example, you can query a specific subnet in which firmware was updated.
--
-Various tools are available to manage queries. For example, you can export and save.
-
-> [!NOTE]
-> Administrators and security analysts have access to data-mining options.
-
-### Dynamic updates
-
-Data mining queries that you create are dynamically updated each time you open them. For example:
-- If you create a report for firmware versions on devices on June 1 and open the report again on June 10, this report will be updated with information that's accurate for June 10.
-- If you create a report to detect new devices discovered over the last 30 days on June 1 and open the report on June 30, the results will be displayed for the last 30 days.
-
-### Data mining use cases
-
-You can use queries to handle an extensive range of security needs for various security teams:
+Use data mining queries to get dynamic, granular information about your network devices, including specific time periods, internet connectivity, ports and protocols, firmware versions, programming commands, and device state. You can use data mining queries for:
- **SOC incident response**: Generate a report in real time to help deal with immediate incident response. For example, Data Mining can generate a report for a list of devices that might require patching.
- **Forensics**: Generate a report based on historical data for investigative reports.
-- **IT Network Integrity**: Generate a report that helps improve overall network security. For example, generate a report can be generated that lists devices with weak authentication credentials.
+- **Network security**: Generate a report that helps improve overall network security. For example, generate a report that lists devices with weak authentication credentials.
- **Visibility**: Generate a report that covers all query items to view all baseline parameters of your network.
-- **PLC Security** Improve security by detecting PLCs in unsecure states for example Program and Remote states.
-
-## Data mining storage
+- **PLC security**: Improve security by detecting PLCs in unsecure states, for example, Program and Remote states.
Data mining information is saved and stored continuously, except for when a device is deleted. Data mining results can be exported and stored externally to a secure server. In addition, the sensor performs automatic daily backups to ensure system continuity and preservation of data.
-## Predefined data mining queries
+## Predefined data mining reports
-The following predefined queries are available. These queries are generated in real time.
-
-- **CVEs**: A list of devices detected with known vulnerabilities within the last 24 hours.
-
-- **Excluded CVEs**: A list of all the CVEs that were manually excluded. To achieve more accurate results in VA reports and attack vectors, you can customize the CVE list manually by including and excluding CVEs.
+The following predefined reports are available. These queries are generated in real time.
+- **Programming commands**: Devices that send industrial programming.
+- **Remote access**: Devices that communicate through remote session protocols.
- **Internet activity**: Devices that are connected to the internet.
+- **Excluded CVEs**: A list of all the CVEs that were manually excluded. To achieve more accurate results in VA reports and attack vectors, you can customize the CVE list manually by including and excluding CVEs.
- **Nonactive devices**: Devices that have not communicated for the past seven days.
- **Active devices**: Active network devices within the last 24 hours.
-- **Remote access**: Devices that communicate through remote session protocols.
-- **Programming commands**: Devices that send industrial programming.
-
-These reports are automatically accessible from the **Reports** screen, where RO users and other users can view them. RO users can't access data-mining reports.
--
-## Create a data mining query
-
-To create a data-mining report:
-
-1. Select **Data Mining** from the side menu. Predefined suggested reports appear automatically.
-
- :::image type="content" source="media/how-to-generate-reports/data-mining-screeshot-v2.png" alt-text="Select data mining from side pane.":::
-
-2. Select :::image type="icon" source="media/how-to-generate-reports/plus-icon.png" border="false":::.
-
-3. Select **New Report** and define the report.
-
- :::image type="content" source="media/how-to-generate-reports/create-new-report-screen.png" alt-text="Create a new report by filling out this screen.":::
+Find these reports in **Analyze** > **Data Mining**. Reports are available for users with Administrator and Security Analyst permissions. Read-only users can't access these reports.
- The following parameters are available:
-
- - Provide a report name and description.
-
- - For categories, select either:
-
- - **Categories (All)** to view all report results about all devices in your network.
-
- - **Generic** to choose standard categories.
-
- - Select specific report parameters of interest to you.
-
- - Choose a sort order (**Order by**). Sort results based on activity or category.
-
- - Select **Save to Report Pages** to save the report results as a report that's accessible from the **Report** page. This will enable RO users to run the report that you created.
-
- - Select **Filters (Add)** to add more filters. Wildcard requests are supported.
-
- - Specify a device group (defined in the device map).
-
- - Specify an IP address.
-
- - Specify a port.
-
- - Specify a MAC address.
+## Create a report
+1. In Defender for IoT, select **Data mining**.
+1. Select **Create report**.
+1. In the **Create new report** dialog, specify a report name and optional description.
+1. In **Choose category**, select the type of report you want to create. You can choose all categories, standard (generic) categories, or specific settings.
+1. In **Order by**, order the report by category or activity.
+1. If you want to filter report results, you can specify a time range (in minutes, hours, or days), an IP or MAC address, a port, or a device group (as defined in the device map).
4. Select **Save**. Report results open on the **Data Mining** page.
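+The time-range, address, port, and device-group parameters described above behave like simple predicates applied to detected activity. As an illustration only — the `records` structure and its field names are hypothetical, not part of the product or its API — a minimal Python sketch:
+
+```python
+from datetime import datetime, timedelta
+
+# Hypothetical device-activity records, standing in for data-mining query input.
+records = [
+    {"ip": "10.0.0.4", "port": 502, "seen": datetime(2022, 2, 1)},
+    {"ip": "10.0.0.7", "port": 44818, "seen": datetime(2022, 1, 3)},
+]
+
+def filter_records(records, days=None, ip=None, port=None):
+    """Apply optional time-range, IP, and port filters, report-style."""
+    now = datetime(2022, 2, 2)  # fixed "now" so the example is reproducible
+    out = []
+    for r in records:
+        if days is not None and r["seen"] < now - timedelta(days=days):
+            continue  # outside the requested time range
+        if ip is not None and r["ip"] != ip:
+            continue
+        if port is not None and r["port"] != port:
+            continue
+        out.append(r)
+    return out
+
+recent = filter_records(records, days=7)  # only recently seen devices remain
+```
+
+Because the report query is dynamic, rerunning the same predicate later over newer records naturally yields updated results.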
-### Manage data-mining reports
-
-The following table describes management options for data mining:
-
-| Icon image | Description |
-|--|--|
-| :::image type="icon" source="media/how-to-generate-reports/edit-a-simulation-icon.png" border="false"::: | Edit the report parameters. |
-| :::image type="icon" source="media/how-to-generate-reports/export-as-pdf-icon.png" border="false"::: | Export as PDF. |
-| :::image type="icon" source="media/how-to-generate-reports/csv-export-icon.png" border="false"::: |Export as CSV. |
-| :::image type="icon" source="media/how-to-generate-reports/information-icon.png" border="false"::: | Show additional information such as the date last modified. Use this feature to create a query result snapshot. You might need to do this for further investigation with team leaders or SOC analysts, for example. |
-| :::image type="icon" source="media/how-to-generate-reports/pin-icon.png" border="false"::: | Display on the **Reports** page or hide on the **Reports** page. :::image type="content" source="media/how-to-generate-reports/hide-reports-page.png" alt-text="Hide or reveal your reports."::: |
-| :::image type="icon" source="media/how-to-generate-reports/delete-simulation-icon.png" border="false"::: | Delete the report. |
-
-#### Create customized directories
-
-You can organize the extensive information for data-mining queries by creating directories for categories. For example, you can create directories for protocols or locations.
-
-To create a new directory:
-
-1. Select :::image type="icon" source="media/how-to-generate-reports/plus-icon.png" border="false"::: to add a new directory.
-
-2. Select **New Directory** to display the new directory form.
-
-3. Name the new directory.
-
-4. Drag the required reports into the new directory. At any time, you can drag the report back to the main view.
-
-5. Right-click the new directory to open, edit, or delete it.
-
-#### Create snapshots of report results
-
-You might need to save certain query results for further investigation. For example, you might need to share results with various security teams.
-
-Snapshots are saved within the report results and don't generate dynamic queries.
--
-To create a snapshot:
-
-1. Open the required report.
-
-2. Select the information icon :::image type="icon" source="media/how-to-generate-reports/information-icon.png" border="false":::.
-
-3. Select **Take New**.
-
-4. Enter a name for the snapshot and select **Save**.
--
-#### Customize the CVE list
-
-You can manually customize the CVE list as follows:
-
- - Include/exclude CVEs
-
- - Change the CVE score
-
-To perform manual changes in the CVE report:
-
-1. From the side menu, select **Data Mining**.
-
-2. Select :::image type="icon" source="media/how-to-generate-reports/plus-icon.png" border="false"::: in the upper-left corner of the **Data Mining** window. Then select **New Report**.
-
- :::image type="content" source="media/how-to-generate-reports/create-a-new-report-screen.png" alt-text="Create a new report.":::
-
-3. From the left pane, select one of the following options:
-
- - **Known Vulnerabilities**: Selects both options and presents results in the report's two tables, one with CVEs and the other with excluded CVEs.
-
- - **CVEs**: Select this option to present a list of all the CVEs.
-
- - **Excluded CVEs**: Select this option to presents a list of all the excluded CVEs.
-
-4. Fill in the **Name** and **Description** information and select **Save**. The new report appears in the **Data Mining** window.
-
-5. To exclude CVEs, open the data-mining report for CVEs. The list of all the CVEs appears.
-
- :::image type="content" source="media/how-to-generate-reports/cves.png" alt-text="C V E report.":::
-
-6. To enable selecting items in the list, select :::image type="icon" source="media/how-to-generate-reports/enable-selecting-icon.png" border="false"::: and select the CVEs that you want to customize. The **Operations** bar appears on the bottom.
-
- :::image type="content" source="media/how-to-generate-reports/operations-bar-appears.png" alt-text="Screenshot of the data-mining Operations bar.":::
-
-7. Select the CVEs that you want to exclude, and then select **Delete Records**. The CVEs that you've selected don't appear in the list of CVEs and will appear in the list of excluded CVEs when you generate one.
-
-8. To include the excluded CVEs in the list of CVEs, generate the report for excluded CVEs and delete from that list the items that you want to include back in the list of CVEs.
-
-9. Select the CVEs in which you want to change the score, and then select **Update CVE Score**.
-
- :::image type="content" source="media/how-to-generate-reports/set-new-score-screen.png" alt-text="Update the CVE score.":::
-
-10. Enter the new score and select **OK**. The updated score appears in the CVEs that you selected.
---
-## Sensor reports based on data mining
-
-Regular reports, accessed from the **Reports** option, are predefined data mining reports. They're not dynamic queries as are available in data mining, but a static representation of the data mining query results.
-
-Data mining query results are not available to Read Only users. Administrators and security analysts who want Read Only users to have access to the information generated by data mining queries should save the information as report.
-
-Reports reflect information generated by data mining query results. This includes default data mining reports, which are available in the Reports view. Administrator and security analysts can also generate custom data mining queries, and save them as reports. These reports are available for RO users as well.
-
-To generate a report:
-
-1. Select **Reports** on the side menu.
-
-2. Choose the required report to display. The choice can be **Custom** or **Auto-Generated** reports, such as **Programming Commands** and **Remote Access**.
-
-3. You can export the report by selecting one of the icons on the upper right of the screen:
-
- :::image type="icon" source="media/how-to-generate-reports/export-to-pdf-icon.png" border="false"::: Export to a PDF file.
-
- :::image type="icon" source="media/how-to-generate-reports/export-to-csv-icon.png" border="false"::: Export to a CSV file.
-
-> [!NOTE]
-> The RO user can see only reports created for them.
+Reports are dynamically updated each time you open them. For example:
+- If you create a report for firmware versions on devices on June 1 and open the report again on June 10, this report will be updated with information that's accurate for June 10.
+- If you create a report to detect new devices discovered over the last 30 days on June 1 and open the report on June 30, the results will be displayed for the last 30 days.
+## Manage reports
+Reports can be viewed on the **Data Mining** page. You can refresh a report, edit report parameters, and export it to a CSV file or PDF. You can also take a snapshot of a report.
-## On-premises management console reports based on data mining reports
-The on-premises management console lets you generate reports for each sensor that's connected to it. Reports are based on sensor data-mining queries that are performed.
+## View reports in the on-premises management console
-You can generate the following reports:
+The on-premises management console lets you generate reports for each sensor that's connected to it. Reports are based on sensor data-mining queries that are performed, and include:
- **Active Devices (Last 24 Hours)**: Presents a list of devices that show network activity within a period of 24 hours.
- **Non-Active Devices (Last 7 Days)**: Presents a list of devices that show no network activity in the last seven days.
- **Programming Commands**: Presents a list of devices that sent programming commands within the last 24 hours.
- **Remote Access**: Presents a list of devices that remote sources accessed within the last 24 hours.

:::image type="content" source="media/how-to-generate-reports/reports-view.png" alt-text="Screenshot of the reports view.":::
defender-for-iot How To Create Trends And Statistics Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-create-trends-and-statistics-reports.md
Title: Generate trends and statistics reports
+ Title: Create trends and statistics reports in Defender for IoT
description: Gain insight into network activity, statistics, and trends by using Defender for IoT Trends and Statistics widgets.
Previously updated : 11/09/2021
Last updated : 02/01/2022
-# Sensor trends and statistics reports
+# Create trends and statistics dashboards
-You can create widget graphs and pie charts to gain insight into network trends and statistics. Widgets can be grouped under user-defined dashboards.
+This article describes how to create dashboards on your sensor console to get insight into network trends and statistics.
-> [!NOTE]
-> Administrator and security analysts can create Trends and Statistics reports.
-
-The dashboard consists of widgets that graphically describe the following types of information:
-- Traffic by port
-- Top traffic by port
-- Channel bandwidth
-- Total bandwidth
-- Active TCP connection
-- Top Bandwidth by VLAN
-- Devices:
- - New devices
- - Busy devices
- - Devices by vendor
- - Devices by OS
- - Number of devices per VLAN
- - Disconnected devices
-- Connectivity drops by hours-- Alerts for incidents by type-- Database table access-- Protocol dissection widgets-- DELTAV
- - DeltaV roc operations distribution
- - DeltaV roc events by name
- - DeltaV events by time
-- AMS
- - AMS traffic by server port
- - AMS traffic by command
-- Ethernet and IP address:
- - Ethernet and IP address traffic by CIP service
- - Ethernet and IP address traffic by CIP class
- - Ethernet and IP address traffic by command
-- OPC:
- - OPC top management operations
- - OPC top I/O operations
-- Siemens S7:
- - S7 traffic by control function
- - S7 traffic by subfunction
-- VLAN
- - Number of devices per VLAN
- - Top bandwidth by VLAN
-- 60870-5-104
- - IEC-60870 Traffic by ASDU
-- BACNET
- - BACnet Services
-- DNP3
- - DNP3 traffic by function
-- SRTP:
- - SRTP traffic by service code
- - SRTP errors by day
-- SuiteLink:
- - SuiteLink top queried tags
- - SuiteLink numeric tag behavior
-- IEC-60870 traffic by ASDU
-- DNP3 traffic by function
-- MMS traffic by service
-- Modbus traffic by function
-- OPC-UA traffic by service
-> [!NOTE]
-> The time in the widgets is set according to the sensor time.
-
-## Create reports
-
-To see dashboards and widgets:
-
-Select **Trends & Statistics** on the side menu.
--
-By default, results are displayed for detections over the last seven days. You can use filter tools change this range. For example, a free text search.
-
-## Create a Dashboard
-
-Create a new dashboard by selecting the **Dashboard** drop-down menu. You can create and add as many widgets to a dashboard.
-
-You can create customized dashboards using the following options:
-- Add a widget to the dashboard
-- Delete a widget from the dashboard
-- Modify a widget's filter
-- Resize a widget
-- Change the location of a widget
- :::image type="content" source="media/how-to-create-trends-and-statistics-reports/pin-a-dashboard.png" alt-text="Change the location of a widget.":::
-
-To create a customized dashboard:
-
-1. Select **Trends and Statistics** from the left panel.
-
-1. Select the **Select Dashboard** drop down menu, and select the **Create Dashboard** button.
-
-1. Enter a meaningful name for the new dashboard, and select **Create**.
-
-1. Select **Add Widget** at the top left of the page.
-
- :::image type="content" source="media/how-to-create-trends-and-statistics-reports/widget-store.png" alt-text="Select the widget from the widget store.":::
-
-1. **Security**, and **Operational** widgets are available at the top right of the window. Choose from various categories, and protocols. A brief description with a miniature graphic appears with each widget. Use the scroll bar to see all available widgets.
-
-1. Select a widget using the **Click to Add** button. The widget is immediately displayed on the dashboard.
-
-To delete a dashboard:
-
-1. Select the name of the dashboard from the drop-down menu.
-
-1. Select the **Delete** icon, and then select **OK**.
-
- :::image type="content" source="media/how-to-create-trends-and-statistics-reports/garbage-icon.png" alt-text="Select the delete icon to delete the dashboard.":::
-
-To edit a dashboard name:
-
-1. Select the name of the dashboard from the drop-down menu.
-
-1. Select the **Edit** icon.
-
- :::image type="content" source="media/how-to-create-trends-and-statistics-reports/edit-name.png" alt-text="Select the edit icon to edit the name of your dashboard.":::
-
-1. Enter a new name for the dashboard, and select **Save**.
-
-To set the default dashboard:
-
-1. Select the name of the dashboard from the drop-down menu.
-
-1. Select the **Star** icon to select the dashboard to be set as the default dashboard.
-
- :::image type="content" source="media/how-to-create-trends-and-statistics-reports/default-dashboard.png" alt-text="Select the star icon to choose your default dashboard.":::
-
-To modify filtering data in a widget:
-
-1. Select the **Filter** icon.
-
- :::image type="content" source="media/how-to-create-trends-and-statistics-reports/filter-widget.png" alt-text="Select the filter icon to set parameters for your widget.":::
-
-1. Edit the required parameters.
-
-1. Select **OK**.
-
-To delete a widget:
-- Select the :::image type="icon" source="media/how-to-create-trends-and-statistics-reports/x-icon.png" border="false"::: icon.
- :::image type="content" source="media/how-to-create-trends-and-statistics-reports/delete-widget.png" alt-text="Select the X to delete the widget.":::
-
-## Creating widgets
-The Widget Store allows you to select widgets by category and protocol. You can display the **Security**, or **Operational** widgets available by selecting them.
+## Before you start
+You need Administrator or Security Analyst permissions to create dashboards.
-Each widget contains specific information about system traffic, network statistics, protocol statistics, device, and alert information. A message is displayed when there is no data for a widget.
+## Create dashboards
-You can remove a section from the pie, in a pie chart, to see the relative significance of the remaining slices more clearly. Select the slice's name in the legend at the bottom of the screen to do this.
+You can create many different types of dashboards, based on traffic, device state, alerts, connectivity, and protocol.
-The following sections present examples of use cases for a few of the widgets.
+1. On your Defender for IoT sensor console, select **Trends & Statistics** > **Create Dashboard**.
-### Busy devices widget
+1. In the **Create Dashboard** pane that appears on the right:
-This widget lists the top five busiest devices. In **Edit** mode, you can filter by known Protocols.
+ - In the **Dashboard name** field, enter a meaningful name for your dashboard.
+ - (Optional) Filter the widgets displayed by selecting a category or protocol from the **Dashboard widget type** menu.
+ - Scroll down as needed and select the widget you want to add. Each widget has a short description and indicates whether it focuses on operations, security, or traffic.
+ - Select **Save** to start your new dashboard.
+1. Your widget is added to the new dashboard. Use the toolbar at the top of the page to continue modifying your dashboard.
-### Total bandwidth widget
+By default, results are displayed for detections over the last seven days. Select the **Filter** button at the top left of each widget to change this range.
-This widget tracks the bandwidth in Mbps (megabits per second). The bandwidth is indicated on the y-axis, with the date appearing on the x-axis. **Edit** mode allows you to filter results according to Client, Server, Server Port, or Subnet. A tooltip appears when you hover the cursor over the graph.
--
-### Channels bandwidth widget
-
-This widget displays the top five traffic channels. You can filter by Address, and set the number of Presented Results. Select the down arrow to show more channels.
--
-### Traffic by port widget
-
-This widget displays the traffic by port, which is indicated by a pie chart with each port designated by a different color. The amount of traffic in each port is proportional to the size of its part of the pie.
--
-### New devices widget
-
-This widget displays the new devices bar chart, which indicates how many new devices were discovered on a particular date.
--
-### Protocol dissection widgets
-
-This widget displays a pie chart that provides you with a look at the traffic per protocol, dissected by function codes, and services. The size of each slice of the pie is proportional to the amount of traffic relative to the other slices.
--
-### Active TCP connections widget
-
-This widget displays a chart that shows the number of active TCP connections in the system.
--
-### Incident by type widget
-
-This widget displays a pie chart that shows the number of incidents by type. This is the number of alerts generated by each engine over a predefined time period.
--
-## Devices by vendor widget
-
-This widget displays a pie chart that shows the number of devices by vendor. The number of devices for a specific vendor is proportional to the size of that device's vendor part of the disk relative to other device vendors.
-
-## Number of devices per VLAN widget
-
-This widget displays a pie chart that shows the number of discovered devices per VLAN. The size of each slice of the pie is proportional to the number of discovered devices relative to the other slices.
-
-Each VLAN appears with the VLAN tag assigned by the sensor or name that you have manually added.
--
-### Top bandwidth by VLAN widget
-
-This widget displays the bandwidth consumption by VLAN. By default, the widget shows five VLANs with the highest bandwidth usage.
-
-You can filter the data by the period presented in the widget. Select the down arrow to show more results.
--
-## System report
-
-To download the system report:
-
-1. Select **Trends & Statistics** on the side menu.
-
-1. Select **System Report** at the top-right corner. The report will download automatically.
-
- :::image type="content" source="media/how-to-create-trends-and-statistics-reports/system-report.png" alt-text="Select the system report button to download a copy of the system report.":::
-
-The System Report is a PDF file containing all the data in the system:
-
- - Devices
-
- - Alerts
-
- - Network Policy Information
-
-## Devices in a system report
-
-The System Report shows a list of all devices, and their information. For example, Type, Name, and Protocols used. The System Report also shows a list of devices per vendor.
-
-## Alerts in system report
-
-The System Report shows a list of all alerts with their information such as Date, and Severity.
--
-## Network information in system report
-
-The System Report shows in detail, your network baseline. For example, DNP3 function code, and open ports per connection.
-
+> [!NOTE]
+> The time shown in the widget is set according to the sensor machine's time.
+>
+
+## Sample widgets
+
+The following table summarizes common use cases for dashboard widgets.
+
+Widget name | Sample use case
+---|---
+Busy devices | Lists the five busiest devices. In **Edit** mode, you can filter by known protocols.
+Total bandwidth | Tracks the bandwidth in Mbps (megabits per second). The bandwidth is indicated on the y-axis, with the date appearing on the x-axis. **Edit** mode allows you to filter results.
+Channels bandwidth | Displays the top five traffic channels. You can filter by Address, and set the number of Presented Results. Select the down arrow to show more channels.
+Traffic by port | Displays the traffic by port, which is indicated by a pie chart with each port designated by a different color. The amount of traffic in each port is proportional to the size of its part of the pie.
+New devices | Displays the new devices bar chart, which indicates how many new devices were discovered on a particular date.
+Protocol dissection | Displays a pie chart that provides you with a look at the traffic per protocol, dissected by function codes, and services. The size of each slice of the pie is proportional to the amount of traffic relative to the other slices.
+Active TCP connections | Displays a chart that shows the number of active TCP connections in the system.
+Incident by type | Displays a pie chart that shows the number of incidents by type. This is the number of alerts generated by each engine over a predefined time period.
+Devices by vendor | Displays a pie chart that shows the number of devices by vendor. The number of devices for a specific vendor is proportional to the size of that vendor's slice of the pie relative to other device vendors.
+Number of devices per VLAN | Displays a pie chart that shows the number of discovered devices per VLAN. The size of each slice of the pie is proportional to the number of discovered devices relative to the other slices. Each VLAN appears with the VLAN tag assigned by the sensor or name that you have manually added.
+Top bandwidth by VLAN | Displays the bandwidth consumption by VLAN. By default, the widget shows five VLANs with the highest bandwidth usage. You can filter the data by the period presented in the widget. Select the down arrow to show more results.
## See also
defender-for-iot How To Enhance Port And Vlan Name Resolution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-enhance-port-and-vlan-name-resolution.md
Title: Enhance port and VLAN name resolution
-description: Customize port and VLAN names on your sensors to enrich device resolution.
Previously updated : 11/09/2021
+ Title: Enhance port and VLAN name resolution in Defender for IoT
+description: Customize port and VLAN names on your sensors
Last updated : 01/02/2022
-# Enhance port, VLAN and OS resolution
+# Customize port and VLAN names
You can customize port and VLAN names on your sensors to enrich device resolution.
-## Customize port names
+## Customize a port name
-Microsoft Defender for IoT automatically assigns names to most universally reserved ports, such as DHCP or HTTP. You can customize port names for other ports that Defender for IoT detects. For example, assign a name to a non-reserved port because that port shows unusually high activity.
+Microsoft Defender for IoT automatically assigns names to most universally reserved ports, such as DHCP or HTTP. You can customize port names for other ports that Defender for IoT detects. For example, you might assign a name to a non-reserved port because that port shows unusually high activity. Names appear when you view device groups from the device map, or when you create reports that provide port information.
-These names appear when:
-
- - You select **Device groups** from the device map.
-
- - You create widgets that provide port information.
-
-### View custom port names in the device map
-
-Ports that include a name defined by users appear in the device map, in the **Known Applications** group.
--
-### View custom port names in widgets
-
-Port names that you defined appear in the widgets that cover traffic by port.
--
-To define custom port names:
-
-1. Select **System Settings** and then select **Standard Aliases**.
-
-2. Select **Add Port Alias**.
-
- :::image type="content" source="media/how-to-enrich-asset-information/edit-aliases.png" alt-text="Add Port Alias.":::
-
-3. Enter the port number, select **TCP/UDP**, or select **Both**, and add the name.
+Customize a name as follows:
+1. Select **System Settings**. Under **Network monitoring**, select **Port Naming**.
+2. Select **Add port**.
+3. Enter the port number, select the protocol (TCP, UDP, or both), and type a name.
4. Select **Save**.
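+Conceptually, a port alias is just a mapping from a (port number, protocol) pair to a display name. As a sketch only — the function names and the `BOTH` fallback behavior here are illustrative assumptions, not the product's implementation — the lookup might look like this in Python:
+
+```python
+# Illustrative port-alias table: (port, protocol) -> display name.
+# Protocol is "TCP", "UDP", or "BOTH", mirroring the options in the UI.
+aliases = {}
+
+def add_port_alias(port, protocol, name):
+    if protocol not in ("TCP", "UDP", "BOTH"):
+        raise ValueError("protocol must be TCP, UDP, or BOTH")
+    aliases[(port, protocol)] = name
+
+def resolve(port, protocol):
+    """Look up a name, falling back to a BOTH entry, then the raw port number."""
+    return aliases.get((port, protocol)) or aliases.get((port, "BOTH")) or str(port)
+
+add_port_alias(502, "BOTH", "Modbus")  # 502 traffic now shows a friendly name
+```
+
+Unnamed ports fall back to the bare number, which is why naming high-activity non-reserved ports makes reports easier to read.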
-## Configure VLAN names
-
-You can enrich device inventory data with device VLAN numbers and tags. In addition to data enrichment, you can view the number of devices per VLAN, and view bandwidth by VLAN widgets.
-
-VLANs support is based on 802.1q (up to VLAN ID 4094).
-
-Two methods are available for retrieving VLAN information:
--- **Automatically discovered**: By default, the sensor automatically discovers VLANs. VLANs detected with traffic are displayed on the VLAN configuration screen, in data-mining reports, and in other reports that contain VLAN information. Unused VLANs are not displayed. You can't edit or delete these VLANs. -
- You should add a unique name to each VLAN. If you don't add a name, the VLAN number appears in all the locations where the VLAN is reported.
+## Customize a VLAN name
-- **Manually added**: You can add VLANs manually. You must add a unique name for each VLAN that was manually added, and you can edit or delete these VLANs.
+You can enrich device inventory data with device VLAN numbers and tags.
-VLAN names can contain up to 50 ASCII characters.
+- VLAN support is based on 802.1q (up to VLAN ID 4094). VLANs can be discovered automatically by the sensor or added manually.
+- Automatically discovered VLANs can't be edited or deleted. You should add a name to each VLAN; if you don't add a name, the VLAN number appears wherever VLAN information is reported.
+- When you add a VLAN manually, you must add a unique name. These VLANs can be edited and deleted.
+- VLAN names can contain up to 50 ASCII characters.
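+The constraints listed above (the 802.1q ID range and the 50-character ASCII name limit) can be checked mechanically. A minimal validation sketch, assuming these documented limits and nothing else:
+
+```python
+MAX_VLAN_ID = 4094   # 802.1q upper bound
+MAX_NAME_LEN = 50    # VLAN names: up to 50 ASCII characters
+
+def validate_vlan(vlan_id, name):
+    """Return True if the VLAN ID and name satisfy the documented limits."""
+    if not (1 <= vlan_id <= MAX_VLAN_ID):
+        return False
+    if not (0 < len(name) <= MAX_NAME_LEN):
+        return False
+    return name.isascii()  # non-ASCII names are rejected
+```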
+## Before you start
> [!NOTE]
> VLAN names are not synchronized between the sensor and the management console. You need to define the name on the management console as well. For Cisco switches, add the following line to the span configuration: `monitor session 1 destination interface XX/XX encapsulation dot1q`. In that command, *XX/XX* is the name and number of the port.
To configure VLAN names:
3. Add a unique name next to each VLAN ID.
-## Improve device operating system classification: data enhancement
-
-Sensors continuously auto discover new devices, as well as changes to previously discovered devices, including operating system types.
-
-Under certain circumstances, conflicts might be detected in discovered operating systems. This can happen, for example, if you have an operating systems version that refers to either desktop or server systems. If it happens, you'll receive a notification with optional operating systems classifications.
--
-Investigate the recommendations in order to enrich operating system classification. This classification appears in the device inventory, data-mining reports, and other displays. Making sure this information is up-to-date can improve the accuracy of alerts, threats, and risk analysis reports.
-
-To access operating system recommendations:
-
-1. Select **System Settings**.
-1. Select **Data Enhancement**.
## Next steps
defender-for-iot How To Import Device Information https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-import-device-information.md
Title: Import device information
description: Defender for IoT sensors monitor and analyze mirrored traffic. In these cases, you might want to import data to enrich information on devices already detected.
Previously updated : 01/06/2022
Last updated : 02/01/2022

# Import device information to a sensor
-Sensors monitor and analyzes mirrored traffic. In some cases, because of organization-specific network configuration policies, some information might not be transmitted.
+Sensors monitor and analyze device traffic. In some cases, because of network policies, some information might not be transmitted. In this case, you can import data and add it to device information that's already detected. You have two options for import:
-In these cases, you might want to import data to enrich information on devices that are already detected. Two options are available for importing information to sensors:
+- **Import from the device map**: Import device names, type, group, or Purdue layer to the device map.
+- **Import from import settings**: Import device IP address, operating system, patch level, or authorization status to the device map.
-- **Import from the Map**: Update the device name, type, group, or Purdue layer to the map.
+## Import from the device map
-- **Import from Import Settings**: Import device OS, IP address, patch level, or authorization status.-
-## Import from the map
-
-This section describes how to import device names, types, groups, or Purdue layers to the device map. You do this from the map.
-
-**Import requirements**
--- **Names**: Can be up to 30 characters.--- **Type** or **Purdue Layer**: Use the options that appear in the **Device Properties** dialog box. (Right-click the device and select **View Properties**.)
+Before you start, note that:
+- **Names**: Names can be up to 30 characters.
- **Device Group**: Create a new group of up to 30 characters.
+- **Type** or **Purdue Layer**: Use the options that appear in the device properties when you select a device.
+- To avoid conflicts, don't import the data that you exported from one sensor to another.
-**To avoid conflicts, don't import the data that you exported from one sensor to another sensor.**
-
-**To import:**
-
-1. On the side menu, select **Devices**.
-
-2. In the upper-right corner of the **Devices** window, select :::image type="icon" source="media/how-to-import-device-information/file-icon.png" border="false":::.
-
- :::image type="content" source="media/how-to-import-device-information/device-window-v2.png" alt-text="Screenshot of the device window.":::
-
-3. Select **Export Devices**. An extensive range of information appears in the exported file. This information includes protocols that the device uses and the device authorization status.
-
- :::image type="content" source="media/how-to-import-device-information/sample-exported-file.png" alt-text="The information in the exported file.":::
-
-4. In the CSV file, change only the device name, type, group, and Purdue layer. Then save the file.
-
- Use capitalization standards shown in the exported file. For example, for the Purdue layer, use all first-letter capitalization.
-
-5. From the **Import/Export** drop-down menu in the **Device** window, select **Import Devices**.
-
- :::image type="content" source="media/how-to-import-device-information/import-assets-v2.png" alt-text="Import devices through the device window.":::
+Import data as follows:
-6. Select **Import Devices** and select the CSV file that you want to import. The import status messages appear on the screen until the **Import Devices** dialog box closes.
+1. In Defender for IoT, select **Device map**.
+1. Select **Export Devices**. An extensive range of information appears in the exported file. This information includes protocols that the device uses and the device authorization status.
+1. In the CSV file, change only the device name, type, group, and Purdue layer. Use the capitalization standards shown in the exported file. For example, for the Purdue layer, use all first-letter capitalization.
+1. Save the file.
+1. Select **Import Devices**, and then select the CSV file that you want to import.
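The capitalization rule for the Purdue layer can be enforced with a quick pre-upload check. A minimal sketch in Python, assuming hypothetical column headers (`Name`, `Type`, `Group`, `Purdue Layer`) — match them to the headers in your own exported file:

```python
import csv
import io

def prepare_import(exported_csv: str) -> str:
    """Normalize the Purdue layer to first-letter capitalization.

    Only the editable columns (name, type, group, Purdue layer) should be
    changed; this sketch touches just the Purdue layer and leaves the rest.
    """
    rows = list(csv.DictReader(io.StringIO(exported_csv)))
    for row in rows:
        # "Purdue Layer" is an assumed header name -- verify against your export.
        row["Purdue Layer"] = row["Purdue Layer"].title()
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)
    return out.getvalue()
```

Run the cleaned text through a diff against the original export before importing, so you can confirm only the intended columns changed.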
## Import from import settings
-This section describes how to import the device IP address, OS, patch level, or authorization status to the device map. You do this from the **Import Settings** dialog box.
-
-**To import the IP address, OS, and patch level:**
-
-1. Download the [Devices settings file](https://download.microsoft.com/download/8/2/3/823c55c4-7659-4236-bfda-cc2427be2cee/CSS/devices_info_2.2.8%20and%20up.xlsx) and enter the information as follows:
-
- - **IP Address**: Enter the device IP address.
-
- - **Operating System**: Select from the drop-down list.
-
- - **Last Update**: Use the YYYY-MM-DD format.
-
- :::image type="content" source="media/how-to-import-device-information/last-update-screen.png" alt-text="The options screen.":::
-
-2. On the side menu, select **Import Settings**.
-
- :::image type="content" source="media/how-to-import-device-information/import-settings-screen-v2.png" alt-text="Import your settings.":::
-
-3. To upload the required configuration, in the **Device Info** section, select **Add** and upload the CSV file that you prepared.
-
-**To import the authorization status:**
-
-1. Download the [Authorization file](https://download.microsoft.com/download/8/2/3/823c55c4-7659-4236-bfda-cc2427be2cee/CSS/authorized_devices%20-%20example.csv) and save. Verify that you saved the file as a CSV.
-
-2. Enter the information as:
-
- - **IP Address**: The device IP address.
-
- - **Name**: The authorized device name. Make sure that names are accurate. Names given to the devices in the imported list overwrite names shown in the device map.
+1. Download the [Devices settings file](https://download.microsoft.com/download/8/2/3/823c55c4-7659-4236-bfda-cc2427be2cee/CSS/devices_info_2.2.8%20and%20up.xlsx).
+1. In the **Devices** sheet, enter the device IP address.
+1. In **Device Type**, select the type from the dropdown list.
+1. In **Last Update**, specify the date in YYYY-MM-DD format.
+1. In **System settings**, under **Import settings**, select **Device Information** to import. Select **Add** and upload the CSV file that you prepared.
- :::image type="content" source="media/how-to-import-device-information/device-map-file.png" alt-text="Excel files with imported device list.":::
-3. On the side menu, select **Import Settings**.
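The YYYY-MM-DD requirement for **Last Update** can be checked before uploading. A small sketch using Python's standard library (the function name is illustrative):

```python
from datetime import datetime

def valid_last_update(value: str) -> bool:
    """Return True if value parses as the required YYYY-MM-DD format."""
    try:
        datetime.strptime(value, "%Y-%m-%d")
        return True
    except ValueError:
        return False
```

Rows that fail the check should be corrected in the settings file before selecting **Add** in the import dialog.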
+### Import authorization status
-4. In the **Authorized Devices** section, select **Add** and upload the CSV file that you saved.
+1. Download the [Authorization file](https://download.microsoft.com/download/8/2/3/823c55c4-7659-4236-bfda-cc2427be2cee/CSS/authorized_devices%20-%20example.csv) and save it as a CSV file.
+1. In the authorized_devices sheet, specify the device IP address.
+1. Specify the authorized device name. Make sure that names are accurate. Names given to the devices in the imported list overwrite names shown in the device map.
+1. In **System settings**, under **Import settings**, select **Authorized devices** to import. Select **Add** and upload the CSV file that you prepared.
When the information is imported, you receive alerts about unauthorized devices for all the devices that don't appear on this list.
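Because names in the imported list overwrite names shown in the device map, it can help to generate the authorization file from a vetted inventory. A sketch assuming the two columns described above (`IP Address`, `Name`) — confirm the exact headers against the downloaded example file:

```python
import csv
import io
import ipaddress

def build_authorized_csv(devices) -> str:
    """Build authorized-devices CSV text from (ip, name) pairs.

    Each IP is validated first; a malformed address raises ValueError
    instead of silently producing a row the import would reject.
    """
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow(["IP Address", "Name"])  # assumed headers
    for ip, name in devices:
        ipaddress.ip_address(ip)  # raises ValueError on a bad address
        writer.writerow([ip, name])
    return out.getvalue()
```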
defender-for-iot How To Install Software https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-install-software.md
Title: Defender for IoT installation description: Learn how to install a sensor and the on-premises management console for Microsoft Defender for IoT. Previously updated : 11/09/2021 Last updated : 01/06/2022
This article covers the following installation information:
- **Virtual Appliances:** Virtual machine details and software installation.
-After installation, connect your sensor to your network.
+After the software is installed, connect your sensor to your network.
## About Defender for IoT appliances
The following sections provide information about Defender for IoT sensor applian
### Physical appliances
-The Defender for IoT appliance sensor connects to a SPAN port or network TAP and immediately begins collecting ICS network traffic by using passive (agentless) monitoring. This process has zero impact on OT networks and devices because it isn't placed in the data path and doesn't actively scan OT devices.
+The Defender for IoT appliance sensor connects to a SPAN port or network TAP. Once connected, the sensor immediately collects ICS network traffic by using passive (agentless) monitoring. This process has zero impact on OT networks and devices, because the sensor isn't placed in the data path and doesn't actively scan OT devices.
The following rack mount appliances are available:
The following virtual appliances are available:
### Access the ISO installation image
-The installation image is accessible from Defender for IoT in the Azure portal.
+The installation image is accessible from Defender for IoT in the [Azure portal](https://ms.portal.azure.com).
-To access the file:
+**To access the file**:
-1. Sign in to your Defender for IoT account.
+1. Navigate to the [Azure portal](https://ms.portal.azure.com).
-1. Go to the **Network sensor** or **On-premises management console** page and select a version to download.
+1. Search for and select **Microsoft Defender for IoT**.
+
+1. Select the **Sensor** or **On-premises management console** tab.
+
+ :::image type="content" source="media/tutorial-install-components/sensor-tab.png" alt-text="Screenshot of the sensor tab under Defender for IoT.":::
+
+1. Select a version from the drop-down menu.
+
+1. Select the **Download** button.
### Install from DVD
Before the installation, ensure you have:
- An ISO installer image.
-To install:
+**To burn the image to a DVD**:
+
+1. Connect a portable DVD drive to your computer.
-1. Burn the image to a DVD or prepare a disk on a key. Connect a portable DVD drive to your computer, right-click the ISO image, and select **Burn to disk**.
+1. Insert a blank DVD into the portable DVD drive.
-1. Connect the DVD or disk on a key and configure the appliance to boot from DVD or disk on a key.
+1. Right-click the ISO image, and select **Burn to disk**.
+
+1. Connect the DVD drive to the device, and configure the appliance to boot from DVD.
### Install from disk on a key
Before the installation, ensure you have:
- An ISO installer image file.
-The disk on a key will be erased in this process.
+This process will format the disk on a key, and any data stored on it will be erased.
-To prepare a disk on a key:
+**To prepare a disk on a key**:
-1. Run Rufus and select **SENSOR ISO**.
+1. Run Rufus, and select **SENSOR ISO**.
1. Connect the disk on a key to the front panel.
When the connection is established, the BIOS is configurable.
#### Configuring the BIOS
-You need to configure the appliance BIOS if:
+Configure the appliance BIOS if:
- You did not purchase your appliance from Arrow.
After you access the BIOS, go to **Device Settings**.
The installation process takes about 20 minutes. After the installation, the system is restarted several times.
-To install:
+**To install the software**:
1. Verify that the version media is mounted to the appliance in one of the following ways:
- - Connect the external CD or disk on a key with the release.
+ - Connect the external CD or disk on a key with the release.
- Mount the ISO image by using iDRAC. After signing in to iDRAC, select the virtual console, and then select **Virtual Media**.
To install:
1. Start the appliance. When you're using iDRAC, you can restart the servers by selecting the **Console Control** button. Then, on the **Keyboard Macros**, select the **Apply** button, which will start the Ctrl+Alt+Delete sequence.
-1. Select **English**.
-
-1. Select **SENSOR-RELEASE-\<version\> Enterprise**.
-
- :::image type="content" source="media/tutorial-install-components/sensor-version-select-screen-v2.png" alt-text="Select your sensor version and enterprise type.":::
-
-1. Define the appliance profile, and network properties:
-
- :::image type="content" source="media/tutorial-install-components/appliance-profile-screen-v2.png" alt-text="Screenshot that shows the appliance profile, and network properties.":::
-
- | Parameter | Configuration |
- |--|--|
- | **Hardware profile** | **enterprise** |
- | **Management interface** | **eno1** |
- | **Network parameters (provided by the customer)** | - |
- |**management network IP address:** | - |
- | **subnet mask:** | - |
- | **appliance hostname:** | - |
- | **DNS:** | - |
- | **default gateway IP address:** | - |
- | **input interfaces:** | The system generates the list of input interfaces for you. To mirror the input interfaces, copy all the items presented in the list with a comma separator. You do not have to configure the bridge interface. This option is used for special use cases only. |
-
-1. After about 10 minutes, the two sets of credentials appear. One is for a **CyberX** user, and one is for a **Support** user.
-
-1. Save the appliance ID and passwords. You'll need these credentials to access the platform the first time you use it.
-
-1. Select **Enter** to continue.
+1. Follow the software installation instructions located [here](#install-the-software).
## HPE ProLiant DL20 installation
This section describes the HPE ProLiant DL20 installation process, which include
Use the following procedure to set up network options and update the default password.
-To enable and update the password:
+**To enable and update the password**:
-1. Connect a screen and a keyboard to the HP appliance, turn on the appliance, and press **F9**.
+1. Connect a screen and a keyboard to the HP appliance, turn on the appliance, and press **F9**.
:::image type="content" source="media/tutorial-install-components/hpe-proliant-screen-v2.png" alt-text="Screenshot that shows the HPE ProLiant window.":::
To enable and update the password:
### Configure the HPE BIOS
-The following procedure describes how to configure the HPE BIOS for the enterprise and SMB appliances.
+The following procedure describes how to configure the HPE BIOS for the enterprise and SMB appliances.
**To configure the HPE BIOS**:
To install the software:
1. Start the appliance.
-1. Select **English**.
-
- :::image type="content" source="media/tutorial-install-components/select-english-screen.png" alt-text="Selection of English in the CLI window.":::
-
-1. Select **SENSOR-RELEASE-\<version> Enterprise**.
-
- :::image type="content" source="media/tutorial-install-components/sensor-version-select-screen-v2.png" alt-text="Screenshot of the screen for selecting a version.":::
-
-1. In the Installation Wizard define the hardware profile and network properties:
-
- :::image type="content" source="media/tutorial-install-components/installation-wizard-screen-v2.png" alt-text="Screenshot that shows the Installation Wizard.":::
-
- | Parameter | Configuration |
- | -| - |
- | **Hardware profile** | Select **Enterprise** or **Office** for SMB deployments. |
- | **Management interface** | **eno2** |
- | **Default network parameters (usually the parameters are provided by the customer)** | **management network IP address:** <br/> <br/>**appliance hostname:** <br/>**DNS:** <br/>**the default gateway IP address:**|
- | **input interfaces:** | The system generates the list of input interfaces for you.<br/><br/>To mirror the input interfaces, copy all the items presented in the list with a comma separator: **eno5, eno3, eno1, eno6, eno4**<br/><br/>**For HPE DL20: Do not list eno1, enp1s0f4u4 (iLo interfaces)**<br/><br/>**BRIDGE**: There's no need to configure the bridge interface. This option is used for special use cases only. Press **Enter** to continue. |
-
-1. After about 10 minutes, the two sets of credentials appear. One is for a **CyberX** user, and one is for a **Support** user.
-
-1. Save the appliance's ID and passwords. You'll need the credentials to access the platform for the first time.
-
-1. Select **Enter** to continue.
+1. Follow the software installation instructions located [here](#install-the-software).
## HPE ProLiant DL360 installation
The enterprise configuration is identical.
This procedure describes the iLO installation from a virtual drive.
-To install:
+**To perform the iLO installation from a virtual drive**:
1. Sign in to the iLO console, and then right-click the servers' screen.
To install:
1. Go to the left icon, select **Power**, and the select **Reset**.
-1. The appliance will restart and run the sensor installation process.
+1. The appliance will restart and run the sensor installation process.
### Software installation (HPE DL360) The installation process takes about 20 minutes. After the installation, the system is restarted several times.
-To install:
+**To install the software**:
-1. Connect the screen and keyboard to the appliance, and then connect to the CLI.
+1. Connect a screen and keyboard to the appliance, and then connect to the CLI.
1. Connect an external CD or disk on a key with the ISO image that you downloaded from the **Updates** page of Defender for IoT in the Azure portal.
-1. Start the appliance.
-
-1. Select **English**.
-
-1. Select **SENSOR-RELEASE-\<version> Enterprise**.
-
- :::image type="content" source="media/tutorial-install-components/sensor-version-select-screen-v2.png" alt-text="Screenshot that shows selecting the version.":::
-
-1. In the Installation Wizard, define the appliance profile and network properties.
-
- :::image type="content" source="media/tutorial-install-components/installation-wizard-screen-v2.png" alt-text="Screenshot that shows the Installation Wizard.":::
-
- | Parameter | Configuration |
- | -| - |
- | **Hardware profile** | Select **corporate**. |
- | **Management interface** | **eno2** |
- | **Default network parameters (provided by the customer)** | **management network IP address:** <br> **subnet mask:** <br/>**appliance hostname:** <br/>**DNS:** <br/>**the default gateway IP address:**|
- | **input interfaces:** | The system generates a list of input interfaces for you.<br/><br/>To mirror the input interfaces, copy all the items presented in the list with a comma separator.<br/><br/> You do not need to configure the bridge interface. This option is used for special use cases only. |
-
-1. After about 10 minutes, the two sets of credentials appear. One is for a **CyberX** user, and one is for a **support** user.
-
-1. Save the appliance's ID and passwords. You'll need these credentials to access the platform for the first time.
-
-1. Select **Enter** to continue.
+1. Follow the software installation instructions located [here](#install-the-software).
## HP EdgeLine 300 installation
To install:
1. Enter the iSM IP Address into your web browser.
-1. Sign in using the default username and password found on your appliance.
+1. Sign in using the default username and password found on your appliance.
1. Navigate to **Wired and Wireless Network** > **IPV4**
To install:
1. Select **Apply**.
-1. Sign out and reboot the appliance.
+1. Sign out and reboot the appliance.
### Configure the BIOS
The following procedure describes how to configure the BIOS for HP EL300 applian
**To configure the BIOS**:
-1. Turn on the appliance and push **F9** to enter the BIOS.
+1. Turn on the appliance, and push **F9** to enter the BIOS.
1. Select **Advanced**, and scroll down to **CSM Support**.
The following procedure describes how to configure the BIOS for HP EL300 applian
1. Push **Enter** to enable CSM Support.
-1. Navigate to **Storage** and push **+/-** to change it to Legacy.
+1. Navigate to **Storage** and push **+/-** to change it to Legacy.
-1. Navigate to **Video** and push **+/-** to change it to Legacy.
+1. Navigate to **Video** and push **+/-** to change it to Legacy.
:::image type="content" source="media/tutorial-install-components/storage-and-video.png" alt-text="Navigate to storage and video and change them to Legacy.":::
The following procedure describes how to configure the BIOS for HP EL300 applian
1. Select the device with the sensor image. Either **DVD** or **USB**.
-1. Select your language.
-
-1. Select **sensor-10.0.3.12-62a2a3f724 Office: 4 CPUS, 8GB RAM, 100GB STORAGE**.
-
- :::image type="content" source="media/tutorial-install-components/sensor-select-screen.png" alt-text="Select the sensor version as shown.":::
-
-1. In the Installation Wizard, define the appliance profile, and network properties:
-
- :::image type="content" source="media/tutorial-install-components/appliance-parameters.png" alt-text="Define the appliance's profile and network configurations with the following parameters.":::
-
- | Parameter | Configuration |
- |--|--|
- | **configure hardware profile** | **office** |
- | **configure management network interface** | **enp3s0** <br /> or <br />**possible value** |
- | **configure management network IP address:** | **IP address provided by the customer** |
- | **configure subnet mask:** | **IP address provided by the customer** |
- | **configure DNS:** | **IP address provided by the customer** |
- | **configure default gateway IP address:** | **IP address provided by the customer** |
- | **configure input interface(s)** | **enp4s0** <br /> or <br />**possible value** |
- | **configure bridge interface(s)** | N/A |
-
-1. Accept the settings and continue by entering `Y`.
+1. Follow the software installation instructions located [here](#install-the-software).
## Sensor installation for the virtual appliance
Make sure the hypervisor is running.
### Create the virtual machine (ESXi)
+This procedure describes how to create a virtual machine by using ESXi.
+
+**To create the virtual machine using ESXi**:
+ 1. Sign in to the ESXi, choose the relevant **datastore**, and select **Datastore Browser**.
-1. **Upload** the image and select **Close**.
+1. Select **Upload** to upload the image, and then select **Close**.
-1. Go to **Virtual Machines**, and then select **Create/Register VM**.
+1. Navigate to **VM**, and then select **Create/Register VM**.
1. Select **Create new virtual machine**, and then select **Next**.
-1. Add a sensor name and choose:
+1. Add a sensor name, and select the following options:
- Compatibility: **&lt;latest ESXi version&gt;**
Make sure the hypervisor is running.
This procedure describes how to create a virtual machine by using Hyper-V.
-To create a virtual machine:
+**To create the virtual machine using Hyper-V**:
1. Create a virtual disk in Hyper-V Manager.
To create a virtual machine:
1. Enter the required size (according to the architecture).
-1. Review the summary and select **Finish**.
+1. Review the summary, and select **Finish**.
1. On the **Actions** menu, create a new virtual machine.
To create a virtual machine:
1. Select **Specify Generation** > **Generation 1**.
-1. Specify the memory allocation (according to the architecture) and select the check box for dynamic memory.
+1. Specify the memory allocation (according to the architecture), and select the check box for dynamic memory.
1. Configure the network adapter according to your server network topology.
1. Connect the VHDX created previously to the virtual machine.
-1. Review the summary and select **Finish**.
+1. Review the summary, and select **Finish**.
-1. Right-click the new virtual machine and select **Settings**.
+1. Right-click the new virtual machine, and select **Settings**.
-1. Select **Add Hardware** and add a new network adapter.
+1. Select **Add Hardware**, and add a new network adapter.
1. Select the virtual switch that will connect to the sensor management network.
To install:
1. Open the virtual machine console.
-1. The VM will start from the ISO image, and the language selection screen will appear. Select **English**.
+1. The VM will start from the ISO image, and the language selection screen will appear.
-1. Select the required architecture.
+1. Follow the software installation instructions located [here](#install-the-software).
-1. Define the appliance profile and network properties:
+## Install the software
- | Parameter | Configuration |
- | -| - |
- | **Hardware profile** | &lt;required architecture&gt; |
- | **Management interface** | **ens192** |
- | **Network parameters (provided by the customer)** | **management network IP address:** <br/>**subnet mask:** <br/>**appliance hostname:** <br/>**DNS:** <br/>**default gateway:** <br/>**input interfaces:**|
- | **bridge interfaces:** | There's no need to configure the bridge interface. This option is for special use cases only. |
+Before starting the software installation, ensure that you've followed the installation instructions for your device and downloaded the containerized sensor version ISO file.
+
+Mount the ISO file using one of the following options:
+
+- Physical media: burn the ISO file to a DVD or USB, and boot from the media.
+
+- Virtual mount: use iLO for HPE, or iDRAC for Dell, to boot the ISO file.
+
+> [!Note]
+> At the end of this process, you will be presented with the usernames and passwords for your device. Make sure to copy these down, as these passwords will not be presented again.
+
+**To install the sensor's software**:
+
+1. Select the installation language.
+
+ :::image type="content" source="media/tutorial-install-components/language-select.png" alt-text="Screenshot of the sensor's language select screen.":::
+
+1. Select the sensor's architecture.
+
+ :::image type="content" source="media/tutorial-install-components/sensor-architecture.png" alt-text="Screenshot of the sensor's architecture select screen.":::
+
+1. The sensor will reboot, and the Package configuration screen will appear. Press the up or down arrows to navigate, and the Space bar to select an option. Press the **Enter** key to advance to the next screen.
+
+1. Select the monitor interface, and press the **Enter** key.
+
+ :::image type="content" source="media/tutorial-install-components/monitor-interface.png" alt-text="Screenshot of the select monitor interface screen.":::
+
+1. If one of the monitoring ports is for ERSPAN, select it, and press the **Enter** key.
+
+ :::image type="content" source="media/tutorial-install-components/erspan-monitor.png" alt-text="Screenshot of the select erspan monitor screen.":::
+
+1. Select the interface to be used as the management interface, and press the **Enter** key.
+
+ :::image type="content" source="media/tutorial-install-components/management-interface.png" alt-text="Screenshot of the management interface select screen.":::
+
+1. Enter the sensor's IP address, and press the **Enter** key.
+
+ :::image type="content" source="media/tutorial-install-components/sensor-ip-address.png" alt-text="Screenshot of the sensor IP address screen.":::
+
+1. Enter the path of the mounted logs folder. We recommend using the default path. Press the **Enter** key.
+
+ :::image type="content" source="media/tutorial-install-components/mounted-backups-path.png" alt-text="Screenshot of the mounted backup path screen.":::
+
+1. Enter the subnet mask, and press the **Enter** key.
-1. Enter **Y** to accept the settings.
+1. Enter the default gateway IP address, and press the **Enter** key.
-1. Sign-in credentials are automatically generated and presented. Copy the username and password in a safe place, because they're required for sign-in and administration.
+1. Enter the DNS Server IP address, and press the **Enter** key.
- - **Support**: The administrative user for user management.
+1. Enter the sensor hostname, and press the **Enter** key.
- - **CyberX**: The equivalent of root for accessing the appliance.
+ :::image type="content" source="media/tutorial-install-components/sensor-hostname.png" alt-text="Screenshot of the screen where you enter a hostname for your sensor.":::
-1. The appliance restarts.
+1. The installation process runs.
-1. Access the management console via the IP address previously configured: `https://ip_address`.
+1. When the installation process completes, save the appliance ID and passwords. Copy these credentials to a safe place, as you'll need them to access the platform the first time you use it.
- :::image type="content" source="media/tutorial-install-components/defender-for-iot-sign-in-screen.png" alt-text="Screenshot that shows access to the management console.":::
+ :::image type="content" source="media/tutorial-install-components/login-information.png" alt-text="Screenshot of the final screen of the installation with usernames and passwords.":::
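The network values entered above (sensor IP, subnet mask, default gateway, DNS) can be sanity-checked before you start the installer. A sketch using Python's `ipaddress` module; the sample values in the test are placeholders, not values from this document:

```python
import ipaddress

def check_network_params(ip: str, mask: str, gateway: str, dns: str) -> None:
    """Raise ValueError if any installation parameter is malformed,
    or if the gateway falls outside the sensor's subnet."""
    for value in (ip, gateway, dns):
        ipaddress.ip_address(value)  # malformed addresses raise ValueError
    # strict=False lets us pass a host address with its netmask
    network = ipaddress.ip_network(f"{ip}/{mask}", strict=False)
    if ipaddress.ip_address(gateway) not in network:
        raise ValueError(f"gateway {gateway} is outside {network}")
```

Catching a typo here is cheaper than re-running a 20-minute installation because the sensor came up unreachable.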
## On-premises management console installation
Before installing the software on the appliance, you need to adjust the applianc
The installation process takes about 20 minutes. After the installation, the system is restarted several times.
-During the installation process, you will can add a secondary NIC. If you choose not to install the secondary NIC during installation, you can [add a secondary NIC](#add-a-secondary-nic) at a later time.
+During the installation process, you can add a secondary NIC. If you choose not to install the secondary NIC during installation, you can [add a secondary NIC](#add-a-secondary-nic) at a later time.
To install the software:
To install the software:
| Parameter | Configuration | |--|--|
- | **configure management network interface** | For Dell: **eth0, eth1** <br /> For HP: **enu1, enu2** <br> or <br />**possible value** |
+ | **configure management network interface** | For Dell: **eth0, eth1** <br /> For HP: **enu1, enu2** <br> or <br />**possible value** |
| **configure management network IP address:** | **IP address provided by the customer** | | **configure subnet mask:** | **IP address provided by the customer** | | **configure DNS:** | **IP address provided by the customer** |
The on-premises management console supports both VMware and Hyper-V deployment o
### Create the virtual machine (ESXi)
-To a create virtual machine (ESXi):
+To create a virtual machine (ESXi):
1. Sign in to the ESXi, choose the relevant **datastore**, and select **Datastore Browser**.
The following procedure describes how to configure the Nuvo 5006LP BIOS. Make su
1. Navigate to **Power** and change Power On after Power Failure to S0-Power On.
- :::image type="content" source="media/tutorial-install-components/nuvo-power-on.png" alt-text="Change you Nuvo 5006 to power on after a power failure..":::
+ :::image type="content" source="media/tutorial-install-components/nuvo-power-on.png" alt-text="Change your Nuvo 5006 to power on after a power failure.":::
1. Navigate to **Boot** and ensure that **PXE Boot to LAN** is set to **Disabled**.
Post-installation validation must include the following tests:
- When the last backup happened
- - How much space there is for the extra backup files
+ - How much space there is for the extra backup files
- **ifconfig**: Displays the parameters for the appliance's physical interfaces.
For any other issues, contact [Microsoft Support](https://support.microsoft.com/
## Configure a SPAN port
-A virtual switch does not have mirroring capabilities. However, you can use promiscuous mode in a virtual switch environment. Promiscuous mode is a mode of operation, as well as a security, monitoring and administration technique, that is defined at the virtual switch, or portgroup level. By default, Promiscuous mode is disabled. When Promiscuous mode is enabled the virtual machineΓÇÖs network interfaces that are in the same portgroup will use the Promiscuous mode to view all network traffic that goes through that virtual switch. You can implement a workaround with either ESXi, or Hyper-V.
+A virtual switch does not have mirroring capabilities. However, you can use promiscuous mode in a virtual switch environment. Promiscuous mode is a mode of operation, and a security, monitoring, and administration technique, defined at the virtual switch or portgroup level. By default, promiscuous mode is disabled. When promiscuous mode is enabled, the network interfaces of virtual machines in the same portgroup can view all network traffic that goes through that virtual switch. You can implement a workaround with either ESXi or Hyper-V.
:::image type="content" source="media/tutorial-install-components/purdue-model.png" alt-text="A screenshot of where in your architecture the sensor should be placed.":::
defender-for-iot How To Manage Individual Sensors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-individual-sensors.md
This section describes how to ensure connection between the sensor and the on-pr
## Change the name of a sensor
-You can change the name of your sensor console. The new name will appear in the console web browser, in various console windows, and in troubleshooting logs.
+You can change the name of your sensor console. The new name will appear in:
+- The sensor console web browser
+- Various console windows
+- Troubleshooting logs
+- The **Sites and sensors** page in the Defender for IoT portal on Azure.
-The process for changing sensor names varies for locally connected sensors and cloud-connected sensors. The default name is **sensor**.
+The process for changing sensor names is the same for locally managed sensors and cloud-connected sensors.
-### Change the name of a locally connected sensor
+The sensor name is defined by the name assigned during the registration. The name is included in the activation file that you uploaded when signing in for the first time. To change the name of the sensor, you need to upload a new activation file.
-To change the name:
+**To change the name:**
-1. In the bottom of the left pane of the console, select the current sensor label.
+1. In the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/Getting_Started), go to the Sites and sensors page.
- :::image type="content" source="media/how-to-change-the-name-of-your-azure-consoles/label-name.png" alt-text="Screenshot that shows the sensor label.":::
+1. Delete the sensor from the page.
-1. In the **Edit sensor name** dialog box, enter a name.
-
-1. Select **Save**. The new name is applied.
-
-### Change the name of a cloud-connected sensor
-
-If your sensor was registered as a cloud-connected sensor, the sensor name is defined by the name assigned during the registration. The name is included in the activation file that you uploaded when signing in for the first time. To change the name of the sensor, you need to upload a new activation file.
-
-To change the name:
-
-1. In the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/Getting_Started), go to the Sites and Sensors page.
-
-1. Delete the sensor from the Sites and Sensors page.
-
-1. Register with the new name by selecting **Onboard sensor** from the Getting Started page.
+1. Register with the new name by selecting **Set up OT/ICS Security** from the Getting Started page.
1. Download the new activation file. 1. Sign in to the Defender for IoT sensor console.
-1. In the sensor console, select **System Settings** and then select **Reactivation**.
-
- :::image type="content" source="media/how-to-manage-sensors-on-the-cloud/reactivate.png" alt-text="Upload your activation file to reactivate the sensor.":::
+1. In the sensor console, select **System settings** > **Sensor management**, and then select **Subscription & Activation Mode**.
1. Select **Upload** and select the file you saved.
defender-for-iot How To Set Up Snmp Mib Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-set-up-snmp-mib-monitoring.md
Title: Set up SNMP MIB monitoring description: You can perform sensor health monitoring by using SNMP. The sensor responds to SNMP queries sent from an authorized monitoring server. Previously updated : 11/09/2021 Last updated : 01/31/2022
You can perform sensor health monitoring by using Simple Network Management Prot
The SNMP supported versions are SNMP v2 or SNMP v3. SNMP uses UDP as its transport protocol with port 161 (SNMP).
-Before you begin configuring SNMP monitoring, you need to open the port UDP 161 in the firewall.
-
-## OIDs
+## Sensor OIDs
| Management console and sensor | OID | Format | Description | |--|--|--|--|
Before you begin configuring SNMP monitoring, you need to open the port UDP 161
| Locally/cloud connected | 1.3.6.1.4.1.53313.6 |STRING | Activation mode of this appliance: Cloud Connected / Locally Connected | | License status | 1.3.6.1.4.1.53313.7 |STRING | Activation period of this appliance: Active / Expiration Date / Expired |
+Note that:
+- Non-existing keys respond with null, HTTP 200.
+- Hardware-related MIBs (CPU usage, CPU temperature, memory usage, disk usage) should be tested on all architectures and physical sensors. CPU temperature isn't applicable on virtual machines.
+- You can download the log that contains all the SNMP queries that the sensor receives, including the connection data and raw data.
- - Non-existing keys respond with null, HTTP 200.
-
- - Hardware-related MIBs (CPU usage, CPU temperature, memory usage, disk usage) should be tested on all architectures and physical sensors. CPU temperature on virtual machines is expected to be not applicable.
-You can download the log that contains all the SNMP queries that the sensor receives, including the connection data and raw data.
-To define SNMP v2 health monitoring:
+## Set up SNMP monitoring
+1. Before you begin configuring SNMP monitoring, you need to open the port UDP 161 in the firewall.
1. On the side menu, select **System Settings**.-
-2. In the **Active Discovery** pane, select **SNMP MIB Monitoring** :::image type="icon" source="media/how-to-set-up-snmp-mib-monitoring/snmp-icon.png" border="false":::.
-
- :::image type="content" source="media/how-to-set-up-snmp-mib-monitoring/edit-snmp.png" alt-text="Edit your SNMP window.":::
-
-3. In the **Allowed Hosts** section, select **Add host** and enter the IP address of the server that performs the system health monitoring.
-
- :::image type="content" source="media/how-to-set-up-snmp-mib-monitoring/snmp-allowed-ip-addess.png" alt-text="Enter the IP address for allowed hosts.":::
-
-4. In the **Authentication** section, in the **SNMP v2 Community String** box, enter the string. The SNMP community string can contain up to 32 characters and include any combination of alphanumeric characters (uppercase letters, lowercase letters, and numbers). Spaces are not allowed.
-
-5. Select **Save**.
-
-To define SNMP v3 health monitoring:
-
-1. On the side menu, select **System Settings**.
-
-2. On the **Active Discovery** pane, select **SNMP MIB Monitoring** :::image type="icon" source="media/how-to-set-up-snmp-mib-monitoring/snmp-icon.png" border="false":::.
-
- :::image type="content" source="media/how-to-set-up-snmp-mib-monitoring/edit-snmp.png" alt-text="Edit your SNMP window.":::
-
-3. In the **Allowed Hosts** section, select **Add host** and enter the IP address of the server that performs the system health monitoring.
-
- :::image type="content" source="media/how-to-set-up-snmp-mib-monitoring/snmp-allowed-ip-addess.png" alt-text="Enter the IP address for the allowed hosts.":::
-
-4. In the **Authentication** section, set the following parameters:
-
- | Parameter | Description |
- |--|--|
- | **Username** | The SNMP username can contain up to 32 characters and include any combination of alphanumeric characters (uppercase letters, lowercase letters, and numbers). Spaces are not allowed. <br /> <br />The username for the SNMP v3 authentication must be configured on the system and on the SNMP server. |
- | **Password** | Enter a case-sensitive authentication password. The authentication password can contain 8 to 12 characters and include any combination of alphanumeric characters (uppercase letters, lowercase letters, and numbers). <br /> <br/>The username for the SNMP v3 authentication must be configured on the system and on the SNMP server. |
- | **Auth Type** | Select MD5 or SHA. |
- | **Encryption** | Select DES or AES. |
- | **Secret Key** | The key must contain exactly eight characters and include any combination of alphanumeric characters (uppercase letters, lowercase letters, and numbers). |
+2. Expand **Sensor Management**, and select **SNMP MIB Monitoring**.
+3. Select **Add host** and enter the IP address of the server that performs the system health monitoring. You can add multiple servers.
+4. In the **Authentication** section, select the SNMP version.
+ - If you select V2, type the string in **SNMP v2 Community String**. You can enter up to 32 characters, and include any combination of alphanumeric characters (uppercase letters, lowercase letters, and numbers). Spaces aren't allowed.
+ - If you select V3, specify the following:
+
+ | Parameter | Description |
+ |--|--|
+ | **Username** | The SNMP username can contain up to 32 characters and include any combination of alphanumeric characters (uppercase letters, lowercase letters, and numbers). Spaces are not allowed. <br /> <br />The username for the SNMP v3 authentication must be configured on the system and on the SNMP server. |
+ | **Password** | Enter a case-sensitive authentication password. The authentication password can contain 8 to 12 characters and include any combination of alphanumeric characters (uppercase letters, lowercase letters, and numbers). <br /> <br/>The username for the SNMP v3 authentication must be configured on the system and on the SNMP server. |
+ | **Auth Type** | Select MD5 or SHA. |
+ | **Encryption** | Select DES or AES. |
+ | **Secret Key** | The key must contain exactly eight characters and include any combination of alphanumeric characters (uppercase letters, lowercase letters, and numbers). |
5. Select **Save**.
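Once a monitoring server is allowed and authenticated, it can query the sensor OIDs listed above with any standard SNMP client. As a minimal sketch of working with those OIDs (the OID values and the null-for-missing-keys behavior come from the table and notes above; the function and variable names here are illustrative, not part of any Defender for IoT API):

```python
# Map the documented Defender for IoT sensor OIDs to friendly names,
# and mimic the documented behavior that non-existing keys respond
# with null (None here).
SENSOR_OIDS = {
    "1.3.6.1.4.1.53313.6": "Activation mode (Cloud Connected / Locally Connected)",
    "1.3.6.1.4.1.53313.7": "License status (Active / Expiration Date / Expired)",
}

def describe_oid(oid: str):
    """Return the description for a known sensor OID, or None
    (mirroring the sensor's null response for non-existing keys)."""
    return SENSOR_OIDS.get(oid)

# A real v2 query from the monitoring server would look like, for
# example with the net-snmp tools:
#   snmpget -v2c -c <community-string> <sensor-ip> 1.3.6.1.4.1.53313.7
print(describe_oid("1.3.6.1.4.1.53313.7"))
print(describe_oid("1.3.6.1.4.1.53313.99"))  # unknown key -> None
```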
defender-for-iot How To Set Up Your Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-set-up-your-network.md
Verify that your organizational security policy allows access to the following:
| Protocol | Transport | In/Out | Port | Used | Purpose | Source | Destination | |--|--|--|--|--|--|--|--|
-| HTTPS / Websocket | TCP | In/Out | 443 | Gives the sensor access to the Azure portal. (Optional) Access can be granted through a proxy. | Access to Azure portal | Sensor | Azure portal |
+| HTTPS / Websocket | TCP | Out | 443 | Gives the sensor access to the Azure portal. (Optional) Access can be granted through a proxy. | Access to Azure portal | Sensor | *.azure-devices.net, *.blob.core.windows.net, *.servicebus.windows.net |
#### Sensor access to the on-premises management console
defender-for-iot How To Track Sensor Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-track-sensor-activity.md
Title: Track sensor activity
-description: The event timeline presents a timeline of activity detected on your network, including alerts and alert management actions, network events, and user operations such as user sign-in and user deletion.
Previously updated : 11/09/2021-
+ Title: Track sensor activity in Defender for IoT
+description: Track sensor activity in the event timeline.
Last updated : 02/01/2022+ # Track sensor activity
-## Event timeline
+Activity that your sensor detects is recorded in the event timeline. Activity includes alerts and alert actions, network events, and user operations such as user sign-in or user deletion.
-The event timeline presents a timeline of activity that your sensor has detected. For example:
+The event timeline provides a chronological view of events. Use the timeline during investigations, to understand and analyze the chain of events that preceded and followed an attack or incident.
- - Alerts and alert management actions
+## Before you start
- - Network events
+You need to have Administrator or Security Analyst permissions to perform the procedures described in this article.
- - User operations such as user sign-in and user deletion
+## View the event timeline
-The event timeline provides a chronological view of events that happened in the network. The event timeline allows understanding and analyses of the chain of events that preceded and followed an attack or incident, which assists in the investigation and forensics.
+1. In Defender for IoT, select **Event Timeline**.
+1. Review the events and filter as needed.
+1. Toggle **User Operations** to hide or show user events.
+1. Select **Add filter** to specify the events shown.
+1. In **Type**, filter the events shown using any of the following settings:
+ - **Event severity**: Show **Alerts Only**, **Alerts and Notices**, or **All Events**.
+ - **Device group**: Filter on specific devices defined in the device map.
+ - **Include devices**: Search for devices you want to include.
+ - **Exclude devices**: Search for devices you want to exclude.
+ - **Keywords**: Search for specific keywords.
+ - **Include Event Types**: Search for specific event types to include.
+ - **Exclude Event Types**: Search for specific event types to exclude.
+ - **Date**: Search for events in a specific date range.
+1. Select **Apply** to set the filter.
+1. Select **Export** to export the event timeline to a CSV file.
+
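The exported CSV can be filtered further in a script, for example to keep only alerts and notices. A minimal sketch (the column names `timestamp`, `severity`, and `description` are assumptions for illustration; check the header row of your actual export):

```python
import csv
import io

# Sample data standing in for an exported event-timeline CSV.
sample = io.StringIO(
    "timestamp,severity,description\n"
    "2022-02-01 10:00,Alert,Unauthorized internet connectivity\n"
    "2022-02-01 10:05,Notice,New IP detected\n"
    "2022-02-01 10:10,Info,User sign in\n"
)

def filter_events(fh, severities):
    """Keep only rows whose severity is in the given set."""
    return [row for row in csv.DictReader(fh) if row["severity"] in severities]

alerts_and_notices = filter_events(sample, {"Alert", "Notice"})
print(len(alerts_and_notices))  # 2
```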
+## Add an event
-> [!NOTE]
-> *Administrators* and *security analysts* can perform the procedures described in this section.
+In addition to viewing the events that the sensor has detected, you can manually add events to the timeline. This process is useful if an external system event impacts your network, and you want to record it on the timeline.
-To view the event logs:
+1. Select **Create Event**.
+1. In the **Create Event** dialog, specify the event type (Info, Notice, or Alert).
+1. Set a timestamp for the event, select the device it should be connected with, and provide a description.
+1. Select **Save** to add the event to the timeline.
-- From the side menu, select **Event Timeline**.
- :::image type="content" source="media/how-to-track-sensor-activity/event-timeline.png" alt-text="View your events on the event timeline.":::
-
-In addition to viewing the events that the sensor has detected, you can manually add events to the timeline. This process is useful if the event happened in an external system but has an impact on your network, and it's important to record the event and present it as a part of the timeline.
-
-To add events manually:
--- Select **Create Event**.-
-To export event log information into a CSV file:
--- Select **Export**.-
-## Filter the event timeline
-
-Filter the timeline to display devices and events of interest to you.
-
-To filter the timeline:
-
-1. Select **Advanced Filters**.
-
- :::image type="content" source="media/how-to-track-sensor-activity/advance-filters.png" alt-text="Use the Events Advanced Filters window to filter your events.":::
-
-2. Set event filters, as follows:
-
- - **Include Address**: Display events for specific devices.
-
- - **Exclude Address**: Hide events for specific devices.
-
- - **Include Event Types**: Display specific events types.
-
- - **Exclude Event Types**: Hide specific events types.
-
- - **Device Group**: Select a device group, as it was defined in the device map. Only the events from this group are presented.
-
-3. Select **Clear All** to clear all the selected filters.
-
-4. Search by **Alerts Only**, **Alerts and Notices**, or **All Events**.
-
-5. Select **Select Date** to choose a specific date. Choose a day, hour, and minute. Events from the selected time frame are shown.
-
-6. Select **User Operations** to include or exclude user operation events.
-
-7. Select the arrow (**V**) to view more information about the event:
-
- - Select the related alerts (if any) to display a detailed description of the alert.
-
- - Select the device to display the device on the map.
-
- - Select **Filter events by related devices** if you want to filter by related devices.
-
- - Select **PCAP File** to download the PCAP file (if it exists) that contains a packet capture of the whole network at a specific time.
-
- The PCAP file contains technical information that can help network engineers determine the exact parameters of the event. You can analyze the PCAP file with a network protocol analyzer such as Wireshark, an open-source application.
## See also
defender-for-iot How To Troubleshoot The Sensor And On Premises Management Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-troubleshoot-the-sensor-and-on-premises-management-console.md
Title: Troubleshoot the sensor and on-premises management console description: Troubleshoot your sensor and on-premises management console to eliminate any problems you might be having. Previously updated : 11/09/2021 Last updated : 02/10/2022 # Troubleshoot the sensor and on-premises management console This article describes basic troubleshooting tools for the sensor and the on-premises management console. In addition to the items described here, you can check the health of your system in the following ways:
-**Alerts**: An alert is created when the sensor interface that monitors the traffic is down.
+- **Alerts**: An alert is created when the sensor interface that monitors the traffic is down.
+- **SNMP**: Sensor health is monitored through SNMP. Microsoft Defender for IoT responds to SNMP queries sent from an authorized monitoring server.
+- **System notifications**: When a management console controls the sensor, you can forward alerts about failed sensor backups and disconnected sensors.
-**SNMP**: Sensor health is monitored through SNMP. Microsoft Defender for IoT responds to SNMP queries sent from an authorized monitoring server.
+## Troubleshoot sensors
-**System notifications**: When a management console controls the sensor, you can forward alerts about failed sensor backups and disconnected sensors.
-
-## Sensor troubleshooting tools
### Investigate password failure at initial sign in
-When signing into a preconfigured Arrow sensor for the first time, you'll need to perform password recovery.
-
-**To recover your password**:
+When signing into a preconfigured sensor for the first time, you'll need to perform password recovery as follows:
1. On the Defender for IoT sign in screen, select **Password recovery**. The **Password recovery** screen opens.
When signing into a preconfigured Arrow sensor for the first time, you'll need t
1. Enter the unique identifier that you received on the **Password recovery** screen and select **Recover**. The `password_recovery.zip` file is downloaded.
- :::image type="content" source="media/how-to-create-and-manage-users/enter-identifier.png" alt-text="Screenshot of the enter the unique identifier and then select recover.":::
+ :::image type="content" source="media/how-to-create-and-manage-users/enter-identifier.png" alt-text="Screenshot of the Recover dialog box.":::
> [!NOTE] > Don't alter the password recovery file. It's a signed file and won't work if you tamper with it.
When signing into a preconfigured Arrow sensor for the first time, you'll need t
### Investigate a lack of traffic
-An indicator appears at the top of the console when the sensor recognizes that there's no traffic on one of the configured ports. This indicator is visible to all users.
--
-When this message appears, you can investigate where there's no traffic. Make sure the span cable is connected and there was no change in the span architecture.
+An indicator appears at the top of the console when the sensor recognizes that there's no traffic on one of the configured ports. This indicator is visible to all users. When this message appears, you can investigate where there's no traffic. Make sure the span cable is connected and there was no change in the span architecture.
-For support and troubleshooting information, contact [Microsoft Support](https://support.serviceshub.microsoft.com/supportforbusiness/create?sapId=82c88f35-1b8e-f274-ec11-c6efdd6dd099).
### Check system performance
-When a new sensor is deployed or, for example, the sensor is working slowly or not showing any alerts, you can check system performance.
-
-**To check system performance**:
-
-1. In the dashboard, make sure that `PPS > 0`.
-
- :::image type="content" source="media/how-to-troubleshoot-the-sensor-and-on-premises-management-console/dashboard-view-v2.png" alt-text="Screenshot of a sample dashboard.":::
-
-1. From the side menu, select **Devices**.
-
-1. In the **Devices** window, make sure devices are being discovered.
-
- :::image type="content" source="media/how-to-troubleshoot-the-sensor-and-on-premises-management-console/discovered-devices.png" alt-text="Screenshot of the discovered devices.":::
-
-1. From the side menu, select **Data Mining**.
-
-1. In the **Data Mining** window, select **ALL** and generate a report.
-
- :::image type="content" source="media/how-to-troubleshoot-the-sensor-and-on-premises-management-console/new-report-generated.png" alt-text="Screenshot of the generate a new report by using data mining screen.":::
-
-1. Make sure the report contains data.
-
- :::image type="content" source="media/how-to-troubleshoot-the-sensor-and-on-premises-management-console/new-report-generated.png" alt-text="Screenshot of the ensure that the report contains data screen.":::
+When a new sensor is deployed or a sensor is working slowly or not showing any alerts, you can check system performance.
-1. From the side menu, select **Trends & Statistics**.
+1. In the Defender for IoT dashboard > **Overview**, make sure that `PPS > 0`.
+1. In **Devices**, check that devices are being discovered.
+1. In **Data Mining**, generate a report.
+1. In the **Trends & Statistics** window, create a dashboard.
+1. In **Alerts**, check that the expected alerts were created.
-1. In the **Trends & Statistics** window, select **Add Widget**.
- :::image type="content" source="media/how-to-troubleshoot-the-sensor-and-on-premises-management-console/add-widget.png" alt-text="Screenshot of the add a widget by selecting it.":::
-
-1. Add a widget and make sure it shows data.
-
- :::image type="content" source="media/how-to-troubleshoot-the-sensor-and-on-premises-management-console/widget-data.png" alt-text="Screenshot of the widget showing data.":::
-
-1. From the side menu, select **Alerts**. The **Alerts** window appears.
-
-1. Make sure the alerts were created.
-
- :::image type="content" source="media/how-to-troubleshoot-the-sensor-and-on-premises-management-console/alerts-created.png" alt-text="Screenshot of the alerts were created.":::
-
-### Investigate a lack of expected alerts on the sensor
+### Investigate a lack of expected alerts
If the **Alerts** window doesn't show an alert that you expected, verify the following: -- Check if the same alert already appears in the **Alerts** window as a reaction to a different security instance. If yes, and this alert has not been handled yet, the sensor console does not show a new alert.--- Make sure you did not exclude this alert by using the **Alert Exclusion** rules in the management console.-
-### Investigate widgets that show no data
+1. Check if the same alert already appears in the **Alerts** window as a reaction to a different security instance. If yes, and this alert has not been handled yet, the sensor console does not show a new alert.
+1. Make sure you did not exclude this alert by using the **Alert Exclusion** rules in the management console.
-When the widgets in the **Trends & Statistics** window show no data, do the following:
+### Investigate dashboards that show no data
-- [Check system performance](#check-system-performance).--- Make sure the time and region settings are properly configured and not set to a future time.
+When the dashboards in the **Trends & Statistics** window show no data, do the following:
+1. [Check system performance](#check-system-performance).
+1. Make sure the time and region settings are properly configured and not set to a future time.
### Investigate a device map that shows only broadcasting devices
-When devices shown on the map appear not connected to each other, something might be wrong with the SPAN port configuration. That is, you might be seeing only broadcasting devices and no unicast traffic.
--
-In such a case, validate that you only the broadcast traffic and then ask the network engineer to fix the SPAN port configuration so that you can see the unicast traffic as well.
-
-To validate that you're seeing only the broadcast traffic:
+When devices shown on the device map appear not connected to each other, something might be wrong with the SPAN port configuration. That is, you might be seeing only broadcasting devices and no unicast traffic.
-- On the **Data Mining** screen, create a report by using the **All** option. Then see if only broadcast and multicast traffic (and no unicast traffic) appears in the report.-
-Or:
--- Record a PCAP directly from the switch, or connect a laptop by using Wireshark.
+1. Validate that you're only seeing the broadcast traffic. To do this, in **Data Mining**, select **Create report**. In **Create new report**, specify the report fields. In **Choose Category**, choose **Select all**.
+1. Save the report, and review it to see if only broadcast and multicast traffic (and no unicast traffic) appears. If so, ask your network engineer to fix the SPAN port configuration so that you can see the unicast traffic as well. Alternatively, you can record a PCAP directly from the switch, or connect a laptop by using Wireshark.
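The broadcast-only check above can also be done programmatically against the destination MACs in a capture. A small sketch using the standard Ethernet rules (ff:ff:ff:ff:ff:ff is broadcast; any address whose first octet has its least-significant bit set is multicast; the sample addresses are illustrative):

```python
def classify_mac(mac: str) -> str:
    """Classify an Ethernet destination MAC as broadcast, multicast, or unicast."""
    octets = [int(part, 16) for part in mac.split(":")]
    if all(o == 0xFF for o in octets):
        return "broadcast"
    if octets[0] & 0x01:  # I/G bit set -> group (multicast) address
        return "multicast"
    return "unicast"

dest_macs = [
    "ff:ff:ff:ff:ff:ff",   # ARP broadcast
    "01:00:5e:00:00:fb",   # IPv4 multicast (mDNS)
    "00:1a:2b:3c:4d:5e",   # unicast; if none appear, suspect the SPAN config
]
for mac in dest_macs:
    print(mac, "->", classify_mac(mac))
```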
### Connect the sensor to NTP
To connect a sensor controlled by the management console to NTP:
### Investigate when devices aren't shown on the map, or you have multiple internet-related alerts
-Sometimes ICS devices are configured with external IP addresses. These ICS devices are not shown on the map. Instead of the devices, an internet cloud appears on the map. The IP addresses of these devices are included in the cloud image.
-
-Another indication of the same problem is when multiple internet-related alerts appear.
--
-**To fix the configuration**:
-
-1. Right-click the cloud icon on the device map and select **Export IP Addresses**. Copy the public ranges that are private, and add them to the subnet list. For more information, see [Configure subnets](how-to-control-what-traffic-is-monitored.md#configure-subnets).
+Sometimes ICS devices are configured with external IP addresses. These ICS devices are not shown on the map. Instead of the devices, an internet cloud appears on the map. The IP addresses of these devices are included in the cloud image. Another indication of the same problem is when multiple internet-related alerts appear. Fix the issue as follows:
+1. Right-click the cloud icon on the device map and select **Export IP Addresses**.
+1. Copy the public ranges that are private, and add them to the subnet list. Learn more about [configuring subnets](how-to-control-what-traffic-is-monitored.md#configure-subnets).
1. Generate a new data-mining report for internet connections.-
-1. In the data-mining report, select :::image type="icon" source="media/how-to-troubleshoot-the-sensor-and-on-premises-management-console/administrator-mode.png" border="false"::: to enter the administrator mode and delete the IP addresses of your ICS devices.
-
-### Tweak the sensor's Quality of Service (QoS)
-
-To save your network resources, you can limit the interface bandwidth that the sensor uses for day-to-day procedures.
-
-To limit the interface bandwidth, use the `cyberx-xsense-limit-interface` CLI tool that needs to be run with sudo permissions. The tool gets the following arguments:
--- `* -i`: interfaces (example: eth0).--- `* -l`: limit (example: 30 kbit / 1 mbit). You can use the following bandwidth units: kbps, mbps, kbit, mbit, or bps.--- `* -c`: clear (to clear the interface bandwidth limitation).-
-**To tweak the Quality of Service (QoS)**:
-
-1. Sign in to the sensor CLI as a Defender for IoT user, and enter `sudo cyberx-xsense-limit-interface-I eth0 -l value`.
-
- For example: `sudo cyberx-xsense-limit-interface -i eth0 -l 30kbit`
-
- > [!NOTE]
- > For a physical appliance, use the em1 interface.
-
-1. To clear interface limitation, enter `sudo cyberx-xsense-limit-interface -i eth0 -l 1mbps -c`.
+1. In the data-mining report, enter the administrator mode and delete the IP addresses of your ICS devices.
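After exporting the IP addresses behind the internet-cloud icon, the "public ranges that are private" can be picked out with a short script. A sketch using only the Python standard library's `ipaddress` module (the sample addresses are illustrative):

```python
import ipaddress

def split_private_public(addresses):
    """Separate RFC 1918 / other private addresses (candidates for the
    subnet list) from genuinely public addresses."""
    private, public = [], []
    for addr in addresses:
        ip = ipaddress.ip_address(addr)
        (private if ip.is_private else public).append(addr)
    return private, public

private, public = split_private_public(["10.1.2.3", "172.16.0.9", "8.8.8.8"])
print(private)  # these belong in the subnet list
print(public)
```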
## On-premises management console troubleshooting tools
To limit the number of alerts, use the `notifications.max_number_to_report` prop
1. Save the changes. No restart is required.
-## Export information from the sensor for troubleshooting
-
-In addition to tools for monitoring and analyzing your network, you can send information to the support team for further investigation. When you export logs, the sensor will automatically generate a one-time password (OTP), unique for the exported logs, in a separate text file.
-
-**To export logs**:
-
-1. On the left pane, select **System Settings**.
-
-1. Select **Export Logs**.
-
- :::image type="content" source="media/how-to-troubleshoot-the-sensor-and-on-premises-management-console/sensor-export-log.png" alt-text="Screenshot of the export a log to system support screen.":::
-
-1. In the **File Name** field, enter the file name that you want to use for the log export. The default is the current date.
-1. To define what data you want to export, select the data categories:
-
- | Export category | Description |
- |--|--|
- | **Operating System Logs** | Select this option to get information about the operating system state. |
- | **Installation/Upgrade logs** | Select this option for investigation of the installation and upgrade configuration parameters. |
- | **System Sanity Output** | Select this option to check system performance. |
- | **Dissection Logs** | Select this option to allow advanced inspection of protocol dissection. |
- | **OS Kernel Dumps** | Select this option to export your kernel memory dump. A kernel memory dump contains all the memory that the kernel is using at the time of the problem that occurred in this kernel. The size of the dump file is smaller than the complete memory dump. Typically, the dump file is around one-third the size of the physical memory on the system. |
- | **Forwarding logs** | Select this option for investigation of the forwarding rules. |
- | **SNMP Logs** | Select this option to receive SNMP health check information. |
- | **Core Application Logs** | Select this option to export data about the core application configuration and operation. |
- | **Communication with CM logs** | Select this option if there are continuous problems or interruptions of connection with the management console. |
- | **Web Application Logs** | Select this option to get information about all the requests sent from the application's web interface. |
- | **System Backup** | Select this option to export a backup of all the system data for investigating the exact state of the system. |
- | **Dissection Statistics** | Select this option to allow advanced inspection of protocol statistics. |
- | **Database Logs** | Select this option to export logs from the system database. Investigating system logs helps identify system problems. |
- | **Configuration** | Select this option to export information about all the configurable parameters to make sure everything was configured correctly. |
-
-1. To select all the options, select **Select All** next to **Choose Categories**.
-
-1. Select **Export Logs**.
-
-The exported logs are added to the **Archived Logs** list. Send the OTP to the support team in a separate message and medium from the exported logs. The support team will be able to extract exported logs only by using the unique OTP that's used to encrypt the logs.
-
-The list of archived logs can contain up to five items. If the number of items in the list goes beyond that number, the earliest item is deleted.
## Export audit log from the management console
Audit logs record key information at the time of occurrence. Audit logs are usef
| **Login** | User | | **User creation** | User, User role | | **Password reset** | User name |
-| **Exclusion rules**: </br></br>- Creation </br></br>- Editing </br></br>- Deletion | </br></br>Rule summary </br></br>Rule ID, Rule Summary </br></br>Rule ID |
+| **Exclusion rules - Creation** | Rule summary |
+| **Exclusion rules - Editing** | Rule ID, Rule summary |
+| **Exclusion rules - Deletion** | Rule ID |
| **Management Console Upgrade** | The upgrade file used | | **Sensor upgrade retry** | Sensor ID | | **Uploaded TI package** | No additional information recorded. |
Audit logs record key information at the time of occurrence. Audit logs are usef
1. Select **Export**.
- :::image type="content" source="media/how-to-troubleshoot-the-sensor-and-on-premises-management-console/audit-logs-export.png" alt-text="Screenshot of the select Audit Logs and then select Export to create your file screen.":::
- The exported log is added to the **Archived Logs** list. Select the :::image type="icon" source="media/how-to-troubleshoot-the-sensor-and-on-premises-management-console/eye-icon.png" border="false"::: button to view the OTP. Send the OTP string to the support team in a separate message from the exported logs. The support team will be able to extract exported logs only by using the unique OTP that's used to encrypt the logs. - ## Next steps - [View alerts](how-to-view-alerts.md)
defender-for-iot How To Work With Device Notifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-work-with-device-notifications.md
Title: Work with device notifications
-description: Notifications provide information about network activity that might require your attention, along with recommendations for handling this activity.
Previously updated : 11/09/2021
+ Title: Work with device notifications in Defender for IoT
+description: Notifications provide information and recommendations about network activity.
Last updated : 01/02/2022 # Work with device notifications
-Notifications provide information about network activity that might require your attention, along with recommendations for handling this activity. For example, you might receive a notification about:
+Discovery notifications provide information about network activity that might require your attention, along with recommendations for handling this activity. For example, you might receive a notification about an inactive device that should be reconnected, or removed if it's no longer part of the network. Notifications aren't the same as alerts. Alerts provide information about changes that might present a threat to your network.
-- An inactive device. If the device is no longer a part of your network, you can remove it. If the device is inactive, for example because it's mistakenly disconnected from the network, reconnect the device and dismiss the notification.
+## Notification types
-- An IP address was detected on a device that's currently identified by a MAC address only. Respond by authorizing the IP address for the device.
+The following table describes the notification event types you might receive, along with the options for handling them. When you dismiss a notification, the device information is not updated with the recommended action. If traffic is detected again, the notification is resent.
-Responding to notifications improves the information provided in the device map, device inventory, and data-mining queries and reports. It also provides insight into legitimate network changes and potential network misconfigurations.
-
-**Notifications vs. alerts**
-
-In addition to receiving notifications on network activity, you might receive *alerts*. Notifications provide information about network changes or unresolved device properties that don't present a threat. Alerts provide information about network deviations and changes that might present a threat to the network.
-
-To view notifications:
-
-1. Select **Device Map** from the console's left menu pane.
-
-2. Select the **Notifications** icon. The red number above the icon indicates the number of notifications.
-
- :::image type="content" source="media/how-to-enrich-asset-information/notifications-alert-screenshot.png" alt-text="Notification icon.":::
-
- The **Notifications** window displays all notifications that the sensor has detected.
-
- :::image type="content" source="media/how-to-enrich-asset-information/notification-screen.png" alt-text="Notifications.":::
-
-## Filter the Notifications window
-
-Use search filters to display notifications of interest to you.
-
-| Filter by | Description |
-|--|--|
-| Filter by type | View notifications that cover a specific area of interest. For example, view only notifications for inactive devices. |
-| Filter by date range | Display notifications that cover a specific time range. For example, view notifications sent over the last week only. |
-| Search for specific information | Search for specific notifications. |
-
-## Notification events and responses
-
-The following table describes the notification event types you might receive, along with the options for handling them. You can update the device information with a recommended value or dismiss the notification. When you dismiss a notification, the device information is not updated with the recommended information. If traffic is detected again, the notification will be re-sent.
-
-| Notification event types | Description | Responses |
+| Type | Description | Responses |
|--|--|--| | New IP detected | A new IP address is associated with the device. Five scenarios might be detected: <br /><br /> An additional IP address was associated with a device. This device is also associated with an existing MAC address.<br /><br /> A new IP address was detected for a device that's using an existing MAC address. Currently the device does not communicate by using an IP address.<br /> <br /> A new IP address was detected for a device that's using a NetBIOS name. <br /><br /> An IP address was detected as the management interface for a device associated with a MAC address. <br /><br /> A new IP address was detected for a device that's using a virtual IP address. | **Set Additional IP to Device** (merge devices) <br /> <br />**Replace Existing IP** <br /> <br /> **Dismiss**<br /> Remove the notification. | | Inactive devices | Traffic was not detected on a device for more than 60 days. | **Delete** <br /> If this device is not part of your network, remove it. <br /><br />**Dismiss** <br /> Remove the notification if the device is part of your network. If the device is inactive (for example, because it's mistakenly disconnected from the network), dismiss the notification and reconnect the device. |
The following table describes the notification event types you might receive, al
| New subnets | New subnets were discovered. | **Learn**<br />Automatically add the subnet.<br />**Open Subnet Configuration**<br />Add all missing subnet information.<br />**Dismiss**<br />Remove the notification. | | Device type changes | A new device type has been associated with the device. | **Set as {…}**<br />Associate the new type with the device.<br />**Dismiss**<br />Remove the notification. |
-## Respond to many notifications simultaneously
+## View notifications
-You might need to handle several notifications simultaneously. For example:
+1. In Defender for IoT, select **Device Map**.
+1. Select the **Notifications** icon.
+1. In **Discovery Notifications**, review all notifications.
+1. For each notification, either accept the recommendation, or dismiss it.
+1. By default, all notifications are shown.
+    - To filter for specific dates and times, select **Time range ==** and specify a filter in days, weeks, or months.
+ - Select **Add filter** to filter on other device, subnet, and operating system values.
-- If IT did an OS upgrade to a large set of network servers, you can instruct the sensor to learn the new server versions for all upgraded servers. -- If a group of devices in a certain line was phased out and isn't active anymore, you can instruct the sensor to remove these devices from the console.
+## Respond to multiple notifications
-You can instruct the sensor to apply newly detected information to multiple devices or ignore it.
-
-To display notifications and handle notifications:
-
-1. Use the **filter by type, date range** option or the **Select All** option. Deselect notifications as required.
+You might need to handle several notifications simultaneously. For example:
-2. Instruct the sensor to apply newly detected information to selected devices by selecting **LEARN**. Or, instruct the sensor to ignore newly detected information by selecting **DISMISS**. The number of notifications that you can simultaneously learn and dismiss, along with the number of notifications you must handle individually, is shown.
+- If IT did an OS upgrade to a large set of network servers, you can instruct the sensor to learn the new server versions for all upgraded servers.
+- If a group of devices in a certain line was phased out and isn't active anymore, you can instruct the sensor to remove these devices from the console.
-**New IPs** and **No Subnets** configured events can't be handled simultaneously. They require manual confirmation.
+Respond as follows:
+1. In **Discovery Notifications**, choose **Select All**, and then clear the notifications you don't need. When you choose **Select All**, Defender for IoT displays information about which notifications can be handled or dismissed simultaneously, and which need your input.
+1. You can accept all recommendations, dismiss all recommendations, or handle notifications one at a time.
+1. For notifications that indicate manual changes are required, such as **New IPs** and **No Subnets**, make the manual modifications as needed.
## See also [View alerts](how-to-view-alerts.md)
defender-for-iot References Work With Defender For Iot Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/references-work-with-defender-for-iot-apis.md
Title: Work with Defender for IoT APIs description: Use an external REST API to access the data discovered by sensors and management consoles and perform actions with that data. Previously updated : 11/21/2021 Last updated : 01/31/2022
Use an external REST API to access the data discovered by sensors and management
Connections are secured over SSL.
-## Getting started
+## Generate a token
-In general, when you're using an external API on the Microsoft Defender for IoT sensor or on-premises management console, you need to generate an access token. Tokens are not required for authentication APIs that you use on the sensor and the on-premises management console.
+In general, when you're using an external API on the Microsoft Defender for IoT sensor or on-premises management console, you need to generate an access token. Tokens aren't required for authentication APIs that you use on the sensor and the on-premises management console.
To generate a token:
-1. In the **System Settings** window, select **Access Tokens**.
-
- :::image type="content" source="media/references-work-with-defender-for-iot-apis/access-tokens.png" alt-text="Screenshot of System Settings windows highlighting the Access Tokens button.":::
-
-1. Select **Generate new token**.
-
- :::image type="content" source="media/references-work-with-defender-for-iot-apis/new-token.png" alt-text="Select the button to generate a new token.":::
-
-1. Describe the purpose of the new token and select **Next**.
-
- :::image type="content" source="media/references-work-with-defender-for-iot-apis/token-name.png" alt-text="Generate a new token and enter the name of the integration associated with it.":::
-
+1. In the **System Settings** window, select **Integrations** > **Access Tokens**.
+1. Select **Generate token**.
+1. In **Description**, describe what the new token is for, and select **Generate**.
1. The access token appears. Copy it, because it won't be displayed again.
+1. Select **Finish**.
+  - The tokens that you create appear in the **Access Tokens** dialog box. The **Used** field indicates the last time an external call with this token was received.
+ - **N/A** in the **Used** field indicates that the connection between the sensor and the connected server isn't working.
- :::image type="content" source="media/references-work-with-defender-for-iot-apis/token-code.png" alt-text="Copy your access token for your integration.":::
-
-1. Select **Finish**. The tokens that you create appear in the **Access Tokens** dialog box.
-
- :::image type="content" source="media/references-work-with-defender-for-iot-apis/access-token-window.png" alt-text="Screenshot of Device Tokens dialog box with filled-out tokens":::
-
- **Used** indicates the last time an external call with this token was received.
+After generating the token, add an HTTP header titled **Authorization** to your request, and set its value to the token that you generated.
- If **N/A** is displayed in the **Used** field for this token, the connection between the sensor and the connected server is not working.
-
-1. Add an HTTP header titled **Authorization** to your request, and set its value to the token that you generated.
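As an illustration of the request shape, here's a minimal Python sketch that builds (but doesn't send) a request carrying the token in the **Authorization** header. The sensor address and token are placeholder values, and the URL reuses the PCAP endpoint shown in the curl examples:

```python
import urllib.request

# Placeholder values: substitute your sensor's IP address and the access
# token you copied from the Access Tokens dialog box.
sensor_address = "10.1.0.2"
access_token = "d2791f58-2a88-34fd-ae5c-2651fe30a63c"

# Build (but don't send) a request to the alert PCAP endpoint, passing the
# generated token in an Authorization header, as the API expects.
request = urllib.request.Request(
    url=f"https://{sensor_address}/api/v2/alerts/pcap/1",
    headers={"Authorization": access_token},
)

print(request.get_header("Authorization"))
```

Opening the request then returns the sensor data, provided the token is valid and the sensor is reachable.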
-
-## Sensor API specifications
-
-This section describes the following sensor APIs:
+## Sensor APIs
### No version
Example:
|-|-|-| |GET|`curl -k -H "Authorization: <AUTH_TOKEN>" 'https://<IP_ADDRESS>/api/v2/alerts/pcap/<ID>'`|`curl -k -H "Authorization: d2791f58-2a88-34fd-ae5c-2651fe30a63c" 'https://10.1.0.2/api/v2/alerts/pcap/1'`|
-## On-premises management console API specifications
+## Management console API
This section describes on-premises management console APIs for:
response:
### QRadar alerts
-QRadar integration with Defender for IoT helps you identify the alerts generated by Defender for IoT and perform actions with these alerts. QRadar receives the data from Defender for IoT and then contacts the public API on-premises management console component.
-
-To send the data discovered by Defender for IoT to QRadar, define a forwarding rule in the Defender for IoT system and select the **Remote support alert handling** option.
+QRadar integration helps you to identify and take action on alerts generated by Defender for IoT. QRadar receives the data from Defender for IoT, and then contacts the public API on the on-premises management console.
-
-When you select this option during the process of configuring forwarding rules, the following additional fields appear in QRadar:
+To send the data discovered by Defender for IoT to QRadar, define a forwarding rule in the Defender for IoT system and select the **Remote support alert handling** option. With this option selected, the following additional fields appear in QRadar:
- **UUID**: Unique alert identifier, such as 1-1555245116250.
When you select this option during the process of configuring forwarding rules,
- **Zone**: The zone where the alert was discovered.
-Example of the payload sent to QRadar:
+#### Example of the payload sent to QRadar
``` <9>May 5 12:29:23 sensor_Agent LEEF:1.0|CyberX|CyberX platform|2.5.0|CyberX platform Alert|devTime=May 05 2019 15:28:54 devTimeFormat=MMM dd yyyy HH:mm:ss sev=2 cat=XSense Alerts title=Device is Suspected to be Disconnected (Unresponsive) score=81 reporter=192.168.219.50 rta=0 alertId=6 engine=Operational senderName=sensor Agent UUID=5-1557059334000 site=Site zone=Zone actions=handle dst=192.168.2.2 dstName=192.168.2.2 msg=Device 192.168.2.2 is suspected to be disconnected (unresponsive).
Array of JSON objects that represent devices.
| Name | Type | Nullable | List of values | |--|--|--|--| | **Name** | String | No | |
-| **Addresses** | JSON array | Yes | Master, or numeric values |
+| **Addresses** | JSON array | Yes | `Master`, or numeric values |
#### Firmware fields
Array of JSON objects that represent devices.
| Name | Type | Nullable | List of values | |--|--|--|--| | Name | String | No | - |
-| Addresses | JSON array | Yes | Master, or numeric values |
+| Addresses | JSON array | Yes | `Master`, or numeric values |
#### Firmware fields
Array of JSON objects that represent devices.
"error": "Invalid action" } ```-
#### Curl command | Type | APIs | Example |
Array of JSON objects that represent devices.
Define conditions under which alerts won't be sent. For example, define and update stop and start times, devices or subnets that should be excluded when triggering alerts, or Defender for IoT engines that should be excluded. For example, during a maintenance window, you might want to stop alert delivery of all alerts, except for malware alerts on critical devices.
-The APIs that you define here appear in the on-premises management console's **Alert Exclusions** window as a read-only exclusion rule.
-
+The APIs that you define here appear in the on-premises management console's **Alert Exclusions** window as a read-only exclusion rule.
#### Method - POST
The below API's can be used with the ServiceNow integration via the ServiceNow's
- “**timestamp**” – the time from which updates are required, only later updates will be returned. - Query parameters:
  - “**sensorId**” - use this parameter to get only devices seen by a specific sensor. The Id should be taken from the results of the Sensors API.
  + “**sensorId**” - use this parameter to get only devices seen by a specific sensor. The ID should be taken from the results of the Sensors API.
  - “**notificationType**” - should be a number, from the following mapping: - 0 – both updated and new devices (default). - 1 – only new devices.
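As a sketch of how a client might combine these query parameters, the following Python snippet builds the query string for a request scoped to one sensor and new devices only. The parameter values are illustrative, not taken from a real deployment:

```python
from urllib.parse import urlencode

# Illustrative values: sensorId is taken from the Sensors API results, and
# notificationType 1 requests only new devices (0, the default, returns
# both updated and new devices).
params = {"sensorId": 17, "notificationType": 1}
query_string = urlencode(params)

print(query_string)  # sensorId=17&notificationType=1
```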
The below API's can be used with the ServiceNow integration via the ServiceNow's
- Structure: - “**u_count**” - number of objects in the full result set, including all pages. - “**u_connections**” - array of
  - “**u_src_device_id**” - the Id of the source device.
  - “**u_dest_device_id**” - the Id of the destination device.
  + “**u_src_device_id**” - the ID of the source device.
  + “**u_dest_device_id**” - the ID of the destination device.
- “**u_connection_type**” - one of the following: - “**One Way**” - “**Two Way**”
The below API's can be used with the ServiceNow integration via the ServiceNow's
- Path: “/device/{deviceId}” - Method type: GET - Path parameters:
  - “**deviceId**” – the Id of the requested device.
  + “**deviceId**” – the ID of the requested device.
#### Response
defender-for-iot References Work With Defender For Iot Cli Commands https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/references-work-with-defender-for-iot-cli-commands.md
This article describes CLI commands for sensors and on-premises management conso
- Administrator - CyberX - Support
+- cyberx_host
To start working in the CLI, connect using a terminal emulator such as `PuTTY`, and sign in as the `Support` user.
defender-for-iot Tutorial Onboarding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/tutorial-onboarding.md
The validation is available to both the **Support**, and **CyberX** user.
1. Sign in to the sensor.
-1. Select **System Settings** from the left side pane.
+1. Select **System Settings** > **Health and troubleshooting** > **System Health Check**.
-1. Select the :::image type="icon" source="media/tutorial-onboarding/system-statistics-icon.png" border="false"::: button.
+1. Select a command.
- :::image type="content" source="media/tutorial-onboarding/system-health-check-screen.png" alt-text="Screenshot of the system health check." lightbox="media/tutorial-onboarding/system-health-check-screen-expanded.png":::
-
-For post-installation validation, you must test to ensure the system is running, that you have the right version, and to verify that all of the input interfaces that were configured during the installation process are running.
+For post-installation validation, test that:
+- the system is running
+- you have the right version
+- all of the input interfaces that were configured during the installation process are running
**To verify that the system is running**:
Once registration is complete for the sensor, you will be able to download an ac
1. Enter the credentials defined during the sensor installation.
-1. After you sign in, the **Activation** dialog box opens. Select **Upload** and go to the activation file that you downloaded during the sensor onboarding.
-
- :::image type="content" source="media/tutorial-onboarding/activation-upload-screen-with-upload-button.png" alt-text="Screenshot of selecting to upload and go to the activation file.":::
-
-1. Accept the terms and conditions.
-
-1. Select **Activate**. The SSL/TLS certificate dialog box opens.
-
-1. Define a certificate name.
-
-1. Upload the CRT and key files.
-
-1. Enter a passphrase and upload a PEM file if necessary.
-
-1. Select **Next**. The validation screen opens. By default, validation between the management console and connected sensors is enabled.
-
-1. Turn off the **Enable system-wide validation** toggle to disable validation. We recommend that you enable validation.
+1. Select **Log in** and follow the instructions described in [Activate and set up your sensor](how-to-activate-and-set-up-your-sensor.md#activate-and-set-up-your-sensor).
-1. Select **Save**.
-You might need to refresh your screen after uploading the CA-signed certificate.
## Next steps
devtest-labs Image Factory Save Distribute Custom Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/image-factory-save-distribute-custom-images.md
The following items should already be in place:
If needed, follow steps in the [Run an image factory from Azure DevOps](image-factory-set-up-devops-lab.md) to create or set up these items. ## Save VMs as generalized VHDs
-Save the existing VMs as generalized VHDs. There's a sample PowerShell script to save the existing VMs as generalized VHDs. To use it, first, add another **Azure Powershell** task to the build definition as shown in the following image:
+Save the existing VMs as generalized VHDs. There's a sample PowerShell script to save the existing VMs as generalized VHDs. To use it, first, add another **Azure PowerShell** task to the build definition as shown in the following image:
![Add Azure PowerShell step](./media/save-distribute-custom-images/powershell-step.png)
dms Known Issues Azure Postgresql Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/known-issues-azure-postgresql-online.md
Title: "Known issues: Online migrations from PostgreSQL to Azure Database for Po
description: Learn about known issues and migration limitations with online migrations from PostgreSQL to Azure Database for PostgreSQL using the Azure Database Migration Service. --++
dms Tutorial Azure Postgresql To Azure Postgresql Online Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-azure-postgresql-to-azure-postgresql-online-portal.md
Title: "Tutorial: Migrate Azure DB for PostgreSQL to Azure DB for PostgreSQL onl
description: Learn to perform an online migration from one Azure DB for PostgreSQL to another Azure Database for PostgreSQL by using Azure Database Migration Service via the Azure portal. --++
dms Tutorial Postgresql Azure Postgresql Online Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-postgresql-azure-postgresql-online-portal.md
Title: "Tutorial: Migrate PostgreSQL to Azure DB for PostgreSQL online via the A
description: Learn to perform an online migration from PostgreSQL on-premises to Azure Database for PostgreSQL by using Azure Database Migration Service via the Azure portal. --++
dms Tutorial Postgresql Azure Postgresql Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-postgresql-azure-postgresql-online.md
Title: "Tutorial: Migrate PostgreSQL to Azure Database for PostgreSQL online via
description: Learn to perform an online migration from PostgreSQL on-premises to Azure Database for PostgreSQL by using Azure Database Migration Service via the CLI. --++
dms Tutorial Rds Postgresql Server Azure Db For Postgresql Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-rds-postgresql-server-azure-db-for-postgresql-online.md
Title: "Tutorial: Migrate RDS PostgreSQL online to Azure Database for PostgreSQL
description: Learn to perform an online migration from RDS PostgreSQL to Azure Database for PostgreSQL by using the Azure Database Migration Service. --++
event-hubs Event Hubs Node Get Started Send https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-node-get-started-send.md
Title: Send or receive events from Azure Event Hubs using JavaScript (latest) description: This article provides a walkthrough for creating a JavaScript application that sends/receives events to/from Azure Event Hubs using the latest azure/event-hubs package. Previously updated : 09/16/2021 Last updated : 02/22/2022 ms.devlang: javascript
In this section, you create a JavaScript application that sends events to an eve
* `EVENT HUBS NAMESPACE CONNECTION STRING` * `EVENT HUB NAME` 1. Run `node send.js` to execute this file. This command sends a batch of three events to your event hub.
-1. In the Azure portal, verify that the event hub has received the messages. In the **Metrics** section, switch to **Messages** view. Refresh the page to update the chart. It might take a few seconds for it to show that the messages have been received.
+1. In the Azure portal, verify that the event hub has received the messages. Refresh the page to update the chart. It might take a few seconds for it to show that the messages have been received.
- [![Verify that the event hub received the messages](./media/getstarted-dotnet-standard-send-v2/verify-messages-portal.png)](./media/getstarted-dotnet-standard-send-v2/verify-messages-portal.png#lightbox)
+ [![Verify that the event hub received the messages](./media/node-get-started-send/verify-messages-portal.png)](./media/node-get-started-send/verify-messages-portal.png#lightbox)
> [!NOTE] > For the complete source code, including additional informational comments, go to the [GitHub sendEvents.js page](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/eventhub/event-hubs/samples/v5/javascript/sendEvents.js).
Be sure to record the connection string and container name for later use in the
1. Create a file called *receive.js*, and paste the following code into it: ```javascript
- const { EventHubConsumerClient } = require("@azure/event-hubs");
+ const { EventHubConsumerClient, earliestEventPosition } = require("@azure/event-hubs");
const { ContainerClient } = require("@azure/storage-blob"); const { BlobCheckpointStore } = require("@azure/eventhubs-checkpointstore-blob");-
+
const connectionString = "EVENT HUBS NAMESPACE CONNECTION STRING"; const eventHubName = "EVENT HUB NAME"; const consumerGroup = "$Default"; // name of the default consumer group const storageConnectionString = "AZURE STORAGE CONNECTION STRING"; const containerName = "BLOB CONTAINER NAME";-
+
async function main() { // Create a blob container client and a blob checkpoint store using the client. const containerClient = new ContainerClient(storageConnectionString, containerName); const checkpointStore = new BlobCheckpointStore(containerClient);-
+
// Create a consumer client for the event hub by specifying the checkpoint store. const consumerClient = new EventHubConsumerClient(consumerGroup, connectionString, eventHubName, checkpointStore);-
+
// Subscribe to the events, and specify handlers for processing the events and errors. const subscription = consumerClient.subscribe({ processEvents: async (events, context) => {
Be sure to record the connection string and container name for later use in the
console.log(`No events received within wait time. Waiting for next interval`); return; }
-
+
for (const event of events) { console.log(`Received event: '${event.body}' from partition: '${context.partitionId}' and consumer group: '${context.consumerGroup}'`); } // Update the checkpoint. await context.updateCheckpoint(events[events.length - 1]); },-
+
processError: async (err, context) => { console.log(`Error : ${err}`); }
- }
+ },
+ { startPosition: earliestEventPosition }
);-
+
// After 30 seconds, stop processing. await new Promise((resolve) => { setTimeout(async () => {
Be sure to record the connection string and container name for later use in the
}, 30000); }); }-
+
main().catch((err) => { console.log("Error occurred: ", err);
- });
+ });
``` 1. In the code, use real values to replace the following values: - `EVENT HUBS NAMESPACE CONNECTION STRING`
Be sure to record the connection string and container name for later use in the
- `BLOB CONTAINER NAME` 1. Run `node receive.js` in a command prompt to execute this file. The window should display messages about received events.
+ ```
+ C:\Self Study\Event Hubs\JavaScript>node receive.js
+ Received event: 'First event' from partition: '0' and consumer group: '$Default'
+ Received event: 'Second event' from partition: '0' and consumer group: '$Default'
+ Received event: 'Third event' from partition: '0' and consumer group: '$Default'
+ ```
> [!NOTE] > For the complete source code, including additional informational comments, go to the [GitHub receiveEventsUsingCheckpointStore.js page](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/eventhub/eventhubs-checkpointstore-blob/samples/v1/javascript/receiveEventsUsingCheckpointStore.js).
expressroute How To Move Peering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/how-to-move-peering.md
If the layer 3 is managed by you the following information is required before yo
Detailed instructions to enable Microsoft peering can be found in the following articles: * [Create Microsoft peering using Azure portal](expressroute-howto-routing-portal-resource-manager.md#msft)<br>
-* [Create Microsoft peering using Azure Powershell](expressroute-howto-routing-arm.md#msft)<br>
+* [Create Microsoft peering using Azure PowerShell](expressroute-howto-routing-arm.md#msft)<br>
* [Create Microsoft peering using Azure CLI](howto-routing-cli.md#msft) ## <a name="validate"></a>2. Validate Microsoft peering is enabled
expressroute Site To Site Vpn Over Microsoft Peering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/site-to-site-vpn-over-microsoft-peering.md
Configure your firewall and filtering according to your requirements.
## <a name="testipsec"></a>6. Test and validate the IPsec tunnel
-The status of IPsec tunnels can be verified on the Azure VPN gateway by Powershell commands:
+The status of IPsec tunnels can be verified on the Azure VPN gateway by PowerShell commands:
```azurepowershell-interactive Get-AzVirtualNetworkGatewayConnection -Name vpn2local1 -ResourceGroupName myRG | Select-Object ConnectionStatus,EgressBytesTransferred,IngressBytesTransferred | fl
firewall-manager Migrate To Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/migrate-to-policy.md
-# Migrate Azure Firewall configurations to Azure Firewall policy using Powershell
+# Migrate Azure Firewall configurations to Azure Firewall policy using PowerShell
You can use an Azure PowerShell script to migrate existing Azure Firewall configurations to an Azure Firewall policy resource. You can then use Azure Firewall Manager to deploy the policy.
firewall Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/overview.md
To learn about Firewall Standard features, see [Azure Firewall Standard features
## Azure Firewall Premium
- Azure Firewall Premium provides advanced capabilities include signature-based IDPS to allow rapid detection of attacks by looking for specific patterns. These patterns can includes byte sequences in network traffic, or known malicious instruction sequences used by malware. There are more than 58,000 signatures in over 50 categories which are updated in real time to protect against new and emerging exploits. The exploit categories include malware, phishing, coin mining, and Trojan attacks.
 + Azure Firewall Premium provides advanced capabilities, including signature-based IDPS to allow rapid detection of attacks by looking for specific patterns. These patterns can include byte sequences in network traffic, or known malicious instruction sequences used by malware. There are more than 58,000 signatures in over 50 categories, which are updated in real time to protect against new and emerging exploits. The exploit categories include malware, phishing, coin mining, and Trojan attacks.
![Firewall Premium overview](media/overview/firewall-premium.png)
firewall Premium Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/premium-migrate.md
Previously updated : 02/03/2022 Last updated : 02/22/2022
You can migrate Azure Firewall Standard to Azure Firewall Premium to take advantage of the new Premium capabilities. For more information about Azure Firewall Premium features, see [Azure Firewall Premium features](premium-features.md).
-The following two examples show how to:
-- Migrate an existing standard policy using Azure PowerShell-- Migrate an existing standard firewall (with classic rules) to Azure Firewall Premium with a Premium policy
+This article guides you through the steps required to manually migrate your Standard firewall and policy to Premium.
+
+Before you start the migration, understand the [performance considerations](#performance-considerations) and plan ahead for the required maintenance window. A typical downtime of 20-30 minutes is expected.
+
+The following general steps are required for a successful migration:
+
+1. Create a new Premium policy based on your existing Standard policy or classic rules. By the end of this step, your new Premium policy will include all your existing rules and policy settings.
+ - [Migrate Classic rules to Standard policy](#migrate-classic-rules-to-standard-policy)
+ - [Migrate an existing policy using Azure PowerShell](#migrate-an-existing-policy-using-azure-powershell)
+1. [Migrate Azure Firewall from Standard to Premium using stop/start](#migrate-azure-firewall-using-stopstart).
+1. [Attach the newly created Premium policy to your Premium Firewall](#attach-a-premium-policy-to-a-premium-firewall).
> [!IMPORTANT] > Upgrading a Standard Firewall deployed in Southeast Asia with Availability Zones is not currently supported.
frontdoor Concept Origin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/concept-origin.md
Previously updated : 02/18/2021 Last updated : 02/12/2022
Azure Front Door Standard/Premium origin refers to the host name or public IP of
* **Subscription and Origin host name:** If you didn't select **Custom host** for your backend host type, select your backend by choosing the appropriate subscription and the corresponding backend host name.
+* **Private Link:** Azure Front Door Premium supports sending traffic to an origin by using Private Link. For more information, see [Secure your Origin with Private Link](concept-private-link.md).
+ * **Origin host header:** The host header value sent to the backend for each request. For more information, see [Origin host header](#hostheader). * **Priority:** Assign priorities to your different origin when you want to use a primary origin for all traffic. Also, provide backups if the primary or the backup origins are unavailable. For more information, see [Priority](#priority).
frontdoor Concept Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/concept-private-link.md
documentationcenter: ''
Previously updated : 02/18/2021 Last updated : 02/12/2022
> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. > For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-Azure Front Door Premium SKU can connect to your origin via private link service. Your applications can be hosted in your private VNet or behind a PaaS service such as Web App and Storage Account, removing the need for your origin to be publically accessible.
+Azure Front Door Premium can connect to your origin via Private Link. Your origin can be hosted in your private VNet or by using a PaaS service such as Azure App Service or Azure Storage. Private Link removes the need for your origin to be publicly accessible.
:::image type="content" source="../media/concept-private-link/front-door-private-endpoint-architecture.png" alt-text="Front Door Private Endpoints architecture":::
-When you enable Private Link to your origin in Azure Front Door Premium configuration, Front Door creates a private endpoint on your behalf from Front Door's regional private network. This endpoint is managed by Azure Front Door. You'll receive an Azure Front Door private endpoint request for approval message at your origin. After you approve the request, a private IP address gets assigned from Front Door's virtual network, traffic between Azure Front Door and your origin traverses the established private link with Azure network backbone. Incoming traffic to your origin is now secured when coming from your Azure Front Door.
+When you enable Private Link to your origin in Azure Front Door Premium, Front Door creates a private endpoint on your behalf from a regional private network managed by Azure Front Door. You'll receive an Azure Front Door private endpoint request for approval message at your origin.
+You must approve the private endpoint connection before traffic will flow to the origin. You can approve private endpoint connections by using the Azure portal, the Azure CLI, or Azure PowerShell. For more information, see [Manage a Private Endpoint connection](../../private-link/manage-private-endpoint.md).
+
+> [!IMPORTANT]
+> You must approve the private endpoint connection before traffic will flow to your origin.
+
+After you enable a Private Link origin and approve the private endpoint connection, it takes a few minutes for the connection to be established. During this time, requests to the origin will receive a Front Door error message. The error message will go away once the connection is established.
-> [!NOTE]
-> Once you enable a Private Link origin and approve the private endpoint connection, it takes a few minutes for the connection to be established. During this time, requests to the origin will receive a Front Door error message. The error message will go away once the connection is established.
+After you approve the request, a private IP address gets assigned from Front Door's virtual network. Traffic between Azure Front Door and your origin traverses the established private link by using Azure's network backbone. Incoming traffic to your origin is now secured when coming from your Azure Front Door.
+ ## Limitations
-Azure Front Door private endpoints are available in the following regions during public preview: East US, West 2 US, South Central US, UK South, and Japan East.
+Azure Front Door private endpoints are available in the following regions during public preview: East US, West US 2, South Central US, UK South, and Japan East.
For the best latency, always pick the Azure region closest to your origin when you enable the Front Door Private Link endpoint.
-Azure Front Door private endpoints get managed by the platform and under the subscription of Azure Front Door. Azure Front Door allows private link connections to the same customer subscription that is used to create the Front Door profile.
- ## Next steps * To connect Azure Front Door Premium to your Web App via Private Link service, see [Connect Azure Front Door Premium to a Web App origin with Private Link](../../frontdoor/standard-premium/how-to-enable-private-link-web-app.md).
governance Assignment Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/assignment-structure.md
after creation of the initial assignment.
## Policy definition ID This field must be the full path name of either a policy definition or an initiative definition.
-`policyDefinitionId` is a string and not an array. It's recommended that if multiple policies are
-often assigned together, to use an [initiative](./initiative-definition-structure.md) instead.
+`policyDefinitionId` is a string and not an array. The latest content of the assigned policy
+definition or initiative will be retrieved each time the policy assignment is evaluated. If
+multiple policies are often assigned together, it's recommended to use an
+[initiative](./initiative-definition-structure.md) instead.
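To make the shape concrete, here's a minimal assignment body sketch (the display name, definition name, and parameter values are hypothetical) showing that `policyDefinitionId` is a single string pointing at one definition or initiative, not an array:

```json
{
  "properties": {
    "displayName": "Audit allowed locations",
    "policyDefinitionId": "/providers/Microsoft.Authorization/policyDefinitions/audit-allowed-locations",
    "parameters": {
      "listOfAllowedLocations": {
        "value": [ "eastus", "westus2" ]
      }
    }
  }
}
```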
## Non-compliance messages
For policy assignments with effect set to **deployIfNotExists** or **modify**, i
- Learn how to [remediate non-compliant resources](../how-to/remediate-resources.md). - Review what a management group is with [Organize your resources with Azure management groups](../../management-groups/overview.md).
+
governance Policy As Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/policy-as-code.md
Like policy definitions, when adding or updating an existing initiative, the wor
automatically update the initiative definition in Azure. Testing of the new or updated initiative definition comes in a later step.
+> [!NOTE]
+> It's recommended to use a centralized deployment mechanism like GitHub workflows or Azure
+> Pipelines to deploy policies. This helps to ensure that only reviewed policy resources are
+> deployed to your environment. _Write_ permissions
+> to policy resources can be restricted to the identity used in the deployment.
+ ### Test and validate the updated definition Once automation has taken your newly created or updated policy or initiative definitions and made
GitHub, see
- Learn how to [remediate non-compliant resources](../how-to/remediate-resources.md). - Review what a management group is with [Organize your resources with Azure management groups](../../management-groups/overview.md).
+
governance Remediate Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/how-to/remediate-resources.md
following code:
az role definition list --name 'Contributor'
```
+> [!IMPORTANT]
+> Permissions should be restricted to the smallest possible set when defining **roleDefinitionIds**
+> within a policy definition or assigning permissions to a managed identity manually. See
+> [managed identity best practice recommendations](../../../active-directory/managed-identities-azure-resources/managed-identity-best-practice-recommendations.md)
+> for more best practices.
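As a sketch of where those IDs live, `roleDefinitionIds` sits inside the `details` block of a **deployIfNotExists** or **modify** policy rule. The GUID shown is the built-in Contributor role, used here only for illustration; per the guidance above, prefer the narrowest role that can perform the remediation:

```json
"then": {
  "effect": "deployIfNotExists",
  "details": {
    "type": "Microsoft.Insights/diagnosticSettings",
    "roleDefinitionIds": [
      "/providers/Microsoft.Authorization/roleDefinitions/b24988ac-6180-42a0-ab88-20f7382dd24c"
    ]
  }
}
```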
+ ## Manually configure the managed identity When creating an assignment using the portal, Azure Policy can both generate a managed identity and
To create a **remediation task**, follow these steps:
1. On the **New remediation task** page, optional remediation settings are shown:
- - **Failure Threshold percentage** - Used to specify whether the remediation task should fail if the percentage of failures exceeds the given threshold. Provided as a number between 0 to 100. By default, the failure threshold is 100%.
+ - **Failure Threshold percentage** - Used to specify whether the remediation task should fail if the percentage of failures exceeds the given threshold. Provided as a number between 0 and 100. By default, the failure threshold is 100%.
- **Resource Count** - Determines how many non-compliant resources to remediate in a given remediation task. The default value is 500 (the previous limit). The maximum number is 50,000 resources. - **Parallel Deployments** - Determines how many resources to remediate at the same time. The allowed values are 1 to 30 resources at a time. The default value is 10.
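The same settings can also be supplied outside the portal. A hedged sketch of a remediation resource body (property names follow the Microsoft.PolicyInsights remediations API, where the failure threshold is expressed as a fraction between 0 and 1 rather than a percentage; the subscription and assignment names are hypothetical):

```json
{
  "properties": {
    "policyAssignmentId": "/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/policyAssignments/myAssignment",
    "failureThreshold": { "percentage": 0.1 },
    "resourceCount": 500,
    "parallelDeployments": 10
  }
}
```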
governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/overview.md
Azure Policy has several permissions, known as operations, in two Resource Provi
Many Built-in roles grant permission to Azure Policy resources. The **Resource Policy Contributor** role includes most Azure Policy operations. **Owner** has full rights. Both **Contributor** and **Reader** have access to all _read_ Azure Policy operations. **Contributor** may trigger resource
-remediation, but can't _create_ definitions or assignments. **User Access Administrator** is
+remediation, but can't _create_ or _update_ definitions and assignments. **User Access Administrator** is
necessary to grant the managed identity on **deployIfNotExists** or **modify** assignments necessary permissions. All policy objects will be readable to all roles over the scope. If none of the Built-in roles have the permissions required, create a [custom role](../../role-based-access-control/custom-roles.md).
+Azure Policy operations can have a significant impact on your Azure environment. Assign only the
+minimum set of permissions necessary to perform a task, and don't grant these permissions to
+users who don't need them.
> [!NOTE] > The managed identity of a **deployIfNotExists** or **modify** policy assignment needs enough > permissions to create or update targeted resources. For more information, see
Here are a few pointers and tips to keep in mind:
_initiativeDefC_. If you create another policy definition later for _policyDefB_ with goals similar to _policyDefA_, you can add it under _initiativeDefC_ and track them together. -- Once you've created an initiative assignment, policy definitions added to the initiative also
+ - Once you've created an initiative assignment, policy definitions added to the initiative also
become part of that initiative's assignments.--- When an initiative assignment is evaluated, all policies within the initiative are also evaluated.
+
+ - When an initiative assignment is evaluated, all policies within the initiative are also evaluated.
If you need to evaluate a policy individually, it's better to not include it in an initiative.
+- Manage Azure Policy resources as code with manual reviews on changes to policy definitions,
+ initiatives, and assignments. To learn more about suggested patterns and tooling, see
+ [Design Azure Policy as Code Workflows](./concepts/policy-as-code.md).
+ ## Azure Policy objects ### Policy definition
To learn more about the structures of initiative definitions, review
### Assignments
-An assignment is a policy definition or initiative that has been assigned to take place within a
+An assignment is a policy definition or initiative that has been assigned to a
specific scope. This scope could range from a [management group](../management-groups/overview.md) to an individual resource. The term _scope_ refers to all the resources, resource groups, subscriptions, or management groups that the definition is assigned to. Assignments are inherited by
subscription from the management group-level assignment. Then, assign the more p
on the child management group or subscription level. If any assignment results in a resource getting denied, then the only way to allow the resource is to modify the denying assignment.
+Policy assignments always use the latest state of their assigned definition or initiative when
+evaluating resources. If a policy definition that is already assigned is changed, all existing
+assignments of that definition will use the updated logic when evaluating.
+ For more information on setting assignments through the portal, see [Create a policy assignment to identify non-compliant resources in your Azure environment](./assign-policy-portal.md). Steps for [PowerShell](./assign-policy-powershell.md) and [Azure CLI](./assign-policy-azurecli.md) are also
hdinsight Apache Domain Joined Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/domain-joined/apache-domain-joined-architecture.md
If federation is being used and password hashes are synced correctly, but you're
3. Check if the Microsoft Azure PowerShell service principal has already been created. ```powershell
- Get-AzureADServicePrincipal -SearchString "Microsoft Azure Powershell"
+ Get-AzureADServicePrincipal -SearchString "Microsoft Azure PowerShell"
``` 4. If it doesn't exist, then create the service principal.
hdinsight Apache Hadoop On Premises Migration Best Practices Data Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-on-premises-migration-best-practices-data-migration.md
This article gives recommendations for data migration to Azure HDInsight. It's p
There are two main options to migrate data from on-premises to the Azure environment: * Transfer data over network with TLS
- * Over internet - You can transfer data to Azure storage over a regular internet connection using any one of several tools such as: Azure Storage Explorer, AzCopy, Azure Powershell, and Azure CLI. For more information, see [Moving data to and from Azure Storage](../../storage/common/storage-choose-data-transfer-solution.md).
+ * Over internet - You can transfer data to Azure storage over a regular internet connection using any one of several tools such as: Azure Storage Explorer, AzCopy, Azure PowerShell, and Azure CLI. For more information, see [Moving data to and from Azure Storage](../../storage/common/storage-choose-data-transfer-solution.md).
* Express Route - ExpressRoute is an Azure service that lets you create private connections between Microsoft datacenters and infrastructure that's on your premises or in a colocation facility. ExpressRoute connections don't go over the public Internet, and offer higher security, reliability, and speeds with lower latencies than typical connections over the Internet. For more information, see [Create and modify an ExpressRoute circuit](../../expressroute/expressroute-howto-circuit-portal-resource-manager.md).
hdinsight Hdinsight Hadoop Windows Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-windows-tools.md
Examples of tasks you can do with PowerShell:
* [Run Apache Hive queries using PowerShell](hadoop/apache-hadoop-use-hive-powershell.md). * [Manage clusters with PowerShell](hdinsight-administer-use-powershell.md).
-Follow steps to [install and configure Azure Powershell](/powershell/azure/install-az-ps) to get the latest version.
+Follow steps to [install and configure Azure PowerShell](/powershell/azure/install-az-ps) to get the latest version.
## Utilities you can run in a browser
hdinsight Manage Clusters Runbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/manage-clusters-runbooks.md
If you don't have an Azure subscription, create a [free account](https://azure
1. Select **Runbooks** under **Process Automation**. 1. Select **Create a runbook**.
-1. On the **Create a runbook** panel, enter a name for the runbook, such as `hdinsight-cluster-create`. Select **Powershell** from the **Runbook type** dropdown.
+1. On the **Create a runbook** panel, enter a name for the runbook, such as `hdinsight-cluster-create`. Select **PowerShell** from the **Runbook type** dropdown.
1. Select **Create**. :::image type="content" source="./media/manage-clusters-runbooks/create-runbook.png" alt-text="create runbook" border="true":::
If you don't have an Azure subscription, create a [free account](https://azure
1. Select **Runbooks** under **Process Automation**. 1. Select **Create a runbook**.
-1. On the **Create a runbook** panel, enter a name for the runbook, such as `hdinsight-cluster-delete`. Select **Powershell** from the **Runbook type** dropdown.
+1. On the **Create a runbook** panel, enter a name for the runbook, such as `hdinsight-cluster-delete`. Select **PowerShell** from the **Runbook type** dropdown.
1. Select **Create**. 1. Enter the following code on the **Edit PowerShell Runbook** screen and select **Publish**:
iot-edge Tutorial Java Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-java-module.md
To develop an IoT Edge module in Java, install the following additional prerequi
* [Maven](https://maven.apache.org/) >[!TIP]
- >The Java and Maven installation processes add environment variables to your system. Restart any open Visual Studio Code terminal, Powershell, or command prompt instances after completing installation. This step ensures that these utilities can recognize the Java and Maven commands going forward.
+ >The Java and Maven installation processes add environment variables to your system. Restart any open Visual Studio Code terminal, PowerShell, or command prompt instances after completing installation. This step ensures that these utilities can recognize the Java and Maven commands going forward.
## Create a module project
iot-fundamentals Howto Use Iot Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-fundamentals/howto-use-iot-explorer.md
# Install and use Azure IoT explorer
-The Azure IoT explorer is a graphical tool for interacting with and devices connected to your IoT hub. This article focuses on using the tool to test your IoT Plug and Play devices. After installing the tool on your local machine, you can use it to connect to a hub. You can use the tool to view the telemetry the devices are sending, work with device properties, and invoke commands.
+The Azure IoT explorer is a graphical tool for interacting with devices connected to your IoT hub. This article focuses on using the tool to test your IoT Plug and Play devices. After installing the tool on your local machine, you can use it to connect to a hub. You can use the tool to view the telemetry the devices are sending, work with device properties, and invoke commands.
This article shows you how to:
iot-fundamentals Iot Services And Technologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-fundamentals/iot-services-and-technologies.md
# What Azure technologies and services can you use to create IoT solutions?
-Azure IoT technologies and services provide you with options to create a wide variety of IoT solutions that enable digital transformation for your organization. For example, you can:
+Azure IoT technologies and services provide you with options to create a wide variety of IoT solutions that enable digital transformation for your organization. For example:
-* Use [Azure IoT Central](https://apps.azureiotcentral.com), a managed IoT application platform, to build and deploy a secure, enterprise-grade IoT solution. IoT Central features a collection of industry-specific application templates, such as retail and healthcare, to accelerate your solution development process.
-* Use Azure IoT platform services such as [Azure IoT Hub](../iot-hub/about-iot-hub.md) and the [Azure IoT device SDKs](../iot-hub/iot-hub-devguide-sdks.md) to build a custom IoT solution from scratch.
+- For simple onboarding, start as high as you can with Azure IoT Central, an application platform as a service (aPaaS). Starting here simplifies connectivity and management of IoT devices, and extensibility features help you integrate your IoT data into business applications to deliver proof of value.
-![Azure IoT technologies, services, and solutions](./media/iot-services-and-technologies/iot-technologies-services.png)
+- If your requirements go beyond IoT Central's capabilities, the Azure portfolio supports you to go lower in the stack. Customizable platform as a service (PaaS) offerings such as [Azure IoT Hub](../iot-hub/iot-concepts-and-iot-hub.md) and the [Azure IoT device SDKs](../iot-hub/iot-hub-devguide-sdks.md) enable custom IoT solutions from scratch.
-## Azure IoT Central
-The [IoT Central application platform](https://apps.azureiotcentral.com) reduces the burden and cost of developing, managing, and maintaining enterprise-grade IoT solutions. IoT Central's customizable web UI in lets you monitor device conditions, create rules, and manage millions of devices and their data throughout their life cycle. The API surface within IoT Central gives you programmatic access to configure and interact with your IoT solution.
+## Azure IoT Central (aPaaS)
-Azure IoT Central is a fully managed application platform that you can use to create custom IoT solutions. IoT Central uses application templates to create solutions. There are templates for generic solutions and for specific industries such as energy, healthcare, government, and retail. IoT Central application templates let you deploy an IoT Central application in minutes that you can then customize with themes, dashboards, and views.
+The [IoT Central application platform](https://apps.azureiotcentral.com/) is a ready-made environment for IoT solution development. Built on trusted Azure PaaS services, it reduces the burden and cost of developing, managing, and maintaining enterprise-grade IoT solutions. It delivers built-in disaster recovery, multitenancy, global availability, and a predictable cost structure.
-Choose devices from the [Azure Certified for IoT device catalog](https://devicecatalog.azure.com) to quickly connect to your solution. Use the IoT Central web UI to monitor and manage your devices to keep them healthy and connected. Use connectors and APIs to integrate your IoT Central application with other business applications.
+IoT Central's customizable web UI and API surface let you monitor and manage millions of devices and their data throughout their life cycle. Get started exploring IoT Central in minutes [using your phone as an IoT device](../iot-central/core/quick-deploy-iot-central.md) – see live telemetry, create rules, run commands from the cloud, and export your data for business analytics.
-As a fully managed application platform, IoT Central has a simple, predictable pricing model.
+Choose devices from the [Azure Certified for IoT device catalog](https://devicecatalog.azure.com/) to quickly connect to your solution, or develop a custom device using IoT Central's [device templates](../iot-central/core/howto-set-up-template.md).
-## Custom solutions
+## Custom solutions (PaaS)
-To build an IoT solution from scratch, or extend a solution created using IoT Central, use one or more of the following Azure IoT technologies and
+To build an IoT solution from scratch, or extend one created using IoT Central, you can use the following Azure IoT technologies and services:
+ ### Devices
-Develop your IoT devices using one of the [Azure IoT Starter Kits](https://devicecatalog.azure.com/kits) or choose a device to use from the [Azure Certified for IoT device catalog](https://devicecatalog.azure.com). Implement your embedded code using the open-source [device SDKs](../iot-hub/iot-hub-devguide-sdks.md). The device SDKs support multiple operating systems, such as Linux, Windows, and real-time operating systems. There are SDKs for multiple programming languages, such as [C](https://github.com/Azure/azure-iot-sdk-c), [Node.js](https://github.com/Azure/azure-iot-sdk-node), [Java](https://github.com/Azure/azure-iot-sdk-java), [.NET](https://github.com/Azure/azure-iot-sdk-csharp), and [Python](https://github.com/Azure/azure-iot-sdk-python).
+Develop your IoT devices using a starter kit such as the [Azure MXChip IoT DevKit](/samples/azure-samples/mxchip-iot-devkit-get-started/sample/) or choose a device from the [Azure Certified for IoT device catalog](https://devicecatalog.azure.com/). Implement your embedded code using the open-source [device SDKs](../iot-hub/iot-hub-devguide-sdks.md). The device SDKs support multiple operating systems, such as Linux, Windows, and real-time operating systems. There are SDKs for multiple programming languages, such as [C](https://github.com/Azure/azure-iot-sdk-c), [Node.js](https://github.com/Azure/azure-iot-sdk-node), [Java](https://github.com/Azure/azure-iot-sdk-java), [.NET](https://github.com/Azure/azure-iot-sdk-csharp), and [Python](https://github.com/Azure/azure-iot-sdk-python).
-You can further simplify how you create the embedded code for your devices by following the [IoT Plug and Play](../iot-develop/overview-iot-plug-and-play.md) conventions. IoT Plug and Play enables solution developers to integrate devices with their solutions without writing any embedded code. At the core of IoT Plug and Play, is a _device capability model_ schema that describes device capabilities. Use the device capability model to generate your embedded device code and configure a cloud-based solution such as an IoT Central application.
+To further simplify how you create the embedded code for your devices, follow the [IoT Plug and Play](../iot-develop/overview-iot-plug-and-play.md) conventions. At the core of IoT Plug and Play, is a *device capability model* schema that describes device capabilities. Use the device capability model to configure a cloud-based solution such as an IoT Central application.
-[Azure IoT Edge](../iot-edge/about-iot-edge.md) lets you offload parts of your IoT workload from your Azure cloud services to your devices. IoT Edge can reduce latency in your solution, reduce the amount of data your devices exchange with the cloud, and enable off-line scenarios. You can manage IoT Edge devices from IoT Central.
+[Azure IoT Edge](../iot-edge/about-iot-edge.md) lets you offload parts of your IoT workload from your Azure cloud services to your devices. IoT Edge can reduce latency in your solution, reduce the amount of data your devices exchange with the cloud, and enable off-line scenarios. You can manage IoT Edge devices from IoT Central.
-[Azure Sphere](/azure-sphere/product-overview/what-is-azure-sphere) is a secured, high-level application platform with built-in communication and security features for internet-connected devices. It includes a secured microcontroller unit, a custom Linux-based operating system, and a cloud-based security service that provides continuous, renewable security.
+[Azure Sphere](/azure-sphere/product-overview/what-is-azure-sphere) is a secured, high-level application platform with built-in communication and security features for internet-connected devices. It includes a secured microcontroller unit, a custom Linux-based operating system, and a cloud-based security service that provides continuous, renewable security.
### Cloud connectivity
-The [Azure IoT Hub](../iot-hub/about-iot-hub.md) service enables reliable and secure bidirectional communications between millions of IoT devices and a cloud-based solution. [Azure IoT Hub Device Provisioning Service](../iot-dps/about-iot-dps.md) is a helper service for IoT Hub. The service provides zero-touch, just-in-time provisioning of devices to the right IoT hub without requiring human intervention. These capabilities enable customers to provision millions of devices in a secure and scalable manner.
+The [Azure IoT Hub](../iot-hub/iot-concepts-and-iot-hub.md) service enables reliable and secure bidirectional communications between millions of IoT devices and a cloud-based solution. [Azure IoT Hub Device Provisioning Service](../iot-dps/about-iot-dps.md) is a helper service for IoT Hub. The service provides zero-touch, just-in-time provisioning of devices to the right IoT hub without requiring human intervention. These capabilities enable customers to provision millions of devices in a secure and scalable manner.
-IoT Hub is a core component and you can use it to meet IoT implementation challenges such as:
+IoT Hub is a core component, and you can use it to meet IoT implementation challenges such as:
-* High-volume device connectivity and management.
-* High-volume telemetry ingestion.
-* Command and control of devices.
-* Device security enforcement.
+- High-volume device connectivity and management.
+- High-volume telemetry ingestion.
+- Command and control of devices.
+- Device security enforcement.
-### Bridging the gap between the physical and digital worlds
+### Bridge the gap between the physical and digital worlds
-[Azure Digital Twins](../digital-twins/overview.md) is an IoT service that enables you to model a physical environment. It uses a spatial intelligence graph to model the relationships between people, spaces, and devices. By corelating data across the digital and physical worlds you can create contextually aware solutions.
+[Azure Digital Twins](../digital-twins/overview.md) is an IoT service that enables you to model a physical environment. It uses a spatial intelligence graph to model the relationships between people, spaces, and devices. By correlating data across the digital and physical worlds, you can create contextually aware solutions.
-Iot Central uses digital twins to synchronize devices and data in the real world with the digital models that enable users to monitor and manage those connected devices.
+IoT Central uses digital twins to synchronize devices and data in the real world with the digital models that enable users to monitor and manage those connected devices.
### Data and analytics
-IoT devices typically generate large amounts of time series data, such as temperature readings from sensors. [Azure Time Series Insights](../time-series-insights/time-series-insights-overview.md) can connect to an IoT hub, read the telemetry stream from your devices, store that data, and enable you to query and visualize it.
+IoT devices typically generate large amounts of time series data, such as temperature readings from sensors. [Azure Time Series Insights](../time-series-insights/time-series-insights-overview.md) can connect to an IoT hub, read the telemetry stream from your devices, store that data, and enable you to query and visualize it.
-[Azure Maps](../azure-maps/index.yml) is a collection of geospatial services that use fresh mapping data to provide accurate geographic context to web and mobile applications. You can use a REST API, a web-based JavaScript control, or an Android SDK to build your applications.
+[Azure Maps](../azure-maps/about-azure-maps.md) is a collection of geospatial services that use fresh mapping data to provide accurate geographic context to web and mobile applications. You can use a REST API, a web-based JavaScript control, or an Android SDK to build your applications.
## Next steps For a hands-on experience, try one of the quickstarts: -- [Create an Azure IoT Central application](../iot-central/core/quick-deploy-iot-central.md)
+- [Connect your first device to IoT Central](../iot-central/core/quick-deploy-iot-central.md)
+ - [Send telemetry from a device to an IoT hub](../iot-hub/quickstart-send-telemetry-cli.md)
iot-hub-device-update Device Update Simulator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-simulator.md
Once the Device Update agent is running on an IoT device, the device needs to be
``` Device Update for Azure IoT Hub software is subject to the following license terms:
- * [Device Update for IoT Hub license](https://github.com/Azure/iot-hub-device-update/blob/main/LICENSE.md)
+ * [Device Update for IoT Hub license](https://github.com/Azure/iot-hub-device-update/blob/main/LICENSE)
* [Delivery optimization client license](https://github.com/microsoft/do-client/blob/main/LICENSE) Read the license terms prior to using the agent. Your installation and use constitutes your acceptance of these terms. If you do not agree with the license terms, do not use the Device Update for IoT Hub agent.
When no longer needed, clean up your Device Update account, instance, IoT Hub an
> [!div class="nextstepaction"] > [Troubleshooting](troubleshoot-device-update.md)-
iot-hub Iot Hub Automatic Device Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-automatic-device-management.md
Automatic module configurations require the use of module twins to synchronize s
## Use tags to target twins
-Before you create a configuration, you must specify which devices or modules you want to affect. Azure IoT Hub identifies devices and using tags in the device twin, and identifies modules using tags in the module twin. Each device or modules can have multiple tags, and you can define them any way that makes sense for your solution. For example, if you manage devices in different locations, add the following tags to a device twin:
+Before you create a configuration, you must specify which devices or modules you want to affect. Azure IoT Hub identifies devices using tags in the device twin, and identifies modules using tags in the module twin. Each device or module can have multiple tags, and you can define them any way that makes sense for your solution. For example, if you manage devices in different locations, add the following tags to a device twin:
```json "tags": {
There are five steps to create a configuration. The following sections walk thro
### Specify Settings
-This section defines the content to be set in targeted device or module twins. There are two inputs for each set of settings. The first is the twin path, which is the path to the JSON section within the twin desired properties that will be set. The second is the JSON content to be inserted in that section.
+This section defines the content to be set in targeted device twin or module twin desired properties. There are two inputs for each set of settings. The first is the twin path, which is the path to the JSON section within the twin desired properties that will be set. The second is the JSON content to be inserted in that section.
For example, you could set the twin path to `properties.desired.chiller-water` and then provide the following JSON content:
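As an illustrative sketch (the property names and values here are hypothetical, not taken from the original article), the JSON content provided for the `properties.desired.chiller-water` twin path might look like:

```json
{
  "temperature": 66,
  "pressure": 28
}
```

After the configuration applies, the targeted twins would contain these values nested under `properties.desired.chiller-water`.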
To view the details of a configuration and monitor the devices running it, use t
1. In the [Azure portal](https://portal.azure.com), go to your IoT hub.
-2. Select **IoT device configuration**.
+2. Select **Configurations** in Device management.
3. Inspect the configuration list. For each configuration, you can view the following details:
To modify a configuration, use the following steps:
1. In the [Azure portal](https://portal.azure.com), go to your IoT hub.
-2. Select **IoT device configuration**.
+2. Select **Configurations** in Device management.
3. Select the configuration that you want to modify. 4. Make updates to the following fields:
- * Target condition
- * Labels
* Priority * Metrics
+ * Target condition
+ * Labels
4. Select **Save**.
When you delete a configuration, any device twins take on their next highest pri
1. In the [Azure portal](https://portal.azure.com), go to your IoT hub.
-2. Select **IoT device configuration**.
+2. Select **Configurations** in Device management.
3. Use the checkbox to select the configuration that you want to delete.
To further explore the capabilities of IoT Hub, see:
To explore using the IoT Hub Device Provisioning Service to enable zero-touch, just-in-time provisioning, see:
-* [Azure IoT Hub Device Provisioning Service](../iot-dps/index.yml)
+* [Azure IoT Hub Device Provisioning Service](../iot-dps/index.yml)
iot-hub Iot Hub Raspberry Pi Kit Node Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-raspberry-pi-kit-node-get-started.md
ms.devlang: javascript Previously updated : 06/18/2021 Last updated : 02/22/2022
After you've successfully connected BME280 to your Raspberry Pi, it should be li
Turn on Pi by using the micro USB cable and the power supply. Use the Ethernet cable to connect Pi to your wired network or follow the [instructions from the Raspberry Pi Foundation](https://www.raspberrypi.org/documentation/configuration/wireless/) to connect Pi to your wireless network. After your Pi has been successfully connected to the network, you need to take a note of the [IP address of your Pi](https://www.raspberrypi.org/documentation/remote-access/ip-address.md).
-![Connected to wired network](./media/iot-hub-raspberry-pi-kit-node-get-started/5-power-on-pi.png)
- > [!NOTE] > Make sure that Pi is connected to the same network as your computer. For example, if your computer is connected to a wireless network while Pi is connected to a wired network, you might not see the IP address in the devdisco output.
key-vault Vault Create Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/vault-create-template.md
You can deploy access policies to an existing key vault without redeploying the
"metadata": { "description": "Specifies the permissions to certificates in the vault. Valid values are: all, create, delete, update, deleteissuers, get, getissuers, import, list, listissuers, managecontacts, manageissuers, recover, backup, restore, setissuers, and purge." }
- },
+ }
+ },
"resources": [ { "type": "Microsoft.KeyVault/vaults/accessPolicies",
lighthouse Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/concepts/architecture.md
While in most cases only one service provider will be managing specific resource
## Next steps -- Review [Azure CLI](/cli/azure/managedservices) and [Azure Powershell](/powershell/module/az.managedservices) commands for working with registration definitions and registration assignments.
+- Review [Azure CLI](/cli/azure/managedservices) and [Azure PowerShell](/powershell/module/az.managedservices) commands for working with registration definitions and registration assignments.
- Learn about [enhanced services and scenarios](cross-tenant-management-experience.md#enhanced-services-and-scenarios) for Azure Lighthouse. - Learn more about how [tenants, users, and roles](tenants-users-roles.md) work with Azure Lighthouse.
load-balancer Ipv6 Dual Stack Standard Internal Load Balancer Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/ipv6-dual-stack-standard-internal-load-balancer-powershell.md
Title: Deploy an IPv6 dual stack application using Standard Internal Load Balancer in Azure - PowerShell
-description: This article shows how to deploy an IPv6 dual stack application with Standard Internal Load Balancer in Azure virtual network using Azure Powershell.
+description: This article shows how to deploy an IPv6 dual stack application with Standard Internal Load Balancer in Azure virtual network using Azure PowerShell.
documentationcenter: na
load-balancer Load Balancer Ipv6 Internet Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-ipv6-internet-cli.md
>[!NOTE] >This article describes an introductory IPv6 feature to allow Basic Load Balancers to provide both IPv4 and IPv6 connectivity. Comprehensive IPv6 connectivity is now available with [IPv6 for Azure VNETs](../virtual-network/ip-services/ipv6-overview.md) which integrates IPv6 connectivity with your Virtual Networks and includes key features such as IPv6 Network Security Group rules, IPv6 User-defined routing, IPv6 Basic and Standard load balancing, and more. IPv6 for Azure VNETs is the recommended standard for IPv6 applications in Azure.
-See [IPv6 for Azure VNET Powershell Deployment](./virtual-network-ipv4-ipv6-dual-stack-standard-load-balancer-powershell.md)
+See [IPv6 for Azure VNET PowerShell Deployment](./virtual-network-ipv4-ipv6-dual-stack-standard-load-balancer-powershell.md)
An Azure load balancer is a Layer-4 (TCP, UDP) load balancer. Load balancers provide high availability by distributing incoming traffic among healthy service instances in cloud services or virtual machines in a load balancer set. Load balancers can also present these services on multiple ports or multiple IP addresses or both.
load-balancer Load Balancer Ipv6 Internet Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-ipv6-internet-ps.md
>[!NOTE] >This article describes an introductory IPv6 feature to allow Basic Load Balancers to provide both IPv4 and IPv6 connectivity. Comprehensive IPv6 connectivity is now available with [IPv6 for Azure VNETs](../virtual-network/ip-services/ipv6-overview.md) which integrates IPv6 connectivity with your Virtual Networks and includes key features such as IPv6 Network Security Group rules, IPv6 User-defined routing, IPv6 Basic and Standard load balancing, and more. IPv6 for Azure VNETs is the recommended standard for IPv6 applications in Azure.
-See [IPv6 for Azure VNET Powershell Deployment](./virtual-network-ipv4-ipv6-dual-stack-standard-load-balancer-powershell.md)
+See [IPv6 for Azure VNET PowerShell Deployment](./virtual-network-ipv4-ipv6-dual-stack-standard-load-balancer-powershell.md)
An Azure load balancer is a Layer-4 (TCP, UDP) load balancer. The load balancer provides high availability by distributing incoming traffic among healthy service instances in cloud services or virtual machines in a load balancer set. Azure Load Balancer can also present those services on multiple ports, multiple IP addresses, or both.
load-balancer Load Balancer Ipv6 Internet Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-ipv6-internet-template.md
>[!NOTE] >This article describes an introductory IPv6 feature to allow Basic Load Balancers to provide both IPv4 and IPv6 connectivity. Comprehensive IPv6 connectivity is now available with [IPv6 for Azure VNETs](../virtual-network/ip-services/ipv6-overview.md) which integrates IPv6 connectivity with your Virtual Networks and includes key features such as IPv6 Network Security Group rules, IPv6 User-defined routing, IPv6 Basic and Standard load balancing, and more. IPv6 for Azure VNETs is the recommended standard for IPv6 applications in Azure.
-See [IPv6 for Azure VNET Powershell Deployment](./virtual-network-ipv4-ipv6-dual-stack-standard-load-balancer-powershell.md)
+See [IPv6 for Azure VNET PowerShell Deployment](./virtual-network-ipv4-ipv6-dual-stack-standard-load-balancer-powershell.md)
An Azure load balancer is a Layer-4 (TCP, UDP) load balancer. The load balancer provides high availability by distributing incoming traffic among healthy service instances in cloud services or virtual machines in a load balancer set. Azure Load Balancer can also present those services on multiple ports, multiple IP addresses, or both.
load-balancer Virtual Network Ipv4 Ipv6 Dual Stack Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/virtual-network-ipv4-ipv6-dual-stack-powershell.md
Title: Deploy IPv6 dual stack application - Basic Load Balancer - PowerShell
-description: This article shows how deploy an IPv6 dual stack application in Azure virtual network using Azure Powershell.
+description: This article shows how to deploy an IPv6 dual stack application in Azure virtual network using Azure PowerShell.
documentationcenter: na
load-balancer Virtual Network Ipv4 Ipv6 Dual Stack Standard Load Balancer Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/virtual-network-ipv4-ipv6-dual-stack-standard-load-balancer-powershell.md
Title: Deploy IPv6 dual stack application - Standard Load Balancer - PowerShell
+description: This article shows how to deploy an IPv6 dual stack application with Standard Load Balancer in Azure virtual network using Azure PowerShell.
+description: This article shows how deploy an IPv6 dual stack application with Standard Load Balancer in Azure virtual network using Azure PowerShell.
documentationcenter: na
logic-apps Create Replication Tasks Azure Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-replication-tasks-azure-resources.md
ms.suite: integration Previously updated : 01/29/2022 Last updated : 02/22/2022
For Event Hubs, replication between the same number of [partitions](../event-hub
For Service Bus, you must enable sessions so that message sequences with the same session ID retrieved from the source are submitted to the target queue or topic as a batch in the original sequence and with the same session ID. For more information, review [Sequences and order preservation](../service-bus-messaging/service-bus-federation-patterns.md#sequences-and-order-preservation).
+> [!IMPORTANT]
+> Replication tasks don't track which messages have already been processed when the source experiences
+> a disruptive event. To prevent reprocessing, you have to track which messages have already been
+> processed so that processing resumes only with the unprocessed ones.
+>
+> For example, you can set up a database that stores the processing state for each message.
+> When a message arrives, check its state and process it only if it's still unprocessed.
+> That way, an already processed message isn't processed again.
+>
+> This pattern demonstrates the *idempotence* concept: repeating an action on the same input produces
+> the same result without additional side effects and without changing the input's value.
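The tracking approach described in the note can be sketched as follows. This is a minimal illustration only: an in-memory dictionary stands in for the database that stores per-message processing state, and `payload.upper()` is a placeholder for real message processing.

```python
# Hypothetical sketch of idempotent message handling.
# 'processed' stands in for a database table keyed by message ID.
processed = {}


def handle(message_id, payload):
    """Process a message only if it hasn't been processed already."""
    if message_id in processed:
        # Already processed: return the stored result with no new side effects.
        return processed[message_id]
    result = payload.upper()  # placeholder for the real processing step
    processed[message_id] = result
    return result
```

Replaying the same message (for example, after a failover) returns the stored result instead of running the processing step again.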
+ To learn more about multi-site and multi-region federation for Azure services where you can create replication tasks, review the following documentation: - [Event Hubs multi-site and multi-region federation](../event-hubs/event-hubs-federation-overview.md)
machine-learning Concept Network Data Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-network-data-access.md
The following table lists what identities should be used for specific scenarios:
| Access from Job | Yes/No | Compute MSI |
| Access from Notebook | Yes/No | User's identity |
+> [!TIP]
+> If you need to access data from outside Azure Machine Learning, such as with Azure Storage Explorer, your _user_ identity is most likely the one used. Consult the documentation for the tool or service you're using for specific information. For more information on how Azure Machine Learning works with data, see [Identity-based data access to storage services on Azure](how-to-identity-based-data-access.md).
+ ## Azure Storage Account When using an Azure Storage Account from Azure Machine Learning studio, you must add the managed identity of the workspace to the following Azure RBAC roles for the storage account:
machine-learning How To Auto Train Image Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-auto-train-image-models.md
Review detailed code examples and use cases in the [GitHub notebook repository f
## Next steps * [Tutorial: Train an object detection model (preview) with AutoML and Python](tutorial-auto-train-image-models.md).
+* [Make predictions with ONNX on computer vision models from AutoML](how-to-inference-onnx-automl-image-models.md)
* [Troubleshoot automated ML experiments](how-to-troubleshoot-auto-ml.md).
machine-learning How To Configure Auto Train https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-auto-train.md
Automated ML offers options for you to monitor and evaluate your training result
* You can view your training results in a widget or inline if you are in a notebook. See [Monitor automated machine learning runs](#monitor) for more details.
-* For definitions and examples of the performance charts and metrics provided for each run, see [Evaluate automated machine learning experiment results](how-to-understand-automated-ml.md) .
+* For definitions and examples of the performance charts and metrics provided for each run, see [Evaluate automated machine learning experiment results](how-to-understand-automated-ml.md).
* To get a featurization summary and understand what features were added to a particular model, see [Featurization transparency](how-to-configure-auto-features.md#featurization-transparency). You can view the hyperparameters, the scaling and normalization techniques, and algorithm applied to a specific automated ML run with the [custom code solution, `print_model()`](how-to-configure-auto-features.md#scaling-and-normalization).
+> [!TIP]
+> Automated ML also lets you [view the generated model training code for Auto ML trained models](how-to-generate-automl-training-code.md). This functionality is in public preview and can change at any time.
+ ## <a name="monitor"></a> Monitor automated machine learning runs For automated ML runs, to access the charts from a previous run, replace `<<experiment_name>>` with the appropriate experiment name:
machine-learning How To Deploy App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-app-service.md
- Title: Deploy ML models to Azure App Service (preview)-
-description: Learn how to use Azure Machine Learning to deploy a trained ML model to a Web App using Azure App Service.
------ Previously updated : 10/21/2021-----
-# Deploy a machine learning model to Azure App Service (preview)
--
-Learn how to deploy a model from Azure Machine Learning as a web app in Azure App Service.
-
-> [!IMPORTANT]
-> While both Azure Machine Learning and Azure App Service are generally available, the ability to deploy a model from the Machine Learning service to App Service is in preview.
-
-With Azure Machine Learning, you can create Docker images from trained machine learning models. This image contains a web service that receives data, submits it to the model, and then returns the response. Azure App Service can be used to deploy the image, and provides the following features:
-
-* Advanced [authentication](../app-service/configure-authentication-provider-aad.md) for enhanced security. Authentication methods include both Azure Active Directory and multi-factor auth.
-* [Autoscale](../azure-monitor/autoscale/autoscale-get-started.md?toc=%2fazure%2fapp-service%2ftoc.json) without having to redeploy.
-* [TLS support](../app-service/configure-ssl-certificate-in-code.md) for secure communications between clients and the service.
-
-For more information on features provided by Azure App Service, see the [App Service overview](../app-service/overview.md).
-
-> [!IMPORTANT]
-> If you need the ability to log the scoring data used with your deployed model, or the results of scoring, you should instead deploy to Azure Kubernetes Service. For more information, see [Collect data on your production models](how-to-enable-data-collection.md).
-
-## Prerequisites
-
-* An Azure Machine Learning workspace. For more information, see the [Create a workspace](how-to-manage-workspace.md) article.
-* The [Azure CLI](/cli/azure/install-azure-cli).
-* A trained machine learning model registered in your workspace. If you do not have a model, use the [Image classification tutorial: train model](tutorial-train-deploy-notebook.md) to train and register one.
-
- > [!IMPORTANT]
- > The code snippets in this article assume that you have set the following variables:
- >
- > * `ws` - Your Azure Machine Learning workspace.
- > * `model` - The registered model that will be deployed.
- > * `inference_config` - The inference configuration for the model.
- >
- > For more information on setting these variables, see [Deploy models with Azure Machine Learning](how-to-deploy-and-where.md).
-
-## Prepare for deployment
-
-Before deploying, you must define what is needed to run the model as a web service. The following list describes the main items needed for a deployment:
-
-* An __entry script__. This script accepts requests, scores the request using the model, and returns the results.
-
- > [!IMPORTANT]
- > The entry script is specific to your model; it must understand the format of the incoming request data, the format of the data expected by your model, and the format of the data returned to clients.
- >
- > If the request data is in a format that is not usable by your model, the script can transform it into an acceptable format. It may also transform the response before returning to it to the client.
-
- > [!IMPORTANT]
- > The Azure Machine Learning SDK does not provide a way for the web service access your datastore or data sets. If you need the deployed model to access data stored outside the deployment, such as in an Azure Storage account, you must develop a custom code solution using the relevant SDK. For example, the [Azure Storage SDK for Python](https://github.com/Azure/azure-storage-python).
- >
- > Another alternative that may work for your scenario is [batch predictions](./tutorial-pipeline-batch-scoring-classification.md), which does provide access to datastores when scoring.
-
- For more information on entry scripts, see [Deploy models with Azure Machine Learning](how-to-deploy-and-where.md).
-
-* **Dependencies**, such as helper scripts or Python/Conda packages required to run the entry script or model
-
-These entities are encapsulated into an __inference configuration__. The inference configuration references the entry script and other dependencies.
-
-> [!IMPORTANT]
-> When creating an inference configuration for use with Azure App Service, you must use an [Environment](/python/api/azureml-core/azureml.core.environment(class)) object. Please note that if you are defining a custom environment, you must add azureml-defaults with version >= 1.0.45 as a pip dependency. This package contains the functionality needed to host the model as a web service. The following example demonstrates creating an environment object and using it with an inference configuration:
->
-> ```python
-> from azureml.core.environment import Environment
-> from azureml.core.conda_dependencies import CondaDependencies
-> from azureml.core.model import InferenceConfig
->
-> # Create an environment and add conda dependencies to it
-> myenv = Environment(name="myenv")
-> # Enable Docker based environment
-> myenv.docker.enabled = True
-> # Build conda dependencies
-> myenv.python.conda_dependencies = CondaDependencies.create(conda_packages=['scikit-learn'],
-> pip_packages=['azureml-defaults'])
-> inference_config = InferenceConfig(entry_script="score.py", environment=myenv)
-> ```
-
-For more information on environments, see [Create and manage environments for training and deployment](how-to-use-environments.md).
-
-For more information on inference configuration, see [Deploy models with Azure Machine Learning](how-to-deploy-and-where.md).
-
-> [!IMPORTANT]
-> When deploying to Azure App Service, you do not need to create a __deployment configuration__.
-
-## Create the image
-
-To create the Docker image that is deployed to Azure App Service, use [Model.package](/python/api/azureml-core/azureml.core.model.model). The following code snippet demonstrates how to build a new image from the model and inference configuration:
-
-> [!NOTE]
-> The code snippet assumes that `model` contains a registered model, and that `inference_config` contains the configuration for the inference environment. For more information, see [Deploy models with Azure Machine Learning](how-to-deploy-and-where.md).
-
-```python
-from azureml.core import Model
-
-package = Model.package(ws, [model], inference_config)
-package.wait_for_creation(show_output=True)
-# Display the package location/ACR path
-print(package.location)
-```
-
-When `show_output=True`, the output of the Docker build process is shown. Once the process finishes, the image has been created in the Azure Container Registry for your workspace. Once the image has been built, the location in your Azure Container Registry is displayed. The location returned is in the format `<acrinstance>.azurecr.io/package@sha256:<imagename>`. For example, `myml08024f78fd10.azurecr.io/package@sha256:20190827151241`.
-
-> [!IMPORTANT]
-> Save the location information, as it is used when deploying the image.
-
-## Deploy image as a web app
-
-1. Use the following command to get the login credentials for the Azure Container Registry that contains the image. Replace `<acrinstance>` with the value returned previously from `package.location`:
-
- ```azurecli-interactive
- az acr credential show --name <myacr>
- ```
-
- The output of this command is similar to the following JSON document:
-
- ```json
- {
- "passwords": [
- {
- "name": "password",
- "value": "Iv0lRZQ9762LUJrFiffo3P4sWgk4q+nW"
- },
- {
- "name": "password2",
- "value": "=pKCxHatX96jeoYBWZLsPR6opszr==mg"
- }
- ],
- "username": "myml08024f78fd10"
- }
- ```
-
- Save the value for __username__ and one of the __passwords__.
-
-1. If you do not already have a resource group or app service plan to deploy the service, the following commands demonstrate how to create both:
-
- ```azurecli-interactive
- az group create --name myresourcegroup --location "West Europe"
- az appservice plan create --name myplanname --resource-group myresourcegroup --sku B1 --is-linux
- ```
-
- In this example, a __Basic__ pricing tier (`--sku B1`) is used.
-
- > [!IMPORTANT]
- > Images created by Azure Machine Learning use Linux, so you must use the `--is-linux` parameter.
-
-1. To create the web app, use the following command. Replace `<app-name>` with the name you want to use. Replace `<acrinstance>` and `<imagename>` with the values from returned `package.location` earlier:
-
- ```azurecli-interactive
- az webapp create --resource-group myresourcegroup --plan myplanname --name <app-name> --deployment-container-image-name <acrinstance>.azurecr.io/package@sha256:<imagename>
- ```
-
- This command returns information similar to the following JSON document:
-
- ```json
- {
- "adminSiteName": null,
- "appServicePlanName": "myplanname",
- "geoRegion": "West Europe",
- "hostingEnvironmentProfile": null,
- "id": "/subscriptions/0000-0000/resourceGroups/myResourceGroup/providers/Microsoft.Web/serverfarms/myplanname",
- "kind": "linux",
- "location": "West Europe",
- "maximumNumberOfWorkers": 1,
- "name": "myplanname",
- < JSON data removed for brevity. >
- "targetWorkerSizeId": 0,
- "type": "Microsoft.Web/serverfarms",
- "workerTierName": null
- }
- ```
-
- > [!IMPORTANT]
- > At this point, the web app has been created. However, since you haven't provided the credentials to the Azure Container Registry that contains the image, the web app is not active. In the next step, you provide the authentication information for the container registry.
-
-1. To provide the web app with the credentials needed to access the container registry, use the following command. Replace `<app-name>` with the name you want to use. Replace `<acrinstance>` and `<imagename>` with the values from returned `package.location` earlier. Replace `<username>` and `<password>` with the ACR login information retrieved earlier:
-
- ```azurecli-interactive
- az webapp config container set --name <app-name> --resource-group myresourcegroup --docker-custom-image-name <acrinstance>.azurecr.io/package@sha256:<imagename> --docker-registry-server-url https://<acrinstance>.azurecr.io --docker-registry-server-user <username> --docker-registry-server-password <password>
- ```
-
- This command returns information similar to the following JSON document:
-
- ```json
- [
- {
- "name": "WEBSITES_ENABLE_APP_SERVICE_STORAGE",
- "slotSetting": false,
- "value": "false"
- },
- {
- "name": "DOCKER_REGISTRY_SERVER_URL",
- "slotSetting": false,
- "value": "https://myml08024f78fd10.azurecr.io"
- },
- {
- "name": "DOCKER_REGISTRY_SERVER_USERNAME",
- "slotSetting": false,
- "value": "myml08024f78fd10"
- },
- {
- "name": "DOCKER_REGISTRY_SERVER_PASSWORD",
- "slotSetting": false,
- "value": null
- },
- {
- "name": "DOCKER_CUSTOM_IMAGE_NAME",
- "value": "DOCKER|myml08024f78fd10.azurecr.io/package@sha256:20190827195524"
- }
- ]
- ```
-
-At this point, the web app begins loading the image.
-
-> [!IMPORTANT]
-> It may take several minutes before the image has loaded. To monitor progress, use the following command:
->
-> ```azurecli-interactive
-> az webapp log tail --name <app-name> --resource-group myresourcegroup
-> ```
->
-> Once the image has been loaded and the site is active, the log displays a message that states `Container <container name> for site <app-name> initialized successfully and is ready to serve requests`.
-
-Once the image is deployed, you can find the hostname by using the following command:
-
-```azurecli-interactive
-az webapp show --name <app-name> --resource-group myresourcegroup
-```
-
-This command returns information similar to the following hostname - `<app-name>.azurewebsites.net`. Use this value as part of the __base url__ for the service.
-
-## Use the Web App
-
-The web service that passes requests to the model is located at `{baseurl}/score`. For example, `https://<app-name>.azurewebsites.net/score`. The following Python code demonstrates how to submit data to the URL and display the response:
-
-```python
-import requests
-import json
-
-scoring_uri = "https://mywebapp.azurewebsites.net/score"
-
-headers = {'Content-Type':'application/json'}
-
-test_sample = json.dumps({'data': [
- [1,2,3,4,5,6,7,8,9,10],
- [10,9,8,7,6,5,4,3,2,1]
-]})
-
-response = requests.post(scoring_uri, data=test_sample, headers=headers)
-print(response.status_code)
-print(response.elapsed)
-print(response.json())
-```
-
-## Next steps
-
-* Learn to configure your Web App in the [App Service on Linux](/azure/app-service/containers/) documentation.
-* Learn more about scaling in [Get started with Autoscale in Azure](../azure-monitor/autoscale/autoscale-get-started.md?toc=%2fazure%2fapp-service%2ftoc.json).
-* [Use a TLS/SSL certificate in your Azure App Service](../app-service/configure-ssl-certificate-in-code.md).
-* [Configure your App Service app to use Azure Active Directory sign-in](../app-service/configure-authentication-provider-aad.md).
-* [Consume a ML Model deployed as a web service](how-to-consume-web-service.md)
machine-learning How To Deploy Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-functions.md
- Title: Deploy ML models to Azure Functions Apps (preview)-
-description: Learn how to use Azure Machine Learning to package and deploy a model as a Web Service in an Azure Functions App.
------ Previously updated : 10/21/2021-----
-# Deploy a machine learning model to Azure Functions (preview)
--
-Learn how to deploy a model from Azure Machine Learning as a function app in Azure Functions.
-
-> [!IMPORTANT]
-> While both Azure Machine Learning and Azure Functions are generally available, the ability to package a model from the Machine Learning service for Functions is in preview.
-
-With Azure Machine Learning, you can create Docker images from trained machine learning models. Azure Machine Learning now has the preview functionality to build these machine learning models into function apps, which can be [deployed into Azure Functions](../azure-functions/functions-deployment-technologies.md#docker-container).
-
-## Prerequisites
-
-* An Azure Machine Learning workspace. For more information, see the [Create a workspace](how-to-manage-workspace.md) article.
-* The [Azure CLI](/cli/azure/install-azure-cli).
-* A trained machine learning model registered in your workspace. If you do not have a model, use the [Image classification tutorial: train model](tutorial-train-deploy-notebook.md) to train and register one.
-
- > [!IMPORTANT]
- > The code snippets in this article assume that you have set the following variables:
- >
- > * `ws` - Your Azure Machine Learning workspace.
- > * `model` - The registered model that will be deployed.
- > * `inference_config` - The inference configuration for the model.
- >
- > For more information on setting these variables, see [Deploy models with Azure Machine Learning](how-to-deploy-and-where.md).
-
-## Prepare for deployment
-
-Before deploying, you must define what is needed to run the model as a web service. The following list describes the core items needed for a deployment:
-
-* An __entry script__. This script accepts requests, scores the request using the model, and returns the results.
-
- > [!IMPORTANT]
- > The entry script is specific to your model; it must understand the format of the incoming request data, the format of the data expected by your model, and the format of the data returned to clients.
- >
- > If the request data is in a format that is not usable by your model, the script can transform it into an acceptable format. It may also transform the response before returning to it to the client.
- >
- > By default when packaging for functions, the input is treated as text. If you are interested in consuming the raw bytes of the input (for instance for Blob triggers), you should use [AMLRequest to accept raw data](./how-to-deploy-advanced-entry-script.md#binary-data).
-
-For more information on entry scripts, see [Define scoring code](./how-to-deploy-and-where.md#define-an-entry-script).
-
-* **Dependencies**, such as helper scripts or Python/Conda packages required to run the entry script or model.
-
-These entities are encapsulated into an __inference configuration__. The inference configuration references the entry script and other dependencies.
-
-> [!IMPORTANT]
-> When creating an inference configuration for use with Azure Functions, you must use an [Environment](/python/api/azureml-core/azureml.core.environment%28class%29) object. Please note that if you are defining a custom environment, you must add azureml-defaults with version >= 1.0.45 as a pip dependency. This package contains the functionality needed to host the model as a web service. The following example demonstrates creating an environment object and using it with an inference configuration:
->
-> ```python
-> from azureml.core.environment import Environment
-> from azureml.core.conda_dependencies import CondaDependencies
->
-> # Create an environment and add conda dependencies to it
-> myenv = Environment(name="myenv")
-> # Enable Docker based environment
-> myenv.docker.enabled = True
-> # Build conda dependencies
-> myenv.python.conda_dependencies = CondaDependencies.create(conda_packages=['scikit-learn'],
-> pip_packages=['azureml-defaults'])
-> inference_config = InferenceConfig(entry_script="score.py", environment=myenv)
-> ```
-
-For more information on environments, see [Create and manage environments for training and deployment](how-to-use-environments.md).
-
-For more information on inference configuration, see [Deploy models with Azure Machine Learning](how-to-deploy-and-where.md).
-
-> [!IMPORTANT]
-> When deploying to Functions, you do not need to create a __deployment configuration__.
-
-## Install the SDK preview package for functions support
-
-To build packages for Azure Functions, you must install the SDK preview package.
-
-```bash
-pip install azureml-contrib-functions
-```
-
-## Create the image
-
-To create the Docker image that is deployed to Azure Functions, use [azureml.contrib.functions.package](/python/api/azureml-contrib-functions/azureml.contrib.functions) or the specific package function for the trigger you are interested in using. The following code snippet demonstrates how to create a new package with a blob trigger from the model and inference configuration:
-
-> [!NOTE]
-> The code snippet assumes that `model` contains a registered model, and that `inference_config` contains the configuration for the inference environment. For more information, see [Deploy models with Azure Machine Learning](how-to-deploy-and-where.md).
-
-```python
-from azureml.contrib.functions import package
-from azureml.contrib.functions import BLOB_TRIGGER
-blob = package(ws, [model], inference_config, functions_enabled=True, trigger=BLOB_TRIGGER, input_path="input/{blobname}.json", output_path="output/{blobname}_out.json")
-blob.wait_for_creation(show_output=True)
-# Display the package location/ACR path
-print(blob.location)
-```
-
-When `show_output=True`, the output of the Docker build process is shown. Once the process finishes, the image is created in the Azure Container Registry for your workspace and its location is displayed. The location returned is in the format `<acrinstance>.azurecr.io/package@sha256:<imagename>`.
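If you automate the deployment steps that follow, the location string can be split apart with plain string operations. The helper below is an illustrative sketch (not part of the SDK), using a hypothetical location value in the documented format:

```python
def parse_package_location(location: str):
    """Split '<acrinstance>.azurecr.io/package@sha256:<imagename>' into parts."""
    registry, _, image_ref = location.partition("/")
    acr_instance = registry.split(".")[0]          # registry name for `az acr` calls
    repository, _, digest = image_ref.partition("@")
    return acr_instance, repository, digest

# Hypothetical location value in the documented format:
acr, repo, digest = parse_package_location(
    "myml08024f78fd10.azurecr.io/package@sha256:abc123")
```

The first element is the value to use for `<myacr>` in the credential lookup during deployment.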
-
-> [!NOTE]
-> Packaging for functions currently supports HTTP, blob, and Service Bus triggers. For more information on triggers, see [Azure Functions bindings](../azure-functions/functions-bindings-storage-blob-trigger.md#blob-name-patterns).
-
-> [!IMPORTANT]
-> Save the location information, as it is used when deploying the image.
-
-## Deploy image as a web app
-
-1. Use the following command to get the login credentials for the Azure Container Registry that contains the image. Replace `<myacr>` with the value returned previously from `blob.location`:
-
- ```azurecli-interactive
- az acr credential show --name <myacr>
- ```
-
- The output of this command is similar to the following JSON document:
-
- ```json
- {
- "passwords": [
- {
- "name": "password",
- "value": "Iv0lRZQ9762LUJrFiffo3P4sWgk4q+nW"
- },
- {
- "name": "password2",
- "value": "=pKCxHatX96jeoYBWZLsPR6opszr==mg"
- }
- ],
- "username": "myml08024f78fd10"
- }
- ```
-
- Save the value for __username__ and one of the __passwords__.
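If you script this step, the JSON returned by `az acr credential show` can be parsed with the standard library. This sketch uses the placeholder credentials from the sample output above, not real secrets:

```python
import json

# Placeholder credentials from the sample output -- not real secrets.
raw = """{
  "passwords": [
    {"name": "password", "value": "Iv0lRZQ9762LUJrFiffo3P4sWgk4q+nW"},
    {"name": "password2", "value": "=pKCxHatX96jeoYBWZLsPR6opszr==mg"}
  ],
  "username": "myml08024f78fd10"
}"""

creds = json.loads(raw)
username = creds["username"]
password = creds["passwords"][0]["value"]  # either listed password works
```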
-
-1. If you do not already have a resource group or app service plan to deploy the service, the following commands demonstrate how to create both:
-
- ```azurecli-interactive
- az group create --name myresourcegroup --location "West Europe"
- az appservice plan create --name myplanname --resource-group myresourcegroup --sku B1 --is-linux
- ```
-
- In this example, a _Linux basic_ pricing tier (`--sku B1`) is used.
-
- > [!IMPORTANT]
- > Images created by Azure Machine Learning use Linux, so you must use the `--is-linux` parameter.
-
-1. Create the storage account to use for the web job storage and get its connection string. Replace `<webjobStorage>` with the name you want to use.
-
- ```azurecli-interactive
- az storage account create --name <webjobStorage> --location westeurope --resource-group myresourcegroup --sku Standard_LRS
- ```
- ```azurecli-interactive
- az storage account show-connection-string --resource-group myresourcegroup --name <webJobStorage> --query connectionString --output tsv
- ```
-
-1. To create the function app, use the following command. Replace `<app-name>` with the name you want to use. Replace `<acrinstance>` and `<imagename>` with the values returned from `blob.location` earlier. Replace `<webjobStorage>` with the name of the storage account from the previous step:
-
- ```azurecli-interactive
- az functionapp create --resource-group myresourcegroup --plan myplanname --name <app-name> --deployment-container-image-name <acrinstance>.azurecr.io/package:<imagename> --storage-account <webjobStorage>
- ```
-
- > [!IMPORTANT]
- > At this point, the function app has been created. However, since you haven't provided the connection string for the blob trigger or credentials to the Azure Container Registry that contains the image, the function app is not active. In the next steps, you provide the connection string and the authentication information for the container registry.
-
-1. Create the storage account to use for the blob trigger storage and get its connection string. Replace `<triggerStorage>` with the name you want to use.
-
- ```azurecli-interactive
- az storage account create --name <triggerStorage> --location westeurope --resource-group myresourcegroup --sku Standard_LRS
- ```
- ```azurecli-interactive
- az storage account show-connection-string --resource-group myresourcegroup --name <triggerStorage> --query connectionString --output tsv
- ```
-    Record this connection string to provide to the function app. It's used later wherever `<triggerConnectionString>` appears.
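Azure Storage connection strings follow a simple `key=value;` layout, so a script can split one into its parts. This is an illustrative sketch; the account name and key below are made up:

```python
def parse_connection_string(conn_str: str) -> dict:
    """Split an Azure Storage connection string into its key/value parts."""
    return dict(part.split("=", 1) for part in conn_str.split(";") if part)

# Made-up account name and key, in the documented format:
sample = ("DefaultEndpointsProtocol=https;AccountName=mytriggerstorage;"
          "AccountKey=abc123==;EndpointSuffix=core.windows.net")
parts = parse_connection_string(sample)
```

Splitting on the first `=` only keeps base64 padding characters in the account key intact.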
-
-1. Create the containers for the input and output in the storage account. Replace `<triggerConnectionString>` with the connection string returned earlier:
-
- ```azurecli-interactive
- az storage container create -n input --connection-string <triggerConnectionString>
- ```
- ```azurecli-interactive
- az storage container create -n output --connection-string <triggerConnectionString>
- ```
-
-1. To associate the trigger connection string with the function app, use the following command. Replace `<app-name>` with the name of the function app. Replace `<triggerConnectionString>` with the connection string returned earlier:
-
- ```azurecli-interactive
- az functionapp config appsettings set --name <app-name> --resource-group myresourcegroup --settings "TriggerConnectionString=<triggerConnectionString>"
- ```
-1. Retrieve the tag associated with the created container image by using the following command. Replace `<username>` with the username returned earlier from the container registry:
-
- ```azurecli-interactive
- az acr repository show-tags --repository package --name <username> --output tsv
- ```
-    Save the value returned; it's used as the `<imagetag>` in the next step.
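Timestamp-style tags like the one in the example sort lexicographically, so if you script this step you can pick the newest tag out of the `--output tsv` text. The tag values below are hypothetical:

```python
# Hypothetical tags as returned by `--output tsv`, one per line:
tsv_output = "20190825101112\n20190827195524\n"

tags = tsv_output.split()
imagetag = max(tags)  # timestamp tags sort lexicographically, newest last
```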
-
-1. To provide the function app with the credentials needed to access the container registry, use the following command. Replace `<app-name>` with the name of the function app. Replace `<acrinstance>` and `<imagetag>` with the values from the Azure CLI call in the previous step. Replace `<username>` and `<password>` with the ACR login information retrieved earlier:
-
- ```azurecli-interactive
- az functionapp config container set --name <app-name> --resource-group myresourcegroup --docker-custom-image-name <acrinstance>.azurecr.io/package:<imagetag> --docker-registry-server-url https://<acrinstance>.azurecr.io --docker-registry-server-user <username> --docker-registry-server-password <password>
- ```
-
- This command returns information similar to the following JSON document:
-
- ```json
- [
- {
- "name": "WEBSITES_ENABLE_APP_SERVICE_STORAGE",
- "slotSetting": false,
- "value": "false"
- },
- {
- "name": "DOCKER_REGISTRY_SERVER_URL",
- "slotSetting": false,
- "value": "https://myml08024f78fd10.azurecr.io"
- },
- {
- "name": "DOCKER_REGISTRY_SERVER_USERNAME",
- "slotSetting": false,
- "value": "myml08024f78fd10"
- },
- {
- "name": "DOCKER_REGISTRY_SERVER_PASSWORD",
- "slotSetting": false,
- "value": null
- },
- {
- "name": "DOCKER_CUSTOM_IMAGE_NAME",
- "value": "DOCKER|myml08024f78fd10.azurecr.io/package:20190827195524"
- }
- ]
- ```
-
-At this point, the function app begins loading the image.
-
-> [!IMPORTANT]
-> It may take several minutes before the image has loaded. You can monitor progress using the Azure portal.
-
-## Test the deployment
-
-Once the image has loaded and the app is available, use the following steps to trigger the app:
-
-1. Create a text file that contains the data that the score.py file expects. The following example would work with a score.py that expects an array of 10 numbers:
-
- ```json
- {"data": [[1, 2, 3, 4, 5, 6, 7, 8, 9, 10], [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]]}
- ```
-
- > [!IMPORTANT]
-    > The format of the data depends on what your score.py and model expect.
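As a sketch, the sample payload above can be generated with the standard library rather than written by hand (the two rows of ten numbers are just the example values):

```python
import json

# Two rows of ten features each, matching the example payload above.
payload = {"data": [[1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
                    [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]]}
body = json.dumps(payload)  # write `body` to a file to use as the trigger input
```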
-
-2. Use the following command to upload this file to the input container in the trigger storage blob created earlier. Replace `<file>` with the name of the file containing the data. Replace `<triggerConnectionString>` with the connection string returned earlier. In this example, `input` is the name of the input container created earlier. If you used a different name, replace this value:
-
- ```azurecli-interactive
- az storage blob upload --container-name input --file <file> --name <file> --connection-string <triggerConnectionString>
- ```
-
- The output of this command is similar to the following JSON:
-
- ```json
- {
- "etag": "\"0x8D7C21528E08844\"",
- "lastModified": "2020-03-06T21:27:23+00:00"
- }
- ```
-
-3. To view the output produced by the function, use the following command to list the output files generated. Replace `<triggerConnectionString>` with the connection string returned earlier. In this example, `output` is the name of the output container created earlier. If you used a different name, replace this value:
-
- ```azurecli-interactive
- az storage blob list --container-name output --connection-string <triggerConnectionString> --query '[].name' --output tsv
- ```
-
- The output of this command is similar to `sample_input_out.json`.
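The output blob name follows the `input_path`/`output_path` patterns passed to `package` earlier (`input/{blobname}.json` and `output/{blobname}_out.json`). A small sketch of that mapping (this helper is illustrative, not part of any SDK):

```python
def expected_output_blob(input_blob: str) -> str:
    """Map an input blob name to the name produced by the
    'input/{blobname}.json' -> 'output/{blobname}_out.json' patterns."""
    stem = input_blob[:-len(".json")] if input_blob.endswith(".json") else input_blob
    return stem + "_out.json"

name = expected_output_blob("sample_input.json")
```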
-
-4. To download the file and inspect the contents, use the following command. Replace `<file>` with the file name returned by the previous command. Replace `<triggerConnectionString>` with the connection string returned earlier:
-
- ```azurecli-interactive
- az storage blob download --container-name output --file <file> --name <file> --connection-string <triggerConnectionString>
- ```
-
- Once the command completes, open the file. It contains the data returned by the model.
-
-For more information on using blob triggers, see the [Create a function triggered by Azure Blob storage](../azure-functions/functions-create-storage-blob-triggered-function.md) article.
-
-## Next steps
-
-* Learn to configure your Functions App in the [Functions](../azure-functions/functions-create-function-linux-custom-image.md) documentation.
-* Learn more about Blob storage triggers [Azure Blob storage bindings](../azure-functions/functions-bindings-storage-blob.md).
-* [Deploy your model to Azure App Service](how-to-deploy-app-service.md).
-* [Consume an ML model deployed as a web service](how-to-consume-web-service.md)
-* [API Reference](/python/api/azureml-contrib-functions/azureml.contrib.functions)
machine-learning How To Deploy No Code Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-no-code-deployment.md
- Title: No code deployment (preview)
-description: No code deployment lets you deploy a model as a web service without having to manually create an entry script.
- Previously updated: 07/31/2020
-# No-code model deployment (preview)
-
-No-code model deployment is currently in preview and supports the following machine learning frameworks:
-
-## TensorFlow SavedModel format
-TensorFlow models need to be registered in **SavedModel format** to work with no-code model deployment.
-
-See [this link](https://www.tensorflow.org/guide/saved_model) for information on how to create a SavedModel.
-
-We support any TensorFlow version that is listed under "Tags" at the [TensorFlow Serving DockerHub](https://registry.hub.docker.com/r/tensorflow/serving/tags).
-
-```python
-from azureml.core import Model
-
-model = Model.register(workspace=ws,
- model_name='flowers', # Name of the registered model in your workspace.
- model_path='./flowers_model', # Local Tensorflow SavedModel folder to upload and register as a model.
- model_framework=Model.Framework.TENSORFLOW, # Framework used to create the model.
- model_framework_version='1.14.0', # Version of Tensorflow used to create the model.
- description='Flowers model')
-
-service_name = 'tensorflow-flower-service'
-service = Model.deploy(ws, service_name, [model])
-```
-
-## ONNX models
-
-ONNX model registration and deployment is supported for any ONNX inference graph. Preprocessing and postprocessing steps are not currently supported.
-
-Here is an example of how to register and deploy an MNIST ONNX model:
-
-```python
-from azureml.core import Model
-
-model = Model.register(workspace=ws,
- model_name='mnist-sample', # Name of the registered model in your workspace.
- model_path='mnist-model.onnx', # Local ONNX model to upload and register as a model.
- model_framework=Model.Framework.ONNX , # Framework used to create the model.
- model_framework_version='1.3', # Version of ONNX used to create the model.
- description='Onnx MNIST model')
-
-service_name = 'onnx-mnist-service'
-service = Model.deploy(ws, service_name, [model])
-```
-
-To score a model, see [Consume an Azure Machine Learning model deployed as a web service](./how-to-consume-web-service.md). Many ONNX projects use protobuf files to compactly store training and validation data, which can make it difficult to know what data format the service expects. As a model developer, you should document for your developers:
-
-* Input format (JSON or binary)
-* Input data shape and type (for example, an array of floats of shape [100,100,3])
-* Domain information (for instance, for an image, the color space, component order, and whether the values are normalized)
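When documenting the expected input shape, a small helper can recover it from a nested-list sample. This sketch is illustrative and assumes a regular (non-ragged) list:

```python
def shape_of(nested):
    """Infer the shape of a regular (non-ragged) nested list."""
    dims = []
    while isinstance(nested, list):
        dims.append(len(nested))
        nested = nested[0]
    return dims

# A tiny 2 x 2 x 3 "image": 2 rows, 2 columns, 3 color components.
img = [[[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]],
       [[0.7, 0.8, 0.9], [1.0, 0.0, 0.5]]]
```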
-
-If you're using PyTorch, [Exporting models from PyTorch to ONNX](https://github.com/onnx/tutorials/blob/master/tutorials/PytorchOnnxExport.ipynb) has the details on conversion and limitations.
-
-## Scikit-learn models
-
-No-code model deployment is supported for all built-in scikit-learn model types.
-
-Here is an example of how to register and deploy a sklearn model with no extra code:
-
-```python
-from azureml.core import Model
-from azureml.core.resource_configuration import ResourceConfiguration
-
-model = Model.register(workspace=ws,
- model_name='my-sklearn-model', # Name of the registered model in your workspace.
- model_path='./sklearn_regression_model.pkl', # Local file to upload and register as a model.
- model_framework=Model.Framework.SCIKITLEARN, # Framework used to create the model.
- model_framework_version='0.19.1', # Version of scikit-learn used to create the model.
- resource_configuration=ResourceConfiguration(cpu=1, memory_in_gb=0.5),
- description='Ridge regression model to predict diabetes progression.',
- tags={'area': 'diabetes', 'type': 'regression'})
-
-service_name = 'my-sklearn-service'
-service = Model.deploy(ws, service_name, [model])
-```
-
-> [!NOTE]
-> Models that support `predict_proba` use that method by default. To override this and use `predict` instead, modify the POST body as shown below:
-
-```python
-import json
-
-input_payload = json.dumps({
- 'data': [
- [ 0.03807591, 0.05068012, 0.06169621, 0.02187235, -0.0442235,
- -0.03482076, -0.04340085, -0.00259226, 0.01990842, -0.01764613]
- ],
- 'method': 'predict' # If you have a classification model, the default behavior is to run 'predict_proba'.
-})
-
-output = service.run(input_payload)
-
-print(output)
-```
-
-> [!NOTE]
-> These dependencies are included in the prebuilt scikit-learn inference container:
-
-```yaml
- - dill
- - azureml-defaults
- - inference-schema[numpy-support]
- - scikit-learn
- - numpy
- - joblib
- - pandas
- - scipy
- - sklearn_pandas
-```
-## Next steps
-
-* [Troubleshoot a failed deployment](how-to-troubleshoot-deployment.md)
-* [Deploy to Azure Kubernetes Service](how-to-deploy-azure-kubernetes-service.md)
-* [Create client applications to consume web services](how-to-consume-web-service.md)
-* [Update web service](how-to-deploy-update-web-service.md)
-* [How to deploy a model using a custom Docker image](./how-to-deploy-custom-container.md)
-* [Use TLS to secure a web service through Azure Machine Learning](how-to-secure-web-service.md)
-* [Monitor your Azure Machine Learning models with Application Insights](how-to-enable-app-insights.md)
-* [Collect data for models in production](how-to-enable-data-collection.md)
-* [Create event alerts and triggers for model deployments](how-to-use-event-grid.md)
machine-learning How To Homomorphic Encryption Seal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-homomorphic-encryption-seal.md
- Title: Deploy an encrypted inferencing service (preview)
-description: Learn how to use Microsoft SEAL to deploy an encrypted prediction service for image classification
- Previously updated: 10/21/2021
-#Customer intent: As a data scientist, I want to deploy a service that uses homomorphic encryption to make predictions on encrypted data.
-
-# How to deploy an encrypted inferencing web service (preview)
-
-Learn how to deploy an image classification model as an encrypted inferencing web service in [Azure Container Instances](../container-instances/index.yml) (ACI). The web service is a Docker container image that contains the model and scoring logic.
-
-In this guide, you use Azure Machine Learning service to:
-
-> [!div class="checklist"]
-> * Configure your environments
-> * Deploy encrypted inferencing web service
-> * Prepare test data
-> * Make encrypted predictions
-> * Clean up resources
-
-ACI is a great solution for testing and understanding the model deployment workflow. For scalable production deployments, consider using Azure Kubernetes Service. For more information, see [how to deploy and where](./how-to-deploy-and-where.md).
-
-The encryption method used in this sample is [homomorphic encryption](https://github.com/Microsoft/SEAL#homomorphic-encryption). Homomorphic encryption allows for computations to be done on encrypted data without requiring access to a secret (decryption) key. The results of the computations are encrypted and can be revealed only by the owner of the secret key.
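To see why computing on ciphertexts is possible at all, here's a toy demonstration of an additively homomorphic scheme (Paillier, with deliberately tiny primes). SEAL implements different, far more capable schemes (BFV/CKKS); this sketch only illustrates the core property that ciphertexts can be combined without the secret key:

```python
from math import gcd
import random

# Toy Paillier cryptosystem -- illustration only, never use tiny primes.
p, q = 17, 19
n = p * q                                        # public modulus
n_sq = n * n
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)     # lcm(p-1, q-1), the secret key
mu = pow(lam, -1, n)                             # valid because g = n + 1

def encrypt(m):
    """Encrypt m < n with fresh randomness r coprime to n."""
    r = random.randrange(2, n)
    while gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(n + 1, m, n_sq) * pow(r, n, n_sq)) % n_sq

def decrypt(c):
    """Recover the plaintext using the secret key lam."""
    x = pow(c, lam, n_sq)
    return ((x - 1) // n * mu) % n

# The homomorphic property: multiplying ciphertexts adds the plaintexts,
# so the sum 12 + 30 is computed without ever decrypting either value.
c = (encrypt(12) * encrypt(30)) % n_sq
```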
-
-## Prerequisites
-
-This guide assumes that you have an image classification model registered in Azure Machine Learning. If not, register the model using a [pretrained model](https://github.com/Azure/MachineLearningNotebooks/raw/master/tutorials/image-classification-mnist-dat).
-
-## Configure local environment
-
-In a Jupyter notebook:
-
-1. Import the Python packages needed for this sample.
-
- ```python
- %matplotlib inline
- import numpy as np
- import matplotlib.pyplot as plt
-
- import azureml.core
-
- # display the core SDK version number
- print("Azure ML SDK Version: ", azureml.core.VERSION)
- ```
-
-2. Install homomorphic encryption library for secure inferencing.
-
- > [!NOTE]
- > The `encrypted-inference` package is currently in preview.
-
- [`encrypted-inference`](https://pypi.org/project/encrypted-inference) is a library that contains bindings for encrypted inferencing based on [Microsoft SEAL](https://github.com/Microsoft/SEAL).
-
- ```python
- !pip install encrypted-inference==0.9
- ```
-
-## Configure the inferencing environment
-
-Create an environment for inferencing and add `encrypted-inference` package as a conda dependency.
-
-```python
-from azureml.core.environment import Environment
-from azureml.core.conda_dependencies import CondaDependencies
-
-# to install required packages
-env = Environment('tutorial-env')
-cd = CondaDependencies.create(pip_packages=['azureml-dataprep[pandas,fuse]>=1.1.14', 'azureml-defaults', 'azure-storage-blob', 'encrypted-inference==0.9'], conda_packages = ['scikit-learn==0.22.1'])
-
-env.python.conda_dependencies = cd
-
-# Register environment to re-use later
-env.register(workspace = ws)
-```
-
-## Deploy encrypted inferencing web service
-
-Deploy the model as a web service hosted in ACI.
-
-To build the correct environment for ACI, provide the following:
-
-* A scoring script to show how to use the model
-* A configuration file to build the ACI
-* A trained model
-
-### Create scoring script
-
-Create the scoring script `score.py` used by the web service for inferencing.
-
-You must include two required functions into the scoring script:
-
-* The `init()` function, which typically loads the model into a global object. This function is run only once when the Docker container is started.
-* The `run(input_data)` function uses the model to predict a value based on the input data. Inputs to and outputs from the run typically use JSON for serialization and deserialization, but other formats are supported. The function fetches the homomorphic encryption public keys that are uploaded by the service caller.
-
-```python
-%%writefile score.py
-import json
-import os
-import pickle
-import joblib
-from azure.storage.blob import BlobServiceClient, BlobClient, ContainerClient, PublicAccess
-from encrypted.inference.eiserver import EIServer
-
-def init():
- global model
- # AZUREML_MODEL_DIR is an environment variable created during deployment.
- # It is the path to the model folder (./azureml-models/$MODEL_NAME/$VERSION)
- # For multiple models, it points to the folder containing all deployed models (./azureml-models)
- model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'sklearn_mnist_model.pkl')
- model = joblib.load(model_path)
-
- global server
- server = EIServer(model.coef_, model.intercept_, verbose=True)
-
-def run(raw_data):
-
- json_properties = json.loads(raw_data)
-
- key_id = json_properties['key_id']
- conn_str = json_properties['conn_str']
- container = json_properties['container']
- data = json_properties['data']
-
- # download the public keys from blob storage
- blob_service_client = BlobServiceClient.from_connection_string(conn_str=conn_str)
- blob_client = blob_service_client.get_blob_client(container=container, blob=key_id)
- public_keys = blob_client.download_blob().readall()
-
- result = {}
- # make prediction
- result = server.predict(data, public_keys)
-
- # you can return any data type as long as it is JSON-serializable
- return result
-```
-
-### Create configuration file
-
-Create a deployment configuration file and specify the number of CPUs and gigabytes of RAM needed for your ACI container. While the right values depend on your model, the default of 1 core and 1 gigabyte of RAM is sufficient for many models. If you need more later, you have to recreate the image and redeploy the service.
-
-```python
-from azureml.core.webservice import AciWebservice
-
-aciconfig = AciWebservice.deploy_configuration(cpu_cores=1,
- memory_gb=1,
- tags={"data": "MNIST", "method" : "sklearn"},
- description='Encrypted Predict MNIST with sklearn + SEAL')
-```
-
-### Deploy to Azure Container Instances
-
-Estimated time to complete: **about 2-5 minutes**
-
-Configure the image and deploy. The following code goes through these steps:
-
-1. Create an environment object containing the dependencies needed by the model, using the `tutorial-env` environment registered earlier.
-1. Create the inference configuration necessary to deploy the model as a web service using:
-    * The scoring file (`score.py`)
-    * The environment object created in the previous step
-1. Deploy the model to the ACI container.
-1. Get the web service HTTP endpoint.
-
-```python
-%%time
-from azureml.core.webservice import Webservice
-from azureml.core.model import InferenceConfig
-from azureml.core.environment import Environment
-from azureml.core import Workspace
-from azureml.core.model import Model
-
-ws = Workspace.from_config()
-model = Model(ws, 'sklearn_mnist')
-
-myenv = Environment.get(workspace=ws, name="tutorial-env")
-inference_config = InferenceConfig(entry_script="score.py", environment=myenv)
-
-service = Model.deploy(workspace=ws,
- name='sklearn-encrypted-mnist-svc',
- models=[model],
- inference_config=inference_config,
- deployment_config=aciconfig)
-
-service.wait_for_deployment(show_output=True)
-```
-
-Get the scoring web service's HTTP endpoint, which accepts REST client calls. This endpoint can be shared with anyone who wants to test the web service or integrate it into an application.
-
-```python
-print(service.scoring_uri)
-```
-
-## Prepare test data
-
-1. Download the test data. In this case, it's saved into a directory called *data*.
-
- ```python
- import os
- from azureml.core import Dataset
- from azureml.opendatasets import MNIST
-
- data_folder = os.path.join(os.getcwd(), 'data')
- os.makedirs(data_folder, exist_ok=True)
-
- mnist_file_dataset = MNIST.get_file_dataset()
- mnist_file_dataset.download(data_folder, overwrite=True)
- ```
-
-1. Load the test data from the *data* directory.
-
- ```python
- from utils import load_data
- import os
- import glob
-
- data_folder = os.path.join(os.getcwd(), 'data')
- # note we also shrink the intensity values (X) from 0-255 to 0-1. This helps the neural network converge faster
- X_test = load_data(glob.glob(os.path.join(data_folder,"**/t10k-images-idx3-ubyte.gz"), recursive=True)[0], False) / 255.0
- y_test = load_data(glob.glob(os.path.join(data_folder,"**/t10k-labels-idx1-ubyte.gz"), recursive=True)[0], True).reshape(-1)
- ```
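The 0-1 scaling step above can be sketched without NumPy as follows (illustrative helper, not from the tutorial):

```python
def normalize(pixels):
    """Scale 0-255 pixel intensities into the 0-1 range, as done when loading X_test."""
    return [p / 255.0 for p in pixels]

scaled = normalize([0, 51, 255])
```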
-
-## Make encrypted predictions
-
-Use the test dataset with the model to get predictions.
-
-To make encrypted predictions:
-
-1. Create a new `EILinearRegressionClient`, a homomorphic encryption based client, and public keys.
-
- ```python
- from encrypted.inference.eiclient import EILinearRegressionClient
-
- # Create a new Encrypted inference client and a new secret key.
- edp = EILinearRegressionClient(verbose=True)
-
- public_keys_blob, public_keys_data = edp.get_public_keys()
- ```
-
-1. Upload homomorphic encryption generated public keys to the workspace default blob store. This will allow you to share the keys with the inference server.
-
- ```python
- import azureml.core
- from azureml.core import Workspace, Datastore
- import os
-
- ws = Workspace.from_config()
-
- datastore = ws.get_default_datastore()
- container_name=datastore.container_name
-
- # Create a local file and write the keys to it
- public_keys = open(public_keys_blob, "wb")
- public_keys.write(public_keys_data)
- public_keys.close()
-
- # Upload the file to blob store
- datastore.upload_files([public_keys_blob])
-
- # Delete the local file
- os.remove(public_keys_blob)
- ```
-
-1. Encrypt the test data
-
- ```python
- #choose any one sample from the test data
- sample_index = 1
-
- #encrypt the data
- raw_data = edp.encrypt(X_test[sample_index])
-
- ```
-
-1. Use the SDK's `run` API to invoke the service and provide the test dataset to the model to get predictions. We need to send the connection string for the blob storage where the public keys were uploaded.
-
- ```python
- import json
- from azureml.core import Webservice
-
- service = Webservice(ws, 'sklearn-encrypted-mnist-svc')
-
- #pass the connection string for blob storage to give the server access to the uploaded public keys
- conn_str_template = 'DefaultEndpointsProtocol={};AccountName={};AccountKey={};EndpointSuffix=core.windows.net'
- conn_str = conn_str_template.format(datastore.protocol, datastore.account_name, datastore.account_key)
-
- #build the json
- data = json.dumps({"data": raw_data, "key_id" : public_keys_blob, "conn_str" : conn_str, "container" : container_name })
- data = bytes(data, encoding='ASCII')
-
- print ('Making an encrypted inference web service call ')
- eresult = service.run(input_data=data)
-
- print ('Received encrypted inference results')
- ```
-
-1. Use the client to decrypt the results.
-
- ```python
- import numpy as np
-
- results = edp.decrypt(eresult)
-
- print ('Decrypted the results ', results)
-
- #Apply argmax to identify the prediction result
- prediction = np.argmax(results)
-
- print ( ' Prediction : ', prediction)
- print ( ' Actual Label : ', y_test[sample_index])
- ```
-
-## Clean up resources
-
-Delete the web service created in this sample:
-
-```python
-service.delete()
-```
-
-If you no longer plan to use the Azure resources you've created, delete them. This prevents you from being charged for unutilized resources that are still running. See this guide on how to [clean up resources](how-to-manage-workspace.md#clean-up-resources) to learn more.
machine-learning How To Inference Onnx Automl Image Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-inference-onnx-automl-image-models.md
Title: Local inference using ONNX for AutoML image
-description: Use ONNX with Azure Machine Learning automated ML to make predictions on computer vision models for classification, object detection, and segmentation.
+description: Use ONNX with Azure Machine Learning automated ML to make predictions on computer vision models for classification, object detection, and instance segmentation.
Last updated 10/18/2021
-# Make predictions with ONNX on computer vision models from AutoML
+# Make predictions with ONNX on computer vision models from AutoML
In this article, you learn how to use Open Neural Network Exchange (ONNX) to make predictions on computer vision models generated from automated machine learning (AutoML) in Azure Machine Learning.
To use ONNX for predictions, you need to:
1. Understand the inputs and outputs of an ONNX model. 1. Preprocess your data so it's in the required format for input images. 1. Perform inference with ONNX Runtime for Python.
-1. Visualize predictions for object detection and segmentation tasks.
+1. Visualize predictions for object detection and instance segmentation tasks.
[ONNX](https://onnx.ai/https://docsupdatetracker.net/about.html) is an open standard for machine learning and deep learning models. It enables model import and export (interoperability) across the popular AI frameworks. For more details, explore the [ONNX GitHub project](https://github.com/onnx/onnx).
In this guide, you'll learn how to use [Python APIs for ONNX Runtime](https://on
## Prerequisites
-* Get an AutoML-trained computer vision model for any of the supported image tasks: classification, object detection, or segmentation. [Learn more about AutoML support for computer vision tasks](how-to-auto-train-image-models.md).
+* Get an AutoML-trained computer vision model for any of the supported image tasks: classification, object detection, or instance segmentation. [Learn more about AutoML support for computer vision tasks](how-to-auto-train-image-models.md).
* Install the [onnxruntime](https://onnxruntime.ai/docs/get-started/with-python.html) package. The methods in this article have been tested with versions 1.3.0 to 1.8.0.
+
+## Download ONNX model files
-You can download ONNX model files from AutoML runs by using the Azure Machine Learning studio UI or the Azure Machine Learning Python SDK. We recommend downloading via the SDK with the experiment name and parent run ID.
+You can download ONNX model files from AutoML runs by using the Azure Machine Learning studio UI or the Azure Machine Learning Python SDK. We recommend downloading via the SDK with the experiment name and parent run ID.
+ ### Azure Machine Learning studio
With the SDK, you can select the best child run (by primary metric) with the exp
The following code returns the best child run based on the relevant primary metric.
```python
-# Select the best child run
from azureml.train.automl.run import AutoMLRun
-run_id = "" # Specify the run ID
+# Select the best child run
+run_id = '' # Specify the run ID
automl_image_run = AutoMLRun(experiment=experiment, run_id=run_id)
best_child_run = automl_image_run.get_best_child()
```
Download the *labels.json* file, which contains all the classes and labels in the training dataset.
```python
-import json
-
-labels_file = "automl_models/labels.json"
-best_child_run.download_file(name="train_artifacts/labels.json", output_file_path=labels_file)
+labels_file = 'automl_models/labels.json'
+best_child_run.download_file(name='train_artifacts/labels.json', output_file_path=labels_file)
```
Download the *model.onnx* file.
```python
-onnx_model_path = "automl_models/model.onnx"
-best_child_run.download_file(name="train_artifacts/model.onnx", output_file_path=onnx_model_path)
+onnx_model_path = 'automl_models/model.onnx'
+best_child_run.download_file(name='train_artifacts/model.onnx', output_file_path=onnx_model_path)
+```
+
+### Model generation for batch scoring
+
+By default, AutoML for Images supports batch scoring for classification. But object detection and instance segmentation models don't support batch inferencing. For batch inference with object detection and instance segmentation, use the following procedure to generate an ONNX model for the required batch size. Models generated for a specific batch size don't work for other batch sizes.
++
+```python
+from azureml.core.script_run_config import ScriptRunConfig
+from azureml.train.automl.run import AutoMLRun
+from azureml.core.workspace import Workspace
+from azureml.core import Experiment
+
+# specify experiment name
+experiment_name = ''
+# specify workspace parameters
+subscription_id = ''
+resource_group = ''
+workspace_name = ''
+# load the workspace and compute target
+ws = ''
+compute_target = ''
+experiment = Experiment(ws, name=experiment_name)
+
+# specify the run id of the automl run
+run_id = ''
+automl_image_run = AutoMLRun(experiment=experiment, run_id=run_id)
+best_child_run = automl_image_run.get_best_child()
+```
+
+Use the following model-specific arguments to submit the script. For more details on the arguments, refer to [model-specific hyperparameters](how-to-auto-train-image-models.md#configure-model-algorithms-and-hyperparameters); for supported object detection model names, refer to the [supported model algorithm section](how-to-auto-train-image-models.md#supported-model-algorithms).
+
+To get the argument values needed to create the batch scoring model, refer to the scoring scripts generated under the outputs folder of the AutoML training runs. Use the hyperparameter values available in the model settings variable inside the scoring file for the best child run.
+
+# [Multi-class image classification](#tab/multi-class)
+For multi-class image classification, the generated ONNX model for the best child run supports batch scoring by default. Therefore, no model-specific arguments are needed for this task type, and you can skip to the [Load the labels and ONNX model files](#load-the-labels-and-onnx-model-files) section.
+
+# [Multi-label image classification](#tab/multi-label)
+For multi-label image classification, the generated ONNX model for the best child run supports batch scoring by default. Therefore, no model-specific arguments are needed for this task type, and you can skip to the [Load the labels and ONNX model files](#load-the-labels-and-onnx-model-files) section.
+
+# [Object detection with Faster R-CNN or RetinaNet](#tab/object-detect-cnn)
+```python
+arguments = ['--model_name', 'fasterrcnn_resnet34_fpn', # enter the faster rcnn or retinanet model name
+ '--batch_size', 8, # enter the batch size of your choice
+ '--height_onnx', 600, # enter the height of input to ONNX model
+ '--width_onnx', 800, # enter the width of input to ONNX model
+ '--experiment_name', experiment_name,
+ '--subscription_id', subscription_id,
+ '--resource_group', resource_group,
+ '--workspace_name', workspace_name,
+ '--run_id', run_id,
+ '--task_type', 'image-object-detection',
+ '--min_size', 600, # minimum size of the image to be rescaled before feeding it to the backbone
+ '--max_size', 1333, # maximum size of the image to be rescaled before feeding it to the backbone
+ '--box_score_thresh', 0.3, # threshold to return proposals with a classification score > box_score_thresh
+ '--box_nms_thresh', 0.5, # NMS threshold for the prediction head
+ '--box_detections_per_img', 100 # maximum number of detections per image, for all classes
+ ]
+```
+
+# [Object detection with YOLO](#tab/object-detect-yolo)
+
+```python
+arguments = ['--model_name', 'yolov5', # enter the yolo model name
+ '--batch_size', 8, # enter the batch size of your choice
+ '--height_onnx', 640, # enter the height of input to ONNX model
+ '--width_onnx', 640, # enter the width of input to ONNX model
+ '--experiment_name', experiment_name,
+ '--subscription_id', subscription_id,
+ '--resource_group', resource_group,
+ '--workspace_name', workspace_name,
+ '--run_id', run_id,
+ '--task_type', 'image-object-detection',
+ '--img_size', 640, # image size for inference
+ '--model_size', 'medium', # size of the yolo model
+ '--box_score_thresh', 0.1, # threshold to return proposals with a classification score > box_score_thresh
+ '--box_iou_thresh', 0.5 # IOU threshold used during inference in nms post processing
+ ]
+```
+
+# [Instance segmentation](#tab/instance-segmentation)
+
+```python
+arguments = ['--model_name', 'maskrcnn_resnet50_fpn', # enter the maskrcnn model name
+ '--batch_size', 8, # enter the batch size of your choice
+ '--height_onnx', 600, # enter the height of input to ONNX model
+ '--width_onnx', 800, # enter the width of input to ONNX model
+ '--experiment_name', experiment_name,
+ '--subscription_id', subscription_id,
+ '--resource_group', resource_group,
+ '--workspace_name', workspace_name,
+ '--run_id', run_id,
+ '--task_type', 'image-instance-segmentation',
+ '--min_size', 600, # minimum size of the image to be rescaled before feeding it to the backbone
+ '--max_size', 1333, # maximum size of the image to be rescaled before feeding it to the backbone
+ '--box_score_thresh', 0.3, # threshold to return proposals with a classification score > box_score_thresh
+ '--box_nms_thresh', 0.5, # NMS threshold for the prediction head
+ '--box_detections_per_img', 100 # maximum number of detections per image, for all classes
+ ]
+```
+++
+Download the `ONNX_batch_model_generator_automl_for_images.py` file from the [azureml-examples GitHub repository](https://github.com/Azure/azureml-examples/tree/main/python-sdk/tutorials/automl-with-azureml) and keep it in the current directory. Then use [ScriptRunConfig](/python/api/azureml-core/azureml.core.scriptrunconfig) to submit the script and generate an ONNX model of a specific batch size. In the following code, the trained model environment is used to submit the script, which generates and saves the ONNX model to the outputs directory.
+```python
+script_run_config = ScriptRunConfig(source_directory='.',
+ script='ONNX_batch_model_generator_automl_for_images.py',
+ arguments=arguments,
+ compute_target=compute_target,
+ environment=best_child_run.get_environment())
+
+remote_run = experiment.submit(script_run_config)
+remote_run.wait_for_completion(wait_post_processing=True)
+```
+
+Once the batch model is generated, either download it from **Outputs+logs** > **outputs** manually, or use the following method:
+```python
+batch_size = 8 # use the batch size used to generate the model
+onnx_model_path = 'automl_models/model.onnx' # local path to save the model
+remote_run.download_file(name='outputs/model_'+str(batch_size)+'.onnx', output_file_path=onnx_model_path)
```
After the model downloading step, you use the ONNX Runtime Python package to perform inferencing by using the *model.onnx* file. For demonstration purposes, this article uses the datasets from [How to prepare image datasets](how-to-prepare-datasets-for-automl-images.md) for each vision task.
We've trained the models for all vision tasks with their respective datasets to
The following code snippet loads *labels.json*, where class names are ordered. That is, if the ONNX model predicts a label ID as 2, then it corresponds to the label name given at the third index in the *labels.json* file.
```python
+import json
import onnxruntime

labels_file = "automl_models/labels.json"
for idx, output in enumerate(range(len(sess_output))):
Every ONNX model has a predefined set of input and output formats.
-# [Multi-class image classification ](#tab/multi-class)
+# [Multi-class image classification](#tab/multi-class)
-This example applies the model trained on the [fridgeObjects](https://cvbp-secondary.z19.web.core.windows.net/datasets/image_classification/fridgeObjects.zip) dataset with 134 images and 4 classes/labels to explain ONNX model inference. For more information on training an image classification task, see the [multi-class image classification notebook](https://github.com/Azure/azureml-examples/tree/81c7d33ed82f62f419472bc11f7e1bad448ff15b/python-sdk/tutorials/automl-with-azureml/image-classification-multiclass).
+This example applies the model trained on the [fridgeObjects](https://cvbp-secondary.z19.web.core.windows.net/datasets/image_classification/fridgeObjects.zip) dataset with 134 images and 4 classes/labels to explain ONNX model inference. For more information on training an image classification task, see the [multi-class image classification notebook](https://github.com/Azure/azureml-examples/tree/main/python-sdk/tutorials/automl-with-azureml/image-classification-multiclass).
### Input format
The output is an array of logits for all the classes/labels.
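Since the model returns raw logits rather than probabilities, a softmax over the class axis converts them for interpretation. A minimal sketch with made-up logit values (not real model output):

```python
import numpy as np

# Made-up logits for a batch of 1 image and 4 classes (not real model output)
logits = np.array([[2.2, -0.5, 0.1, 1.3]])

# Numerically stable softmax over the class axis
exp = np.exp(logits - logits.max(axis=1, keepdims=True))
probs = exp / exp.sum(axis=1, keepdims=True)
pred_class = int(probs.argmax(axis=1)[0])  # index of the most likely class
```

`pred_class` indexes into the ordered class names loaded from *labels.json*.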
# [Multi-label image classification](#tab/multi-label)
-This example uses the model trained on the [multi-label fridgeObjects dataset](https://cvbp-secondary.z19.web.core.windows.net/datasets/image_classification/multilabelFridgeObjects.zip) with 128 images and 4 classes/labels to explain ONNX model inference. For more information on model training for multi-label image classification, see the [multi-label image classification notebook](https://github.com/Azure/azureml-examples/tree/81c7d33ed82f62f419472bc11f7e1bad448ff15b/python-sdk/tutorials/automl-with-azureml/image-classification-multilabel).
+This example uses the model trained on the [multi-label fridgeObjects dataset](https://cvbp-secondary.z19.web.core.windows.net/datasets/image_classification/multilabelFridgeObjects.zip) with 128 images and 4 classes/labels to explain ONNX model inference. For more information on model training for multi-label image classification, see the [multi-label image classification notebook](https://github.com/Azure/azureml-examples/tree/main/python-sdk/tutorials/automl-with-azureml/image-classification-multilabel).
### Input format
The output is an array of logits for all the classes/labels.
| output1 | `(batch_size, num_classes)` | ndarray(float) | Model returns logits (without `sigmoid`). For instance, for batch size 1 and 4 classes, it returns `(1, 4)`. |
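Because the multi-label model returns logits without `sigmoid`, each label's probability has to be computed independently before thresholding. A small sketch with dummy values and a hypothetical 0.5 threshold:

```python
import numpy as np

# Dummy logits for 1 image and 4 labels (not real model output)
logits = np.array([[1.5, -2.0, 0.3, -0.7]])
probs = 1.0 / (1.0 + np.exp(-logits))   # per-label sigmoid
predicted = (probs > 0.5).astype(int)   # hypothetical 0.5 threshold
```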
-# [Object detection with Faster R-CNN](#tab/object-detect-cnn)
+# [Object detection with Faster R-CNN or RetinaNet](#tab/object-detect-cnn)
-This object detection example uses the model trained on the [fridgeObjects detection dataset](https://cvbp-secondary.z19.web.core.windows.net/datasets/object_detection/odFridgeObjects.zip) of 128 images and 4 classes/labels to explain ONNX model inference. This example trains Faster R-CNN models to demonstrate inference steps. For more information on training object detection models, see the [object detection notebook](https://github.com/Azure/azureml-examples/tree/81c7d33ed82f62f419472bc11f7e1bad448ff15b/python-sdk/tutorials/automl-with-azureml/image-object-detection).
+This object detection example uses the model trained on the [fridgeObjects detection dataset](https://cvbp-secondary.z19.web.core.windows.net/datasets/object_detection/odFridgeObjects.zip) of 128 images and 4 classes/labels to explain ONNX model inference. This example trains Faster R-CNN models to demonstrate inference steps. For more information on training object detection models, see the [object detection notebook](https://github.com/Azure/azureml-examples/tree/main/python-sdk/tutorials/automl-with-azureml/image-object-detection).
### Input format
Input is a preprocessed image.
### Output format
-The output is a tuple of boxes, labels, and scores.
+The output is a tuple of `output_names` and predictions. Here, `output_names` and `predictions` are lists, each with length 3*`batch_size`. For Faster R-CNN, the outputs are ordered as boxes, labels, and scores, whereas for RetinaNet the order is boxes, scores, labels.
| Output name | Output shape | Output type | Description |
| -- |-|--||
+| `output_names` | `(3*batch_size)` | List of keys | For a batch size of 2, `output_names` will be `['boxes_0', 'labels_0', 'scores_0', 'boxes_1', 'labels_1', 'scores_1']` |
+| `predictions` | `(3*batch_size)` | List of ndarray(float) | For a batch size of 2, `predictions` will take the shape of `[(n1_boxes, 4), (n1_boxes), (n1_boxes), (n2_boxes, 4), (n2_boxes), (n2_boxes)]`. Here, the values at each index correspond to the same index in `output_names`. |
++
+The following table describes the boxes, labels, and scores returned for each sample in the batch of images.
+
+| Name | Shape | Type | Description |
+| -- |-|--||
| Boxes | `(n_boxes, 4)`, where each box has `x_min, y_min, x_max, y_max` | ndarray(float) | Model returns *n* boxes with their top-left and bottom-right coordinates. |
| Labels | `(n_boxes)`| ndarray(float) | Label or class ID of an object in each box. |
| Scores | `(n_boxes)` | ndarray(float) | Confidence score of an object in each box. |
The output is a tuple of boxes, labels, and scores.
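The flattened `output_names`/`predictions` lists can be regrouped per image. A sketch with dummy values (not real model output), assuming the Faster R-CNN boxes/labels/scores ordering:

```python
# Dummy flattened outputs for a batch of 2 images (not real model output)
output_names = ['boxes_0', 'labels_0', 'scores_0', 'boxes_1', 'labels_1', 'scores_1']
predictions = [[[0, 0, 10, 10]], [1], [0.9], [[5, 5, 20, 20]], [2], [0.8]]

# Group each named output under its image index
per_image = {}
for name, pred in zip(output_names, predictions):
    kind, idx = name.rsplit('_', 1)            # e.g. ('boxes', '0')
    per_image.setdefault(int(idx), {})[kind] = pred
```

After grouping, `per_image[0]` holds the boxes, labels, and scores for the first sample in the batch.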
# [Object detection with YOLO](#tab/object-detect-yolo)
-This object detection example uses the model trained on the [fridgeObjects detection dataset](https://cvbp-secondary.z19.web.core.windows.net/datasets/object_detection/odFridgeObjects.zip) of 128 images and 4 classes/labels to explain ONNX model inference. This example trains YOLO models to demonstrate inference steps. For more information on training object detection models, see the [object detection notebook](https://github.com/Azure/azureml-examples/tree/81c7d33ed82f62f419472bc11f7e1bad448ff15b/python-sdk/tutorials/automl-with-azureml/image-object-detection).
+This object detection example uses the model trained on the [fridgeObjects detection dataset](https://cvbp-secondary.z19.web.core.windows.net/datasets/object_detection/odFridgeObjects.zip) of 128 images and 4 classes/labels to explain ONNX model inference. This example trains YOLO models to demonstrate inference steps. For more information on training object detection models, see the [object detection notebook](https://github.com/Azure/azureml-examples/tree/main/python-sdk/tutorials/automl-with-azureml/image-object-detection).
### Input format
The input is a preprocessed image, with the shape `(1, 3, 640, 640)` for a batch
| Input name | Input shape | Input type | Description |
| -- |-|--|--|
-| Input | `(batch_size, num_channels, height, width)` | ndarray(float) | Input is a preprocessed image, with the shape `(1, 3, 600, 800)` for a batch size of 1, and a height of 600 and width of 800.|
+| Input | `(batch_size, num_channels, height, width)` | ndarray(float) | Input is a preprocessed image, with the shape `(1, 3, 640, 640)` for a batch size of 1, and a height of 640 and width of 640.|
### Output format
-The output is a list of boxes, labels, and scores. For YOLO, you need the first output to extract boxes, labels, and scores.
-
+ONNX model predictions contain multiple outputs. The first output is needed to perform non-max suppression for detections. For ease of use, automated ML displays the output format after the NMS postprocessing step. The output after NMS is a list of boxes, labels, and scores for each sample in the batch.
+
| Output name | Output shape | Output type | Description |
| -- |-|--||
-| Output | `(n_boxes, 6)`, where each box has `x_min, y_min, x_max, y_max, confidence_score, class_id` | ndarray(float) | Model returns *n* boxes with their top-left and bottom-right coordinates, along with object confidence scores, class IDs, or label IDs. |
+| Output | `(batch_size)`| List of ndarray(float) | Model returns box detections for each sample in the batch |
+
+Each element in the list contains the box detections for one sample, with shape `(n_boxes, 6)`, where each box has `x_min, y_min, x_max, y_max, confidence_score, class_id`.
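The `(n_boxes, 6)` layout can be unpacked into boxes, scores, and class IDs, sketched here with dummy detections (not real model output):

```python
# Dummy YOLO-style detections for one sample: x_min, y_min, x_max, y_max, score, class_id
detections = [
    [10.0, 20.0, 110.0, 220.0, 0.92, 1.0],
    [30.0, 40.0, 130.0, 240.0, 0.55, 3.0],
]
boxes = [d[:4] for d in detections]          # corner coordinates per box
scores = [d[4] for d in detections]          # confidence scores
class_ids = [int(d[5]) for d in detections]  # label IDs as integers
```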
# [Instance segmentation](#tab/instance-segmentation)
-For this instance segmentation example, you use the Mask R-CNN model that has been trained on the [fridgeObjects dataset](https://cvbp-secondary.z19.web.core.windows.net/datasets/object_detection/odFridgeObjectsMask.zip) with 128 images and 4 classes/labels to explain ONNX model inference. For more information on training of the instance segmentation model, see the [instance segmentation notebook](https://github.com/Azure/azureml-examples/tree/81c7d33ed82f62f419472bc11f7e1bad448ff15b/python-sdk/tutorials/automl-with-azureml/image-instance-segmentation).
+For this instance segmentation example, you use the Mask R-CNN model that has been trained on the [fridgeObjects dataset](https://cvbp-secondary.z19.web.core.windows.net/datasets/object_detection/odFridgeObjectsMask.zip) with 128 images and 4 classes/labels to explain ONNX model inference. For more information on training of the instance segmentation model, see the [instance segmentation notebook](https://github.com/Azure/azureml-examples/tree/main/python-sdk/tutorials/automl-with-azureml/image-instance-segmentation).
>[!IMPORTANT] > Only Mask R-CNN is supported for instance segmentation tasks. The input and output formats are based on Mask R-CNN only.
The input is a preprocessed image. The ONNX model for Mask R-CNN has been export
### Output format
-The output is a tuple of boxes (instances), labels, and scores
-
+The output is a tuple of `output_names` and predictions. Here, `output_names` and `predictions` are lists, each with length 4*`batch_size`.
+
| Output name | Output shape | Output type | Description |
| -- |-|--||
+| `output_names` | `(4*batch_size)` | List of keys | For a batch size of 2, `output_names` will be `['boxes_0', 'labels_0', 'scores_0', 'masks_0', 'boxes_1', 'labels_1', 'scores_1', 'masks_1']` |
+| `predictions` | `(4*batch_size)` | List of ndarray(float) | For a batch size of 2, `predictions` will take the shape of `[(n1_boxes, 4), (n1_boxes), (n1_boxes), (n1_boxes, 1, height_onnx, width_onnx), (n2_boxes, 4), (n2_boxes), (n2_boxes), (n2_boxes, 1, height_onnx, width_onnx)]`. Here, the values at each index correspond to the same index in `output_names`. |
+
+| Name | Shape | Type | Description |
+| -- |-|--||
| Boxes | `(n_boxes, 4)`, where each box has `x_min, y_min, x_max, y_max` | ndarray(float) | Model returns *n* boxes with their top-left and bottom-right coordinates. |
| Labels | `(n_boxes)`| ndarray(float) | Label or class ID of an object in each box. |
| Scores | `(n_boxes)` | ndarray(float) | Confidence score of an object in each box. |
-| Masks | `(n_boxes, 1, height, width)` | ndarray(float) | Masks (polygons) of detected objects with the shape height and width of an input image. |
+| Masks | `(n_boxes, 1, height_onnx, width_onnx)` | ndarray(float) | Masks (polygons) of detected objects, with the height and width of the input image. |
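Mask values are per-pixel floats, so a threshold (0.5 is used here as an assumption) converts each `(1, height_onnx, width_onnx)` mask into a binary mask:

```python
import numpy as np

# Dummy 1x2x2 mask probabilities for one detected instance (not real model output)
mask = np.array([[[0.1, 0.7],
                  [0.8, 0.2]]])
binary_mask = (mask[0] > 0.5).astype(np.uint8)  # hypothetical 0.5 threshold
```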
Perform the following preprocessing steps for the ONNX model inference:
1. Convert the image to RGB.
2. Resize the image to `valid_resize_size` and `valid_resize_size` values that correspond to the values used in the transformation of the validation dataset during training. The default value for `valid_resize_size` is 256.
-3. Center crop the image to `height_onnx_crop_size` and `width_onnx_crop_size`. This corresponds to `valid_crop_size` with the default value of 224.
+3. Center crop the image to `height_onnx_crop_size` and `width_onnx_crop_size`. It corresponds to `valid_crop_size` with the default value of 224.
4. Change `HxWxC` to `CxHxW`.
5. Convert to float type.
6. Normalize with ImageNet's `mean` = `[0.485, 0.456, 0.406]` and `std` = `[0.229, 0.224, 0.225]`.
batch, channel, height_onnx_crop_size, width_onnx_crop_size
### Without PyTorch ```python
+import glob
+import numpy as np
+from PIL import Image
+ def preprocess(image, resize_size, crop_size_onnx):
- """perform pre-processing on raw input image
-
+ """Perform pre-processing on raw input image
+
    :param image: raw input image
    :type image: PIL image
    :param resize_size: value to resize the image
    :type resize_size: Int
- :param crop_size_onnx: expected crop size of an input image in onnx model
+ :param crop_size_onnx: expected height of an input image in onnx model
    :type crop_size_onnx: Int
    :return: pre-processed image in numpy format
- :rtype: ndarray of shape 1xCxHxW
+ :rtype: ndarray 1xCxHxW
    """
    image = image.convert('RGB')
    # resize
    image = image.resize((resize_size, resize_size))
- # center crop
+ # center crop
    left = (resize_size - crop_size_onnx)/2
    top = (resize_size - crop_size_onnx)/2
    right = (resize_size + crop_size_onnx)/2
    bottom = (resize_size + crop_size_onnx)/2
    image = image.crop((left, top, right, bottom))
- np_image = np.array(image)
+ np_image = np.array(image)
# HWC -> CHW
- np_image = np_image.transpose(2, 0, 1)# CxHxW
-
+ np_image = np_image.transpose(2, 0, 1) # CxHxW
    # normalize the image
    mean_vec = np.array([0.485, 0.456, 0.406])
    std_vec = np.array([0.229, 0.224, 0.225])
    norm_img_data = np.zeros(np_image.shape).astype('float32')
    for i in range(np_image.shape[0]):
- norm_img_data[i,:,:] = (np_image[i,:,:]/255 - mean_vec[i]) / std_vec[i]
- np_image = np.expand_dims(norm_img_data, axis=0)# 1xCxHxW
+ norm_img_data[i,:,:] = (np_image[i,:,:]/255 - mean_vec[i])/std_vec[i]
+
+ np_image = np.expand_dims(norm_img_data, axis=0) # 1xCxHxW
return np_image
-from PIL import Image
+# following code loads only batch_size number of images for demonstrating ONNX inference
+# make sure that the data directory has at least batch_size number of images
+test_images_path = "automl_models_multi_cls/test_images_dir/*" # replace with path to images
+# Select batch size needed
+batch_size = 8
# you can modify resize_size based on your trained model
-resize_size = 256
-
+resize_size = 256
# height and width will be the same for classification
crop_size_onnx = height_onnx_crop_size
-test_image_path = "automl_models/test_image.jpg"
-img = Image.open(test_image_path)
-print("Input image dimensions: ", img.size)
-display(img)
-img_data = preprocess(img, resize_size, crop_size_onnx)
+image_files = glob.glob(test_images_path)
+img_processed_list = []
+for i in range(batch_size):
+ img = Image.open(image_files[i])
+ img_processed_list.append(preprocess(img, resize_size, crop_size_onnx))
+
+if len(img_processed_list) > 1:
+ img_data = np.concatenate(img_processed_list)
+elif len(img_processed_list) == 1:
+ img_data = img_processed_list[0]
+else:
+ img_data = None
+
+assert batch_size == img_data.shape[0]
```
### With PyTorch
```python
+import glob
import torch
+import numpy as np
+from PIL import Image
from torchvision import transforms
-# tested with torch version: '1.7.1', torchvision version: '0.8.2')
def _make_3d_tensor(x) -> torch.Tensor:
    """This function is for images that have fewer channels.
def preprocess(image, resize_size, crop_size_onnx):
    img_data = np.expand_dims(img_data, axis=0)
    return img_data
+# following code loads only batch_size number of images for demonstrating ONNX inference
+# make sure that the data directory has at least batch_size number of images
+
+test_images_path = "automl_models_multi_cls/test_images_dir/*" # replace with path to images
+# Select batch size needed
+batch_size = 8
# you can modify resize_size based on your trained model
resize_size = 256
+# height and width will be the same for classification
+crop_size_onnx = height_onnx_crop_size
-#height and width will be the same for classification
-test_image_path = "automl_models/test_image.jpg"
-img = Image.open(test_image_path)
-print("Input image dimensions: ", img.size)
-display(img)
-img_data = preprocess(img, resize_size, crop_size_onnx)
+image_files = glob.glob(test_images_path)
+img_processed_list = []
+for i in range(batch_size):
+ img = Image.open(image_files[i])
+ img_processed_list.append(preprocess(img, resize_size, crop_size_onnx))
+
+if len(img_processed_list) > 1:
+ img_data = np.concatenate(img_processed_list)
+elif len(img_processed_list) == 1:
+ img_data = img_processed_list[0]
+else:
+ img_data = None
+
+assert batch_size == img_data.shape[0]
```
# [Multi-label image classification](#tab/multi-label)
batch, channel, height_onnx_crop_size, width_onnx_crop_size
### Without PyTorch ```python
+import glob
+import numpy as np
+from PIL import Image
+ def preprocess(image, resize_size, crop_size_onnx):
- """perform pre-processing on raw input image
-
+ """Perform pre-processing on raw input image
+
    :param image: raw input image
    :type image: PIL image
    :param resize_size: value to resize the image
    :type resize_size: Int
- :param crop_size_onnx: expected crop size of an input image in onnx model
+ :param crop_size_onnx: expected height of an input image in onnx model
    :type crop_size_onnx: Int
    :return: pre-processed image in numpy format
- :rtype: ndarray of shape 1xCxHxW
+ :rtype: ndarray 1xCxHxW
    """
    image = image.convert('RGB')
    # resize
    image = image.resize((resize_size, resize_size))
- # center crop
+ # center crop
    left = (resize_size - crop_size_onnx)/2
    top = (resize_size - crop_size_onnx)/2
    right = (resize_size + crop_size_onnx)/2
    bottom = (resize_size + crop_size_onnx)/2
    image = image.crop((left, top, right, bottom))
- np_image = np.array(image)
+ np_image = np.array(image)
# HWC -> CHW
- np_image = np_image.transpose(2, 0, 1)# CxHxW
+ np_image = np_image.transpose(2, 0, 1) # CxHxW
    # normalize the image
    mean_vec = np.array([0.485, 0.456, 0.406])
    std_vec = np.array([0.229, 0.224, 0.225])
    norm_img_data = np.zeros(np_image.shape).astype('float32')
    for i in range(np_image.shape[0]):
- norm_img_data[i,:,:] = (np_image[i,:,:]/255 - mean_vec[i]) / std_vec[i]
- np_image = np.expand_dims(norm_img_data, axis=0)# 1xCxHxW
+ norm_img_data[i,:,:] = (np_image[i,:,:] / 255 - mean_vec[i]) / std_vec[i]
+ np_image = np.expand_dims(norm_img_data, axis=0) # 1xCxHxW
return np_image
-from PIL import Image
+# following code loads only batch_size number of images for demonstrating ONNX inference
+# make sure that the data directory has at least batch_size number of images
+test_images_path = "automl_models_multi_label/test_images_dir/*" # replace with path to images
+# Select batch size needed
+batch_size = 8
# you can modify resize_size based on your trained model
-resize_size = 256
-
+resize_size = 256
# height and width will be the same for classification
crop_size_onnx = height_onnx_crop_size
-test_image_path = "automl_models/test_image.jpg"
-img = Image.open(test_image_path)
-print("Input image dimensions: ", img.size)
-display(img)
-img_data = preprocess(img, resize_size, crop_size_onnx)
+
+image_files = glob.glob(test_images_path)
+img_processed_list = []
+for i in range(batch_size):
+ img = Image.open(image_files[i])
+ img_processed_list.append(preprocess(img, resize_size, crop_size_onnx))
+
+if len(img_processed_list) > 1:
+ img_data = np.concatenate(img_processed_list)
+elif len(img_processed_list) == 1:
+ img_data = img_processed_list[0]
+else:
+ img_data = None
+
+assert batch_size == img_data.shape[0]
```
### With PyTorch
```python
+import glob
import torch
+import numpy as np
+from PIL import Image
from torchvision import transforms
-# tested with torch version: '1.7.1', torchvision version: '0.8.2')
def _make_3d_tensor(x) -> torch.Tensor:
    """This function is for images that have fewer channels.
def _make_3d_tensor(x) -> torch.Tensor:
    return x if x.shape[0] == 3 else x.expand((3, x.shape[1], x.shape[2]))

def preprocess(image, resize_size, crop_size_onnx):
+ """Perform pre-processing on raw input image
+
+ :param image: raw input image
+ :type image: PIL image
+ :param resize_size: value to resize the image
+    :type resize_size: Int
+ :param crop_size_onnx: expected height of an input image in onnx model
+ :type crop_size_onnx: Int
+ :return: pre-processed image in numpy format
+ :rtype: ndarray 1xCxHxW
+ """
    transform = transforms.Compose([
        transforms.Resize(resize_size),
        transforms.CenterCrop(crop_size_onnx),
def preprocess(image, resize_size, crop_size_onnx):
    img_data = transform(image)
    img_data = img_data.numpy()
    img_data = np.expand_dims(img_data, axis=0)
+
return img_data
+# following code loads only batch_size number of images for demonstrating ONNX inference
+# make sure that the data directory has at least batch_size number of images
+
+test_images_path = "automl_models_multi_label/test_images_dir/*" # replace with path to images
+# Select batch size needed
+batch_size = 8
# you can modify resize_size based on your trained model
resize_size = 256
# height and width will be the same for classification
crop_size_onnx = height_onnx_crop_size
-test_image_path = "automl_models/test_image.jpg"
-img = Image.open(test_image_path)
-print("Input image dimensions: ", img.size)
-display(img)
-img_data = preprocess(img, resize_size, crop_size_onnx)
-```
+image_files = glob.glob(test_images_path)
+img_processed_list = []
+for i in range(batch_size):
+ img = Image.open(image_files[i])
+ img_processed_list.append(preprocess(img, resize_size, crop_size_onnx))
+
+if len(img_processed_list) > 1:
+ img_data = np.concatenate(img_processed_list)
+elif len(img_processed_list) == 1:
+ img_data = img_processed_list[0]
+else:
+ img_data = None
+
+assert batch_size == img_data.shape[0]
+```
-# [Object detection with Faster R-CNN](#tab/object-detect-cnn)
+# [Object detection with Faster R-CNN or RetinaNet](#tab/object-detect-cnn)
For object detection with the Faster R-CNN algorithm, follow the same preprocessing steps as image classification, except for image cropping. You can resize the image with height `600` and width `800`. You can get the expected input height and width with the following code.
batch, channel, height_onnx, width_onnx
Then, perform the preprocessing steps. ```python
+import glob
+import numpy as np
+from PIL import Image
+ def preprocess(image, height_onnx, width_onnx):
- """perform pre-processing on raw input image
+ """Perform pre-processing on raw input image
    :param image: raw input image
    :type image: PIL image
def preprocess(image, height_onnx, width_onnx):
    :param width_onnx: expected width of an input image in onnx model
    :type width_onnx: Int
    :return: pre-processed image in numpy format
- :rtype: ndarray of shape 1xCxHxW
+ :rtype: ndarray 1xCxHxW
""" image = image.convert('RGB') image = image.resize((width_onnx, height_onnx)) np_image = np.array(image) # HWC -> CHW
- np_image = np_image.transpose(2, 0, 1)# CxHxW
-
- # Normalize the image
+ np_image = np_image.transpose(2, 0, 1) # CxHxW
+ # normalize the image
    mean_vec = np.array([0.485, 0.456, 0.406])
    std_vec = np.array([0.229, 0.224, 0.225])
    norm_img_data = np.zeros(np_image.shape).astype('float32')
    for i in range(np_image.shape[0]):
- norm_img_data[i,:,:] = (np_image[i,:,:]/255 - mean_vec[i]) / std_vec[i]
-
-
- np_image = np.expand_dims(norm_img_data, axis=0)# 1xCxHxW
+ norm_img_data[i,:,:] = (np_image[i,:,:] / 255 - mean_vec[i]) / std_vec[i]
+ np_image = np.expand_dims(norm_img_data, axis=0) # 1xCxHxW
return np_image
-from PIL import Image
+# following code loads only batch_size number of images for demonstrating ONNX inference
+# make sure that the data directory has at least batch_size number of images
-test_image_path = "automl_models/test_image.jpg"
-img = Image.open(test_image_path)
-print("Input image dimensions: ", img.size)
-display(img)
-img_data = preprocess(img, height_onnx, width_onnx)
+test_images_path = "automl_models_od/test_images_dir/*" # replace with path to images
+image_files = glob.glob(test_images_path)
+img_processed_list = []
+for i in range(batch_size):
+ img = Image.open(image_files[i])
+ img_processed_list.append(preprocess(img, height_onnx, width_onnx))
+
+if len(img_processed_list) > 1:
+ img_data = np.concatenate(img_processed_list)
+elif len(img_processed_list) == 1:
+ img_data = img_processed_list[0]
+else:
+ img_data = None
+
+assert batch_size == img_data.shape[0]
```

# [Object detection with YOLO](#tab/object-detect-yolo)
batch, channel, height_onnx, width_onnx
For preprocessing required for YOLO, refer to [yolo_onnx_preprocessing_utils.py](https://github.com/Azure/azureml-examples/tree/main/python-sdk/tutorials/automl-with-azureml/image-object-detection).

```python
+import glob
+import numpy as np
from yolo_onnx_preprocessing_utils import preprocess
-test_image_path = "automl_models_od_yolo/test_image.jpg"
-img_data, pad = preprocess(test_image_path)
+# use height and width based on the generated model
+test_images_path = "automl_models_od_yolo/test_images_dir/*" # replace with path to images
+image_files = glob.glob(test_images_path)
+img_processed_list = []
+pad_list = []
+for i in range(batch_size):
+ img_processed, pad = preprocess(image_files[i])
+ img_processed_list.append(img_processed)
+ pad_list.append(pad)
+
+if len(img_processed_list) > 1:
+ img_data = np.concatenate(img_processed_list)
+elif len(img_processed_list) == 1:
+ img_data = img_processed_list[0]
+else:
+ img_data = None
+
+assert batch_size == img_data.shape[0]
```

# [Instance segmentation](#tab/instance-segmentation)
Perform the following preprocessing steps for the ONNX model inference:
For `resize_height` and `resize_width`, you can also use the values that you used during training, bounded by the `min_size` and `max_size` [hyperparameters](how-to-auto-train-image-models.md#configure-model-algorithms-and-hyperparameters) for Mask R-CNN.

```python
+import glob
+import numpy as np
+from PIL import Image
+ def preprocess(image, resize_height, resize_width):
- """perform pre-processing on raw input image
-
+ """Perform pre-processing on raw input image
+
    :param image: raw input image
    :type image: PIL image
    :param resize_height: resize height of an input image
def preprocess(image, resize_height, resize_width):
""" image = image.convert('RGB')
- image = image.resize((resize_width,resize_height))
+ image = image.resize((resize_width, resize_height))
    np_image = np.array(image)
    # HWC -> CHW
- np_image = np_image.transpose(2, 0, 1)# CxHxW
-
+ np_image = np_image.transpose(2, 0, 1) # CxHxW
    # normalize the image
    mean_vec = np.array([0.485, 0.456, 0.406])
    std_vec = np.array([0.229, 0.224, 0.225])
    norm_img_data = np.zeros(np_image.shape).astype('float32')
    for i in range(np_image.shape[0]):
- norm_img_data[i,:,:] = (np_image[i,:,:]/255 - mean_vec[i]) / std_vec[i]
- np_image = np.expand_dims(norm_img_data, axis=0)# 1xCxHxW
+ norm_img_data[i,:,:] = (np_image[i,:,:]/255 - mean_vec[i])/std_vec[i]
+ np_image = np.expand_dims(norm_img_data, axis=0) # 1xCxHxW
return np_image
-from PIL import Image
+# following code loads only batch_size number of images for demonstrating ONNX inference
+# make sure that the data directory has at least batch_size number of images
# use height and width based on the trained model
-resize_height, resize_width = 600, 800
-test_image_path = 'automl_models/test_image.jpg'
-img = Image.open(test_image_path)
-print("Input image dimensions: ", img.size)
-display(img)
-img_data = preprocess(img, resize_height, resize_width)
-
+# use height and width based on the generated model
+test_images_path = "automl_models_is/test_images_dir/*" # replace with path to images
+image_files = glob.glob(test_images_path)
+img_processed_list = []
+for i in range(batch_size):
+ img = Image.open(image_files[i])
+ img_processed_list.append(preprocess(img, height_onnx, width_onnx))
+
+if len(img_processed_list) > 1:
+ img_data = np.concatenate(img_processed_list)
+elif len(img_processed_list) == 1:
+ img_data = img_processed_list[0]
+else:
+ img_data = None
+
+assert batch_size == img_data.shape[0]
```
img_data = preprocess(img, resize_height, resize_width)
Inferencing with ONNX Runtime differs for each computer vision task.
->[!WARNING]
-> Batch scoring is not currently supported for all computer vision tasks.
# [Multi-class image classification](#tab/multi-class)

```python
-def get_predictions_from_ONNX(onnx_session,img_data):
- """perform predictions with ONNX Runtime
+def get_predictions_from_ONNX(onnx_session, img_data):
+ """Perform predictions with ONNX runtime
- :param onnx_session: onnx inference session
+ :param onnx_session: onnx model session
    :type onnx_session: class InferenceSession
    :param img_data: pre-processed numpy image
    :type img_data: ndarray with shape 1xCxHxW
    :return: scores with shapes (1, No. of classes in training dataset)
- :rtype: ndarray
+ :rtype: numpy array
"""+ sess_input = onnx_session.get_inputs() sess_output = onnx_session.get_outputs() print(f"No. of inputs : {len(sess_input)}, No. of outputs : {len(sess_output)}")
def get_predictions_from_ONNX(onnx_session,img_data):
    output_names = [ output.name for output in sess_output]
    scores = onnx_session.run(output_names=output_names,\
                              input_feed={sess_input[0].name: img_data})
+
    return scores[0]

scores = get_predictions_from_ONNX(session, img_data)
scores = get_predictions_from_ONNX(session, img_data)
```python
def get_predictions_from_ONNX(onnx_session,img_data):
- """perform predictions with ONNX Runtime
+ """Perform predictions with ONNX runtime
- :param onnx_session: onnx inference session
+ :param onnx_session: onnx model session
    :type onnx_session: class InferenceSession
    :param img_data: pre-processed numpy image
    :type img_data: ndarray with shape 1xCxHxW
    :return: scores with shapes (1, No. of classes in training dataset)
- :rtype: ndarray
+ :rtype: numpy array
"""
+
    sess_input = onnx_session.get_inputs()
    sess_output = onnx_session.get_outputs()
    print(f"No. of inputs : {len(sess_input)}, No. of outputs : {len(sess_output)}")
def get_predictions_from_ONNX(onnx_session,img_data):
    output_names = [ output.name for output in sess_output]
    scores = onnx_session.run(output_names=output_names,\
                              input_feed={sess_input[0].name: img_data})
+
    return scores[0]

scores = get_predictions_from_ONNX(session, img_data)
```
-# [Object detection with Faster R-CNN](#tab/object-detect-cnn)
+# [Object detection with Faster R-CNN or RetinaNet](#tab/object-detect-cnn)
```python
-def get_predictions_from_ONNX(onnx_session,img_data):
- """perform predictions with ONNX Runtime
+def get_predictions_from_ONNX(onnx_session, img_data):
+ """perform predictions with ONNX runtime
    :param onnx_session: onnx model session
    :type onnx_session: class InferenceSession
def get_predictions_from_ONNX(onnx_session,img_data):
        (No. of boxes, 4) (No. of boxes,) (No. of boxes,)
    :rtype: tuple
    """
    sess_input = onnx_session.get_inputs()
    sess_output = onnx_session.get_outputs()
    # predict with ONNX Runtime
- output_names = [ output.name for output in sess_output]
- boxes, labels, scores = onnx_session.run(output_names=output_names,\
+ output_names = [output.name for output in sess_output]
+ predictions = onnx_session.run(output_names=output_names,\
input_feed={sess_input[0].name: img_data})
- return boxes, labels, scores
-boxes, labels, scores = get_predictions_from_ONNX(session, img_data)
+ return output_names, predictions
+
+output_names, predictions = get_predictions_from_ONNX(session, img_data)
``` # [Object detection with YOLO](#tab/object-detect-yolo)
result = get_predictions_from_ONNX(session, img_data)
The instance segmentation model predicts boxes, labels, scores, and masks. ONNX outputs a predicted mask per instance, along with corresponding bounding boxes and class confidence score. You might need to convert the binary mask to a polygon.

```python
-def get_predictions_from_ONNX(onnx_session,img_data):
- """perform predictions with ONNX Runtime
+
+def get_predictions_from_ONNX(onnx_session, img_data):
+ """Perform predictions with ONNX runtime
:param onnx_session: onnx model session :type onnx_session: class InferenceSession
def get_predictions_from_ONNX(onnx_session,img_data):
    sess_output = onnx_session.get_outputs()
    # predict with ONNX Runtime
    output_names = [ output.name for output in sess_output]
- boxes, labels, scores, masks = onnx_session.run(output_names=output_names,\
+ predictions = onnx_session.run(output_names=output_names,\
input_feed={sess_input[0].name: img_data})
- return boxes, labels, scores, masks
-
+ return output_names, predictions
-boxes, labels, scores, masks = get_predictions_from_ONNX(session, img_data)
+output_names, predictions = get_predictions_from_ONNX(session, img_data)
```
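As a minimal, self-contained illustration of working with a predicted mask: threshold the soft mask to a binary mask and derive a bounding box from it. The small array below is synthetic (real masks have shape `(1, HEIGHT, WIDTH)` per instance), and the polygon conversion itself, typically done with a library such as OpenCV, is not shown.

```python
import numpy as np

# Synthetic soft mask (probabilities); illustrative values only.
soft_mask = np.array([
    [0.1, 0.2, 0.1,  0.1, 0.0, 0.0],
    [0.1, 0.9, 0.8,  0.2, 0.0, 0.0],
    [0.1, 0.7, 0.95, 0.6, 0.1, 0.0],
    [0.0, 0.1, 0.2,  0.1, 0.0, 0.0],
])

# Threshold to a binary mask (same 0.5 cutoff used later when visualizing).
binary_mask = soft_mask > 0.5

# Derive a bounding box (xmin, ymin, xmax, ymax) from the mask pixels.
ys, xs = np.where(binary_mask)
box = (xs.min(), ys.min(), xs.max(), ys.max())
print(box)
```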
Apply `softmax()` over predicted values to get classification confidence scores
### Without PyTorch

```python
def softmax(x):
- x = x.reshape(-1)
- e_x = np.exp(x - np.max(x))
- return e_x / e_x.sum(axis=0)
-conf_scores = softmax(scores).tolist()
-class_idx = np.argmax(conf_scores)
-print("predicted class:",(class_idx, classes[class_idx]))
+ e_x = np.exp(x - np.max(x, axis=1, keepdims=True))
+ return e_x / np.sum(e_x, axis=1, keepdims=True)
+conf_scores = softmax(scores)
+class_preds = np.argmax(conf_scores, axis=1)
+print("predicted classes:", ([(class_idx, classes[class_idx]) for class_idx in class_preds]))
```

### With PyTorch

```python
-conf_scores = torch.nn.functional.softmax(torch.from_numpy(scores), dim=1).tolist()[0]
-class_idx = np.argmax(conf_scores)
-print("predicted class:", (class_idx, classes[class_idx]))
+conf_scores = torch.nn.functional.softmax(torch.from_numpy(scores), dim=1)
+class_preds = torch.argmax(conf_scores, dim=1)
+print("predicted classes:", ([(class_idx.item(), classes[class_idx]) for class_idx in class_preds]))
```

# [Multi-label image classification](#tab/multi-label)
This step differs from multi-class classification. You need to apply `sigmoid` t
### Without PyTorch

```python
-import math
def sigmoid(x):
- return 1 / (1 + math.exp(-x))
+ return 1 / (1 + np.exp(-x))
# we apply a threshold of 0.5 on confidence scores
-cutoff_threshold = .5
-for class_idx, prob in enumerate([sigmoid(logit) for logit in scores[0]]):
- if prob > cutoff_threshold:
- print("predicted class:", (class_idx, classes[class_idx]))
+score_threshold = 0.5
+conf_scores = sigmoid(scores)
+image_wise_preds = np.where(conf_scores > score_threshold)
+for image_idx, class_idx in zip(image_wise_preds[0], image_wise_preds[1]):
+ print('image: {}, class_index: {}, class_name: {}'.format(image_files[image_idx], class_idx, classes[class_idx]))
```

### With PyTorch

```python
# we apply a threshold of 0.5 on confidence scores
-cutoff_threshold = .5
-conf_scores = torch.sigmoid(torch.from_numpy(scores[0])).tolist()
-for class_idx, prob in enumerate(conf_scores):
- if prob > cutoff_threshold:
- print("predicted class:", (class_idx, classes[class_idx]))
+score_threshold = 0.5
+conf_scores = torch.sigmoid(torch.from_numpy(scores))
+image_wise_preds = torch.where(conf_scores > score_threshold)
+for image_idx, class_idx in zip(image_wise_preds[0], image_wise_preds[1]):
+ print('image: {}, class_index: {}, class_name: {}'.format(image_files[image_idx], class_idx, classes[class_idx]))
```

For multi-class and multi-label classification, you can follow the same steps mentioned earlier for all the supported algorithms in AutoML.
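The multi-label thresholding step can be illustrated end to end with synthetic logits (the values below are made up, for a batch of 2 images and 3 classes):

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Synthetic raw logits: 2 images x 3 classes (illustrative values).
scores = np.array([[ 2.0, -3.0,  0.5],
                   [-1.0,  4.0, -2.0]])

conf_scores = sigmoid(scores)
# np.where on the boolean mask yields (image indices, class indices)
image_wise_preds = np.where(conf_scores > 0.5)
pairs = list(zip(image_wise_preds[0].tolist(), image_wise_preds[1].tolist()))
print(pairs)
```

Each `(image_idx, class_idx)` pair marks a class whose sigmoid confidence exceeds the 0.5 cutoff for that image.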
-# [Object detection with Faster R-CNN](#tab/object-detect-cnn)
+# [Object detection with Faster R-CNN or RetinaNet](#tab/object-detect-cnn)
+
+For object detection, predictions are automatically on the scale of `height_onnx`, `width_onnx`. To transform the predicted box coordinates to the original dimensions, you can implement the following calculations.
+
+- Xmin * original_width/width_onnx
+- Ymin * original_height/height_onnx
+- Xmax * original_width/width_onnx
+- Ymax * original_height/height_onnx
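These four calculations can be sketched as a small helper; the function name and example numbers are illustrative, not part of the AutoML API.

```python
# Scale box coordinates predicted at the ONNX input resolution back to
# the original image dimensions.
def scale_box(box, original_width, original_height, width_onnx, height_onnx):
    """box is (xmin, ymin, xmax, ymax) at ONNX input resolution."""
    xmin, ymin, xmax, ymax = box
    return (xmin * original_width / width_onnx,
            ymin * original_height / height_onnx,
            xmax * original_width / width_onnx,
            ymax * original_height / height_onnx)

# e.g. a box predicted on an 800x600 ONNX input, original image 1600x1200
print(scale_box((400, 300, 800, 600), 1600, 1200, 800, 600))
```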
+
+Another option is to use the following code to scale the box dimensions to the range of [0, 1]. The box coordinates can then be multiplied with the original image's height and width (as described in the [visualize predictions](#visualize-predictions) section) to get the boxes in the original image dimensions.
```python
def _get_box_dims(image_shape, box):
def _get_prediction(boxes, labels, scores, image_shape, classes):
return bounding_boxes
-bounding_boxes = _get_prediction(boxes, labels, scores, (height_onnx,width_onnx), classes)
-print(json.dumps(bounding_boxes, indent=1))
-
-# Filter the results with a threshold.
-# Replace the threshold for your test scenario.
+# Filter the results with threshold.
+# Please replace the threshold for your test scenario.
score_threshold = 0.8
-filtered_bounding_boxes = []
-for box in bounding_boxes:
- if box['score'] >= score_threshold:
- filtered_bounding_boxes.append(box)
+filtered_boxes_batch = []
+for batch_sample in range(0, batch_size*3, 3):
+ # in case of retinanet change the order of boxes, labels, scores to boxes, scores, labels
+ # confirm the same from order of boxes, labels, scores output_names
+ boxes, labels, scores = predictions[batch_sample], predictions[batch_sample + 1], predictions[batch_sample + 2]
+ bounding_boxes = _get_prediction(boxes, labels, scores, (height_onnx, width_onnx), classes)
+ filtered_bounding_boxes = [box for box in bounding_boxes if box['score'] >= score_threshold]
+ filtered_boxes_batch.append(filtered_bounding_boxes)
```
-### Visualize boxes
+# [Object detection with YOLO](#tab/object-detect-yolo)
+
+The following code creates boxes, labels, and scores. Use these bounding box details to perform the same postprocessing steps as you did for the Faster R-CNN model.
```python
+from yolo_onnx_preprocessing_utils import non_max_suppression, _convert_to_rcnn_output
+
+result_final = non_max_suppression(
+ torch.from_numpy(result),
+ conf_thres=0.1,
+ iou_thres=0.5)
+
+def _get_box_dims(image_shape, box):
+ box_keys = ['topX', 'topY', 'bottomX', 'bottomY']
+ height, width = image_shape[0], image_shape[1]
+
+ box_dims = dict(zip(box_keys, [coordinate.item() for coordinate in box]))
+
+ box_dims['topX'] = box_dims['topX'] * 1.0 / width
+ box_dims['bottomX'] = box_dims['bottomX'] * 1.0 / width
+ box_dims['topY'] = box_dims['topY'] * 1.0 / height
+ box_dims['bottomY'] = box_dims['bottomY'] * 1.0 / height
+
+ return box_dims
+
+def _get_prediction(label, image_shape, classes):
+
+ boxes = np.array(label["boxes"])
+ labels = np.array(label["labels"])
+ labels = [label[0] for label in labels]
+ scores = np.array(label["scores"])
+ scores = [score[0] for score in scores]
+
+ bounding_boxes = []
+ for box, label_index, score in zip(boxes, labels, scores):
+ box_dims = _get_box_dims(image_shape, box)
+
+ box_record = {'box': box_dims,
+ 'label': classes[label_index],
+ 'score': score.item()}
+
+ bounding_boxes.append(box_record)
+
+ return bounding_boxes
+
+bounding_boxes_batch = []
+for result_i, pad in zip(result_final, pad_list):
+ label, image_shape = _convert_to_rcnn_output(result_i, height_onnx, width_onnx, pad)
+ bounding_boxes_batch.append(_get_prediction(label, image_shape, classes))
+print(json.dumps(bounding_boxes_batch, indent=1))
+```
+
+# [Instance segmentation](#tab/instance-segmentation)
+
+You can either use the steps mentioned for Faster R-CNN (in the case of Mask R-CNN, each sample has four elements: boxes, labels, scores, and masks) or refer to the [visualize predictions](#visualize-predictions) section for instance segmentation.
+++
+<a id='visualize_section'></a>
+## Visualize predictions
++
+# [Multi-class image classification](#tab/multi-class)
+Visualize an input image with labels
+
+```python
+import matplotlib.image as mpimg
import matplotlib.pyplot as plt
+%matplotlib inline
+
+sample_image_index = 0 # change this for an image of interest from image_files list
+IMAGE_SIZE = (18, 12)
+plt.figure(figsize=IMAGE_SIZE)
+img_np = mpimg.imread(image_files[sample_image_index])
+
+img = Image.fromarray(img_np.astype('uint8'), 'RGB')
+x, y = img.size
+
+fig,ax = plt.subplots(1, figsize=(15, 15))
+# Display the image
+ax.imshow(img_np)
+
+label = class_preds[sample_image_index]
+if torch.is_tensor(label):
+ label = label.item()
+
+conf_score = conf_scores[sample_image_index]
+if torch.is_tensor(conf_score):
+ conf_score = np.max(conf_score.tolist())
+else:
+ conf_score = np.max(conf_score)
+
+display_text = '{} ({})'.format(label, round(conf_score, 3))
+print(display_text)
+
+color = 'red'
+plt.text(30, 30, display_text, color=color, fontsize=30)
+
+plt.show()
+```
+
+# [Multi-label image classification](#tab/multi-label)
+
+Visualize an input image with labels
+
+```python
+import matplotlib.image as mpimg
+import matplotlib.pyplot as plt
+%matplotlib inline
+
+sample_image_index = 0 # change this for an image of interest from image_files list
+IMAGE_SIZE = (18, 12)
+plt.figure(figsize=IMAGE_SIZE)
+img_np = mpimg.imread(image_files[sample_image_index])
+img = Image.fromarray(img_np.astype('uint8'), 'RGB')
+x, y = img.size
+
+fig,ax = plt.subplots(1, figsize=(15, 15))
+# Display the image
+ax.imshow(img_np)
+# we apply a threshold of 0.5 on confidence scores
+score_threshold = 0.5
+label_offset_x = 30
+label_offset_y = 30
+if torch.is_tensor(conf_scores):
+ sample_image_scores = conf_scores[sample_image_index].tolist()
+else:
+ sample_image_scores = conf_scores[sample_image_index]
+
+for index, score in enumerate(sample_image_scores):
+ if score > score_threshold:
+ label = classes[index]
+ display_text = '{} ({})'.format(label, round(score, 3))
+ print(display_text)
+
+ color = 'red'
+ plt.text(label_offset_x, label_offset_y, display_text, color=color, fontsize=30)
+ label_offset_y += 30
+
+plt.show()
+```
+
+# [Object detection with Faster R-CNN or RetinaNet](#tab/object-detect-cnn)
+
+Visualize an input image with boxes and labels
+
+```python
import matplotlib.image as mpimg
import matplotlib.patches as patches
-from PIL import Image
+import matplotlib.pyplot as plt
%matplotlib inline
-IMAGE_SIZE = (18,12)
+img_np = mpimg.imread(image_files[1]) # replace with desired image index
+image_boxes = filtered_boxes_batch[1] # replace with desired image index
+
+IMAGE_SIZE = (18, 12)
plt.figure(figsize=IMAGE_SIZE)
-img_np = mpimg.imread(test_image_path)
-img = Image.fromarray(img_np.astype('uint8'),'RGB')
+img = Image.fromarray(img_np.astype('uint8'), 'RGB')
x, y = img.size
print(img.size)
fig,ax = plt.subplots(1)
# Display the image
ax.imshow(img_np)
-# Draw a box and label for each detection
-for detect in filtered_bounding_boxes:
+# Draw box and label for each detection
+for detect in image_boxes:
    label = detect['label']
    box = detect['box']
- ymin, xmin, ymax, xmax = box['topY'],box['topX'], box['bottomY'],box['bottomX']
+ ymin, xmin, ymax, xmax = box['topY'], box['topX'], box['bottomY'], box['bottomX']
    topleft_x, topleft_y = x * xmin, y * ymin
    width, height = x * (xmax - xmin), y * (ymax - ymin)
    print('{}: {}, {}, {}, {}'.format(detect['label'], topleft_x, topleft_y, width, height))
    rect = patches.Rectangle((topleft_x, topleft_y), width, height,
- linewidth=1, edgecolor='green',facecolor='none')
+ linewidth=1, edgecolor='green', facecolor='none')
    ax.add_patch(rect)
    color = 'green'
plt.show()
# [Object detection with YOLO](#tab/object-detect-yolo)
-The following code creates boxes, labels, and scores. Use these bounding box details to perform the same postprocessing steps as you did for the Faster R-CNN model.
+Visualize an input image with boxes and labels
```python
-from yolo_onnx_preprocessing_utils import non_max_suppression, _convert_to_rcnn_output
-import torch
-
-result = non_max_suppression(
- torch.from_numpy(result),
- conf_thres=.1,
- iou_thres=.5)
-label, image_shape = _convert_to_rcnn_output(result[0], height_onnx, width_onnx, pad)
-boxes = np.array(label["boxes"])
-labels = np.array(label["labels"])
-labels = [label[0] for label in labels]
-scores = np.array(label["scores"])
-scores = [score[0] for score in scores]
-boxes, labels, scores
-```
-
-### Visualize boxes
-
-```python
-
-import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import matplotlib.patches as patches
-from PIL import Image
+import matplotlib.pyplot as plt
%matplotlib inline
-IMAGE_SIZE = (18,12)
+img_np = mpimg.imread(image_files[1]) # replace with desired image index
+image_boxes = bounding_boxes_batch[1] # replace with desired image index
+
+IMAGE_SIZE = (18, 12)
plt.figure(figsize=IMAGE_SIZE)
-img_np = mpimg.imread(test_image_path)
-img = Image.fromarray(img_np.astype('uint8'),'RGB')
+img = Image.fromarray(img_np.astype('uint8'), 'RGB')
x, y = img.size
print(img.size)
fig,ax = plt.subplots(1)
# Display the image
ax.imshow(img_np)
-# Draw a box and label for each detection
-for detect in filtered_bounding_boxes:
+# Draw box and label for each detection
+for detect in image_boxes:
    label = detect['label']
    box = detect['box']
- ymin, xmin, ymax, xmax = box['topY'],box['topX'], box['bottomY'],box['bottomX']
+ ymin, xmin, ymax, xmax = box['topY'], box['topX'], box['bottomY'], box['bottomX']
    topleft_x, topleft_y = x * xmin, y * ymin
    width, height = x * (xmax - xmin), y * (ymax - ymin)
    print('{}: {}, {}, {}, {}'.format(detect['label'], topleft_x, topleft_y, width, height))
    rect = patches.Rectangle((topleft_x, topleft_y), width, height,
- linewidth=1, edgecolor='green',facecolor='none')
+ linewidth=1, edgecolor='green', facecolor='none')
    ax.add_patch(rect)
    color = 'green'
for detect in filtered_bounding_boxes:
plt.show()
```

# [Instance segmentation](#tab/instance-segmentation)
-#### Visualize the masks and bounding boxes
+Visualize a sample input image with masks and labels
```python
-import numpy as np
-import matplotlib.pyplot as plt
import matplotlib.patches as patches
-import cv2
+import matplotlib.pyplot as plt
%matplotlib inline

def display_detections(image, boxes, labels, scores, masks, resize_height,
- resize_width, classes, score_threshold=0.3):
- """visualize boxes and masks
+ resize_width, classes, score_threshold):
+ """Visualize boxes and masks
    :param image: raw image
    :type image: PIL image
def display_detections(image, boxes, labels, scores, masks, resize_height,
    :type scores: ndarray
    :param masks: masks with shape (No. of instances, 1, HEIGHT, WIDTH)
    :type masks: ndarray
- :param resize_height: resize height of an input image
+ :param resize_height: expected height of an input image in onnx model
:type resize_height: Int
- :param resize_width: resize width of an input image
+ :param resize_width: expected width of an input image in onnx model
    :type resize_width: Int
    :param classes: classes with shape (No. of classes)
    :type classes: list
    :param score_threshold: threshold on scores in the range of 0-1
    :type score_threshold: float
- :return: None
-
+ :return: None
""" _, ax = plt.subplots(1, figsize=(12,9))
def display_detections(image, boxes, labels, scores, masks, resize_height,
box[2]*original_width/resize_width, box[3]*original_height/resize_height]
- mask = cv2.resize(mask, (image.shape[1],image.shape[0]), 0, 0, interpolation = cv2.INTER_NEAREST)
+ mask = cv2.resize(mask, (image.shape[1], image.shape[0]), 0, 0, interpolation = cv2.INTER_NEAREST)
# mask is a matrix with values in the range of [0,1]
- # higher values indicate the presence of an object and vice versa
- # select the threshold or cutoff value to get objects present
- mask = mask > 0.5
+ # higher values indicate presence of object and vice versa
+ # select threshold or cut-off value to get objects present
+ mask = mask > score_threshold
    image_masked = image.copy()
    image_masked[mask] = (0, 255, 255)
- alpha = .5 # alpha blending with range 0 to 1
- cv2.addWeighted(image_masked, alpha, image, 1 - alpha,0, image)
+ alpha = 0.5 # alpha blending with range 0 to 1
+ cv2.addWeighted(image_masked, alpha, image, 1 - alpha,0, image)
        rect = patches.Rectangle((box[0], box[1]), box[2] - box[0], box[3] - box[1],\
                                 linewidth=1, edgecolor='b', facecolor='none')
        ax.annotate(classes[label] + ':' + str(np.round(score, 2)), (box[0], box[1]),\
                    color='w', fontsize=12)
        ax.add_patch(rect)
    ax.imshow(image)
    plt.show()
+score_threshold = 0.5
+img = Image.open(image_files[1]) # replace with desired image index
+image_boxes = filtered_boxes_batch[1] # replace with desired image index
+boxes, labels, scores, masks = predictions[4:8] # replace with desired image index
display_detections(img, boxes.copy(), labels, scores, masks.copy(),
- resize_height, resize_width, classes, score_threshold=.5)
+ height_onnx, width_onnx, classes, score_threshold)
```

## Next steps

* [Learn more about computer vision tasks in AutoML](how-to-auto-train-image-models.md)
* [Troubleshoot AutoML experiments](how-to-troubleshoot-auto-ml.md)
machine-learning How To Prepare Datasets For Automl Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-prepare-datasets-for-automl-images.md
description: Image data preparation for Azure Machine Learning automated ML to t
-+ Last updated 10/13/2021
print("Training dataset name: " + training_dataset.name)
## Next steps

* [Train computer vision models with automated machine learning](how-to-auto-train-image-models.md).
-* [Train a small object detection model with automated machine learning](how-to-use-automl-small-object-detect.md).
+* [Train a small object detection model with automated machine learning](how-to-use-automl-small-object-detect.md).
+* [Tutorial: Train an object detection model (preview) with AutoML and Python](tutorial-auto-train-image-models.md).
machine-learning How To Run Jupyter Notebooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-run-jupyter-notebooks.md
Previously updated : 12/16/2021
Last updated : 02/17/2022
#Customer intent: As a data scientist, I want to run Jupyter notebooks in my workspace in Azure Machine Learning studio.
For information on how to create and manage files, including notebooks, see [Cre
* An Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin. * A Machine Learning workspace. See [Create an Azure Machine Learning workspace](how-to-manage-workspace.md).
+* Your user identity must have access to your workspace's default storage account. Whether you can read, edit, or create notebooks depends on your [access level](how-to-assign-roles.md) to your workspace. For example, a Contributor can edit the notebook, while a Reader could only view it.
## Edit a notebook
machine-learning How To Secure Training Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-training-vnet.md
In this article you learn how to secure the following training compute resources
+ To deploy resources into a virtual network or subnet, your user account must have permissions to the following actions in Azure role-based access control (Azure RBAC):
- - "Microsoft.Network/virtualNetworks/*/read" on the virtual network resource. This permission is not needed for Azure Resource Manager (ARM) template deployments.
+ - "Microsoft.Network/virtualNetworks/*/read" on the virtual network resource. This permission isn't needed for Azure Resource Manager (ARM) template deployments.
- "Microsoft.Network/virtualNetworks/subnet/join/action" on the subnet resource. For more information on Azure RBAC with networking, see the [Networking built-in roles](../role-based-access-control/built-in-roles.md#networking) ### Azure Machine Learning compute cluster/instance
-* Compute clusters and instances create the following resources. If they are unable to create these resources (for example, if there is a resource lock on the resource group) then creation, scale out, or scale in, may fail.
+* Compute clusters and instances create the following resources. If they're unable to create these resources (for example, if there's a resource lock on the resource group) then creation, scale out, or scale in, may fail.
* IP address. * Network Security Group (NSG).
In this article you learn how to secure the following training compute resources
> [!TIP] > If your compute cluster or instance does not use a public IP address (a preview feature), these inbound NSG rules are not required.
- * For compute cluster or instance, it is now possible to remove the public IP address (a preview feature). If you have Azure Policy assignments prohibiting Public IP creation, then deployment of the compute cluster or instance will succeed.
+ * For compute cluster or instance, it's now possible to remove the public IP address (a preview feature). If you have Azure Policy assignments prohibiting Public IP creation, then deployment of the compute cluster or instance will succeed.
* One load balancer For compute clusters, these resources are deleted every time the cluster scales down to 0 nodes and created when scaling up.
- For a compute instance, these resources are kept until the instance is deleted. Stopping the instance does not remove the resources.
+ For a compute instance, these resources are kept until the instance is deleted. Stopping the instance doesn't remove the resources.
> [!IMPORTANT] > These resources are limited by the subscription's [resource quotas](../azure-resource-manager/management/azure-subscription-service-limits.md). If the virtual network resource group is locked then deletion of compute cluster/instance will fail. Load balancer cannot be deleted until the compute cluster/instance is deleted. Also please ensure there is no Azure Policy assignment which prohibits creation of network security groups.
+* If you create a compute instance and plan to use the no public IP address configuration, your Azure Machine Learning workspace's managed identity must be assigned the __Reader__ role for the virtual network that contains the workspace. For more information on assigning roles, see [Steps to assign an Azure role](/azure/role-based-access-control/role-assignments-steps).
* If the Azure Storage Accounts for the workspace are also in the virtual network, use the following guidance on subnet limitations:
  * If you plan to use Azure Machine Learning __studio__ to visualize data or use designer, the storage account must be __in the same subnet as the compute instance or cluster__.
In this article you learn how to secure the following training compute resources
* When your workspace uses a private endpoint, the compute instance can only be accessed from inside the virtual network. If you use a custom DNS or hosts file, add an entry for `<instance-name>.<region>.instances.azureml.ms`. Map this entry to the private IP address of the workspace private endpoint. For more information, see the [custom DNS](./how-to-custom-dns.md) article. * Virtual network service endpoint policies don't work for compute cluster/instance system storage accounts. * If storage and compute instance are in different regions, you may see intermittent timeouts.
-* If the Azure Container Registry for your workspace uses a private endpoint to connect to the virtual network, you cannot use a managed identity for the compute instance. To use a managed identity with the compute instance, do not put the container registry in the VNet.
+* If the Azure Container Registry for your workspace uses a private endpoint to connect to the virtual network, you can't use a managed identity for the compute instance. To use a managed identity with the compute instance, don't put the container registry in the VNet.
* If you want to use Jupyter Notebooks on a compute instance: * Don't disable websocket communication. Make sure your network allows websocket communication to `*.instances.azureml.net` and `*.instances.azureml.ms`.
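For the custom DNS case described above, a hosts-file entry might look like the following sketch; the instance name, region, and private IP address are placeholders, not values from this article:

```
# Map the compute instance FQDN to the workspace private endpoint's private IP
10.1.0.5    myinstance.westus2.instances.azureml.ms
```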
In this article you learn how to secure the following training compute resources
### Azure Databricks * In addition to the __databricks-private__ and __databricks-public__ subnets used by Azure Databricks, the __default__ subnet created for the virtual network is also required.
-* Azure Databricks does not use a private endpoint to communicate with the virtual network.
+* Azure Databricks doesn't use a private endpoint to communicate with the virtual network.
For more information on using Azure Databricks in a virtual network, see [Deploy Azure Databricks in your Azure Virtual Network](/azure/databricks/administration-guide/cloud-configurations/azure/vnet-inject).
When you enable **No public IP**, your compute cluster doesn't use a public IP f
A compute cluster with **No public IP** enabled has **no inbound communication requirements** from public internet. Specifically, neither inbound NSG rule (`BatchNodeManagement`, `AzureMachineLearning`) is required. You still need to allow inbound from source of **VirtualNetwork** and any port source, to destination of **VirtualNetwork**, and destination port of **29876, 29877**.
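As an illustrative sketch, the inbound allowance described above could be expressed as an NSG security rule in an ARM template. The rule name and priority here are assumptions, not values from this article:

```json
{
  "name": "AllowVnetBatchInbound",
  "properties": {
    "priority": 100,
    "direction": "Inbound",
    "access": "Allow",
    "protocol": "Tcp",
    "sourceAddressPrefix": "VirtualNetwork",
    "sourcePortRange": "*",
    "destinationAddressPrefix": "VirtualNetwork",
    "destinationPortRanges": ["29876", "29877"]
  }
}
```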
+> [!IMPORTANT]
+> When creating a compute instance with no public IP, the managed identity for your workspace must be assigned the __Owner__ role on the virtual network. For more information on assigning roles, see [Steps to assign an Azure role](/azure/role-based-access-control/role-assignments-steps).
+ **No public IP** clusters are dependent on [Azure Private Link](how-to-configure-private-link.md) for Azure Machine Learning workspace.
-A compute cluster with **No public IP** also requires you to disable private endpoint network policies and private link service network policies. These requirements come from Azure private link service and private endpoints and are not Azure Machine Learning specific. Follow instruction from [Disable network policies for Private Link service](../private-link/disable-private-link-service-network-policy.md) to set the parameters `disable-private-endpoint-network-policies` and `disable-private-link-service-network-policies` on the virtual network subnet.
+A compute cluster with **No public IP** also requires you to disable private endpoint network policies and private link service network policies. These requirements come from Azure private link service and private endpoints and aren't Azure Machine Learning specific. Follow the instructions in [Disable network policies for Private Link service](../private-link/disable-private-link-service-network-policy.md) to set the parameters `disable-private-endpoint-network-policies` and `disable-private-link-service-network-policies` on the virtual network subnet.
For **outbound connections** to work, you need to set up an egress firewall such as Azure firewall with user defined routes. For instance, you can use a firewall set up with [inbound/outbound configuration](how-to-access-azureml-behind-firewall.md) and route traffic there by defining a route table on the subnet in which the compute cluster is deployed. The route table entry can set up the next hop of the private IP address of the firewall with the address prefix of 0.0.0.0/0.
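As a sketch, the user-defined route described above might look like the following in an ARM template; the route table name and firewall private IP (10.0.2.4) are placeholder assumptions:

```json
{
  "type": "Microsoft.Network/routeTables",
  "apiVersion": "2021-05-01",
  "name": "aml-egress-routes",
  "location": "[resourceGroup().location]",
  "properties": {
    "routes": [
      {
        "name": "default-to-firewall",
        "properties": {
          "addressPrefix": "0.0.0.0/0",
          "nextHopType": "VirtualAppliance",
          "nextHopIpAddress": "10.0.2.4"
        }
      }
    ]
  }
}
```

Associate the route table with the subnet that hosts the compute cluster so that all egress traffic is routed through the firewall.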
You can also create no public IP compute cluster through an ARM template. In the
* If job execution fails with connection issues to ACR or Azure Storage, verify that customer has added ACR and Azure Storage service endpoint/private endpoints to subnet and ACR/Azure Storage allows the access from the subnet.
-* To ensure that you have created a no public IP cluster, in Studio when looking at cluster details you will see **No Public IP** property is set to **true** under resource properties.
+* To verify that you've created a no public IP cluster, check the cluster details in Studio: the **No Public IP** property is set to **true** under resource properties.
## Compute instance
For **outbound connections** to work, you need to set up an egress firewall such
A compute instance with **No public IP** enabled has **no inbound communication requirements** from public internet. Specifically, neither inbound NSG rule (`BatchNodeManagement`, `AzureMachineLearning`) is required. You still need to allow inbound from source of **VirtualNetwork**, any port source, destination of **VirtualNetwork**, and destination port of **29876, 29877, 44224**.
-A compute instance with **No public IP** also requires you to disable private endpoint network policies and private link service network policies. These requirements come from Azure private link service and private endpoints and are not Azure Machine Learning specific. Follow instruction from [Disable network policies for Private Link service source IP](../private-link/disable-private-link-service-network-policy.md) to set the parameters `disable-private-endpoint-network-policies` and `disable-private-link-service-network-policies` on the virtual network subnet.
+A compute instance with **No public IP** also requires you to disable private endpoint network policies and private link service network policies. These requirements come from Azure private link service and private endpoints and aren't Azure Machine Learning specific. Follow the instructions in [Disable network policies for Private Link service source IP](../private-link/disable-private-link-service-network-policy.md) to set the parameters `disable-private-endpoint-network-policies` and `disable-private-link-service-network-policies` on the virtual network subnet.
To create a no public IP address compute instance (a preview feature) in studio, set **No public IP** checkbox in the virtual network section. You can also create no public IP compute instance through an ARM template. In the ARM template set enableNodePublicIP parameter to false.
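A minimal ARM template fragment for the setting mentioned above might look like the following; the resource names, apiVersion, and VM size are illustrative assumptions (note that in the ARM schema the property is typically cased `enableNodePublicIp`):

```json
{
  "type": "Microsoft.MachineLearningServices/workspaces/computes",
  "apiVersion": "2021-07-01",
  "name": "[concat(parameters('workspaceName'), '/', parameters('computeName'))]",
  "location": "[parameters('location')]",
  "properties": {
    "computeType": "ComputeInstance",
    "properties": {
      "vmSize": "STANDARD_DS3_V2",
      "enableNodePublicIp": false,
      "subnet": {
        "id": "[parameters('subnetId')]"
      }
    }
  }
}
```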
machine-learning How To Troubleshoot Auto Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-auto-ml.md
For automated ML runs, you need to ensure the file datastore that connects to yo
Error message: `Could not create a connection to the AzureFileService due to missing credentials. Either an Account Key or SAS token needs to be linked the default workspace blob store.`
+## Data schema
+
+When you try to create a new automated ML experiment via the **Edit and submit** button in the Azure Machine Learning studio, the data schema for the new experiment must match the schema of the data that was used in the original experiment. Otherwise, you'll see an error message similar to the following. Learn more about how to [edit and submit experiments from the studio UI](how-to-use-automated-ml-for-ml-models.md#edit-and-submit-runs-preview).
+
+Error message for non-vision experiments: `Schema mismatch error: (an) additional column(s): "Column1: String, Column2: String, Column3: String", (a) missing column(s)`
+
+Error message for vision datasets: `Schema mismatch error: (an) additional column(s): "dataType: String, dataSubtype: String, dateTime: Date, category: String, subcategory: String, status: String, address: String, latitude: Decimal, longitude: Decimal, source: String, extendedProperties: String", (a) missing column(s): "image_url: Stream, image_details: DataRow, label: List" Vision dataset error(s): Vision dataset should have a target column with name 'label'. Vision dataset should have labelingProjectType tag with value as 'Object Identification (Bounding Box)'.`
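As a hedged sketch of the check these messages describe, the mismatch between the original experiment's schema and a new dataset can be computed with a few lines of Python; `schema_mismatch` and the column lists below are illustrative, not part of the AutoML API:

```python
def schema_mismatch(original_cols, new_cols):
    """Return (extra, missing) column names relative to the original schema."""
    extra = sorted(set(new_cols) - set(original_cols))
    missing = sorted(set(original_cols) - set(new_cols))
    return extra, missing

# Columns from a hypothetical original vision experiment vs. a new dataset
extra, missing = schema_mismatch(
    ["image_url", "image_details", "label"],
    ["image_url", "label", "category"],
)
# extra == ["category"], missing == ["image_details"]
```

If either list is non-empty, the new experiment's data doesn't match the original schema and submission fails with a schema mismatch error.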
+ ## Databricks See [How to configure an automated ML experiment with Databricks](how-to-configure-databricks-automl-environment.md#troubleshooting).
machine-learning How To Use Automated Ml For Ml Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-automated-ml-for-ml-models.md
To get explanations for a particular model,
![Model explanation dashboard](media/how-to-use-automated-ml-for-ml-models/model-explanation-dashboard.png)
+## Edit and submit runs (preview)
+
+>[!IMPORTANT]
+> The ability to copy, edit and submit a new experiment based on an existing experiment is a preview feature. This capability is an [experimental](/python/api/overview/azure/ml/#stable-vs-experimental) preview feature, and may change at any time.
+
+In scenarios where you would like to create a new experiment based on the settings of an existing experiment, automated ML provides the option to do so with the **Edit and submit** button in the studio UI.
+
+This functionality is limited to experiments initiated from the studio UI and requires the data schema for the new experiment to match that of the original experiment.
+
+The **Edit and submit** button opens the **Create a new Automated ML run** wizard with the data, compute and experiment settings pre-populated. You can go through each form and edit selections as needed for your new experiment.
+ ## Deploy your model Once you have the best model at hand, it is time to deploy it as a web service to predict on new data.
machine-learning How To Use Automl Small Object Detect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-automl-small-object-detect.md
See the [object detection sample notebook](https://github.com/Azure/azureml-exam
## Next steps * Learn more about [how and where to deploy a model](how-to-deploy-and-where.md).
-* For definitions and examples of the performance charts and metrics provided for each run, see [Evaluate automated machine learning experiment results](how-to-understand-automated-ml.md) .
+* For definitions and examples of the performance charts and metrics provided for each run, see [Evaluate automated machine learning experiment results](how-to-understand-automated-ml.md).
+* [Tutorial: Train an object detection model (preview) with AutoML and Python](tutorial-auto-train-image-models.md).
+* See [what hyperparameters are available for computer vision tasks](reference-automl-images-hyperparameters.md).
+* [Make predictions with ONNX on computer vision models from AutoML](how-to-inference-onnx-automl-image-models.md).
machine-learning Reference Automl Images Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-automl-images-schema.md
In instance segmentation, output consists of multiple boxes with their scaled to
## Next steps
-Learn how to [Prepare data for training computer vision models with automated ML](how-to-prepare-datasets-for-automl-images.md).
+* Learn how to [Prepare data for training computer vision models with automated ML](how-to-prepare-datasets-for-automl-images.md).
+* [Set up computer vision tasks in AutoML](how-to-auto-train-image-models.md)
+* [Tutorial: Train an object detection model (preview) with AutoML and Python](tutorial-auto-train-image-models.md).
machine-learning Tutorial Auto Train Image Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-auto-train-image-models.md
Title: 'Tutorial: AutoML- train object detection model'
-description: Train an object detection model to predict NYC taxi fares with the Azure Machine Learning Python SDK using Azure Machine Learning automated ML.
+description: Train an object detection model to identify if an image contains certain objects with automated ML and the Azure Machine Learning Python SDK.
-+
In this automated machine learning tutorial, you did the following tasks:
* [Learn more about computer vision in automated ML (preview)](concept-automated-ml.md#computer-vision-preview). * [Learn how to set up AutoML to train computer vision models with Python (preview)](how-to-auto-train-image-models.md).
-* [Learn how to onfigure incremental training on computer vision models](how-to-auto-train-image-models.md#incremental-training-optional).
+* [Learn how to configure incremental training on computer vision models](how-to-auto-train-image-models.md#incremental-training-optional).
* See [what hyperparameters are available for computer vision tasks](reference-automl-images-hyperparameters.md). * Review detailed code examples and use cases in the [GitHub notebook repository for automated machine learning samples](https://github.com/Azure/azureml-examples/tree/main/python-sdk/tutorials/automl-with-azureml). Please check the folders with 'image-' prefix for samples specific to building computer vision models.
mariadb Concepts Compatibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-compatibility.md
Azure Database for MariaDB uses the community edition of MariaDB server. Therefo
The goal is to support the three most recent versions of MariaDB drivers, and efforts continue with authors from the open source community to constantly improve the functionality and usability of MariaDB drivers. A list of drivers that have been tested and found to be compatible with Azure Database for MariaDB 10.2 is provided in the following table:
+> [!WARNING]
+> The MySQL 8.0.27 client is incompatible with Azure Database for MariaDB - Single Server. All connections from the MySQL 8.0.27 client, whether created via mysql.exe or MySQL Workbench, will fail. As a workaround, use an earlier version of the client (prior to MySQL 8.0.27).
+
+| **Driver** | **Links** | **Compatible Versions** | **Incompatible Versions** | **Notes** |
+| --- | --- | --- | --- | --- |
+| PHP | https://secure.php.net/downloads.php | 5.5, 5.6, 7.x | 5.3 | For PHP 7.0 connection with SSL MySQLi, add MYSQLI_CLIENT_SSL_DONT_VERIFY_SERVER_CERT in the connection string. <br> ```mysqli_real_connect($conn, $host, $username, $password, $db_name, 3306, NULL, MYSQLI_CLIENT_SSL_DONT_VERIFY_SERVER_CERT);```<br> PDO: set the ```PDO::MYSQL_ATTR_SSL_VERIFY_SERVER_CERT``` option to false. |
mariadb Howto Troubleshoot Common Connection Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-troubleshoot-common-connection-issues.md
Generally, connection issues to Azure Database for MariaDB can be classified as
* Transient errors (short-lived or intermittent) * Persistent or non-transient errors (errors that regularly recur)
+> [!WARNING]
+> The MySQL 8.0.27 client is incompatible with Azure Database for MariaDB - Single Server. All connections from the MySQL 8.0.27 client, whether created via mysql.exe or MySQL Workbench, will fail. As a workaround, use an earlier version of the client (prior to MySQL 8.0.27).
+ ## Troubleshoot transient errors Transient errors occur when maintenance is performed, the system encounters an error with the hardware or software, or you change the vCores or service tier of your server. The Azure Database for MariaDB service has built-in high availability and is designed to mitigate these types of problems automatically. However, your application loses its connection to the server for a short period of time of typically less than 60 seconds at most. Some events can occasionally take longer to mitigate, such as when a large transaction causes a long-running recovery.
media-services Architecture Design Multi Drm System https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/architecture-design-multi-drm-system.md
- Title: A multi-DRM content protection system
-description: This article gives a detailed description of how to design a multi-DRM content protection system with Azure Media Services.
------ Previously updated : 08/31/2020---
-# Design of a multi-DRM content protection system with access control
--
-Designing and building a Digital Rights Management (DRM) subsystem for an over-the-top (OTT) or online streaming solution is a complex task. Operators/online video providers typically outsource this task to specialized DRM service providers. The goal of this document is to present a reference design and a reference implementation of an end-to-end DRM subsystem in an OTT or online streaming solution.
-
-The targeted readers for this document are engineers who work in DRM subsystems of OTT or online streaming/multiscreen solutions or readers who are interested in DRM subsystems. The assumption is that readers are familiar with at least one of the DRM technologies on the market, such as PlayReady, Widevine, FairPlay, or Adobe Access.
-
-In this discussion, by multi-DRM we include the 3 DRMs supported by Azure Media
-
-The benefits of using native multi-DRM for content protection are that it:
-
-* Reduces encryption cost because a single encryption process is used to target different platforms with its native DRMs.
-* Reduces the cost of managing assets because only a single copy of asset is needed in storage.
-* Eliminates DRM client licensing cost because the native DRM client is usually free on its native platform.
-
-### Goals of the article
-
-The goals of this article are to:
-
-* Provide a reference design of a DRM subsystem that uses all 3 DRMs (CENC for DASH, FairPlay for HLS and PlayReady for smooth streaming).
-* Provide a reference implementation on Azure and Azure Media Services platform.
-* Discuss some design and implementation topics.
-
-The following table summarizes native DRM support on different platforms and EME support in different browsers.
-
-| **Client platform** | **Native DRM** | **EME** |
-| | | |
-| **Smart TVs, STBs** | PlayReady, Widevine, and/or other | Embedded browser/EME for PlayReady and/or Widevine|
-| **Windows 10** | PlayReady | Microsoft Edge/IE11 for PlayReady|
-| **Android devices (phone, tablet, TV)** |Widevine |Chrome for Widevine |
-| **iOS** | FairPlay | Safari for FairPlay (since iOS 11.2) |
-| **macOS** | FairPlay | Safari for FairPlay (since Safari 9+ on macOS X 10.11+ El Capitan)|
-| **tvOS** | FairPlay | |
-
-Considering the current state of deployment for each DRM, a service typically wants to implement two or three DRMs to make sure you address all the types of endpoints in the best way.
-
-There is a tradeoff between the complexity of the service logic and the complexity on the client side to reach a certain level of user experience on the various clients.
-
-To make your selection, keep in mind:
-
-* PlayReady is natively implemented in every Windows device, on some Android devices, and available through software SDKs on virtually any platform.
-* Widevine is natively implemented in every Android device, in Chrome, and in some other devices. Widevine is also supported in Firefox and Opera browsers over DASH.
-* FairPlay is available on iOS, macOS and tvOS.
-
-## A reference design
-
-This section presents a reference design that is agnostic to the technologies used to implement it.
-
-A DRM subsystem can contain the following components:
-
-* Key management
-* DRM encryption packaging
-* DRM license delivery
-* Entitlement check/access control
-* User authentication/authorization
-* Player app
-* Origin/content delivery network (CDN)
-
-The following diagram illustrates the high-level interaction among the components in a DRM subsystem:
-
-![DRM subsystem with CENC](./media/architecture-design-multi-drm-system/media-services-generic-drm-subsystem-with-cenc.png)
-
-The design has three basic layers:
-
-* A back-office layer (black) is not exposed externally.
-* A DMZ layer (dark blue) contains all the endpoints that face the public.
-* A public internet layer (light blue) contains the CDN and players with traffic across the public internet.
-
-There also should be a content management tool to control DRM protection, regardless of whether it's static or dynamic encryption. The inputs for DRM encryption include:
-
-* MBR video content
-* Content key
-* License acquisition URLs
-
-Here's the high-level flow during playback time:
-
-* The user is authenticated.
-* An authorization token is created for the user.
-* DRM protected content (manifest) is downloaded to the player.
-* The player submits a license acquisition request to license servers together with a key ID and an authorization token.
-
-The following section discusses the design of key management.
-
-| **ContentKey-to-asset** | **Scenario** |
-| | |
-| 1-to-1 |The simplest case. It provides the finest control. But this arrangement generally results in the highest license delivery cost. At minimum, one license request is required for each protected asset. |
-| 1-to-many |You could use the same content key for multiple assets. For example, for all the assets in a logical group, such as a genre or the subset of a genre (or movie gene), you can use a single content key. |
-| Many-to-1 |Multiple content keys are needed for each asset. <br/><br/>For example, if you need to apply dynamic CENC protection with multi-DRM for MPEG-DASH and dynamic AES-128 encryption for HLS, you need two separate content keys. Each content key needs its own ContentKeyType. (For the content key used for dynamic CENC protection, use ContentKeyType.CommonEncryption. For the content key used for dynamic AES-128 encryption, use ContentKeyType.EnvelopeEncryption.)<br/><br/>As another example, in CENC protection of DASH content, in theory, you can use one content key to protect the video stream and another content key to protect the audio stream. |
-| Many-to-many |Combination of the previous two scenarios. One set of content keys is used for each of the multiple assets in the same asset group. |
-
-Another important factor to consider is the use of persistent and nonpersistent licenses.
-
-Why are these considerations important?
-
-If you use a public cloud for license delivery, persistent and nonpersistent licenses have a direct impact on license delivery cost. The following two different design cases serve to illustrate:
-
-* Monthly subscription: Use a persistent license and 1-to-many content key-to-asset mapping. For example, for all the kids' movies, we use a single content key for encryption. In this case:
-
- Total number of licenses requested for all kids' movies/device = 1
-
-* Monthly subscription: Use a nonpersistent license and 1-to-1 mapping between content key and asset. In this case:
-
- Total number of licenses requested for all kids' movies/device = [number of movies watched] x [number of sessions]
-
-The two different designs result in very different license request patterns. The different patterns result in different license delivery cost if license delivery service is provided by a public cloud such as Media Services.
-
-## Map design to technology for implementation
-Next, the generic design is mapped to technologies on the Azure/Media Services platform by specifying which technology to use for each building block.
-
-The following table shows the mapping.
-
-| **Building block** | **Technology** |
-| | |
-| **Player** |[Azure Media Player](https://azure.microsoft.com/services/media-services/media-player/) |
-| **Identity Provider (IDP)** |Azure Active Directory (Azure AD) |
-| **Secure Token Service (STS)** |Azure AD |
-| **DRM protection workflow** |Azure Media Services dynamic protection |
-| **DRM license delivery** |* Media Services license delivery (PlayReady, Widevine, FairPlay) <br/>* Axinom license server <br/>* Custom PlayReady license server |
-| **Origin** |Azure Media Services streaming endpoint |
-| **Key management** |Not needed for reference implementation |
-| **Content management** |A C# console application |
-
-In other words, both IDP and STS are provided by Azure AD. The [Azure Media Player API](https://amp.azure.net/libs/amp/latest/docs/) is used for the player. Both Azure Media Services and Azure Media Player support CENC over DASH, FairPlay over HLS, PlayReady over smooth streaming, and AES-128 encryption for DASH, HLS and smooth.
-
-The following diagram shows the overall structure and flow with the previous technology mapping:
-
-![CENC on Media Services](./media/architecture-design-multi-drm-system/media-services-cenc-subsystem-on-AMS-platform.png)
-
-To set up DRM content protection, the content management tool uses the following inputs:
-
-* Open content
-* Content key from key management
-* License acquisition URLs
-* A list of information from Azure AD, such as audience, issuer, and token claims
-
-Here's the output of the content management tool:
-
-* ContentKeyPolicy describes DRM license template for each kind of DRM used;
-* ContentKeyPolicyRestriction describes the access control before a DRM license is issued
-* Streamingpolicy describes the various combinations of DRM - encryption mode - streaming protocol - container format, for streaming
-* StreamingLocator describes content key/IV used for encryption, and streaming URLs
-
-Here's the flow during runtime:
-
-* Upon user authentication, a JWT is generated.
-* One of the claims contained in the JWT is a groups claim that contains the group object ID EntitledUserGroup. This claim is used to pass the entitlement check.
-* The player downloads the client manifest of CENC-protected content and identifies the following:
- * Key ID.
- * The content is DRM protected.
- * License acquisition URLs.
-* The player makes a license acquisition request based on the browser/DRM supported. In the license acquisition request, the key ID and the JWT are also submitted. The license delivery service verifies the JWT and the claims contained before it issues the needed license.
-
-## Implementation
-### Implementation procedures
-Implementation includes the following steps:
-
-1. Prepare test assets. Encode/package a test video to multi-bitrate fragmented MP4 in Media Services. This asset is *not* DRM protected. DRM protection is done by dynamic protection later.
-
-2. Create a key ID and a content key (optionally from a key seed). In this instance, the key management system isn't needed because only a single key ID and content key are required for a couple of test assets.
-
-3. Use the Media Services API to configure multi-DRM license delivery services for the test asset. If you use custom license servers by your company or your company's vendors instead of license services in Media Services, you can skip this step. You can specify license acquisition URLs in the step when you configure license delivery. The Media Services API is needed to specify some detailed configurations, such as authorization policy restriction and license response templates for different DRM license services. At this time, the Azure portal doesn't provide the needed UI for this configuration. For API-level information and sample code, see [Use PlayReady and/or Widevine dynamic common encryption](drm-protect-with-drm-tutorial.md).
-
-4. Use the Media Services API to configure the asset delivery policy for the test asset. For API-level information and sample code, see [Use PlayReady and/or Widevine dynamic common encryption](drm-protect-with-drm-tutorial.md).
-
-5. Create and configure an Azure AD tenant in Azure.
-
-6. Create a few user accounts and groups in your Azure AD tenant. Create at least an "Entitled User" group, and add a user to this group. Users in this group pass the entitlement check in license acquisition. Users not in this group fail to pass the authentication check and can't acquire a license. Membership in this "Entitled User" group is a required groups claim in the JWT issued by Azure AD. You specify this claim requirement in the step when you configure multi-DRM license delivery services.
-
-7. Create an ASP.NET MVC app to host your video player. This ASP.NET app is protected with user authentication against the Azure AD tenant. Proper claims are included in the access tokens obtained after user authentication. We recommend OpenID Connect API for this step. Install the following NuGet packages:
-
- * Install-Package Microsoft.Azure.ActiveDirectory.GraphClient
- * Install-Package Microsoft.Owin.Security.OpenIdConnect
- * Install-Package Microsoft.Owin.Security.Cookies
- * Install-Package Microsoft.Owin.Host.SystemWeb
- * Install-Package Microsoft.IdentityModel.Clients.ActiveDirectory
-
-8. Create a player by using the [Azure Media Player API](https://amp.azure.net/libs/amp/latest/docs/). Use the [Azure Media Player ProtectionInfo API](https://amp.azure.net/libs/amp/latest/docs/) to specify which DRM technology to use on different DRM platforms.
-
-9. The following table shows the test matrix.
-
- | **DRM** | **Browser** | **Result for entitled user** | **Result for unentitled user** |
- | | | | |
- | **PlayReady** |Microsoft Edge or Internet Explorer 11 on Windows 10 |Succeed |Fail |
- | **Widevine** |Chrome, Firefox, Opera |Succeed |Fail |
- | **FairPlay** |Safari on macOS |Succeed |Fail |
- | **AES-128** |Most modern browsers |Succeed |Fail |
-
-For information on how to set up Azure AD for an ASP.NET MVC player app, see [Integrate an Azure Media Services OWIN MVC-based app with Azure Active Directory and restrict content key delivery based on JWT claims](http://gtrifonov.com/2015/01/24/mvc-owin-azure-media-services-ad-integration/).
-
-For more information, see [JWT token authentication in Azure Media Services and dynamic encryption](http://gtrifonov.com/2015/01/03/jwt-token-authentication-in-azure-media-services-and-dynamic-encryption/).
-
-For information on Azure AD:
-
-* You can find developer information in the [Azure Active Directory developer's guide](../../active-directory/develop/v2-overview.md).
-* You can find administrator information in [Administer your Azure AD tenant directory](../../active-directory/fundamentals/active-directory-whatis.md).
-
-### Some issues in implementation
-
-Use the following troubleshooting information for help with implementation issues.
-
-* The issuer URL must end with "/". The audience must be the player application client ID. Also, add "/" at the end of the issuer URL.
-
- ```xml
- <add key="ida:audience" value="[Application Client ID GUID]" />
- <add key="ida:issuer" value="https://sts.windows.net/[AAD Tenant ID]/" />
- ```
-
- In the [JWT Decoder](http://jwt.calebb.net/), you see **aud** and **iss**, as shown in the JWT:
-
- ![JWT](./media/architecture-design-multi-drm-system/media-services-1st-gotcha.png)
-
-* Add permissions to the application in Azure AD on the **Configure** tab of the application. Permissions are required for each application, both local and deployed versions.
-
- ![Permissions](./media/architecture-design-multi-drm-system/media-services-perms-to-other-apps.png)
-
-* Use the correct issuer when you set up dynamic CENC protection.
-
- ```xml
- <add key="ida:issuer" value="https://sts.windows.net/[AAD Tenant ID]/"/>
- ```
-
- The following doesn't work:
-
- ```xml
- <add key="ida:issuer" value="https://willzhanad.onmicrosoft.com/" />
- ```
-
- The GUID is the Azure AD tenant ID. The GUID can be found in the **Endpoints** pop-up menu in the Azure portal.
-
-* Grant group membership claims privileges. Make sure the following is in the Azure AD application manifest file:
-
- "groupMembershipClaims": "All" (the default value is null)
-
-* Set the proper TokenType when you create restriction requirements.
-
- `objTokenRestrictionTemplate.TokenType = TokenType.JWT;`
-
- Because you add support for JWT (Azure AD) in addition to SWT (ACS), the default TokenType is TokenType.JWT. If you use SWT/ACS, you must set the token to TokenType.SWT.
-
-## The completed system and test
-
-This section walks you through the following scenarios in the completed end-to-end system so that you can have a basic picture of the behavior before you get a sign-in account:
-
-* If you need a non-integrated scenario:
-
- * For video assets hosted in Media Services that are either unprotected or DRM protected but without token authentication (issuing a license to whoever requested it), you can test it without signing in. Switch to HTTP if your video streaming is over HTTP.
-
-* If you need an end-to-end integrated scenario:
-
- * For video assets under dynamic DRM protection in Media Services, with the token authentication and JWT generated by Azure AD, you need to sign in.
-
-For the player web application and its sign-in, see [this website](https://openidconnectweb.azurewebsites.net/).
-
-### User sign-in
-To test the end-to-end integrated DRM system, you need to have an account created or added.
-
-What account?
-
-Although Azure originally allowed access only by Microsoft account users, access is now allowed by users from both systems. All Azure properties now trust Azure AD for authentication, and Azure AD authenticates organizational users. A federation relationship was created where Azure AD trusts the Microsoft account consumer identity system to authenticate consumer users. As a result, Azure AD can authenticate guest Microsoft accounts as well as native Azure AD accounts.
-
-Because Azure AD trusts the Microsoft account domain, you can add any accounts from any of the following domains to the custom Azure AD tenant and use the account to sign in:
-
-| **Domain name** | **Domain** |
-| --- | --- |
-| **Custom Azure AD tenant domain** |somename.onmicrosoft.com |
-| **Corporate domain** |microsoft.com |
-| **Microsoft account domain** |outlook.com, live.com, hotmail.com |
-
-You can contact any of the authors to have an account created or added for you.
-
-The following screenshots show different sign-in pages used by different domain accounts:
-
-**Custom Azure AD tenant domain account**: The customized sign-in page of the custom Azure AD tenant domain.
-
-![Custom Azure AD tenant domain account one](./media/architecture-design-multi-drm-system/media-services-ad-tenant-domain1.png)
-
-**Microsoft domain account with smart card**: The sign-in page customized by Microsoft corporate IT with two-factor authentication.
-
-![Custom Azure AD tenant domain account two](./media/architecture-design-multi-drm-system/media-services-ad-tenant-domain2.png)
-
-**Microsoft account**: The sign-in page of the Microsoft account for consumers.
-
-![Custom Azure AD tenant domain account three](./media/architecture-design-multi-drm-system/media-services-ad-tenant-domain3.png)
-
-### Use Encrypted Media Extensions for PlayReady
-
-On a modern browser with Encrypted Media Extensions (EME) for PlayReady support, such as Internet Explorer 11 on Windows 8.1 or later and Microsoft Edge browser on Windows 10, PlayReady is the underlying DRM for EME.
-
-![Use EME for PlayReady](./media/architecture-design-multi-drm-system/media-services-eme-for-playready1.png)
-
-The player area is dark because PlayReady protection prevents you from making a screen capture of protected video.
-
-The following screenshot shows the player plug-ins and Media Source Extensions (MSE)/EME support:
-
-![Player plug-ins for PlayReady](./media/architecture-design-multi-drm-system/media-services-eme-for-playready2.png)
-
-EME in Microsoft Edge and Internet Explorer 11 on Windows 10 allows [PlayReady SL3000](https://www.microsoft.com/playready/features/EnhancedContentProtection.aspx/) to be invoked on Windows 10 devices that support it. PlayReady SL3000 unlocks the flow of enhanced premium content (4K, HDR) and new content delivery models (for enhanced content).
-
-Focusing on Windows devices: PlayReady is the only DRM available in hardware on Windows devices (PlayReady SL3000). A streaming service can use PlayReady through EME or through a Universal Windows Platform application and offer a higher video quality by using PlayReady SL3000 than another DRM. Typically, content up to 2K flows through Chrome or Firefox, and content up to 4K flows through Microsoft Edge/Internet Explorer 11 or a Universal Windows Platform application on the same device. The exact quality tiers depend on service settings and implementation.
-
-#### Use EME for Widevine
-
-On a modern browser with EME/Widevine support, such as Chrome 41+ on Windows 10, Windows 8.1, Mac OS X Yosemite, and Chrome on Android 4.4.4, Google Widevine is the DRM behind EME.
-
-![Use EME for Widevine](./media/architecture-design-multi-drm-system/media-services-eme-for-widevine1.png)
-
-Widevine doesn't prevent you from making a screen capture of protected video.
-
-![Player plug-ins for Widevine](./media/architecture-design-multi-drm-system/media-services-eme-for-widevine2.png)
-
-#### Use EME for FairPlay
-
-Similarly, you can test FairPlay protected content in this test player in Safari on macOS or iOS 11.2 and later.
-
-Make sure you set protectionInfo.type to "FairPlay" and supply the correct URL for your application certificate in FPS AC Path (FairPlay Streaming Application Certificate Path).
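-As a sketch, an AMP source entry for FairPlay can look like the following. The URLs and token are placeholders, and the property names (`certificateUrl`, `authenticationToken`) are assumed from the AMP documentation rather than taken from this article:
-
-```javascript
-// Shape of an Azure Media Player source entry for FairPlay-protected HLS.
-// All URLs and the token below are placeholders.
-const source = {
-  src: "https://example.streaming.mediaservices.windows.net/asset/manifest(format=m3u8-aapl)",
-  type: "application/vnd.apple.mpegurl",
-  protectionInfo: [
-    {
-      type: "FairPlay",                                   // protectionInfo.type must be "FairPlay"
-      certificateUrl: "https://example.com/fairplay.cer", // FPS Application Certificate path
-      authenticationToken: "Bearer=<JWT>"                 // placeholder token
-    }
-  ]
-};
-
-// In a page hosting AMP you would then pass this to the player:
-//   myPlayer.src([source]);
-console.log(source.protectionInfo[0].type);
-```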
-
-### Unentitled users
-
-If a user isn't a member of the "Entitled Users" group, the user doesn't pass the entitlement check. The multi-DRM license service then refuses to issue the requested license, as shown. The detailed description is "License acquire failed," which is by design.
-
-![Unentitled users](./media/architecture-design-multi-drm-system/media-services-unentitledusers.png)
-
-### Run a custom security token service
-
-If you run a custom STS, the JWT is issued by the custom STS by using either a symmetric or an asymmetric key.
-
-The following screenshot shows a scenario that uses a symmetric key (using Chrome):
-
-![Custom STS with a symmetric key](./media/architecture-design-multi-drm-system/media-services-running-sts1.png)
-
-The following screenshot shows a scenario that uses an asymmetric key via an X509 certificate (using a Microsoft modern browser):
-
-![Custom STS with an asymmetric key](./media/architecture-design-multi-drm-system/media-services-running-sts2.png)
-
-In both of the previous cases, user authentication stays the same. It takes place through Azure AD. The only difference is that JWTs are issued by the custom STS instead of Azure AD. When you configure dynamic CENC protection, the license delivery service restriction specifies the type of JWT, either a symmetric or an asymmetric key.
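-The symmetric versus asymmetric difference above can be sketched with Node's crypto module. This is illustrative only; the key material is generated on the fly, not taken from the article. With a symmetric key, the license delivery service must hold the same secret as the STS; with an asymmetric key, it verifies using only the public key from the X509 certificate:
-
-```javascript
-const crypto = require("crypto");
-
-const signingInput = "header.payload"; // stands in for base64url(header) + "." + base64url(payload)
-
-// Symmetric (HS256): the STS and the license delivery service share one secret.
-const sharedSecret = crypto.randomBytes(32);
-const hsSig = crypto.createHmac("sha256", sharedSecret).update(signingInput).digest();
-const hsOk = crypto.createHmac("sha256", sharedSecret).update(signingInput).digest().equals(hsSig);
-
-// Asymmetric (RS256 via an X509 certificate): the STS signs with its private key;
-// the license service verifies with only the public key.
-const { publicKey, privateKey } = crypto.generateKeyPairSync("rsa", { modulusLength: 2048 });
-const rsSig = crypto.sign("sha256", Buffer.from(signingInput), privateKey);
-const rsOk = crypto.verify("sha256", Buffer.from(signingInput), publicKey, rsSig);
-
-console.log(hsOk, rsOk); // true true
-```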
media-services Limits Quotas Constraints Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/limits-quotas-constraints-reference.md
This article lists some of the most common Microsoft Azure Media Services limits
## Storage limits
-| Resource | Default Limit |
-| --- | --- |
-| File size| In some scenarios, there is a limit on the maximum file size supported for processing in Media Services. <sup>(1)</sup> |
-| [Storage accounts](storage-account-concept.md) | 100<sup>(2)</sup> (fixed) |
+Azure Storage block blob limits apply to storage accounts used with Media Services. See [Azure Blob Storage limits](/azure/azure-resource-manager/management/azure-subscription-service-limits.md#azure-blob-storage-limits).
+
+This limit includes the total storage size of the files that you upload for encoding and the sizes of the encoded output files. The maximum file size for encoding is a separate limit. See [File size for encoding](#file-size-for-encoding-limit).
-<sup>1</sup> The maximum size supported for a single blob is currently up to 5 TB in Azure Blob Storage. Additional limits apply in Media Services based on the VM sizes that are used by the service. The size limit applies to the files that you upload and also the files that get generated as a result of Media Services processing (encoding or analyzing). If your source file is larger than 260-GB, your Job will likely fail.
+### Storage account limit
+You can have up to 100 storage accounts. All storage accounts must be in the same Azure subscription.
-<sup>2</sup> The storage accounts must be from the same Azure subscription.
+## File size for encoding limit
+An individual file that you upload to be encoded should be no larger than 260 GB.
## Jobs (encoding & analyzing) limits
media-services Media Services Axinom Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/previous/media-services-axinom-integration.md
- Title: Using Axinom to deliver Widevine licenses to Azure Media Services | Microsoft Docs
-description: This article describes how you can use Azure Media Services (AMS) to deliver a stream that is dynamically encrypted by AMS with both PlayReady and Widevine DRMs. The PlayReady license comes from the Media Services PlayReady license server, and the Widevine license is delivered by the Axinom license server.
- Previously updated: 03/10/2021
-# Using Axinom to deliver Widevine licenses to Azure Media Services
--
-> [!div class="op_single_selector"]
-> * [castLabs](media-services-castlabs-integration.md)
-> * [Axinom](media-services-axinom-integration.md)
->
->
-
-## Overview
-Azure Media Services (AMS) has added Google Widevine dynamic protection (see [Mingfei's blog](https://azure.microsoft.com/blog/azure-media-services-adds-google-widevine-packaging-for-delivering-multi-drm-stream/) for details). In addition, Azure Media Player (AMP) has also added Widevine support (see the [AMP documentation](https://amp.azure.net/libs/amp/latest/docs/) for details). This is a major accomplishment in streaming DASH content protected by CENC with multi-native DRM (PlayReady and Widevine) on modern browsers equipped with MSE and EME.
-
-Starting with the Media Services .NET SDK version 3.5.2, Media Services enables you to configure Widevine license template and get Widevine licenses. You can also use the following AMS partners to help you deliver Widevine licenses: [Axinom](https://www.axinom.com), [EZDRM](https://ezdrm.com/), [castLabs](https://castlabs.com/company/partners/azure/).
-
-This article describes how to integrate and test Widevine license server managed by Axinom. Specifically, it covers:
-
-* Configuring dynamic Common Encryption with multi-DRM (PlayReady and Widevine) with corresponding license acquisition URLs;
-* Generating a JWT token in order to meet the license server requirements;
-* Developing Azure Media Player app, which handles license acquisition with JWT token authentication;
-
-The complete system and the flow of content key, key ID, key seed, JWT token, and its claims can best be described by the following diagram:
-
-![DASH and CENC](./media/media-services-axinom-integration/media-services-axinom1.png)
-
-## Content Protection
-For configuring dynamic protection and key delivery policy, see Mingfei's blog: [How to configure Widevine packaging with Azure Media Services](https://mingfeiy.com/how-to-configure-widevine-packaging-with-azure-media-services).
-
-You can configure dynamic CENC protection with multi-DRM for DASH streaming having both of the following:
-
-1. PlayReady protection for Microsoft Edge and IE11, which can have a token authorization restriction. The token-restricted policy must be accompanied by a token issued by a security token service (STS), such as Azure Active Directory;
-2. Widevine protection for Chrome, which can require token authentication with a token issued by another STS.
-
-See the [JWT Token Generation](media-services-axinom-integration.md#jwt-token-generation) section for why Azure Active Directory can't be used as an STS for Axinom's Widevine license server.
-
-### Considerations
-1. You must use the Axinom-specified key seed (8888000000000000000000000000000000000000) and your generated or selected key ID to generate the content key for configuring the key delivery service. The Axinom license server issues all licenses containing content keys based on the same key seed, which is valid for both testing and production.
-2. The Widevine license acquisition URL for testing is [https://drm-widevine-licensing.axtest.net/AcquireLicense](https://drm-widevine-licensing.axtest.net/AcquireLicense). Both HTTP and HTTPS are allowed.
-
-## Azure Media Player Preparation
-AMP v1.4.0 supports playback of AMS content that is dynamically packaged with both PlayReady and Widevine DRM.
-If the Widevine license server doesn't require token authentication, there's nothing more you need to do to test DASH content protected by Widevine. For an example, the AMP team provides a simple [sample](https://amp.azure.net/libs/amp/latest/samples/dynamic_multiDRM_PlayReadyWidevineFairPlay_notoken.html), where you can see it working in Microsoft Edge and IE11 with PlayReady and in Chrome with Widevine.
-The Widevine license server provided by Axinom requires JWT token authentication. The JWT token needs to be submitted with the license request through an HTTP header, "X-AxDRM-Message". For this purpose, add the following JavaScript to the web page hosting AMP before setting the source:
-
-```html
-<script>AzureHtml5JS.KeySystem.WidevineCustomAuthorizationHeader = "X-AxDRM-Message"</script>
-```
-
-The rest of AMP code is standard AMP API as in AMP document [here](https://amp.azure.net/libs/amp/latest/docs/).
-
-The preceding JavaScript for setting a custom authorization header is a short-term approach until the official long-term approach in AMP is released.
-
-## JWT Token Generation
-The Axinom Widevine license server for testing requires JWT token authentication. In addition, one of the claims in the JWT token is of a complex object type instead of a primitive data type.
-
-Unfortunately, Azure AD can only issue JWT tokens with primitive-type claims. Similarly, the .NET Framework API (System.IdentityModel.Tokens.SecurityTokenHandler and JwtPayload) lets you input a complex object type as a claim, but the claim is still serialized as a string. Therefore, we can't use either of the two to generate the JWT token for the Widevine license request.
-
-John Sheehan's [JWT NuGet package](https://www.nuget.org/packages/JWT) meets the needs, so we're going to use this NuGet package.
-
-Below is the code for generating JWT token with the needed claims as required by Axinom Widevine license server for testing:
-
-```csharp
-using System;
-using System.Collections.Generic;
-using System.Linq;
-using System.Web;
-using System.IdentityModel.Tokens;
-using System.IdentityModel.Protocols.WSTrust;
-using System.Security.Claims;
-
-namespace OpenIdConnectWeb.Utils
-{
- public class JwtUtils
- {
- //using John Sheehan's NuGet JWT library: https://www.nuget.org/packages/JWT/
- public static string CreateJwtSheehan(string symmetricKeyHex, string key_id)
- {
-            byte[] symmetricKey = ConvertHexStringToByteArray(symmetricKeyHex); // hex string to byte[]; the key must be treated as a series of bytes, not a string, when encoding
-
- var payload = new Dictionary<string, object>()
- {
- { "version", 1 },
- { "com_key_id", System.Configuration.ConfigurationManager.AppSettings["ax:com_key_id"] },
- { "message", new { type = "entitlement_message", key_ids = new string[] { key_id } } }
- };
-
- string token = JWT.JsonWebToken.Encode(payload, symmetricKey, JWT.JwtHashAlgorithm.HS256);
-
- return token;
- }
-
- //convert hex string to byte[]
- public static byte[] ConvertHexStringToByteArray(string hexString)
- {
- if (hexString.Length % 2 != 0)
- {
- throw new ArgumentException(String.Format(System.Globalization.CultureInfo.InvariantCulture, "The binary key cannot have an odd number of digits: {0}", hexString));
- }
-
- byte[] HexAsBytes = new byte[hexString.Length / 2];
- for (int index = 0; index < HexAsBytes.Length; index++)
- {
- string byteValue = hexString.Substring(index * 2, 2);
- HexAsBytes[index] = byte.Parse(byteValue, System.Globalization.NumberStyles.HexNumber, System.Globalization.CultureInfo.InvariantCulture);
- }
-
- return HexAsBytes;
- }
-
- }
-
-}
-```
-
-The following app settings configure the Axinom Widevine license server for testing:
-
-```xml
-<add key="ax:laurl" value="https://drm-widevine-licensing.axtest.net/AcquireLicense" />
-<add key="ax:com_key_id" value="69e54088-e9e0-4530-8c1a-1eb6dcd0d14e" />
-<add key="ax:com_key" value="4861292d027e269791093327e62ceefdbea489a4c7e5a4974cc904b840fd7c0f" />
-<add key="ax:keyseed" value="8888000000000000000000000000000000000000" />
-```
-
-### Considerations
-1. Even though the AMS PlayReady license delivery service requires "Bearer=" preceding an authentication token, the Axinom Widevine license server doesn't use it.
-2. The Axinom communication key is used as the signing key. The key is a hex string; however, it must be treated as a series of bytes, not a string, when encoding. This is achieved by the ConvertHexStringToByteArray method.
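-The second point can be demonstrated in a few lines of Node.js, using the test communication key from the config above. Signing the same input with the hex-decoded key bytes and with the raw hex string produces different signatures:
-
-```javascript
-const crypto = require("crypto");
-
-// The Axinom communication key is distributed as a hex string, but HS256
-// must be keyed with the decoded bytes, not the ASCII characters.
-const comKeyHex = "4861292d027e269791093327e62ceefdbea489a4c7e5a4974cc904b840fd7c0f";
-const signingInput = "header.payload"; // stands in for base64url(header) + "." + base64url(payload)
-
-const sigFromBytes  = crypto.createHmac("sha256", Buffer.from(comKeyHex, "hex")).update(signingInput).digest("base64url");
-const sigFromString = crypto.createHmac("sha256", comKeyHex).update(signingInput).digest("base64url");
-
-console.log(sigFromBytes === sigFromString); // false: the two keys are different byte sequences
-```
-
-This is the JavaScript equivalent of the C# ConvertHexStringToByteArray step: 64 hex characters decode to 32 key bytes, which is not the same key as the 64-byte ASCII string.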
-
-## Retrieving Key ID
-You may have noticed that in the code for generating a JWT token, the key ID is required. Because the JWT token needs to be ready before the AMP player loads, the key ID must be retrieved first in order to generate the token.
-
-There are multiple ways to get the key ID. For example, you can store the key ID together with content metadata in a database, or you can retrieve it from the DASH MPD (Media Presentation Description) file. The following code takes the latter approach.
-
-```csharp
-//get key_id from DASH MPD
-//requires: using System.Xml; using System.Xml.XPath;
-public static string GetKeyID(string dashUrl)
-{
- if (!dashUrl.EndsWith("(format=mpd-time-csf)"))
- {
- dashUrl += "(format=mpd-time-csf)";
- }
-
- XPathDocument objXPathDocument = new XPathDocument(dashUrl);
- XPathNavigator objXPathNavigator = objXPathDocument.CreateNavigator();
- XmlNamespaceManager objXmlNamespaceManager = new XmlNamespaceManager(objXPathNavigator.NameTable);
- objXmlNamespaceManager.AddNamespace("", "urn:mpeg:dash:schema:mpd:2011");
- objXmlNamespaceManager.AddNamespace("ns1", "urn:mpeg:dash:schema:mpd:2011");
- objXmlNamespaceManager.AddNamespace("cenc", "urn:mpeg:cenc:2013");
- objXmlNamespaceManager.AddNamespace("ms", "urn:microsoft");
- objXmlNamespaceManager.AddNamespace("mspr", "urn:microsoft:playready");
-    objXmlNamespaceManager.AddNamespace("xsi", "http://www.w3.org/2001/XMLSchema-instance");
- objXmlNamespaceManager.PushScope();
-
- XPathNodeIterator objXPathNodeIterator;
- objXPathNodeIterator = objXPathNavigator.Select("//ns1:MPD/ns1:Period/ns1:AdaptationSet/ns1:ContentProtection[@value='cenc']", objXmlNamespaceManager);
-
- string key_id = string.Empty;
- if (objXPathNodeIterator.MoveNext())
- {
- key_id = objXPathNodeIterator.Current.GetAttribute("default_KID", "urn:mpeg:cenc:2013");
- }
-
- return key_id;
-}
-```
-
-## Summary
-
-With the latest addition of Widevine support in both Azure Media Services content protection and Azure Media Player, we can implement streaming of DASH with multi-native DRM (PlayReady and Widevine), using both the PlayReady license service in AMS and the Widevine license server from Axinom, for the following modern browsers:
-
-* Chrome
-* Microsoft Edge on Windows 10