Updates from: 11/23/2022 02:11:02
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory On Premises Ecma Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/on-premises-ecma-troubleshoot.md
After you configure the provisioning agent and ECMA host, it's time to test connectivity.
![Screenshot that shows that the ECMA service is running.](./media/on-premises-ecma-troubleshoot/tshoot-1.png)
- 2. Go to the folder where the ECMA host was installed by selecting **Troubleshooting** > **Scripts** > **TestECMA2HostConnection**. Run the script. This script sends a SCIM GET or POST request to validate that the ECMA Connector Host is operating and responding to requests. It should be run on the same computer as the ECMA Connector Host service itself.
+ 2. Check that the ECMA Connector Host service is responding to requests.
+ 1. On the server with the agent installed, launch PowerShell.
+ 1. Change to the folder where the ECMA host was installed, such as `C:\Program Files\Microsoft ECMA2Host`.
+ 1. Change to the subdirectory `Troubleshooting`.
+ 1. Run the script `TestECMA2HostConnection.ps1` in that directory. When prompted, provide the connector name and the secret token.
+ ```
+ PS C:\Program Files\Microsoft ECMA2Host\Troubleshooting> .\TestECMA2HostConnection.ps1
+ Supply values for the following parameters:
+ ConnectorName: CORPDB1
+ SecretToken: ************
+ ```
+ 1. This script sends a SCIM GET or POST request to validate that the ECMA Connector Host is operating and responding to requests. If the output does not show that an HTTP connection was successful, then check that the service is running and that the correct secret token was provided.
3. Ensure that the agent is active by going to your application in the Azure portal, selecting **admin connectivity**, selecting the agent dropdown list, and ensuring your agent is active.
4. Check if the secret token provided is the same as the secret token on-premises. Go to on-premises, provide the secret token again, and then copy it into the Azure portal.
5. Ensure that you've assigned one or more agents to the application in the Azure portal.
6. After you assign an agent, you need to wait 10 to 20 minutes for the registration to complete. The connectivity test won't work until the registration completes.
- 7. Ensure that you're using a valid certificate. Go to the **Settings** tab of the ECMA host to generate a new certificate.
+ 7. Ensure that you're using a valid certificate that has not expired. Go to the **Settings** tab of the ECMA host to view the certificate expiration date. If the certificate has expired, select **Generate certificate** to generate a new certificate.
8. Restart the provisioning agent: on your VM, search the taskbar for the Microsoft Azure AD Connect provisioning agent, right-click it, select **Stop**, and then select **Start**.
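If you prefer PowerShell to the taskbar, a minimal sketch (the display name is taken from the text above and may differ on your installation):

```
Get-Service -DisplayName "Microsoft Azure AD Connect Provisioning Agent*" | Restart-Service
```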
- 1. If you continue to see `The ECMA host is currently importing data from the target application` even after restarting the ECMA Connector Host and the provisioning agent, and waiting for the initial import to complete, then you may need to cancel and re-start configuring provisioning to the application in the Azure portal.
+ 1. If you continue to see `The ECMA host is currently importing data from the target application` even after restarting the ECMA Connector Host and the provisioning agent, and waiting for the initial import to complete, then you may need to cancel and start over configuring provisioning to the application in the Azure portal.
1. When you provide the tenant URL in the Azure portal, ensure that it follows this pattern. You can replace `localhost` with your host name, but it isn't required. Replace `connectorName` with the name of the connector you specified in the ECMA host. The error message 'invalid resource' generally indicates that the URL does not follow the expected format.
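Inferred from the cache endpoint shown later in this article (treat this as a sketch rather than an authoritative value), the expected pattern is:

```
https://localhost:8585/ecma2host_{connectorName}/scim
```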
## Unable to configure the ECMA host, view logs in Event Viewer, or start the ECMA host service
-To resolve the following issues, run the ECMA host as an admin:
+To resolve the following issues, run the ECMA host configuration wizard as an administrator:
* I get an error when I open the ECMA host wizard. ![Screenshot that shows an ECMA wizard error.](./media/on-premises-ecma-troubleshoot/tshoot-2.png)
-* I can configure the ECMA host wizard, but I can't see the ECMA host logs. In this case, you need to open the host as an admin and set up a connector end to end. This step can be simplified by exporting an existing connector and importing it again.
+* I can configure the ECMA host wizard, but I can't see the ECMA host logs. In this case, you need to open the ECMA Host configuration wizard as an administrator and set up a connector end to end. This step can be simplified by exporting an existing connector and importing it again.
![Screenshot that shows host logs.](./media/on-premises-ecma-troubleshoot/tshoot-3.png)
## Turn on verbose logging
-By default, `switchValue` for the ECMA Connector Host is set to `Verbose`. This will emit detailed logging that will help you troubleshoot issues. You can change the verbosity to `Error` if you would like to limit the number of logs emitted to only errors. Wen using the SQL connector without Windows Integrated Auth, we recommend setting the `switchValue` to `Error` as it will ensure that the connection string is not emitted in the logs. In order to change the verbosity to error, please update the `switchValue` to "Error" in both places as shown below.
+By default, `switchValue` for the ECMA Connector Host is set to `Verbose`. This setting will emit detailed logging that will help you troubleshoot issues. You can change the verbosity to `Error` if you would like to limit the number of logs emitted to only errors. When using the SQL connector without Windows Integrated Auth, we recommend setting the `switchValue` to `Error` as it will ensure that the connection string is not emitted in the logs. In order to change the verbosity to error, update the `switchValue` to "Error" in both places as shown below.
The file location for verbose service logging is `C:\Program Files\Microsoft ECMA2Host\Service\Microsoft.ECMA2Host.Service.exe.config`.
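The `switchValue` attributes live in the `<system.diagnostics>` section of this file. A minimal sketch of the change (the source name below is illustrative; edit the `switchValue` on the source entries already present in your file):

```
<system.diagnostics>
  <sources>
    <!-- "ConnectorsLog" is an illustrative name; your file lists the actual sources -->
    <source name="ConnectorsLog" switchValue="Error" />
  </sources>
</system.diagnostics>
```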
The file location for wizard logging is C:\Program Files\Microsoft ECMA2Host\Wiz
## Query the ECMA Host Cache

The ECMA Host has a cache of users in your application that is updated according to the schedule you specify in the properties page of the ECMA Host wizard. In order to query the cache, perform the steps below:
+ 1. Set the Debug flag to `true`.
-2. Restart the ECMA Host service.
-3. Query this endpoint from the server the ECMA Host is installed on, replacing `{connector name}` with the name of your connector, specified in the properties page of the ECMA Host. `https://localhost:8585/ecma2host_{connectorName}/scim/cache`
-Please be aware that setting the debug flag to `true` disables authentication on the ECMA Host. You will want to set it back to `false` and restart the ECMA Host service once you are done querying the cache.
+ Please be aware that setting the debug flag to `true` disables authentication on the ECMA Host. You will need to set it back to `false` and restart the ECMA Host service once you are done querying the cache.
-The file location for verbose service logging is C:\Program Files\Microsoft ECMA2Host\Service\Microsoft.ECMA2Host.Service.exe.config.
- ```
- <?xml version="1.0" encoding="utf-8"?>
- <configuration>
+ The file location for verbose service logging is `C:\Program Files\Microsoft ECMA2Host\Service\Microsoft.ECMA2Host.Service.exe.config`.
+ ```
+ <?xml version="1.0" encoding="utf-8"?>
+ <configuration>
    <startup>
        <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.6" />
    </startup>
    <appSettings>
        <add key="Debug" value="true" />
    </appSettings>
- ```
+ ```
+
+2. Restart the `Microsoft ECMA2Host` service.
+1. Wait for the ECMA Host to connect to the target systems and re-read its cache from each of the connected systems. If there are many users in those connected systems, this import process could take several minutes.
+1. Query this endpoint from the server the ECMA Host is installed on, replacing `{connector name}` with the name of your connector, specified in the properties page of the ECMA Host: `https://localhost:8585/ecma2host_{connectorName}/scim/cache`.
+
+ 1. On the server with the agent installed, launch PowerShell.
+ 1. Change to the folder where the ECMA host was installed, such as `C:\Program Files\Microsoft ECMA2Host`.
+ 1. Change to the subdirectory `Troubleshooting`.
+ 1. Run the script `TestECMA2HostConnection.ps1` in that directory, and provide as arguments the connector name and the `ObjectTypePath` value `cache`. When prompted, type the secret token configured for that connector.
+ ```
+ PS C:\Program Files\Microsoft ECMA2Host\Troubleshooting> .\TestECMA2HostConnection.ps1 -ConnectorName CORPDB1 -ObjectTypePath cache
+ Supply values for the following parameters:
+ SecretToken: ************
+ ```
+ 1. This script sends a SCIM GET request to validate that the ECMA Connector Host is operating and responding to requests. If the output does not show that an HTTP connection was successful, then check that the service is running and that the correct secret token was provided.
+
+1. Set the Debug flag back to `false` or remove the setting once you are done querying the cache.
+2. Restart the `Microsoft ECMA2Host` service.
+
## Target attribute is missing
The provisioning service automatically discovers attributes in your target application. If you see that a target attribute is missing in the target attribute list in the Azure portal, perform the following troubleshooting steps:
- 1. Review the **Select Attributes** page of your ECMA host configuration to check that the attribute has been selected to be exposed to the Azure portal.
- 1. Ensure that the ECMA host service is turned on.
+ 1. Review the **Select Attributes** page of your ECMA host configuration to check that the attribute has been selected, so that it will be exposed to the Azure portal.
+ 1. Ensure that the ECMA host service is running.
1. Review the ECMA host logs to check that a /schemas request was made, and review the attributes in the response. This information will be valuable for support to troubleshoot the issue.

## Collect logs from Event Viewer as a zip file
-Go to the folder where the ECMA host was installed by selecting **Troubleshooting** > **Scripts**. Run the `CollectTroubleshootingInfo` script as an admin. You can use it to capture the logs in a zip file and export them.
+You can use an included script to capture the event logs in a zip file and export them.
+
+ 1. On the server with the agent installed, right-click PowerShell in the Start menu and select **Run as administrator**.
+ 1. Change to the folder where the ECMA host was installed, such as `C:\Program Files\Microsoft ECMA2Host`.
+ 1. Change to the subdirectory `Troubleshooting`.
+ 1. Run the script `CollectTroubleshootingInfo.ps1` in that directory.
+ 1. The script will create a ZIP file in that directory containing the event logs.
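For example, running it from the Troubleshooting folder (a sketch; the generated file name will vary):

```
PS C:\Program Files\Microsoft ECMA2Host\Troubleshooting> .\CollectTroubleshootingInfo.ps1
```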
## Review events in Event Viewer
After the ECMA Connector Host schema mapping has been configured, start the service.
| Error | Resolution |
| -- | -- |
| Could not load file or assembly 'file:///C:\Program Files\Microsoft ECMA2Host\Service\ECMA\Cache\8b514472-c18a-4641-9a44-732c296534e8\Microsoft.IAM.Connector.GenericSql.dll' or one of its dependencies. Access is denied. | Ensure that the network service account has 'full control' permissions over the cache folder. |
-| Invalid LDAP style of object's DN. DN: username@domain.com" or `Target Site: ValidByLdapStyle` | Ensure the 'DN is Anchor' checkbox is not checked in the 'connectivity' page of the ECMA host. Ensure the 'autogenerated' checkbox is selected in the 'object types' page of the ECMA host. See [About anchor attributes and distinguished names](on-premises-application-provisioning-architecture.md#about-anchor-attributes-and-distinguished-names) for more information.|
+| `Invalid LDAP style of object's DN. DN: username@domain.com` or `Target Site: ValidByLdapStyle` | Ensure the 'DN is Anchor' checkbox is not checked in the 'connectivity' page of the ECMA host. Ensure the 'autogenerated' checkbox is selected in the 'object types' page of the ECMA host. For more information, see [About anchor attributes and distinguished names](on-premises-application-provisioning-architecture.md#about-anchor-attributes-and-distinguished-names).|
## Understand incoming SCIM requests

Requests made by Azure AD to the provisioning agent and connector host use the SCIM protocol. Requests made from the host to apps use the protocol the app supports. The requests from the host to the agent to Azure AD rely on SCIM. You can learn more about the SCIM implementation in [Tutorial: Develop and plan provisioning for a SCIM endpoint in Azure Active Directory](use-scim-to-provision-users-and-groups.md).
-At the beginning of each provisioning cycle, before performing on-demand provisioning and when doing the test connection, the Azure AD provisioning service generally makes a get-user call for a [dummy user](use-scim-to-provision-users-and-groups.md#request-3) to ensure the target endpoint is available and returning SCIM-compliant responses.
+The Azure AD provisioning service generally makes a get-user call to check for a [dummy user](use-scim-to-provision-users-and-groups.md#request-3) in three situations: at the beginning of each provisioning cycle, before performing on-demand provisioning and when **test connection** is selected. This check ensures the target endpoint is available and returning SCIM-compliant responses to the Azure AD provisioning service.
## How do I troubleshoot the provisioning agent?
By using Azure AD, you can monitor the provisioning service in the cloud and collect logs on-premises.
### I am getting an Invalid LDAP style DN error when trying to configure the ECMA Connector Host with SQL

By default, the generic SQL connector expects the DN to be populated using the LDAP style (when the 'DN is anchor' attribute is left unchecked in the first connectivity page). In the error message `Invalid LDAP style DN` or `Target Site: ValidByLdapStyle`, you may see that the DN field contains a user principal name (UPN), rather than an LDAP style DN that the connector expects.
-To resolve this, ensure that **Autogenerated** is selected on the object types page when you configure the connector.
+To resolve this error message, ensure that **Autogenerated** is selected on the object types page when you configure the connector.
-See [About anchor attributes and distinguished names](on-premises-application-provisioning-architecture.md#about-anchor-attributes-and-distinguished-names) for more information.
+For more information, see [About anchor attributes and distinguished names](on-premises-application-provisioning-architecture.md#about-anchor-attributes-and-distinguished-names).
## Next steps
active-directory On Premises Ldap Connector Prepare Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/on-premises-ldap-connector-prepare-directory.md
+
+ Title: Preparing for Azure AD Provisioning to Active Directory Lightweight Directory Services (preview)
+description: This document describes how to configure Azure AD to provision users into Active Directory Lightweight Directory Services as an example of an LDAP directory.
+Last updated : 11/15/2022
+# Prepare Active Directory Lightweight Directory Services for provisioning from Azure AD
+
+The following documentation provides tutorial information demonstrating how to prepare an Active Directory Lightweight Directory Services (AD LDS) installation. This can be used as an example LDAP directory for troubleshooting or to demonstrate [how to provision users from Azure AD into an LDAP directory](on-premises-ldap-connector-configure.md).
+
+## Prepare the LDAP directory
+
+If you do not already have a directory server, the following information is provided to help you create a test AD LDS environment. This setup uses PowerShell and ADAMInstall.exe with an answer file. This document does not cover in-depth information on AD LDS. For more information, see [Active Directory Lightweight Directory Services](/previous-versions/windows/it-pro/windows-server-2012-r2-and-2012/hh831593(v=ws.11)).
+
+If you already have AD LDS or another directory server, you can skip this content, and continue at the [Tutorial: ECMA Connector Host generic LDAP connector](on-premises-ldap-connector-configure.md) for installing and configuring the ECMA connector host.
+
+### Create an SSL certificate and a test directory, and install AD LDS
+Use the PowerShell script from [Appendix A](#appendix-ainstall-ad-lds-powershell-script). The script performs the following actions:
+ 1. Creates a self-signed certificate that will be used by the LDAP connector.
+ 2. Creates a directory for the feature install log.
+ 3. Exports the certificate in the personal store to the directory.
+ 4. Imports the certificate to the trusted root of the local machine.
+ 5. Installs the AD LDS role on our virtual machine.
+
+On the Windows Server virtual machine that you are using to test the LDAP connector, edit the script to match your computer name, and then run the script using Windows PowerShell with administrative privileges.
+
+### Create an instance of AD LDS
+Now that the role has been installed, you need to create an instance of AD LDS. To create an instance, you can use the answer file provided below. This file will install the instance quietly without using the UI.
+
+Copy the contents of [Appendix B](#appendix-banswer-file) into Notepad and save the file as **answer.txt** in **C:\Windows\ADAM**.
+
+Now open a command prompt with administrative privileges and run the following executable:
+
+```
+C:\Windows\ADAM> ADAMInstall.exe /answer:answer.txt
+```
+
+### Create containers and a service account for AD LDS
+Use the PowerShell script from [Appendix C](#appendix-cpopulate-ad-lds-powershell-script). The script performs the following actions:
+ 1. Creates a container for the service account that will be used with the LDAP connector.
+ 1. Creates a container for the cloud users, where users will be provisioned to.
+ 1. Creates the service account in AD LDS.
+ 1. Enables the service account.
+ 1. Adds the service account to the AD LDS Administrators role.
+
+On the Windows Server virtual machine that you are using to test the LDAP connector, run the script using Windows PowerShell with administrative privileges.
+
+### Grant the NETWORK SERVICE read permissions to the SSL certificate
+In order to enable SSL to work, you need to grant the NETWORK SERVICE read permissions to our newly created certificate. To grant permissions, use the following steps; a scripted alternative follows the list.
+
+ 1. Navigate to **C:\ProgramData\Microsoft\Crypto\Keys**.
+ 2. Right-click the system file located there. Its name is a GUID. This file stores the key for your newly created certificate.
+ 1. Select **Properties**.
+ 1. At the top, select the **Security** tab.
+ 1. Select **Edit**.
+ 1. Select **Add**.
+ 1. In the box, enter **Network Service** and select **Check Names**.
+ 1. Select **NETWORK SERVICE** from the list and select **OK**.
+ 1. Select **OK**.
+ 1. Ensure the NETWORK SERVICE account has **Read** and **Read & execute** permissions, and then select **Apply** and **OK**.
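If you prefer to script this step, a sketch using `icacls` (the GUID file name is a placeholder for the system file found in step 2):

```powershell
# Placeholder path; replace <GUID> with the actual file name in C:\ProgramData\Microsoft\Crypto\Keys
$keyFile = "C:\ProgramData\Microsoft\Crypto\Keys\<GUID>"
icacls $keyFile /grant "NETWORK SERVICE:(RX)"
```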
+
+### Verify SSL connectivity with AD LDS
+Now that we have configured the certificate and granted the network service account permissions, test the connectivity to verify that it is working.
+ 1. Open Server Manager and select AD LDS on the left.
+ 2. Right-click your instance of AD LDS and select ldp.exe from the pop-up.
+ [![Screenshot that shows the Ldp tool location.](../../../includes/media/active-directory-app-provisioning-ldap/ldp-1.png)](../../../includes/media/active-directory-app-provisioning-ldap/ldp-1.png#lightbox)</br>
+ 3. At the top of ldp.exe, select **Connection** and **Connect**.
+ 4. Enter the following information and click **OK**.
+ - Server: APP3
+ - Port: 636
+ - Place a check in the SSL box
+ [![Screenshot that shows the Ldp tool connection configuration.](../../../includes/media/active-directory-app-provisioning-ldap/ldp-2.png)](../../../includes/media/active-directory-app-provisioning-ldap/ldp-2.png#lightbox)</br>
+ 5. You should see a response similar to the screenshot below.
+ [![Screenshot that shows the Ldp tool connection configuration success.](../../../includes/media/active-directory-app-provisioning-ldap/ldp-3.png)](../../../includes/media/active-directory-app-provisioning-ldap/ldp-3.png#lightbox)</br>
+ 6. At the top, under **Connection** select **Bind**.
+ 7. Leave the defaults and click **OK**.
+ [![Screenshot that shows the Ldp tool bind operation.](../../../includes/media/active-directory-app-provisioning-ldap/ldp-4.png)](../../../includes/media/active-directory-app-provisioning-ldap/ldp-4.png#lightbox)</br>
+ 8. You should now successfully bind to the instance.
+ [![Screenshot that shows the Ldp tool bind success.](../../../includes/media/active-directory-app-provisioning-ldap/ldp-5.png)](../../../includes/media/active-directory-app-provisioning-ldap/ldp-5.png#lightbox)</br>
+
+### Disable the local password policy
+Currently, the LDAP connector provisions users with a blank password. This provisioning will not satisfy the local password policy on the server, so disable the policy for testing purposes. To disable password complexity on a non-domain-joined server, use the following steps; a scripted alternative follows the list.
+
+>[!IMPORTANT]
+>Because ongoing password sync is not a feature of on-premises LDAP provisioning, Microsoft recommends that AD LDS be used specifically with federated applications, in conjunction with AD DS, or when updating existing users in an instance of AD LDS.
+
+ 1. On the server, select **Start**, **Run**, and then enter **gpedit.msc**.
+ 2. In the **Local Group Policy Editor**, navigate to **Computer Configuration** > **Windows Settings** > **Security Settings** > **Account Policies** > **Password Policy**.
+ 3. On the right, double-click **Password must meet complexity requirements** and select **Disabled**.
+ [![Screenshot of the complexity requirements setting.](../../../includes/media/active-directory-app-provisioning-ldap/local-1.png)](../../../includes/media/active-directory-app-provisioning-ldap/local-1.png#lightbox)</br>
+ 4. Select **Apply** and **OK**.
+ 5. Close the Local Group Policy Editor.
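If you prefer to script the policy change, a sketch using `secedit` (file paths are arbitrary; `C:\test` was created earlier by the Appendix A script):

```powershell
# Export the local security policy, flip the complexity flag, and re-apply it
secedit /export /cfg C:\test\secpol.cfg
(Get-Content C:\test\secpol.cfg).Replace("PasswordComplexity = 1","PasswordComplexity = 0") | Set-Content C:\test\secpol.cfg
secedit /configure /db C:\Windows\security\local.sdb /cfg C:\test\secpol.cfg /areas SECURITYPOLICY
```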
+
+
+Next, continue in the guidance to [provision users from Azure AD into an LDAP directory](on-premises-ldap-connector-configure.md) to download and configure the provisioning agent.
+
+## Appendix A - Install AD LDS PowerShell script
+The following PowerShell script can be used to automate the installation of Active Directory Lightweight Directory Services. You'll need to edit the script to match your environment; in particular, change `APP3` to the hostname of your computer.
+
+```powershell
+# Filename: 1_SetupADLDS.ps1
+# Description: Creates a certificate that will be used for SSL and installs Active Directory Lightweight Directory Services.
+#
+# DISCLAIMER:
+# Copyright (c) Microsoft Corporation. All rights reserved. This
+# script is made available to you without any express, implied or
+# statutory warranty, not even the implied warranty of
+# merchantability or fitness for a particular purpose, or the
+# warranty of title or non-infringement. The entire risk of the
+# use or the results from the use of this script remains with you.
+#
+#
+#
+#
+#Declare variables
+$DNSName = 'APP3'
+$CertLocation = 'cert:\LocalMachine\MY'
+$logpath = "c:\"
+$dirname = "test"
+$dirtype = "directory"
+$featureLogPath = "c:\test\featurelog.txt"
+
+#Create a new self-signed certificate
+New-SelfSignedCertificate -DnsName $DNSName -CertStoreLocation $CertLocation
+
+#Create directory
+New-Item -Path $logpath -Name $dirname -ItemType $dirtype
+
+#Export the certificate from the local machine personal store
+Get-ChildItem -Path cert:\LocalMachine\my | Export-Certificate -FilePath c:\test\allcerts.sst -Type SST
+
+#Import the certificate in to the trusted root
+Import-Certificate -FilePath "C:\test\allcerts.sst" -CertStoreLocation cert:\LocalMachine\Root
+
+#Install AD LDS
+start-job -Name addFeature -ScriptBlock {
+Add-WindowsFeature -Name "ADLDS" -IncludeAllSubFeature -IncludeManagementTools
+ }
+Wait-Job -Name addFeature
+Get-WindowsFeature | Where installed >>$featureLogPath
++
+ ```
+
+## Appendix B - Answer file
+This file is used to automate and create an instance of AD LDS. You will edit this file to match your environment; in particular, change `APP3` to the hostname of your server.
+
+>[!IMPORTANT]
+> This script uses the local administrator for the AD LDS service account and has its password hard-coded in the answers. This action is for **testing only** and should never be used in a production environment.
+>
+> If you are installing AD LDS on a domain controller and not a member or standalone server, you will need to change the LocalLDAPPortToListenOn and LocalSSLPortToListenOn to something other than the well-known ports for LDAP and LDAP over SSL. For example, LocalLDAPPortToListenOn=51300 and LocalSSLPortToListenOn=51301.
+
+```
+ [ADAMInstall]
+ InstallType=Unique
+ InstanceName=AD-APP-LDAP
+ LocalLDAPPortToListenOn=389
+ LocalSSLPortToListenOn=636
+ NewApplicationPartitionToCreate=CN=App,DC=contoso,DC=lab
+ DataFilesPath=C:\Program Files\Microsoft ADAM\AD-APP-LDAP\data
+ LogFilesPath=C:\Program Files\Microsoft ADAM\AD-APP-LDAP\data
+ ServiceAccount=APP3\Administrator
+ ServicePassword=Pa$$Word1
+ AddPermissionsToServiceAccount=Yes
+ Administrator=APP3\Administrator
+ ImportLDIFFiles="MS-User.LDF"
+ SourceUserName=APP3\Administrator
+ SourcePassword=Pa$$Word1
+ ```
+## Appendix C - Populate AD LDS PowerShell script
+PowerShell script to populate AD LDS with containers and a service account.
+
+```powershell
+# Filename: 2_PopulateADLDS.ps1
+# Description: Populates our AD LDS environment with 2 containers and a service account
+
+# DISCLAIMER:
+# Copyright (c) Microsoft Corporation. All rights reserved. This
+# script is made available to you without any express, implied or
+# statutory warranty, not even the implied warranty of
+# merchantability or fitness for a particular purpose, or the
+# warranty of title or non-infringement. The entire risk of the
+# use or the results from the use of this script remains with you.
+#
+#
+#
+#
+# Create service accounts container
+New-ADObject -Name "ServiceAccounts" -Type "container" -Path "CN=App,DC=contoso,DC=lab" -Server "APP3:389"
+Write-Output "Creating ServiceAccounts container"
+
+# Create cloud users container
+New-ADObject -Name "CloudUsers" -Type "container" -Path "CN=App,DC=contoso,DC=lab" -Server "APP3:389"
+Write-Output "Creating CloudUsers container"
+
+# Create a new service account
+New-ADUser -name "svcAccountLDAP" -accountpassword (ConvertTo-SecureString -AsPlainText 'Pa$$1Word' -Force) -Displayname "LDAP Service Account" -server 'APP3:389' -path "CN=ServiceAccounts,CN=App,DC=contoso,DC=lab"
+Write-Output "Creating service account"
+
+# Enable the new service account
+Enable-ADAccount -Identity "CN=svcAccountLDAP,CN=ServiceAccounts,CN=App,DC=contoso,DC=lab" -Server "APP3:389"
+Write-Output "Enabling service account"
+
+# Add the service account to the Administrators role
+Get-ADGroup -Server "APP3:389" -SearchBase "CN=Administrators,CN=Roles,CN=App,DC=contoso,DC=lab" -Filter "name -like 'Administrators'" | Add-ADGroupMember -Members "CN=svcAccountLDAP,CN=ServiceAccounts,CN=App,DC=contoso,DC=lab"
+Write-Output "Adding service account to Administrators role"
+
+ ```
+
+## Next steps
+
+- [Tutorial: ECMA Connector Host generic LDAP connector](on-premises-ldap-connector-configure.md)
active-directory Msal Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-configuration.md
The list of authorities that are known and trusted by you. In addition to the au
|--|-|--|--|
| `type` | String | Yes | Mirrors the audience or account type your app targets. Possible values: `AAD`, `B2C` |
| `audience` | Object | No | Only applies when type=`AAD`. Specifies the identity your app targets. Use the value from your app registration |
-| `authority_url` | String | Yes | Required only when type=`B2C`. Specifies the authority URL or policy your app should use |
+| `authority_url` | String | Yes | Required only when type=`B2C`. Optional for type=`AAD`. Specifies the authority URL or policy your app should use |
| `default` | boolean | Yes | A single `"default":true` is required when one or more authorities is specified. |

#### Audience Properties
active-directory Reference Third Party Cookies Spas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reference-third-party-cookies-spas.md
The solution outlined in this article works in all of these browsers, or anywher
## Overview of the solution
-To continue authenticating users in SPAs, app developers must use the [authorization code flow](v2-oauth2-auth-code-flow.md). In the auth code flow, the identity provider issues a code, and the SPA redeems the code for an access token and a refresh token. When the app requires additional tokens, it can use the [refresh token flow](v2-oauth2-auth-code-flow.md#refresh-the-access-token) to get new tokens. Microsoft Authentication Library (MSAL) for JavaScript v2.0, implements the authorization code flow for SPAs and, with minor updates, is a drop-in replacement for MSAL.js 1.x.
+To continue authenticating users in SPAs, app developers must use the [authorization code flow](v2-oauth2-auth-code-flow.md). In the auth code flow, the identity provider issues a code, and the SPA redeems the code for an access token and a refresh token. When the app requires new tokens, it can use the [refresh token flow](v2-oauth2-auth-code-flow.md#refresh-the-access-token) to get new tokens. Microsoft Authentication Library (MSAL) for JavaScript v2.0, implements the authorization code flow for SPAs and, with minor updates, is a drop-in replacement for MSAL.js 1.x.
For the Microsoft identity platform, SPAs and native clients follow similar protocol guidance:
For the Microsoft identity platform, SPAs and native clients follow similar prot
- PKCE is _required_ for SPAs on the Microsoft identity platform. PKCE is _recommended_ for native and confidential clients.
- No use of a client secret
-SPAs have two additional restrictions:
+SPAs have two more restrictions:
- [The redirect URI must be marked as type `spa`](v2-oauth2-auth-code-flow.md#redirect-uris-for-single-page-apps-spas) to enable CORS on login endpoints. - Refresh tokens issued through the authorization code flow to `spa` redirect URIs have a 24-hour lifetime rather than a 90-day lifetime.
There are two ways of accomplishing sign-in:
- Consider having a pre-load sequence in the app that checks for a login session and redirects to the login page before the app fully unpacks and executes the JavaScript payload.
- **Popups** - If the user experience (UX) of a full page redirect doesn't work for the application, consider using a popup to handle authentication.
- - When the popup finishes redirecting to the application after authentication, code in the redirect handler will store the code and tokens in local storage for the application to use. MSAL.js supports popups for authentication, as do most libraries.
+ - When the popup finishes redirecting to the application after authentication, code in the redirect handler will store the code, and tokens in local storage for the application to use. MSAL.js supports popups for authentication, as do most libraries.
- Browsers are decreasing support for popups, so they may not be the most reliable option. User interaction with the SPA before creating the popup may be needed to satisfy browser requirements.
- Apple [describes a popup method](https://webkit.org/blog/8311/intelligent-tracking-prevention-2-0/) as a temporary compatibility fix to give the original window access to third-party cookies. While Apple may remove this transferral of permissions in the future, it will not impact the guidance here.
+ Apple [describes a popup method](https://webkit.org/blog/8311/intelligent-tracking-prevention-2-0/) as a temporary compatibility fix to give the original window access to third-party cookies. While Apple may remove this transferal of permissions in the future, it will not impact the guidance here.
Here, the popup is being used as a first party navigation to the login page so that a session is found and an auth code can be provided. This should continue working into the future.

### Using iframes
-A common pattern in web apps is to use an iframe to embed one app inside anotherd: the top-level frame handles authenticating the user and the application hosted in the iframe can trust that the user is signed in, fetching tokens silently using the implicit flow.
+A common pattern in web apps is to use an iframe to embed one app inside another: the top-level frame handles authenticating the user and the application hosted in the iframe can trust that the user is signed in, fetching tokens silently using the implicit flow. However, there are couple of caveats to this assumption irrespective of whether third-party cookies are enabled or blocked in the browser.
Silent token acquisition no longer works when third-party cookies are blocked - the application embedded in the iframe must switch to using popups to access the user's session as it can't navigate to the login page.
active-directory Users Default Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/users-default-permissions.md
Previously updated : 08/04/2021
Last updated : 11/22/2022
Policies | <ul><li>Read all properties of policies<li>Manage all properties of o
## Restrict member users' default permissions
-It's possible to add restrictions to users' default permissions. You can use this feature if you don't want all users in the directory to have access to the Azure AD admin portal/directory.
-
-For example, a university has many users in its directory. The admin might not want all of the students in the directory to be able to see the full directory and violate other students' privacy. The use of this feature is optional and at the discretion of the Azure AD administrator.
+It's possible to add restrictions to users' default permissions.
You can restrict default permissions for member users in the following ways:
+> [!CAUTION]
+> Using the **Restrict access to Azure AD administration portal** switch **is NOT a security measure**. For more information on the functionality, see the table below.
+| Permission | Setting explanation |
+| - | - |
-| **Register applications** | Setting this option to **No** prevents users from creating application registrations. You can the grant the ability back to specific individuals by adding them to the application developer role. |
+| **Register applications** | Setting this option to **No** prevents users from creating application registrations. You can then grant the ability back to specific individuals by adding them to the application developer role. |
| **Allow users to connect work or school account with LinkedIn** | Setting this option to **No** prevents users from connecting their work or school account with their LinkedIn account. For more information, see [LinkedIn account connections data sharing and consent](../enterprise-users/linkedin-user-consent.md). | | **Create security groups** | Setting this option to **No** prevents users from creating security groups. Global administrators and user administrators can still create security groups. To learn how, see [Azure Active Directory cmdlets for configuring group settings](../enterprise-users/groups-settings-cmdlets.md). | | **Create Microsoft 365 groups** | Setting this option to **No** prevents users from creating Microsoft 365 groups. Setting this option to **Some** allows a set of users to create Microsoft 365 groups. Global administrators and user administrators can still create Microsoft 365 groups. To learn how, see [Azure Active Directory cmdlets for configuring group settings](../enterprise-users/groups-settings-cmdlets.md). |
-| **Restrict access to Azure AD administration portal** | **What does this switch do?** <br>**No** lets non-administrators browse the Azure AD administration portal. <br>**Yes** Restricts non-administrators from browsing the Azure AD administration portal. Non-administrators who are owners of groups or applications are unable to use the Azure portal to manage their owned resources. </p><p></p><p>**What does it not do?** <br> It does not restrict access to Azure AD data using PowerShell, Microsoft GraphAPI, or other clients such as Visual Studio. <br>It does not restrict access as long as a user is assigned a custom role (or any role). </p><p></p><p>**When should I use this switch?** <br>Use this to prevent users from misconfiguring the resources that they own. </p><p></p><p>**When should I not use this switch?** <br>Do not use this switch as a security measure. Instead, create a Conditional Access policy that targets Microsoft Azure Management will block non-administrators access to [Microsoft Azure Management](../conditional-access/concept-conditional-access-cloud-apps.md#microsoft-azure-management). </p><p></p><p> **How do I grant only a specific non-administrator users the ability to use the Azure AD administration portal?** <br> Set this option to **Yes**, then assign them a role like global reader. </p><p></p><p>**Restrict access to the Entra administration portal** <br>A Conditional Access policy that targets Microsoft Azure Management will target access to all Azure management. |
-| **Read other users** | This setting is available in Microsoft Graph and PowerShell only. Setting this flag to `$false` prevents all non-admins from reading user information from the directory. This flag does not prevent reading user information in other Microsoft services like Exchange Online.</p><p>This setting is meant for special circumstances, so we don't recommend setting the flag to `$false`. |
-
-> [!NOTE]
-> It's assumed that the average user would only use the portal to access Azure AD, and not use PowerShell or the Azure CLI to access their resources. Currently, restricting access to users' default permissions occurs only when users try to access the directory within the Azure portal.
+| **Restrict access to Azure AD administration portal** | **What does this switch do?** <br>**No** lets non-administrators browse the Azure AD administration portal. <br>**Yes** restricts non-administrators from browsing the Azure AD administration portal. Non-administrators who are owners of groups or applications are unable to use the Azure portal to manage their owned resources. </p><p></p><p>**What does it not do?** <br> It doesn't restrict access to Azure AD data using PowerShell, Microsoft GraphAPI, or other clients such as Visual Studio. <br>It doesn't restrict access as long as a user is assigned a custom role (or any role). </p><p></p><p>**When should I use this switch?** <br>Use this option to prevent users from misconfiguring the resources that they own. </p><p></p><p>**When should I not use this switch?** <br>Don't use this switch as a security measure. Instead, create a Conditional Access policy targeting Microsoft Azure Management to block non-administrators' access to [Microsoft Azure Management](../conditional-access/concept-conditional-access-cloud-apps.md#microsoft-azure-management). </p><p></p><p> **How do I grant only specific non-administrator users the ability to use the Azure AD administration portal?** <br> Set this option to **Yes**, then assign them a role like global reader. </p><p></p><p>**Restrict access to the Entra administration portal** <br>A Conditional Access policy that targets Microsoft Azure Management will target access to all Azure management. |
+| **Read other users** | This setting is available in Microsoft Graph and PowerShell only. Setting this flag to `$false` prevents all non-admins from reading user information from the directory. This flag doesn't prevent reading user information in other Microsoft services like Exchange Online.</p><p>This setting is meant for special circumstances, so we don't recommend setting the flag to `$false`. |
## Restrict guest users' default permissions
You can restrict default permissions for guest users in the following ways.
Permission | Setting explanation
- | -
-**Guest user access restrictions** | Setting this option to **Guest users have the same access as members** grants all member user permissions to guest users by default.<p>Setting this option to **Guest user access is restricted to properties and memberships of their own directory objects** restricts guest access to only their own user profile by default. Access to other users is no longer allowed, even when they're searching by user principal name, object ID, or display name. Access to group information, including groups memberships, is also no longer allowed.<p>This setting does not prevent access to joined groups in some Microsoft 365 services like Microsoft Teams. To learn more, see [Microsoft Teams guest access](/MicrosoftTeams/guest-access).<p>Guest users can still be added to administrator roles regardless of this permission setting.
+**Guest user access restrictions** | Setting this option to **Guest users have the same access as members** grants all member user permissions to guest users by default.<p>Setting this option to **Guest user access is restricted to properties and memberships of their own directory objects** restricts guest access to only their own user profile by default. Access to other users is no longer allowed, even when they're searching by user principal name, object ID, or display name. Access to group information, including groups memberships, is also no longer allowed.<p>This setting doesn't prevent access to joined groups in some Microsoft 365 services like Microsoft Teams. To learn more, see [Microsoft Teams guest access](/MicrosoftTeams/guest-access).<p>Guest users can still be added to administrator roles regardless of this permission setting.
**Guests can invite** | Setting this option to **Yes** allows guests to invite other guests. To learn more, see [Configure external collaboration settings](../external-identities/external-collaboration-settings-configure.md).
**Members can invite** | Setting this option to **Yes** allows non-admin members of your directory to invite guests. To learn more, see [Configure external collaboration settings](../external-identities/external-collaboration-settings-configure.md).
**Admins and users in the guest inviter role can invite** | Setting this option to **Yes** allows admins and users in the guest inviter role to invite guests. When you set this option to **Yes**, users in the guest inviter role will still be able to invite guests, regardless of the **Members can invite** setting. To learn more, see [Configure external collaboration settings](../external-identities/external-collaboration-settings-configure.md).
active-directory Migrate From Federation To Cloud Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/migrate-from-federation-to-cloud-authentication.md
Last updated 08/26/2022
-+
active-directory Jiramicrosoft Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/jiramicrosoft-tutorial.md
Use your Microsoft Azure Active Directory account with Atlassian JIRA server to
To configure Azure AD integration with JIRA SAML SSO by Microsoft, you need the following items:

- An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-- JIRA Core and Software 6.4 to 8.22.1 or JIRA Service Desk 3.0 to 4.22.1 should be installed and configured on Windows 64-bit version.
+- JIRA Core and Software 6.4 to 9.4.0 or JIRA Service Desk 3.0 to 4.22.1 should be installed and configured on Windows 64-bit version.
- JIRA server is HTTPS enabled.
- Note that the supported versions for the JIRA plugin are mentioned in the section below.
- JIRA server is reachable on the Internet, particularly to the Azure AD login page for authentication, and should be able to receive the token from Azure AD.
To get started, you need the following items:
## Supported versions of JIRA
-* JIRA Core and Software: 6.4 to 8.22.1.
+* JIRA Core and Software: 6.4 to 9.4.0.
* JIRA Service Desk 3.0 to 4.22.1.
* JIRA also supports 5.2. For more details, click [Microsoft Azure Active Directory single sign-on for JIRA 5.2](jira52microsoft-tutorial.md).
active-directory Starleaf Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/starleaf-provisioning-tutorial.md
Before you configure and enable automatic user provisioning, you should decide w
Before you configure StarLeaf for automatic user provisioning with Azure AD, you will need to configure SCIM provisioning in StarLeaf:
-1. Sign in to your [StarLeaf Admin Console](https://portal.starleaf.com/#page=login). Navigate to **Integrations** > **Add integration**.
+1. Sign in to your StarLeaf Admin Console. Navigate to **Integrations** > **Add integration**.
![Screenshot of the StarLeaf Admin Console with the Integrations and Add integration options called out.](media/starleaf-provisioning-tutorial/image00.png)
aks Cluster Container Registry Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cluster-container-registry-integration.md
You need to establish an authentication mechanism when using [Azure Container Re
You can set up the AKS to ACR integration using the Azure CLI or Azure PowerShell. The AKS to ACR integration assigns the [**AcrPull** role][acr-pull] to the [Azure Active Directory (Azure AD) **managed identity**][aad-identity] associated with your AKS cluster.
+> [!IMPORTANT]
+> There is a latency issue with Azure Active Directory groups when attaching ACR. If the AcrPull role is granted to an Azure AD group and the kubelet identity is added to the group to complete the RBAC configuration, there might be up to a one-hour delay before the RBAC group takes effect. We recommend you use [bring your own kubelet identity][byo-kubelet-identity] as a workaround. You can pre-create a user-assigned identity, add it to the Azure AD group, then use the identity as the kubelet identity to create an AKS cluster. This ensures the identity is added to the Azure AD group before a token is generated by kubelet, which avoids the latency issue.
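A sketch of that workaround (identity and cluster names are placeholders, and the resource IDs must come from your environment):

```azurecli
# Pre-create the kubelet identity, add it to the Azure AD group, then reference both identities at cluster creation
az identity create --name myKubeletIdentity --resource-group myResourceGroup
az aks create --resource-group myResourceGroup --name myAKSCluster \
    --enable-managed-identity \
    --assign-identity <control-plane-identity-resource-id> \
    --assign-kubelet-identity <kubelet-identity-resource-id>
```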
> [!NOTE]
> This article covers automatic authentication between AKS and ACR. If you need to pull an image from a private external registry, use an [image pull secret][image-pull-secret].
nginx0-deployment-669dfc4d4b-xdpd6 1/1 Running 0 20s
[ps-detach]: /powershell/module/az.aks/set-azakscluster#-acrnametodetach [cli-param]: /cli/azure/aks#az-aks-update-optional-parameters [ps-attach]: /powershell/module/az.aks/set-azakscluster#-acrnametoattach
+[byo-kubelet-identity]: use-managed-identity.md#use-a-pre-created-kubelet-managed-identity
aks Monitor Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/monitor-aks.md
Application Insights provides complete monitoring of applications running on AKS
- [Java](../azure-monitor/app/java-in-process-agent.md) - [Node.js](../azure-monitor/app/nodejs.md) - [Python](../azure-monitor/app/opencensus-python.md)-- [Other platforms](../azure-monitor/app/platforms.md)
+- [Other platforms](../azure-monitor/app/app-insights-overview.md#supported-languages)
See [What is Application Insights?](../azure-monitor/app/app-insights-overview.md)
aks Tutorial Kubernetes Upgrade Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/tutorial-kubernetes-upgrade-cluster.md
As part of the application and cluster lifecycle, you may want to upgrade to the
In this tutorial, part seven of seven, you learn how to:

> [!div class="checklist"]
+>
> * Identify current and available Kubernetes versions.
> * Upgrade your Kubernetes nodes.
> * Validate a successful upgrade.
In previous tutorials, an application was packaged into a container image, and this container image was uploaded to Azure Container Registry (ACR). You also created an AKS cluster. The application was then deployed to the AKS cluster. If you have not done these steps and would like to follow along, start with [Tutorial 1: Prepare an application for AKS][aks-tutorial-prepare-app].
-* If you're using Azure CLI, this article requires that you're running Azure CLI version 2.34.1 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
+* If you're using Azure CLI, this tutorial requires that you're running Azure CLI version 2.34.1 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
* If you're using Azure PowerShell, this tutorial requires that you're running Azure PowerShell version 5.9.0 or later. Run `Get-InstalledModule -Name Az` to find the version. If you need to install or upgrade, see [Install Azure PowerShell][azure-powershell-install].
-* If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
## Get available cluster versions
aks Use Network Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-network-policies.md
Please execute the following commands prior to creating a cluster:
```azurecli
az extension add --name aks-preview
az extension update --name aks-preview
- az feature register --namespace Microsoft.ContainerService --name AKSWindows2022Preview
az feature register --namespace Microsoft.ContainerService --name WindowsNetworkPolicyPreview
az provider register -n Microsoft.ContainerService
```
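Registration can take a few minutes. One way to check its state (a standard follow-up, not part of the original snippet):

```azurecli
az feature show --namespace Microsoft.ContainerService --name WindowsNetworkPolicyPreview --query properties.state
```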
aks Use System Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-system-pools.md
Title: Use system node pools in Azure Kubernetes Service (AKS)
description: Learn how to create and manage system node pools in Azure Kubernetes Service (AKS)
Previously updated : 06/18/2020
Last updated : 11/22/2022
You need the Azure PowerShell version 7.5.0 or later installed and configured. R
The following limitations apply when you create and manage AKS clusters that support system node pools.

* See [Quotas, virtual machine size restrictions, and region availability in Azure Kubernetes Service (AKS)][quotas-skus-regions].
-* The AKS cluster must be built with virtual machine scale sets as the VM type and the *Standard* SKU load balancer.
-* The name of a node pool may only contain lowercase alphanumeric characters and must begin with a lowercase letter. For Linux node pools, the length must be between 1 and 12 characters. For Windows node pools, the length must be between 1 and 6 characters.
+* The name of a node pool may only contain lowercase alphanumeric characters and must begin with a lowercase letter. For Linux node pools, the length must be between 1 and 12 characters. For Windows node pools, the length must be between one and six characters.
* An API version of 2020-03-01 or greater must be used to set a node pool mode. Clusters created on API versions older than 2020-03-01 contain only user node pools, but can be migrated to contain system node pools by following [update pool mode steps](#update-existing-cluster-system-and-user-node-pools).
* The mode of a node pool is a required property and must be explicitly set when using ARM templates or direct API calls.

## System and user node pools
-For a system node pool, AKS automatically assigns the label **kubernetes.azure.com/mode: system** to its nodes. This causes AKS to prefer scheduling system pods on node pools that contain this label. This label does not prevent you from scheduling application pods on system node pools. However, we recommend you isolate critical system pods from your application pods to prevent misconfigured or rogue application pods from accidentally killing system pods.
+For a system node pool, AKS automatically assigns the label **kubernetes.azure.com/mode: system** to its nodes. This causes AKS to prefer scheduling system pods on node pools that contain this label. This label doesn't prevent you from scheduling application pods on system node pools. However, we recommend you isolate critical system pods from your application pods to prevent misconfigured or rogue application pods from accidentally killing system pods.
You can enforce this behavior by creating a dedicated system node pool. Use the `CriticalAddonsOnly=true:NoSchedule` taint to prevent application pods from being scheduled on system node pools; an example command follows the list below.
System node pools have the following restrictions:
* System pools osType must be Linux.
* User node pools osType may be Linux or Windows.
* System pools must contain at least one node, and user node pools may contain zero or more nodes.
-* System node pools require a VM SKU of at least 2 vCPUs and 4GB memory. But burstable-VM(B series) is not recommended.
-* A minimum of two nodes 4 vCPUs is recommended(e.g. Standard_DS4_v2), especially for large clusters (Multiple CoreDNS Pod replicas, 3-4+ add-ons, etc.).
+* System node pools require a VM SKU of at least 2 vCPUs and 4 GB memory. But burstable-VM(B series) isn't recommended.
+* A minimum of two nodes 4 vCPUs is recommended (for example, Standard_DS4_v2), especially for large clusters (Multiple CoreDNS Pod replicas, 3-4+ add-ons, etc.).
* System node pools must support at least 30 pods as described by the [minimum and maximum value formula for pods][maximum-pods].
* Spot node pools require user node pools.
-* Adding an additional system node pool or changing which node pool is a system node pool will *NOT* automatically move system pods. System pods can continue to run on the same node pool even if you change it to a user node pool. If you delete or scale down a node pool running system pods that was previously a system node pool, those system pods are redeployed with preferred scheduling to the new system node pool.
+* Adding another system node pool or changing which node pool is a system node pool *does not* automatically move system pods. System pods can continue to run on the same node pool, even if you change it to a user node pool. If you delete or scale down a node pool running system pods that were previously a system node pool, those system pods are redeployed with preferred scheduling to the new system node pool.
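As referenced above, a sketch of adding a dedicated system pool with the `CriticalAddonsOnly` taint (resource group, cluster, and pool names are assumed):

```azurecli-interactive
az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name systempool \
    --node-count 3 \
    --node-taints CriticalAddonsOnly=true:NoSchedule \
    --mode System
```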
You can do the following operations with node pools:
The following example creates a resource group named *myResourceGroup* in the *e
az group create --name myResourceGroup --location eastus
```
-Use the [az aks create][az-aks-create] command to create an AKS cluster. The following example creates a cluster named *myAKSCluster* with one dedicated system pool containing one node. For your production workloads, ensure you are using system node pools with at least three nodes. This operation may take several minutes to complete.
+Use the [az aks create][az-aks-create] command to create an AKS cluster. The following example creates a cluster named *myAKSCluster* with one dedicated system pool containing one node. For your production workloads, ensure you're using system node pools with at least three nodes. This operation may take several minutes to complete.
```azurecli-interactive
# Create a new AKS cluster with a single system pool
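# A sketch of the command described above; argument values are assumed from the surrounding text
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --node-count 1 \
    --generate-ssh-keys
```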
The following example creates a resource group named *myResourceGroup* in the *e
New-AzResourceGroup -ResourceGroupName myResourceGroup -Location eastus
```
-Use the [New-AzAksCluster][new-azakscluster] cmdlet to create an AKS cluster. The following example creates a cluster named *myAKSCluster* with one dedicated system pool containing one node. For your production workloads, ensure you are using system node pools with at least three nodes. This operation may take several minutes to complete.
+Use the [New-AzAksCluster][new-azakscluster] cmdlet to create an AKS cluster. The following example creates a cluster named *myAKSCluster* with one dedicated system pool containing one node. For your production workloads, ensure you're using system node pools with at least three nodes. The create operation may take several minutes to complete.
```azurepowershell-interactive
# Create a new AKS cluster with a single system pool
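# A minimal sketch of the create command (names from the example above;
# any other parameters in the original example are assumptions and not shown here):
New-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster -NodeCount 1
```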
az aks nodepool add \
### [Azure PowerShell](#tab/azure-powershell)
-You can add one or more system node pools to existing AKS clusters. It's recommended to schedule your application pods on user node pools, and dedicate system node pools to only critical system pods. This prevents rogue application pods from accidentally killing system pods. Enforce this behavior with the `CriticalAddonsOnly=true:NoSchedule` [taint][aks-taints] for your system node pools.
+You can add one or more system node pools to existing AKS clusters. It's recommended to schedule your application pods on user node pools and dedicate system node pools to only critical system pods. Dedicating system node pools in this way prevents rogue application pods from accidentally killing system pods. Enforce this behavior with the `CriticalAddonsOnly=true:NoSchedule` [taint][aks-taints] for your system node pools.
The following command adds a dedicated node pool of mode type system with a default count of three nodes.
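A minimal sketch of such a command, assuming the cluster and resource group names used earlier:

```azurecli-interactive
# Add a dedicated system node pool with three nodes. The CriticalAddonsOnly
# taint keeps application pods off the pool; only critical system pods tolerate it.
az aks nodepool add \
    --cluster-name myAKSCluster \
    --resource-group myResourceGroup \
    --name systempool \
    --node-count 3 \
    --node-taints CriticalAddonsOnly=true:NoSchedule \
    --mode System
```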
aks Windows Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/windows-faq.md
To fix this error:
1. Move Windows pods from existing Windows agent pools to new Windows agent pools.
1. Delete old Windows agent pools.
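A rough sketch of those two steps, with hypothetical pool names (*npwin* for the old pool, *npwin2* for its replacement):

```azurecli
# Add a replacement Windows node pool (pool names and node count are assumptions).
az aks nodepool add --cluster-name myAKSCluster --resource-group myResourceGroup --name npwin2 --os-type Windows --node-count 2

# After the pods have been moved to the new pool, delete the old one.
az aks nodepool delete --cluster-name myAKSCluster --resource-group myResourceGroup --name npwin
```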
+## Why is there an unexpected user named "sshd" on my VM node?
+
+AKS adds a user named "sshd" when installing the OpenSSH service. This user is not malicious. We recommend that customers update their alerts to ignore this unexpected user account.
+
## How do I rotate the service principal for my Windows node pool?

Windows node pools do not support service principal rotation. To update the service principal, create a new Windows node pool and migrate your pods from the older pool to the new one. After your pods are migrated to the new pool, delete the older node pool.
aks Workload Identity Deploy Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/workload-identity-deploy-cluster.md
az provider register --namespace Microsoft.ContainerService
Create an AKS cluster using the [az aks create][az-aks-create] command with the `--enable-oidc-issuer` parameter to use the OIDC Issuer. The following example creates a cluster named *myAKSCluster* with one node in the *myResourceGroup* resource group:

```azurecli-interactive
+az group create --name myResourceGroup --location eastus
+ az aks create -g myResourceGroup -n myAKSCluster --node-count 1 --enable-oidc-issuer --enable-workload-identity --generate-ssh-keys
```
You can retrieve this information using the Azure CLI command: [az keyvault list
1. Use the Azure CLI [az account set][az-account-set] command to set a specific subscription to be the current active subscription. Then use the [az identity create][az-identity-create] command to create a managed identity.

    ```azurecli
- az account set --subscription "subscriptionID"
- ```
+ export SUBSCRIPTION_ID="$(az account show --query id --output tsv)"
+ export USER_ASSIGNED_IDENTITY_NAME="myIdentity"
+ export RG_NAME="myResourceGroup"
+ export LOCATION="eastus"
- ```azurecli
- az identity create --name "userAssignedIdentityName" --resource-group "resourceGroupName" --location "location" --subscription "subscriptionID"
+ az identity create --name "${USER_ASSIGNED_IDENTITY_NAME}" --resource-group "${RG_NAME}" --location "${LOCATION}" --subscription "${SUBSCRIPTION_ID}"
```

2. Set an access policy for the managed identity to access secrets in your Key Vault by running the following commands:
- ```bash
- export USER_ASSIGNED_CLIENT_ID="$(az identity show --resource-group "resourceGroupName" --name "userAssignedIdentityName" --query 'clientId' -otsv)"
- ```
- ```azurecli
- az keyvault set-policy --name "keyVaultName" --secret-permissions get --spn "${USER_ASSIGNED_CLIENT_ID}"
+ export RG_NAME="myResourceGroup"
+ export USER_ASSIGNED_IDENTITY_NAME="myIdentity"
+ export KEYVAULT_NAME="myKeyVault"
+ export USER_ASSIGNED_CLIENT_ID="$(az identity show --resource-group "${RG_NAME}" --name "${USER_ASSIGNED_IDENTITY_NAME}" --query 'clientId' -otsv)"
+
+ az keyvault set-policy --name "${KEYVAULT_NAME}" --secret-permissions get --spn "${USER_ASSIGNED_CLIENT_ID}"
```

## Create Kubernetes service account
You can retrieve this information using the Azure CLI command: [az keyvault list
Create a Kubernetes service account and annotate it with the client ID of the managed identity created in the previous step. Use the [az aks get-credentials][az-aks-get-credentials] command and replace the values for the cluster name and the resource group name.

```azurecli
-az aks get-credentials -n myAKSCluster -g MyResourceGroup
+az aks get-credentials -n myAKSCluster -g myResourceGroup
```
-Copy and paste the following multi-line input in the Azure CLI, and update the values for `serviceAccountName` and `serviceAccountNamespace` with the Kubernetes service account name and its namespace.
+Copy and paste the following multi-line input in the Azure CLI, and update the values for `SERVICE_ACCOUNT_NAME` and `SERVICE_ACCOUNT_NAMESPACE` with the Kubernetes service account name and its namespace.
```bash
+export SERVICE_ACCOUNT_NAME="workload-identity-sa"
+export SERVICE_ACCOUNT_NAMESPACE="my-namespace"
+
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
- azure.workload.identity/client-id: ${USER_ASSIGNED_CLIENT_ID}
+ azure.workload.identity/client-id: "${USER_ASSIGNED_CLIENT_ID}"
  labels:
    azure.workload.identity/use: "true"
- name: serviceAccountName
- namespace: serviceAccountNamspace
+ name: "${SERVICE_ACCOUNT_NAME}"
+ namespace: "${SERVICE_ACCOUNT_NAMESPACE}"
EOF
```
serviceaccount/workload-identity-sa created
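As a quick check, you can confirm the service account exists, using the hypothetical names from the sketch above:

```bash
# Verify the annotated service account was created in its namespace.
kubectl get serviceaccount "${SERVICE_ACCOUNT_NAME}" --namespace "${SERVICE_ACCOUNT_NAMESPACE}"
```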
## Establish federated identity credential
-Use the [az identity federated-credential create][az-identity-federated-credential-create] command to create the federated identity credential between the managed identity, the service account issuer, and the subject. Replace the values `resourceGroupName`, `userAssignedIdentityName`, `federatedIdentityName`, `serviceAccountNamespace`, and `serviceAccountName`.
+Use the [az identity federated-credential create][az-identity-federated-credential-create] command to create the federated identity credential between the managed identity, the service account issuer, and the subject.
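The command below references `${AKS_OIDC_ISSUER}`. If you haven't captured the issuer URL yet, a minimal sketch, assuming the cluster and resource group names used earlier, is:

```azurecli
# Capture the cluster's OIDC issuer URL for the federated identity credential.
export AKS_OIDC_ISSUER="$(az aks show --name myAKSCluster --resource-group "${RG_NAME}" --query "oidcIssuerProfile.issuerUrl" --output tsv)"
```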
```azurecli
-az identity federated-credential create --name federatedIdentityName --identity-name userAssignedIdentityName --resource-group resourceGroupName --issuer ${AKS_OIDC_ISSUER} --subject system:serviceaccount:serviceAccountNamespace:serviceAccountName
+az identity federated-credential create --name myfederatedIdentity --identity-name "${USER_ASSIGNED_IDENTITY_NAME}" --resource-group "${RG_NAME}" --issuer "${AKS_OIDC_ISSUER}" --subject system:serviceaccount:"${SERVICE_ACCOUNT_NAMESPACE}":"${SERVICE_ACCOUNT_NAME}"
```

> [!NOTE]
api-management Api Management Howto Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-cache.md
APIs and operations in API Management can be configured with response caching. Response caching can significantly reduce latency for API callers and backend load for API providers.

> [!IMPORTANT]
-> Built-in cache is volatile and is shared by all units in the same region in the same API Management service.
-
+> Built-in cache is volatile and is shared by all units in the same region in the same API Management service. Regardless of the cache type being used (internal or external), if a cache-related operation fails to connect to the cache, because of the volatility of the cache or any other reason, the API call that uses the cache-related operation doesn't raise an error, and the cache operation completes successfully. In the case of a read operation, a null value is returned to the calling policy expression. Your policy code should be designed to ensure that there's a "fallback" mechanism to retrieve data not found in the cache.
For more detailed information about caching, see [API Management caching policies](api-management-caching-policies.md) and [Custom caching in Azure API Management](api-management-sample-cache-by-key.md).

![cache policies](media/api-management-howto-cache/cache-policies.png)
app-service Configure Language Dotnet Framework https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-dotnet-framework.md
Last updated 06/02/2020
# Configure an ASP.NET app for Azure App Service

> [!NOTE]
-> For ASP.NET Core, see [Configure an ASP.NET Core app for Azure App Service](configure-language-dotnetcore.md)
+> For ASP.NET Core, see [Configure an ASP.NET Core app for Azure App Service](configure-language-dotnetcore.md). If your ASP.NET app runs in a custom Windows or Linux container, see [Configure a custom container for Azure App Service](configure-custom-container.md).
ASP.NET apps must be deployed to Azure App Service as compiled binaries. The Visual Studio publishing tool builds the solution and then deploys the compiled binaries directly, whereas the App Service deployment engine deploys the code repository first and then compiles the binaries.
app-service Configure Language Dotnetcore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-dotnetcore.md
zone_pivot_groups: app-service-platform-windows-linux
# Configure an ASP.NET Core app for Azure App Service

> [!NOTE]
-> For ASP.NET in .NET Framework, see [Configure an ASP.NET app for Azure App Service](configure-language-dotnet-framework.md)
+> For ASP.NET in .NET Framework, see [Configure an ASP.NET app for Azure App Service](configure-language-dotnet-framework.md). If your ASP.NET Core app runs in a custom Windows or Linux container, see [Configure a custom container for Azure App Service](configure-custom-container.md).
ASP.NET Core apps must be deployed to Azure App Service as compiled binaries. The Visual Studio publishing tool builds the solution and then deploys the compiled binaries directly, whereas the App Service deployment engine deploys the code repository first and then compiles the binaries.
app-service Quickstart Dotnetcore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-dotnetcore.md
target cross-platform with .NET 6.0.
In this quickstart, you'll learn how to create and deploy your first ASP.NET web app to [Azure App Service](overview.md). App Service supports various versions of .NET apps, and provides a highly scalable, self-patching web hosting service. ASP.NET web apps are cross-platform and can be hosted on Linux or Windows. When you're finished, you'll have an Azure resource group consisting of an App Service hosting plan and an App Service with a deployed web application.
+Alternatively, you can deploy an ASP.NET web app as part of a [Windows or Linux container in App Service](quickstart-custom-container.md).
+
## Prerequisites

:::zone target="docs" pivot="development-environment-vs"
azure-arc Diagnose Connection Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/diagnose-connection-issues.md
Title: "Diagnose connection issues for Azure Arc-enabled Kubernetes clusters" Previously updated : 11/10/2022 Last updated : 11/22/2022 description: "Learn how to resolve common issues when connecting Kubernetes clusters to Azure Arc."
When you [create your support request](../../azure-portal/supportability/how-to-
If you are using a proxy server on at least one machine, complete the first five steps of the non-proxy flowchart (through resource provider registration) for basic troubleshooting steps. Then, if you are still encountering issues, review the next flowchart for additional troubleshooting steps. More details about each step are provided below.

### Is the machine executing commands behind a proxy server?
-If the machine is executing commands behind a proxy server, you'll need to set any necessary environment variables, [explained below](#set-environment-variables).
-
-### Set environment variables
-
-Be sure you have set all of the necessary environment variables. For more information, see [Connect using an outbound proxy server](quickstart-connect-cluster.md#connect-using-an-outbound-proxy-server).
+If the machine is executing commands behind a proxy server, you'll need to set all of the necessary environment variables. For more information, see [Connect using an outbound proxy server](quickstart-connect-cluster.md#connect-using-an-outbound-proxy-server).
For example:

```bash
-export HTTP_PROXY=ΓÇ£http://<proxyIP>:<proxyPort>ΓÇ¥
-export HTTPS_PROXY=ΓÇ£https://<proxyIP>:<proxyPort>ΓÇ¥
-export NO_PROXY=ΓÇ£<service CIDR>,Kubernetes.default.svc,.svc.cluster.local,.svcΓÇ¥
+export HTTP_PROXY="http://<proxyIP>:<proxyPort>"
+export HTTPS_PROXY="https://<proxyIP>:<proxyPort>"
+export NO_PROXY="<cluster-apiserver-ip-address>:<proxyPort>"
```

### Does the proxy server only accept trusted certificates?
azure-functions Functions Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-scale.md
Maximum instances are given on a per-function app (Consumption) or per-plan (Pre
| Plan | Scale out | Max # instances |
| --- | --- | --- |
| **[Consumption plan]** | [Event driven](event-driven-scaling.md). Scale out automatically, even during periods of high load. Azure Functions infrastructure scales CPU and memory resources by adding additional instances of the Functions host, based on the number of incoming trigger events. | **Windows:** 200<br/>**Linux:** 100<sup>1</sup> |
-| **[Premium plan]** | [Event driven](event-driven-scaling.md). Scale out automatically, even during periods of high load. Azure Functions infrastructure scales CPU and memory resources by adding additional instances of the Functions host, based on the number of events that its functions are triggered on. | **Windows:** 100<br/>**Linux:** 20-40<sup>2</sup>|
+| **[Premium plan]** | [Event driven](event-driven-scaling.md). Scale out automatically, even during periods of high load. Azure Functions infrastructure scales CPU and memory resources by adding additional instances of the Functions host, based on the number of events that its functions are triggered on. | **Windows:** 100<br/>**Linux:** 20-100<sup>2</sup>|
| **[Dedicated plan]**<sup>3</sup> | Manual/autoscale | 10-20 |
| **[ASE][Dedicated plan]**<sup>3</sup> | Manual/autoscale | 100 |
| **[Kubernetes]** | Event-driven autoscale for Kubernetes clusters using [KEDA](https://keda.sh). | Varies&nbsp;by&nbsp;cluster |

<sup>1</sup> During scale-out, there's currently a limit of 500 instances per subscription per hour for Linux apps on a Consumption plan. <br/>
-<sup>2</sup> In some regions, Linux apps on a Premium plan can scale to 40 instances. For more information, see the [Premium plan article](functions-premium-plan.md#region-max-scale-out). <br/>
+<sup>2</sup> In some regions, Linux apps on a Premium plan can scale to 100 instances. For more information, see the [Premium plan article](functions-premium-plan.md#region-max-scale-out). <br/>
<sup>3</sup> For specific limits for the various App Service plan options, see the [App Service plan limits](../azure-resource-manager/management/azure-subscription-service-limits.md#app-service-limits).

## Cold start behavior
azure-monitor Data Collection Text Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-text-log.md
A [data collection endpoint (DCE)](../essentials/data-collection-endpoint-overvi
1. In the Azure portal's search box, type in *template* and then select **Deploy a custom template**.
- :::image type="content" source="../logs/media/tutorial-workspace-transformations-api/deploy-custom-template.png" lightbox="../logs/media/tutorial-workspace-transformations-api/deploy-custom-template.png" alt-text="Screenshot that shows portal blade to deploy custom template.":::
+ :::image type="content" source="../logs/media/tutorial-workspace-transformations-api/deploy-custom-template.png" lightbox="../logs/media/tutorial-workspace-transformations-api/deploy-custom-template.png" alt-text="Screenshot that shows the Azure portal with template entered in the search box and Deploy a custom template highlighted in the search results.":::
2. Click **Build your own template in the editor**.
- :::image type="content" source="../logs/media/tutorial-workspace-transformations-api/build-custom-template.png" lightbox="../logs/media/tutorial-workspace-transformations-api/build-custom-template.png" alt-text="Screenshot that shows portal blade to build template in the editor.":::
+ :::image type="content" source="../logs/media/tutorial-workspace-transformations-api/build-custom-template.png" lightbox="../logs/media/tutorial-workspace-transformations-api/build-custom-template.png" alt-text="Screenshot that shows portal screen to build template in the editor.":::
3. Paste the Resource Manager template below into the editor and then click **Save**. You don't need to modify this template since you will provide values for its parameters.
- :::image type="content" source="../logs/media/tutorial-workspace-transformations-api/edit-template.png" lightbox="../logs/media/tutorial-workspace-transformations-api/edit-template.png" alt-text="Screenshot that shows portal blade to edit Resource Manager template.":::
+ :::image type="content" source="../logs/media/tutorial-workspace-transformations-api/edit-template.png" lightbox="../logs/media/tutorial-workspace-transformations-api/edit-template.png" alt-text="Screenshot that shows portal screen to edit Resource Manager template.":::
```json {
A [data collection endpoint (DCE)](../essentials/data-collection-endpoint-overvi
4. On the **Custom deployment** screen, specify a **Subscription** and **Resource group** to store the data collection rule, and then provide a **Name** for the data collection endpoint. The **Location** should be the same location as the workspace. The **Region** will already be populated and is used for the location of the data collection endpoint.
- :::image type="content" source="../logs/media/tutorial-workspace-transformations-api/custom-deployment-values.png" lightbox="../logs/media/tutorial-workspace-transformations-api/custom-deployment-values.png" alt-text="Screenshot that shows portal blade to edit custom deployment values for data collection endpoint.":::
+ :::image type="content" source="../logs/media/tutorial-workspace-transformations-api/custom-deployment-values.png" lightbox="../logs/media/tutorial-workspace-transformations-api/custom-deployment-values.png" alt-text="Screenshot that shows portal screen to edit custom deployment values for data collection endpoint.":::
5. Click **Review + create** and then **Create** when you review the details. 6. Once the DCE is created, select it so you can view its properties. Note the **Logs ingestion URI** since you'll need this in a later step.
- :::image type="content" source="../logs/media/tutorial-logs-ingestion-api/data-collection-endpoint-overview.png" lightbox="../logs/media/tutorial-logs-ingestion-api/data-collection-endpoint-overview.png" alt-text="Screenshot that shows portal blade with details of data collection endpoint uri.":::
+ :::image type="content" source="../logs/media/tutorial-logs-ingestion-api/data-collection-endpoint-overview.png" lightbox="../logs/media/tutorial-logs-ingestion-api/data-collection-endpoint-overview.png" alt-text="Screenshot that shows the DCE Overview pane in the portal with details of data collection endpoint uri.":::
7. Click **JSON View** to view other details for the DCE. Copy the **Resource ID** since you'll need this in a later step.
The [data collection rule (DCR)](../essentials/data-collection-rule-overview.md)
1. In the Azure portal's search box, type in *template* and then select **Deploy a custom template**.
- :::image type="content" source="../logs/media/tutorial-workspace-transformations-api/deploy-custom-template.png" lightbox="../logs/media/tutorial-workspace-transformations-api/deploy-custom-template.png" alt-text="Screenshot that shows portal blade to deploy custom template.":::
+ :::image type="content" source="../logs/media/tutorial-workspace-transformations-api/deploy-custom-template.png" lightbox="../logs/media/tutorial-workspace-transformations-api/deploy-custom-template.png" alt-text="Screenshot that shows the Azure portal with template entered in the search box and Deploy a custom template highlighted in the search results.":::
2. Click **Build your own template in the editor**.
- :::image type="content" source="../logs/media/tutorial-workspace-transformations-api/build-custom-template.png" lightbox="../logs/media/tutorial-workspace-transformations-api/build-custom-template.png" alt-text="Screenshot that shows portal blade to build template in the editor.":::
+ :::image type="content" source="../logs/media/tutorial-workspace-transformations-api/build-custom-template.png" lightbox="../logs/media/tutorial-workspace-transformations-api/build-custom-template.png" alt-text="Screenshot that shows portal screen to build template in the editor.":::
3. Paste one of the Resource Manager templates below into the editor and then change the following values:
The [data collection rule (DCR)](../essentials/data-collection-rule-overview.md)
4. Click **Save**.
- :::image type="content" source="../logs/media/tutorial-workspace-transformations-api/edit-template.png" lightbox="../logs/media/tutorial-workspace-transformations-api/edit-template.png" alt-text="Screenshot that shows portal blade to edit Resource Manager template.":::
+ :::image type="content" source="../logs/media/tutorial-workspace-transformations-api/edit-template.png" lightbox="../logs/media/tutorial-workspace-transformations-api/edit-template.png" alt-text="Screenshot that shows portal screen to edit Resource Manager template.":::
**Data collection rule for text log**
The [data collection rule (DCR)](../essentials/data-collection-rule-overview.md)
5. On the **Custom deployment** screen, specify a **Subscription** and **Resource group** to store the data collection rule and then provide values defined in the template. This includes a **Name** for the data collection rule and the **Workspace Resource ID** and **Endpoint Resource ID**. The **Location** should be the same location as the workspace. The **Region** will already be populated and is used for the location of the data collection rule.
- :::image type="content" source="media/data-collection-text-log/custom-deployment-values.png" lightbox="media/data-collection-text-log/custom-deployment-values.png" alt-text="Screenshot that shows portal blade to edit custom deployment values for data collection rule.":::
+ :::image type="content" source="media/data-collection-text-log/custom-deployment-values.png" lightbox="media/data-collection-text-log/custom-deployment-values.png" alt-text="Screenshot that shows the Custom Deployment screen in the portal to edit custom deployment values for data collection rule.":::
6. Click **Review + create** and then **Create** when you review the details. 7. When the deployment is complete, expand the **Deployment details** box and click on your data collection rule to view its details. Click **JSON View**.
- :::image type="content" source="media/data-collection-text-log/data-collection-rule-details.png" lightbox="media/data-collection-text-log/data-collection-rule-details.png" alt-text="Screenshot that shows portal blade with data collection rule details.":::
+ :::image type="content" source="media/data-collection-text-log/data-collection-rule-details.png" lightbox="media/data-collection-text-log/data-collection-rule-details.png" alt-text="Screenshot that shows the Overview pane in the portal with data collection rule details.":::
8. Change the API version to **2021-09-01-preview**.
The final step is to create a data collection association that associates the da
1. From the **Monitor** menu in the Azure portal, select **Data Collection Rules** and select the rule that you just created.
- :::image type="content" source="media/data-collection-text-log/data-collection-rules.png" lightbox="media/data-collection-text-log/data-collection-rules.png" alt-text="Screenshot that shows portal blade with data collection rules menu item.":::
+ :::image type="content" source="media/data-collection-text-log/data-collection-rules.png" lightbox="media/data-collection-text-log/data-collection-rules.png" alt-text="Screenshot that shows the Data Collection Rules pane in the portal with data collection rules menu item.":::
2. Select **Resources** and then click **Add** to view the available resources.
- :::image type="content" source="media/data-collection-text-log/data-collection-rules.png" lightbox="media/data-collection-text-log/data-collection-rules.png" alt-text="Screenshot that shows portal blade with resources for the data collection rule.":::
+ :::image type="content" source="media/data-collection-text-log/data-collection-rules.png" lightbox="media/data-collection-text-log/data-collection-rules.png" alt-text="Screenshot that shows the Data Collection Rules pane in the portal with resources for the data collection rule.":::
3. Select either individual agents to associate the data collection rule, or select a resource group to create an association for all agents in that resource group. Click **Apply**.
- :::image type="content" source="media/data-collection-text-log/select-resources.png" lightbox="media/data-collection-text-log/select-resources.png" alt-text="Screenshot that shows portal blade to add resources to the data collection rule.":::
+ :::image type="content" source="media/data-collection-text-log/select-resources.png" lightbox="media/data-collection-text-log/select-resources.png" alt-text="Screenshot that shows the Resources pane in the portal to add resources to the data collection rule.":::
## Troubleshooting - text logs

Use the following steps to troubleshoot collection of text logs.
azure-monitor Log Analytics Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/log-analytics-agent.md
This section explains how to install the Log Analytics agent on different types
### Linux virtual machine on-premises or in another cloud

- Use [Azure Arc-enabled servers](../../azure-arc/servers/overview.md) to deploy and manage the Log Analytics VM extension. Review the [deployment options](../../azure-arc/servers/concept-log-analytics-extension-deployment.md) to understand the different deployment methods available for the extension on machines registered with Azure Arc-enabled servers.
-- [Manually install](../vm/monitor-virtual-machine.md) the agent calling a wrapper-script hosted on GitHub.
+- [Manually install](../agents/agent-linux.md#install-the-agent) the agent by calling a wrapper script hosted on GitHub.
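A sketch of that wrapper-script invocation (the workspace ID and key are placeholders for your workspace values):

```bash
# Download and run the Log Analytics agent onboarding script.
wget https://raw.githubusercontent.com/Microsoft/OMS-Agent-for-Linux/master/installer/scripts/onboard_agent.sh && \
  sh onboard_agent.sh -w <workspace-id> -s <workspace-key>
```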
- Integrate [System Center Operations Manager](./om-agents.md) with Azure Monitor to forward collected data from Windows computers reporting to a management group.

## Data collected
azure-monitor Alerts Manage Alerts Previous Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-manage-alerts-previous-version.md
## Manage log alerts using PowerShell

[!INCLUDE [updated-for-az](../../../includes/updated-for-az.md)]
-> [!NOTE]
-> PowerShell is not currently supported in API version `2021-08-01`.
Use the PowerShell cmdlets listed below to manage rules with the [Scheduled Query Rules API](/rest/api/monitor/scheduledqueryrule-2018-04-16/scheduled-query-rules).
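For example, a minimal sketch with the Az.Monitor module (the resource group name is a placeholder):

```azurepowershell
# List the scheduled query (log alert) rules in a resource group.
Get-AzScheduledQueryRule -ResourceGroupName "myResourceGroup"
```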
New-AzResourceGroupDeployment -Name AlertDeployment -ResourceGroupName ResourceG
* Learn about [log alerts](./alerts-unified-log.md).
* Create log alerts using [Azure Resource Manager Templates](./alerts-log-create-templates.md).
* Understand [webhook actions for log alerts](./alerts-log-webhook.md).
-* Learn more about [log queries](../logs/log-query-overview.md).
+* Learn more about [log queries](../logs/log-query-overview.md).
azure-monitor Alerts Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-troubleshoot.md
If you can see a fired alert in the portal, but its configured action did not tr
1. **Are you calling Slack or Microsoft Teams?** Each of these endpoints expects a specific JSON format. Follow [these instructions](../alerts/action-groups-logic-app.md) to configure a logic app action instead.
- 1. **Did your webhook became unresponsive or returned errors?**
-
- Our timeout period for a webhook response is 10 seconds. The webhook call will be retried up to two additional times when the following HTTP status codes are returned: 408, 429, 503, 504, or when the HTTP endpoint does not respond. The first retry happens after 10 seconds. The second and final retry happens after 100 seconds. If the second retry fails, the endpoint will not be called again for 30 minutes for any action group.
+ 1. **Did your webhook become unresponsive or return errors?**
+
+ The webhook response timeout period is 10 seconds. When the HTTP endpoint does not respond or when the following HTTP status codes are returned, the webhook call is retried up to two times:
+
+ - `408`
+ - `429`
+ - `503`
+ - `504`
+
+ One retry occurs after 10 seconds and another retry occurs after 100 seconds. If the second retry fails, the endpoint is not called again for 15 minutes for any action group.
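One way to sanity-check an endpoint against that timeout is a timed test call; a sketch with a hypothetical URL and payload:

```bash
# Fail if the endpoint takes longer than 10 seconds; report status code and timing.
curl --max-time 10 --silent --output /dev/null \
  --write-out "HTTP %{http_code} in %{time_total}s\n" \
  --request POST "https://example.com/my-webhook" \
  --header "Content-Type: application/json" \
  --data '{"test": "payload"}'
```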
## Action or notification happened more than once
azure-monitor Itsmc Connections Servicenow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-connections-servicenow.md
ServiceNow supported versions include San Diego, Rome, Quebec, Paris, Orlando,
ServiceNow admins must generate a client ID and client secret for their ServiceNow instance. See the following information as required:
+- [Set up OAuth for Tokyo](https://docs.servicenow.com/bundle/tokyo-platform-administration/page/administer/security/task/t_SettingUpOAuth.html)
- [Set up OAuth for San Diego](https://docs.servicenow.com/bundle/sandiego-platform-administration/page/administer/security/task/t_SettingUpOAuth.html)
- [Set up OAuth for Rome](https://docs.servicenow.com/bundle/rome-platform-administration/page/administer/security/task/t_SettingUpOAuth.html)
- [Set up OAuth for Quebec](https://docs.servicenow.com/bundle/quebec-platform-administration/page/administer/security/task/t_SettingUpOAuth.html)
azure-monitor Proactive Performance Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/proactive-performance-diagnostics.md
Last updated 05/04/2017
[Application Insights](../app/app-insights-overview.md) automatically analyzes the performance of your web application, and can warn you about potential problems.
-This feature requires no special setup, other than configuring your app for Application Insights for your [supported language](../app/platforms.md). It's active when your app generates enough telemetry.
+This feature requires no special setup, other than configuring your app for Application Insights for your [supported language](../app/app-insights-overview.md#supported-languages). It's active when your app generates enough telemetry.
## When would I get a smart detection notification?
azure-monitor Api Custom Events Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/api-custom-events-metrics.md
The core API is uniform across all platforms, apart from a few variations like `
| Method | Used for |
| --- | --- |
-| [`TrackPageView`](#page-views) |Pages, screens, blades, or forms. |
+| [`TrackPageView`](#page-views) |Pages, screens, panes, or forms. |
| [`TrackEvent`](#trackevent) |User actions and other events. Used to track user behavior or to monitor performance. |
| [`GetMetric`](#getmetric) |Zero and multidimensional metrics, centrally configured aggregation, C# only. |
| [`TrackMetric`](#trackmetric) |Performance measurements such as queue lengths not related to specific events. |
The telemetry is available in the `customMetrics` table in [Application Insights
## Page views
-In a device or webpage app, page view telemetry is sent by default when each screen or page is loaded. But you can change the default to track page views at more or different times. For example, in an app that displays tabs or blades, you might want to track a page whenever the user opens a new blade.
+In a device or webpage app, page view telemetry is sent by default when each screen or page is loaded. But you can change the default to track page views at more or different times. For example, in an app that displays tabs or panes, you might want to track a page whenever the user opens a new pane.
User and session data is sent as properties along with page views, so the user and session charts come alive when there's page view telemetry.
azure-monitor Api Filtering Sampling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/api-filtering-sampling.md
cfg: { // Application Insights Configuration
</script>
```
-For a summary of the noncustom properties available on the telemetry item, see [Application Insights Export Data Model](./export-data-model.md).
+For a summary of the noncustom properties available on the telemetry item, see [Application Insights Export Data Model](./export-telemetry.md#application-insights-export-data-model).
You can add as many initializers as you like. They're called in the order that they're added.
public void Initialize(ITelemetry telemetry)
}
```
-#### Control the client IP address used for gelocation mappings
+#### Control the client IP address used for geolocation mappings
The following sample initializer sets the client IP which will be used for geolocation mapping, instead of the client socket IP address, during telemetry ingestion.
azure-monitor App Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-insights-overview.md
Post coding questions to [Stack Overflow](https://stackoverflow.com) using an Application Insights tag.
### User Voice

Leave product feedback for the engineering team on [UserVoice](https://feedback.azure.com/d365community/forum/3887dc70-2025-ec11-b6e6-000d3a4f09d0).

+
+## Supported languages
+
+* [C#|VB (.NET)](./asp-net.md)
+* [Java](./java-in-process-agent.md)
+* [JavaScript](./javascript.md)
+* [Node.js](./nodejs.md)
+* [Python](./opencensus-python.md)
+
+### Supported platforms and frameworks
+
+Supported platforms and frameworks are listed here.
+
+#### Azure service integration (portal enablement, Azure Resource Manager deployments)
+* [Azure Virtual Machines and Azure Virtual Machine Scale Sets](./azure-vm-vmss-apps.md)
+* [Azure App Service](./azure-web-apps.md)
+* [Azure Functions](../../azure-functions/functions-monitoring.md)
+* [Azure Cloud Services](./azure-web-apps-net-core.md), including both web and worker roles
+
+#### Auto-instrumentation (enable without code changes)
+* [ASP.NET - for web apps hosted with IIS](./status-monitor-v2-overview.md)
+* [ASP.NET Core - for web apps hosted with IIS](./status-monitor-v2-overview.md)
+* [Java](./java-in-process-agent.md)
+
+#### Manual instrumentation / SDK (some code changes required)
+* [ASP.NET](./asp-net.md)
+* [ASP.NET Core](./asp-net-core.md)
+* [Node.js](./nodejs.md)
+* [Python](./opencensus-python.md)
+* [JavaScript - web](./javascript.md)
+ * [React](./javascript-react-plugin.md)
+ * [React Native](./javascript-react-native-plugin.md)
+ * [Angular](./javascript-angular-plugin.md)
+* [Windows desktop applications, services, and worker roles](./windows-desktop.md)
+* [Universal Windows app](../app/mobile-center-quickstart.md) (App Center)
+* [Android](../app/mobile-center-quickstart.md) (App Center)
+* [iOS](../app/mobile-center-quickstart.md) (App Center)
+
+> [!NOTE]
+> OpenTelemetry-based instrumentation is available in preview for [C#, Node.js, and Python](opentelemetry-enable.md). Review the limitations noted at the beginning of each language's official documentation. If you require a full-feature experience, use the existing Application Insights SDKs.
+
+### Logging frameworks
+* [ILogger](./ilogger.md)
+* [Log4Net, NLog, or System.Diagnostics.Trace](./asp-net-trace-logs.md)
+* [Log4J, Logback, or java.util.logging](./java-in-process-agent.md#autocollected-logs)
+* [LogStash plug-in](https://github.com/Azure/azure-diagnostics-tools/tree/master/Logstash/logstash-output-applicationinsights)
+* [Azure Monitor](/archive/blogs/msoms/application-insights-connector-in-oms)
+
+### Export and data analysis
+* [Power BI](https://powerbi.microsoft.com/blog/explore-your-application-insights-data-with-power-bi/)
+* [Power BI for workspace-based resources](../logs/log-powerbi.md)
+
+### Unsupported SDKs
+Several other community-supported Application Insights SDKs exist. However, Azure Monitor only provides support when you use the supported instrumentation options listed on this page. We're constantly assessing opportunities to expand our support for other languages. Follow [Azure Updates for Application Insights](https://azure.microsoft.com/updates/?query=application%20insights) for the latest SDK news.
azure-monitor App Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-map.md
If you're having trouble getting Application Map to work as expected, try these
1. Make sure you're using an officially supported SDK. Unsupported or community SDKs might not support correlation.
- For a list of supported SDKs, see [Application Insights: Languages, platforms, and integrations](./platforms.md).
+ For a list of supported SDKs, see [Application Insights: Languages, platforms, and integrations](./app-insights-overview.md#supported-languages).
1. Upgrade all components to the latest SDK version.
azure-monitor Auto Collect Dependencies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/auto-collect-dependencies.md
A list of the latest [currently-supported modules](https://github.com/microsoft/
- Set up custom dependency tracking for [OpenCensus Python](./opencensus-python-dependency.md).
- [Write custom dependency telemetry](./api-custom-events-metrics.md#trackdependency)
- See [data model](./data-model.md) for Application Insights types and data model.
-- Check out [platforms](./platforms.md) supported by Application Insights.
+- Check out [platforms](./app-insights-overview.md#supported-languages) supported by Application Insights.
azure-monitor Correlation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/correlation.md
You can also set the cloud role name via environment variable or system property
- Write [custom telemetry](../../azure-monitor/app/api-custom-events-metrics.md).
- For advanced correlation scenarios in ASP.NET Core and ASP.NET, see [Track custom operations](custom-operations-tracking.md).
- Learn more about [setting cloud_RoleName](./app-map.md#set-or-override-cloud-role-name) for other SDKs.
-- Onboard all components of your microservice on Application Insights. Check out the [supported platforms](./platforms.md).
+- Onboard all components of your microservice on Application Insights. Check out the [supported platforms](./app-insights-overview.md#supported-languages).
- See the [data model](./data-model.md) for Application Insights types.
- Learn how to [extend and filter telemetry](./api-filtering-sampling.md).
- Review the [Application Insights config reference](configuration-with-applicationinsights-config.md).
azure-monitor Data Model Dependency Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-model-dependency-telemetry.md
Indication of successful or unsuccessful call.
- Set up dependency tracking for [Java](./java-in-process-agent.md).
- [Write custom dependency telemetry](./api-custom-events-metrics.md#trackdependency)
- See [data model](data-model.md) for Application Insights types and data model.
-- Check out [platforms](./platforms.md) supported by Application Insights.
+- Check out [platforms](./app-insights-overview.md#supported-languages) supported by Application Insights.
azure-monitor Data Model Event Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-model-event-telemetry.md
Max length: 512 characters
- See [data model](data-model.md) for Application Insights types and data model.
- [Write custom event telemetry](./api-custom-events-metrics.md#trackevent)
-- Check out [platforms](./platforms.md) supported by Application Insights.
+- Check out [platforms](./app-insights-overview.md#supported-languages) supported by Application Insights.
azure-monitor Data Model Exception Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-model-exception-telemetry.md
Trace severity level. Value can be `Verbose`, `Information`, `Warning`, `Error`,
- See [data model](data-model.md) for Application Insights types and data model.
- Learn how to [diagnose exceptions in your web apps with Application Insights](./asp-net-exceptions.md).
-- Check out [platforms](./platforms.md) supported by Application Insights.
+- Check out [platforms](./app-insights-overview.md#supported-languages) supported by Application Insights.
azure-monitor Data Model Metric Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-model-metric-telemetry.md
Metric with the custom property `CustomPerfCounter` set to `true` indicate that
- Learn how to use [Application Insights API for custom events and metrics](./api-custom-events-metrics.md#trackmetric).
- See [data model](data-model.md) for Application Insights types and data model.
-- Check out [platforms](./platforms.md) supported by Application Insights.
+- Check out [platforms](./app-insights-overview.md#supported-languages) supported by Application Insights.
azure-monitor Data Model Request Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-model-request-telemetry.md
You can read more on request result code and status code in the [blog post](http
- [Write custom request telemetry](./api-custom-events-metrics.md#trackrequest)
- See [data model](data-model.md) for Application Insights types and data model.
- Learn how to [configure ASP.NET Core](./asp-net.md) application with Application Insights.
-- Check out [platforms](./platforms.md) supported by Application Insights.
+- Check out [platforms](./app-insights-overview.md#supported-languages) supported by Application Insights.
azure-monitor Data Model Trace Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-model-trace-telemetry.md
Trace severity level. Value can be `Verbose`, `Information`, `Warning`, `Error`,
- [Explore Java trace logs in Application Insights](./java-in-process-agent.md#autocollected-logs).
- See [data model](data-model.md) for Application Insights types and data model.
- [Write custom trace telemetry](./api-custom-events-metrics.md#tracktrace)
-- Check out [platforms](./platforms.md) supported by Application Insights.
+- Check out [platforms](./app-insights-overview.md#supported-languages) supported by Application Insights.
azure-monitor Data Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-model.md
To report data model or schema problems and suggestions, use our [GitHub reposit
- [Write custom telemetry](./api-custom-events-metrics.md).
- Learn how to [extend and filter telemetry](./api-filtering-sampling.md).
- Use [sampling](./sampling.md) to minimize the amount of telemetry based on data model.
-- Check out [platforms](./platforms.md) supported by Application Insights.
+- Check out [platforms](./app-insights-overview.md#supported-languages) supported by Application Insights.
azure-monitor Data Retention Privacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-retention-privacy.md
This product includes GeoLite2 data created by [MaxMind](https://www.maxmind.com
[config]: ./configuration-with-applicationinsights-config.md
[greenbrown]: ./asp-net.md
[java]: ./java-in-process-agent.md
-[platforms]: ./platforms.md
+[platforms]: ./app-insights-overview.md#supported-languages
[pricing]: https://azure.microsoft.com/pricing/details/application-insights/
[redfield]: ./status-monitor-v2-overview.md
[start]: ./app-insights-overview.md
azure-monitor Devops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/devops.md
When an alert is raised, Application Insights can automatically create a work it
Getting started with Application Insights is easy. The main options are:

* [IIS servers](./status-monitor-v2-overview.md)
-* Instrument your project during development. You can do this for [ASP.NET](./asp-net.md) or [Java](./java-in-process-agent.md) apps, and [Node.js](./nodejs.md) and a host of [other types](./platforms.md).
+* Instrument your project during development. You can do this for [ASP.NET](./asp-net.md) or [Java](./java-in-process-agent.md) apps, and [Node.js](./nodejs.md) and a host of [other types](./app-insights-overview.md#supported-languages).
* Instrument [any web page](./javascript.md) by adding a short code snippet.
azure-monitor Export Data Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/export-data-model.md
- Title: Azure Application Insights Data Model | Microsoft Docs
-description: Describes properties exported from continuous export in JSON, and used as filters.
- Previously updated : 01/08/2019---
-# Application Insights Export Data Model
-This table lists the properties of telemetry sent from the [Application Insights](./app-insights-overview.md) SDKs to the portal.
-You'll see these properties in data output from [Continuous Export](export-telemetry.md).
-They also appear in property filters in [Metric Explorer](../essentials/metrics-charts.md) and [Diagnostic Search](./diagnostic-search.md).
-
-Points to note:
-
-* `[0]` in these tables denotes a point in the path where you have to insert an index; but it isn't always 0.
-* Time durations are in tenths of a microsecond, so 10000000 == 1 second.
-* Dates and times are UTC, and are given in the ISO format `yyyy-MM-DDThh:mm:ss.sssZ`
-
-## Example
-
-```json
-// A server report about an HTTP request
-{
- "request": [
- {
- "urlData": { // derived from 'url'
- "host": "contoso.org",
- "base": "/",
- "hashTag": ""
- },
- "responseCode": 200, // Sent to client
- "success": true, // Default == responseCode<400
- // Request id becomes the operation id of child events
- "id": "fCOhCdCnZ9I=",
- "name": "GET Home/Index",
- "count": 1, // 100% / sampling rate
- "durationMetric": {
- "value": 1046804.0, // 10000000 == 1 second
- // Currently the following fields are redundant:
- "count": 1.0,
- "min": 1046804.0,
- "max": 1046804.0,
- "stdDev": 0.0,
- "sampledValue": 1046804.0
- },
- "url": "/"
- }
- ],
- "internal": {
- "data": {
- "id": "7f156650-ef4c-11e5-8453-3f984b167d05",
- "documentVersion": "1.61"
- }
- },
- "context": {
- "device": { // client browser
- "type": "PC",
- "screenResolution": { },
- "roleInstance": "WFWEB14B.fabrikam.net"
- },
- "application": { },
- "location": { // derived from client ip
- "continent": "North America",
- "country": "United States",
- // last octagon is anonymized to 0 at portal:
- "clientip": "168.62.177.0",
- "province": "",
- "city": ""
- },
- "data": {
- "isSynthetic": true, // we identified source as a bot
- // percentage of generated data sent to portal:
- "samplingRate": 100.0,
- "eventTime": "2016-03-21T10:05:45.7334717Z" // UTC
- },
- "user": {
- "isAuthenticated": false,
- "anonId": "us-tx-sn1-azr", // bot agent id
- "anonAcquisitionDate": "0001-01-01T00:00:00Z",
- "authAcquisitionDate": "0001-01-01T00:00:00Z",
- "accountAcquisitionDate": "0001-01-01T00:00:00Z"
- },
- "operation": {
- "id": "fCOhCdCnZ9I=",
- "parentId": "fCOhCdCnZ9I=",
- "name": "GET Home/Index"
- },
- "cloud": { },
- "serverDevice": { },
- "custom": { // set by custom fields of track calls
- "dimensions": [ ],
- "metrics": [ ]
- },
- "session": {
- "id": "65504c10-44a6-489e-b9dc-94184eb00d86",
- "isFirst": true
- }
- }
-}
-```
-
-## Context
-All types of telemetry are accompanied by a context section. Not all of these fields are transmitted with every data point.
-
-| Path | Type | Notes |
-| | | |
-| context.custom.dimensions [0] |object [ ] |Key-value string pairs set by custom properties parameter. Key max length 100, values max length 1024. More than 100 unique values, the property can be searched but cannot be used for segmentation. Max 200 keys per ikey. |
-| context.custom.metrics [0] |object [ ] |Key-value pairs set by custom measurements parameter and by TrackMetrics. Key max length 100, values may be numeric. |
-| context.data.eventTime |string |UTC |
-| context.data.isSynthetic |boolean |Request appears to come from a bot or web test. |
-| context.data.samplingRate |number |Percentage of telemetry generated by the SDK that is sent to portal. Range 0.0-100.0. |
-| context.device |object |Client device |
-| context.device.browser |string |IE, Chrome, ... |
-| context.device.browserVersion |string |Chrome 48.0, ... |
-| context.device.deviceModel |string | |
-| context.device.deviceName |string | |
-| context.device.id |string | |
-| context.device.locale |string |en-GB, de-DE, ... |
-| context.device.network |string | |
-| context.device.oemName |string | |
-| context.device.os |string | |
-| context.device.osVersion |string |Host OS |
-| context.device.roleInstance |string |ID of server host |
-| context.device.roleName |string | |
-| context.device.screenResolution |string | |
-| context.device.type |string |PC, Browser, ... |
-| context.location |object |Derived from `clientip`. |
-| context.location.city |string |Derived from `clientip`, if known |
-| context.location.clientip |string |Last octagon is anonymized to 0. |
-| context.location.continent |string | |
-| context.location.country |string | |
-| context.location.province |string |State or province |
-| context.operation.id |string |Items that have the same `operation id` are shown as Related Items in the portal. Usually the `request id`. |
-| context.operation.name |string |url or request name |
-| context.operation.parentId |string |Allows nested related items. |
-| context.session.id |string |`Id` of a group of operations from the same source. A period of 30 minutes without an operation signals the end of a session. |
-| context.session.isFirst |boolean | |
-| context.user.accountAcquisitionDate |string | |
-| context.user.accountId |string | |
-| context.user.anonAcquisitionDate |string | |
-| context.user.anonId |string | |
-| context.user.authAcquisitionDate |string |[Authenticated User](./api-custom-events-metrics.md#authenticated-users) |
-| context.user.authId |string | |
-| context.user.isAuthenticated |boolean | |
-| context.user.storeRegion |string | |
-| internal.data.documentVersion |string | |
-| internal.data.id |string | `Unique id` that is assigned when an item is ingested to Application Insights |
-
-## Events
-Custom events generated by [TrackEvent()](./api-custom-events-metrics.md#trackevent).
-
-| Path | Type | Notes |
-| | | |
-| event [0] count |integer |100/([sampling](./sampling.md) rate). For example 4 =&gt; 25%. |
-| event [0] name |string |Event name. Max length 250. |
-| event [0] url |string | |
-| event [0] urlData.base |string | |
-| event [0] urlData.host |string | |
-
-## Exceptions
-Reports [exceptions](./asp-net-exceptions.md) in the server and in the browser.
-
-| Path | Type | Notes |
-| | | |
-| basicException [0] assembly |string | |
-| basicException [0] count |integer |100/([sampling](./sampling.md) rate). For example 4 =&gt; 25%. |
-| basicException [0] exceptionGroup |string | |
-| basicException [0] exceptionType |string | |
-| basicException [0] failedUserCodeMethod |string | |
-| basicException [0] failedUserCodeAssembly |string | |
-| basicException [0] handledAt |string | |
-| basicException [0] hasFullStack |boolean | |
-| basicException [0] `id` |string | |
-| basicException [0] method |string | |
-| basicException [0] message |string |Exception message. Max length 10k. |
-| basicException [0] outerExceptionMessage |string | |
-| basicException [0] outerExceptionThrownAtAssembly |string | |
-| basicException [0] outerExceptionThrownAtMethod |string | |
-| basicException [0] outerExceptionType |string | |
-| basicException [0] outerId |string | |
-| basicException [0] parsedStack [0] assembly |string | |
-| basicException [0] parsedStack [0] fileName |string | |
-| basicException [0] parsedStack [0] level |integer | |
-| basicException [0] parsedStack [0] line |integer | |
-| basicException [0] parsedStack [0] method |string | |
-| basicException [0] stack |string |Max length 10k |
-| basicException [0] typeName |string | |
-
-## Trace Messages
-Sent by [TrackTrace](./api-custom-events-metrics.md#tracktrace), and by the [logging adapters](./asp-net-trace-logs.md).
-
-| Path | Type | Notes |
-| | | |
-| message [0] loggerName |string | |
-| message [0] parameters |string | |
-| message [0] raw |string |The log message, max length 10k. |
-| message [0] severityLevel |string | |
-
-## Remote dependency
-Sent by TrackDependency. Used to report performance and usage of [calls to dependencies](./asp-net-dependencies.md) in the server, and AJAX calls in the browser.
-
-| Path | Type | Notes |
-| | | |
-| remoteDependency [0] async |boolean | |
-| remoteDependency [0] baseName |string | |
-| remoteDependency [0] commandName |string |For example "home/index" |
-| remoteDependency [0] count |integer |100/([sampling](./sampling.md) rate). For example 4 =&gt; 25%. |
-| remoteDependency [0] dependencyTypeName |string |HTTP, SQL, ... |
-| remoteDependency [0] durationMetric.value |number |Time from call to completion of response by dependency |
-| remoteDependency [0] `id` |string | |
-| remoteDependency [0] name |string |Url. Max length 250. |
-| remoteDependency [0] resultCode |string |from HTTP dependency |
-| remoteDependency [0] success |boolean | |
-| remoteDependency [0] type |string |Http, Sql,... |
-| remoteDependency [0] url |string |Max length 2000 |
-| remoteDependency [0] urlData.base |string |Max length 2000 |
-| remoteDependency [0] urlData.hashTag |string | |
-| remoteDependency [0] urlData.host |string |Max length 200 |
-
-## Requests
-Sent by [TrackRequest](./api-custom-events-metrics.md#trackrequest). The standard modules use this to reports server response time, measured at the server.
-
-| Path | Type | Notes |
-| | | |
-| request [0] count |integer |100/([sampling](./sampling.md) rate). For example: 4 =&gt; 25%. |
-| request [0] durationMetric.value |number |Time from request arriving to response. 1e7 == 1s |
-| request [0] `id` |string |`Operation id` |
-| request [0] name |string |GET/POST + url base. Max length 250 |
-| request [0] responseCode |integer |HTTP response sent to client |
-| request [0] success |boolean |Default == (responseCode &lt; 400) |
-| request [0] url |string |Not including host |
-| request [0] urlData.base |string | |
-| request [0] urlData.hashTag |string | |
-| request [0] urlData.host |string | |
-
-## Page View Performance
-Sent by the browser. Measures the time to process a page, from user initiating the request to display complete (excluding async AJAX calls).
-
-Context values show client OS and browser version.
-
-| Path | Type | Notes |
-| | | |
-| clientPerformance [0] clientProcess.value |integer |Time from end of receiving the HTML to displaying the page. |
-| clientPerformance [0] name |string | |
-| clientPerformance [0] networkConnection.value |integer |Time taken to establish a network connection. |
-| clientPerformance [0] receiveRequest.value |integer |Time from end of sending the request to receiving the HTML in reply. |
-| clientPerformance [0] sendRequest.value |integer |Time from taken to send the HTTP request. |
-| clientPerformance [0] total.value |integer |Time from starting to send the request to displaying the page. |
-| clientPerformance [0] url |string |URL of this request |
-| clientPerformance [0] urlData.base |string | |
-| clientPerformance [0] urlData.hashTag |string | |
-| clientPerformance [0] urlData.host |string | |
-| clientPerformance [0] urlData.protocol |string | |
-
-## Page Views
-Sent by trackPageView() or [stopTrackPage](./api-custom-events-metrics.md#page-views)
-
-| Path | Type | Notes |
-| | | |
-| view [0] count |integer |100/([sampling](./sampling.md) rate). For example 4 =&gt; 25%. |
-| view [0] durationMetric.value |integer |Value optionally set in trackPageView() or by startTrackPage() - stopTrackPage(). Not the same as clientPerformance values. |
-| view [0] name |string |Page title. Max length 250 |
-| view [0] url |string | |
-| view [0] urlData.base |string | |
-| view [0] urlData.hashTag |string | |
-| view [0] urlData.host |string | |
-
-## Availability
-Reports [availability web tests](./monitor-web-app-availability.md).
-
-| Path | Type | Notes |
-| | | |
-| availability [0] availabilityMetric.name |string |availability |
-| availability [0] availabilityMetric.value |number |1.0 or 0.0 |
-| availability [0] count |integer |100/([sampling](./sampling.md) rate). For example, 4 => 25%. |
-| availability [0] dataSizeMetric.name |string | |
-| availability [0] dataSizeMetric.value |integer | |
-| availability [0] durationMetric.name |string | |
-| availability [0] durationMetric.value |number |Duration of test. 1e7==1s |
-| availability [0] message |string |Failure diagnostic |
-| availability [0] result |string |Pass/Fail |
-| availability [0] runLocation |string |Geo source of http req |
-| availability [0] testName |string | |
-| availability [0] testRunId |string | |
-| availability [0] testTimestamp |string | |
-
-## Metrics
-Generated by TrackMetric().
-
-The metric value is found in `context.custom.metrics[0]`.
-
-For example:
-
-```json
-{
- "metric": [ ],
- "context": {
- ...
- "custom": {
- "dimensions": [
- { "ProcessId": "4068" }
- ],
- "metrics": [
- {
- "dispatchRate": {
- "value": 0.001295,
- "count": 1.0,
- "min": 0.001295,
- "max": 0.001295,
- "stdDev": 0.0,
- "sampledValue": 0.001295,
- "sum": 0.001295
- }
- }
- ]
- }
- }
-}
-```
-
-## About metric values
-Metric values, both in metric reports and elsewhere, are reported with a standard object structure. For example:
-
-```json
-"durationMetric": {
- "name": "contoso.org",
- "type": "Aggregation",
- "value": 468.71603053650279,
- "count": 1.0,
- "min": 468.71603053650279,
- "max": 468.71603053650279,
- "stdDev": 0.0,
- "sampledValue": 468.71603053650279
-}
-```
-
-Currently, in all values reported from the standard SDK modules, `count==1`, and only the `name` and `value` fields are useful; this might change in the future. They would differ only if you write your own TrackMetric calls that set the other parameters.
-
-The purpose of the other fields is to allow metrics to be aggregated in the SDK, to reduce traffic to the portal. For example, you could average several successive readings before sending each metric report. Then you would calculate the min, max, standard deviation and aggregate value (sum or average) and set count to the number of readings represented by the report.
-
-In the tables above, we have omitted the rarely used fields count, min, max, stdDev, and sampledValue.
-
-Instead of pre-aggregating metrics, you can use [sampling](./sampling.md) if you need to reduce the volume of telemetry.
-
-### Durations
-Except where otherwise noted, durations are represented in tenths of a microsecond, so that 10000000.0 means 1 second.
-
-## See also
-* [Application Insights](./app-insights-overview.md)
-* [Continuous Export](export-telemetry.md)
-* [Code samples](export-telemetry.md#code-samples)
-
azure-monitor Export Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/export-telemetry.md
After the first export is finished, you'll find the following structure in your
|Name | Description | |:-|:|
-| [Availability](export-data-model.md#availability) | Reports [availability web tests](./monitor-web-app-availability.md). |
-| [Event](export-data-model.md#events) | Custom events generated by [TrackEvent()](./api-custom-events-metrics.md#trackevent).
-| [Exceptions](export-data-model.md#exceptions) |Reports [exceptions](./asp-net-exceptions.md) in the server and in the browser.
-| [Messages](export-data-model.md#trace-messages) | Sent by [TrackTrace](./api-custom-events-metrics.md#tracktrace), and by the [logging adapters](./asp-net-trace-logs.md).
-| [Metrics](export-data-model.md#metrics) | Generated by metric API calls.
-| [PerformanceCounters](export-data-model.md) | Performance Counters collected by Application Insights.
-| [Requests](export-data-model.md#requests)| Sent by [TrackRequest](./api-custom-events-metrics.md#trackrequest). The standard modules use requests to report server response time, measured at the server.|
+| [Availability](#availability) | Reports [availability web tests](./monitor-web-app-availability.md). |
+| [Event](#events) | Custom events generated by [TrackEvent()](./api-custom-events-metrics.md#trackevent).
+| [Exceptions](#exceptions) |Reports [exceptions](./asp-net-exceptions.md) in the server and in the browser.
+| [Messages](#trace-messages) | Sent by [TrackTrace](./api-custom-events-metrics.md#tracktrace), and by the [logging adapters](./asp-net-trace-logs.md).
+| [Metrics](#metrics) | Generated by metric API calls.
+| [PerformanceCounters](#application-insights-export-data-model) | Performance Counters collected by Application Insights.
+| [Requests](#requests)| Sent by [TrackRequest](./api-custom-events-metrics.md#trackrequest). The standard modules use requests to report server response time, measured at the server.|
### Edit continuous export
Time durations are in ticks, where 10 000 ticks = 1 ms. For example, these value
"clientProcess": {"value": 17970000.0} ```
-For a detailed data model reference for the property types and values, see [Application Insights export data model](export-data-model.md).
+For a detailed data model reference for the property types and values, see [Application Insights export data model](#application-insights-export-data-model).
## Process the data On a small scale, you can write some code to pull apart your data and read it into a spreadsheet. For example:
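
For instance, here's a minimal TypeScript sketch, assuming you've downloaded the export container locally and that each `.blob` file holds one JSON document per line (the file path is hypothetical):

```typescript
// Flatten exported request telemetry into CSV-style rows.
import * as fs from "fs";

const lines = fs
  .readFileSync("Requests/2016-03-21/10.blob", "utf8")
  .split("\n")
  .filter((l) => l.trim().length > 0);

for (const line of lines) {
  const doc = JSON.parse(line);
  for (const req of doc.request ?? []) {
    // durationMetric.value is in tenths of a microsecond (1e7 == 1 second).
    const seconds = req.durationMetric.value / 1e7;
    console.log(`${doc.context.data.eventTime},${req.name},${req.responseCode},${seconds}`);
  }
}
```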
Yes. Select **Disable**.
* [Stream Analytics sample](../../stream-analytics/app-insights-export-stream-analytics.md) * [Export to SQL by using Stream Analytics][exportasa]
-* [Detailed data model reference for property types and values](export-data-model.md)
+* [Detailed data model reference for property types and values](#application-insights-export-data-model)
## Diagnostic settings-based export
To migrate to diagnostic settings export:
> > These steps are necessary because Application Insights accesses telemetry across Application Insights resources, including Log Analytics workspaces, to provide complete end-to-end transaction operations and accurate application maps. Because diagnostic logs use the same table names, duplicate telemetry can be displayed if the user has access to multiple resources that contain the same data.
+## Application Insights Export Data Model
+This table lists the properties of telemetry sent from the [Application Insights](./app-insights-overview.md) SDKs to the portal.
+You'll see these properties in data output from [Continuous Export](export-telemetry.md).
+They also appear in property filters in [Metric Explorer](../essentials/metrics-charts.md) and [Diagnostic Search](./diagnostic-search.md).
+
+Points to note:
+
+* `[0]` in these tables denotes a point in the path where you have to insert an index; it isn't always 0.
+* Time durations are in tenths of a microsecond, so 10000000 == 1 second (see the conversion sketch after this list).
+* Dates and times are UTC, and are given in the ISO format `yyyy-MM-DDThh:mm:ss.sssZ`.
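+
+As a quick illustration, here's a minimal TypeScript sketch (not part of the exported data; the sample values are taken from this article) that converts these raw values into familiar units:
+
+```typescript
+// 1e7 duration ticks == 1 second, so 1e4 ticks == 1 millisecond.
+const ticks = 17970000.0;        // for example, a clientProcess.value
+const durationMs = ticks / 1e4;  // 1797 ms
+
+// Timestamps are UTC ISO strings. JavaScript dates carry only milliseconds,
+// so trim the fractional seconds to three digits before parsing.
+const raw = "2016-03-21T10:05:45.7334717Z";
+const eventTime = new Date(raw.replace(/(\.\d{3})\d+Z$/, "$1Z"));
+console.log(durationMs, eventTime.toISOString());
+```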
+
+### Example
+
+```json
+// A server report about an HTTP request
+{
+ "request": [
+ {
+ "urlData": { // derived from 'url'
+ "host": "contoso.org",
+ "base": "/",
+ "hashTag": ""
+ },
+ "responseCode": 200, // Sent to client
+ "success": true, // Default == responseCode<400
+ // Request id becomes the operation id of child events
+ "id": "fCOhCdCnZ9I=",
+ "name": "GET Home/Index",
+ "count": 1, // 100% / sampling rate
+ "durationMetric": {
+ "value": 1046804.0, // 10000000 == 1 second
+ // Currently the following fields are redundant:
+ "count": 1.0,
+ "min": 1046804.0,
+ "max": 1046804.0,
+ "stdDev": 0.0,
+ "sampledValue": 1046804.0
+ },
+ "url": "/"
+ }
+ ],
+ "internal": {
+ "data": {
+ "id": "7f156650-ef4c-11e5-8453-3f984b167d05",
+ "documentVersion": "1.61"
+ }
+ },
+ "context": {
+ "device": { // client browser
+ "type": "PC",
+ "screenResolution": { },
+ "roleInstance": "WFWEB14B.fabrikam.net"
+ },
+ "application": { },
+ "location": { // derived from client ip
+ "continent": "North America",
+ "country": "United States",
+      // last octet is anonymized to 0 at the portal:
+ "clientip": "168.62.177.0",
+ "province": "",
+ "city": ""
+ },
+ "data": {
+ "isSynthetic": true, // we identified source as a bot
+ // percentage of generated data sent to portal:
+ "samplingRate": 100.0,
+ "eventTime": "2016-03-21T10:05:45.7334717Z" // UTC
+ },
+ "user": {
+ "isAuthenticated": false,
+ "anonId": "us-tx-sn1-azr", // bot agent id
+ "anonAcquisitionDate": "0001-01-01T00:00:00Z",
+ "authAcquisitionDate": "0001-01-01T00:00:00Z",
+ "accountAcquisitionDate": "0001-01-01T00:00:00Z"
+ },
+ "operation": {
+ "id": "fCOhCdCnZ9I=",
+ "parentId": "fCOhCdCnZ9I=",
+ "name": "GET Home/Index"
+ },
+ "cloud": { },
+ "serverDevice": { },
+ "custom": { // set by custom fields of track calls
+ "dimensions": [ ],
+ "metrics": [ ]
+ },
+ "session": {
+ "id": "65504c10-44a6-489e-b9dc-94184eb00d86",
+ "isFirst": true
+ }
+ }
+}
+```
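+
+Because the request `id` becomes the `operation id` of its child events, you can stitch a request together with its dependencies, traces, and exceptions after export. Here's a hypothetical TypeScript sketch (the type covers only the fields it uses):
+
+```typescript
+// Group exported documents by context.operation.id so that a request and
+// its child telemetry items line up as one operation.
+type ExportedDoc = { context: { operation?: { id?: string } } };
+
+function groupByOperation(docs: ExportedDoc[]): Map<string, ExportedDoc[]> {
+  const groups = new Map<string, ExportedDoc[]>();
+  for (const doc of docs) {
+    const id = doc.context.operation?.id ?? "(no operation)";
+    const bucket = groups.get(id) ?? [];
+    bucket.push(doc);
+    groups.set(id, bucket);
+  }
+  return groups;
+}
+```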
+
+### Context
+All types of telemetry are accompanied by a context section. Not all of these fields are transmitted with every data point.
+
+| Path | Type | Notes |
+| | | |
+| context.custom.dimensions [0] |object [ ] |Key-value string pairs set by the custom properties parameter. Key max length 100, value max length 1024. If a property has more than 100 unique values, it can be searched but can't be used for segmentation. Max 200 keys per ikey. |
+| context.custom.metrics [0] |object [ ] |Key-value pairs set by the custom measurements parameter and by TrackMetric. Key max length 100; values are numeric. |
+| context.data.eventTime |string |UTC |
+| context.data.isSynthetic |boolean |Request appears to come from a bot or web test. |
+| context.data.samplingRate |number |Percentage of telemetry generated by the SDK that is sent to portal. Range 0.0-100.0. |
+| context.device |object |Client device |
+| context.device.browser |string |IE, Chrome, ... |
+| context.device.browserVersion |string |Chrome 48.0, ... |
+| context.device.deviceModel |string | |
+| context.device.deviceName |string | |
+| context.device.id |string | |
+| context.device.locale |string |en-GB, de-DE, ... |
+| context.device.network |string | |
+| context.device.oemName |string | |
+| context.device.os |string | |
+| context.device.osVersion |string |Host OS |
+| context.device.roleInstance |string |ID of server host |
+| context.device.roleName |string | |
+| context.device.screenResolution |string | |
+| context.device.type |string |PC, Browser, ... |
+| context.location |object |Derived from `clientip`. |
+| context.location.city |string |Derived from `clientip`, if known |
+| context.location.clientip |string |Last octet is anonymized to 0. |
+| context.location.continent |string | |
+| context.location.country |string | |
+| context.location.province |string |State or province |
+| context.operation.id |string |Items that have the same `operation id` are shown as Related Items in the portal. Usually the `request id`. |
+| context.operation.name |string |URL or request name |
+| context.operation.parentId |string |Allows nested related items. |
+| context.session.id |string |`Id` of a group of operations from the same source. A period of 30 minutes without an operation signals the end of a session. |
+| context.session.isFirst |boolean | |
+| context.user.accountAcquisitionDate |string | |
+| context.user.accountId |string | |
+| context.user.anonAcquisitionDate |string | |
+| context.user.anonId |string | |
+| context.user.authAcquisitionDate |string |[Authenticated User](./api-custom-events-metrics.md#authenticated-users) |
+| context.user.authId |string | |
+| context.user.isAuthenticated |boolean | |
+| context.user.storeRegion |string | |
+| internal.data.documentVersion |string | |
+| internal.data.id |string | `Unique id` that is assigned when an item is ingested to Application Insights |
+
+### Events
+Custom events generated by [TrackEvent()](./api-custom-events-metrics.md#trackevent).
+
+| Path | Type | Notes |
+| | | |
+| event [0] count |integer |100/([sampling](./sampling.md) rate). For example, 4 => 25% (see the sketch after this table). |
+| event [0] name |string |Event name. Max length 250. |
+| event [0] url |string | |
+| event [0] urlData.base |string | |
+| event [0] urlData.host |string | |
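+
+The `count` field compensates for [sampling](./sampling.md): each stored item stands for `count` original items, so summing `count` estimates the pre-sampling total. A minimal TypeScript sketch with made-up values:
+
+```typescript
+// At a 25% sampling rate each stored event has count == 4 (100 / 25).
+const events = [{ count: 4 }, { count: 4 }, { count: 1 }];
+const originalTotal = events.reduce((sum, e) => sum + e.count, 0); // 9
+console.log(originalTotal);
+```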
+
+### Exceptions
+Reports [exceptions](./asp-net-exceptions.md) in the server and in the browser.
+
+| Path | Type | Notes |
+| | | |
+| basicException [0] assembly |string | |
+| basicException [0] count |integer |100/([sampling](./sampling.md) rate). For example, 4 => 25%. |
+| basicException [0] exceptionGroup |string | |
+| basicException [0] exceptionType |string | |
+| basicException [0] failedUserCodeMethod |string | |
+| basicException [0] failedUserCodeAssembly |string | |
+| basicException [0] handledAt |string | |
+| basicException [0] hasFullStack |boolean | |
+| basicException [0] `id` |string | |
+| basicException [0] method |string | |
+| basicException [0] message |string |Exception message. Max length 10k. |
+| basicException [0] outerExceptionMessage |string | |
+| basicException [0] outerExceptionThrownAtAssembly |string | |
+| basicException [0] outerExceptionThrownAtMethod |string | |
+| basicException [0] outerExceptionType |string | |
+| basicException [0] outerId |string | |
+| basicException [0] parsedStack [0] assembly |string | |
+| basicException [0] parsedStack [0] fileName |string | |
+| basicException [0] parsedStack [0] level |integer | |
+| basicException [0] parsedStack [0] line |integer | |
+| basicException [0] parsedStack [0] method |string | |
+| basicException [0] stack |string |Max length 10k |
+| basicException [0] typeName |string | |
+
+### Trace Messages
+Sent by [TrackTrace](./api-custom-events-metrics.md#tracktrace), and by the [logging adapters](./asp-net-trace-logs.md).
+
+| Path | Type | Notes |
+| | | |
+| message [0] loggerName |string | |
+| message [0] parameters |string | |
+| message [0] raw |string |The log message, max length 10k. |
+| message [0] severityLevel |string | |
+
+### Remote dependency
+Sent by TrackDependency. Used to report performance and usage of [calls to dependencies](./asp-net-dependencies.md) in the server, and AJAX calls in the browser.
+
+| Path | Type | Notes |
+| | | |
+| remoteDependency [0] async |boolean | |
+| remoteDependency [0] baseName |string | |
+| remoteDependency [0] commandName |string |For example "home/index" |
+| remoteDependency [0] count |integer |100/([sampling](./sampling.md) rate). For example, 4 => 25%. |
+| remoteDependency [0] dependencyTypeName |string |HTTP, SQL, ... |
+| remoteDependency [0] durationMetric.value |number |Time from call to completion of response by dependency |
+| remoteDependency [0] `id` |string | |
+| remoteDependency [0] name |string |Url. Max length 250. |
+| remoteDependency [0] resultCode |string |from HTTP dependency |
+| remoteDependency [0] success |boolean | |
+| remoteDependency [0] type |string |Http, Sql,... |
+| remoteDependency [0] url |string |Max length 2000 |
+| remoteDependency [0] urlData.base |string |Max length 2000 |
+| remoteDependency [0] urlData.hashTag |string | |
+| remoteDependency [0] urlData.host |string |Max length 200 |
+
+### Requests
+Sent by [TrackRequest](./api-custom-events-metrics.md#trackrequest). The standard modules use this to report server response time, measured at the server.
+
+| Path | Type | Notes |
+| | | |
+| request [0] count |integer |100/([sampling](./sampling.md) rate). For example, 4 => 25%. |
+| request [0] durationMetric.value |number |Time from request arriving to response. 1e7 == 1s |
+| request [0] `id` |string |`Operation id` |
+| request [0] name |string |GET/POST + url base. Max length 250 |
+| request [0] responseCode |integer |HTTP response sent to client |
+| request [0] success |boolean |Default == (responseCode < 400) |
+| request [0] url |string |Not including host |
+| request [0] urlData.base |string | |
+| request [0] urlData.hashTag |string | |
+| request [0] urlData.host |string | |
+
+### Page View Performance
+Sent by the browser. Measures the time to process a page, from user initiating the request to display complete (excluding async AJAX calls).
+
+Context values show client OS and browser version.
+
+| Path | Type | Notes |
+| | | |
+| clientPerformance [0] clientProcess.value |integer |Time from end of receiving the HTML to displaying the page. |
+| clientPerformance [0] name |string | |
+| clientPerformance [0] networkConnection.value |integer |Time taken to establish a network connection. |
+| clientPerformance [0] receiveRequest.value |integer |Time from end of sending the request to receiving the HTML in reply. |
+| clientPerformance [0] sendRequest.value |integer |Time taken to send the HTTP request. |
+| clientPerformance [0] total.value |integer |Time from starting to send the request to displaying the page. |
+| clientPerformance [0] url |string |URL of this request |
+| clientPerformance [0] urlData.base |string | |
+| clientPerformance [0] urlData.hashTag |string | |
+| clientPerformance [0] urlData.host |string | |
+| clientPerformance [0] urlData.protocol |string | |
+
+### Page Views
+Sent by trackPageView() or [stopTrackPage](./api-custom-events-metrics.md#page-views).
+
+| Path | Type | Notes |
+| | | |
+| view [0] count |integer |100/([sampling](./sampling.md) rate). For example, 4 => 25%. |
+| view [0] durationMetric.value |integer |Value optionally set in trackPageView() or by startTrackPage() - stopTrackPage(). Not the same as clientPerformance values. |
+| view [0] name |string |Page title. Max length 250 |
+| view [0] url |string | |
+| view [0] urlData.base |string | |
+| view [0] urlData.hashTag |string | |
+| view [0] urlData.host |string | |
+
+### Availability
+Reports [availability web tests](./monitor-web-app-availability.md).
+
+| Path | Type | Notes |
+| | | |
+| availability [0] availabilityMetric.name |string |availability |
+| availability [0] availabilityMetric.value |number |1.0 or 0.0 |
+| availability [0] count |integer |100/([sampling](./sampling.md) rate). For example, 4 => 25%. |
+| availability [0] dataSizeMetric.name |string | |
+| availability [0] dataSizeMetric.value |integer | |
+| availability [0] durationMetric.name |string | |
+| availability [0] durationMetric.value |number |Duration of test. 1e7==1s |
+| availability [0] message |string |Failure diagnostic |
+| availability [0] result |string |Pass/Fail |
+| availability [0] runLocation |string |Geo source of http req |
+| availability [0] testName |string | |
+| availability [0] testRunId |string | |
+| availability [0] testTimestamp |string | |
+
+### Metrics
+Generated by TrackMetric().
+
+The metric value is found in `context.custom.metrics[0]`.
+
+For example:
+
+```json
+{
+ "metric": [ ],
+ "context": {
+ ...
+ "custom": {
+ "dimensions": [
+ { "ProcessId": "4068" }
+ ],
+ "metrics": [
+ {
+ "dispatchRate": {
+ "value": 0.001295,
+ "count": 1.0,
+ "min": 0.001295,
+ "max": 0.001295,
+ "stdDev": 0.0,
+ "sampledValue": 0.001295,
+ "sum": 0.001295
+ }
+ }
+ ]
+ }
+ }
+}
+```
+
+### About metric values
+Metric values, both in metric reports and elsewhere, are reported with a standard object structure. For example:
+
+```json
+"durationMetric": {
+ "name": "contoso.org",
+ "type": "Aggregation",
+ "value": 468.71603053650279,
+ "count": 1.0,
+ "min": 468.71603053650279,
+ "max": 468.71603053650279,
+ "stdDev": 0.0,
+ "sampledValue": 468.71603053650279
+}
+```
+
+Currently, in all values reported from the standard SDK modules, `count==1`, and only the `name` and `value` fields are useful; this might change in the future. They would differ only if you write your own TrackMetric calls that set the other parameters.
+
+The purpose of the other fields is to allow metrics to be aggregated in the SDK, to reduce traffic to the portal. For example, you could average several successive readings before sending each metric report. Then you would calculate the min, max, standard deviation, and aggregate value (sum or average), and set `count` to the number of readings represented by the report, as in the sketch below.
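+
+Here's an illustrative TypeScript sketch of that pre-aggregation (assuming at least one reading; this isn't SDK code):
+
+```typescript
+// Pre-aggregate several readings into the standard metric shape above,
+// so a single report can represent many readings.
+function aggregateReadings(readings: number[]) {
+  const count = readings.length;
+  const mean = readings.reduce((a, b) => a + b, 0) / count;
+  const variance = readings.reduce((a, b) => a + (b - mean) ** 2, 0) / count;
+  return {
+    value: mean,            // the aggregate value (average here)
+    count,                  // number of readings this report represents
+    min: Math.min(...readings),
+    max: Math.max(...readings),
+    stdDev: Math.sqrt(variance),
+  };
+}
+```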
+
+In the tables above, we have omitted the rarely used fields count, min, max, stdDev, and sampledValue.
+
+Instead of pre-aggregating metrics, you can use [sampling](./sampling.md) if you need to reduce the volume of telemetry.
+
+#### Durations
+Except where otherwise noted, durations are represented in tenths of a microsecond, so that 10000000.0 means 1 second.
+
+## See also
+* [Application Insights](./app-insights-overview.md)
+* [Continuous Export](export-telemetry.md)
+* [Code samples](export-telemetry.md#code-samples)
+ <!--Link references--> [exportasa]: ../../stream-analytics/app-insights-export-sql-stream-analytics.md
azure-monitor Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript.md
Most configuration fields are named so that they can default to false. All field
| disableFlush&#8203;OnBeforeUnload | If true, flush method won't be called when `onBeforeUnload` event triggers. | boolean<br/> false | | enableSessionStorageBuffer | If true, the buffer with all unsent telemetry is stored in session storage. The buffer is restored on page load. | boolean<br />true | | cookieCfg | Defaults to cookie usage enabled. For full defaults, see [ICookieCfgConfig](#icookiemgrconfig) settings. | [ICookieCfgConfig](#icookiemgrconfig)<br>(Since 2.6.0)<br/>undefined |
-| ~~isCookieUseDisabled~~<br>disableCookiesUsage | If true, the SDK won't store or read any data from cookies. Disables the User and Session cookies and renders the usage blades and experiences useless. `isCookieUseDisable` is deprecated in favor of `disableCookiesUsage`. When both are provided, `disableCookiesUsage` takes precedence.<br>(Since v2.6.0) And if `cookieCfg.enabled` is also defined, it will take precedence over these values. Cookie usage can be re-enabled after initialization via `core.getCookieMgr().setEnabled(true)`. | Alias for [`cookieCfg.enabled`](#icookiemgrconfig)<br>false |
+| ~~isCookieUseDisabled~~<br>disableCookiesUsage | If true, the SDK won't store or read any data from cookies. Disables the User and Session cookies and renders the usage panes and experiences useless. `isCookieUseDisabled` is deprecated in favor of `disableCookiesUsage`. When both are provided, `disableCookiesUsage` takes precedence.<br>(Since v2.6.0) If `cookieCfg.enabled` is also defined, it takes precedence over these values. Cookie usage can be re-enabled after initialization via `core.getCookieMgr().setEnabled(true)`. | Alias for [`cookieCfg.enabled`](#icookiemgrconfig)<br>false |
| cookieDomain | Custom cookie domain. This option is helpful if you want to share Application Insights cookies across subdomains.<br>(Since v2.6.0) If `cookieCfg.domain` is defined, it will take precedence over this value. | Alias for [`cookieCfg.domain`](#icookiemgrconfig)<br>null | | cookiePath | Custom cookie path. This option is helpful if you want to share Application Insights cookies behind an application gateway.<br>If `cookieCfg.path` is defined, it will take precedence over this value. | Alias for [`cookieCfg.path`](#icookiemgrconfig)<br>(Since 2.6.0)<br/>null | | isRetryDisabled | If false, retry on 206 (partial success), 408 (timeout), 429 (too many requests), 500 (internal server error), 503 (service unavailable), and 0 (offline, only if detected). | boolean<br/>false |
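
As an illustration of the cookie settings described above, here's a hedged TypeScript sketch, assuming the `@microsoft/applicationinsights-web` package (v2.6.0 or later) and a placeholder connection string:

```typescript
import { ApplicationInsights } from "@microsoft/applicationinsights-web";

const appInsights = new ApplicationInsights({
  config: {
    connectionString: "<your connection string>",
    // cookieCfg takes precedence over the legacy aliases such as
    // disableCookiesUsage, cookieDomain, and cookiePath.
    cookieCfg: {
      enabled: false,         // equivalent to disableCookiesUsage: true
      domain: "contoso.com",  // share cookies across subdomains
    },
  },
});
appInsights.loadAppInsights();

// Cookie usage can be re-enabled after initialization, as noted above:
appInsights.core.getCookieMgr().setEnabled(true);
```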
azure-monitor Platforms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/platforms.md
- Title: 'Application Insights: Languages, platforms, and integrations | Microsoft Docs'
-description: Languages, platforms, and integrations that are available for Application Insights.
- Previously updated : 11/15/2022---
-# Supported languages
-
-* [C#|VB (.NET)](./asp-net.md)
-* [Java](./java-in-process-agent.md)
-* [JavaScript](./javascript.md)
-* [Node.js](./nodejs.md)
-* [Python](./opencensus-python.md)
-
-## Supported platforms and frameworks
-
-Supported platforms and frameworks are listed here.
-
-### Azure service integration (portal enablement, Azure Resource Manager deployments)
-* [Azure Virtual Machines and Azure Virtual Machine Scale Sets](./azure-vm-vmss-apps.md)
-* [Azure App Service](./azure-web-apps.md)
-* [Azure Functions](../../azure-functions/functions-monitoring.md)
-* [Azure Cloud Services](./azure-web-apps-net-core.md), including both web and worker roles
-
-### Auto-instrumentation (enable without code changes)
-* [ASP.NET - for web apps hosted with IIS](./status-monitor-v2-overview.md)
-* [ASP.NET Core - for web apps hosted with IIS](./status-monitor-v2-overview.md)
-* [Java](./java-in-process-agent.md)
-
-### Manual instrumentation / SDK (some code changes required)
-* [ASP.NET](./asp-net.md)
-* [ASP.NET Core](./asp-net-core.md)
-* [Node.js](./nodejs.md)
-* [Python](./opencensus-python.md)
-* [JavaScript - web](./javascript.md)
- * [React](./javascript-react-plugin.md)
- * [React Native](./javascript-react-native-plugin.md)
- * [Angular](./javascript-angular-plugin.md)
-* [Windows desktop applications, services, and worker roles](./windows-desktop.md)
-* [Universal Windows app](../app/mobile-center-quickstart.md) (App Center)
-* [Android](../app/mobile-center-quickstart.md) (App Center)
-* [iOS](../app/mobile-center-quickstart.md) (App Center)
-
-> [!NOTE]
-> OpenTelemetry-based instrumentation is available in preview for [C#, Node.js, and Python](opentelemetry-enable.md). Review the limitations noted at the beginning of each language's official documentation. If you require a full-feature experience, use the existing Application Insights SDKs.
-
-## Logging frameworks
-* [ILogger](./ilogger.md)
-* [Log4Net, NLog, or System.Diagnostics.Trace](./asp-net-trace-logs.md)
-* [Log4J, Logback, or java.util.logging](./java-in-process-agent.md#autocollected-logs)
-* [LogStash plug-in](https://github.com/Azure/azure-diagnostics-tools/tree/master/Logstash/logstash-output-applicationinsights)
-* [Azure Monitor](/archive/blogs/msoms/application-insights-connector-in-oms)
-
-## Export and data analysis
-* [Power BI](https://powerbi.microsoft.com/blog/explore-your-application-insights-data-with-power-bi/)
-* [Power BI for workspace-based resources](../logs/log-powerbi.md)
-
-## Unsupported SDKs
-Several other community-supported Application Insights SDKs exist. However, Azure Monitor only provides support when you use the supported instrumentation options listed on this page. We're constantly assessing opportunities to expand our support for other languages. Follow [Azure Updates for Application Insights](https://azure.microsoft.com/updates/?query=application%20insights) for the latest SDK news.
azure-monitor Usage Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage-overview.md
Which features of your web or mobile app are most popular? Do your users achieve
The best experience is obtained by installing Application Insights both in your app server code and in your webpages. The client and server components of your app send telemetry back to the Azure portal for analysis.
-1. **Server code:** Install the appropriate module for your [ASP.NET](./asp-net.md), [Azure](./app-insights-overview.md), [Java](./java-in-process-agent.md), [Node.js](./nodejs.md), or [other](./platforms.md) app.
+1. **Server code:** Install the appropriate module for your [ASP.NET](./asp-net.md), [Azure](./app-insights-overview.md), [Java](./java-in-process-agent.md), [Node.js](./nodejs.md), or [other](./app-insights-overview.md#supported-languages) app.
* If you don't want to install server code, [create an Application Insights resource](./create-new-resource.md).
azure-monitor Azure Monitor Operations Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/azure-monitor-operations-manager.md
There are certain scenarios though where you may need to continue using Operatio
- [Availability tests](app/monitor-web-app-availability.md), which allow you to monitor and alert on the availability and responsiveness of your applications, require incoming requests from the IP addresses of web test agents. If your policy won't allow such access, you may need to keep using [Web Application Availability Monitors](/system-center/scom/web-application-availability-monitoring-template) in Operations Manager. - In Operations Manager, you can set any polling interval for availability tests, with many customers checking every 60-120 seconds. Application Insights has a minimum polling interval of 5 minutes, which may be too long for some customers. - A significant amount of monitoring in Operations Manager is performed by collecting events generated by applications and by running scripts on the local agent. These aren't standard options in Application Insights, so you could require custom work to achieve your business requirements. This might include custom alert rules using event data stored in a Log Analytics workspace and scripts launched in a virtual machine guest using [hybrid runbook worker](../automation/automation-hybrid-runbook-worker.md).-- Depending on the language that your application is written in, you may be limited in the [instrumentation you can use with Application Insights](app/platforms.md).
+- Depending on the language that your application is written in, you may be limited in the [instrumentation you can use with Application Insights](app/app-insights-overview.md#supported-languages).
Following the basic strategy in the other sections of this guide, continue to use Operations Manager for your business applications, but take advantage of additional features provided by Application Insights. As you're able to replace critical functionality with Azure Monitor, you can start to retire your custom management packs.
azure-monitor Best Practices Data Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-data-collection.md
To enable monitoring for an application, you must decide whether you'll use code
- [Java](app/java-in-process-agent.md) - [Node.js](app/nodejs.md) - [Python](app/opencensus-python.md)-- [Other platforms](app/platforms.md)
+- [Other platforms](app/app-insights-overview.md#supported-languages)
### Configure availability testing
azure-monitor Ad Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/ad-assessment.md
View the summarized compliance assessments for your infrastructure and then dril
1. On the **Overview** page, click the **Active Directory Health Check** tile.
-2. On the **Health Check** page, review the summary information in one of the focus area panes and then click one to view recommendations for that focus area.
+2. On the **Health Check** page, review the summary information in one of the focus area sections and then click one to view recommendations for that focus area.
3. On any of the focus area pages, you can view the prioritized recommendations made for your environment. Click a recommendation under **Affected Objects** to view details about why the recommendation is made.
azure-monitor Capacity Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/capacity-performance.md
Click on the Capacity and Performance tile to open the Capacity and Performance
- **Host Density** The top tile shows the total number of hosts and virtual machines available to the solution. Click the top tile to view additional details in log search. Also lists all hosts and the number of virtual machines that are hosted. Click a host to drill into the VM results in a log search.
-![dashboard Hosts blade](./media/capacity-performance/dashboard-hosts.png)
+![dashboard Hosts columns](./media/capacity-performance/dashboard-hosts.png)
-![dashboard virtual machines blade](./media/capacity-performance/dashboard-vms.png)
+![dashboard virtual machines columns](./media/capacity-performance/dashboard-vms.png)
### Evaluate performance
azure-monitor Dns Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/dns-analytics.md
From the Log Analytics workspace in the Azure portal, select **Workspace summary
You can modify the list to add any domain name suffix that you want to view lookup insights for. You can also remove any domain name suffix that you don't want to view lookup insights for. -- **Talkative Client Threshold**. DNS clients that exceed the threshold for the number of lookup requests are highlighted in the **DNS Clients** blade. The default threshold is 1,000. You can edit the threshold.
+- **Talkative Client Threshold**. DNS clients that exceed the threshold for the number of lookup requests are highlighted in the **DNS Clients** pane. The default threshold is 1,000. You can edit the threshold.
![Allowlisted domain names](./media/dns-analytics/dns-config.png)
The solution dashboard shows summarized information for the various features of
![Time selection control](./media/dns-analytics/dns-time.png)
-The solution dashboard shows the following blades:
+The solution dashboard shows the following sections:
**DNS Security**. Reports the DNS clients that are trying to communicate with malicious domains. By using Microsoft threat intelligence feeds, DNS Analytics can detect client IPs that are trying to access malicious domains. In many cases, malware-infected devices "dial out" to the "command and control" center of the malicious domain by resolving the malware domain name.
-![DNS Security blade](./media/dns-analytics/dns-security-blade.png)
+![DNS Security section](./media/dns-analytics/dns-security-blade.png)
When you click a client IP in the list, Log Search opens and shows the lookup details of the respective query. In the following example, DNS Analytics detected that the communication was done with an [IRCbot](https://www.microsoft.com/en-us/wdsi/threats/malware-encyclopedia-description?Name=Backdoor:Win32/IRCbot&threatId=2621):
The information helps you to identify the:
**Domains Queried**. Provides the most frequent domain names being queried by the DNS clients in your environment. You can view the list of all the domain names queried. You can also drill down into the lookup request details of a specific domain name in Log Search.
-![Domains Queried blade](./media/dns-analytics/domains-queried-blade.png)
+![Domains Queried section](./media/dns-analytics/domains-queried-blade.png)
**DNS Clients**. Reports the clients *breaching the threshold* for number of queries in the chosen time period. You can view the list of all the DNS clients and the details of the queries made by them in Log Search.
-![DNS Clients blade](./media/dns-analytics/dns-clients-blade.png)
+![DNS Clients section](./media/dns-analytics/dns-clients-blade.png)
**Dynamic DNS Registrations**. Reports name registration failures. All registration failures for address [resource records](https://en.wikipedia.org/wiki/List_of_DNS_record_types) (Type A and AAAA) are highlighted along with the client IPs that made the registration requests. You can then use this information to find the root cause of the registration failure by following these steps:
The information helps you to identify the:
1. Check whether the zone is configured for secure dynamic update or not.
- ![Dynamic DNS Registrations blade](./media/dns-analytics/dynamic-dns-reg-blade.png)
+ ![Dynamic DNS Registrations section](./media/dns-analytics/dynamic-dns-reg-blade.png)
**Name registration requests**. The upper tile shows a trendline of successful and failed DNS dynamic update requests. The lower tile lists the top 10 clients that are sending failed DNS update requests to the DNS servers, sorted by the number of failures.
-![Name registration requests blade](./media/dns-analytics/name-reg-req-blade.png)
+![Name registration requests section](./media/dns-analytics/name-reg-req-blade.png)
**Sample DDI Analytics Queries**. Contains a list of the most common search queries that fetch raw analytics data directly.
azure-monitor Scom Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/scom-assessment.md
View the summarized compliance assessments for your infrastructure and then dril
2. In the Azure portal, click **More services** found on the lower left-hand corner. In the list of resources, type **Log Analytics**. As you begin typing, the list filters based on your input. Select **Log Analytics**. 3. In the Log Analytics subscriptions pane, select a workspace and then click the **Workspace summary** menu item. 4. On the **Overview** page, click the **System Center Operations Manager Health Check** tile.
-5. On the **System Center Operations Manager Health Check** page, review the summary information in one of the focus area blades and then click one to view recommendations for that focus area.
+5. On the **System Center Operations Manager Health Check** page, review the summary information in one of the focus area sections and then click one to view recommendations for that focus area.
6. On any of the focus area pages, you can view the prioritized recommendations made for your environment. Click a recommendation under **Affected Objects** to view details about why the recommendation is made.<br><br> ![focus area](./media/scom-assessment/log-analytics-scom-healthcheck-dashboard-02.png)<br> 7. You can take corrective actions suggested in **Suggested Actions**. When the item has been addressed, later assessments will record that recommended actions were taken and your compliance score will increase. Corrected items appear as **Passed Objects**.
azure-monitor Sql Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/sql-assessment.md
View the summarized compliance assessments for your infrastructure and then dril
2. In the Azure portal, click **More services** found on the lower left-hand corner. In the list of resources, type **Monitor**. As you begin typing, the list filters based on your input. Select **Monitor**. 3. In the **Insights** section of the menu, select **More**. 4. On the **Overview** page, click the **SQL Health Check** tile.
-5. On the **Health Check** page, review the summary information in one of the focus area blades and then click one to view recommendations for that focus area.
+5. On the **Health Check** page, review the summary information in one of the focus area sections and then click one to view recommendations for that focus area.
6. On any of the focus area pages, you can view the prioritized recommendations made for your environment. Click a recommendation under **Affected Objects** to view details about why the recommendation is made.<br><br> ![image of SQL Health Check recommendations](./media/sql-assessment/sql-healthcheck-dashboard-02.png)<br> 7. You can take corrective actions suggested in **Suggested Actions**. When the item has been addressed, later assessments will record that recommended actions were taken and your compliance score will increase. Corrected items appear as **Passed Objects**.
azure-monitor Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/vmware.md
The VMware tile appears in your Log Analytics workspace. It provides a high-leve
![Screenshot shows the VMware tile, displaying nine failures.](./media/vmware/tile.png) #### Navigate the dashboard view
-In the **VMware** dashboard view, blades are organized by:
+In the **VMware** dashboard view, sections are organized by:
* Failure Status Count * Top Host by Event Counts
In the **VMware** dashboard view, blades are organized by:
![solution2](./media/vmware/solutionview1-2.png)
-Click any blade to open Log Analytics search pane that shows detailed information specific for the blade.
+Click any section to open the Log Analytics search pane, which shows detailed information specific to that section.
From here, you can edit the log query to modify it for something specific. For details on creating log queries, see [Find data using log queries in Azure Monitor](../logs/log-query-overview.md).
You can drill further by clicking an ESXi host or an event type.
When you click an ESXi host name, you view information from that ESXi host. If you want to narrow results with the event type, add `"ProcessName_s=EVENT TYPE"` in your search query. You can select **ProcessName** in the search filter. That narrows the information for you.
-![Screenshot of the ESXi Host Per Event Count and Breakdown Per Event Type blades in the VMware Monitoring dashboard view.](./media/vmware/eventhostdrilldown.png)
+![Screenshot of the ESXi Host Per Event Count and Breakdown Per Event Type sections in the VMware Monitoring dashboard view.](./media/vmware/eventhostdrilldown.png)
#### Find high VM activities A virtual machine can be created and deleted on any ESXi host. It's helpful for an administrator to identify how many VMs an ESXi host creates. That, in turn, helps in understanding performance and capacity planning. Keeping track of VM activity events is crucial when managing your environment.
-![Screenshot of the Virtual Machine Activities blade in the VMware Monitoring dashboard, showing a graph of VM creation and deletion by the ESXi host.](./media/vmware/vmactivities1.png)
+![Screenshot of the Virtual Machine Activities section in the VMware Monitoring dashboard, showing a graph of VM creation and deletion by the ESXi host.](./media/vmware/vmactivities1.png)
If you want to see additional ESXi host VM creation data, click an ESXi host name.
azure-monitor Wire Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/wire-data.md
rpm -e dependency-agent dependency-agent-connector
## Using the Wire Data 2.0 solution
-In the **Overview** page for your Log Analytics workspace in the Azure portal, click the **Wire Data 2.0** tile to open the Wire Data dashboard. The dashboard includes the blades in the following table. Each blade lists up to 10 items matching that blade's criteria for the specified scope and time range. You can run a log search that returns all records by clicking **See all** at the bottom of the blade or by clicking the blade header.
+In the **Overview** page for your Log Analytics workspace in the Azure portal, click the **Wire Data 2.0** tile to open the Wire Data dashboard. The dashboard includes the sections in the following table. Each section lists up to 10 items matching that section's criteria for the specified scope and time range. You can run a log search that returns all records by clicking **See all** at the bottom of the section or by clicking the section header.
-| **Blade** | **Description** |
+| **Section** | **Description** |
| | | | Agents capturing network traffic | Shows the number of agents that are capturing network traffic and lists the top 10 computers that are capturing traffic. Click the number to run a log search for <code>WireData \| summarize sum(TotalBytes) by Computer \| take 500000</code>. Click a computer in the list to run a log search returning the total number of bytes captured. | | Local Subnets | Shows the number of local subnets that agents have discovered. Click the number to run a log search for <code>WireData \| summarize sum(TotalBytes) by LocalSubnet</code> that lists all subnets with the number of bytes sent over each one. Click a subnet in the list to run a log search returning the total number of bytes sent over the subnet. |
In the **Overview** page for your Log Analytics workspace in the Azure portal, c
![Wire Data dashboard](./media/wire-data/wire-data-dash.png)
-You can use the **Agents capturing network traffic** blade to determine how much network bandwidth is being consumed by computers. This blade can help you easily find the _chattiest_ computer in your environment. Such computers could be overloaded, acting abnormally, or using more network resources than normal.
+You can use the **Agents capturing network traffic** section to determine how much network bandwidth is being consumed by computers. This section can help you easily find the _chattiest_ computer in your environment. Such computers could be overloaded, acting abnormally, or using more network resources than normal.
-![Screenshot of the Agents capturing network traffic blade in the Wire Data 2.0 dashboard showing the network bandwidth consumed by each computer.](./media/wire-data/log-search-example01.png)
+![Screenshot of the Agents capturing network traffic section in the Wire Data 2.0 dashboard showing the network bandwidth consumed by each computer.](./media/wire-data/log-search-example01.png)
-Similarly, you can use the **Local Subnets** blade to determine how much network traffic is moving through your subnets. Users often define subnets around critical areas for their applications. This blade offers a view into those areas.
+Similarly, you can use the **Local Subnets** section to determine how much network traffic is moving through your subnets. Users often define subnets around critical areas for their applications. This section offers a view into those areas.
-![Screenshot of the Local Subnets blade in the Wire Data 2.0 dashboard showing the network bandwidth consumed by each LocalSubnet.](./media/wire-data/log-search-example02.png)
+![Screenshot of the Local Subnets section in the Wire Data 2.0 dashboard showing the network bandwidth consumed by each LocalSubnet.](./media/wire-data/log-search-example02.png)
-The **Application-level Protocols** blade is useful because it's helpful know what protocols are in use. For example, you might expect SSH to not be in use in your network environment. Viewing information available in the blade can quickly confirm or disprove your expectation.
+The **Application-level Protocols** section is useful because it's helpful to know what protocols are in use. For example, you might expect SSH to not be in use in your network environment. Viewing information available in the section can quickly confirm or disprove your expectation.
-![Screenshot of the Application-level Protocols blade in the Wire Data 2.0 dashboard showing the network bandwidth consumed by each protocol.](./media/wire-data/log-search-example03.png)
+![Screenshot of the Application-level Protocols section in the Wire Data 2.0 dashboard showing the network bandwidth consumed by each protocol.](./media/wire-data/log-search-example03.png)
It's also useful to know if protocol traffic is increasing or decreasing over time. For example, if the amount of data being transmitted by an application is increasing, that might be something you should be aware of.
azure-monitor App Insights Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/app-insights-connector.md
This solution does not install any management packs in connected management grou
## Use the solution
-The following sections describe how you can use the panes shown in the Application Insights dashboard to view and interact with data from your apps.
+The following sections describe how you can use the sections shown in the Application Insights dashboard to view and interact with data from your apps.
### View Application Insights Connector information
-Click the **Application Insights** tile to open the **Application Insights** dashboard to see the following panes.
+Click the **Application Insights** tile to open the **Application Insights** dashboard to see the following sections.
-![Screenshot of the Application Insights dashboard showing the panes for Applications, Data Volume, and Availability.](./media/app-insights-connector/app-insights-dash01.png)
+![Screenshot of the Application Insights dashboard showing the sections for Applications, Data Volume, and Availability.](./media/app-insights-connector/app-insights-dash01.png)
-![Screenshot of the Application Insights dashboard showing the panes for Server Requests, Failures, and Exceptions.](./media/app-insights-connector/app-insights-dash02.png)
+![Screenshot of the Application Insights dashboard showing the sections for Server Requests, Failures, and Exceptions.](./media/app-insights-connector/app-insights-dash02.png)
-The dashboard includes the panes shown in the table. Each pane lists up to 10 items matching that pane's criteria for the specified scope and time range. You can run a log search that returns all records when you click **See all** at the bottom of the pane or when you click the pane header.
+The dashboard includes the sections shown in the table. Each section lists up to 10 items matching that section's criteria for the specified scope and time range. You can run a log search that returns all records when you click **See all** at the bottom of the section or when you click the section header.
| **Column** | **Description** |
The dashboard includes the panes shown in the table. Each pane lists up to 10 it
When you click any item in the dashboard, you see an Application Insights perspective shown in search. The perspective provides an extended visualization, based on the telemetry type that's selected. So, the visualization content changes for different telemetry types.
-When you click anywhere in the Applications pane, you see the default **Applications** perspective.
+When you click anywhere in the Applications section, you see the default **Applications** perspective.
![Application Insights Applications perspective](./media/app-insights-connector/applications-blade-drill-search.png) The perspective shows an overview of the application that you selected.
-The **Availability** pane shows a different perspective view where you can see web test results and related failed requests.
+The **Availability** section shows a different perspective view where you can see web test results and related failed requests.
![Application Insights Availability perspective](./media/app-insights-connector/availability-blade-drill-search.png)
-When you click anywhere in the **Server Requests** or **Failures** panes, the perspective components change to give you a visualization that related to requests.
+When you click anywhere in the **Server Requests** or **Failures** sections, the perspective components change to give you a visualization related to requests.
-![Application Insights Failures pane](./media/app-insights-connector/server-requests-failures-drill-search.png)
+![Application Insights Failures section](./media/app-insights-connector/server-requests-failures-drill-search.png)
-When you click anywhere in the **Exceptions** pane, you see a visualization that's tailored to exceptions.
+When you click anywhere in the **Exceptions** section, you see a visualization that's tailored to exceptions.
-![Application Insights Exceptions pane](./media/app-insights-connector/exceptions-blade-drill-search.png)
+![Application Insights Exceptions section](./media/app-insights-connector/exceptions-blade-drill-search.png)
Regardless of whether you click something on the **Application Insights Connector** dashboard, within the **Search** page itself, any query returning Application Insights data shows the Application Insights perspective. For example, if you are viewing Application Insights data, a **&#42;** query also shows the perspective tab like the following image:
Perspective components are updated depending on the search query. This means tha
### Pivot to an app in the Azure portal
-Application Insights Connector panes are designed to enable you to pivot to the selected Application Insights app *when you use the Azure portal*. You can use the solution as a high-level monitoring platform that helps you troubleshoot an app. When you see a potential problem in any of your connected applications, you can either drill into it in Log Analytics search or you can pivot directly to the Application Insights app.
+Application Insights Connector sections are designed to enable you to pivot to the selected Application Insights app *when you use the Azure portal*. You can use the solution as a high-level monitoring platform that helps you troubleshoot an app. When you see a potential problem in any of your connected applications, you can either drill into it in Log Analytics search or you can pivot directly to the Application Insights app.
To pivot, click the ellipsis (**…**) that appears at the end of each line, and select **Open in Application Insights**.
azure-monitor Profiler Aspnetcore Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-aspnetcore-linux.md
In this guide, you'll:
You can add Application Insights to your web app either via: -- The Enablement blade in the Azure portal,-- The Configuration blade in the Azure portal, or
+- The Application Insights pane in the Azure portal,
+- The Configuration pane in the Azure portal, or
- Manually adding to your web app settings.
-# [Enablement blade](#tab/enablement)
+# [Application Insights pane](#tab/enablement)
1. In your web app on the Azure portal, select **Application Insights** in the left side menu. 1. Click **Turn on Application Insights**.
You can add Application Insights to your web app either via:
1. Click **Apply** > **Yes** to apply and confirm.
-# [Configuration blade](#tab/config)
+# [Configuration pane](#tab/config)
1. [Create an Application Insights resource](../app/create-workspace-resource.md) in the same Azure subscription as your App Service. 1. Navigate to the Application Insights resource.
You can add Application Insights to your web app either via:
1. In your web app on the Azure portal, select **Configuration** in the left side menu. 1. Click **New application setting**.
- :::image type="content" source="./media/profiler-aspnetcore-linux/new-setting-configuration.png" alt-text="Screenshot of adding new application setting in the configuration blade.":::
+ :::image type="content" source="./media/profiler-aspnetcore-linux/new-setting-configuration.png" alt-text="Screenshot of adding new application setting in the configuration pane.":::
1. Add the following settings in the **Add/Edit application setting** pane, using your saved iKey:
azure-monitor Profiler Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-azure-functions.md
From your Functions app overview page in the Azure portal:
1. Click **Save** in the top menu, then **Continue**.
- :::image type="content" source="./media/profiler-azure-functions/save-button.png" alt-text="Screenshot outlining the save button in the top menu of the configuration blade.":::
+ :::image type="content" source="./media/profiler-azure-functions/save-button.png" alt-text="Screenshot outlining the save button in the top menu of the configuration pane.":::
:::image type="content" source="./media/profiler-azure-functions/continue-button.png" alt-text="Screenshot outlining the continue button in the dialog after saving."::: The app settings now show up in the table:
- :::image type="content" source="./media/profiler-azure-functions/app-settings-table.png" alt-text="Screenshot showing the two new app settings in the table on the configuration blade.":::
+ :::image type="content" source="./media/profiler-azure-functions/app-settings-table.png" alt-text="Screenshot showing the two new app settings in the table on the configuration pane.":::
> [!NOTE]
azure-monitor Profiler Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-containers.md
Service Profiler session finished. # A profiling session is complet
## View the Service Profiler traces 1. Wait for 2-5 minutes so the events can be aggregated to Application Insights.
-1. Open the **Performance** blade in your Application Insights resource.
+1. Open the **Performance** pane in your Application Insights resource.
1. Once the trace process is complete, you'll see the **Profiler Traces** button, as shown below:
- :::image type="content" source="./media/profiler-containerinstances/profiler-traces.png" alt-text="Screenshot of Profile traces in the performance blade.":::
+ :::image type="content" source="./media/profiler-containerinstances/profiler-traces.png" alt-text="Screenshot of Profile traces in the performance pane.":::
azure-monitor Snapshot Debugger Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/snapshot-debugger/snapshot-debugger-upgrade.md
If you enabled the Snapshot debugger using the site extension, you can upgrade u
:::image type="content" source="./media/snapshot-debugger-upgrade/app-service-resource.png" alt-text="Screenshot of individual App Service resource named DiagService01.":::
-1. After you've navigated to your resource, click on the **Extensions** blade and wait for the list of extensions to populate:
+1. After you've navigated to your resource, click on the **Extensions** pane and wait for the list of extensions to populate:
:::image type="content" source="./media/snapshot-debugger-upgrade/application-insights-site-extension-to-be-deleted.png" alt-text="Screenshot of App Service Extensions showing Application Insights extension for Azure App Service installed.":::
If you enabled the Snapshot debugger using the site extension, you can upgrade u
:::image type="content" source="./media/snapshot-debugger-upgrade/application-insights-site-extension-delete.png" alt-text="Screenshot of App Service Extensions showing Application Insights extension for Azure App Service with the Delete button highlighted.":::
-1. Go to the **Overview** blade of your resource and select **Application Insights**:
+1. Go to the **Overview** pane of your resource and select **Application Insights**:
:::image type="content" source="./media/snapshot-debugger-upgrade/application-insights-button.png" alt-text="Screenshot of three buttons. Center button with name Application Insights is selected.":::
-1. If this is the first time you've viewed the Application Insights blade for this App Service, you'll be prompted to turn on Application Insights. Select **Turn on Application Insights**.
+1. If this is the first time you've viewed the Application Insights pane for this App Service, you'll be prompted to turn on Application Insights. Select **Turn on Application Insights**.
- :::image type="content" source="./media/snapshot-debugger-upgrade/turn-on-application-insights.png" alt-text="Screenshot of the first-time experience for the Application Insights blade with the Turn on Application Insights button highlighted.":::
+ :::image type="content" source="./media/snapshot-debugger-upgrade/turn-on-application-insights.png" alt-text="Screenshot of the first-time experience for the Application Insights pane with the Turn on Application Insights button highlighted.":::
-1. In the Application Insights settings blade, switch the Snapshot Debugger setting toggles to **On** and select **Apply**.
+1. In the Application Insights settings pane, switch the Snapshot Debugger setting toggles to **On** and select **Apply**.
- If you decide to change *any* Application Insights settings, the **Apply** button on the bottom of the blade will be activated.
+ If you decide to change *any* Application Insights settings, the **Apply** button on the bottom of the pane will be activated.
:::image type="content" source="./media/snapshot-debugger-upgrade/view-application-insights-data.png" alt-text="Screenshot of Application Insights App Service Configuration page with Apply button highlighted in red.":::
azure-monitor Snapshot Debugger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/snapshot-debugger/snapshot-debugger.md
When an exception occurs, you can automatically collect a debug snapshot from yo
Simply include the [Snapshot collector NuGet package](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector) in your application and configure collection parameters in [`ApplicationInsights.config`](../app/configuration-with-applicationinsights-config.md).
-Snapshots appear on [**Exceptions**](../app/asp-net-exceptions.md) in the Application Insights blade of the Azure portal.
+Snapshots appear on [**Exceptions**](../app/asp-net-exceptions.md) in the Application Insights pane of the Azure portal.
You can view debug snapshots in the portal to see the call stack and inspect variables at each call stack frame. To get a more powerful debugging experience with source code, open snapshots with Visual Studio Enterprise. You can also [set SnapPoints to interactively take snapshots](/visualstudio/debugger/debug-live-azure-applications) without waiting for an exception.
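If you add the package from the command line rather than the Visual Studio package manager, a sketch of the step looks like the following; the project path is hypothetical.
```powershell
# Add the Snapshot Collector NuGet package to a project (path is hypothetical).
dotnet add .\MyApp\MyApp.csproj package Microsoft.ApplicationInsights.SnapshotCollector

# Collection parameters are then configured in ApplicationInsights.config,
# as described in the linked configuration article.
```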
azure-monitor Workbooks Configurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-configurations.md
There are several ways that you can create interactive reports and experiences i
- **Parameters**: When you update a [parameter](workbooks-parameters.md), any control that uses the parameter automatically refreshes and redraws to reflect the new value. This behavior is how most of the Azure portal reports support interactivity. Workbooks provide this functionality in a straightforward manner with minimal user effort. - **Grid, tile, and chart selections**: You can construct scenarios where selecting a row in a grid updates subsequent charts based on the content of the row. For example, you might have a grid that shows a list of requests and some statistics like failure counts. You can set it up so that if you select the row of a request, the detailed charts below update to show only that request. Learn how to [set up a grid row click](#set-up-a-grid-row-click).
+ - **Grid cell clicks**: You can add interactivity with a special type of grid column renderer called a [link renderer](#link-renderer-actions). A link renderer converts a grid cell into a hyperlink based on the contents of the cell. Workbooks support many kinds of link renderers including renderers that open resource overview panes, property bag viewers, and Application Insights search, usage, and transaction tracing. Learn how to [set up a grid cell click](#set-up-grid-cell-clicks).
- **Conditional visibility**: You can make controls appear or disappear based on the values of parameters. This way you can have reports that look different based on user input or telemetry state. For example, you can show consumers a summary when there are no issues. You can also show detailed information when there's something wrong. Learn how to [set up conditional visibility](#set-conditional-visibility). - **Export parameters with multi-selections**: You can export parameters from query and metrics workbook components when a row or multiple rows are selected. Learn how to [set up multi-selects in grids and charts](#set-up-multi-selects-in-grids-and-charts).
azure-monitor Workbooks Link Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-link-actions.md
When you use the link renderer, the following settings are available:
|View to open| Allows you to select one of the actions enumerated above. | |Menu item| If **Resource Overview** is selected, this menu item is in the resource's overview. You can use it to open alerts or activity logs instead of the "overview" for the resource. Menu item values are different for each Azure Resource type.| |Link label| If specified, this value appears in the grid column. If this value isn't specified, the value of the cell appears. If you want another value to appear, like a heatmap or icon, don't use the link renderer. Instead, use the appropriate renderer and select the **Make this item a link** option. |
-|Open link in Context Blade| If specified, the link is opened as a pop-up "context" view on the right side of the window instead of opening as a full view. |
+|Open link in Context pane| If specified, the link is opened as a pop-up "context" view on the right side of the window instead of opening as a full view. |
When you use the **Make this item a link** option, the following settings are available:
When you use the **Make this item a link** option, the following settings are av
|Link value comes from| When a cell is displayed as a renderer with a link, this field specifies where the "link" value to be used in the link comes from. You can select from a dropdown of the other columns in the grid. For example, the cell might be a heatmap value. But perhaps you want the link to open the **Resource Overview** for the resource ID in the row. In that case, you would set the link value to come from the **Resource ID** field. |View to open| Same as above. | |Menu item| Same as above. |
-|Open link in Context Blade| Same as above. |
+|Open link in Context pane| Same as above. |
## Azure Resource Manager deployment link settings
This section defines where the template should come from and the parameters used
|:- |:-| |Resource group id comes from| The resource ID is used to manage deployed resources. The subscription is used to manage deployed resources and costs. The resource groups are used like folders to organize and manage all your resources. If this value isn't specified, the deployment will fail. Select from **Cell**, **Column**, **Parameter**, and **Static Value** in [Link sources](#link-sources).| |ARM template URI from| The URI to the ARM template itself. The template URI needs to be accessible to the users who will deploy the template. Select from **Cell**, **Column**, **Parameter**, and **Static Value** in [Link sources](#link-sources). For more information, see [Azure quickstart templates](https://azure.microsoft.com/resources/templates/).|
-|ARM Template Parameters|Defines the template parameters used for the template URI defined earlier. These parameters are used to deploy the template on the run page. The grid contains an **Expand** toolbar button to help fill the parameters by using the names defined in the template URI and set to static empty values. This option can only be used when there are no parameters in the grid and the template URI has been set. The lower section is a preview of what the parameter output looks like. Select **Refresh** to update the preview with current changes. Parameters are typically values. References are something that could point to key vault secrets that the user has access to. <br/><br/> **Template Viewer blade limitation** doesn't render reference parameters correctly and will show up as null/value. As a result, users won't be able to correctly deploy reference parameters from the **Template Viewer** tab.|
+|ARM Template Parameters|Defines the template parameters used for the template URI defined earlier. These parameters are used to deploy the template on the run page. The grid contains an **Expand** toolbar button to help fill the parameters by using the names defined in the template URI and set to static empty values. This option can only be used when there are no parameters in the grid and the template URI has been set. The lower section is a preview of what the parameter output looks like. Select **Refresh** to update the preview with current changes. Parameters are typically values. References are something that could point to key vault secrets that the user has access to. <br/><br/> **Template Viewer pane limitation** doesn't render reference parameters correctly and will show up as null/value. As a result, users won't be able to correctly deploy reference parameters from the **Template Viewer** tab.|
![Screenshot that shows the Template Settings tab.](./media/workbooks-link-actions/template-settings.png)
azure-monitor Workbooks Renderers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-renderers.md
The following instructions show you how to use thresholds with links to assign i
1. Select the **Make this item a link** checkbox. - Under **View to open**, select **Workbook (Template)**. - Under **Link value comes from**, select **link**.
- - Select the **Open link in Context Blade** checkbox.
+ - Select the **Open link in Context pane** checkbox.
- Choose the following settings in **Workbook Link Settings**: - Under **Template Id comes from**, select **Column**. - Under **Column**, select **link**.
azure-netapp-files Create Active Directory Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/create-active-directory-connections.md
na Previously updated : 11/21/2022 Last updated : 11/22/2022 # Create and manage Active Directory connections for Azure NetApp Files
Several features of Azure NetApp Files require that you have an Active Directory
* **AD Site Name (required)** This is the AD DS site name that will be used by Azure NetApp Files for domain controller discovery.
+ The default site name for both AD DS and Azure AD DS is `Default-First-Site-Name`. If you want to rename the site, follow the [naming conventions for site names](/troubleshoot/windows-server/identity/naming-conventions-for-computer-domain-site-ou.md#site-names).
+ >[!NOTE] > See [Understand guidelines for Active Directory Domain Services site design and planning for Azure NetApp Files](understand-guidelines-active-directory-domain-service-site.md). Ensure that your AD DS site design and configuration meets the requirements for Azure NetApp Files. Otherwise, Azure NetApp Files service operations, SMB authentication, Kerberos, or LDAP operations might fail.
azure-video-indexer Animated Characters Recognition How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/animated-characters-recognition-how-to.md
Follow these steps to connect your Custom Vision account to Azure Video Indexer,
1. Select **Connect Custom Vision Account** and select **Try it**. 1. Fill in the required fields and the access token and select **Send**.
- For more information about how to get the Video Indexer access token go to the [developer portal](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Account-Access-Token), and see the [relevant documentation](video-indexer-use-apis.md#obtain-access-token-using-the-authorization-api).
+ For more information about how to get the Azure Video Indexer access token, go to the [developer portal](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Account-Access-Token), and see the [relevant documentation](video-indexer-use-apis.md#obtain-access-token-using-the-authorization-api).
1. Once the call returns a 200 OK response, your account is connected. 1. To verify your connection, browse to the [Azure Video Indexer](https://vi.microsoft.com/) website: 1. Select the **Content model customization** button in the top-right corner.
azure-video-indexer Connect Classic Account To Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/connect-classic-account-to-arm.md
Before the end of the 30 days of transition state, you can remove access from us
## Get started
-### Browse to [Azure Video Indexer portal](https://aka.ms/vi-portal-link)
+### Browse to the [Azure Video Indexer website](https://aka.ms/vi-portal-link)
1. Sign in using your Azure AD account. 1. On the top-right bar, select *User account* to open the side pane account list.
azure-video-indexer Connect To Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/connect-to-azure.md
If your storage account is behind a firewall, see [storage account that is behin
> [!NOTE] > Make sure to write down the Media Services resource and account names.
-1. Before you can play your videos in the Azure Video Indexer web app, you must start the default **Streaming Endpoint** of the new Media Services account.
+1. Before you can play your videos in the [Azure Video Indexer](https://www.videoindexer.ai/) website, you must start the default **Streaming Endpoint** of the new Media Services account.
In the new Media Services account, select **Streaming endpoints**. Then select the streaming endpoint and select **Start**.
The following Azure Media Services related considerations apply:
![Media Services streaming endpoint](./media/create-account/ams-streaming-endpoint.png)
- Streaming endpoints have a considerable startup time. Therefore, it may take several minutes from the time you connected your account to Azure until your videos can be streamed and watched in the Azure Video Indexer web app.
+ Streaming endpoints have a considerable startup time. Therefore, it may take several minutes from the time you connected your account to Azure until your videos can be streamed and watched in the [Azure Video Indexer](https://www.videoindexer.ai/) website.
* If you connect to an existing Media Services account, Azure Video Indexer doesn't change the default Streaming Endpoint configuration. If there's no running **Streaming Endpoint**, you can't watch videos from this Media Services account or in Azure Video Indexer. ## Create a classic account
azure-video-indexer Deploy With Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/deploy-with-arm-template.md
You need an Azure Media Services account. You can create one for free through [C
If you're new to Azure Video Indexer, see:
-* [Azure Video Indexer documentation](./index.yml)
-* [Azure Video Indexer developer portal](https://api-portal.videoindexer.ai/)
+* [The Azure Video Indexer documentation](./index.yml)
+* [The Azure Video Indexer API developer portal](https://api-portal.videoindexer.ai/)
After you complete this tutorial, head to other Azure Video Indexer samples described in [README.md](https://github.com/Azure-Samples/media-services-video-indexer/blob/master/README.md).
azure-video-indexer Deploy With Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/deploy-with-bicep.md
In this tutorial, you create an Azure Video Indexer account by using [Bicep](../
> [!NOTE] > This sample is *not* for connecting an existing Azure Video Indexer classic account to an ARM-based Azure Video Indexer account.
-> For full documentation on Azure Video Indexer API, visit the [Developer portal](https://aka.ms/avam-dev-portal) page.
+> For full documentation on Azure Video Indexer API, visit the [developer portal](https://aka.ms/avam-dev-portal) page.
> For the latest API version for Microsoft.VideoIndexer, see the [template reference](/azure/templates/microsoft.videoindexer/accounts?tabs=bicep). ## Prerequisites
Check [Azure Quickstart Templates](https://github.com/Azure/azure-quickstart-tem
If you're new to Azure Video Indexer, see:
-* [Azure Video Indexer Documentation](./index.yml)
-* [Azure Video Indexer Developer Portal](https://api-portal.videoindexer.ai/)
+* [The Azure Video Indexer documentation](./index.yml)
+* [The Azure Video Indexer developer portal](https://api-portal.videoindexer.ai/)
* After completing this tutorial, head to other Azure Video Indexer samples, described on [README.md](https://github.com/Azure-Samples/media-services-video-indexer/blob/master/README.md) If you're new to Bicep deployment, see:
azure-video-indexer Edit Transcript Lines Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/edit-transcript-lines-portal.md
This section shows how to examine word-level transcription information based on
## Next steps
-For updating transcript lines and text using API visit [Azure Video Indexer Developer portal](https://aka.ms/avam-dev-portal)
+To update transcript lines and text using the API, visit the [Azure Video Indexer API developer portal](https://aka.ms/avam-dev-portal)
azure-video-indexer Import Content From Trial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/import-content-from-trial.md
Review the following considerations.
To import your data, follow these steps:
- 1. Go to [Azure Video Indexer portal](https://aka.ms/vi-portal-link)
+ 1. Go to the [Azure Video Indexer website](https://aka.ms/vi-portal-link)
2. Select your trial account and go to the **Account settings** page. 3. Select **Import content to an ARM-based account**. 4. From the dropdown menu, choose the ARM-based account you wish to import the data to.
azure-video-indexer Invite Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/invite-users.md
In addition to bringing up the **Share this account with others** dialog by clic
## Next steps
-You can now use the [Azure Video Indexer website](video-indexer-view-edit.md) or [Azure Video Indexer Developer Portal](video-indexer-use-apis.md) to see the insights of the video.
+You can now use the [Azure Video Indexer website](video-indexer-view-edit.md) or [Azure Video Indexer API developer portal](video-indexer-use-apis.md) to see the insights of the video.
## See also
azure-video-indexer Logic Apps Connector Arm Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/logic-apps-connector-arm-accounts.md
The following image shows the first flow:
|Video URL|Select **Web Url** from the dynamic content of **Create SAS URI by path** action.| | Body| Can be left as default.|
- ![Screenshot of the upload and index action.](./media/logic-apps-connector-arm-accounts/upload-and-index.png)
+ > [!div class="mx-imgBorder"]
+ > :::image type="content" source="./media/logic-apps-connector-arm-accounts/upload-and-index-expression.png" alt-text="Screenshot of the upload and index action." lightbox="./media/logic-apps-connector-arm-accounts/upload-and-index-expression.png":::
Select **Save**.
azure-video-indexer Manage Multiple Tenants https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/manage-multiple-tenants.md
When using this architecture, an Azure Video Indexer account is created for each
* Harder to manage due to multiple Azure Video Indexer (and associated Media Services) accounts per tenant. > [!TIP]
-> Create an admin user for your system in [Video Indexer Developer Portal](https://api-portal.videoindexer.ai/) and use the Authorization API to provide your tenants the relevant [account access token](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Account-Access-Token).
+> Create an admin user for your system in [the Azure Video Indexer developer portal](https://api-portal.videoindexer.ai/) and use the Authorization API to provide your tenants the relevant [account access token](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Account-Access-Token).
## Single Azure Video Indexer account for all users
azure-video-indexer Network Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/network-security.md
# NSG service tags for Azure Video Indexer
-Azure Video Indexer is a service hosted on Azure. In some architecture cases the service needs to interact with other services in order to index video files (that is, a Storage Account) or when a customer orchestrates indexing jobs against our API endpoint using their own service hosted on Azure (i.e AKS, Web Apps, Logic Apps, Functions). Customers who would like to limit access to their resources on a network level can use [Network Security Groups with Service Tags](../virtual-network/service-tags-overview.md). A service tag represents a group of IP address prefixes from a given Azure service, in this case Azure Video Indexer. Microsoft manages the address prefixes grouped by the service tag and automatically updates the service tag as addresses change in our backend, minimizing the complexity of frequent updates to network security rules by the customer.
+Azure Video Indexer is a service hosted on Azure. In some cases, the service needs to interact with other services in order to index video files (for example, a Storage account) or when you orchestrate indexing jobs against the Azure Video Indexer API endpoint using your own service hosted on Azure (for example, AKS, Web Apps, Logic Apps, Functions).
+
+Use [Network Security Groups with Service Tags](../virtual-network/service-tags-overview.md) to limit access to your resources on a network level. A service tag represents a group of IP address prefixes from a given Azure service, in this case Azure Video Indexer. Microsoft manages the address prefixes grouped by the service tag and automatically updates the service tag as addresses change in our backend, minimizing the complexity of frequent updates to network security rules by the customer.
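As a sketch, a service tag can be used as the destination of an NSG rule with the Az PowerShell module; the rule name, priority, and the `AzureVideoIndexer` tag value shown here are assumptions, so confirm the exact tag name in the linked service tags overview.
```powershell
# Sketch: allow outbound HTTPS to the Azure Video Indexer service tag.
# The tag name "AzureVideoIndexer" is an assumption; verify it in the service tags list.
$nsg = Get-AzNetworkSecurityGroup -Name "my-nsg" -ResourceGroupName "my-rg"
Add-AzNetworkSecurityRuleConfig -NetworkSecurityGroup $nsg -Name "AllowVideoIndexer" `
    -Access Allow -Direction Outbound -Priority 200 -Protocol Tcp `
    -SourceAddressPrefix VirtualNetwork -SourcePortRange * `
    -DestinationAddressPrefix "AzureVideoIndexer" -DestinationPortRange 443 |
    Set-AzNetworkSecurityGroup
```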
## Get started with service tags
azure-video-indexer Odrv Download https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/odrv-download.md
You can use the [Upload Video](https://api-portal.videoindexer.ai/api-details#ap
### Configurations and parameters
-This section describes some of the optional parameters and when to set them. For the most up-to-date info about parameters, see the [Azure Video Indexer portal](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video).
+This section describes some of the optional parameters and when to set them. For the most up-to-date info about parameters, see the [Azure Video Indexer API developer portal](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video).
#### externalID
After you copy the following code into your development platform, you'll need to
To get your API key:
- 1. Go to the [Azure Video Indexer portal](https://api-portal.videoindexer.ai/).
+ 1. Go to the [Azure Video Indexer API developer portal](https://api-portal.videoindexer.ai/).
1. Sign in. 1. Go to **Products** > **Authorization** > **Authorization subscription**. 1. Copy the **Primary key** value.
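Once you have the key, a hedged sketch of calling the API from PowerShell follows: first request an account access token with the `Ocp-Apim-Subscription-Key` header, then upload from a URL. The account ID, location, and video details are placeholders, and the exact routes and query parameters should be verified in the developer portal.
```powershell
# Sketch: upload a video from a URL (all identifiers are placeholders).
$apiKey    = "<primary-key>"
$location  = "trial"
$accountId = "<account-id>"

# 1) Request an account access token using the subscription key.
$token = Invoke-RestMethod -Method Get `
    -Uri "https://api.videoindexer.ai/Auth/$location/Accounts/$accountId/AccessToken?allowEdit=true" `
    -Headers @{ "Ocp-Apim-Subscription-Key" = $apiKey }

# 2) Upload by URL; no request body is needed when videoUrl is supplied.
$videoUrl = [uri]::EscapeDataString("https://example.com/video.mp4")
Invoke-RestMethod -Method Post `
    -Uri "https://api.videoindexer.ai/$location/Accounts/$accountId/Videos?name=sample&videoUrl=$videoUrl&accessToken=$token"
```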
azure-video-indexer Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/release-notes.md
Title: Azure Video Indexer release notes | Microsoft Docs
description: To stay up-to-date with the most recent developments, this article provides you with the latest updates on Azure Video Indexer. Previously updated : 11/07/2022 Last updated : 11/22/2022
To stay up-to-date with the most recent Azure Video Indexer developments, this article provides you with information about:
-* [Important notice](#upcoming-critical-changes) about planned changes
+<!--* [Important notice](#upcoming-critical-changes) about planned changes-->
* The latest releases * Known issues * Bug fixes * Deprecated functionality
-## Upcoming critical changes
-
-> [!Important]
-> This section describes a critical upcoming change for the `Upload-Video` API.
-
-### Upload-Video API
-
-In the past, the `Upload-Video` API was tolerant to calls to upload a video from a URL where an empty multipart form body was provided in the C# code, such as:
-
-```csharp
-var content = new MultipartFormDataContent();
-var uploadRequestResult = await client.PostAsync($"{apiUrl}/{accountInfo.Location}/Accounts/{accountInfo.Id}/Videos?{queryParams}", content);
-```
-
-In the coming weeks, our service will fail requests of this type.
-
-In order to upload a video from a URL, change your code to send null in the request body:
-
-```csharp
-var uploadRequestResult = await client.PostAsync($"{apiUrl}/{accountInfo.Location}/Accounts/{accountInfo.Id}/Videos?{queryParams}", null);
-```
- ## November 2022 ### Speakers' names can now be edited from the Azure Video Indexer website
For details, see [Slate detection](slate-detection-insight.md).
### New source languages support for STT, translation, and search
-Now supporting source languages for STT (speech-to-text), translation, and search in Ukraine and Vietnamese. It means transcription, translation, and search features are also supported for these languages in Azure Video Indexer web applications, widgets and APIs.
+Now supporting source languages for STT (speech-to-text), translation, and search in Ukrainian and Vietnamese. This means transcription, translation, and search features are also supported for these languages in the [Azure Video Indexer](https://www.videoindexer.ai/) website, widgets, and APIs.
For more information, see [supported languages](language-support.md).
For more information, see [Audio effects detection](audio-effects-detection.md).
### New source languages support for STT, translation, and search on the website Azure Video Indexer introduces source languages support for STT (speech-to-text), translation, and search in Hebrew (he-IL), Portuguese (pt-PT), and Persian (fa-IR) on the [Azure Video Indexer](https://www.videoindexer.ai/) website.
-It means transcription, translation, and search features are also supported for these languages in Azure Video Indexer web applications and widgets.
+This means transcription, translation, and search features are also supported for these languages in the [Azure Video Indexer](https://www.videoindexer.ai/) website and widgets.
## December 2021
The Video Indexer service was renamed to Azure Video Indexer.
### Improved upload experience in the portal
-Azure Video Indexer has a new upload experience in the [portal](https://www.videoindexer.ai). To upload your media file, press the **Upload** button from the **Media files** tab.
+Azure Video Indexer has a new upload experience in the [website](https://www.videoindexer.ai). To upload your media file, press the **Upload** button from the **Media files** tab.
### New developer portal is available in gov-cloud
-[Azure Video Indexer Developer Portal](https://api-portal.videoindexer.ai) is now also available in Azure for US Government.
+The [Azure Video Indexer API developer portal](https://api-portal.videoindexer.ai) is now also available in Azure for US Government.
### Observed people tracing (preview)
The newly added bundle is available when indexing or re-indexing your file by ch
### New developer portal
-Azure Video Indexer has a new [Developer Portal](https://api-portal.videoindexer.ai/), try out the new Azure Video Indexer APIs and find all the relevant resources in one place: [GitHub repository](https://github.com/Azure-Samples/media-services-video-indexer), [Stack overflow](https://stackoverflow.com/questions/tagged/video-indexer), [Azure Video Indexer tech community](https://techcommunity.microsoft.com/t5/azure-media-services/bg-p/AzureMediaServices/label-name/Video%20Indexer) with relevant blog posts, [Azure Video Indexer FAQs](faq.yml), [User Voice](https://feedback.azure.com/d365community/forum/09041fae-0b25-ec11-b6e6-000d3a4f0858) to provide your feedback and suggest features, and ['CodePen' link](https://codepen.io/videoindexer) with widgets code samples.
+Azure Video Indexer has a new [developer portal](https://api-portal.videoindexer.ai/). Try out the new Azure Video Indexer APIs and find all the relevant resources in one place: [GitHub repository](https://github.com/Azure-Samples/media-services-video-indexer), [Stack overflow](https://stackoverflow.com/questions/tagged/video-indexer), [Azure Video Indexer tech community](https://techcommunity.microsoft.com/t5/azure-media-services/bg-p/AzureMediaServices/label-name/Video%20Indexer) with relevant blog posts, [Azure Video Indexer FAQs](faq.yml), [User Voice](https://feedback.azure.com/d365community/forum/09041fae-0b25-ec11-b6e6-000d3a4f0858) to provide your feedback and suggest features, and ['CodePen' link](https://codepen.io/videoindexer) with widgets code samples.
### Advanced customization capabilities for insight widget
You can now create an Azure Video Indexer paid account in the East US region.
Azure Video Indexer regional endpoints were all unified to start only with www. No action item is required.
-From now on, you reach www.videoindexer.ai whether it is for embedding widgets or logging into Azure Video Indexer web applications.
+From now on, you reach www.videoindexer.ai whether it is for embedding widgets or logging into the [Azure Video Indexer](https://www.videoindexer.ai/) website.
Also, wus.videoindexer.ai is redirected to www. More information is available in [Embed Azure Video Indexer widgets in your apps](video-indexer-embed-widgets.md).
https://github.com/Azure-Samples/media-services-video-indexer
### Swagger update
-Azure Video Indexer unified **authentications** and **operations** into a single [Azure Video Indexer OpenAPI Specification (swagger)](https://api-portal.videoindexer.ai/api-details#api=Operations&operation). Developers can find the APIs in [Azure Video Indexer Developer Portal](https://api-portal.videoindexer.ai/).
+Azure Video Indexer unified **authentications** and **operations** into a single [Azure Video Indexer OpenAPI Specification (swagger)](https://api-portal.videoindexer.ai/api-details#api=Operations&operation). Developers can find the APIs in the [Azure Video Indexer developer portal](https://api-portal.videoindexer.ai/).
## December 2019
azure-video-indexer Upload Index Videos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/upload-index-videos.md
This article shows how to upload and index videos by using the Azure Video Indexer website (see [get started with the website](video-indexer-get-started.md)) and the Upload Video API (see [get started with API](video-indexer-use-apis.md)).
-After you upload and index a video, you can use [Azure Video Indexer website](video-indexer-view-edit.md) or [Azure Video Indexer Developer Portal](video-indexer-use-apis.md) to see the insights of the video (see [Examine the Azure Video Indexer output](video-indexer-output-json-v2.md)).
+After you upload and index a video, you can use the [Azure Video Indexer website](video-indexer-view-edit.md) or the [Azure Video Indexer API developer portal](video-indexer-use-apis.md) to see the insights of the video (see [Examine the Azure Video Indexer output](video-indexer-output-json-v2.md)).
## Supported file formats
You can use the [Upload Video](https://api-portal.videoindexer.ai/api-details#ap
### Configurations and parameters
-This section describes some of the optional parameters and when to set them. For the most up-to-date info about parameters, see the [Azure Video Indexer portal](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video).
+This section describes some of the optional parameters and when to set them. For the most up-to-date info about parameters, see the [Azure Video Indexer API developer portal](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video).
#### externalID
After you copy the following code into your development platform, you'll need to
To get your API key:
- 1. Go to the [Azure Video Indexer portal](https://api-portal.videoindexer.ai/).
+ 1. Go to the [Azure Video Indexer API developer portal](https://api-portal.videoindexer.ai/).
1. Sign in. 1. Go to **Products** > **Authorization** > **Authorization subscription**. 1. Copy the **Primary key** value.
azure-video-indexer Video Indexer Embed Widgets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-embed-widgets.md
If you embed Azure Video Indexer insights with your own [Azure Media Player](htt
### Cognitive Insights widget
-You can choose the types of insights that you want. To do this, specify them as a value to the following URL parameter that's added to the embed code that you get (from the API or from the web app): `&widgets=<list of wanted widgets>`.
+You can choose the types of insights that you want. To do this, specify them as a value to the following URL parameter that's added to the embed code that you get (from the [API](https://aka.ms/avam-dev-portal) or from the [Azure Video Indexer](https://www.videoindexer.ai/) website): `&widgets=<list of wanted widgets>`.
The possible values are: `people`, `animatedCharacters`, `keywords`, `labels`, `sentiments`, `emotions`, `topics`, `keyframes`, `transcript`, `ocr`, `speakers`, `scenes`, and `namedEntities`.
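As an illustration, the parameter is appended to the embed URL as a comma-separated list; the base URL shape below is illustrative, so copy the real embed code from the website or API.
```powershell
# Sketch: add a widgets filter to an insights embed URL (base URL is illustrative).
$embedUrl = "https://www.videoindexer.ai/embed/insights/<accountId>/<videoId>/"
$widgets  = "people,keywords,sentiments"
"${embedUrl}?widgets=$widgets"
```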
azure-video-indexer Video Indexer Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-get-started.md
See the [input container/file formats](/azure/media-services/latest/encode-media
> [!div class="mx-imgBorder"] > :::image type="content" source="./media/video-indexer-get-started/uploaded.png" alt-text="Screenshot showing an uploaded video.":::
-After you upload and index a video, you can continue using [Azure Video Indexer website](video-indexer-view-edit.md) or [Azure Video Indexer Developer Portal](video-indexer-use-apis.md) to see the insights of the video (see [Examine the Azure Video Indexer output](video-indexer-output-json-v2.md)).
+After you upload and index a video, you can continue using the [Azure Video Indexer website](video-indexer-view-edit.md) or the [Azure Video Indexer API developer portal](video-indexer-use-apis.md) to see the insights of the video (see [Examine the Azure Video Indexer output](video-indexer-output-json-v2.md)).
## Start using insights
azure-video-indexer Video Indexer Output Json V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-output-json-v2.md
Azure Video Indexer makes an inference of main topics from transcripts. When pos
## Next steps
-Explore the [Azure Video Indexer Developer Portal](https://api-portal.videoindexer.ai).
+Explore the [Azure Video Indexer API developer portal](https://api-portal.videoindexer.ai).
For information about how to embed widgets in your application, see [Embed Azure Video Indexer widgets into your applications](video-indexer-embed-widgets.md).
azure-video-indexer Video Indexer Use Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-use-apis.md
Before you start, see the [Recommendations](#recommendations) section (that foll
## Subscribe to the API
-1. Sign in to [Azure Video Indexer Developer Portal](https://api-portal.videoindexer.ai/).
+1. Sign in to the [Azure Video Indexer API developer portal](https://api-portal.videoindexer.ai/).
> [!Important] > * You must use the same provider you used when you signed up for Azure Video Indexer. > * Personal Google and Microsoft (Outlook/Live) accounts can only be used for trial accounts. Accounts connected to Azure require Azure AD. > * There can be only one active account per email. If a user tries to sign in with user@gmail.com for LinkedIn and later with user@gmail.com for Google, the latter will display an error page, saying the user already exists.
- ![Sign in to Azure Video Indexer Developer Portal](./media/video-indexer-use-apis/sign-in.png)
+ ![Sign in to the Azure Video Indexer API developer portal](./media/video-indexer-use-apis/sign-in.png)
1. Subscribe. Select the [Products](https://api-portal.videoindexer.ai/products) tab. Then, select **Authorization** and subscribe.
Before you start, see the [Recommendations](#recommendations) section (that foll
After you subscribe, you can find your subscription under **[Products](https://api-portal.videoindexer.ai/products)** -> **Profile**. In the subscriptions section, you'll find the primary and secondary keys. The keys should be protected. The keys should only be used by your server code. They shouldn't be available on the client side (.js, .html, and so on).
- ![Subscription and keys in Video Indexer Developer Portal](./media/video-indexer-use-apis/subscriptions.png)
+ ![Subscription and keys in the Azure Video Indexer API developer portal](./media/video-indexer-use-apis/subscriptions.png)
An Azure Video Indexer user can use a single subscription key to connect to multiple Azure Video Indexer accounts. You can then link these Azure Video Indexer accounts to different Media Services accounts.
backup Backup Azure Arm Restore Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-arm-restore-vms.md
Azure Backup provides several ways to restore a VM.
**Restore disk** | Restores a VM disk, which can then be used to create a new VM.<br/><br/> Azure Backup provides a template to help you customize and create a VM. <br/><br> The restore job generates a template that you can download and use to specify custom VM settings, and create a VM.<br/><br/> The disks are copied to the Resource Group you specify.<br/><br/> Alternatively, you can attach the disk to an existing VM, or create a new VM using PowerShell.<br/><br/> This option is useful if you want to customize the VM, add configuration settings that weren't there at the time of backup, or add settings that must be configured using the template or PowerShell. **Replace existing** | You can restore a disk, and use it to replace a disk on the existing VM.<br/><br/> The current VM must exist. If it's been deleted, this option can't be used.<br/><br/> Azure Backup takes a snapshot of the existing VM before replacing the disk, and stores it in the staging location you specify. Existing disks connected to the VM are replaced with the selected restore point.<br/><br/> The snapshot is copied to the vault, and retained in accordance with the retention policy. <br/><br/> After the replace disk operation, the original disk is retained in the resource group. You can choose to manually delete the original disks if they aren't needed. <br/><br/>Replace existing is supported for unencrypted managed VMs, including VMs [created using custom images](https://azure.microsoft.com/resources/videos/create-a-custom-virtual-machine-image-in-azure-resource-manager-with-powershell/). It's unsupported for classic VMs, unmanaged VMs, and [generalized VMs](../virtual-machines/windows/upload-generalized-managed.md).<br/><br/> If the restore point has more or less disks than the current VM, then the number of disks in the restore point will only reflect the VM configuration.<br><br> Replace existing is also supported for VMs with linked resources, like [user-assigned managed-identity](../active-directory/managed-identities-azure-resources/overview.md) or [Key Vault](../key-vault/general/overview.md). **Cross Region (secondary region)** | Cross Region restore can be used to restore Azure VMs in the secondary region, which is an [Azure paired region](../availability-zones/cross-region-replication-azure.md).<br><br> You can restore all the Azure VMs for the selected recovery point if the backup is done in the secondary region.<br><br> During the backup, snapshots aren't replicated to the secondary region. Only the data stored in the vault is replicated. So secondary region restores are only [vault tier](about-azure-vm-restore.md#concepts) restores. The restore time for the secondary region will be almost the same as the vault tier restore time for the primary region. <br><br> This feature is available for the options below:<br><br> - [Create a VM](#create-a-vm) <br> - [Restore Disks](#restore-disks) <br><br> We don't currently support the [Replace existing disks](#replace-existing-disks) option.<br><br> Permissions<br> The restore operation on secondary region can be performed by Backup Admins and App admins.
-**Cross Subscription Restore** | Allows you to restore Azure Virtual Machines or disks to any subscription (as per the Azure RBAC capabilities) from restore points. <br><br> You can trigger Cross Subscription Restore for managed virtual machines only. <br><br> Cross Subscription Restore is supported for [Restore with Managed System Identities (MSI)](backup-azure-arm-restore-vms.md#restore-vms-with-managed-identities). <br><br> It's unsupported for [snapshots](backup-azure-vms-introduction.md#snapshot-creation) and [secondary region](backup-azure-arm-restore-vms.md#restore-in-secondary-region) restores. <br><br> It's unsupported for [Encrypted Azure VMs](backup-azure-vms-introduction.md#encryption-of-azure-vm-backups) and [Trusted Launch VMs](backup-support-matrix-iaas.md#tvm-backup).
-**Cross Zonal Restore** | Allows you to restore Azure Virtual Machines or disks pinned to any zone to different available zones (as per the Azure RBAC capabilities) from restore points. <br><br> You can trigger Cross Zonal Restore for managed virtual machines only. <br><br> Cross Zonal Restore is supported for [Restore with Managed System Identities (MSI)](#restore-vms-with-managed-identities). <br><br> Cross Zonal Restore supports restore of an Azure zone pinned/non-zone pinned VM from a vault with Zonal-redundant storage (ZRS) enabled. Learn [how to set Storage Redundancy](backup-create-rs-vault.md#set-storage-redundancy). <br><br> It's supported to restore an Azure zone pinned VM only from a [vault with Cross Region Restore (CRR)](backup-create-rs-vault.md#set-storage-redundancy) (if the secondary region supports zones) or Zone Redundant Storage (ZRS) enabled. <br><br> Cross Zonal Restore is supported from [secondary regions](#restore-in-secondary-region). <br><br> It's unsupported from [snapshots](backup-azure-vms-introduction.md#snapshot-creation) restore point. <br><br> It's unsupported for [Encrypted Azure VMs and Trusted Launch VMs](backup-azure-vms-introduction.md#encryption-of-azure-vm-backups).
+**Cross Subscription Restore (preview)** | Allows you to restore Azure Virtual Machines or disks to any subscription (as per the Azure RBAC capabilities) from restore points. <br><br> You can trigger Cross Subscription Restore for managed virtual machines only. <br><br> Cross Subscription Restore is supported for [Restore with Managed System Identities (MSI)](backup-azure-arm-restore-vms.md#restore-vms-with-managed-identities). <br><br> It's unsupported for [snapshots](backup-azure-vms-introduction.md#snapshot-creation) and [secondary region](backup-azure-arm-restore-vms.md#restore-in-secondary-region) restores. <br><br> It's unsupported for [Encrypted Azure VMs](backup-azure-vms-introduction.md#encryption-of-azure-vm-backups) and [Trusted Launch VMs](backup-support-matrix-iaas.md#tvm-backup).
+**Cross Zonal Restore (preview)** | Allows you to restore Azure Virtual Machines or disks pinned to any zone to different available zones (as per the Azure RBAC capabilities) from restore points. <br><br> You can trigger Cross Zonal Restore for managed virtual machines only. <br><br> Cross Zonal Restore is supported for [Restore with Managed System Identities (MSI)](#restore-vms-with-managed-identities). <br><br> Cross Zonal Restore supports restore of an Azure zone pinned/non-zone pinned VM from a vault with Zonal-redundant storage (ZRS) enabled. Learn [how to set Storage Redundancy](backup-create-rs-vault.md#set-storage-redundancy). <br><br> It's supported to restore an Azure zone pinned VM only from a [vault with Cross Region Restore (CRR)](backup-create-rs-vault.md#set-storage-redundancy) (if the secondary region supports zones) or Zone Redundant Storage (ZRS) enabled. <br><br> Cross Zonal Restore is supported from [secondary regions](#restore-in-secondary-region). <br><br> It's unsupported from [snapshots](backup-azure-vms-introduction.md#snapshot-creation) restore point. <br><br> It's unsupported for [Encrypted Azure VMs](backup-azure-vms-introduction.md#encryption-of-azure-vm-backups) and [Trusted Launch VMs](backup-support-matrix-iaas.md#tvm-backup).
>[!Tip] >To receive alerts/notifications when a restore operation fails, use [Azure Monitor alerts for Azure Backup](backup-azure-monitoring-built-in-monitor.md#azure-monitor-alerts-for-azure-backup). This helps you to monitor such failures and take necessary actions to remediate the issues.
backup Backup Support Matrix Iaas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-support-matrix-iaas.md
Recovery points on DPM/MABS disk | 64 for file servers, and 448 for app servers.
**Create a new VM** | Quickly creates and gets a basic VM up and running from a restore point.<br/><br/> You can specify a name for the VM, select the resource group and virtual network (VNet) in which it will be placed, and specify a storage account for the restored VM. The new VM must be created in the same region as the source VM. **Restore disk** | Restores a VM disk, which can then be used to create a new VM.<br/><br/> Azure Backup provides a template to help you customize and create a VM. <br/><br> The restore job generates a template that you can download and use to specify custom VM settings, and create a VM.<br/><br/> The disks are copied to the Resource Group you specify.<br/><br/> Alternatively, you can attach the disk to an existing VM, or create a new VM using PowerShell.<br/><br/> This option is useful if you want to customize the VM, add configuration settings that weren't there at the time of backup, or add settings that must be configured using the template or PowerShell. **Replace existing** | You can restore a disk, and use it to replace a disk on the existing VM.<br/><br/> The current VM must exist. If it's been deleted, this option can't be used.<br/><br/> Azure Backup takes a snapshot of the existing VM before replacing the disk, and stores it in the staging location you specify. Existing disks connected to the VM are replaced with the selected restore point.<br/><br/> The snapshot is copied to the vault, and retained in accordance with the retention policy. <br/><br/> After the replace disk operation, the original disk is retained in the resource group. You can choose to manually delete the original disks if they aren't needed. <br/><br/>Replace existing is supported for unencrypted managed VMs and for VMs [created using custom images](https://azure.microsoft.com/resources/videos/create-a-custom-virtual-machine-image-in-azure-resource-manager-with-powershell/). It's not supported for unmanaged disks and VMs, classic VMs, and [generalized VMs](../virtual-machines/windows/capture-image-resource.md).<br/><br/> If the restore point has more or less disks than the current VM, then the number of disks in the restore point will only reflect the VM configuration.<br><br> Replace existing is also supported for VMs with linked resources, like [user-assigned managed-identity](../active-directory/managed-identities-azure-resources/overview.md) and [Key Vault](../key-vault/general/overview.md).
-**Cross Region (secondary region)** | Cross Region restore can be used to restore Azure VMs in the secondary region, which is an [Azure paired region](../availability-zones/cross-region-replication-azure.md).<br><br> You can restore all the Azure VMs for the selected recovery point if the backup is done in the secondary region.<br><br> This feature is available for the options below:<br> <li> [Create a VM](./backup-azure-arm-restore-vms.md#create-a-vm) <br> <li> [Restore Disks](./backup-azure-arm-restore-vms.md#restore-disks) <br><br> We don't currently support the [Replace existing disks](./backup-azure-arm-restore-vms.md#replace-existing-disks) option.<br><br> Permissions<br> The restore operation on secondary region can be performed by Backup Admins and App admins.
+**Cross Region (secondary region)** | Cross Region restore can be used to restore Azure VMs in the secondary region, which is an [Azure paired region](../availability-zones/cross-region-replication-azure.md).<br><br> You can restore all the Azure VMs for the selected recovery point if the backup is done in the secondary region.<br><br> This feature is available for the following options:<br> - [Create a VM](./backup-azure-arm-restore-vms.md#create-a-vm) <br> - [Restore Disks](./backup-azure-arm-restore-vms.md#restore-disks) <br><br> We don't currently support the [Replace existing disks](./backup-azure-arm-restore-vms.md#replace-existing-disks) option.<br><br> Permissions<br> The restore operation on secondary region can be performed by Backup Admins and App admins.
+**Cross Subscription (preview)** | Cross Subscription restore can be used to restore Azure managed VMs in different subscriptions.<br><br> You can restore Azure VMs or disks to any subscription (as per the Azure RBAC capabilities) from restore points. <br><br> This feature is available for the following options:<br> - [Create a VM](./backup-azure-arm-restore-vms.md#create-a-vm) <br> - [Restore Disks](./backup-azure-arm-restore-vms.md#restore-disks) <br><br> Cross Subscription Restore is unsupported for [snapshots](backup-azure-vms-introduction.md#snapshot-creation) and [secondary region](backup-azure-arm-restore-vms.md#restore-in-secondary-region) restores. It's unsupported for [Encrypted Azure VMs](backup-azure-vms-introduction.md#encryption-of-azure-vm-backups) and [Trusted Launch VMs](backup-support-matrix-iaas.md#tvm-backup).
+**Cross Zonal Restore (preview)** | Cross Zonal restore can be used to restore Azure zone pinned VMs in available zones.<br><br> You can restore Azure VMs or disks to different zones (as per the Azure RBAC capabilities) from restore points. <br><br> This feature is available for the following options:<br> - [Create a VM](./backup-azure-arm-restore-vms.md#create-a-vm) <br> - [Restore Disks](./backup-azure-arm-restore-vms.md#restore-disks) <br><br> Cross Zonal Restore is unsupported for [snapshots](backup-azure-vms-introduction.md#snapshot-creation) restore points. It's unsupported for [Encrypted Azure VMs](backup-azure-vms-introduction.md#encryption-of-azure-vm-backups) and [Trusted Launch VMs](backup-support-matrix-iaas.md#tvm-backup).
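For orientation, a hedged sketch of a Restore Disks operation with the Az.RecoveryServices module follows; all names are placeholders, and the preview cross-subscription and cross-zonal targets have their own parameters that should be checked in the cmdlet reference.
```powershell
# Sketch: restore disks from the latest recovery point (names are placeholders).
$vault = Get-AzRecoveryServicesVault -ResourceGroupName "my-rg" -Name "my-vault"
$item  = Get-AzRecoveryServicesBackupItem -BackupManagementType AzureVM `
    -WorkloadType AzureVM -Name "myVM" -VaultId $vault.ID
$rp = Get-AzRecoveryServicesBackupRecoveryPoint -Item $item -VaultId $vault.ID |
    Select-Object -First 1

# Disks are copied to the target resource group; add -RestoreToSecondaryRegion
# for a Cross Region restore.
Restore-AzRecoveryServicesBackupItem -RecoveryPoint $rp -VaultId $vault.ID `
    -TargetResourceGroupName "restore-rg" `
    -StorageAccountName "stagingstorage" -StorageAccountResourceGroupName "my-rg"
```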
+ ## Support for file-level restore
The following table summarizes support for backup during VM management tasks, su
**Restore** | **Supported** |
-<a name="backup-azure-cross-subscription-restore">Restore across subscription</a> | [Cross Subscription Restore](backup-azure-arm-restore-vms.md#restore-options) is now supported in Azure VMs.
+<a name="backup-azure-cross-subscription-restore">Restore across subscription</a> | [Cross Subscription Restore (preview)](backup-azure-arm-restore-vms.md#restore-options) is now supported in Azure VMs.
[Restore across region](backup-azure-arm-restore-vms.md#cross-region-restore) | Supported.
-<a name="backup-azure-cross-zonal-restore">Restore across zone</a> | [Cross Zonal Restore](backup-azure-arm-restore-vms.md#restore-options) is now supported in Azure VMs.
+<a name="backup-azure-cross-zonal-restore">Restore across zone</a> | [Cross Zonal Restore (preview)](backup-azure-arm-restore-vms.md#restore-options) is now supported in Azure VMs.
Restore to an existing VM | Use replace disk option. Restore disk with storage account enabled for Azure Storage Service Encryption (SSE) | Not supported.<br/><br/> Restore to an account that doesn't have SSE enabled. Restore to mixed storage accounts |Not supported.<br/><br/> Based on the storage account type, all restored disks will be either premium or standard, and not mixed.
batch Monitor Application Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/monitor-application-insights.md
Last updated 04/13/2021
[Application Insights](../azure-monitor/app/app-insights-overview.md) provides an elegant and powerful way for developers to monitor and debug applications deployed to Azure services. Use Application Insights to monitor performance counters and exceptions as well as instrument your code with custom metrics and tracing. Integrating Application Insights with your Azure Batch application allows you to gain deep insights into behaviors and investigate issues in near-real time.
-This article shows how to add and configure the Application Insights library into your Azure Batch .NET solution and instrument your application code. It also shows ways to monitor your application via the Azure portal and build custom dashboards. For Application Insights support in other languages, see the [languages, platforms, and integrations documentation](../azure-monitor/app/platforms.md).
+This article shows how to add and configure the Application Insights library into your Azure Batch .NET solution and instrument your application code. It also shows ways to monitor your application via the Azure portal and build custom dashboards. For Application Insights support in other languages, see the [languages, platforms, and integrations documentation](../azure-monitor/app/app-insights-overview.md#supported-languages).
A sample C# solution with code to accompany this article is available on [GitHub](https://github.com/Azure/azure-batch-samples/tree/master/CSharp/ArticleProjects/ApplicationInsights). This example adds Application Insights instrumentation code to the [TopNWords](https://github.com/Azure/azure-batch-samples/tree/master/CSharp/TopNWords) example. If you're not familiar with that example, try building and running TopNWords first. Doing this will help you understand a basic Batch workflow of processing a set of input blobs in parallel on multiple compute nodes.
Due to the large-scale nature of Azure Batch applications running in production,
## Next steps - Learn more about [Application Insights](../azure-monitor/app/app-insights-overview.md).
-- For Application Insights support in other languages, see the [languages, platforms, and integrations documentation](../azure-monitor/app/platforms.md)
+- For Application Insights support in other languages, see the [languages, platforms, and integrations documentation](../azure-monitor/app/app-insights-overview.md#supported-languages).
chaos-studio Sample Template Experiment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/sample-template-experiment.md
In this sample, we create a chaos experiment with a single target resource and a
{ "type": "Microsoft.Chaos/experiments", "apiVersion": "2021-09-15-preview",
- "name": "parameters('experimentName')",
- "location": "parameters('location')",
+ "name": "[parameters('experimentName')]",
+ "location": "[parameters('location')]",
"identity": { "type": "SystemAssigned" },
In this sample, we create a chaos experiment with a single target resource and a
"targets": [ { "type": "ChaosTarget",
- "id": "parameters('chaosTargetResourceId')"
+ "id": "[parameters('chaosTargetResourceId')]"
} ] }
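The corrections above wrap the parameter references in square brackets. In an ARM template, a string is evaluated as a template expression only when it's enclosed in `[...]`; without the brackets, the literal text `parameters('experimentName')` would be deployed as the value. A minimal sketch (the property names here are hypothetical):
```json
{
  "literalValue": "parameters('experimentName')",
  "evaluatedValue": "[parameters('experimentName')]"
}
```
Only `evaluatedValue` receives the actual value of the `experimentName` parameter.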
chaos-studio Sample Template Targets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/sample-template-targets.md
In this sample, we onboard an Azure Cosmos DB instance using [targets and capabi
"name": "[concat(parameters('resourceName'), '/', 'Microsoft.Chaos/Microsoft-CosmosDB/Failover-1.0')]", "location": "[parameters('location')]", "dependsOn": [
- "[concat(resourceId('Microsoft.DocumentDB/databaseAccounts', parameters('resourceName'), parameters('resourceGroup')), '/', 'providers/Microsoft.Chaos/targets/Microsoft-CosmosDB')]"
+ "[concat(resourceId('Microsoft.DocumentDB/databaseAccounts', parameters('resourceName')), '/', 'providers/Microsoft.Chaos/targets/Microsoft-CosmosDB')]"
], "properties": {} }
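The fix above removes the `parameters('resourceGroup')` argument from `resourceId()`. Arguments that follow the resource type are treated as resource name segments, so passing the resource group name there produces an invalid resource ID; when omitted, `resourceId()` defaults to the current deployment's resource group. A sketch of the two forms, assuming the same parameters as above:
```json
{
  "invalid": "[resourceId('Microsoft.DocumentDB/databaseAccounts', parameters('resourceName'), parameters('resourceGroup'))]",
  "valid": "[resourceId('Microsoft.DocumentDB/databaseAccounts', parameters('resourceName'))]"
}
```
The same correction is applied to each `Microsoft.ContainerService/managedClusters` dependency below.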
In this sample, we onboard an Azure Kubernetes Service cluster using [targets an
"name": "[concat(parameters('resourceName'), '/', 'Microsoft.Chaos/Microsoft-AzureKubernetesServiceChaosMesh/NetworkChaos-2.1')]", "location": "[parameters('location')]", "dependsOn": [
- "[concat(resourceId('Microsoft.ContainerService/managedClusters', parameters('resourceName'), parameters('resourceGroup')), '/', 'providers/Microsoft.Chaos/targets/Microsoft-AzureKubernetesServiceChaosMesh')]"
+ "[concat(resourceId('Microsoft.ContainerService/managedClusters', parameters('resourceName')), '/', 'providers/Microsoft.Chaos/targets/Microsoft-AzureKubernetesServiceChaosMesh')]"
], "properties": {} },
In this sample, we onboard an Azure Kubernetes Service cluster using [targets an
"name": "[concat(parameters('resourceName'), '/', 'Microsoft.Chaos/Microsoft-AzureKubernetesServiceChaosMesh/PodChaos-2.1')]", "location": "[parameters('location')]", "dependsOn": [
- "[concat(resourceId('Microsoft.ContainerService/managedClusters', parameters('resourceName'), parameters('resourceGroup')), '/', 'providers/Microsoft.Chaos/targets/Microsoft-AzureKubernetesServiceChaosMesh')]"
+ "[concat(resourceId('Microsoft.ContainerService/managedClusters', parameters('resourceName')), '/', 'providers/Microsoft.Chaos/targets/Microsoft-AzureKubernetesServiceChaosMesh')]"
], "properties": {} },
In this sample, we onboard an Azure Kubernetes Service cluster using [targets an
"name": "[concat(parameters('resourceName'), '/', 'Microsoft.Chaos/Microsoft-AzureKubernetesServiceChaosMesh/StressChaos-2.1')]", "location": "[parameters('location')]", "dependsOn": [
- "[concat(resourceId('Microsoft.ContainerService/managedClusters', parameters('resourceName'), parameters('resourceGroup')), '/', 'providers/Microsoft.Chaos/targets/Microsoft-AzureKubernetesServiceChaosMesh')]"
+ "[concat(resourceId('Microsoft.ContainerService/managedClusters', parameters('resourceName')), '/', 'providers/Microsoft.Chaos/targets/Microsoft-AzureKubernetesServiceChaosMesh')]"
], "properties": {} },
In this sample, we onboard an Azure Kubernetes Service cluster using [targets an
"name": "[concat(parameters('resourceName'), '/', 'Microsoft.Chaos/Microsoft-AzureKubernetesServiceChaosMesh/IOChaos-2.1')]", "location": "[parameters('location')]", "dependsOn": [
- "[concat(resourceId('Microsoft.ContainerService/managedClusters', parameters('resourceName'), parameters('resourceGroup')), '/', 'providers/Microsoft.Chaos/targets/Microsoft-AzureKubernetesServiceChaosMesh')]"
+ "[concat(resourceId('Microsoft.ContainerService/managedClusters', parameters('resourceName')), '/', 'providers/Microsoft.Chaos/targets/Microsoft-AzureKubernetesServiceChaosMesh')]"
], "properties": {} },
In this sample, we onboard an Azure Kubernetes Service cluster using [targets an
"name": "[concat(parameters('resourceName'), '/', 'Microsoft.Chaos/Microsoft-AzureKubernetesServiceChaosMesh/TimeChaos-2.1')]", "location": "[parameters('location')]", "dependsOn": [
- "[concat(resourceId('Microsoft.ContainerService/managedClusters', parameters('resourceName'), parameters('resourceGroup')), '/', 'providers/Microsoft.Chaos/targets/Microsoft-AzureKubernetesServiceChaosMesh')]"
+ "[concat(resourceId('Microsoft.ContainerService/managedClusters', parameters('resourceName')), '/', 'providers/Microsoft.Chaos/targets/Microsoft-AzureKubernetesServiceChaosMesh')]"
], "properties": {} },
In this sample, we onboard an Azure Kubernetes Service cluster using [targets an
"name": "[concat(parameters('resourceName'), '/', 'Microsoft.Chaos/Microsoft-AzureKubernetesServiceChaosMesh/KernelChaos-2.1')]", "location": "[parameters('location')]", "dependsOn": [
- "[concat(resourceId('Microsoft.ContainerService/managedClusters', parameters('resourceName'), parameters('resourceGroup')), '/', 'providers/Microsoft.Chaos/targets/Microsoft-AzureKubernetesServiceChaosMesh')]"
+ "[concat(resourceId('Microsoft.ContainerService/managedClusters', parameters('resourceName')), '/', 'providers/Microsoft.Chaos/targets/Microsoft-AzureKubernetesServiceChaosMesh')]"
], "properties": {} },
In this sample, we onboard an Azure Kubernetes Service cluster using [targets an
"name": "[concat(parameters('resourceName'), '/', 'Microsoft.Chaos/Microsoft-AzureKubernetesServiceChaosMesh/DNSChaos-2.1')]", "location": "[parameters('location')]", "dependsOn": [
- "[concat(resourceId('Microsoft.ContainerService/managedClusters', parameters('resourceName'), parameters('resourceGroup')), '/', 'providers/Microsoft.Chaos/targets/Microsoft-AzureKubernetesServiceChaosMesh')]"
+ "[concat(resourceId('Microsoft.ContainerService/managedClusters', parameters('resourceName')), '/', 'providers/Microsoft.Chaos/targets/Microsoft-AzureKubernetesServiceChaosMesh')]"
], "properties": {} },
In this sample, we onboard an Azure Kubernetes Service cluster using [targets an
"name": "[concat(parameters('resourceName'), '/', 'Microsoft.Chaos/Microsoft-AzureKubernetesServiceChaosMesh/HTTPChaos-2.1')]", "location": "[parameters('location')]", "dependsOn": [
- "[concat(resourceId('Microsoft.ContainerService/managedClusters', parameters('resourceName'), parameters('resourceGroup')), '/', 'providers/Microsoft.Chaos/targets/Microsoft-AzureKubernetesServiceChaosMesh')]"
+ "[concat(resourceId('Microsoft.ContainerService/managedClusters', parameters('resourceName')), '/', 'providers/Microsoft.Chaos/targets/Microsoft-AzureKubernetesServiceChaosMesh')]"
], "properties": {} }
cognitive-services Find Similar Faces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/find-similar-faces.md
Previously updated : 05/05/2022 Last updated : 11/07/2022
This guide demonstrates how to use the Find Similar feature in the different lan
This guide uses remote images that are accessed by URL. Save a reference to the following URL string. All of the images accessed in this guide are located at this URL path. ```
-"https://csdx.blob.core.windows.net/resources/Face/media/"
+https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/
``` ## Detect faces for comparison
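As a sketch, faces can then be detected from images under this path by concatenating a file name onto the base URL (this assumes an already-authenticated Face client named `client`; the file name is illustrative):
```csharp
// Hypothetical usage: compose a full image URL from the base path above.
const string BaseUrl =
    "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/";
var faces = await client.Face.DetectWithUrlAsync(BaseUrl + "Family1-Dad1.jpg");
```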
cognitive-services Mitigate Latency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/mitigate-latency.md
Previously updated : 1/5/2021 Last updated : 11/07/2021 ms.devlang: csharp
The Face service must then download the image from the remote server. If the con
To mitigate this situation, consider [storing the image in Azure Premium Blob Storage](../../../storage/blobs/storage-upload-process-images.md?tabs=dotnet). For example: ``` csharp
-var faces = await client.Face.DetectWithUrlAsync("https://csdx.blob.core.windows.net/resources/Face/Images/Family1-Daughter1.jpg");
+var faces = await client.Face.DetectWithUrlAsync("https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/Family1-Daughter1.jpg");
``` ### Large upload size
If the file to upload is large, that will impact the response time of the `Detec
Mitigations: - Consider [storing the image in Azure Premium Blob Storage](../../../storage/blobs/storage-upload-process-images.md?tabs=dotnet). For example: ``` csharp
-var faces = await client.Face.DetectWithUrlAsync("https://csdx.blob.core.windows.net/resources/Face/Images/Family1-Daughter1.jpg");
+var faces = await client.Face.DetectWithUrlAsync("https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/Family1-Daughter1.jpg");
``` - Consider uploading a smaller file. - See the guidelines regarding [input data for face detection](../concept-face-detection.md#input-data) and [input data for face recognition](../concept-face-recognition.md#input-data).
cognitive-services Overview Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/overview-identity.md
Previously updated : 06/13/2022 Last updated : 11/06/2022 keywords: facial recognition, facial recognition software, facial analysis, face matching, face recognition app, face search by image, facial recognition search
This documentation contains the following types of articles:
* The [conceptual articles](./concept-face-detection.md) provide in-depth explanations of the service's functionality and features. * The [tutorials](./enrollment-overview.md) are longer guides that show you how to use this service as a component in broader business solutions.
-For a more structured approach, follow a Learn module for Face.
+For a more structured approach, follow a Training module for Face.
* [Detect and analyze faces with the Face service](/training/modules/detect-analyze-faces/) ## Example use cases
cognitive-services Overview Image Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/overview-image-analysis.md
Previously updated : 11/03/2022 Last updated : 11/06/2022 keywords: computer vision, computer vision applications, computer vision service
This documentation contains the following types of articles:
* The [conceptual articles](concept-tagging-images.md) provide in-depth explanations of the service's functionality and features. * The [tutorials](./tutorials/storage-lab-tutorial.md) are longer guides that show you how to use this service as a component in broader business solutions.
-For a more structured approach, follow a Learn module for Image Analysis.
+For a more structured approach, follow a Training module for Image Analysis.
* [Analyze images with the Computer Vision service](/training/modules/analyze-images-computer-vision/) ## Image Analysis features
cognitive-services Vehicle Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/vehicle-analysis.md
Previously updated : 09/28/2022 Last updated : 11/07/2022
Vehicle analysis is a set of capabilities that, when used with the Spatial Analy
* To utilize the operations of vehicle analysis, you must first follow the steps to [install and run spatial analysis container](./spatial-analysis-container.md) including configuring your host machine, downloading and configuring your [DeploymentManifest.json](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/ComputerVision/spatial-analysis/DeploymentManifest.json) file, executing the deployment, and setting up device [logging](spatial-analysis-logging.md). * When you configure your [DeploymentManifest.json](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/ComputerVision/spatial-analysis/DeploymentManifest.json) file, refer to the steps below to add the graph configurations for vehicle analysis to your manifest prior to deploying the container. Or, once the spatial analysis container is up and running, you may add the graph configurations and follow the steps to redeploy. The steps below will outline how to properly configure your container.
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
> [!NOTE] > Make sure that the edge device has at least 50GB disk space available before deploying the Spatial Analysis module.
+> [!div class="nextstepaction"]
+> <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=CSHARP&Pillar=Vision&Product=spatial-analysis&Page=howto&Section=prerequisites" target="_target">I ran into an issue</a>
+ ## Vehicle analysis operations Similar to Spatial Analysis, vehicle analysis enables the analysis of real-time streaming video from camera devices. For each camera device you configure, the operations for vehicle analysis will generate an output stream of JSON messages that are being sent to your instance of Azure IoT Hub.
Below is the graph optimized for the **vehicle in polygon** operation, utilized
} ```
+> [!div class="nextstepaction"]
+> <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=CSHARP&Pillar=Vision&Product=spatial-analysis&Page=howto&Section=configuring-the-vehicle-analysis-operations" target="_target">I ran into an issue</a>
+ ## Sample cognitiveservices.vision.vehicleanalysis-vehiclecount-preview and cognitiveservices.vision.vehicleanalysis-vehiclecount.cpu-preview output The JSON below demonstrates an example of the vehicle count operation graph output.
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Content-Moderator/overview.md
Previously updated : 09/29/2021 Last updated : 11/06/2021 keywords: content moderator, azure content moderator, online moderator, content filtering software, content moderation service, content moderation
This documentation contains the following article types:
* [**Concepts**](text-moderation-api.md) provide in-depth explanations of the service functionality and features. * [**Tutorials**](ecommerce-retail-catalog-moderation.md) are longer guides that show you how to use the service as a component in broader business solutions.
-For a more structured approach, follow a Learn module for Content Moderator.
+For a more structured approach, follow a Training module for Content Moderator.
* [Introduction to Content Moderator](/training/modules/intro-to-content-moderator/) * [Classify and moderate text with Azure Content Moderator](/training/modules/classify-and-moderate-text-with-azure-content-moderator/)
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Custom-Vision-Service/overview.md
Previously updated : 07/20/2022 Last updated : 11/06/2022 keywords: image recognition, image identifier, image recognition app, custom vision
This documentation contains the following types of articles:
* The [tutorials](./iot-visual-alerts-tutorial.md) are longer guides that show you how to use this service as a component in broader business solutions. <!--* The [conceptual articles](Vision-API-How-to-Topics/call-read-api.md) provide in-depth explanations of the service's functionality and features.-->
-For a more structured approach, follow a Learn module for Custom Vision:
+For a more structured approach, follow a Training module for Custom Vision:
* [Classify images with the Custom Vision service](/training/modules/classify-images-custom-vision/) * [Classify endangered bird species with Custom Vision](/training/modules/cv-classify-bird-species/)
cognitive-services Batch Synthesis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/batch-synthesis.md
You can use the following REST API operations for batch synthesis:
| List batch synthesis | `GET` | texttospeech/3.1-preview1/batchsynthesis | | Delete batch synthesis | `DELETE` | texttospeech/3.1-preview1/batchsynthesis/{id} |
+For code samples, see [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/batch-synthesis).
+ ## Create batch synthesis To submit a batch synthesis request, construct the HTTP POST request body according to the following instructions:
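+For orientation, a minimal request body might look like the following sketch. The exact fields are defined by the `texttospeech/3.1-preview1/batchsynthesis` operation, so treat the property names and values here as assumptions to verify against the full instructions:
+```json
+{
+  "displayName": "my batch synthesis",
+  "textType": "PlainText",
+  "inputs": [
+    { "text": "Hello, world." }
+  ],
+  "synthesisConfig": {
+    "voice": "en-US-JennyNeural"
+  }
+}
+```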
cognitive-services Extract Excel Information https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/named-entity-recognition/tutorials/extract-excel-information.md
Previously updated : 07/27/2022 Last updated : 11/21/2022
In this tutorial, you'll learn how to:
- A Microsoft Azure account. [Create a free account](https://azure.microsoft.com/free/cognitive-services/) or [sign in](https://portal.azure.com/). - A Language resource. If you don't have one, you can [create one in the Azure portal](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextAnalytics) and use the free tier to complete this tutorial. - The [key and endpoint](../../../cognitive-services-apis-create-account.md#get-the-keys-for-your-resource) that was generated for you during sign-up.
-- A spreadsheet containing tenant issues. Example data is provided on GitHub
-- Microsoft 365, with OneDrive for business.
+- A spreadsheet containing tenant issues. Example data for this tutorial is [available on GitHub](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/TextAnalytics/sample-data/ReportedIssues.xlsx).
+- Microsoft 365, with [OneDrive for business](https://www.microsoft.com/microsoft-365/onedrive/onedrive-for-business).
## Add the Excel file to OneDrive for Business
cognitive-services What Is Personalizer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/what-is-personalizer.md
ms.
Previously updated : 07/06/2022 Last updated : 11/17/2022 keywords: personalizer, Azure personalizer, machine learning # What is Personalizer?
-Azure Personalizer helps your applications make smarter decisions at scale using **reinforcement learning**. Personalizer can determine the best actions to take in a variety of scenarios:
+Azure Personalizer is an AI service that helps your applications make smarter decisions at scale using **reinforcement learning**. Personalizer processes information about the state of your application, scenario, and/or users (*contexts*), and a set of possible decisions and related attributes (*actions*) to determine the best decision to make. Feedback from your application (*rewards*) is sent to Personalizer to learn how to improve its decision-making ability in near-real time.
+
+Personalizer can determine the best actions to take in a variety of scenarios:
* E-commerce: What product should be shown to customers to maximize the likelihood of a purchase? * Content recommendation: What article should be shown to increase the click-through rate? * Content design: Where should an advertisement be placed to optimize user engagement on a website? * Communication: When and how should a notification be sent to maximize the chance of a response?
-Personalizer processes information about the state of your application, scenario, and/or users (*contexts*), and a set of possible decisions and related attributes (*actions*) to determine the best decision to make. Feedback from your application (*rewards*) is sent to Personalizer to learn how to improve its decision-making ability in near-real time.
-
-To get started with the Personalizer, follow the [**quickstart guide**](quickstart-personalizer-sdk.md), or try Personalizer with this [interactive demo](https://personalizerdevdemo.azurewebsites.net/).
-
+To get started with the Personalizer, follow the [**quickstart guide**](quickstart-personalizer-sdk.md), or try Personalizer in your browser with this [interactive demo](https://personalizerdevdemo.azurewebsites.net/).
This documentation contains the following types of articles:
Personalizer uses reinforcement learning to select the best *action* for a given
Personalizer empowers you to take advantage of the power and flexibility of reinforcement learning using just two primary APIs.
-The **Rank** [API](https://westus2.dev.cognitive.microsoft.com/docs/services/personalizer-api/operations/Rank) is called by your application each time there is a decision to be made. The application sends a JSON containing a set of actions, features that describe each action, and features that describe the current context. Each Rank API call is known as an **event** and noted with a unique _event ID_. Personalizer then returns the ID of the best action that maximizes the total average reward as determined by the underlying model.
+The **Rank** [API](https://westus2.dev.cognitive.microsoft.com/docs/services/personalizer-api/operations/Rank) is called by your application each time there's a decision to be made. The application sends a JSON containing a set of actions, features that describe each action, and features that describe the current context. Each Rank API call is known as an **event** and noted with a unique _event ID_. Personalizer then returns the ID of the best action that maximizes the total average reward as determined by the underlying model.
-The **Reward** [API](https://westus2.dev.cognitive.microsoft.com/docs/services/personalizer-api/operations/Reward) is called by your application whenever there is feedback that can help Personalizer learn if the action ID returned in the *Rank* call provided value. For example, if a user clicked on the suggested news article, or completed the purchase of a suggested product. A call to then Reward API can be in real-time (just after the Rank call is made) or delayed to better fit the needs of the scenario. The reward score is determined your business metrics and objectives, and can be generated by an algorithm or rules in your application. The score is a real-valued number between 0 and 1.
+The **Reward** [API](https://westus2.dev.cognitive.microsoft.com/docs/services/personalizer-api/operations/Reward) is called by your application whenever there's feedback that can help Personalizer learn if the action ID returned in the *Rank* call provided value. For example, if a user clicked on the suggested news article, or completed the purchase of a suggested product. A call to the Reward API can be in real-time (just after the Rank call is made) or delayed to better fit the needs of the scenario. The reward score is determined by your business metrics and objectives, and can be generated by an algorithm or rules in your application. The score is a real-valued number between 0 and 1.
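+For illustration, a Rank request body carries the context features, the candidate actions with their features, and an event ID, and the matching Reward call later sends a score for that same event. The feature names and values below are hypothetical:
+```json
+{
+  "contextFeatures": [
+    { "timeOfDay": "morning", "device": "mobile" }
+  ],
+  "actions": [
+    { "id": "article-a", "features": [ { "topic": "sports" } ] },
+    { "id": "article-b", "features": [ { "topic": "finance" } ] }
+  ],
+  "eventId": "event-001"
+}
+```
+If the user then acts on the returned action, the application might send `{ "value": 1 }` to the Reward API for `event-001`.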
### Learning modes
-* **[Apprentice mode](concept-apprentice-mode.md)** Similar to how an apprentice learns a craft from observing an expert, Apprentice mode enables Personalizer to learn by observing your application's current decision logic. This helps to mitigate the so-called "cold start" problem with a new untrained model, and allows you to validate the action and context features that are sent to Personalizer. In Apprentice mode, each call to the Rank API returns the _baseline action_ or _default action_, that is the action that the application would've taken without using Personalizer. This is sent by your application to Personalizer in the Rank API as the first item in the set of possible actions.
+* **[Apprentice mode](concept-apprentice-mode.md)** Similar to how an apprentice learns a craft from observing an expert, Apprentice mode enables Personalizer to learn by observing your application's current decision logic. This helps to mitigate the so-called "cold start" problem with a new untrained model, and allows you to validate the action and context features that are sent to Personalizer. In Apprentice mode, each call to the Rank API returns the _baseline action_ or _default action_, that is, the action that the application would have taken without using Personalizer. This is sent by your application to Personalizer in the Rank API as the first item in the set of possible actions.
* **Online mode** Personalizer will return the best action, given the context, as determined by the underlying RL model and explores other possible actions that may improve performance. Personalizer learns from feedback provided in calls to the Reward API.
Note that Personalizer uses collective information across all users to learn the
* Log individual users' preferences or historical data.
-### Example scenarios
+## Example scenarios
Here are a few examples where Personalizer can be used to select the best content to render for a user.
Here are a few examples where Personalizer can be used to select the best conten
Use Personalizer when your scenario has:
-* A limited set of actions or items to select from in each personalization event. We recommend no more than ~50 actions in each Rank API call. If you have a larger set of possible actions, we suggest using a [using a recommendation engine].(where-can-you-use-personalizer.md#how-to-use-personalizer-with-a-recommendation-solution) or another mechanism to reduce the list of actions prior to calling the Rank API.
+* A limited set of actions or items to select from in each personalization event. We recommend no more than ~50 actions in each Rank API call. If you have a larger set of possible actions, we suggest [using a recommendation engine](where-can-you-use-personalizer.md#how-to-use-personalizer-with-a-recommendation-solution) or another mechanism to reduce the list of actions prior to calling the Rank API.
* Information describing the actions (_action features_). * Information describing the current context (_contextual features_). * Sufficient data volume to enable Personalizer to learn. In general, we recommend a minimum of ~1,000 events per day to enable Personalizer learn effectively. If Personalizer doesn't receive sufficient data, the service takes longer to determine the best actions.
+## Responsible use of AI
+At Microsoft, we're committed to the advancement of AI driven by principles that put people first. AI models such as the ones available in the Personalizer service have significant potential benefits,
+but without careful design and thoughtful mitigations, such models have the potential to generate incorrect or even harmful content. Microsoft has made significant investments to help guard against abuse and unintended harm, incorporating [MicrosoftΓÇÖs principles for responsible AI use](https://www.microsoft.com/ai/responsible-ai), building content filters to support customers, and providing responsible AI implementation guidance to onboarded customers. See the [Responsible AI docs for Personalizer](responsible-use-cases.md).
-## Integrating Personalizer in an application
+## Integrate Personalizer into an application
-1. [Design](concepts-features.md) and plan the **_actions_**, and **_context_**. Determine the how to interpret feedback as a **_reward_** score.
-1. Each [Personalizer Resource](how-to-settings.md) you create is defined as one _Learning Loop_. The loop will receive the both the Rank and Reward calls for that content or user experience and train an underlying RL model. There are
+1. [Design](concepts-features.md) and plan the **_actions_**, and **_context_**. Determine how to interpret feedback as a **_reward_** score.
+1. Each [Personalizer Resource](how-to-settings.md) you create is defined as one _Learning Loop_. The loop will receive both the Rank and Reward calls for that content or user experience and train an underlying RL model. There are
|Resource type| Purpose| |--|--|
Use Personalizer when your scenario has:
1. Add Personalizer to your application, website, or system: 1. Add a **Rank** call to Personalizer in your application, website, or system to determine the best action.
- 1. Use the the best action, as specified as a _reward action ID_ in your scenario.
+ 1. Use the best action, as specified as a _reward action ID_ in your scenario.
1. Apply _business logic_ to user behavior or feedback data to determine the **reward** score. For example:
- |Behavior|Calculated reward score|
- |--|--|
- |User selected a news article suggested by Personalizer |**1**|
- |User selected a news article _not_ suggested by Personalizer |**0**|
- |User hesitated to select a news article, scrolled around indecisively, and ultimately selected the news article suggested by Personalizer |**0.5**|
+ |Behavior|Calculated reward score|
+ |--|--|
+ |User selected a news article suggested by Personalizer |**1**|
+ |User selected a news article _not_ suggested by Personalizer |**0**|
+ |User hesitated to select a news article, scrolled around indecisively, and ultimately selected the news article suggested by Personalizer |**0.5**|
1. Add a **Reward** call sending a reward score between 0 and 1 * Immediately after feedback is received. * Or sometime later in scenarios where delayed feedback is expected. 1. Evaluate your loop with an [offline evaluation](concepts-offline-evaluation.md) after a period of time when Personalizer has received significant data to make online decisions. An offline evaluation allows you to test and assess the effectiveness of the Personalizer Service without code changes or user impact.
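As a sketch of the business logic in the reward table above (the enum and the mapping are hypothetical; a real application derives the score from its own metrics):

```csharp
enum Behavior { SelectedSuggested, SelectedOther, HesitatedThenSelectedSuggested }

// Map observed user behavior to a reward score between 0 and 1.
static double RewardFor(Behavior behavior) => behavior switch
{
    Behavior.SelectedSuggested => 1.0,              // selected the suggested article
    Behavior.SelectedOther => 0.0,                  // selected a different article
    Behavior.HesitatedThenSelectedSuggested => 0.5, // hesitated, then selected the suggestion
    _ => 0.0
};
```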
-## Reference
-
-* [Personalizer C#/.NET SDK](/dotnet/api/overview/azure/cognitiveservices/client/personalizer)
-* [Personalizer Go SDK](https://github.com/Azure/azure-sdk-for-go/tree/master/services/preview)
-* [Personalizer JavaScript SDK](/javascript/api/@azure/cognitiveservices-personalizer/)
-* [Personalizer Python SDK](/python/api/overview/azure/cognitiveservices/personalizer)
-* [REST APIs](https://westus2.dev.cognitive.microsoft.com/docs/services/personalizer-api/operations/Rank)
## Next steps > [!div class="nextstepaction"]
-> [How Personalizer works](how-personalizer-works.md)
-> [What is Reinforcement Learning?](concepts-reinforcement-learning.md)
+> [Personalizer quickstart](quickstart-personalizer-sdk.md)
+
+* [How Personalizer works](how-personalizer-works.md)
+* [What is Reinforcement Learning?](concepts-reinforcement-learning.md)
communication-services Simulcast https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/simulcast.md
+
+ Title: Azure Communication Services Simulcast
+
+description: Overview of Simulcast
+++++ Last updated : 11/21/2022+++
+# Simulcast
+Simulcast is provided as a preview for developers and may change based on feedback that we receive. To use this feature, use the 1.9.1-beta.1 or later release of the Azure Communication Services Calling Web SDK. Currently, simulcast send is supported on desktop Chrome and desktop Edge. Simulcast send from mobile devices will be available in the future.
+
+Simulcast is a technique by which an endpoint encodes the same video feed at different qualities and sends these feeds to a selective forwarding unit (SFU), which decides which receivers get which quality.
+The lack of simulcast support leads to a degraded video experience in calls with three or more participants. If a video receiver with poor network conditions joins the conference, it degrades the quality of video received from senders without simulcast support for all other participants, because each sender optimizes its video feed against the lowest common denominator. With simulcast, this impact is minimized, because the sender produces a specialized low-fidelity video encoding for the subset of receivers on poor networks (or that are otherwise constrained).
+## Scenarios where simulcast is useful
+- Users with unknown bandwidth constraints joining. When a new participant joins the call, its bandwidth conditions are unknown when it starts to receive video. To avoid overshooting the available bandwidth, it isn't sent high-quality content before a reliable estimate of its bandwidth is available. In unicast, if everyone received high-quality content, that would cause degradation for every other receiver until a reliable estimate of the bandwidth conditions can be achieved. With simulcast, lower-resolution video can be sent to the new participant until its bandwidth conditions are known, while the others keep receiving high-quality video.
+In a similar way, in unicast, if one of the receivers is on a poor network, the video quality of all other receivers on good networks is degraded to accommodate that receiver. With simulcast, lower-resolution/bitrate content can be sent to the receiver on the poor network and higher-resolution/bitrate content can be sent to the receivers on good networks.
+- In content sharing, where thumbnails are often used for video content, lower-resolution videos are requested from the producers. If, in parallel, someone's video needs to be zoomed, without simulcast the zoomed video would stay at low quality, because sending it at high quality would force the others, who are looking at the shared content, to receive both the content and the video at high quality and waste bandwidth.
+- When video is sent both to a receiver with a larger view (like a desktop, where videos are usually rendered in big views) and to a receiver with a smaller view (like a mobile device, whose screen is usually small). With simulcast, the quality of the larger view isn't affected by the quality of the smaller view: the sender sends a high resolution to the larger-view receiver and a lower resolution to the smaller-view receiver.
+
+## How it works
+Simulcast is adaptively enabled on-demand to save bandwidth and CPU resources of the publisher.
+Subscribers notify the SFU of their maximum resolution preference based on the size of the renderer element.
+The SFU tracks the bandwidth conditions and resolution requirements of all current subscribers to the publisher's video and forwards the aggregated parameters of all subscribers to the publisher. The publisher picks the best set of parameters to give optimal quality to all receivers, considering all of the publisher's and subscribers' constraints.
+The SFU receives multiple qualities of the content and chooses the quality to forward to each subscriber. There's no transcoding of the content on the SFU. The SFU won't forward a higher resolution than the one requested by the subscriber.
+## Limitations
+Web endpoints support simulcast only for video content, with a maximum of two distinct qualities.
+## Resolutions
+In adaptive simulcast, there are no set resolutions for high- and low-quality video streams. An optimal set of either a single stream or multiple streams is chosen. If every subscriber to a video requests and is capable of receiving the maximum resolution the publisher can provide, only that maximum resolution is sent.
+The following resolutions are supported and can be requested by receivers in web simulcast: 180p, 240p, 360p, 540p, and 720p.
+If the input resolution is limited, the received resolution is capped at that resolution.
+In simulcast, the effective resolution sent can also be degraded internally, so the actual received resolution of a video can vary.
communication-services Quickstart Botframework Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/chat/quickstart-botframework-integration.md
# Add a bot to your chat app > [!IMPORTANT]
-> This functionality is in private preview, and restricted to a limited number of Azure Communication Services early adopters. You can [submit this form to request participation in the preview](https://forms.office.com/r/HBm8jRuuGZ) and we will review your scenario(s) and evaluate your participation in the preview.
+> This functionality is in public preview.
>
-> Private Preview APIs and SDKs are provided without a service-level agreement, and are not appropriate for production workloads and should only be used with test users and test data. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-- In this quickstart, you will learn how to build conversational AI experiences in a chat application using Azure Communication Services Chat messaging channel that is available under Azure Bot Services. This article will describe how to create a bot using BotFramework SDK and how to integrate this bot into any chat application that is built using Communication Services Chat SDK.
Sometimes the bot wouldn't be able to understand or answer a question or a custo
## Handling bot to bot communication
- There may be certain usecases where two bots need to be added to the same thread. If this occurs, then the bots may start replying to each other's messages. If this scenario is not handled properly, the bots' automated interaction between themselves may result in an infinite loop of messages. This scenario is handled by Azure Communication Services Chat by throttling the requests which will result in the bot not being able to send and receive the messages. You can learn more about the [throttle limits](/azure/communication-services/concepts/service-limits#chat).
+ There may be certain use cases where two bots need to be added to the same chat thread to provide different services. In such use cases, you may need to ensure that the bots don't start sending automated replies to each other's messages. If not handled properly, the bots' automated interaction between themselves may result in an infinite loop of messages. You can verify the ACS user identity of the sender of a message from the activity's `From.Id` field to see if it belongs to another bot, and take the required action to prevent such a communication flow. If such a scenario results in high call volumes, the Azure Communication Services Chat channel will start throttling the requests, which will result in the bot not being able to send and receive messages. You can learn more about the [throttle limits](/azure/communication-services/concepts/service-limits#chat).
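+ A minimal guard in a Bot Framework `ActivityHandler` might look like the following sketch, where `KnownBotAcsIds` is a hypothetical set holding the ACS user IDs of the other bots in the thread:
+ ```csharp
+ // Sketch: ignore messages whose sender is another bot, to avoid reply loops.
+ protected override async Task OnMessageActivityAsync(
+     ITurnContext<IMessageActivity> turnContext, CancellationToken cancellationToken)
+ {
+     if (KnownBotAcsIds.Contains(turnContext.Activity.From.Id))
+     {
+         return; // don't reply to another bot
+     }
+     await turnContext.SendActivityAsync(
+         MessageFactory.Text("Echo: " + turnContext.Activity.Text), cancellationToken);
+ }
+ ```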
## Troubleshooting
communication-services Chat Android Push Notification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/chat-android-push-notification.md
Push notifications let clients be notified for incoming messages and other opera
11. Add a custom `WorkManager` initializer by creating a class implementing `Configuration.Provider`: ```java
-public class MyAppConfiguration extends Application implements Configuration.Provider {
- Consumer<Throwable> exceptionHandler = new Consumer<Throwable>() {
+ public class MyAppConfiguration extends Application implements Configuration.Provider {
+ Consumer<Throwable> exceptionHandler = new Consumer<Throwable>() {
+ @Override
+ public void accept(Throwable throwable) {
+ Log.i("YOUR_TAG", "Registration failed for push notifications!" + throwable.getMessage());
+ }
+ };
+
@Override
- public void accept(Throwable throwable) {
- Log.i("YOUR_TAG", "Registration failed for push notifications!" + throwable.getMessage());
+ public void onCreate() {
+ super.onCreate();
+ // Initialize application parameters here
+ WorkManager.initialize(getApplicationContext(), getWorkManagerConfiguration());
+ }
+
+ @NonNull
+ @Override
+ public Configuration getWorkManagerConfiguration() {
+ return new Configuration.Builder().
+ setWorkerFactory(new RegistrationRenewalWorkerFactory(COMMUNICATION_TOKEN_CREDENTIAL, exceptionHandler)).build();
}
- };
- @Override
- public void onCreate() {
- super.onCreate();
- WorkManager.initialize(getApplicationContext(), getWorkManagerConfiguration());
- }
- @NonNull
- @Override
- public Configuration getWorkManagerConfiguration() {
- return new Configuration.Builder().
- setWorkerFactory(new RegistrationRenewalWorkerFactory(COMMUNICATION_TOKEN_CREDENTIAL, exceptionHandler)).build();
}
-}
```
+**Explanation of the code above:** The default initializer of `WorkManager` was disabled in step 9. This step implements `Configuration.Provider` to provide a customized `WorkerFactory`, which is responsible for creating workers at runtime.
+
+If the app is integrated with an Azure Function, the initialization of application parameters should be added in the `onCreate()` method. The `getWorkManagerConfiguration()` method is called when the application is starting, before any activity, service, or receiver objects (excluding content providers) have been created, so application parameters can be initialized before they're used. More details can be found in the sample chat app.
12. Add the `android:name=.MyAppConfiguration` field, which uses the class name from step 11, into `AndroidManifest.xml`:
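A sketch of the resulting manifest entry (other attributes and children of your `<application>` element stay as they are):

```xml
<application
    android:name=".MyAppConfiguration"
    android:label="@string/app_name">
    <!-- existing activities, services, and receivers -->
</application>
```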
communication-services Integrate Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/integrate-azure-function.md
+
+ Title: Enable Azure Function in chat app
+
+description: Learn how to enable Azure Function
+++ Last updated : 11/03/2022++++
+# Integrate Azure Function
+## Introduction
+This tutorial provides detailed guidance on how to set up an Azure Function to receive user-related information. Setting up an Azure Function is highly recommended. It helps to avoid hard-coding application parameters in the Contoso app (such as user ID and user token). This information is highly confidential. More importantly, we refresh user tokens periodically on the backend. Hard-coding the user ID and token combination requires editing the value after every refresh.
+
+## Prerequisites
+
+Before you get started, make sure to:
+
+- Create an Azure account with an active subscription. For details, see [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- Install Visual Studio Code.
+
+## Setting up functions
+1. Install the Azure Function extension in Visual Studio Code. You can install it from Visual Studio Code's plugin browser or by following [this link](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions).
+2. Set up a local Azure Function app by following [this link](../../azure-functions/functions-develop-vs-code.md?tabs=csharp#create-an-azure-functions-project). We need to create a local function using the HTTP trigger template in JavaScript.
+3. Install Azure Communication Services libraries. We'll use the Identity library to generate User Access Tokens. Run the npm install command in your local Azure Function app directory, to install the Azure Communication Services Identity SDK for JavaScript.
+
+```
+ npm install @azure/communication-identity --save
+```
+4. Modify the `index.js` file so it looks like the code below:
+```JavaScript
+ const { CommunicationIdentityClient } = require('@azure/communication-identity');
+ const connectionString = '<your_connection_string>'
+ const acsEndpoint = "<your_ACS_endpoint>"
+
+ module.exports = async function (context, req) {
+ let tokenClient = new CommunicationIdentityClient(connectionString);
+
+ const userIDHolder = await tokenClient.createUser();
+ const userId = userIDHolder.communicationUserId
+
+ const userToken = (await tokenClient.getToken(userIDHolder, ["chat"])).token;
+
+ context.res = {
+ body: {
+ acsEndpoint,
+ userId,
+ userToken
+ }
+ };
+ }
+```
+**Explanation of the code above**: The first line imports the `CommunicationIdentityClient` class. The connection string in the second line can be found in your ACS resource in the Azure portal. The `acsEndpoint` is the URL of the ACS resource that was created.
+
+5. Open the local Azure Function folder in Visual Studio Code. Open the `index.js` and run the local Azure Function. A local Azure Function endpoint will be created and printed in the terminal. The printed message looks similar to:
+
+```
+Functions:
+HttpTrigger1: [GET,POST] http://localhost:7071/api/HttpTrigger1
+```
+
+Open the link in a browser. The result will be similar to this example:
+```
+ {
+ "acsEndpoint": "<Azure Function endpoint>",
+ "userId": "8:acs:a636364c-c565-435d-9818-95247f8a1471_00000014-f43f-b90f-9f3b-8e3a0d00c5d9",
+ "userToken": "eyJhbGciOiJSUzI1NiIsImtpZCI6IjEwNiIsIng1dCI6Im9QMWFxQnlfR3hZU3pSaXhuQ25zdE5PU2p2cyIsInR5cCI6IkpXVCJ9.eyJza3lwZWlkIjoiYWNzOmE2MzYzNjRjLWM1NjUtNDM1ZC05ODE4LTk1MjQ3ZjhhMTQ3MV8wMDAwMDAxNC1mNDNmLWI5MGYtOWYzYi04ZTNhMGQwMGM1ZDkiLCJzY3AiOjE3OTIsImNzaSI6IjE2Njc4NjI3NjIiLCJleHAiOjE2Njc5NDkxNjIsImFjc1Njb3BlIjoiY2hhdCIsInJlc291cmNlSWQiOiJhNjM2MzY0Yy1jNTY1LTQzNWQtOTgxOC05NTI0N2Y4YTE0NzEiLCJyZXNvdXJjZUxvY2F0aW9uIjoidW5pdGVkc3RhdGVzIiwiaWF0IjoxNjY3ODYyNzYyfQ.t-WpaUUmLJaD0V2vgn3M5EKdJUQ_JnR2jnBUZq3J0zMADTnFss6TNHMIQ-Zvsumwy14T1rpw-1FMjR-zz2icxo_mcTEM6hG77gHzEgMR4ClGuE1uRN7O4-326ql5MDixczFeIvIG8s9kAeJQl8N9YjulvRkGS_JZaqMs2T8Mu7qzdIOiXxxlmcl0HeplxLaW59ICF_M4VPgUYFb4PWMRqLXWjKyQ_WhiaDC3FvhpE_Bdb5U1eQXDw793V1_CRyx9jMuOB8Ao7DzqLBQEhgNN3A9jfEvIE3gdwafpBWlQEdw-Uuf2p1_xzvr0Akf3ziWUsVXb9pxNlQQCc19ztl3MIQ"
+ }
+```
+
+6. Deploy the local function to the cloud. More details can be found in [this documentation](../../azure-functions/functions-develop-vs-code.md).
+
+7. **Test the deployed Azure Function.** First, find your Azure Function in the Azure portal. Then, use the "Get Function URL" button to get the Azure Function endpoint. The result you see should be similar to what was shown in step 5. The Azure Function endpoint will be used in the app for initializing application parameters.
+
+8. Implement `UserTokenClient`, which is used to call the target Azure Function resource and obtain the ACS endpoint, user ID and user token from the returned JSON object. Refer to the sample app for usage.
+
+## Troubleshooting guide
+1. If the Azure Function extension fails to deploy the local function to the Azure cloud, it's likely due to a bug in the versions of Visual Studio Code and the Azure Function extension being used. This version combination has been tested to work: Visual Studio Code version `1.68.1` and Azure Function extension version `1.2.1`.
+2. The place to initialize application constants is tricky but important. Double-check the [chat Android quickstart](https://learn.microsoft.com/azure/communication-services/quickstarts/chat/get-started), specifically the highlighted note in the section "Set up application constants", and compare it with the sample app of the version you're consuming.
+
+## (Optional) Secure the Azure Function endpoint
+For demonstration purposes, this sample uses a publicly accessible endpoint by default to fetch an Azure Communication Services token. For production scenarios, one option is to use your own secured endpoint to provision your own tokens.
+
+With extra configuration, this sample supports connecting to an Azure Active Directory (Azure AD) protected endpoint, so that user login is required for the app to fetch an Azure Communication Services token. The user will be required to sign in with a Microsoft account to access the app. This setup increases security by requiring users to log in; decide whether to enable it based on your use cases.
+
+Note that we currently don't support Azure AD in the sample code. Follow the links below to enable it in your app and Azure Function:
+
+[Register your app under Azure Active Directory (using Android platform settings)](../../active-directory/develop/tutorial-v2-android.md).
+
+[Configure your App Service or Azure Functions app to use Azure AD log in](../../app-service/configure-authentication-provider-aad.md).
container-apps Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/networking.md
IP addresses are broken down into the following types:
| Type | Description | |--|--| | Public inbound IP address | Used for app traffic in an external deployment, and management traffic in both internal and external deployments. |
-| Outbound public IP | Used as the "from" IP for outbound connections that leave the virtual network. These connections aren't routed down a VPN. |
+| Outbound public IP | Used as the "from" IP for outbound connections that leave the virtual network. These connections aren't routed down a VPN. Using a NAT gateway or other proxy for outbound traffic from a Container Apps environment is not supported. |
| Internal load balancer IP address | This address only exists in an internal deployment. | | App-assigned IP-based TLS/SSL addresses | These addresses are only possible with an external deployment, and when IP-based TLS/SSL binding is configured. |
container-apps Quotas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/quotas.md
Previously updated : 10/25/2022 Last updated : 11/21/2022
The following quotas are on a per subscription basis for Azure Container Apps.
-To request an increase in quota amounts for your container app, [submit a support ticket](https://azure.microsoft.com/support/create-ticket/).
+To request an increase in quota amounts for your container app, learn [how to request a limit increase](faq.yml#how-can-i-request-a-quota-increase-) and [submit a support ticket](https://azure.microsoft.com/support/create-ticket/).
| Feature | Scope | Default | Is Configurable<sup>1</sup> | Remarks | |--|--|--|--|--|
-| Environments | Region | 5 | Yes | |
+| Environments | Region | Up to 5 | Yes | Limited to five environments per subscription, per region.<br><br>For example, if you deploy to three regions, you can get up to 15 environments for a single subscription. |
| Container Apps | Environment | 20 | Yes | |
| Revisions | Container app | 100 | No | |
| Replicas | Revision | 30 | Yes | |
| Cores | Replica | 2 | No | Maximum number of cores that can be requested by a revision replica. |
| Cores | Environment | 20 | Yes | Maximum number of cores an environment can accommodate. Calculated by the sum of cores requested by each active replica of all revisions in an environment. |
-<sup>1</sup> The **Is Configurable** column denotes that a feature maximum may be increased through a [support request](https://azure.microsoft.com/support/create-ticket/).
+For more information regarding quotas, see the [Quotas Roadmap](https://github.com/microsoft/azure-container-apps/issues/503) in the Azure Container Apps GitHub repository.
+
+> [!NOTE]
+> [Free trial](https://azure.microsoft.com/offers/ms-azr-0044p) and [Azure for Students](https://azure.microsoft.com/free/students/) subscriptions are limited to one environment per subscription globally.
+
+<sup>1</sup> The **Is Configurable** column denotes that a feature maximum may be increased through a [support request](https://azure.microsoft.com/support/create-ticket/). For more information, see [how to request a limit increase](faq.yml#how-can-i-request-a-quota-increase-).
## Considerations * If an environment runs out of allowed cores: * Provisioning times out with a failure
- * The app silently refuses to scale out
+ * The app may be restricted from scaling out
container-instances Container Instances Init Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-init-container.md
This article shows how to use an Azure Resource Manager template to configure a
* **Order of execution** - Init containers are executed in the order specified in the template, and before other containers. By default, you can specify a maximum of 59 init containers per container group. At least one non-init container must be in the group. * **Host environment** - Init containers run on the same hardware as the rest of the containers in the group. * **Resources** - You don't specify resources for init containers. They are granted the total resources such as CPUs and memory available to the container group. While an init container runs, no other containers run in the group.
-* **Supported properties** - Init containers can use group properties such as volumes, secrets, and managed identities. However, they can't use ports or an IP address if configured for the container group.
+* **Supported properties** - Init containers can use some group properties such as volumes and secrets. However, they can't use ports, IP addresses, or managed identities if configured for the container group.
* **Restart policy** - Each init container must exit successfully before the next container in the group starts. If an init container doesn't exit successfully, its restart action depends on the [restart policy](container-instances-restart-policy.md) configured for the group: |Policy in group |Policy in init |
container-registry Container Registry Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-private-link.md
This article shows how to configure a private endpoint for your registry using t
[!INCLUDE [container-registry-scanning-limitation](../../includes/container-registry-scanning-limitation.md)] > [!NOTE]
-> Starting October 2021, new container registries allow a maximum of 200 private endpoints. Registries created earlier allow a maximum of 10 private endpoints. Use the az acr show-usage command to see the limit for your registry. Please open a support ticket if this limit needs to be increased to 200 private endpoints.
+> Starting from October 2021, new container registries allow a maximum of 200 private endpoints. Registries created earlier allow a maximum of 10 private endpoints. Use the [az acr show-usage](/cli/azure/acr#az-acr-show-usage) command to see the limit for your registry. Please open a support ticket if the limit for your registry needs to be increased to 200 private endpoints.
## Prerequisites
cosmos-db Analytical Store Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/analytical-store-introduction.md
After the analytical store is enabled, based on the data retention needs of the
Analytical store relies on Azure Storage and offers the following protection against physical failure:
- * Single region Azure Cosmos DB database accounts allocate analytical store in Locally Redundant Storage (LRS) Azure Storage accounts.
- * If any geo-region replication is configured for the Azure Cosmos DB database account, analytical store is allocated in Zone-Redundant Storage (ZRS) Azure storage accounts.
+ * By default, Azure Cosmos DB database accounts allocate analytical store in Locally Redundant Storage (LRS) accounts.
+ * If any geo-region of the database account is configured for zone redundancy, the analytical store for that region is allocated in Zone-Redundant Storage (ZRS) accounts. Customers need to enable Availability Zones on a region of their Azure Cosmos DB database account to have the analytical data of that region stored in ZRS.
## Backup
cosmos-db Concepts Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/concepts-limits.md
You can [provision and manage your Azure Cosmos DB account](how-to-manage-databa
| Maximum number of accounts per subscription | 50 by default. <sup>1</sup> | | Maximum number of regional failovers | 10/hour by default. <sup>1</sup> <sup>2</sup> |
-<sup>1</sup> You can increase these limits by creating an [Azure Support request](create-support-request-quota-increase.md).
+<sup>1</sup> You can increase these limits up to a maximum of 1,000 by creating an [Azure Support request](create-support-request-quota-increase.md).
<sup>2</sup> Regional failovers only apply to single region writes accounts. Multi-region write accounts don't require or have any limits on changing the write region.
cosmos-db Migrate Dotnet V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/migrate-dotnet-v3.md
Some settings in `ConnectionPolicy` have been renamed or replaced by `CosmosClie
|`MediaRequestTimeout`|Removed. Attachments are no longer supported.| |`SetCurrentLocation`|`CosmosClientOptions.ApplicationRegion` can be used to achieve the same effect.| |`PreferredLocations`|`CosmosClientOptions.ApplicationPreferredRegions` can be used to achieve the same effect.|
-|`UserAgentSuffix`| | `CosmosClientBuilder.ApplicationName` can be used to achieve the same effect.|
+|`UserAgentSuffix`|`CosmosClientBuilder.ApplicationName` can be used to achieve the same effect.|
### Indexing policy
cost-management-billing Manage Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/manage-automation.md
Title: Manage Azure costs with automation
description: This article explains how you can manage Azure costs with automation. Previously updated : 04/05/2022 Last updated : 11/22/2022
You can configure budgets to start automated actions using Azure Action Groups.
## Data latency and rate limits
-We recommend that you call the APIs no more than once per day. Cost Management data is refreshed every four hours as new usage data is received from Azure resource providers. Calling more frequently doesn't provide more data. Instead, it creates increased load.
+We recommend that you call the APIs no more than once per day. Cost Management data is refreshed every four hours as new usage data is received from Azure resource providers. Calling more frequently doesn't provide more data. Instead, it creates increased load.
-<!-- For more information, see [Cost Management API latency and rate limits](../automate/api-latency-rate-limits.md) -->
+### Query API query processing units
+
+In addition to the existing rate limiting processes, the [Query API](/rest/api/cost-management/query) also limits processing based on the cost of API calls. The cost of an API call is expressed as query processing units (QPUs). QPUs are a performance currency, like [Cosmos DB RUs](../../cosmos-db/request-units.md). They abstract system resources like CPU and memory.
+
+#### QPU calculation
+
+Currently, one QPU is deducted from the allotted quotas for each month of data queried. For example, a query that spans three months of data consumes three QPUs. This logic might change without notice.
+
+#### QPU factors
+
+The following factor affects the number of QPUs consumed by an API request.
+
+- Date range: as the date range in the request increases, the number of QPUs consumed increases.
+
+Other QPU factors might be added without notice.
+
+#### QPU quotas
+
+The following quotas are configured per tenant. Requests are throttled when any of the following quotas are exhausted.
+
+- 12 QPU per 10 seconds
+- 60 QPU per 1 min
+- 600 QPU per 1 hour
+
+The quotas may be changed as needed, and more quotas may be added.
+
+#### Response headers
+
+You can examine the response headers to track the number of QPUs consumed by an API request and number of QPUs remaining.
+
+`x-ms-ratelimit-microsoft.costmanagement-qpu-retry-after`
+
+Indicates the time to back off, in seconds. When a request is throttled with 429, back off for the time specified in this header before retrying the request.
+
+`x-ms-ratelimit-microsoft.costmanagement-qpu-consumed`
+
+QPUs consumed by an API call.
+
+`x-ms-ratelimit-microsoft.costmanagement-qpu-remaining`
+
+List of remaining quotas.
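+
+For illustration, here's a minimal PowerShell sketch that calls the Query API and inspects these headers. The subscription ID, bearer token, api-version, and request body are placeholder assumptions; adjust them for your environment.
+
+```
+# Minimal sketch, not a production implementation. Assumes $token holds a valid
+# Azure AD bearer token and that the api-version below is still current.
+# (Error-object shape below follows Windows PowerShell 5.1.)
+$scope = "subscriptions/00000000-0000-0000-0000-000000000000"   # placeholder scope
+$uri   = "https://management.azure.com/$scope/providers/Microsoft.CostManagement/query?api-version=2021-10-01"
+$body  = @{
+    type      = "ActualCost"
+    timeframe = "MonthToDate"
+    dataset   = @{
+        granularity = "Daily"
+        aggregation = @{ totalCost = @{ name = "Cost"; function = "Sum" } }
+    }
+} | ConvertTo-Json -Depth 5
+
+try {
+    $response = Invoke-WebRequest -Uri $uri -Method Post -Body $body `
+        -Headers @{ Authorization = "Bearer $token" } -ContentType "application/json"
+    # Track QPU consumption and the remaining quotas from the response headers.
+    $response.Headers["x-ms-ratelimit-microsoft.costmanagement-qpu-consumed"]
+    $response.Headers["x-ms-ratelimit-microsoft.costmanagement-qpu-remaining"]
+}
+catch {
+    $resp = $_.Exception.Response
+    if ($resp -and [int]$resp.StatusCode -eq 429) {
+        # When throttled, back off for the time indicated before retrying.
+        $retryAfter = [int]$resp.Headers["x-ms-ratelimit-microsoft.costmanagement-qpu-retry-after"]
+        Start-Sleep -Seconds $retryAfter
+    }
+}
+```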
## Next steps
cost-management-billing Ea Portal Rest Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/ea-portal-rest-apis.md
This article describes the REST APIs for use with your Azure enterprise enrollme
Microsoft Enterprise Azure customers can get usage and billing information through REST APIs. The role owner (Enterprise Administrator, Department Administrator, Account Owner) must enable access to the API by generating a key from the Azure EA portal. Then, anyone provided with the enrollment number and key can access the data through the API.
-### Available APIs
+## Available APIs
**Balance and Summary -** The [Balance and Summary API](/rest/api/billing/enterprise/billing-enterprise-api-balance-summary) provides a monthly summary of information about balances, new purchases, Azure Marketplace service charges, adjustments, and overage charges. For more information, see [Reporting APIs for Enterprise customers - Balance and Summary](/rest/api/billing/enterprise/billing-enterprise-api-balance-summary).
Microsoft Enterprise Azure customers can get usage and billing information throu
**Billing Periods -** The [Billing Periods API](/rest/api/billing/enterprise/billing-enterprise-api-billing-periods) returns a list of billing periods that have consumption data for an enrollment in reverse chronological order. Each period contains a property pointing to the API route for the four sets of data, BalanceSummary, UsageDetails, Marketplace Charges, and PriceSheet. For more information, see [Reporting APIs for Enterprise customers - Billing Periods](/rest/api/billing/enterprise/billing-enterprise-api-billing-periods).
-### API key generation
+## Enable API data access
-Role owners can perform the following steps in the Azure EA portal. Navigate to **Reports** > **Download Usage** > **API Access Key**. Then they can:
+Role owners can perform the following steps in the Azure portal to enable API data access.
-- Generate and regenerate primary and secondary access keys.
-- Revoke access keys.
-- View start and end dates of access keys.
+1. Sign in to the [Azure portal](https://portal.azure.com).
+2. Search for **Cost Management + Billing** and then select it.
+3. Select **Billing scopes** from the navigation menu and then select the billing account that you want to work with.
+4. In the left navigation menu, select **Usage + Charges**.
+5. Select **Manage API Access Keys** to open the Manage API Access Keys window.
+ :::image type="content" source="./media/ea-portal-rest-apis/manage-api-access-keys.png" alt-text="Screenshot showing the Manage API Access Keys option." lightbox="./media/ea-portal-rest-apis/manage-api-access-keys.png" :::
-### Generate or retrieve the API Key
+In the Manage API Access Keys window, you can perform the following tasks:
-1. Sign in as an enterprise administrator.
-2. Select **Reports** on the left navigation window and then select the **Download Usage** tab.
-3. Select **API Access Key**.
-4. Under **Enrollment Access Keys**, select **regenerate** to generate either a primary or secondary key.
-5. Select **Expand Key** to view the entire generated API access key.
-6. Select **Copy** to get the API access key for immediate use.
+- Generate and view primary and secondary access keys
+- View start and end dates for access keys
+- Disable access keys
+### Generate the primary or secondary API key
-If you want to give the API access keys to people who aren't enterprise administrators in your enrollment, perform the following steps:
+1. Sign in to the Azure portal as an enterprise administrator.
+2. Select **Cost Management + Billing**.
+3. Select **Billing scopes** from the navigation menu and then select the billing account that you want to work with.
+4. In the navigation menu, select **Usage + Charges**.
+5. Select **Manage API Access Keys**.
+6. Select **Generate** to generate the key.
+ :::image type="content" source="./media/ea-portal-rest-apis/manage-api-access-keys-window.png" alt-text="Screenshot showing the Manage API Access Keys window." lightbox="./media/ea-portal-rest-apis/manage-api-access-keys-window.png" :::
+7. Select the **expand symbol** or select **Copy** to get the API access key for immediate use.
+ :::image type="content" source="./media/ea-portal-rest-apis/expand-symbol-copy.png" alt-text="Screenshot showing the expand symbol and Copy option." lightbox="./media/ea-portal-rest-apis/expand-symbol-copy.png" :::
-1. In the left navigation window, select **Manage**.
-2. Select the pencil symbol next to **DA view charges** (Department Administrator view charges).
-3. Select **Enable** and then select **Save**.
-4. Select the pencil symbol next to **AO view charges** (Account Owner view charges).
-5. Select **Enable** and then select **Save**.
+### Regenerate the primary or secondary API key
-![Screenshot showing DA and AO view charges enabled.](./media/ea-portal-rest-apis/create-ea-generate-or-retrieve-api-key-enable-ao-do-view.png)
+1. Sign in to the Azure portal as an enterprise administrator.
+2. Select **Cost Management + Billing**.
+3. Select **Billing scopes** from the navigation menu and then select the billing account that you want to work with.
+4. In the navigation menu, select **Usage + Charges**.
+5. Select **Manage API Access Keys**.
+6. Select **Regenerate** to regenerate the key.
-The preceding steps give API access key holders with access to cost and pricing information in usage reports.
+### Revoke the primary or secondary API key
-### Pass keys in the API
+1. Sign in to the Azure portal as an enterprise administrator.
+2. Search for and select **Cost Management + Billing**.
+3. Select **Billing scopes** from the navigation menu and then select the billing account that you want to work with.
+4. In the navigation menu, select **Usage + Charges**.
+5. Select **Manage API Access Keys**.
+6. Select **Revoke** to revoke the key.
+
+### Allow API access to non-administrators
+
+If you want to give the API access keys to people who aren't enterprise administrators in your enrollment, perform the following steps.
+
+The steps give API access to key holders so they can view cost and pricing information in usage reports.
+
+1. In the left navigation window, select **Policies**.
+2. Select **On** under the DEPARTMENT ADMINS CAN VIEW CHARGES section and then select **Save**.
+3. Select **On** under the ACCOUNT OWNERS CAN VIEW CHARGES section and then select **Save**.
+ :::image type="content" source="./media/ea-portal-rest-apis/policies-view-charges.png" alt-text="Screenshot showing the Polices window where you change view charges options." lightbox="./media/ea-portal-rest-apis/policies-view-charges.png" :::
+
+## Pass keys in the API
Pass the API key for each call for authentication and authorization. Pass the following property in the HTTP headers:
Pass the API key for each call for authentication and authorization. Pass the fo
| Authorization | Specify the value in this format: **bearer {API\_KEY}** Example: bearer \<APIKey\> |
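For example, here's a minimal PowerShell sketch that calls the Balance and Summary API with an access key. The enrollment number and key are placeholders, and the v3 `balancesummary` route shown is an assumption based on the Swagger endpoint described below; confirm the exact route in the API reference.

```
# Minimal sketch. Replace the placeholders with your enrollment number and
# the access key generated in the Manage API Access Keys window.
$enrollment = "<enrollment-number>"
$apiKey     = "<API_KEY>"
$uri        = "https://consumption.azure.com/v3/enrollments/$enrollment/balancesummary"

# The key is passed in the Authorization header in the "bearer {API_KEY}" format.
$summary = Invoke-RestMethod -Uri $uri -Headers @{ Authorization = "bearer $apiKey" }
$summary
```

The same header pattern applies to the other reporting routes, such as usage details and price sheet.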
-### Swagger
+## Swagger
A Swagger endpoint is available at [Enterprise Reporting v3 APIs](https://consumption.azure.com/swagger/ui/index) for the following APIs. Swagger helps inspect the API. Use Swagger to generate client SDKs using [AutoRest](https://github.com/Azure/AutoRest) or [Swagger CodeGen](https://swagger.io/swagger-codegen/). Data from May 1, 2014 onward is available through the API.
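As an illustrative sketch only (the Swagger document URL is a placeholder you can locate from the Swagger UI page above, and the generator options are assumptions; check the AutoRest documentation for the flags your version supports), client generation looks like this:

```
# Generate a C# client from the Swagger (OpenAPI) document.
# <swagger-document-url> is a placeholder for the JSON document behind the Swagger UI.
autorest --input-file=<swagger-document-url> --csharp --output-folder=./EnterpriseReportingClient
```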
-### API response codes
+## API response codes
When you're using an API, response status codes are shown. The following table describes them.
When you're using an API, response status codes are shown. The following table d
| 400 | Bad Request | Invalid parameters - date ranges, EA numbers, and so on |
| 500 | Server Error | Unexpected error processing request |
-### Usage and billing data update frequency
+## Usage and billing data update frequency
Usage and billing data files are updated every 24 hours for the current billing month. However, data latency can occur for up to three days. For example, if usage is incurred on Monday, data might not appear in the data file until Thursday.
-### Azure service catalog
+## Azure service catalog
You can download all Azure services in the Azure portal as part of the Price Sheet download. For more information about downloading your price sheet, see [Download pricing for an Enterprise Agreement](ea-pricing.md#download-pricing-for-an-enterprise-agreement).
-### CSV data file details
+## CSV data file details
The following information describes the properties of API reports.
-#### Usage summary
+### Usage summary
JSON format is generated from the CSV report. As a result, the format is the same as the summary CSV format. The column names are welded together without spaces (for example, `Unit of Measure` becomes `UnitOfMeasure`), so you should deserialize into a data table when you consume the JSON summary data.
JSON format is generated from the CSV report. As a result, the format is same as
| Unit of Measure | UnitOfMeasure | UnitOfMeasure | Example values: Hours, GB, Events, Pushes, Unit, Unit Hours, MB, Daily Units |
| ResourceGroup | ResourceGroup | ResourceGroup | |
-#### Azure Marketplace report
+### Azure Marketplace report
| CSV column name | JSON column name | JSON new column |
| --- | --- | --- |
JSON format is generated from the CSV report. As a result, the format is same as
| Cost Center | CostCenters | CostCenter |
| Resource Group | ResourceGroup | ResourceGroup |
-#### Price sheet
+### Price sheet
| CSV column name | JSON column name | Comment |
| --- | --- | --- |
JSON format is generated from the CSV report. As a result, the format is same as
| Overage Unit Price | ConsumptionPrice | |
| Currency Code | CurrencyCode | |
-### Common API issues
+## Common API issues
As you use Azure Enterprise REST APIs, you might encounter any of the following common issues.
data-factory Connect Data Factory To Azure Purview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connect-data-factory-to-azure-purview.md
Last updated 10/25/2021
-# Connect Data Factory to Microsoft Purview (Preview)
+# Connect Data Factory to Microsoft Purview
[!INCLUDE[appliesto-adf-xxx-md](includes/appliesto-adf-xxx-md.md)]
databox-online Azure Stack Edge Gpu 2202 Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-2202-release-notes.md
The 2202 release has the following features and enhancements:
- **Multi-Access Edge Computing (MEC) and Virtual Network Functions (VNF) improvements**:
  - In this release, VM create and delete for VNF create and delete were parallelized. This has significantly reduced the creation time for VNFs that contain multiple VMs.
  - The VHD ingestion job resource clean up was moved out of VNF create and delete. This reduced the VNF creation and deletion times.
-- **Updates for Azure Arc and Edge container registry** - Azure Arc and Edge container registry versions were updated. For more information, see [About updates](azure-stack-edge-gpu-install-update.md#about-latest-update).
+- **Updates for Azure Arc and Edge container registry** - Azure Arc and Edge container registry versions were updated. For more information, see [About updates](azure-stack-edge-gpu-install-update.md#about-latest-updates).
- **Security fixes** - Starting this release, a pod security policy is set up on the Kubernetes cluster on your Azure Stack Edge device. If you are using root privileges in your containerized solution, you may experience some change in the behavior. No action is required on your part.
databox-online Azure Stack Edge Gpu 2207 Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-2207-release-notes.md
Previously updated : 11/09/2022 Last updated : 11/21/2022
The following release notes identify the critical open issues and the resolved i
The release notes are continuously updated, and as critical issues requiring a workaround are discovered, they're added. Before you deploy your device, carefully review the information contained in the release notes.
-This article applies to the **Azure Stack Edge 2207** release, which maps to software version number **2.2.2038.5916**. This software can be applied to your device if you're running at least Azure Stack Edge 2106 (2.2.1636.3457) software.
+This article applies to the **Azure Stack Edge 2207** release, which maps to software version number **2.2.2039.84**. This software can be applied to your device if you're running at least Azure Stack Edge 2106 (2.2.1636.3457) software.
## What's new
The 2207 release has the following features and enhancements:
- **Kubernetes version update** - This release contains a Kubernetes version update from 1.20.9 to v1.22.6.
-## Known issues in 2207 release
+## Known issues in this release
The following table provides a summary of known issues in this release.
databox-online Azure Stack Edge Gpu 2210 Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-2210-release-notes.md
The following release notes identify the critical open issues and the resolved i
The release notes are continuously updated, and as critical issues requiring a workaround are discovered, they're added. Before you deploy your device, carefully review the information contained in the release notes.
-This article applies to the **Azure Stack Edge 2210** release, which maps to software version **2.2.2111.1002**. This software can be applied to your device if you're running **Azure Stack Edge 2207 or later** (2.2.2038.5916).
+This article applies to the **Azure Stack Edge 2210** release, which maps to software version **2.2.2111.1002**. This software can be applied to your device if you're running **Azure Stack Edge 2207 or later** (2.2.2026.5318).
## What's new
databox-online Azure Stack Edge Gpu Install Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-install-update.md
This article describes the steps required to install update on your Azure Stack
The procedure described in this article was performed using a different version of software, but the process remains the same for the current software version.
-## About latest update
+## About latest updates
The current update is Update 2210. This update installs two updates: the device update followed by Kubernetes updates. The associated versions for this update are:
databox-online Azure Stack Edge Move To Self Service Iot Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-move-to-self-service-iot-edge.md
+
+ Title: Move workloads from Azure Stack Edge's managed IoT Edge to an IoT Edge solution on a Linux VM
+description: Describes steps to move workloads from Azure Stack Edge to a self-service IoT Edge solution on a Linux VM.
++++++ Last updated : 11/21/2022+
+#Customer intent: As an IT admin, I need to understand how to move an IoT Edge workload from native/managed Azure Stack Edge to a self-service IoT Edge solution on a Linux VM, so that I can efficiently manage my VMs.
++
+# Move workloads from Azure Stack Edge's managed IoT Edge to an IoT Edge solution on a Linux VM
++
+This article provides steps to move your managed IoT Edge workloads to IoT Edge running on a Linux VM on Azure Stack Edge. This article uses IoT Edge on an Ubuntu VM as an example. You can use other [supported Linux distributions](../iot-edge/support.md#linux-containers).
+
+> [!NOTE]
+> We recommend that you deploy the latest IoT Edge version in a Linux VM to run IoT Edge workloads on Azure Stack Edge. For more information about earlier versions of IoT Edge, see [IoT Edge v1.1 EoL: What does that mean for me?](https://techcommunity.microsoft.com/t5/internet-of-things-blog/iot-edge-v1-1-eol-what-does-that-mean-for-me/ba-p/3662137).
+
+## Workflow to deploy onto an IoT Edge VM
+
+The high-level workflow is as follows:
+
+1. Create an IoT Edge device identity in IoT Hub, then deploy a Linux VM and install the IoT Edge runtime on it using symmetric keys.
+
+1. Connect the newly deployed IoT Edge runtime to the IoT Edge device created in the previous step.
+
+1. From IoT Hub, redeploy IoT Edge modules onto the new IoT Edge device.
+
+1. Once your solution is running on IoT Edge on a Linux VM, you can remove the modules running on the native or managed IoT Edge on Azure Stack Edge. From IoT Hub, delete the IoT Edge device to remove the modules running on Azure Stack Edge.
+
+1. Optional: If you aren't using the Kubernetes cluster on Azure Stack Edge, you can delete the whole Kubernetes cluster.
+
+1. Optional: If you have leaf IoT devices communicating with IoT Edge on Kubernetes, this step documents how to make changes to communicate with the IoT Edge on a VM.
+
+## Step 1. Create an IoT Edge device on Linux using symmetric keys
+
+Create and provision an IoT Edge device on Linux using symmetric keys. For detailed steps, see [Create and provision an IoT Edge device on Linux using symmetric keys](../iot-edge/how-to-provision-single-device-linux-symmetric.md).
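+
+For reference, here's a minimal Azure CLI sketch of this step (runs from PowerShell or any shell and requires the `azure-iot` extension; the hub and device names are placeholders):
+
+```
+# Create an IoT Edge device identity in IoT Hub. Symmetric key authentication is the default.
+az iot hub device-identity create --hub-name <your-iot-hub> --device-id <new-edge-device> --edge-enabled
+
+# Retrieve the connection string used to provision the IoT Edge runtime on the VM.
+az iot hub device-identity connection-string show --hub-name <your-iot-hub> --device-id <new-edge-device>
+```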
+
+## Step 2. Install and provision an IoT Edge on a Linux VM
+
+Follow the steps at [Deploy IoT Edge on an Ubuntu VM on Azure Stack Edge](azure-stack-edge-gpu-deploy-iot-edge-linux-vm.md). For other supported Linux distributions, see [Linux containers](../iot-edge/support.md).
+
+## Step 3. Deploy Azure IoT Edge modules from the Azure portal
+
+Deploy Azure IoT modules to the new IoT Edge. For detailed steps, see [Deploy Azure IoT Edge modules from the Azure portal](../iot-edge/how-to-deploy-modules-portal.md).
+
+ With the latest IoT Edge version, you can deploy your IoT Edge modules at scale. For more information, see [Deploy IoT Edge modules at scale using the Azure portal](../iot-edge/how-to-deploy-at-scale.md).
+
+## Step 4. Remove Azure IoT Edge modules
+
+Once your modules are successfully running on the new IoT Edge instance on the VM, you can delete the IoT Edge device associated with the managed IoT Edge instance on Azure Stack Edge. From IoT Hub in the Azure portal, delete that IoT Edge device to remove its modules, as shown below.
+
+![Screenshot showing delete IoT Edge device from IoT Edge instance in Azure portal UI.](media/azure-stack-edge-move-to-self-service-iot-edge/azure-stack-edge-delete-iot-edge-device.png)
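+
+If you prefer the command line over the portal, the same step can be done with a minimal Azure CLI sketch (requires the `azure-iot` extension; the hub and device names are placeholders):
+
+```
+# Delete the IoT Edge device identity; this removes the modules that were deployed to it.
+az iot hub device-identity delete --hub-name <your-iot-hub> --device-id <old-edge-device>
+```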
+
+## Step 5. Optional: Remove the IoT Edge service
+
+If you aren't using the Kubernetes cluster on Azure Stack Edge, use the following steps to [remove the IoT Edge service](azure-stack-edge-gpu-manage-compute.md#remove-iot-edge-service). This action will remove modules running on the IoT Edge device, the IoT Edge runtime, and the Kubernetes cluster that hosts the IoT Edge runtime.
+
+From the Azure Stack Edge resource on Azure portal, under the Azure IoT Edge service, there's a **Remove** button to remove the Kubernetes cluster.
+
+> [!IMPORTANT]
+> Once the Kubernetes cluster is removed, there is no way to recover information from the Kubernetes cluster, whether it's IoT Edge-related or not.
+
+## Step 6. Optional: Configure an IoT Edge device as a transparent gateway
+
+If your IoT Edge device on Azure Stack Edge was configured as a gateway for downstream IoT devices, you must configure the IoT Edge running on the Linux VM as a transparent gateway. For more information, see [Configure an IoT Edge device as a transparent gateway](../iot-edge/how-to-create-transparent-gateway.md).
+
+For more information about configuring downstream IoT devices to connect to a newly deployed IoT Edge running on a Linux VM, see [Connect a downstream device to an Azure IoT Edge gateway](../iot-edge/how-to-connect-downstream-device.md).
+
+## Next steps
+
+[Deploy IoT Edge on an Ubuntu VM on Azure Stack Edge](azure-stack-edge-gpu-deploy-iot-edge-linux-vm.md)
defender-for-cloud Continuous Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/continuous-export.md
To view the event schemas of the exported data types, visit the [Log Analytics t
## Export data to an Azure Event hub or Log Analytics workspace in another tenant
-You can export data to an Azure Event hub or Log Analytics workspace in a different tenant, without using [Azure Lighthouse](/azure/lighthouse/overview.md). When collecting data into a tenant, you can analyze the data from one central location.
+You can export data to an Azure Event hub or Log Analytics workspace in a different tenant, without using [Azure Lighthouse](../lighthouse/overview.md). When collecting data into a tenant, you can analyze the data from one central location.
To export data to an Azure Event hub or Log Analytics workspace in a different tenant:
defender-for-cloud Defender For Containers Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-introduction.md
Learn more about:
- [Vulnerability assessment for Azure Container Registry (ACR)](defender-for-containers-vulnerability-assessment-azure.md) - [Vulnerability assessment for Amazon AWS Elastic Container Registry (ECR)](defender-for-containers-vulnerability-assessment-elastic.md)
-### View vulnerabilities for running images in Azure Container Registry (ACR)
-
-Defender for Cloud gives its customers the ability to prioritize the remediation of vulnerabilities in images that are currently being used within their environment using the [Running container images should have vulnerability findings resolved](https://portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/KubernetesRuntimeVisibilityRecommendationDetailsBlade/assessmentKey/41503391-efa5-47ee-9282-4eff6131462c/showSecurityCenterCommandBar~/false) recommendation.
-
-To provide findings for the recommendation, Defender for Cloud collects the inventory of your running containers that are collected by the Defender agent installed on your AKS clusters. Defender for Cloud correlates that inventory with the vulnerability assessment scan of images that are stored in ACR. The recommendation shows your running containers with the vulnerabilities associated with the images that are used by each container and provides vulnerability reports and remediation steps.
--
-Learn more about [viewing vulnerabilities for running images in (ACR)](defender-for-containers-vulnerability-assessment-azure.md).
-
## Run-time protection for Kubernetes nodes and clusters

Defender for Containers provides real-time threat protection for your containerized environments and generates alerts for suspicious activities. You can use this information to quickly remediate security issues and improve the security of your containers.

Threat protection at the cluster level is provided by the Defender agent and analysis of the Kubernetes audit logs. Examples of events at this level include exposed Kubernetes dashboards, creation of high-privileged roles, and the creation of sensitive mounts.
defender-for-cloud Defender For Containers Vulnerability Assessment Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-vulnerability-assessment-azure.md
To create a rule:
:::image type="content" source="./media/remediate-vulnerability-findings-vm/modify-rule.png" alt-text="Modify or delete an existing rule."::: 1. To view or delete the rule, select the ellipsis menu ("...").
+## View vulnerabilities for images running on your AKS clusters
+
+Defender for Cloud gives its customers the ability to prioritize the remediation of vulnerabilities in images that are currently being used within their environment using the [Running container images should have vulnerability findings resolved](https://portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/KubernetesRuntimeVisibilityRecommendationDetailsBlade/assessmentKey/41503391-efa5-47ee-9282-4eff6131462c/showSecurityCenterCommandBar~/false) recommendation.
+
+To provide findings for the recommendation, Defender for Cloud collects the inventory of your running containers that are collected by the Defender agent installed on your AKS clusters. Defender for Cloud correlates that inventory with the vulnerability assessment scan of images that are stored in ACR. The recommendation shows your running containers with the vulnerabilities associated with the images that are used by each container and provides vulnerability reports and remediation steps.
+++

## FAQ

### How does Defender for Containers scan an image?
defender-for-cloud Defender For Databases Enable Cosmos Protections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-databases-enable-cosmos-protections.md
You can enable Microsoft Defender for Cloud on a specific Azure Cosmos DB accoun
### [ARM template](#tab/arm-template)
-Use an Azure Resource Manager template to deploy an Azure Cosmos DB account with Microsoft Defender for Azure Cosmos DB enabled. For more information, see [Create an Azure Cosmos DB account with Microsoft Defender for Azure Cosmos DB enabled](https://azure.microsoft.com/resources/templates/microsoft-defender-cosmosdb-create-account/).
+Use an Azure Resource Manager template to deploy an Azure Cosmos DB account with Microsoft Defender for Azure Cosmos DB enabled. For more information, see [Create an Azure Cosmos DB account with Microsoft Defender for Azure Cosmos DB enabled](https://github.com/azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.documentdb/microsoft-defender-cosmosdb-create-account).
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Updates in August include:
- [Vulnerabilities for running images are now visible with Defender for Containers on your Windows containers](#vulnerabilities-for-running-images-are-now-visible-with-defender-for-containers-on-your-windows-containers)
- [Azure Monitor Agent integration now in preview](#azure-monitor-agent-integration-now-in-preview)
- [Deprecated VM alerts regarding suspicious activity related to a Kubernetes cluster](#deprecated-vm-alerts-regarding-suspicious-activity-related-to-a-kubernetes-cluster)
+

### Vulnerabilities for running images are now visible with Defender for Containers on your Windows containers

Defender for Containers now shows vulnerabilities for running Windows containers. When vulnerabilities are detected, Defender for Cloud generates the following security recommendation listing the detected issues: [Running container images should have vulnerability findings resolved](https://portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/KubernetesRuntimeVisibilityRecommendationDetailsBlade/assessmentKey/41503391-efa5-47ee-9282-4eff6131462c/showSecurityCenterCommandBar~/false).
-Learn more about [viewing vulnerabilities for running images](defender-for-containers-introduction.md#view-vulnerabilities-for-running-images-in-azure-container-registry-acr).
+Learn more about [viewing vulnerabilities for running images](defender-for-containers-vulnerability-assessment-azure.md#view-vulnerabilities-for-images-running-on-your-aks-clusters).
### Azure Monitor Agent integration now in preview
defender-for-iot Alert Engine Messages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/alert-engine-messages.md
Several alerts are disabled by default, as indicated by asterisks (*) in the tab
If you disable alerts that are referenced in other places, such as alert forwarding rules, make sure to update those references as needed.
-See [What's new in Microsoft Defender for IoT?](release-notes.md#whats-new-in-microsoft-defender-for-iot) for detailed information about changes made to alerts.
- ## Supported alert types | Alert type | Description |
defender-for-iot Hpe Proliant Dl20 Legacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/hpe-proliant-dl20-legacy.md
+
+ Title: HPE ProLiant DL20 for OT monitoring in enterprise deployments - Microsoft Defender for IoT
+description: Learn about the HPE ProLiant DL20 appliance when used for OT monitoring with Microsoft Defender for IoT in enterprise deployments.
Last updated : 10/30/2022+++
+# HPE ProLiant DL20 Gen10
+
+This article describes the **HPE ProLiant DL20 Gen10** appliance for OT sensors in an enterprise deployment.
+
+Legacy appliances are certified but aren't currently offered as pre-configured appliances.
+
+| Appliance characteristic |Details |
+|||
+|**Hardware profile** | E1800 |
+|**Performance** | Max bandwidth: 1 Gbps <br>Max devices: 10,000 |
+|**Physical specifications** | Mounting: 1U <br> Ports: 8x RJ45 or 6x SFP (OPT)|
+|**Status** | Supported, not available pre-configured |
+
+The following image shows a sample of the HPE ProLiant DL20 front panel:
++
+The following image shows a sample of the HPE ProLiant DL20 back panel:
++
+## Specifications
+
+|Component |Specifications|
+|||
+|Chassis |1U rack server |
+|Dimensions |Four 3.5" chassis: 4.29 x 43.46 x 38.22 cm / 1.70 x 17.11 x 15.05 in |
+|Weight | Max 7.9 kg / 17.41 lb |
+
+## DL20 Gen10 BOM
+
+| Quantity | PN| Description: high end |
+|--|--|--|
+|1| P06963-B21 | HPE DL20 Gen10 4SFF CTO Server |
+|1| P17104-L21 | HPE DL20 Gen10 E-2234 FIO Kit |
+|2| 879507-B21 | HPE 16-GB 2Rx8 PC4-2666V-E STND Kit |
+|3| 655710-B21 | HPE 1-TB SATA 7.2 K SFF SC DS HDD |
+|1| P06667-B21 | HPE DL20 Gen10 x8x16 FLOM Riser Kit |
+|1| 665240-B21 | HPE Ethernet 1-Gb 4-port 366FLR Adapter |
+|1| 782961-B21 | HPE 12-W Smart Storage Battery |
+|1| 869081-B21 | HPE Smart Array P408i-a SR G10 LH Controller |
+|2| 865408-B21 | HPE 500-W FS Plat Hot Plug LH Power Supply Kit |
+|1| 512485-B21 | HPE iLO Adv 1-Server License 1 Year Support |
+|1| P06722-B21 | HPE DL20 Gen10 RPS Enablement FIO Kit |
+|1| 775612-B21 | HPE 1U Short Friction Rail Kit |
+
+## Port expansion
+
+Optional modules for port expansion include:
+
+|Location |Type|Specifications|
+|-- | --| |
+| PCI Slot 1 (Low profile)| Quad Port Ethernet NIC| 811546-B21 - HPE 1 GbE 4p BASE-T I350 Adapter SI |
+| PCI Slot 1 (Low profile) | DP F/O NIC|727054-B21 - HPE 10 GbE 2p FLR-SFP+ X710 Adapter|
+| PCI Slot 2 (High profile)| Quad Port Ethernet NIC|811546-B21 - HPE 1 GbE 4p BASE-T I350 Adapter SI|
+| PCI Slot 2 (High profile)|DP F/O NIC| 727054-B21 - HPE 10 GbE 2p FLR-SFP+ X710 Adapter|
+| PCI Slot 2 (High profile)|Quad Port F/O NIC| 869585-B21 - HPE 10 GbE 4p SFP+ X710 Adapter SI|
+| SFPs for Fiber Optic NICs|MultiMode, Short Range|455883-B21 - HPE BLc 10G SFP+ SR Transceiver|
+| SFPs for Fiber Optic NICs|SingleMode, Long Range | 455886-B21 - HPE BLc 10G SFP+ LR Transceiver|
+
+## HPE ProLiant DL20 Gen10 installation
+
+This section describes how to install Defender for IoT software on the HPE ProLiant DL20 Gen10 appliance.
+
+Installation includes:
+
+- Enabling remote access and updating the default administrator password
+- Configuring iLO port on network port 1
+- Configuring BIOS and RAID settings
+- Installing Defender for IoT software
+
+> [!NOTE]
+> Installation procedures are only relevant if you need to re-install software on a pre-configured device, or if you buy your own hardware and configure the appliance yourself.
+>
+
+### Enable remote access and update the password
+
+Use the following procedure to set up network options and update the default password.
+
+**To enable and update the password**:
+
+1. Connect a screen and a keyboard to the HPE appliance, turn on the appliance, and press **F9**.
+
+ :::image type="content" source="../media/tutorial-install-components/hpe-proliant-screen-v2.png" alt-text="Screenshot that shows the HPE ProLiant window.":::
+
+1. Go to **System Utilities** > **System Configuration** > **iLO 5 Configuration Utility** > **Network Options**.
+
+ :::image type="content" source="../media/tutorial-install-components/system-configuration-window-v2.png" alt-text="Screenshot that shows the System Configuration window.":::
+
+ 1. Select **Shared Network Port-LOM** from the **Network Interface Adapter** field.
+
+ 1. Set **Enable DHCP** to **Off**.
+
+ 1. Enter the IP address, subnet mask, and gateway IP address.
+
+1. Select **F10: Save**.
+
+1. Select **Esc** to get back to the **iLO 5 Configuration Utility**, and then select **User Management**.
+
+1. Select **Edit/Remove User**. The administrator is the only default user defined.
+
+1. Change the default password and select **F10: Save**.
+
+### Configure the HPE BIOS
+
+This procedure describes how to update the HPE BIOS configuration for your OT deployment.
+
+**To configure the HPE BIOS**:
+
+1. Select **System Utilities** > **System Configuration** > **BIOS/Platform Configuration (RBSU)**.
+
+1. In the **BIOS/Platform Configuration (RBSU)** form, select **Boot Options**.
+
+1. Change **Boot Mode** to **Legacy BIOS Mode**, and then select **F10: Save**.
+
+1. Select **Esc** twice to close the **System Configuration** form.
+
+1. Select **Embedded RAID 1: HPE Smart Array P408i-a SR Gen 10** > **Array Configuration** > **Create Array**.
+
+1. In the **Create Array** form, select all four disk options, and on the next page select **RAID10**.
+
+> [!NOTE]
+> For **Data-at-Rest** encryption, see the HPE guidance for activating RAID Secure Encryption or using Self-Encrypting-Drives (SED).
+>
+
+### Install Defender for IoT software on the HPE ProLiant DL20 Gen10
+
+This procedure describes how to install Defender for IoT software on the HPE ProLiant DL20 Gen10.
+
+The installation process takes about 20 minutes. After the installation, the system is restarted several times.
+
+**To install Defender for IoT software**:
+
+1. Connect the screen and keyboard to the appliance, and then connect to the CLI.
+
+1. Connect an external CD or disk-on-key that contains the software you downloaded from the Azure portal.
+
+1. Start the appliance.
+
+1. Continue with the generic procedure for installing Defender for IoT software. For more information, see [Defender for IoT software installation](../how-to-install-software.md).
+
+## Next steps
+
+Continue understanding system requirements for physical or virtual appliances. For more information, see [Which appliances do I need?](../ot-appliance-sizing.md).
+
+Then, use any of the following procedures to continue:
+
+- [Purchase sensors or download software for sensors](../onboard-sensors.md#purchase-sensors-or-download-software-for-sensors)
+- [Download software for an on-premises management console](../how-to-manage-the-on-premises-management-console.md#download-software-for-the-on-premises-management-console)
+- [Install software](../how-to-install-software.md)
defender-for-iot Hpe Proliant Dl20 Plus Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/hpe-proliant-dl20-plus-enterprise.md
Title: HPE ProLiant DL20/DL20 Plus for OT monitoring in enterprise deployments- Microsoft Defender for IoT
-description: Learn about the HPE ProLiant DL20/DL20 Plus appliance when used for OT monitoring with Microsoft Defender for IoT in enterprise deployments.
+ Title: HPE ProLiant DL20 Gen10 Plus for OT monitoring in enterprise deployments - Microsoft Defender for IoT
+description: Learn about the HPE ProLiant DL20 Gen10 Plus appliance when used for OT monitoring with Microsoft Defender for IoT in enterprise deployments.
Last updated 04/24/2022
+# HPE ProLiant DL20 Gen10 Plus (4SFF)
-# HPE ProLiant DL20 Gen10/DL20 Gen10 Plus
-
-This article describes the **HPE ProLiant DL20 Gen10** or **HPE ProLiant DL20 Gen10 Plus** appliance for OT sensors in an enterprise deployment.
+This article describes the **HPE ProLiant DL20 Gen10 Plus** appliance for OT sensors in an enterprise deployment.
The HPE ProLiant DL20 Gen10 Plus is also available for the on-premises management console.
The HPE ProLiant DL20 Gen10 Plus is also available for the on-premises managemen
|**Hardware profile** | E1800 |
|**Performance** | Max bandwidth: 1 Gbps <br>Max devices: 10,000 |
|**Physical specifications** | Mounting: 1U <br> Ports: 8x RJ45 or 6x SFP (OPT)|
-|**Status** | Supported, Available preconfigured |
+|**Status** | Supported, available pre-configured |
The following image shows a sample of the HPE ProLiant DL20 front panel:
The following image shows a sample of the HPE ProLiant DL20 back panel:
:::image type="content" source="../media/tutorial-install-components/hpe-proliant-dl20-back-panel-v2.png" alt-text="Photo of the back panel of the HPE ProLiant DL20." border="false":::
-### Specifications
+## Specifications
+
+|Component|Technical specifications|
+|-|-|
+|Chassis|1U rack server|
+|Physical Characteristics | HPE DL20 Gen10+ NHP 4SFF CTO Server |
+|Processor| Intel Xeon E-2334 <br> 3.4 GHz 4C 65 W|
+|Chipset|Intel C256 |
+|Memory|2x 16-GB Dual Rank x8 DDR4-3200|
+|Storage|4x 1-TB SATA 6G Midline 7.2 K SFF (2.5 in) – RAID 10 |
+|Network controller|On-board: 2x 1 Gb|
+|External| 1 x HPE Ethernet 1-Gb 4-port 366FLR Adapter|
+|On-board| iLO Port Card 1 Gb|
+|Management|HPE iLO Advanced|
+|Device access| Front: One USB 3.0 1 x USB iLO Service Port<br> Rear: Two USBs 3.0|
+|Internal| One USB 3.0|
+|Power|2x Hot Plug Power Supply 290 W|
+|Rack support|HPE 1U Short Friction Rail Kit|
+
+## DL20 Gen10 Plus (4SFF) - Bill of Materials
-|Component |Specifications|
-|||
-|Chassis |1U rack server |
-|Dimensions |Four 3.5" chassis: 4.29 x 43.46 x 38.22 cm / 1.70 x 17.11 x 15.05 in |
-|Weight | Max 7.9 kg / 17.41 lb |
-
-**DL20 Gen10 BOM**
-
-| Quantity | PN| Description: high end |
-|--|--|--|
-|1| P06963-B21 | HPE DL20 Gen10 4SFF CTO Server |
-|1| P17104-L21 | HPE DL20 Gen10 E-2234 FIO Kit |
-|2| 879507-B21 | HPE 16-GB 2Rx8 PC4-2666V-E STND Kit |
-|3| 655710-B21 | HPE 1-TB SATA 7.2 K SFF SC DS HDD |
-|1| P06667-B21 | HPE DL20 Gen10 x8x16 FLOM Riser Kit |
-|1| 665240-B21 | HPE Ethernet 1-Gb 4-port 366FLR Adapter |
-|1| 782961-B21 | HPE 12-W Smart Storage Battery |
-|1| 869081-B21 | HPE Smart Array P408i-a SR G10 LH Controller |
-|2| 865408-B21 | HPE 500-W FS Plat Hot Plug LH Power Supply Kit |
-|1| 512485-B21 | HPE iLO Adv 1-Server License 1 Year Support |
-|1| P06722-B21 | HPE DL20 Gen10 RPS Enablement FIO Kit |
-|1| 775612-B21 | HPE 1U Short Friction Rail Kit |
-
-**DL20 Gen10 Plus BOM**:
+|Quantity|PN|Description|
+|-||-|
+|1| P44111-B21 | HPE DL20 Gen10+ 4SFF CTO Server|
+|1| P45252-B21 | Intel Xeon E-2334 FIO CPU for HPE|
+|4| P28610-B21 | HPE 1TB SATA 7.2K SFF BC HDD|
+|2| P43019-B21 | HPE 16GB 1Rx8 PC4-3200AA-E Standard Kit|
+|1| 869079-B21 | HPE Smart Array E208i-a SR G10 LH Ctrlr (RAID10)|
+|1| P21106-B21 | INT I350 1GbE 4p BASE-T Adapter|
+|1| P45948-B21 | HPE DL20 Gen10+ RPS FIO Enable Kit|
+|2| 865408-B21 | HPE 500W FS Plat Hot Plug LH Power Supply Kit|
+|1| 775612-B21 | HPE 1U Short Friction Rail Kit|
+|1| 512485-B21 | HPE iLO Adv 1 Server License 1 year support|
+|1| P46114-B21 | HPE DL20 Gen10+ 2x8 LP FIO Riser Kit|
+
+## Optional storage arrays
|Quantity|PN|Description|
|-||-|
-|1| P44111-B21| HPE DL20 Gen10+ 4SFF CTO Server|
-|1| P45252-B21| Intel Xeon E-2334 FIO CPU for HPE|
-|1| 869081-B21| HPE Smart Array P408i-a SR G10 LH Controller|
-|1| 782961-B21| HPE 12W Smart Storage Battery|
-|1| P45948-B21| HPE DL20 Gen10+ RPS FIO Enable Kit|
-|2| 865408-B21| HPE 500W FS Plat Hot Plug LH Power Supply Kit|
-|1| 775612-B21| HPE 1U Short Friction Rail Kit|
-|1| 512485-B21| HPE iLO Adv 1 Server License 1 year support|
-|1| P46114-B21| HPE DL20 Gen10+ 2x8 LP FIO Riser Kit|
-|1| P21106-B21| INT I350 1GbE 4p BASE-T Adapter|
-|3| P28610-B21| HPE 1TB SATA 7.2K SFF BC HDD|
-|2| P43019-B21| HPE 16GB 1Rx8 PC4-3200AA-E Standard Kit|
+|1| P26325-B21 | Broadcom MegaRAID MR216i-a x16 Lanes without Cache NVMe/SAS 12G Controller (RAID5)<br><br>**Note**: This RAID controller occupies the PCIe expansion slot and doesn't allow networking port expansion |
## Port expansion

Optional modules for port expansion include:

|Location |Type|Specifications|
-|-- | --| |
-| PCI Slot 1 (Low profile)| Quad Port Ethernet NIC| 811546-B21 - HPE 1 GbE 4p BASE-T I350 Adapter SI |
-| PCI Slot 1 (Low profile) | DP F/O NIC|727054-B21 - HPE 10 GbE 2p FLR-SFP+ X710 Adapter|
-| PCI Slot 2 (High profile)| Quad Port Ethernet NIC|811546-B21 - HPE 1 GbE 4p BASE-T I350 Adapter SI|
-| PCI Slot 2 (High profile)|DP F/O NIC| 727054-B21 - HPE 10 GbE 2p FLR-SFP+ X710 Adapter|
-| PCI Slot 2 (High profile)|Quad Port F/O NIC| 869585-B21 - HPE 10 GbE 4p SFP+ X710 Adapter SI|
+|--|--||
+| PCI Slot 1 (Low profile) | DP F/O NIC |P26262-B21 - Broadcom BCM57414 Ethernet 10/25Gb 2-port SFP28 Adapter for HPE |
+| PCI Slot 1 (Low profile) | DP F/O NIC |P28787-B21 - Intel X710-DA2 Ethernet 10Gb 2-port SFP+ Adapter for HPE |
+| PCI Slot 2 (High profile) | Quad Port Ethernet NIC| P21106-B21 - Intel I350-T4 Ethernet 1Gb 4-port BASE-T Adapter for HPE |
+| PCI Slot 2 (High profile) | DP F/O NIC |P26262-B21 - Broadcom BCM57414 Ethernet 10/25Gb 2-port SFP28 Adapter for HPE |
+| PCI Slot 2 (High profile) | DP F/O NIC |P28787-B21 - Intel X710-DA2 Ethernet 10Gb 2-port SFP+ Adapter for HPE |
| SFPs for Fiber Optic NICs|MultiMode, Short Range|455883-B21 - HPE BLc 10G SFP+ SR Transceiver|
| SFPs for Fiber Optic NICs|SingleMode, Long Range | 455886-B21 - HPE BLc 10G SFP+ LR Transceiver|
-## HPE ProLiant DL20 Gen10 / HPE ProLiant DL20 Gen10 Plus installation
+## HPE ProLiant DL20 Gen10 Plus installation
-This section describes how to install Defender for IoT software on the HPE ProLiant DL20 Gen10 or HPE ProLiant DL20 Gen10 Plus appliance.
+This section describes how to install Defender for IoT software on the HPE ProLiant DL20 Gen10 Plus appliance.
Installation includes:

- Enabling remote access and updating the default administrator password
- Configuring iLO port on network port 1
-- Configuring BIOS and RAID settings
+- Configuring BIOS and RAID10 settings
- Installing Defender for IoT software

> [!NOTE]
This procedure describes how to update the HPE BIOS configuration for your OT de
1. Select **Esc** twice to close the **System Configuration** form.
-1. Select **Embedded RAID 1: HPE Smart Array P408i-a SR Gen 10** > **Array Configuration** > **Create Array**.
+1. Select **Embedded RAID 1: HPE Smart Array E208i-a SR Gen 10** > **Array Configuration** > **Create Array**.
-1. In the **Create Array** form, select all the options. Three options are available for the **Enterprise** appliance.
+1. In the **Create Array** form, select all four disk options, and on the next page select **RAID10**.
> [!NOTE]
-> For **Data-at-Rest** encryption, see the HPE guidance for activating RAID Secure Encryption or using Self-Encrypting-Drives (SED).
+> For **Data-at-Rest** encryption, see the HPE guidance for activating RAID SR Secure Encryption or using Self-Encrypting-Drives (SED).
>
-### Install Defender for IoT software on the HPE ProLiant DL20 Gen10 or HPE ProLiant DL20 Gen10 Plus
+### Install Defender for IoT software on the HPE ProLiant DL20 Gen10 Plus
-This procedure describes how to install Defender for IoT software on the HPE ProLiant DL20 Gen10 or HPE ProLiant DL20 Gen10 Plus.
+This procedure describes how to install Defender for IoT software on the HPE ProLiant DL20 Gen10 Plus.
The installation process takes about 20 minutes. After the installation, the system is restarted several times.
defender-for-iot Hpe Proliant Dl20 Plus Smb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/hpe-proliant-dl20-plus-smb.md
Title: HPE ProLiant DL20 Gen10/DL20 Gen10 Plus (NHP 2LFF) for OT monitoring in SMB deployments- Microsoft Defender for IoT
-description: Learn about the HPE ProLiant DL20 Gen10/DL20 Gen10 Plus appliance when used for in SMB deployments for OT monitoring with Microsoft Defender for IoT.
+ Title: HPE ProLiant DL20 Gen10 Plus (NHP 2LFF) for OT monitoring in SMB deployments - Microsoft Defender for IoT
+description: Learn about the HPE ProLiant DL20 Gen10 Plus appliance when used for in SMB deployments for OT monitoring with Microsoft Defender for IoT.
Last updated 04/24/2022
-# HPE ProLiant DL20 Gen10/DL20 Gen10 Plus (NHP 2LFF) for SMB deployments
+# HPE ProLiant DL20 Gen10 Plus (NHP 2LFF)
-This article describes the **HPE ProLiant DL20 Gen10** or **HPE ProLiant DL20 Gen10 Plus** appliance for OT sensors in an SBM deployment.
+This article describes the **HPE ProLiant DL20 Gen10 Plus** appliance for OT sensors monitoring production lines.
The HPE ProLiant DL20 Gen10 Plus is also available for the on-premises management console.
The HPE ProLiant DL20 Gen10 Plus is also available for the on-premises managemen
|**Hardware profile** | L500|
|**Performance** | Max bandwidth: 200 Mbps <br>Max devices: 1,000 |
|**Physical specifications** | Mounting: 1U<br>Ports: 4x RJ45|
-|**Status** | Supported; Available as pre-configured |
+|**Status** | Supported; available pre-configured |
The following image shows a sample of the HPE ProLiant DL20 Gen10 front panel:
The following image shows a sample of the HPE ProLiant DL20 Gen10 back panel:
|Component|Technical specifications| |-|-| |Chassis|1U rack server|
-|Dimensions |4.32 x 43.46 x 38.22 cm / 1.70 x 17.11 x 15.05 in|
-|Weight|7.88 kg / 17.37 lb|
-|Processor| Intel Xeon E-2224 <br> 3.4 GHz 4C 71 W|
-|Chipset|Intel C242|
-|Memory|One 8-GB Dual Rank x8 DDR4-2666|
-|Storage|Two 1-TB SATA 6G Midline 7.2 K SFF (2.5 in) – RAID 1 with Smart Array P208i-a|
-|Network controller|On-board: Two 1 Gb|
-|On-board| iLO Port Card 1 Gb|
+|Physical Characteristics | HPE DL20 Gen10+ NHP 2LFF CTO Server |
+|Processor| Intel Xeon E-2334 <br> 3.4 GHz 4C 65 W|
+|Chipset|Intel C256|
+|Memory|1x 8-GB Dual Rank x8 DDR4-3200|
+|Storage|4x 1-TB SATA 6G Midline 7.2 K SFF (2.5 in) – RAID 1 |
+|Network controller|On-board: 2x 1 Gb|
|External| 1 x HPE Ethernet 1-Gb 4-port 366FLR Adapter|
+|On-board| iLO Port Card 1 Gb|
|Management|HPE iLO Advanced|
|Device access| Front: One USB 3.0 1 x USB iLO Service Port<br> Rear: Two USBs 3.0|
|Internal| One USB 3.0|
|Power|Hot Plug Power Supply 290 W|
|Rack support|HPE 1U Short Friction Rail Kit|
-## Appliance BOM
+## DL20 Gen10 Plus (NHP 2LFF) - Bill of Materials
+
+|Quantity|PN|Description|
+|-||-|
+|1| P44111-B21 | HPE DL20 Gen10+ NHP 2LFF CTO Server|
+|1| P45252-B21 | Intel Xeon E-2334 FIO CPU for HPE|
+|2| P28610-B21 | HPE 1TB SATA 7.2K SFF BC HDD|
+|1| P43016-B21 | HPE 8GB 1Rx8 PC4-3200AA-E Standard Kit|
+|1| 869079-B21 | HPE Smart Array E208i-a SR G10 LH Ctrlr (RAID10)|
+|1| P21106-B21 | INT I350 1GbE 4p BASE-T Adapter|
+|1| P45948-B21 | HPE DL20 Gen10+ RPS FIO Enable Kit|
+|1| 865408-B21 | HPE 500W FS Plat Hot Plug LH Power Supply Kit|
+|1| 775612-B21 | HPE 1U Short Friction Rail Kit|
+|1| 512485-B21 | HPE iLO Adv 1 Server License 1 year support|
+|1| P46114-B21 | HPE DL20 Gen10+ 2x8 LP FIO Riser Kit|
+
+## Optional storage arrays
-|PN|Description|Quantity|
-|:-|:-|:-|
-|P06961-B21|HPE DL20 Gen10 NHP 2LFF CTO Server|1|
-|P17102-L21|HPE DL20 Gen10 E-2224 FIO Kit|1|
-|879505-B21|HPE 8-GB 1Rx8 PC4-2666V-E Standard Kit|1|
-|801882-B21|HPE 1-TB SATA 7.2 K LFF RW HDD|2|
-|P06667-B21|HPE DL20 Gen10 x8x16 FLOM Riser Kit|1|
-|665240-B21|HPE Ethernet 1-Gb 4-port 366FLR Adapter|1|
-|869079-B21|HPE Smart Array E208i-a SR G10 LH Controller|1|
-|P21649-B21|HPE DL20 Gen10 Plat 290 W FIO PSU Kit|1|
-|P06683-B21|HPE DL20 Gen10 M.2 SATA/LFF AROC Cable Kit|1|
-|512485-B21|HPE iLO Adv 1-Server License 1 Year Support|1|
-|775612-B21|HPE 1U Short Friction Rail Kit|1|
+|Quantity|PN|Description|
+|-||-|
+|1| P26325-B21 | Broadcom MegaRAID MR216i-a x16 Lanes without Cache NVMe/SAS 12G Controller (RAID5)<br><br>**Note**: This RAID controller occupies the PCIe expansion slot and doesn't allow networking port expansion |
-## HPE ProLiant DL20 Gen10/HPE ProLiant DL20 Gen10 Plus installation
+## Port expansion
-This section describes how to install Defender for IoT software on the HPE ProLiant DL20 Gen10 or HPE ProLiant DL20 Gen10 Plus appliance.
+Optional modules for port expansion include:
+
+|Location |Type|Specifications|
+|--|--||
+| PCI Slot 1 (Low profile) | DP F/O NIC |P26262-B21 - Broadcom BCM57414 Ethernet 10/25Gb 2-port SFP28 Adapter for HPE |
+| PCI Slot 1 (Low profile) | DP F/O NIC |P28787-B21 - Intel X710-DA2 Ethernet 10Gb 2-port SFP+ Adapter for HPE |
+| PCI Slot 2 (High profile) | Quad Port Ethernet NIC| P21106-B21 - Intel I350-T4 Ethernet 1Gb 4-port BASE-T Adapter for HPE |
+| PCI Slot 2 (High profile) | DP F/O NIC |P26262-B21 - Broadcom BCM57414 Ethernet 10/25Gb 2-port SFP28 Adapter for HPE |
+| PCI Slot 2 (High profile) | DP F/O NIC |P28787-B21 - Intel X710-DA2 Ethernet 10Gb 2-port SFP+ Adapter for HPE |
+| SFPs for Fiber Optic NICs|MultiMode, Short Range|455883-B21 - HPE BLc 10G SFP+ SR Transceiver|
+| SFPs for Fiber Optic NICs|SingleMode, Long Range | 455886-B21 - HPE BLc 10G SFP+ LR Transceiver|
+
+## HPE ProLiant DL20 Gen10 Plus installation
+
+This section describes how to install Defender for IoT software on the HPE ProLiant DL20 Gen10 Plus appliance.
Installation includes:

- Enabling remote access and updating the default administrator password
- Configuring iLO port on network port 1
-- Configuring BIOS and RAID settings
+- Configuring BIOS and RAID1 settings
- Installing Defender for IoT software

> [!NOTE]
-> Installation procedures are only relevant if you need to re-install software on a preconfigured device, or if you buy your own hardware and configure the appliance yourself.
+> Installation procedures are only relevant if you need to re-install software on a pre-configured device, or if you buy your own hardware and configure the appliance yourself.
>
-
### Enable remote access and update the password

Use the following procedure to set up network options and update the default password.
This procedure describes how to update the HPE BIOS configuration for your OT de
1. Select **Esc** twice to close the **System Configuration** form.
-1. Select **Embedded RAID 1: HPE Smart Array P208i-a SR Gen 10** > **Array Configuration** > **Create Array**.
+1. Select **Embedded RAID 1: HPE Smart Array E208i-a SR Gen 10** > **Array Configuration** > **Create Array**.
1. Select **Proceed to Next Form**.
-1. In the **Set RAID Level** form, set the level to **RAID 5** for enterprise deployments and **RAID 1** for SMB deployments.
+1. In the **Set RAID Level** form, set the level to **RAID 1**.
1. Select **Proceed to Next Form**.
This procedure describes how to update the HPE BIOS configuration for your OT de
:::image type="content" source="../media/tutorial-install-components/boot-override-window-two-v2.png" alt-text="Screenshot that shows the second Boot Override window.":::
-### Install Defender for IoT software on the HPE ProLiant DL20 Gen10 or HPE ProLiant DL20 Gen10 Plus
+### Install Defender for IoT software on the HPE ProLiant DL20 Gen10 Plus
-This procedure describes how to install Defender for IoT software on the HPE ProLiant DL20 Gen10 or HPE ProLiant DL20 Gen10 Plus.
+This procedure describes how to install Defender for IoT software on the HPE ProLiant DL20 Gen10 Plus.
The installation process takes about 20 minutes. After the installation, the system is restarted several times.
defender-for-iot Hpe Proliant Dl20 Smb Legacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/hpe-proliant-dl20-smb-legacy.md
+
+ Title: HPE ProLiant DL20 Gen10 (NHP 2LFF) for OT monitoring in SMB deployments- Microsoft Defender for IoT
+description: Learn about the HPE ProLiant DL20 Gen10 appliance when used for in SMB deployments for OT monitoring with Microsoft Defender for IoT.
Last updated : 10/30/2022+++
+# HPE ProLiant DL20 Gen10 (NHP 2LFF)
+
+This article describes the **HPE ProLiant DL20 Gen10** appliance for OT sensors monitoring production lines.
+
+Legacy appliances are certified but are not currently offered as pre-configured appliances.
+
+| Appliance characteristic |Details |
+|||
+|**Hardware profile** | L500|
+|**Performance** | Max bandwidth: 200 Mbps <br>Max devices: 1,000 |
+|**Physical specifications** | Mounting: 1U<br>Ports: 4x RJ45|
+|**Status** | Supported, not available pre-configured |
+
+The following image shows a sample of the HPE ProLiant DL20 Gen10 front panel:
++
+The following image shows a sample of the HPE ProLiant DL20 Gen10 back panel:
++
+## Specifications
+
+|Component|Technical specifications|
+|-|-|
+|Chassis|1U rack server|
+|Dimensions |4.32 x 43.46 x 38.22 cm / 1.70 x 17.11 x 15.05 in|
+|Weight|7.88 kg / 17.37 lb|
+|Processor| Intel Xeon E-2224 <br> 3.4 GHz 4C 71 W|
+|Chipset|Intel C242|
+|Memory|One 8-GB Dual Rank x8 DDR4-2666|
+|Storage|Two 1-TB SATA 6G Midline 7.2 K SFF (2.5 in) – RAID 1 with Smart Array P208i-a|
+|Network controller|On-board: Two 1 Gb|
+|On-board| iLO Port Card 1 Gb|
+|External| 1 x HPE Ethernet 1-Gb 4-port 366FLR Adapter|
+|Management|HPE iLO Advanced|
+|Device access| Front: One USB 3.0 1 x USB iLO Service Port<br> Rear: Two USBs 3.0|
+|Internal| One USB 3.0|
+|Power|Hot Plug Power Supply 290 W|
+|Rack support|HPE 1U Short Friction Rail Kit|
+
+## Appliance BOM
+
+|PN|Description|Quantity|
+|:-|:-|:-|
+|P06961-B21|HPE DL20 Gen10 NHP 2LFF CTO Server|1|
+|P17102-L21|HPE DL20 Gen10 E-2224 FIO Kit|1|
+|879505-B21|HPE 8-GB 1Rx8 PC4-2666V-E Standard Kit|1|
+|801882-B21|HPE 1-TB SATA 7.2 K LFF RW HDD|2|
+|P06667-B21|HPE DL20 Gen10 x8x16 FLOM Riser Kit|1|
+|665240-B21|HPE Ethernet 1-Gb 4-port 366FLR Adapter|1|
+|869079-B21|HPE Smart Array E208i-a SR G10 LH Controller|1|
+|P21649-B21|HPE DL20 Gen10 Plat 290 W FIO PSU Kit|1|
+|P06683-B21|HPE DL20 Gen10 M.2 SATA/LFF AROC Cable Kit|1|
+|512485-B21|HPE iLO Adv 1-Server License 1 Year Support|1|
+|775612-B21|HPE 1U Short Friction Rail Kit|1|
+
+## HPE ProLiant DL20 Gen10 installation
+
+This section describes how to install Defender for IoT software on the HPE ProLiant DL20 Gen10 appliance.
+
+Installation includes:
+
+- Enabling remote access and updating the default administrator password
+- Configuring iLO port on network port 1
+- Configuring BIOS and RAID settings
+- Installing Defender for IoT software
+
+> [!NOTE]
+> Installation procedures are only relevant if you need to re-install software on a pre-configured device, or if you buy your own hardware and configure the appliance yourself.
+>
+
+### Enable remote access and update the password
+
+Use the following procedure to set up network options and update the default password.
+
+**To enable and update the password**:
+
+1. Connect a screen and a keyboard to the HPE appliance, turn on the appliance, and press **F9**.
+
+ :::image type="content" source="../media/tutorial-install-components/hpe-proliant-screen-v2.png" alt-text="Screenshot that shows the HPE ProLiant window.":::
+
+1. Go to **System Utilities** > **System Configuration** > **iLO 5 Configuration Utility** > **Network Options**.
+
+ :::image type="content" source="../media/tutorial-install-components/system-configuration-window-v2.png" alt-text="Screenshot that shows the System Configuration window.":::
+
+ 1. Select **Shared Network Port-LOM** from the **Network Interface Adapter** field.
+
+ 1. Set **Enable DHCP** to **Off**.
+
+ 1. Enter the IP address, subnet mask, and gateway IP address.
+
+1. Select **F10: Save**.
+
+1. Select **Esc** to get back to the **iLO 5 Configuration Utility**, and then select **User Management**.
+
+1. Select **Edit/Remove User**. The administrator is the only default user defined.
+
+1. Change the default password and select **F10: Save**.
+
+### Configure the HPE BIOS
+
+This procedure describes how to update the HPE BIOS configuration for your OT deployment.
+
+**To configure the HPE BIOS**:
+
+1. Select **System Utilities** > **System Configuration** > **BIOS/Platform Configuration (RBSU)**.
+
+1. In the **BIOS/Platform Configuration (RBSU)** form, select **Boot Options**.
+
+1. Change **Boot Mode** to **Legacy BIOS Mode**, and then select **F10: Save**.
+
+1. Select **Esc** twice to close the **System Configuration** form.
+
+1. Select **Embedded RAID 1: HPE Smart Array P208i-a SR Gen 10** > **Array Configuration** > **Create Array**.
+
+1. Select **Proceed to Next Form**.
+
+1. In the **Set RAID Level** form, set the level to **RAID 5** for enterprise deployments and **RAID 1** for SMB deployments.
+
+1. Select **Proceed to Next Form**.
+
+1. In the **Logical Drive Label** form, enter **Logical Drive 1**.
+
+1. Select **Submit Changes**.
+
+1. In the **Submit** form, select **Back to Main Menu**.
+
+1. Select **F10: Save** and then press **Esc** twice.
+
+1. In the **System Utilities** window, select **One-Time Boot Menu**.
+
+1. In the **One-Time Boot Menu** form, select **Legacy BIOS One-Time Boot Menu**.
+
+1. The **Booting in Legacy** and **Boot Override** windows appear. Choose a boot override option; for example, to a CD-ROM, USB, HDD, or UEFI shell.
+
+ :::image type="content" source="../media/tutorial-install-components/boot-override-window-one-v2.png" alt-text="Screenshot that shows the first Boot Override window.":::
+
+ :::image type="content" source="../media/tutorial-install-components/boot-override-window-two-v2.png" alt-text="Screenshot that shows the second Boot Override window.":::
+
+### Install Defender for IoT software on the HPE ProLiant DL20 Gen10
+
+This procedure describes how to install Defender for IoT software on the HPE ProLiant DL20 Gen10.
+
+The installation process takes about 20 minutes. After the installation, the system is restarted several times.
+
+**To install Defender for IoT software**:
+
+1. Connect the screen and keyboard to the appliance, and then connect to the CLI.
+
+1. Connect an external CD or disk-on-key that contains the software you downloaded from the Azure portal.
+
+1. Start the appliance.
+
+1. Continue by installing your Defender for IoT software. For more information, see [Defender for IoT software installation](../how-to-install-software.md).
+
+## Next steps
+
+Continue understanding system requirements for physical or virtual appliances. For more information, see [Which appliances do I need?](../ot-appliance-sizing.md).
+
+Then, use any of the following procedures to continue:
+
+- [Purchase sensors or download software for sensors](../onboard-sensors.md#purchase-sensors-or-download-software-for-sensors)
+- [Download software for an on-premises management console](../how-to-manage-the-on-premises-management-console.md#download-software-for-the-on-premises-management-console)
+- [Install software](../how-to-install-software.md)
defender-for-iot Custom Columns Sample Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/custom-columns-sample-script.md
Title: Sample automation script for custom columns on on-premises management consoles - Microsoft Defender for IoT
-description: Learn how to view and manage OT devices (assets) from the Device inventory page on an on-premises management console.
+description: Use a sample script when adding custom columns to your on-premises management console Device inventory page.
Last updated 07/12/2022
defender-for-iot How To Accelerate Alert Incident Response https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-accelerate-alert-incident-response.md
The following alert groups are automatically defined:
- Bandwidth anomalies
- Internet access
- Suspicion of malware
-- Buffer overflow
+- Buffer overflow
- Operation failures
- Suspicion of malicious activity
- Command failures
Alert groups are predefined. For details about alerts associated with alert grou
## Customize alert rules
-Add custom alert rule to pinpoint specific activity needed for your organization such as for particular protocols, source or destination addresses, or a combination of parameters.
+Add custom alert rules to pinpoint specific activity needed for your organization. The rules can refer to particular protocols, source or destination addresses, or a combination of parameters.
+For example, in an environment running MODBUS, you can define a rule to detect any write commands sent to a memory register on a specific IP address and Ethernet destination. Another example would be setting an alert on any access to a particular IP address.
-For example, you might want to define an alert for an environment running MODBUS to detect any written commands to a memory register on a specific IP address and ethernet destination. Another example would be an alert for any access to a particular IP address.
-
-Use custom alert rule actions to instruct Defender for IT to take specific action when the alert is triggered, such as allowing users to access PCAP files from the alert, assigning alert severity, or generating an event that shows in the event timeline. Alert messages indicate that the alert was generated from a custom alert rule.
+Specify in the custom alert rule which action Defender for IoT should take when the alert is triggered. For example, the action can allow users to access PCAP files from the alert, assign alert severity, or generate an event that shows in the event timeline. Alert messages show that the alert was generated from a custom alert rule.
**To create a custom alert rule**:
Use custom alert rule actions to instruct Defender for IT to take specific actio
1. In the **Create custom alert rule** pane that shows on the right, define the following fields:
- - **Alert name**. Enter a meaningful name for the alert.
-
- - **Alert protocol**. Select the protocol you want to detect. In specific cases, select one of the following protocols:
-
- - For a database data or structure manipulation event, select **TNS** or **TDS**
- - For a file event, select **HTTP**, **DELTAV**, **SMB**, or **FTP**, depending on the file type
- - For a package download event, select **HTTP**
- - For an open ports (dropped) event, select **TCP** or **UDP**, depending on the port type.
-
- To create rules that monitor for specific changes in one of your OT protocols, such as S7 or CIP, use any parameters found on that protocol, such as `tag` or `sub-function`.
-
- - **Message**. Define a message to display when the alert is triggered. Alert messages support alphanumeric characters and any traffic variables detected. For example, you might want to include the detected source and destination addresses. Use curly brackets (**{}**) to add variables to the alert message.
+ |Name |Description |
+ |||
+ |**Alert name** | Enter a meaningful name for the alert. |
+ |**Alert protocol** | Select the protocol you want to detect. <br> In specific cases, select one of the following protocols: <br> <br> - For a database data or structure manipulation event, select **TNS** or **TDS**. <br> - For a file event, select **HTTP**, **DELTAV**, **SMB**, or **FTP**, depending on the file type. <br> - For a package download event, select **HTTP**. <br> - For an open ports (dropped) event, select **TCP** or **UDP**, depending on the port type. <br> <br> To create rules that track specific changes in one of your OT protocols, such as S7 or CIP, use any parameters found on that protocol, such as `tag` or `sub-function`. |
+ |**Message** | Define a message to display when the alert is triggered. Alert messages support alphanumeric characters and any traffic variables detected. <br> <br> For example, you might want to include the detected source and destination addresses. Use curly brackets (**{}**) to add variables to the alert message. |
+ |**Direction** | Enter a source and/or destination IP address where you want to detect traffic. |
+ |**Conditions** | Define one or more conditions that must be met to trigger the alert. Select the **+** sign to create a condition set with multiple conditions that use the **AND** operator. If you select a MAC address or IP address as a variable, you must convert the value from a dotted-decimal address to decimal format, as shown in the sketch after this procedure. <br><br> The **+** sign is enabled only after you select an **Alert protocol** above. <br> You must add at least one condition in order to create a custom alert rule. |
+ |**Detected** | Define a date and/or time range for the traffic you want to detect. You can customize the days and time range to fit maintenance hours or set working hours. |
+ |**Action** | Define an action you want Defender for IoT to take automatically when the alert is triggered. |
- - **Direction**. Enter a source and/or destination IP address where you want to detect traffic.
+ For example:
+
+ :::image type="content" source="media/how-to-accelerate-alert-incident-response/create-custom-alert-rule.png" alt-text="Screenshot of the Create custom alert rule pane for creating custom alert rules." lightbox="media/how-to-accelerate-alert-incident-response/create-custom-alert-rule.png":::
- - **Conditions**. Define one or more conditions that must be met to trigger the alert. Select the **+** sign to create a condition set with multiple conditions that use the **AND** operator. If you select a MAC address or IP address as a variable, you must convert the value from a dotted-decimal address to decimal format.
+1. Select **Save** when you're done to save the rule.
- - **Detected**. Define a date and/or time range for the traffic you want to detect.
- - **Action**. Define an action you want Defender for IoT to take automatically when the alert is triggered.
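+The **Conditions** field expects MAC and IP address values in decimal rather than dotted-decimal format. The following Python snippet is a minimal sketch of that conversion; the helper name and sample address are illustrative assumptions, not part of Defender for IoT:
+
+ ```python
+ # Minimal sketch: convert a dotted-decimal IPv4 address to the decimal
+ # format that the Conditions field of a custom alert rule expects.
+ # The helper name and sample address are illustrative assumptions.
+ import ipaddress
+
+ def dotted_decimal_to_decimal(address: str) -> int:
+     """Return the decimal form of a dotted-decimal IPv4 address."""
+     return int(ipaddress.IPv4Address(address))
+
+ print(dotted_decimal_to_decimal("10.1.1.2"))  # 167837954
+ ```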
+### Edit a custom alert rule
To edit a custom alert rule, select the rule and then select the options (**...**) menu > **Edit**. Modify the alert rule as needed and save your changes. Edits made to custom alert rules, such as changing a severity level or protocol, are tracked in the **Event timeline** page on the sensor console. For more information, see [Track sensor activity](how-to-track-sensor-activity.md).
-**To enable or disable custom alert rules**
+### Disable, enable, or delete custom alert rules
-You can disable custom alert rules to prevent them from running without deleting them altogether.
+Disable custom alert rules to prevent them from running without deleting them altogether.
In the **Custom alert rules** page, select one or more rules, and then select **Enable**, **Disable**, or **Delete** in the toolbar as needed.
defender-for-iot Ot Pre Configured Appliances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/ot-pre-configured-appliances.md
This article provides a catalog of the pre-configured appliances available for M
Use the links in the tables below to jump to articles with more details about each appliance.
-Microsoft has partnered with [Arrow Electronics](https://www.arrow.com/) to provide pre-configured sensors. To purchase a pre-configured sensor, contact Arrow at: [hardware.sales@arrow.com](mailto:hardware.sales@arrow.com?cc=DIoTHardwarePurchase@microsoft.com&subject=Information%20about%20MD4IoT%20pre-configured%20appliances).
+Microsoft has partnered with [Arrow Electronics](https://www.arrow.com/) to provide pre-configured sensors. To purchase a pre-configured sensor, contact Arrow at: [hardware.sales@arrow.com](mailto:hardware.sales@arrow.com?cc=DIoTHardwarePurchase@microsoft.com&subject=Information%20about%20Microsoft%20Defender%20for%20IoT%20pre-configured%20appliances&body=Dear%20Arrow%20Representative,%0D%0DOur%20organization%20is%20interested%20in%20receiving%20quotes%20for%20Microsoft%20Defender%20for%20IoT%20appliances%20as%20well%20as%20fulfillment%20options.%0D%0DThe%20purpose%20of%20this%20communication%20is%20to%20inform%20you%20of%20a%20project%20which%20involves%20[NUMBER]%20sites%20and%20[NUMBER]%20sensors%20for%20[ORGANIZATION%20NAME].%20Having%20reviewed%20potential%20appliances%20suitable%20for%20our%20project,%20we%20would%20like%20to%20obtain%20more%20information%20about:%20___________%0D%0D%0DI%20would%20appreciate%20being%20contacted%20by%20an%20Arrow%20representative%20to%20receive%20a%20quote%20for%20the%20items%20mentioned%20above.%0DI%20understand%20the%20quote%20and%20appliance%20delivery%20shall%20be%20in%20accordance%20with%20the%20relevant%20Arrow%20terms%20and%20conditions%20for%20Microsoft%20Defender%20for%20IoT%20pre-configured%20appliances.%0D%0D%0DBest%20Regards,%0D%0D%0D%0D%0D%0D//////////////////////////////%20%0D/////////%20Replace%20[NUMBER]%20with%20appropriate%20values%20related%20to%20your%20request.%0D/////////%20Replace%20[ORGANIZATION%20NAME]%20with%20the%20name%20of%20the%20organization%20you%20represent.%0D//////////////////////////////%0D%0D).
For more information, see [Purchase sensors or download software for sensors](onboard-sensors.md#purchase-sensors-or-download-software-for-sensors).
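+The order link above packs a complete, percent-encoded email template into the `mailto:` URL. The following is a minimal sketch of how such a link can be generated; the subject and body strings are illustrative, shortened stand-ins:
+
+ ```python
+ # Minimal sketch: build a mailto: link with a percent-encoded subject and body,
+ # similar to the pre-configured appliance ordering link above.
+ # The subject and body strings are illustrative, shortened stand-ins.
+ from urllib.parse import quote
+
+ subject = "Information about Microsoft Defender for IoT pre-configured appliances"
+ body = "Dear Arrow Representative,\n\nOur organization is interested in receiving quotes."
+ link = (
+     "mailto:hardware.sales@arrow.com"
+     "?cc=DIoTHardwarePurchase@microsoft.com"
+     f"&subject={quote(subject)}"
+     f"&body={quote(body)}"
+ )
+ print(link)
+ ```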
-## Advantages of preconfigured appliances
+## Advantages of pre-configured appliances
Pre-configured physical appliances have been validated for Defender for IoT OT system monitoring, and have the following advantages over installing your own software:
Pre-configured physical appliances have been validated for Defender for IoT OT s
## Appliances for OT network sensors
-You can [order](mailto:hardware.sales@arrow.com?cc=DIoTHardwarePurchase@microsoft.com&subject=Information%20about%20MD4IoT%20pre-configured%20appliances). any of the following preconfigured appliances for monitoring your OT networks:
+You can [order](mailto:hardware.sales@arrow.com?cc=DIoTHardwarePurchase@microsoft.com&subject=Information%20about%20Microsoft%20Defender%20for%20IoT%20pre-configured%20appliances&body=Dear%20Arrow%20Representative,%0D%0DOur%20organization%20is%20interested%20in%20receiving%20quotes%20for%20Microsoft%20Defender%20for%20IoT%20appliances%20as%20well%20as%20fulfillment%20options.%0D%0DThe%20purpose%20of%20this%20communication%20is%20to%20inform%20you%20of%20a%20project%20which%20involves%20[NUMBER]%20sites%20and%20[NUMBER]%20sensors%20for%20[ORGANIZATION%20NAME].%20Having%20reviewed%20potential%20appliances%20suitable%20for%20our%20project,%20we%20would%20like%20to%20obtain%20more%20information%20about:%20___________%0D%0D%0DI%20would%20appreciate%20being%20contacted%20by%20an%20Arrow%20representative%20to%20receive%20a%20quote%20for%20the%20items%20mentioned%20above.%0DI%20understand%20the%20quote%20and%20appliance%20delivery%20shall%20be%20in%20accordance%20with%20the%20relevant%20Arrow%20terms%20and%20conditions%20for%20Microsoft%20Defender%20for%20IoT%20pre-configured%20appliances.%0D%0D%0DBest%20Regards,%0D%0D%0D%0D%0D%0D//////////////////////////////%20%0D/////////%20Replace%20[NUMBER]%20with%20appropriate%20values%20related%20to%20your%20request.%0D/////////%20Replace%20[ORGANIZATION%20NAME]%20with%20the%20name%20of%20the%20organization%20you%20represent.%0D//////////////////////////////%0D%0D) any of the following pre-configured appliances for monitoring your OT networks:
|Hardware profile |Appliance |Performance / Monitoring |Physical specifications |
|||||
|**C5600** | [HPE ProLiant DL360](appliance-catalog/hpe-proliant-dl360.md) | **Max bandwidth**: 3 Gbps <br>**Max devices**: 12,000 <br> 32 Cores/32G RAM/5.6TB | **Mounting**: 1U <br>**Ports**: 15x RJ45 or 8x SFP (OPT) |
-|**E1800** | [HPE ProLiant DL20/DL20 Plus](appliance-catalog/hpe-proliant-dl20-plus-enterprise.md) <br> (4SFF) | **Max bandwidth**: 1 Gbp/s<br>**Max devices**: 10,000 <br> 8 Cores/32G RAM/1.8TB | **Mounting**: 1U <br>**Ports**: 8x RJ45 or 6x SFP (OPT) |
+|**E1800** | [HPE ProLiant DL20 Gen10 Plus](appliance-catalog/hpe-proliant-dl20-plus-enterprise.md) <br> (4SFF) | **Max bandwidth**: 1 Gbps<br>**Max devices**: 10,000 <br> 8 Cores/32G RAM/1.8TB | **Mounting**: 1U <br>**Ports**: 8x RJ45 or 6x SFP (OPT) |
|**E500** | [Dell Edge 5200](appliance-catalog/dell-edge-5200.md) <br> (Rugged MIL-STD-810G) | **Max bandwidth**: 1 Gbps<br>**Max devices**: 10,000 <br> 8 Cores/32G RAM/512GB | **Mounting**: Wall Mount<br>**Ports**: 3x RJ45 |
-|**L500** | [HPE ProLiant DL20/DL20 Plus](appliance-catalog/hpe-proliant-dl20-plus-smb.md) <br> (NHP 2LFF) | **Max bandwidth**: 200Mbp/s<br>**Max devices**: 1,000 <br> 4 Cores/8G RAM/500GB | **Mounting**: 1U<br>**Ports**: 4x RJ45 |
+|**L500** | [HPE ProLiant DL20 Gen10 Plus](appliance-catalog/hpe-proliant-dl20-plus-smb.md) <br> (NHP 2LFF) | **Max bandwidth**: 200 Mbps<br>**Max devices**: 1,000 <br> 4 Cores/8G RAM/500GB | **Mounting**: 1U<br>**Ports**: 4x RJ45 |
|**L100** | [YS-Techsystems YS-FIT2](appliance-catalog/ys-techsystems-ys-fit2.md) <br>(Rugged MIL-STD-810G) | **Max bandwidth**: 10 Mbps <br>**Max devices**: 100 <br> 4 Cores/8G RAM/128GB | **Mounting**: DIN/VESA<br>**Ports**: 2x RJ45 |

> [!NOTE]
> Bandwidth performance may vary depending on protocol distribution.
You can purchase any of the following appliances for your OT on-premises managem
|Hardware profile |Appliance |Max sensors |Physical specifications |
|||||
-|**E1800** | [HPE ProLiant DL20/DL20 Plus](appliance-catalog/hpe-proliant-dl20-plus-enterprise.md) <br> (4SFF) | 300 | **Mounting**: 1U <br>**Ports**: 8x RJ45 or 6x SFP (OPT) |
+|**E1800** | [HPE ProLiant DL20 Gen10 Plus](appliance-catalog/hpe-proliant-dl20-plus-enterprise.md) <br> (4SFF) | 300 | **Mounting**: 1U <br>**Ports**: 8x RJ45 or 6x SFP (OPT) |
+
+For information about previously supported legacy appliances, see the [appliance catalog](/azure/defender-for-iot/organizations/appliance-catalog/).
## Next steps
-Continue understanding system requirements for physical or virtual appliances.
+Continue understanding system requirements for physical or virtual appliances.
For more information, see [Which appliances do I need?](ot-appliance-sizing.md) and [OT monitoring with virtual appliances](ot-virtual-appliances.md).
defender-for-iot Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/release-notes-archive.md
Last updated 08/07/2022
This article serves as an archive for features and enhancements released for Microsoft Defender for IoT for organizations more than nine months ago.
-For more recent updates, see [What's new in Microsoft Defender for IoT?](release-notes.md).
+For more recent updates, see [What's new in Microsoft Defender for IoT?](whats-new.md).
Noted features listed below are in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include other legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
defender-for-iot Release Notes Sentinel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/release-notes-sentinel.md
The **Microsoft Defender for IoT** solution enhances the integration between Def
For more information, see:

-- [What's new in Microsoft Defender for IoT?](release-notes.md)
+- [What's new in Microsoft Defender for IoT?](whats-new.md)
- [Tutorial: Integrate Microsoft Sentinel and Microsoft Defender for IoT](../../sentinel/iot-solution.md?bc=%2fazure%2fdefender-for-iot%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fdefender-for-iot%2forganizations%2ftoc.json)
- [Tutorial: Investigate and detect threats for IoT devices](../../sentinel/iot-advanced-threat-monitoring.md?bc=%2fazure%2fdefender-for-iot%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fdefender-for-iot%2forganizations%2ftoc.json).

## Version 2.1

**Released**: September 2022
New features in this version include:
- New SOC playbooks for automation with CVEs, triaging incidents that involve sensitive devices, and email notifications to device owners for new incidents.
-For more information, see [Updates to the Microsoft Defender for IoT solution](release-notes.md#updates-to-the-microsoft-defender-for-iot-solution-in-microsoft-sentinels-content-hub).
+For more information, see [Updates to the Microsoft Defender for IoT solution](whats-new.md#updates-to-the-microsoft-defender-for-iot-solution-in-microsoft-sentinels-content-hub).
## Version 2.0
For more information, see [Updates to the Microsoft Defender for IoT solution](r
This version provides enhanced experiences for managing, installing, and updating the solution package in the Microsoft Sentinel content hub. For more information, see [Centrally discover and deploy Microsoft Sentinel out-of-the-box content and solutions](../../sentinel/sentinel-solutions-deploy.md).

## Version 1.0.14

**Released**: July 2022

New features in this version include:

-- [Microsoft Sentinel incident synch with Defender for IoT alerts](release-notes.md#microsoft-sentinel-incident-synch-with-defender-for-iot-alerts)
+- [Microsoft Sentinel incident synch with Defender for IoT alerts](whats-new.md#microsoft-sentinel-incident-synch-with-defender-for-iot-alerts)
- IoT device entities displayed in related Microsoft Sentinel incidents.
For more information about earlier versions of the **Microsoft Defender for IoT*
## Next steps
-Learn more in [What's new in Microsoft Defender for IoT?](release-notes.md) and the [Microsoft Sentinel documentation](../../sentinel/index.yml).
+Learn more in [What's new in Microsoft Defender for IoT?](whats-new.md) and the [Microsoft Sentinel documentation](../../sentinel/index.yml).
defender-for-iot Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/release-notes.md
Title: What's new in Microsoft Defender for IoT
-description: This article lets you know what's new in the latest release of Defender for IoT.
+ Title: OT monitoring software versions - Microsoft Defender for IoT
+description: This article lists Microsoft Defender for IoT on-premises OT monitoring software versions, including release and support dates and highlights for new features.
Previously updated: 11/03/2022
Last updated: 11/22/2022
-# What's new in Microsoft Defender for IoT?
+# OT monitoring software versions
-This article lists Microsoft Defender for IoT's new features and enhancements for end-user organizations from the last nine months.
+The Microsoft Defender for IoT architecture uses on-premises sensors and management servers.
-Features released earlier than nine months ago are listed in [What's new archive for Microsoft Defender for IoT for organizations](release-notes-archive.md).
+This article lists the supported software versions for the OT sensor and on-premises management software, including release dates, support dates, and highlights for the updated features.
-Noted features listed below are in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include other legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+For more information, including detailed descriptions and updates for cloud-only features, see [What's new in Microsoft Defender for IoT?](whats-new.md). Cloud-only features aren't dependent on specific sensor versions.
## Versioning and support for on-premises software versions
-The Defender for IoT architecture uses on-premises sensors and management servers. This section describes the servicing information and timelines for the available on-premises software versions.
+This section describes the servicing information, timelines, and guidance for the available on-premises software versions.
-- **Starting in version 22.1.x**, each General Availability (GA) version of the Defender for IoT sensor and on-premises management console software is supported for nine months after its first minor release date, not including hotfix releases.
+### Version update recommendations
- Release versions have the following syntax: **[Major][Minor][Hotfix]**
+When updating your on-premises software, we recommend:
- Therefore, for example, all **22.1.x** versions, including all hotfix versions, are supported for nine months after the first **22.1.x** release.
+- Plan to **update your sensors to the latest version once every 6 months**.
- Fixes and new functionality are applied to each new version and aren't applied to older versions.
--- **Software update packages include new functionality and security patches**. Urgent, high-risk security updates are applied in minor versions that may be released throughout the quarter. --- **Features available from the Azure portal that are dependent on a specific sensor version** are only available for sensors that have the required version installed, or higher.-
-For more information, see the [Microsoft Security Development Lifecycle practices](https://www.microsoft.com/en-us/securityengineering/sdl/), which describes Microsoft's SDK practices, including training, compliance, threat modeling, design requirements, tools such as Microsoft Component Governance, pen testing, and more.
-
-> [!IMPORTANT]
-> Manual changes to software packages may have detrimental effects on the sensor and on-premises management console. Microsoft is unable to support deployments with manual changes made to packages.
->
-
-> [!TIP]
-> - Version numbers are listed only in this article, and not in detailed descriptions elsewhere in the documentation. To understand whether a feature is supported in your sensor version, check the listed features for that sensor version on this page.
->
-> - When updating your sensor software version, make sure to also update your on-premises management console. For more information, see [Update Defender for IoT OT monitoring software](update-ot-software.md).
-
-**Current versions of the sensor and on-premises management console software include**:
-
-| Version | Date released | End support date |
-|--|--|--|
-| 22.2.7 | 10/2022 | 04/2023 |
-| 22.2.6 | 09/2022 | 04/2023 |
-| 22.2.5 | 08/2022 | 04/2023 |
-| 22.2.4 | 07/2022 | 04/2023 |
-| 22.2.3 | 07/2022 | 04/2023 |
-| 22.1.7 | 07/2022 | 04/2023 |
-| 22.1.6 | 06/2022 | 10/2022 |
-| 22.1.5 | 06/2022 | 10/2022 |
-| 22.1.4 | 04/2022 | 10/2022 |
-| 22.1.3 | 03/2022 | 10/2022 |
-| 22.1.1 | 02/2022 | 10/2022 |
-| 10.5.5 | 12/2021 | 09/2022 |
-| 10.5.4 | 12/2021 | 09/2022 |
-| 10.5.3 | 10/2021 | 07/2022 |
-| 10.5.2 | 10/2021 | 07/2022 |
-
-## October 2022
-
-|Service area |Updates |
-|||
-|**OT networks** | [Enhanced OT monitoring alert reference](#enhanced-ot-monitoring-alert-reference) |
-
-### Enhanced OT monitoring alert reference
-
-Our alert reference article now includes the following details for each alert:
--- **Alert category**, helpful when you want to investigate alerts that are aggregated by a specific activity or configure SIEM rules to generate incidents based on specific activities.--- **MITRE ATT&CK for ICS tactics and techniques**, which describe the actions an adversary may take while operating within the network. Use the tactics and techniques listed for each alert to learn about the network areas that might be at risk and collaborate more efficiently across your security and OT teams more as you secure those assets.--- **Alert threshold**, for relevant alerts. Thresholds indicate the specific point at which an alert is triggered. Modify alert thresholds as needed from the sensor's **Support** page.-
-For more information, see [OT monitoring alert types and descriptions](alert-engine-messages.md), specifically [Supported alert categories](alert-engine-messages.md#supported-alert-categories).
-
-## September 2022
-
-|Service area |Updates |
-|||
-|**OT networks** |**All supported OT sensor software versions**: <br>- [Device vulnerabilities from the Azure portal](#device-vulnerabilities-from-the-azure-portal-public-preview)<br>- [Security recommendations for OT networks](#security-recommendations-for-ot-networks-public-preview)<br><br> **All OT sensor software versions 22.x**: [Updates for Azure cloud connection firewall rules](#updates-for-azure-cloud-connection-firewall-rules-public-preview) <br><br>**Sensor software version 22.2.7**: <br> - Bug fixes and stability improvements <br><br> **Sensor software version 22.2.6**: <br> - Bug fixes and stability improvements <br>- Enhancements to the device type classification algorithm<br><br>**Microsoft Sentinel integration**: <br>- [Investigation enhancements with IoT device entities](#investigation-enhancements-with-iot-device-entities-in-microsoft-sentinel)<br>- [Updates to the Microsoft Defender for IoT solution](#updates-to-the-microsoft-defender-for-iot-solution-in-microsoft-sentinels-content-hub)|
-
-### Security recommendations for OT networks (Public preview)
-
-Defender for IoT now provides security recommendations to help customers manage their OT/IoT network security posture. Defender for IoT recommendations help users form actionable, prioritized mitigation plans that address the unique challenges of OT/IoT networks. Use recommendations for lower your network's risk and attack surface.
-
-You can see the following security recommendations from the Azure portal for detected devices across your networks:
--- **Review PLC operating mode**. Devices with this recommendation are found with PLCs set to unsecure operating mode states. We recommend setting PLC operating modes to the **Secure Run** state if access is no longer required to the PLC to reduce the threat of malicious PLC programming.--- **Review unauthorized devices**. Devices with this recommendation must be identified and authorized as part of the network baseline. We recommend taking action to identify any indicated devices. Disconnect any devices from your network that remain unknown even after investigation to reduce the threat of rogue or potentially malicious devices.-
-Access security recommendations from one of the following locations:
--- The **Recommendations** page, which displays all current recommendations across all detected OT devices.--- The **Recommendations** tab on a device details page, which displays all current recommendations for the selected device.-
-From either location, select a recommendation to drill down further and view lists of all detected OT devices that are currently in a *healthy* or *unhealthy* state, according to the selected recommendation. From the **Unhealthy devices** or **Healthy devices** tab, select a device link to jump to the selected device details page. For example:
--
-For more information, see [View the device inventory](how-to-manage-device-inventory-for-organizations.md#view-the-device-inventory).
-
-### Device vulnerabilities from the Azure portal (Public preview)
-
-Defender for IoT now provides vulnerability data in the Azure portal for detected OT network devices. Vulnerability data is based on the repository of standards based vulnerability data documented at the [US government National Vulnerability Database (NVD)](https://www.nist.gov/programs-projects/national-vulnerability-database-nvd).
-
-Access vulnerability data in the Azure portal from the following locations:
--- On a device details page, select the **Vulnerabilities** tab to view current vulnerabilities on the selected device. For example, from the **Device inventory** page, select a specific device and then select **Vulnerabilities**.-
- For more information, see [View the device inventory](how-to-manage-device-inventory-for-organizations.md#view-the-device-inventory).
--- A new **Vulnerabilities** workbook displays vulnerability data across all monitored OT devices. Use the **Vulnerabilities** workbook to view data like CVE by severity or vendor, and full lists of detected vulnerabilities and vulnerable devices and components.-
- Select an item in the **Device vulnerabilities**, **Vulnerable devices**, or **Vulnerable components** tables to view related information in the tables on the right.
-
- For example:
-
- :::image type="content" source="media/release-notes/vulnerabilities-workbook.png" alt-text="Screenshot of a Vulnerabilities workbook in Defender for IoT.":::
-
- For more information, see [Use Azure Monitor workbooks in Microsoft Defender for IoT](workbooks.md).
-
-### Updates for Azure cloud connection firewall rules (Public preview)
-
-OT network sensors connect to Azure to provide alert and device data and sensor health messages, access threat intelligence packages, and more. Connected Azure services include IoT Hub, Blob Storage, Event Hubs, and the Microsoft Download Center.
-
-For OT sensors with software versions 22.x and higher, Defender for IoT now supports increased security when adding outbound allow rules for connections to Azure. Now you can define your outbound allow rules to connect to Azure without using wildcards.
-
-When defining outbound allow rules to connect to Azure, you'll need to enable HTTPS traffic to each of the required endpoints on port 443. Outbound allow rules are defined once for all OT sensors onboarded to the same subscription.
-
-For supported sensor versions, download the full list of required secure endpoints from the following locations in the Azure portal:
--- **A successful sensor registration page**: After onboarding a new OT sensor, version 22.x, the successful registration page now provides instructions for next steps, including a link to the endpoints you'll need to add as secure outbound allow rules on your network. Select the **Download endpoint details** link to download the JSON file.-
- For example:
-
- :::image type="content" source="media/release-notes/download-endpoints.png" alt-text="Screenshot of a successful OT sensor registration page with the download endpoints link.":::
--- **The Sites and sensors page**: Select an OT sensor with software versions 22.x or higher, or a site with one or more supported sensor versions. Then, select **More actions** > **Download endpoint details** to download the JSON file. For example:-
- :::image type="content" source="media/release-notes/download-endpoints-sites-sensors.png" alt-text="Screenshot of the Sites and sensors page with the download endpoint details link.":::
-
-For more information, see:
--- [Tutorial: Get started with Microsoft Defender for IoT for OT security](tutorial-onboarding.md)-- [Manage sensors with Defender for IoT in the Azure portal](how-to-manage-sensors-on-the-cloud.md)-- [Networking requirements](how-to-set-up-your-network.md#sensor-access-to-azure-portal)-
-### Investigation enhancements with IoT device entities in Microsoft Sentinel
-
-Defender for IoT's integration with Microsoft Sentinel now supports an IoT device entity page. When investigating incidents and monitoring IoT security in Microsoft Sentinel, you can now identify your most sensitive devices and jump directly to more details on each device entity page.
-
-The IoT device entity page provides contextual device information about an IoT device, with basic device details and device owner contact information. Device owners are defined by site in the **Sites and sensors** page in Defender for IoT.
-
-The IoT device entity page can help prioritize remediation based on device importance and business impact, as per each alert's site, zone, and sensor. For example:
--
-You can also now hunt for vulnerable devices on the Microsoft Sentinel **Entity behavior** page. For example, view the top five IoT devices with the highest number of alerts, or search for a device by IP address or device name:
--
-For more information, see [Investigate further with IoT device entities](../../sentinel/iot-advanced-threat-monitoring.md#investigate-further-with-iot-device-entities) and [Site management options from the Azure portal](how-to-manage-sensors-on-the-cloud.md#site-management-options-from-the-azure-portal).
-
-### Updates to the Microsoft Defender for IoT solution in Microsoft Sentinel's content hub
-
-This month, we've released version 2.0 of the **Microsoft Defender for IoT** solution in Microsoft Sentinel's content hub, previously known as the **IoT/OT Threat Monitoring with Defender for IoT** solution.
-
-Updates in this version of the solution include:
--- **A name change**. If you'd previously installed the **IoT/OT Threat Monitoring with Defender for IoT** solution in your Microsoft Sentinel workspace, the solution is automatically renamed to **Microsoft Defender for IoT**, even if you don't update the solution.--- **Workbook improvements**: The **Defender for IoT** workbook now includes:-
- - A new **Overview** dashboard with key metrics on the device inventory, threat detection, and security posture. For example:
-
- :::image type="content" source="media/release-notes/sentinel-workbook-overview.png" alt-text="Screenshot of the new Overview tab in the IoT OT Threat Monitoring with Defender for IoT workbook." lightbox="media/release-notes/sentinel-workbook-overview.png":::
-
- - A new **Vulnerabilities** dashboard with details about CVEs shown in your network and their related vulnerable devices. For example:
-
- :::image type="content" source="media/release-notes/sentinel-workbook-vulnerabilities.png" alt-text="Screenshot of the new Vulnerability tab in the IoT OT Threat Monitoring with Defender for IoT workbook." lightbox="media/release-notes/sentinel-workbook-vulnerabilities.png":::
-
- - Improvements on the **Device inventory** dashboard, including access to device recommendations, vulnerabilities, and direct links to the Defender for IoT device details pages. The **Device inventory** dashboard in the **IoT/OT Threat Monitoring with Defender for IoT** workbook is fully aligned with the Defender for IoT [device inventory data](how-to-manage-device-inventory-for-organizations.md).
--- **Playbook updates**: The **Microsoft Defender for IoT** solution now supports the following SOC automation functionality with new playbooks:-
- - **Automation with CVE details**: Use the *AD4IoT-CVEAutoWorkflow* playbook to enrich incident comments with CVEs of related devices based on Defender for IoT data. The incidents are triaged, and if the CVE is critical, the asset owner is notified about the incident by email.
-
- - **Automation for email notifications to device owners**. Use the *AD4IoT-SendEmailtoIoTOwner* playbook to have a notification email automatically sent to a device's owner about new incidents. Device owners can then reply to the email to update the incident as needed. Device owners are defined at the site level in Defender for IoT.
-
- - **Automation for incidents with sensitive devices**: Use the *AD4IoT-AutoTriageIncident* playbook to automatically update an incident's severity based on the devices involved in the incident, and their sensitivity level or importance to your organization. For example, any incident involving a sensitive device can be automatically escalated to a higher severity level.
-
-For more information, see [Investigate Microsoft Defender for IoT incidents with Microsoft Sentinel](../../sentinel/iot-advanced-threat-monitoring.md?bc=%2fazure%2fdefender-for-iot%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fdefender-for-iot%2forganizations%2ftoc.json).
-
-## August 2022
-
-|Service area |Updates |
-|||
-|**OT networks** |**Sensor software version 22.2.5**: Minor version with stability improvements<br><br>**Sensor software version 22.2.4**: [New alert columns with timestamp data](#new-alert-columns-with-timestamp-data)<br><br>**Sensor software version 22.1.3**: [Sensor health from the Azure portal (Public preview)](#sensor-health-from-the-azure-portal-public-preview) |
-
-### New alert columns with timestamp data
-
-Starting with OT sensor version 22.2.4, Defender for IoT alerts in the Azure portal and the sensor console now show the following columns and data:
--- **Last detection**. Defines the last time the alert was detected in the network, and replaces the **Detection time** column.--- **First detection**. Defines the first time the alert was detected in the network.--- **Last activity**. Defines the last time the alert was changed, including manual updates for severity or status, or automated changes for device updates or device/alert de-duplication.-
-The **First detection** and **Last activity** columns aren't displayed by default. Add them to your **Alerts** page as needed.
-
-> [!TIP]
-> If you're also a Microsoft Sentinel user, you'll be familiar with similar data from your Log Analytics queries. The new alert columns in Defender for IoT are mapped as follows:
->
-> - The Defender for IoT **Last detection** time is similar to the Log Analytics **EndTime**
-> - The Defender for IoT **First detection** time is similar to the Log Analytics **StartTime**
-> - The Defender for IoT **Last activity** time is similar to the Log Analytics **TimeGenerated**
-For more information, see:
--- [View alerts on the Defender for IoT portal](how-to-manage-cloud-alerts.md)-- [View alerts on your sensor](how-to-view-alerts.md)-- [OT threat monitoring in enterprise SOCs](concept-sentinel-integration.md)-
-### Sensor health from the Azure portal (Public preview)
-
-For OT sensor versions 22.1.3 and higher, you can use the new sensor health widgets and table column data to monitor sensor health directly from the **Sites and sensors** page on the Azure portal.
--
-We've also added a sensor details page, where you drill down to a specific sensor from the Azure portal. On the **Sites and sensors** page, select a specific sensor name. The sensor details page lists basic sensor data, sensor health, and any sensor settings applied.
-
-For more information, see [Understand sensor health (Public preview)](how-to-manage-sensors-on-the-cloud.md#understand-sensor-health-public-preview) and [Sensor health message reference](sensor-health-messages.md).
-
-## July 2022
-
-|Service area |Updates |
-|||
-|**Enterprise IoT networks** | - [Enterprise IoT and Defender for Endpoint integration in GA](#enterprise-iot-and-defender-for-endpoint-integration-in-ga) |
-|**OT networks** |**Sensor software version 22.2.4**: <br>- [Device inventory enhancements](#device-inventory-enhancements)<br>- [Enhancements for the ServiceNow integration API](#enhancements-for-the-servicenow-integration-api)<br><br>**Sensor software version 22.2.3**:<br>- [OT appliance hardware profile updates](#ot-appliance-hardware-profile-updates)<br>- [PCAP access from the Azure portal](#pcap-access-from-the-azure-portal-public-preview)<br>- [Bi-directional alert synch between sensors and the Azure portal](#bi-directional-alert-synch-between-sensors-and-the-azure-portal-public-preview)<br>- [Sensor connections restored after certificate rotation](#sensor-connections-restored-after-certificate-rotation)<br>- [Support diagnostic log enhancements](#support-diagnostic-log-enhancements-public-preview)<br>- [Improved security for uploading protocol plugins](#improved-security-for-uploading-protocol-plugins)<br>- [Sensor names shown in browser tabs](#sensor-names-shown-in-browser-tabs)<br><br>**Sensor software version 22.1.7**: <br>- [Same passwords for *cyberx_host* and *cyberx* users](#same-passwords-for-cyberx_host-and-cyberx-users) <br><br>**To update to version 22.2.x**:<br>- **From version 22.1.x**, update directly to the latest **22.2.x** version<br>- **From version 10.x**, first update to the latest **22.1.x** version, and then update again to the latest **22.2.x** version <br><br>For more information, see [Update Defender for IoT OT monitoring software](update-ot-software.md). |
-|**Cloud-only features** | - [Microsoft Sentinel incident synch with Defender for IoT alerts](#microsoft-sentinel-incident-synch-with-defender-for-iot-alerts) |
-
-### Enterprise IoT and Defender for Endpoint integration in GA
-
-The Enterprise IoT integration with Microsoft Defender for Endpoint is now in General Availability (GA). With this update, we've made the following updates and improvements:
--- Onboard an Enterprise IoT plan directly in Defender for Endpoint. For more information, see [Enable Enterprise IoT security in Defender for Endpoint](eiot-defender-for-endpoint.md).--- Seamless integration with Microsoft Defender for Endpoint to view detected Enterprise IoT devices, and their related alerts, vulnerabilities, and recommendations in the Microsoft 365 Security portal. For more information, see [Securing IoT devices in the enterprise](concept-enterprise.md). You can continue to use an Enterprise IoT network sensor (Public preview) and view detected Enterprise IoT devices on the Defender for IoT **Device inventory** page in the Azure portal.--- All Enterprise IoT sensors are now automatically added to the same site in Defender for IoT, named **Enterprise network**. When onboarding a new Enterprise IoT device, you only need to define a sensor name and select your subscription, without defining a site or zone.
+- Update to a **patch version only for specific bug fixes or security patches**. When working with the Microsoft support team on a specific issue, verify which patch version is recommended to resolve your issue.
> [!NOTE]
-> The Enterprise IoT network sensor and all detections remain in Public Preview.
-
-### Same passwords for cyberx_host and cyberx users
-
-During OT monitoring software installations and updates, the **cyberx** user is assigned a random password. When updating from version 10.x.x to version 22.1.7, the **cyberx_host** password is assigned with an identical password to the **cyberx** user.
-
-For more information, see [Update Defender for IoT OT monitoring software](update-ot-software.md).
-
-### Device inventory enhancements
-
-Starting in OT sensor versions 22.2.4, you can now take the following actions from the sensor console's **Device inventory** page:
--- **Merge duplicate devices**. You may need to merge devices if the sensor has discovered separate network entities that are associated with a single, unique device. Examples of this scenario might include a PLC with four network cards, a laptop with both WiFi and a physical network card, or a single workstation with multiple network cards.--- **Delete single devices**. Now, you can delete a single device that hasn't communicated for at least 10 minutes.--- **Delete inactive devices by admin users**. Now, all admin users, in addition to the **cyberx** user, can delete inactive devices.-
-Also starting in version 22.2.4, in the sensor console's **Device inventory** page, the **Last seen** value in the device details pane is replaced by **Last activity**. For example:
--
-For more information, see [Manage your OT device inventory from a sensor console](how-to-investigate-sensor-detections-in-a-device-inventory.md).
-
-### Enhancements for the ServiceNow integration API
-
-OT sensor version 22.2.4 provides enhancements for the `devicecves` API, which gets details about the CVEs found for a given device.
-
-Now you can add any of the following parameters to your query to fine tune your results:
--- “**sensorId**” - Shows results from a specific sensor, as defined by the given sensor ID.- “**score**” - Determines a minimum CVE score to be retrieved. All results will have a CVE score equal to or higher than the given value. Default = **0**.- “**deviceIds**” - A comma-separated list of device IDs from which you want to show results. For example: **1232,34,2,456**-
-For more information, see [devicecves (Get device CVEs)](api/management-integration-apis.md#devicecves-get-device-cves).
-
-### OT appliance hardware profile updates
-
-We've refreshed the naming conventions for our OT appliance hardware profiles for greater transparency and clarity.
-
-The new names reflect both the *type* of profile, including *Corporate*, *Enterprise*, and *Production line*, and also the related disk storage size.
-
-Use the following table to understand the mapping between legacy hardware profile names and the current names used in the updated software installation:
-
-|Legacy name |New name | Description |
-||||
-|**Corporate** | **C5600** | A *Corporate* environment, with: <br>16 Cores<br>32-GB RAM<br>5.6-TB disk storage |
-|**Enterprise** | **E1800** | An *Enterprise* environment, with: <br>8 Cores<br>32-GB RAM<br>1.8-TB disk storage |
-|**SMB** | **L500** | A *Production line* environment, with: <br>4 Cores<br>8-GB RAM<br>500-GB disk storage |
-|**Office** | **L100** | A *Production line* environment, with: <br>4 Cores<br>8-GB RAM<br>100-GB disk storage |
-|**Rugged** | **L64** | A *Production line* environment, with: <br>4 Cores<br>8-GB RAM<br>64-GB disk storage |
-
-We also now support new enterprise hardware profiles, for sensors supporting both 500 GB and 1-TB disk sizes.
-
-For more information, see [Which appliances do I need?](ot-appliance-sizing.md)
-
-### PCAP access from the Azure portal (Public preview)
-
-Now you can access the raw traffic files, known as packet capture files or PCAP files, directly from the Azure portal. This feature supports SOC or OT security engineers who want to investigate alerts from Defender for IoT or Microsoft Sentinel, without having to access each sensor separately.
--
-PCAP files are downloaded to your Azure storage.
-
-For more information, see [View and manage alerts from the Azure portal](how-to-manage-cloud-alerts.md).
-
-### Bi-directional alert synch between sensors and the Azure portal (Public preview)
-
-For sensors updated to version 22.2.1, alert statuses and learn statuses are now fully synchronized between the sensor console and the Azure portal. For example, this means that you can close an alert on the Azure portal or the sensor console, and the alert status is updated in both locations.
-
-*Learn* an alert from either the Azure portal or the sensor console to ensure that it's not triggered again the next time the same network traffic is detected.
-
-The sensor console is also synchronized with an on-premises management console, so that alert statuses and learn statuses remain up-to-date across your management interfaces.
-
-For more information, see:
--- [View and manage alerts from the Azure portal](how-to-manage-cloud-alerts.md)-- [View alerts on your sensor](how-to-view-alerts.md)-- [Manage alerts from the sensor console](how-to-manage-the-alert-event.md)-- [Work with alerts on the on-premises management console](how-to-work-with-alerts-on-premises-management-console.md)-
-### Sensor connections restored after certificate rotation
-
-Starting in version 22.2.3, after rotating your certificates, your sensor connections are automatically restored to your central manager, and you don't need to reconnect them manually.
-
-For more information, see [About certificates](how-to-deploy-certificates.md).
-
-### Support diagnostic log enhancements (Public preview)
-
-Starting in sensor version [22.1.1](#new-support-diagnostics-log), you've been able to download a diagnostic log from the sensor console to send to support when you open a ticket.
-
-Now, for locally managed sensors, you can upload that diagnostic log directly on the Azure portal.
--
-> [!TIP]
-> For cloud-connected sensors, starting from sensor version [22.1.3](#march-2022), the diagnostic log is automatically available to support when you open the ticket.
+> If you have an on-premises management console, make sure to also update your on-premises management console to the same version as your sensors.
>
-For more information, see:
--- [Download a diagnostics log for support](how-to-manage-individual-sensors.md#download-a-diagnostics-log-for-support)-- [Upload a diagnostics log for support](how-to-manage-sensors-on-the-cloud.md#upload-a-diagnostics-log-for-support-public-preview)-
-### Improved security for uploading protocol plugins
-
-This version of the sensor provides an improved security for uploading proprietary plugins you've created using the Horizon SDK.
--
-For more information, see [Manage proprietary protocols with Horizon plugins](resources-manage-proprietary-protocols.md).
-
-### Sensor names shown in browser tabs
-
-Starting in sensor version 22.2.3, your sensor's name is displayed in the browser tab, making it easier for you to identify the sensors you're working with.
-
-For example:
--
-### Microsoft Sentinel incident synch with Defender for IoT alerts
-
-The **IoT OT Threat Monitoring with Defender for IoT** solution now ensures that alerts in Defender for IoT are updated with any related incident **Status** changes from Microsoft Sentinel.
-
-This synchronization overrides any status defined in Defender for IoT, in the Azure portal or the sensor console, so that the alert statuses match that of the related incident.
-
-Update your **IoT OT Threat Monitoring with Defender for IoT** solution to use the latest synchronization support, including the new [**AD4IoT-AutoAlertStatusSync** playbook](../../sentinel/iot-advanced-threat-monitoring.md#update-alert-statuses-in-defender-for-iot). After updating the solution, make sure that you also take the [required steps](../../sentinel/iot-advanced-threat-monitoring.md#playbook-prerequisites) to ensure that the new playbook works as expected.
-
-For more information, see:
--- [Integrate Defender for Iot and Sentinel](../../sentinel/iot-advanced-threat-monitoring.md)-- [Update alert statuses playbook](../../sentinel/iot-advanced-threat-monitoring.md#update-alert-statuses-in-defender-for-iot)-- [View and manage alerts on the Defender for IoT portal (Preview)](how-to-manage-cloud-alerts.md)-- [View alerts on your sensor](how-to-view-alerts.md)-
-## June 2022
--- **Sensor software version 22.1.6**: Minor version with maintenance updates for internal sensor components--- **Sensor software version 22.1.5**: Minor version to improve TI installation packages and software updates-
-We've also recently optimized and enhanced our documentation as follows:
--- [Updated appliance catalog for OT environments](#updated-appliance-catalog-for-ot-environments)-- [Documentation reorganization for end-user organizations](#documentation-reorganization-for-end-user-organizations)
+For more information, see [Update Defender for IoT OT monitoring software](update-ot-software.md).
-### Updated appliance catalog for OT environments
-
-We've refreshed and revamped the catalog of supported appliances for monitoring OT environments. These appliances support flexible deployment options for environments of all sizes and can be used to host both the OT monitoring sensor and on-premises management consoles.
-
-Use the new pages as follows:
-
-1. **Understand which hardware model best fits your organization's needs.** For more information, see [Which appliances do I need?](ot-appliance-sizing.md)
+### On-premises monitoring software versions
-1. **Learn about the preconfigured hardware appliances that are available to purchase, or system requirements for virtual machines.** For more information, see [Pre-configured physical appliances for OT monitoring](ot-pre-configured-appliances.md) and [OT monitoring with virtual appliances](ot-virtual-appliances.md).
+Cloud features may be dependent on a specific sensor version. Such features are listed below for the relevant software versions, and are only available for data coming from sensors that have the required version installed, or higher.
- For more information about each appliance type, use the linked reference page, or browse through our new **Reference > OT monitoring appliances** section.
- :::image type="content" source="media/release-notes/appliance-catalog.png" alt-text="Screenshot of the new appliance catalog reference section." lightbox="media/release-notes/appliance-catalog.png":::
+| Version / Patch | Release date | Scope | Supported until |
+|--|--|--|--|
+| **22.2** | | | |
+| 22.2.7 | 10/2022 | Patch | 09/2023 |
+| 22.2.6 | 09/2022 | Patch | 04/2023 |
+| 22.2.5 | 08/2022 | Patch | 04/2023 |
+| 22.2.4 | 07/2022 | Patch | 04/2023 |
+| 22.2.3 | 07/2022 | Major | 04/2023 |
+| **22.1** | | | |
+| 22.1.7 | 07/2022 | Patch | 06/2023 |
+| 22.1.6 | 06/2022 | Patch | 10/2022 |
+| 22.1.5 | 06/2022 | Patch | 10/2022 |
+| 22.1.4 | 04/2022 | Patch | 10/2022 |
+| 22.1.3 | 03/2022 | Patch | 10/2022 |
+| 22.1.2 | 02/2022 | Major | 10/2022 |
+| **10.5** | | | |
+| 10.5.5 | 12/2021 | Patch | 09/2022 |
+| 10.5.4 | 12/2021 | Patch | 09/2022 |
+| 10.5.3 | 10/2021 | Patch | 07/2022 |
+| 10.5.2 | 10/2021 | Major | 07/2022 |
- Reference articles for each appliance type, including virtual appliances, include specific steps to configure the appliance for OT monitoring with Defender for IoT. Generic software installation and troubleshooting procedures are still documented in [Defender for IoT software installation](how-to-install-software.md).
+### Threat intelligence updates
-### Documentation reorganization for end-user organizations
+Threat intelligence updates are continuously available and are independent of specific sensor versions. You don't need to update your sensor version in order to get the latest threat intelligence updates.
-We recently reorganized our Defender for IoT documentation for end-user organizations, highlighting a clearer path for onboarding and getting started.
+For more information, see [Threat intelligence research and packages](how-to-work-with-threat-intelligence-packages.md).
-Check out our new structure to follow through viewing devices and assets, managing alerts, vulnerabilities and threats, integrating with other services, and deploying and maintaining your Defender for IoT system.
+### Support model
-**New and updated articles include**:
+Versions **22.1.7**, **22.2.7**, and any later versions are supported for 1 year from their release. For example, version **22.2.7** was released in **October 2022** and is supported through **September 2023**.
-- [Welcome to Microsoft Defender for IoT for organizations](overview.md)-- [Microsoft Defender for IoT architecture](architecture.md)-- [Quickstart: Get started with Defender for IoT](getting-started.md)-- [Tutorial: Microsoft Defender for IoT trial setup](tutorial-onboarding.md)-- [Plan your sensor connections for OT monitoring](best-practices/plan-network-monitoring.md)-- [About Microsoft Defender for IoT network setup](how-to-set-up-your-network.md)
+Other versions use a legacy support model. For more information, see the tables and sections for each version below.
-> [!NOTE]
-> To send feedback on docs via GitHub, scroll to the bottom of the page and select the **Feedback** option for **This page**. We'd be glad to hear from you!
+> [!IMPORTANT]
+> Manual changes to software packages may have detrimental effects on the sensor and on-premises management console. Microsoft is unable to support deployments with manual changes made to software packages.
>
+### Feature documentation per version
-## April 2022
--- [Extended device property data in the Device inventory](#extended-device-property-data-in-the-device-inventory)-
-### Extended device property data in the Device inventory
-
-**Sensor software version**: 22.1.4
-
-Starting for sensors updated to version 22.1.4, the **Device inventory** page on the Azure portal shows extended data for the following fields:
--- **Description**-- **Tags**-- **Protocols**-- **Scanner**-- **Last Activity**-
-For more information, see [Manage your device inventory from the Azure portal](how-to-manage-device-inventory-for-organizations.md).
-
-## March 2022
-
-**Sensor version**: 22.1.3
--- [Use Azure Monitor workbooks with Microsoft Defender for IoT](#use-azure-monitor-workbooks-with-microsoft-defender-for-iot-public-preview)-- [IoT OT Threat Monitoring with Defender for IoT solution GA](#iot-ot-threat-monitoring-with-defender-for-iot-solution-ga)-- [Edit and delete devices from the Azure portal](#edit-and-delete-devices-from-the-azure-portal-public-preview)-- [Key state alert updates](#key-state-alert-updates-public-preview)-- [Sign out of a CLI session](#sign-out-of-a-cli-session)--
-### Use Azure Monitor workbooks with Microsoft Defender for IoT (Public preview)
-
-[Azure Monitor workbooks](../../azure-monitor/visualize/workbooks-overview.md) provide graphs and dashboards that visually reflect your data, and are now available directly in Microsoft Defender for IoT with data from [Azure Resource Graph](../../governance/resource-graph/index.yml).
-
-In the Azure portal, use the new Defender for IoT **Workbooks** page to view workbooks created by Microsoft and provided out-of-the-box, or create custom workbooks of your own.
--
-For more information, see [Use Azure Monitor workbooks in Microsoft Defender for IoT](workbooks.md).
-
-### IoT OT Threat Monitoring with Defender for IoT solution GA
-
-The IoT OT Threat Monitoring with Defender for IoT solution in Microsoft Sentinel is now GA. In the Azure portal, use this solution to help secure your entire OT environment, whether you need to protect existing OT devices or build security into new OT innovations.
-
-For more information, see [OT threat monitoring in enterprise SOCs](concept-sentinel-integration.md) and [Tutorial: Investigate Microsoft Defender for IoT devices with Microsoft Sentinel](../../sentinel/iot-advanced-threat-monitoring.md).
-
-### Edit and delete devices from the Azure portal (Public preview)
-
-The **Device inventory** page in the Azure portal now supports the ability to edit device details, such as security, classification, location, and more:
--
-For more information, see [Edit device details](how-to-manage-device-inventory-for-organizations.md#edit-device-details).
-
-You can only delete devices from Defender for IoT if they've been inactive for more than 14 days. For more information, see [Delete a device](how-to-manage-device-inventory-for-organizations.md#delete-a-device).
-
-### Key state alert updates (Public preview)
-
-Defender for IoT now supports the Rockwell protocol for PLC operating mode detections.
-
-For the Rockwell protocol, the **Device inventory** pages in both the Azure portal and the sensor console now indicate the PLC operating mode key and run state, and whether the device is currently in a secure mode.
+Version numbers are listed only in this article and in the [What's new in Microsoft Defender for IoT?](whats-new.md) article, and not in detailed descriptions elsewhere in the documentation.
-If the device's PLC operating mode is ever switched to an unsecured mode, such as *Program* or *Remote*, a **PLC Operating Mode Changed** alert is generated.
+To understand whether a feature is supported in your sensor version, check the relevant version section below and its listed features.
-For more information, see [Manage your IoT devices with the device inventory for organizations](how-to-manage-device-inventory-for-organizations.md).
+## Versions 22.2.x
-### Sign out of a CLI session
-Starting in this version, CLI users are automatically signed out of their session after 300 inactive seconds. To sign out manually, use the new `logout` CLI command.
+To update to 22.2.x versions:
-For more information, see [Work with Defender for IoT CLI commands](references-work-with-defender-for-iot-cli-commands.md).
+- **From version 22.1.x**, update directly to the latest **22.2.x** version.
+- **From version 10.x**, first update to the latest **22.1.x** version, and then update again to the latest **22.2.x** version.
+For more information, see [Update Defender for IoT OT monitoring software](update-ot-software.md).
-## February 2022
-
-**Sensor software version**: 22.1.1
--- [New sensor installation wizard](#new-sensor-installation-wizard)-- [Sensor redesign and unified Microsoft product experience](#sensor-redesign-and-unified-microsoft-product-experience)-- [Enhanced sensor Overview page](#enhanced-sensor-overview-page)-- [New support diagnostics log](#new-support-diagnostics-log)-- [Alert updates](#alert-updates)-- [Custom alert updates](#custom-alert-updates)-- [CLI command updates](#cli-command-updates)-- [Update to version 22.1.x](#update-to-version-221x)-- [New connectivity model and firewall requirements](#new-connectivity-model-and-firewall-requirements)-- [Protocol improvements](#protocol-improvements)-- [Modified, replaced, or removed options and configurations](#modified-replaced-or-removed-options-and-configurations)-
-### New sensor installation wizard
-
-Previously, you needed to use separate dialogs to upload a sensor activation file, verify your sensor network configuration, and configure your SSL/TLS certificates.
-
-Now, when installing a new sensor or a new sensor version, our installation wizard provides a streamlined interface to do all these tasks from a single location.
-
-For more information, see [Defender for IoT installation](how-to-install-software.md).
-
-### Sensor redesign and unified Microsoft product experience
-
-The Defender for IoT sensor console has been redesigned to create a unified Microsoft Azure experience and enhance and simplify workflows.
-
-These features are now Generally Available (GA). Updates include the general look and feel, drill-down panes, search and action options, and more. For example:
-
-**Simplified workflows include**:
--- The **Device inventory** page now includes detailed device pages. Select a device in the table and then select **View full details** on the right.-
- :::image type="content" source="media/release-notes/device-inventory-details.png" alt-text="Screenshot of the View full details button." lightbox="media/release-notes/device-inventory-details.png":::
--- Properties updated from the sensor's inventory are now automatically updated in the cloud device inventory.--- The device details pages, accessed either from the **Device map** or **Device inventory** pages, is shown as read only. To modify device properties, select **Edit properties** on the bottom-left.--- The **Data mining** page now includes reporting functionality. While the **Reports** page was removed, users with read-only access can view updates on the **Data mining page** without the ability to modify reports or settings.-
- For admin users creating new reports, you can now toggle on a **Send to CM** option to send the report to a central management console as well. For more information, see [Create a report](how-to-create-data-mining-queries.md#create-a-report).
--- The **System settings** area has been reorganized in to sections for *Basic* settings, settings for *Network monitoring*, *Sensor management*, *Integrations*, and *Import settings*.--- The sensor online help now links to key articles in the Microsoft Defender for IoT documentation.-
-**Defender for IoT maps now include**:
--- A new **Map View** is now shown for alerts and on the device details pages, showing where in your environment the alert or device is found.--- Right-click a device on the map to view contextual information about the device, including related alerts, event timeline data, and connected devices.--- To enable the ability to collapse IT networks, ensure that the **Toggle IT Networks Grouping** option is enabled. This option is now only available from the map.--- The **Simplified Map View** option has been removed.-
-We've also implemented global readiness and accessibility features to comply with Microsoft standards. In the on-premises sensor console, these updates include both high contrast and regular screen display themes and localization for over 15 languages.
-
-For example:
--
-Access global readiness and accessibility options from the **Settings** icon at the top-right corner of your screen:
--
-### Enhanced sensor Overview page
-
-The Defender for IoT sensor portal's **Dashboard** page has been renamed as **Overview**, and now includes data that better highlights system deployment details, critical network monitoring health, top alerts, and important trends and statistics.
--
-The Overview page also now serves as a *black box* to view your overall sensor status in case your outbound connections, such as to the Azure portal, go down.
-
-Create more dashboards using the **Trends & Statistics** page, located under the **Analyze** menu on the left.
-
-### New support diagnostics log
-
-Now you can get a summary of the log and system information that gets added to your support tickets. In the **Backup and Restore** dialog, select **Support Ticket Diagnostics**.
--
-### Alert updates
-
-**In the Azure portal**:
-
-Alerts are now available in Defender for IoT in the Azure portal. Work with alerts to enhance the security and operation of your IoT/OT network.
-
-The new **Alerts** page is currently in Public Preview, and provides:
--- An aggregated, real-time view of threats detected by network sensors.-- Remediation steps for devices and network processes.-- Streaming alerts to Microsoft Sentinel and empower your SOC team.-- Alert storage for 90 days from the time they're first detected.-- Tools to investigate source and destination activity, alert severity and status, MITRE ATT&CK information, and contextual information about the alert.-
-For example:
--
-**On the sensor console**:
-
-On the sensor console, the **Alerts** page now shows details for alerts detected by sensors that are configured with a cloud-connection to Defender for IoT on Azure. Users working with alerts in both Azure and on-premises should understand how alerts are managed between the Azure portal and the on-premises components.
--
-Other alert updates include:
--- **Access contextual data** for each alert, such as events that occurred around the same time, or a map of connected devices. Maps of connected devices are available for sensor console alerts only.--- **Alert statuses** are updated, and, for example, now include a *Closed* status instead of *Acknowledged*.--- **Alert storage** for 90 days from the time that they're first detected.--- The **Backup Activity with Antivirus Signatures Alert**. This new alert warning is triggered for traffic detected between a source device and destination backup server, which is often legitimate backup activity. Critical or major malware alerts are no longer triggered for such activity.--- **During upgrades**, sensor console alerts that are currently archived are deleted. Pinned alerts are no longer supported, so pins are removed for sensor console alerts as relevant.-
-### Custom alert updates
-
-The sensor console's **Custom alert rules** page now provides:
--- Hit count information in the **Custom alert rules** table, with at-a-glance details about the number of alerts triggered in the last week for each rule you've created.--- The ability to schedule custom alert rules to run outside of regular working hours.--- The ability to alert on any field that can be extracted from a protocol using the DPI engine.--- Complete protocol support when creating custom rules, and support for an extensive range of related protocol variables.-
- :::image type="content" source="media/how-to-manage-sensors-on-the-cloud/protocol-support-custom-alerts.png" alt-text="Screenshot of the updated Custom alerts dialog. "lightbox="media/how-to-manage-sensors-on-the-cloud/protocol-support-custom-alerts.png":::
-
-For more information and the updated custom alert procedure, see [Customize alert rules](how-to-accelerate-alert-incident-response.md#customize-alert-rules).
-
-### CLI command updates
+### 22.2.7
-The Defender for Iot sensor software installation is now containerized. With the now-containerized sensor, you can use the *cyberx_host* user to investigate issues with other containers or the operating system, or to send files via FTP.
+**Release date**: 10/2022
-This *cyberx_host* user is available by default and connects to the host machine. If you need to, recover the password for the *cyberx_host* user from the **Sites and sensors** page in Defender for IoT.
+**Supported until**: 09/2023
-As part of the containerized sensor, the following CLI commands have been modified:
+This version includes bug fixes and stability improvements.
-|Legacy name |Replacement |
-|||
-|`cyberx-xsense-reconfigure-interfaces` |`sudo dpkg-reconfigure iot-sensor` |
-|`cyberx-xsense-reload-interfaces` | `sudo dpkg-reconfigure iot-sensor` |
-|`cyberx-xsense-reconfigure-hostname` | `sudo dpkg-reconfigure iot-sensor` |
-| `cyberx-xsense-system-remount-disks` |`sudo dpkg-reconfigure iot-sensor` |
+### 22.2.6
-The `sudo cyberx-xsense-limit-interface-I eth0 -l value` CLI command was removed. This command was used to limit the interface bandwidth that the sensor uses for day-to-day procedures, and is no longer supported.
+**Release date**: 09/2022
-For more information, see [Defender for IoT installation](how-to-install-software.md) and [Work with Defender for IoT CLI commands](references-work-with-defender-for-iot-cli-commands.md).
+**Supported until**: 04/2023
-### Update to version 22.1.x
+This version includes the following new updates and fixes:
-To use all of Defender for IoT's latest features, make sure to update your sensor software versions to 22.1.x.
+- Bug fixes and stability improvements
+- Enhancements to the device type classification algorithm
-If you're on a legacy version, you may need to run a series of updates in order to get to the latest version. You'll also need to update your firewall rules and reactivate your sensor with a new activation file.
+### 22.2.5
-After you've upgraded to version 22.1.x, the new upgrade log can be found at the following path, accessed via SSH and the *cyberx_host* user: `/opt/sensor/logs/legacy-upgrade.log`.
+**Release date**: 08/2022
-For more information, see [Update OT system software](update-ot-software.md).
+**Supported until**: 04/2023
-> [!NOTE]
-> Upgrading to version 22.1.x is a large update, and you should expect the update process to require more time than previous updates.
->
+This version includes minor stability improvements.
-### New connectivity model and firewall requirements
+### 22.2.4
-Defender for IoT version 22.1.x supports a new set of sensor connection methods that provide simplified deployment, improved security, scalability, and flexible connectivity.
+**Release date**: 07/2022
-In addition to [migration steps](connect-sensors.md#migration-for-existing-customers), this new connectivity model requires that you open a new firewall rule. For more information, see:
+**Supported until**: 04/2023
-- **New firewall requirements**: [Sensor access to Azure portal](how-to-set-up-your-network.md#sensor-access-to-azure-portal).-- **Architecture**: [Sensor connection methods](architecture-connections.md)-- **Connection procedures**: [Connect your sensors to Microsoft Defender for IoT](connect-sensors.md)
+This version includes the following new updates and fixes:
-### Protocol improvements
+- [Device inventory enhancements in the sensor console](how-to-investigate-sensor-detections-in-a-device-inventory.md):
-This version of Defender for IoT provides improved support for:
+ - Merge duplicate devices, delete single devices, and delete inactive devices by admin users
+ - **Last seen** value in the device details pane is replaced by **Last activity**
-- Profinet DCP-- Honeywell-- Windows endpoint detection
+- [New parameters for the *devicecves* API](api/management-integration-apis.md): `sensorId`, `score`, and `deviceIds`
-### Modified, replaced, or removed options and configurations
+- [New alert columns with timestamp data](how-to-view-alerts.md): **Last detection**, **First detection**, and **Last activity**
-The following Defender for IoT options and configurations have been moved, removed, and/or replaced:
+### 22.2.3
-- Reports previously found on the **Reports** page are now shown on the **Data Mining** page instead. You can also continue to view data mining information directly from the on-premises management console.
+**Release date**: 07/2022
-- Changing a locally managed sensor name is now supported only by onboarding the sensor to the Azure portal again with the new name. Sensor names can no longer be changed directly from the sensor. For more information, see [Change the name of a sensor](how-to-manage-individual-sensors.md#change-the-name-of-a-sensor).
+**Supported until**: 04/2023
+This version includes the following new updates and fixes:
-## December 2021
+- [New naming convention for hardware profiles](ot-appliance-sizing.md)
+- [PCAP access from the Azure portal](how-to-manage-cloud-alerts.md)
+- [Bi-directional alert synch between sensors and the Azure portal](how-to-manage-cloud-alerts.md#managing-alerts-in-a-hybrid-deployment)
+- [Sensor connections restored after certificate rotation](how-to-deploy-certificates.md)
+- [Upload diagnostic logs for support tickets from the Azure portal](how-to-manage-sensors-on-the-cloud.md#upload-a-diagnostics-log-for-support-public-preview)
+- [Improved security for uploading protocol plugins](resources-manage-proprietary-protocols.md)
+- [Sensor names shown in browser tabs](how-to-manage-individual-sensors.md)
-**Sensor software version**: 10.5.4
+## Versions 22.1.x
-- [Enhanced integration with Microsoft Sentinel (Preview)](#enhanced-integration-with-microsoft-sentinel-preview)-- [Apache Log4j vulnerability](#apache-log4j-vulnerability)-- [Alerting](#alerting)
+Software versions 22.1.x support direct updates to the latest OT monitoring software versions available. For more information, see [Update Defender for IoT OT monitoring software](update-ot-software.md).
-### Enhanced integration with Microsoft Sentinel (Preview)
+### 22.1.7
-The new **IoT OT Threat Monitoring with Defender for IoT solution** is available and provides enhanced capabilities for Microsoft Defender for IoT integration with Microsoft Sentinel. The **IoT OT Threat Monitoring with Defender for IoT solution** is a set of bundled content, including analytics rules, workbooks, and playbooks, configured specifically for Defender for IoT data. This solution currently supports only Operational Networks (OT/ICS).
+**Release date**: 07/2022
-For information on integrating with Microsoft Sentinel, see [Tutorial: Connect Microsoft Defender for IoT with Microsoft Sentinel](../../sentinel/iot-solution.md) and [Tutorial: Investigate and detect threats for IoT devices](../../sentinel/iot-advanced-threat-monitoring.md).
+**Supported until**: 06/2023
-### Apache Log4j vulnerability
+This version includes the following new updates and fixes:
-Version 10.5.4 of Microsoft Defender for IoT mitigates the Apache Log4j vulnerability. For details, see [the security advisory update](https://techcommunity.microsoft.com/t5/microsoft-defender-for-iot/updated-15-dec-defender-for-iot-security-advisory-apache-log4j/m-p/3036844).
+- [Identical passwords for *cyberx_host* and *cyberx* users created during installations and updates](how-to-install-software.md)
-### Alerting
+### 22.1.6
-Version 10.5.4 of Microsoft Defender for IoT delivers important alert enhancements:
+**Release date**: 06/2022
-- Alerts for certain minor events or edge-cases are now disabled.-- For certain scenarios, similar alerts are minimized in a single alert message.
+**Supported until**: 10/2022
-These changes reduce alert volume and enable more efficient targeting and analysis of security and operational events.
+This version includes minor maintenance updates for internal sensor components.
-#### Alerts permanently disabled
+### 22.1.5
-The alerts listed below are permanently disabled with version 10.5.4. Detection and monitoring are still supported for traffic associated with the alerts.
+**Release date**: 06/2022
-**Policy engine alerts**
+**Supported until**: 10/2022
-- RPC Procedure Invocations-- Unauthorized HTTP Server-- Abnormal usage of MAC Addresses
+This version includes minor updates to improve TI installation packages and software updates.
-#### Alerts disabled by default
+### 22.1.4
-The alerts listed below are disabled by default with version 10.5.4. You can re-enable the alerts from the Support page of the sensor console, if necessary.
+**Release date**: 04/2022
-**Anomaly engine alert**
-- Abnormal Number of Parameters in HTTP Header-- Abnormal HTTP Header Length-- Illegal HTTP Header Content
+**Supported until**: 10/2022
-**Operational engine alerts**
-- HTTP Client Error-- RPC Operation Failed
+This version includes the following new updates and fixes:
-**Policy engine alerts**
+- [Extended device property data in the **Device inventory** page on the Azure portal](how-to-manage-device-inventory-for-organizations.md), for the **Description**, **Tags**, **Protocols**, **Scanner**, and **Last Activity** fields
-Disabling these alerts also disables monitoring of related traffic. Specifically, this traffic won't be reported in Data Mining reports.
+### 22.1.3
-- Illegal HTTP Communication alert and HTTP Connections Data Mining traffic-- Unauthorized HTTP User Agent alert and HTTP User Agents Data Mining traffic-- Unauthorized HTTP SOAP Action and HTTP SOAP Actions Data Mining traffic
+**Release date**: 03/2022
-#### Updated alert functionality
+**Supported until**: 10/2022
-**Unauthorized Database Operation alert**
-Previously, this alert covered DDL and DML alerting and Data Mining reporting. Now:
-- DDL traffic: alerting and monitoring are supported.-- DML traffic: Monitoring is supported. Alerting isn't supported.
+This version includes the following new updates and fixes:
-**New Asset Detected alert**
-This alert is disabled for new devices detected in IT subnets. The New Asset Detected alert is still triggered for new devices discovered in OT subnets. OT subnets are detected automatically and can be updated by users if necessary.
+- [Diagnostic logs automatically available to support for cloud-connected sensors](how-to-manage-individual-sensors.md#download-a-diagnostics-log-for-support)
+- [Rockwell protocol: Device inventory shows PLC operating mode key state, run state, and security mode](how-to-manage-device-inventory-for-organizations.md)
+- [Automatic CLI session timeouts](references-work-with-defender-for-iot-cli-commands.md)
+- [Sensor health widgets in the Azure portal](how-to-manage-sensors-on-the-cloud.md#understand-sensor-health-public-preview)
-### Minimized alerting
+### 22.1.1
-Alert triggering for specific scenarios has been minimized to help reduce alert volume and simplify alert investigation. In these scenarios, if a device performs repeated activity on targets, an alert is triggered once. Previously, a new alert was triggered each time the same activity was carried out.
+**Release date**: 02/2022
-This new functionality is available on the following alerts:
+**Supported until**: 10/2022
-- Port Scan Detected alerts, based on activity of the source device (generated by the Anomaly engine)-- Malware alerts, based on activity of the source device. (generated by the Malware engine). -- Suspicion of Denial of Service Attack alerts, based on activity of the destination device (generated by the Malware engine)
+This version includes the following new updates and fixes:
-## November 2021
+- [New sensor installation wizard](how-to-install-software.md)
-**Sensor software version**: 10.5.3
+- [Sensor redesign and unified Microsoft product experience](how-to-manage-individual-sensors.md)
-The following feature enhancements are available with version 10.5.3 of Microsoft Defender for IoT.
+- [Enhanced sensor Overview page](how-to-manage-individual-sensors.md)
-- The on-premises management console, has a new ServiceNow integration API. For more information, see [Integration API reference for on-premises management consoles (Public preview)](api/management-integration-apis.md).
+- [New sensor diagnostics log](how-to-manage-individual-sensors.md#download-a-diagnostics-log-for-support)
-- Enhancements have been made to the network traffic analysis of multiple OT and ICS protocol dissectors.
+- [Alert updates](how-to-view-alerts.md):
-- As part of our automated maintenance, archived alerts that are over 90 days old will now be automatically deleted.
+ - Contextual data for each alert
+ - Refreshed alert statuses
+ - Alert storage updates
+ - A new **Backup Activity with Antivirus Signatures** alert
+ - Alert management changes during software updates
-- Many enhancements have been made to the exporting of alert metadata based on customer feedback.
+- [Enhancements for creating custom alerts on the sensor](how-to-accelerate-alert-incident-response.md#customize-alert-rules): Hit count data, advanced scheduling options, and more supported fields and protocols
-## October 2021
+- [Modified CLI commands](references-work-with-defender-for-iot-cli-commands.md): Including the following new command:
-**Sensor software version**: 10.5.2
+ - `sudo dpkg-reconfigure iot-sensor`, which replaces the legacy `cyberx-xsense-reconfigure-interfaces`, `cyberx-xsense-reload-interfaces`, `cyberx-xsense-reconfigure-hostname`, and `cyberx-xsense-system-remount-disks` commands
-The following feature enhancements are available with version 10.5.2 of Microsoft Defender for IoT.
+- [Refreshed update process and update log](update-ot-software.md)
-- [PLC operating mode detections (Public Preview)](#plc-operating-mode-detections-public-preview)
+- [New connectivity models](architecture-connections.md)
-- [PCAP API](#pcap-api)
+- [New firewall requirements](how-to-set-up-your-network.md#sensor-access-to-azure-portal)
-- [On-premises Management Console Audit](#on-premises-management-console-audit)
+- [Improved support for Profinet DCP, Honeywell, and Windows endpoint detection protocols](concept-supported-protocols.md)
-- [Webhook Extended](#webhook-extended)
+- [Sensor reports now accessible from the **Data Mining** page](how-to-create-data-mining-queries.md)
-- [Unicode support for certificate passphrases](#unicode-support-for-certificate-passphrases)
+- [Updated process for sensor name changes](how-to-manage-individual-sensors.md#change-the-name-of-a-sensor)
-### PLC operating mode detections (Public Preview)
+## Versions 10.5.x
-Users can now view PLC operating mode states, changes, and risks. The PLC Operating mode consists of the PLC logical Run state and the physical Key state, if a physical key switch exists on the PLC.
+To update your software to the latest version available, first update to version 22.1.7, and then update again to the latest 22.2.x version. For more information, see [Update Defender for IoT OT monitoring software](update-ot-software.md).
-This new capability helps improve security by detecting *unsecure* PLCs, and as a result prevents malicious attacks such as PLC Program Downloads. The 2017 Triton attack on a petrochemical plant illustrates the effects of such risks.
-This information also provides operational engineers with critical visibility into the operational mode of enterprise PLCs.
+### 10.5.5
-#### What is an unsecure mode?
+**Release date**: 12/2021
-If the Key state is detected as *Program* or the *Run state* is detected as either *Remote* or *Program*, the PLC is defined by Defender for IoT as *unsecure*.
+**Supported until**: 09/2022
-#### Visibility and risk assessment
+This version includes minor maintenance updates.
-- Use the Device Inventory to view the PLC state of organizational PLCs, and contextual device information. Use the Device Inventory Settings dialog box to add this column to the Inventory.
+### 10.5.4
- :::image type="content" source="media/release-notes/device-inventory-plc.png" alt-text="Device inventory showing PLC operating mode.":::
+**Release date**: 12/2021
-- View PLC secure status and last change information per PLC in the Attributes section of the Device Properties screen. If the *Key state* is detected as *Program* or the *Run state* is detected as either *Remote* or *Program*, the PLC is defined by Defender for IoT as *unsecure*. The Device Properties PLC Secured option will read false.
+**Supported until**: 09/2022
- :::image type="content" source="media/release-notes/attributes-plc.png" alt-text="Attributes screen showing PLC information.":::
+This version includes the following new updates and fixes:
-- View all network PLC Run and Key State statuses by creating a Data Mining with PLC operating mode information.
+- [New Microsoft Sentinel solution for Defender for IoT](../../sentinel/iot-solution.md)
+- [Mitigation for the Apache Log4j vulnerability](https://techcommunity.microsoft.com/t5/microsoft-defender-for-iot/updated-15-dec-defender-for-iot-security-advisory-apache-log4j/m-p/3036844)
+- [Alerts for minor events and edge cases disabled or minimized](alert-engine-messages.md)
- :::image type="content" source="media/release-notes/data-mining-plc.png" alt-text="Data inventory screen showing PLC option.":::
+### 10.5.3
-- Use the Risk Assessment Report to review the number of network PLCs in the unsecure mode, and additional information you can use to mitigate unsecure PLC risks.
+**Release date**: 10/2021
-### PCAP API
+**Supported until**: 07/2022
-The new PCAP API lets the user retrieve PCAP files from the sensor via the on-premises management console with, or without direct access to the sensor itself.
+This version includes the following new updates and fixes:
-### On-premises Management Console audit
+- [New integration APIs](api/management-integration-apis.md)
+- [Network traffic analysis enhancements for multiple OT and ICS protocols](concept-supported-protocols.md)
+- [Automatic deletion for older, archived alerts](how-to-view-alerts.md)
+- [Export alert enhancements](how-to-work-with-alerts-on-premises-management-console.md#export-alert-information)
-Audit logs for the on-premises management console can now be exported to facilitate investigations into what changes were made, and by who.
+### 10.5.2
-### Webhook extended
+**Release date**: 10/2021
-Webhook extended can be used to send extra data to the endpoint. The extended feature includes all of the information in the Webhook alert and adds the following information to the report:
+**Supported until**: 07/2022
-- sensorID-- sensorName-- zoneID-- zoneName-- siteID-- siteName-- sourceDeviceAddress-- destinationDeviceAddress-- remediationSteps-- handled-- additionalInformation
+This version includes the following new updates and fixes:
-### Unicode support for certificate passphrases
+- [PLC operating mode detections](how-to-create-risk-assessment-reports.md)
+- [New PCAP API](api/management-alert-apis.md#pcap-request-alert-pcap) (see the sketch after this list)
+- [On-premises management console audit](how-to-troubleshoot-the-sensor-and-on-premises-management-console.md#export-audit-logs-for-troubleshooting)
+- [Support for Webhook extended to send data to endpoints](how-to-forward-alert-information-to-partners.md#webhook-extended)
+- [Unicode support for certificate passphrases](how-to-deploy-certificates.md)
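+As referenced above, the new PCAP API can be called through the on-premises management console. The following PowerShell sketch is hypothetical: the endpoint path, alert ID, and authorization header are assumptions, so check the alert management API reference for the exact contract.
+
+ ```
+ # Hypothetical sketch: retrieve the PCAP file associated with alert ID 1234.
+ $headers = @{ Authorization = "<access-token>" }
+ Invoke-RestMethod -Uri "https://<console-address>/external/v2/alerts/pcap/1234" -Headers $headers -OutFile alert-1234.pcap
+ ```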
-Unicode characters are now supported when working with sensor certificate passphrases. For more information, see [Certificates for appliance encryption and authentication (OT appliances)](how-to-deploy-certificates.md#certificates-for-appliance-encryption-and-authentication-ot-appliances).
## Next steps
-[Getting started with Defender for IoT](getting-started.md)
+For more information about the features listed in this article, see [What's new in Microsoft Defender for IoT?](whats-new.md) and [What's new archive for Microsoft Defender for IoT for organizations](release-notes-archive.md).
defender-for-iot Update Ot Software https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/update-ot-software.md
This article describes how to update Defender for IoT software versions on OT se
You can purchase preconfigured appliances for your sensors and on-premises management consoles, or install software on your own hardware machines. In either case, you'll need to update software versions to use new features for OT sensors and on-premises management consoles.
-For more information, see [Which appliances do I need?](ot-appliance-sizing.md), [Pre-configured physical appliances for OT monitoring](ot-pre-configured-appliances.md), and [What's new in Microsoft Defender for IoT?](release-notes.md).
+For more information, see [Which appliances do I need?](ot-appliance-sizing.md), [Pre-configured physical appliances for OT monitoring](ot-pre-configured-appliances.md), and [OT monitoring software release notes](release-notes.md).
## Legacy version updates vs. recent version updates
In such cases, make sure to update your on-premises management consoles *before*
You can update software on your sensors individually, directly from each sensor console, or in bulk from the on-premises management console. Select one of the following tabs for the steps required in each method. > [!NOTE]
-> If you are updating from software versions earlier than [22.1.x](release-notes.md#update-to-version-221x), note that this version has a large update with more complicated background processes. Expect this update to take more time than earlier updates have required.
+> If you are updating from software versions earlier than [22.1.x](whats-new.md#update-to-version-221x), note that this version has a large update with more complicated background processes. Expect this update to take more time than earlier updates have required.
> > [!IMPORTANT]
defender-for-iot Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/whats-new.md
+
+ Title: What's new in Microsoft Defender for IoT
+description: This article describes features available in Microsoft Defender for IoT, across both OT and Enterprise IoT networks, and both on-premises and in the Azure portal.
+ Last updated : 09/15/2022
+# What's new in Microsoft Defender for IoT?
+
+This article describes features available in Microsoft Defender for IoT, across both OT and Enterprise IoT networks, both on-premises and in the Azure portal, and for versions released in the last nine months.
+
+Features released earlier than nine months ago are described in the [What's new archive for Microsoft Defender for IoT for organizations](release-notes-archive.md). For more information specific to OT monitoring software versions, see [OT monitoring software release notes](release-notes.md).
+
+> [!NOTE]
+> Noted features listed below are in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include other legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+>
+
+## November 2022
+
+|Service area |Updates |
+|||
+|**OT networks** | [New OT monitoring software release notes](#new-ot-monitoring-software-release-notes) |
+
+### New OT monitoring software release notes
+
+Defender for IoT documentation now has a new [release notes](release-notes.md) page dedicated to OT monitoring software, with details about our version support models and update recommendations.
++
+We continue to update this article, our main **What's new** page, with new features and enhancements for both OT and Enterprise IoT networks. New items include both on-premises and cloud features, and are listed by month.
+
+In contrast, the new [OT monitoring software release notes](release-notes.md) page lists only OT network monitoring updates that require you to update your on-premises software. Items are listed by major and patch versions, with an aggregated table of versions, dates, and scope.
+
+For more information, see [OT monitoring software release notes](release-notes.md).
+
+## October 2022
+
+|Service area |Updates |
+|||
+|**OT networks** | [Enhanced OT monitoring alert reference](#enhanced-ot-monitoring-alert-reference) |
+
+### Enhanced OT monitoring alert reference
+
+Our alert reference article now includes the following details for each alert:
+
+- **Alert category**, helpful when you want to investigate alerts that are aggregated by a specific activity or configure SIEM rules to generate incidents based on specific activities
+
+- **Alert threshold**, for relevant alerts. Thresholds indicate the specific point at which an alert is triggered. The *cyberx* user can modify alert thresholds as needed from the sensor's **Support** page.
+
+For more information, see [OT monitoring alert types and descriptions](alert-engine-messages.md), specifically [Supported alert categories](alert-engine-messages.md#supported-alert-categories).
+
+## September 2022
+
+|Service area |Updates |
+|||
+|**OT networks** |**All supported OT sensor software versions**: <br>- [Device vulnerabilities from the Azure portal](#device-vulnerabilities-from-the-azure-portal-public-preview)<br>- [Security recommendations for OT networks](#security-recommendations-for-ot-networks-public-preview)<br><br> **All OT sensor software versions 22.x**: [Updates for Azure cloud connection firewall rules](#updates-for-azure-cloud-connection-firewall-rules-public-preview) <br><br>**Sensor software version 22.2.7**: <br> - Bug fixes and stability improvements <br><br> **Sensor software version 22.2.6**: <br> - Bug fixes and stability improvements <br>- Enhancements to the device type classification algorithm<br><br>**Microsoft Sentinel integration**: <br>- [Investigation enhancements with IoT device entities](#investigation-enhancements-with-iot-device-entities-in-microsoft-sentinel)<br>- [Updates to the Microsoft Defender for IoT solution](#updates-to-the-microsoft-defender-for-iot-solution-in-microsoft-sentinels-content-hub)|
+
+### Security recommendations for OT networks (Public preview)
+
+Defender for IoT now provides security recommendations to help customers manage their OT/IoT network security posture. Defender for IoT recommendations help users form actionable, prioritized mitigation plans that address the unique challenges of OT/IoT networks. Use recommendations to lower your network's risk and attack surface.
+
+You can see the following security recommendations from the Azure portal for detected devices across your networks:
+
+- **Review PLC operating mode**. Devices with this recommendation are found with PLCs set to unsecure operating mode states. We recommend setting PLC operating modes to the **Secure Run** state if access is no longer required to the PLC to reduce the threat of malicious PLC programming.
+
+- **Review unauthorized devices**. Devices with this recommendation must be identified and authorized as part of the network baseline. We recommend taking action to identify any indicated devices. Disconnect any devices from your network that remain unknown even after investigation to reduce the threat of rogue or potentially malicious devices.
+
+Access security recommendations from one of the following locations:
+
+- The **Recommendations** page, which displays all current recommendations across all detected OT devices.
+
+- The **Recommendations** tab on a device details page, which displays all current recommendations for the selected device.
+
+From either location, select a recommendation to drill down further and view lists of all detected OT devices that are currently in a *healthy* or *unhealthy* state, according to the selected recommendation. From the **Unhealthy devices** or **Healthy devices** tab, select a device link to jump to the selected device details page. For example:
++
+For more information, see [View the device inventory](how-to-manage-device-inventory-for-organizations.md#view-the-device-inventory).
+
+### Device vulnerabilities from the Azure portal (Public preview)
+
+Defender for IoT now provides vulnerability data in the Azure portal for detected OT network devices. Vulnerability data is based on the repository of standards-based vulnerability data documented at the [US government National Vulnerability Database (NVD)](https://www.nist.gov/programs-projects/national-vulnerability-database-nvd).
+
+Access vulnerability data in the Azure portal from the following locations:
+
+- On a device details page, select the **Vulnerabilities** tab to view current vulnerabilities on the selected device. For example, from the **Device inventory** page, select a specific device and then select **Vulnerabilities**.
+
+ For more information, see [View the device inventory](how-to-manage-device-inventory-for-organizations.md#view-the-device-inventory).
+
+- A new **Vulnerabilities** workbook displays vulnerability data across all monitored OT devices. Use the **Vulnerabilities** workbook to view data like CVE by severity or vendor, and full lists of detected vulnerabilities and vulnerable devices and components.
+
+ Select an item in the **Device vulnerabilities**, **Vulnerable devices**, or **Vulnerable components** tables to view related information in the tables on the right.
+
+ For example:
+
+ :::image type="content" source="media/release-notes/vulnerabilities-workbook.png" alt-text="Screenshot of a Vulnerabilities workbook in Defender for IoT.":::
+
+ For more information, see [Use Azure Monitor workbooks in Microsoft Defender for IoT](workbooks.md).
+
+### Updates for Azure cloud connection firewall rules (Public preview)
+
+OT network sensors connect to Azure to provide alert and device data and sensor health messages, access threat intelligence packages, and more. Connected Azure services include IoT Hub, Blob Storage, Event Hubs, and the Microsoft Download Center.
+
+For OT sensors with software versions 22.x and higher, Defender for IoT now supports increased security when adding outbound allow rules for connections to Azure. Now you can define your outbound allow rules to connect to Azure without using wildcards.
+
+When defining outbound allow rules to connect to Azure, you'll need to enable HTTPS traffic to each of the required endpoints on port 443. Outbound allow rules are defined once for all OT sensors onboarded to the same subscription.
+
+For supported sensor versions, download the full list of required secure endpoints from the following locations in the Azure portal:
+
+- **A successful sensor registration page**: After onboarding a new OT sensor, version 22.x, the successful registration page now provides instructions for next steps, including a link to the endpoints you'll need to add as secure outbound allow rules on your network. Select the **Download endpoint details** link to download the JSON file.
+
+ For example:
+
+ :::image type="content" source="media/release-notes/download-endpoints.png" alt-text="Screenshot of a successful OT sensor registration page with the download endpoints link.":::
+
+- **The Sites and sensors page**: Select an OT sensor with software versions 22.x or higher, or a site with one or more supported sensor versions. Then, select **More actions** > **Download endpoint details** to download the JSON file. For example:
+
+ :::image type="content" source="media/release-notes/download-endpoints-sites-sensors.png" alt-text="Screenshot of the Sites and sensors page with the download endpoint details link.":::
+
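+After downloading the endpoint details file, you can sanity-check that your outbound allow rules work as expected. The following PowerShell sketch is illustrative only: the `endpoint-details.json` file name and the `Endpoint` property are assumptions, so inspect your downloaded JSON for the actual schema before adapting it.
+
+ ```
+ # Minimal sketch: read the downloaded endpoint list and test outbound HTTPS (port 443).
+ # Assumption: the JSON file exposes a collection of host names under an "Endpoint" property.
+ $endpoints = (Get-Content .\endpoint-details.json -Raw | ConvertFrom-Json).Endpoint
+ foreach ($endpoint in $endpoints) {
+     # TcpTestSucceeded is True when the outbound connection is allowed
+     Test-NetConnection -ComputerName $endpoint -Port 443 |
+         Select-Object ComputerName, TcpTestSucceeded
+ }
+ ```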
+For more information, see:
+
+- [Tutorial: Get started with Microsoft Defender for IoT for OT security](tutorial-onboarding.md)
+- [Manage sensors with Defender for IoT in the Azure portal](how-to-manage-sensors-on-the-cloud.md)
+- [Networking requirements](how-to-set-up-your-network.md#sensor-access-to-azure-portal)
+
+### Investigation enhancements with IoT device entities in Microsoft Sentinel
+
+Defender for IoT's integration with Microsoft Sentinel now supports an IoT device entity page. When investigating incidents and monitoring IoT security in Microsoft Sentinel, you can now identify your most sensitive devices and jump directly to more details on each device entity page.
+
+The IoT device entity page provides contextual device information about an IoT device, with basic device details and device owner contact information. Device owners are defined by site in the **Sites and sensors** page in Defender for IoT.
+
+The IoT device entity page can help prioritize remediation based on device importance and business impact, as per each alert's site, zone, and sensor. For example:
++
+You can also now hunt for vulnerable devices on the Microsoft Sentinel **Entity behavior** page. For example, view the top five IoT devices with the highest number of alerts, or search for a device by IP address or device name:
++
+For more information, see [Investigate further with IoT device entities](../../sentinel/iot-advanced-threat-monitoring.md#investigate-further-with-iot-device-entities) and [Site management options from the Azure portal](how-to-manage-sensors-on-the-cloud.md#site-management-options-from-the-azure-portal).
+
+### Updates to the Microsoft Defender for IoT solution in Microsoft Sentinel's content hub
+
+This month, we've released version 2.0 of the **Microsoft Defender for IoT** solution in Microsoft Sentinel's content hub, previously known as the **IoT/OT Threat Monitoring with Defender for IoT** solution.
+
+Updates in this version of the solution include:
+
+- **A name change**. If you'd previously installed the **IoT/OT Threat Monitoring with Defender for IoT** solution in your Microsoft Sentinel workspace, the solution is automatically renamed to **Microsoft Defender for IoT**, even if you don't update the solution.
+
+- **Workbook improvements**: The **Defender for IoT** workbook now includes:
+
+ - A new **Overview** dashboard with key metrics on the device inventory, threat detection, and security posture. For example:
+
+ :::image type="content" source="media/release-notes/sentinel-workbook-overview.png" alt-text="Screenshot of the new Overview tab in the IoT OT Threat Monitoring with Defender for IoT workbook." lightbox="media/release-notes/sentinel-workbook-overview.png":::
+
+ - A new **Vulnerabilities** dashboard with details about CVEs shown in your network and their related vulnerable devices. For example:
+
+ :::image type="content" source="media/release-notes/sentinel-workbook-vulnerabilities.png" alt-text="Screenshot of the new Vulnerability tab in the IoT OT Threat Monitoring with Defender for IoT workbook." lightbox="media/release-notes/sentinel-workbook-vulnerabilities.png":::
+
+ - Improvements on the **Device inventory** dashboard, including access to device recommendations, vulnerabilities, and direct links to the Defender for IoT device details pages. The **Device inventory** dashboard in the **IoT/OT Threat Monitoring with Defender for IoT** workbook is fully aligned with the Defender for IoT [device inventory data](how-to-manage-device-inventory-for-organizations.md).
+
+- **Playbook updates**: The **Microsoft Defender for IoT** solution now supports the following SOC automation functionality with new playbooks:
+
+ - **Automation with CVE details**: Use the *AD4IoT-CVEAutoWorkflow* playbook to enrich incident comments with CVEs of related devices based on Defender for IoT data. The incidents are triaged, and if the CVE is critical, the asset owner is notified about the incident by email.
+
+ - **Automation for email notifications to device owners**. Use the *AD4IoT-SendEmailtoIoTOwner* playbook to have a notification email automatically sent to a device's owner about new incidents. Device owners can then reply to the email to update the incident as needed. Device owners are defined at the site level in Defender for IoT.
+
+ - **Automation for incidents with sensitive devices**: Use the *AD4IoT-AutoTriageIncident* playbook to automatically update an incident's severity based on the devices involved in the incident, and their sensitivity level or importance to your organization. For example, any incident involving a sensitive device can be automatically escalated to a higher severity level.
+
+For more information, see [Investigate Microsoft Defender for IoT incidents with Microsoft Sentinel](../../sentinel/iot-advanced-threat-monitoring.md?bc=%2fazure%2fdefender-for-iot%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fdefender-for-iot%2forganizations%2ftoc.json).
+
+## August 2022
+
+|Service area |Updates |
+|||
+|**OT networks** |**Sensor software version 22.2.5**: Minor version with stability improvements<br><br>**Sensor software version 22.2.4**: [New alert columns with timestamp data](#new-alert-columns-with-timestamp-data)<br><br>**Sensor software version 22.1.3**: [Sensor health from the Azure portal (Public preview)](#sensor-health-from-the-azure-portal-public-preview) |
+
+### New alert columns with timestamp data
+
+Starting with OT sensor version 22.2.4, Defender for IoT alerts in the Azure portal and the sensor console now show the following columns and data:
+
+- **Last detection**. Defines the last time the alert was detected in the network, and replaces the **Detection time** column.
+
+- **First detection**. Defines the first time the alert was detected in the network.
+
+- **Last activity**. Defines the last time the alert was changed, including manual updates for severity or status, or automated changes for device updates or device/alert de-duplication.
+
+The **First detection** and **Last activity** columns aren't displayed by default. Add them to your **Alerts** page as needed.
+
+> [!TIP]
+> If you're also a Microsoft Sentinel user, you'll be familiar with similar data from your Log Analytics queries. The new alert columns in Defender for IoT are mapped as follows:
+>
+> - The Defender for IoT **Last detection** time is similar to the Log Analytics **EndTime**
+> - The Defender for IoT **First detection** time is similar to the Log Analytics **StartTime**
+> - The Defender for IoT **Last activity** time is similar to the Log Analytics **TimeGenerated**
+
+For more information, see:
+
+- [View alerts on the Defender for IoT portal](how-to-manage-cloud-alerts.md)
+- [View alerts on your sensor](how-to-view-alerts.md)
+- [OT threat monitoring in enterprise SOCs](concept-sentinel-integration.md)
+
+### Sensor health from the Azure portal (Public preview)
+
+For OT sensor versions 22.1.3 and higher, you can use the new sensor health widgets and table column data to monitor sensor health directly from the **Sites and sensors** page on the Azure portal.
++
+We've also added a sensor details page, where you drill down to a specific sensor from the Azure portal. On the **Sites and sensors** page, select a specific sensor name. The sensor details page lists basic sensor data, sensor health, and any sensor settings applied.
+
+For more information, see [Understand sensor health (Public preview)](how-to-manage-sensors-on-the-cloud.md#understand-sensor-health-public-preview) and [Sensor health message reference](sensor-health-messages.md).
+
+## July 2022
+
+|Service area |Updates |
+|||
+|**Enterprise IoT networks** | - [Enterprise IoT and Defender for Endpoint integration in GA](#enterprise-iot-and-defender-for-endpoint-integration-in-ga) |
+|**OT networks** |**Sensor software version 22.2.4**: <br>- [Device inventory enhancements](#device-inventory-enhancements)<br>- [Enhancements for the ServiceNow integration API](#enhancements-for-the-servicenow-integration-api)<br><br>**Sensor software version 22.2.3**:<br>- [OT appliance hardware profile updates](#ot-appliance-hardware-profile-updates)<br>- [PCAP access from the Azure portal](#pcap-access-from-the-azure-portal-public-preview)<br>- [Bi-directional alert synch between sensors and the Azure portal](#bi-directional-alert-synch-between-sensors-and-the-azure-portal-public-preview)<br>- [Sensor connections restored after certificate rotation](#sensor-connections-restored-after-certificate-rotation)<br>- [Support diagnostic log enhancements](#support-diagnostic-log-enhancements-public-preview)<br>- [Improved security for uploading protocol plugins](#improved-security-for-uploading-protocol-plugins)<br>- [Sensor names shown in browser tabs](#sensor-names-shown-in-browser-tabs)<br><br>**Sensor software version 22.1.7**: <br>- [Same passwords for *cyberx_host* and *cyberx* users](#same-passwords-for-cyberx_host-and-cyberx-users) |
+|**Cloud-only features** | - [Microsoft Sentinel incident synch with Defender for IoT alerts](#microsoft-sentinel-incident-synch-with-defender-for-iot-alerts) |
+
+### Enterprise IoT and Defender for Endpoint integration in GA
+
+The Enterprise IoT integration with Microsoft Defender for Endpoint is now in General Availability (GA). With this update, we've made the following updates and improvements:
+
+- Onboard an Enterprise IoT plan directly in Defender for Endpoint. For more information, see [Manage your subscriptions](how-to-manage-subscriptions.md) and the [Defender for Endpoint documentation](/microsoft-365/security/defender-endpoint/enable-microsoft-defender-for-iot-integration).
+
+- Seamless integration with Microsoft Defender for Endpoint to view detected Enterprise IoT devices, and their related alerts, vulnerabilities, and recommendations in the Microsoft 365 Security portal. For more information, see the [Enterprise IoT tutorial](tutorial-getting-started-eiot-sensor.md) and the [Defender for Endpoint documentation](/microsoft-365/security/defender-endpoint/enable-microsoft-defender-for-iot-integration). You can continue to view detected Enterprise IoT devices on the Defender for IoT **Device inventory** page in the Azure portal.
+
+- All Enterprise IoT sensors are now automatically added to the same site in Defender for IoT, named **Enterprise network**. When onboarding a new Enterprise IoT device, you only need to define a sensor name and select your subscription, without defining a site or zone.
+
+> [!NOTE]
+> The Enterprise IoT network sensor and all detections remain in Public Preview.
+
+### Same passwords for cyberx_host and cyberx users
+
+During OT monitoring software installations and updates, the **cyberx** user is assigned a random password. When you update from version 10.x.x to version 22.1.7, the **cyberx_host** user is assigned the same password as the **cyberx** user.
+
+For more information, see [Install OT agentless monitoring software](how-to-install-software.md) and [Update Defender for IoT OT monitoring software](update-ot-software.md).
+
+### Device inventory enhancements
+
+Starting in OT sensor version 22.2.4, you can take the following actions from the sensor console's **Device inventory** page:
+
+- **Merge duplicate devices**. You may need to merge devices if the sensor has discovered separate network entities that are associated with a single, unique device. Examples of this scenario might include a PLC with four network cards, a laptop with both WiFi and a physical network card, or a single workstation with multiple network cards.
+
+- **Delete single devices**. Now, you can delete a single device that hasn't communicated for at least 10 minutes.
+
+- **Delete inactive devices by admin users**. Now, all admin users, in addition to the **cyberx** user, can delete inactive devices.
+
+Also starting in version 22.2.4, in the sensor console's **Device inventory** page, the **Last seen** value in the device details pane is replaced by **Last activity**. For example:
++
+For more information, see [Manage your OT device inventory from a sensor console](how-to-investigate-sensor-detections-in-a-device-inventory.md).
+
+### Enhancements for the ServiceNow integration API
+
+OT sensor version 22.2.4 provides enhancements for the `devicecves` API, which gets details about the CVEs found for a given device.
+
+Now you can add any of the following parameters to your query to fine-tune your results:
+
+- "**sensorId**" - Shows results from a specific sensor, as defined by the given sensor ID.
+- "**score**" - Determines a minimum CVE score to be retrieved. All results will have a CVE score equal to or higher than the given value. Default = **0**.
+- "**deviceIds**" - A comma-separated list of device IDs from which you want to show results. For example: **1232,34,2,456**
+
+For more information, see [Integration API reference for on-premises management consoles (Public preview)](api/management-integration-apis.md).
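+
+For example, a hypothetical request that combines all three parameters might look like the following sketch. The exact host, route, and token format are defined in the integration API reference; every value here is a placeholder:
+
+```bash
+# Query CVEs from the on-premises management console, filtered to one sensor, a minimum score of 7, and specific devices
+curl -k -H "Authorization: <access-token>" \
+  "https://<management-console-address>/external/v3/integration/devicecves/<timestamp>?sensorId=1&score=7&deviceIds=1232,34,2,456"
+```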
+
+### OT appliance hardware profile updates
+
+We've refreshed the naming conventions for our OT appliance hardware profiles for greater transparency and clarity.
+
+The new names reflect both the *type* of profile, including *Corporate*, *Enterprise*, and *Production line*, and the related disk storage size.
+
+Use the following table to understand the mapping between legacy hardware profile names and the current names used in the updated software installation:
+
+|Legacy name |New name | Description |
+||||
+|**Corporate** | **C5600** | A *Corporate* environment, with: <br>16 Cores<br>32 GB RAM<br>5.6 TB disk storage |
+|**Enterprise** | **E1800** | An *Enterprise* environment, with: <br>8 Cores<br>32 GB RAM<br>1.8 TB disk storage |
+|**SMB** | **L500** | A *Production line* environment, with: <br>4 Cores<br>8 GB RAM<br>500 GB disk storage |
+|**Office** | **L100** | A *Production line* environment, with: <br>4 Cores<br>8 GB RAM<br>100 GB disk storage |
+|**Rugged** | **L64** | A *Production line* environment, with: <br>4 Cores<br>8 GB RAM<br>64 GB disk storage |
+
+We also now support new enterprise hardware profiles for sensors supporting both 500 GB and 1 TB disk sizes.
+
+For more information, see [Which appliances do I need?](ot-appliance-sizing.md)
+
+### PCAP access from the Azure portal (Public preview)
+
+Now you can access the raw traffic files, known as packet capture files or PCAP files, directly from the Azure portal. This feature supports SOC or OT security engineers who want to investigate alerts from Defender for IoT or Microsoft Sentinel, without having to access each sensor separately.
++
+PCAP files are downloaded to your Azure storage.
+
+For more information, see [View and manage alerts from the Azure portal](how-to-manage-cloud-alerts.md).
+
+### Bi-directional alert synch between sensors and the Azure portal (Public preview)
+
+For sensors updated to version 22.2.1, alert statuses and learn statuses are now fully synchronized between the sensor console and the Azure portal. For example, this means that you can close an alert on the Azure portal or the sensor console, and the alert status is updated in both locations.
+
+*Learn* an alert from either the Azure portal or the sensor console to ensure that it's not triggered again the next time the same network traffic is detected.
+
+The sensor console is also synchronized with an on-premises management console, so that alert statuses and learn statuses remain up-to-date across your management interfaces.
+
+For more information, see:
+
+- [View and manage alerts from the Azure portal](how-to-manage-cloud-alerts.md)
+- [View alerts on your sensor](how-to-view-alerts.md)
+- [Manage alerts from the sensor console](how-to-manage-the-alert-event.md)
+- [Work with alerts on the on-premises management console](how-to-work-with-alerts-on-premises-management-console.md)
+
+### Sensor connections restored after certificate rotation
+
+Starting in version 22.2.3, after rotating your certificates, your sensor connections are automatically restored to your central manager, and you don't need to reconnect them manually.
+
+For more information, see [About certificates](how-to-deploy-certificates.md).
+
+### Support diagnostic log enhancements (Public preview)
+
+Starting in sensor version [22.1.1](#new-support-diagnostics-log), you've been able to download a diagnostic log from the sensor console to send to support when you open a ticket.
+
+Now, for locally managed sensors, you can upload that diagnostic log directly in the Azure portal.
++
+> [!TIP]
+> For cloud-connected sensors, starting from sensor version [22.1.3](#march-2022), the diagnostic log is automatically available to support when you open the ticket.
+>
+For more information, see:
+
+- [Download a diagnostics log for support](how-to-manage-individual-sensors.md#download-a-diagnostics-log-for-support)
+- [Upload a diagnostics log for support](how-to-manage-sensors-on-the-cloud.md#upload-a-diagnostics-log-for-support-public-preview)
+
+### Improved security for uploading protocol plugins
+
+This version of the sensor provides improved security for uploading proprietary plugins you've created using the Horizon SDK.
++
+For more information, see [Manage proprietary protocols with Horizon plugins](resources-manage-proprietary-protocols.md).
+
+### Sensor names shown in browser tabs
+
+Starting in sensor version 22.2.3, your sensor's name is displayed in the browser tab, making it easier for you to identify the sensors you're working with.
+
+For example:
++
+For more information, see [Manage individual sensors](how-to-manage-individual-sensors.md).
+
+### Microsoft Sentinel incident synch with Defender for IoT alerts
+
+The **IoT OT Threat Monitoring with Defender for IoT** solution now ensures that alerts in Defender for IoT are updated with any related incident **Status** changes from Microsoft Sentinel.
+
+This synchronization overrides any status defined in Defender for IoT, in the Azure portal or the sensor console, so that the alert status matches that of the related incident.
+
+Update your **IoT OT Threat Monitoring with Defender for IoT** solution to use the latest synchronization support, including the new **AD4IoT-AutoAlertStatusSync** playbook. After updating the solution, make sure that you also take the [required steps](../../sentinel/iot-advanced-threat-monitoring.md?#update-alert-statuses-in-defender-for-iot) to ensure that the new playbook works as expected.
+
+For more information, see:
+
+- [Tutorial: Integrate Defender for IoT and Sentinel](../../sentinel/iot-solution.md?tabs=use-out-of-the-box-analytics-rules-recommended)
+- [View and manage alerts on the Defender for IoT portal (Preview)](how-to-manage-cloud-alerts.md)
+- [View alerts on your sensor](how-to-view-alerts.md)
+
+## June 2022
+
+- **Sensor software version 22.1.6**: Minor version with maintenance updates for internal sensor components
+
+- **Sensor software version 22.1.5**: Minor version to improve TI installation packages and software updates
+
+We've also recently optimized and enhanced our documentation as follows:
+
+- [Updated appliance catalog for OT environments](#updated-appliance-catalog-for-ot-environments)
+- [Documentation reorganization for end-user organizations](#documentation-reorganization-for-end-user-organizations)
++
+### Updated appliance catalog for OT environments
+
+We've refreshed and revamped the catalog of supported appliances for monitoring OT environments. These appliances support flexible deployment options for environments of all sizes and can be used to host both the OT monitoring sensor and on-premises management consoles.
+
+Use the new pages as follows:
+
+1. **Understand which hardware model best fits your organization's needs.** For more information, see [Which appliances do I need?](ot-appliance-sizing.md)
+
+1. **Learn about the preconfigured hardware appliances that are available to purchase, or system requirements for virtual machines.** For more information, see [Pre-configured physical appliances for OT monitoring](ot-pre-configured-appliances.md) and [OT monitoring with virtual appliances](ot-virtual-appliances.md).
+
+ For more information about each appliance type, use the linked reference page, or browse through our new **Reference > OT monitoring appliances** section.
+
+ :::image type="content" source="media/release-notes/appliance-catalog.png" alt-text="Screenshot of the new appliance catalog reference section." lightbox="media/release-notes/appliance-catalog.png":::
+
+ Reference articles for each appliance type, including virtual appliances, include specific steps to configure the appliance for OT monitoring with Defender for IoT. Generic software installation and troubleshooting procedures are still documented in [Defender for IoT software installation](how-to-install-software.md).
+
+### Documentation reorganization for end-user organizations
+
+We recently reorganized our Defender for IoT documentation for end-user organizations, highlighting a clearer path for onboarding and getting started.
+
+Check out our new structure to follow through viewing devices and assets, managing alerts, vulnerabilities and threats, integrating with other services, and deploying and maintaining your Defender for IoT system.
+
+**New and updated articles include**:
+
+- [Welcome to Microsoft Defender for IoT for organizations](overview.md)
+- [Microsoft Defender for IoT architecture](architecture.md)
+- [Quickstart: Get started with Defender for IoT](getting-started.md)
+- [Tutorial: Microsoft Defender for IoT trial setup](tutorial-onboarding.md)
+- [Tutorial: Get started with Enterprise IoT](tutorial-getting-started-eiot-sensor.md)
+- [Plan your sensor connections for OT monitoring](best-practices/plan-network-monitoring.md)
+- [About Microsoft Defender for IoT network setup](how-to-set-up-your-network.md)
+
+> [!NOTE]
+> To send feedback on docs via GitHub, scroll to the bottom of the page and select the **Feedback** option for **This page**. We'd be glad to hear from you!
+>
++
+## April 2022
+
+- [Extended device property data in the Device inventory](#extended-device-property-data-in-the-device-inventory)
+
+### Extended device property data in the Device inventory
+
+**Sensor software version**: 22.1.4
+
+Starting with sensors updated to version 22.1.4, the **Device inventory** page on the Azure portal shows extended data for the following fields:
+
+- **Description**
+- **Tags**
+- **Protocols**
+- **Scanner**
+- **Last Activity**
+
+For more information, see [Manage your device inventory from the Azure portal](how-to-manage-device-inventory-for-organizations.md).
+
+## March 2022
+
+**Sensor version**: 22.1.3
+
+- [Use Azure Monitor workbooks with Microsoft Defender for IoT](#use-azure-monitor-workbooks-with-microsoft-defender-for-iot-public-preview)
+- [IoT OT Threat Monitoring with Defender for IoT solution GA](#iot-ot-threat-monitoring-with-defender-for-iot-solution-ga)
+- [Edit and delete devices from the Azure portal](#edit-and-delete-devices-from-the-azure-portal-public-preview)
+- [Key state alert updates](#key-state-alert-updates-public-preview)
+- [Sign out of a CLI session](#sign-out-of-a-cli-session)
++
+### Use Azure Monitor workbooks with Microsoft Defender for IoT (Public preview)
+
+[Azure Monitor workbooks](../../azure-monitor/visualize/workbooks-overview.md) provide graphs and dashboards that visually reflect your data, and are now available directly in Microsoft Defender for IoT with data from [Azure Resource Graph](../../governance/resource-graph/index.yml).
+
+In the Azure portal, use the new Defender for IoT **Workbooks** page to view workbooks created by Microsoft and provided out-of-the-box, or create custom workbooks of your own.
++
+For more information, see [Use Azure Monitor workbooks in Microsoft Defender for IoT](workbooks.md).
+
+### IoT OT Threat Monitoring with Defender for IoT solution GA
+
+The IoT OT Threat Monitoring with Defender for IoT solution in Microsoft Sentinel is now GA. In the Azure portal, use this solution to help secure your entire OT environment, whether you need to protect existing OT devices or build security into new OT innovations.
+
+For more information, see [OT threat monitoring in enterprise SOCs](concept-sentinel-integration.md) and [Tutorial: Integrate Defender for IoT and Sentinel](../../sentinel/iot-solution.md?tabs=use-out-of-the-box-analytics-rules-recommended).
+
+### Edit and delete devices from the Azure portal (Public preview)
+
+The **Device inventory** page in the Azure portal now supports the ability to edit device details, such as security, classification, location, and more:
++
+For more information, see [Edit device details](how-to-manage-device-inventory-for-organizations.md#edit-device-details).
+
+You can only delete devices from Defender for IoT if they've been inactive for more than 14 days. For more information, see [Delete a device](how-to-manage-device-inventory-for-organizations.md#delete-a-device).
+
+### Key state alert updates (Public preview)
+
+Defender for IoT now supports the Rockwell protocol for PLC operating mode detections.
+
+For the Rockwell protocol, the **Device inventory** pages in both the Azure portal and the sensor console now indicate the PLC operating mode key and run state, and whether the device is currently in a secure mode.
+
+If the device's PLC operating mode is ever switched to an unsecured mode, such as *Program* or *Remote*, a **PLC Operating Mode Changed** alert is generated.
+
+For more information, see [Manage your IoT devices with the device inventory for organizations](how-to-manage-device-inventory-for-organizations.md).
+
+### Sign out of a CLI session
+
+Starting in this version, CLI users are automatically signed out of their session after 300 seconds of inactivity. To sign out manually, use the new `logout` CLI command.
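+
+For example:
+
+```bash
+# End the current CLI session immediately instead of waiting for the 300-second inactivity timeout
+logout
+```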
+
+For more information, see [Work with Defender for IoT CLI commands](references-work-with-defender-for-iot-cli-commands.md).
++
+## February 2022
+
+**Sensor software version**: 22.1.1
+
+- [New sensor installation wizard](#new-sensor-installation-wizard)
+- [Sensor redesign and unified Microsoft product experience](#sensor-redesign-and-unified-microsoft-product-experience)
+- [Enhanced sensor Overview page](#enhanced-sensor-overview-page)
+- [New support diagnostics log](#new-support-diagnostics-log)
+- [Alert updates](#alert-updates)
+- [Custom alert updates](#custom-alert-updates)
+- [CLI command updates](#cli-command-updates)
+- [Update to version 22.1.x](#update-to-version-221x)
+- [New connectivity model and firewall requirements](#new-connectivity-model-and-firewall-requirements)
+- [Protocol improvements](#protocol-improvements)
+- [Modified, replaced, or removed options and configurations](#modified-replaced-or-removed-options-and-configurations)
+
+### New sensor installation wizard
+
+Previously, you needed to use separate dialogs to upload a sensor activation file, verify your sensor network configuration, and configure your SSL/TLS certificates.
+
+Now, when installing a new sensor or a new sensor version, our installation wizard provides a streamlined interface to do all these tasks from a single location.
+
+For more information, see [Defender for IoT installation](how-to-install-software.md).
+
+### Sensor redesign and unified Microsoft product experience
+
+The Defender for IoT sensor console has been redesigned to create a unified Microsoft Azure experience and enhance and simplify workflows.
+
+These features are now Generally Available (GA). Updates include the general look and feel, drill-down panes, search and action options, and more. For example:
+
+**Simplified workflows include**:
+
+- The **Device inventory** page now includes detailed device pages. Select a device in the table and then select **View full details** on the right.
+
+ :::image type="content" source="media/release-notes/device-inventory-details.png" alt-text="Screenshot of the View full details button." lightbox="media/release-notes/device-inventory-details.png":::
+
+- Properties updated from the sensor's inventory are now automatically updated in the cloud device inventory.
+
+- The device details pages, accessed either from the **Device map** or **Device inventory** pages, are shown as read-only. To modify device properties, select **Edit properties** on the bottom-left.
+
+- The **Data mining** page now includes reporting functionality. While the **Reports** page was removed, users with read-only access can view updates on the **Data mining** page without the ability to modify reports or settings.
+
+ For admin users creating new reports, you can now toggle on a **Send to CM** option to send the report to a central management console as well. For more information, see [Create a report](how-to-create-data-mining-queries.md#create-a-report).
+
+- The **System settings** area has been reorganized into sections for *Basic* settings, settings for *Network monitoring*, *Sensor management*, *Integrations*, and *Import settings*.
+
+- The sensor online help now links to key articles in the Microsoft Defender for IoT documentation.
+
+**Defender for IoT maps now include**:
+
+- A new **Map View** is now shown for alerts and on the device details pages, showing where in your environment the alert or device is found.
+
+- Right-click a device on the map to view contextual information about the device, including related alerts, event timeline data, and connected devices.
+
+- To collapse IT networks, ensure that the **Toggle IT Networks Grouping** option is enabled. This option is now only available from the map.
+
+- The **Simplified Map View** option has been removed.
+
+We've also implemented global readiness and accessibility features to comply with Microsoft standards. In the on-premises sensor console, these updates include both high contrast and regular screen display themes and localization for over 15 languages.
+
+For example:
++
+Access global readiness and accessibility options from the **Settings** icon at the top-right corner of your screen:
++
+### Enhanced sensor Overview page
+
+The Defender for IoT sensor portal's **Dashboard** page has been renamed as **Overview**, and now includes data that better highlights system deployment details, critical network monitoring health, top alerts, and important trends and statistics.
++
+The Overview page also now serves as a *black box* to view your overall sensor status in case your outbound connections, such as to the Azure portal, go down.
+
+Create more dashboards using the **Trends & Statistics** page, located under the **Analyze** menu on the left.
+
+### New support diagnostics log
+
+Now you can get a summary of the log and system information that gets added to your support tickets. In the **Backup and Restore** dialog, select **Support Ticket Diagnostics**.
++
+For more information, see [Download a diagnostics log for support](how-to-manage-individual-sensors.md#download-a-diagnostics-log-for-support).
+
+### Alert updates
+
+**In the Azure portal**:
+
+Alerts are now available in Defender for IoT in the Azure portal. Work with alerts to enhance the security and operation of your IoT/OT network.
+
+The new **Alerts** page is currently in Public Preview, and provides:
+
+- An aggregated, real-time view of threats detected by network sensors.
+- Remediation steps for devices and network processes.
+- Alert streaming to Microsoft Sentinel to empower your SOC team.
+- Alert storage for 90 days from the time they're first detected.
+- Tools to investigate source and destination activity, alert severity and status, MITRE ATT&CK information, and contextual information about the alert.
+
+For example:
++
+**On the sensor console**:
+
+On the sensor console, the **Alerts** page now shows details for alerts detected by sensors that are configured with a cloud-connection to Defender for IoT on Azure. Users working with alerts in both Azure and on-premises should understand how alerts are managed between the Azure portal and the on-premises components.
++
+Other alert updates include:
+
+- **Access contextual data** for each alert, such as events that occurred around the same time, or a map of connected devices. Maps of connected devices are available for sensor console alerts only.
+
+- **Alert statuses** are updated and now include, for example, a *Closed* status instead of *Acknowledged*.
+
+- **Alert storage** for 90 days from the time that they're first detected.
+
+- The **Backup Activity with Antivirus Signatures Alert**. This new alert warning is triggered for traffic detected between a source device and a destination backup server, which is often legitimate backup activity. Critical or major malware alerts are no longer triggered for such activity.
+
+- **During upgrades**, sensor console alerts that are currently archived are deleted. Pinned alerts are no longer supported, so pins are removed for sensor console alerts as relevant.
+
+For more information, see [View alerts on your sensor](how-to-view-alerts.md).
+
+### Custom alert updates
+
+The sensor console's **Custom alert rules** page now provides:
+
+- Hit count information in the **Custom alert rules** table, with at-a-glance details about the number of alerts triggered in the last week for each rule you've created.
+
+- The ability to schedule custom alert rules to run outside of regular working hours.
+
+- The ability to alert on any field that can be extracted from a protocol using the DPI engine.
+
+- Complete protocol support when creating custom rules, and support for an extensive range of related protocol variables.
+
+   :::image type="content" source="media/how-to-manage-sensors-on-the-cloud/protocol-support-custom-alerts.png" alt-text="Screenshot of the updated Custom alerts dialog." lightbox="media/how-to-manage-sensors-on-the-cloud/protocol-support-custom-alerts.png":::
+
+For more information and the updated custom alert procedure, see [Customize alert rules](how-to-accelerate-alert-incident-response.md#customize-alert-rules).
+
+### CLI command updates
+
+The Defender for IoT sensor software installation is now containerized. With the now-containerized sensor, you can use the *cyberx_host* user to investigate issues with other containers or the operating system, or to send files via FTP.
+
+This *cyberx_host* user is available by default and connects to the host machine. If you need to, recover the password for the *cyberx_host* user from the **Sites and sensors** page in Defender for IoT.
+
+As part of the containerized sensor, the following CLI commands have been modified:
+
+|Legacy name |Replacement |
+|||
+|`cyberx-xsense-reconfigure-interfaces` |`sudo dpkg-reconfigure iot-sensor` |
+|`cyberx-xsense-reload-interfaces` | `sudo dpkg-reconfigure iot-sensor` |
+|`cyberx-xsense-reconfigure-hostname` | `sudo dpkg-reconfigure iot-sensor` |
+| `cyberx-xsense-system-remount-disks` |`sudo dpkg-reconfigure iot-sensor` |
+
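+All of these legacy commands map to the single interactive reconfiguration command shown in the table. For example, to reconfigure the sensor's network interfaces or hostname:
+
+```bash
+# Launch the interactive sensor reconfiguration wizard (replaces the legacy cyberx-xsense-* commands)
+sudo dpkg-reconfigure iot-sensor
+```
+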
+The `sudo cyberx-xsense-limit-interface -I eth0 -l value` CLI command was removed. This command was used to limit the interface bandwidth that the sensor uses for day-to-day procedures, and is no longer supported.
+
+For more information, see [Defender for IoT installation](how-to-install-software.md) and [Work with Defender for IoT CLI commands](references-work-with-defender-for-iot-cli-commands.md).
+
+### Update to version 22.1.x
+
+To use all of Defender for IoT's latest features, make sure to update your sensor software versions to 22.1.x.
+
+If you're on a legacy version, you may need to run a series of updates in order to get to the latest version. You'll also need to update your firewall rules and re-activate your sensor with a new activation file.
+
+After you've upgraded to version 22.1.x, the new upgrade log can be found at the following path, accessed via SSH and the *cyberx_host* user: `/opt/sensor/logs/legacy-upgrade.log`.
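+
+For example, a minimal check over SSH; the sensor address is a placeholder for your environment:
+
+```bash
+# Sign in to the sensor host as the cyberx_host user and view the upgrade log
+ssh cyberx_host@<sensor-ip-address>
+cat /opt/sensor/logs/legacy-upgrade.log
+```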
+
+For more information, see [Update OT system software](update-ot-software.md).
+
+> [!NOTE]
+> Upgrading to version 22.1.x is a large update, and you should expect the update process to require more time than previous updates.
+>
+
+### New connectivity model and firewall requirements
+
+Defender for IoT version 22.1.x supports a new set of sensor connection methods that provide simplified deployment, improved security, scalability, and flexible connectivity.
+
+In addition to [migration steps](connect-sensors.md#migration-for-existing-customers), this new connectivity model requires that you open a new firewall rule. For more information, see:
+
+- **New firewall requirements**: [Sensor access to Azure portal](how-to-set-up-your-network.md#sensor-access-to-azure-portal).
+- **Architecture**: [Sensor connection methods](architecture-connections.md)
+- **Connection procedures**: [Connect your sensors to Microsoft Defender for IoT](connect-sensors.md)
+
+### Protocol improvements
+
+This version of Defender for IoT provides improved support for:
+
+- Profinet DCP
+- Honeywell
+- Windows endpoint detection
+
+For more information, see [Microsoft Defender for IoT - supported IoT, OT, ICS, and SCADA protocols](concept-supported-protocols.md).
+
+### Modified, replaced, or removed options and configurations
+
+The following Defender for IoT options and configurations have been moved, removed, and/or replaced:
+
+- Reports previously found on the **Reports** page are now shown on the **Data Mining** page instead. You can also continue to view data mining information directly from the on-premises management console.
+
+- Changing a locally managed sensor name is now supported only by onboarding the sensor to the Azure portal again with the new name. Sensor names can no longer be changed directly from the sensor. For more information, see [Change the name of a sensor](how-to-manage-individual-sensors.md#change-the-name-of-a-sensor).
++
+## December 2021
+
+**Sensor software version**: 10.5.4
+
+- [Enhanced integration with Microsoft Sentinel (Preview)](#enhanced-integration-with-microsoft-sentinel-preview)
+- [Apache Log4j vulnerability](#apache-log4j-vulnerability)
+- [Alerting](#alerting)
+
+### Enhanced integration with Microsoft Sentinel (Preview)
+
+The new **IoT OT Threat Monitoring with Defender for IoT solution** is available and provides enhanced capabilities for Microsoft Defender for IoT integration with Microsoft Sentinel. The **IoT OT Threat Monitoring with Defender for IoT solution** is a set of bundled content, including analytics rules, workbooks, and playbooks, configured specifically for Defender for IoT data. This solution currently supports only Operational Networks (OT/ICS).
+
+For information on integrating with Microsoft Sentinel, see [Tutorial: Integrate Defender for IoT and Sentinel](../../sentinel/iot-solution.md?tabs=use-out-of-the-box-analytics-rules-recommended).
+
+### Apache Log4j vulnerability
+
+Version 10.5.4 of Microsoft Defender for IoT mitigates the Apache Log4j vulnerability. For details, see [the security advisory update](https://techcommunity.microsoft.com/t5/microsoft-defender-for-iot/updated-15-dec-defender-for-iot-security-advisory-apache-log4j/m-p/3036844).
+
+### Alerting
+
+Version 10.5.4 of Microsoft Defender for IoT delivers important alert enhancements:
+
+- Alerts for certain minor events or edge-cases are now disabled.
+- For certain scenarios, similar alerts are minimized in a single alert message.
+
+These changes reduce alert volume and enable more efficient targeting and analysis of security and operational events.
+
+For more information, see [OT monitoring alert types and descriptions](alert-engine-messages.md).
+
+#### Alerts permanently disabled
+
+The alerts listed below are permanently disabled with version 10.5.4. Detection and monitoring are still supported for traffic associated with the alerts.
+
+**Policy engine alerts**
+
+- RPC Procedure Invocations
+- Unauthorized HTTP Server
+- Abnormal usage of MAC Addresses
+
+#### Alerts disabled by default
+
+The alerts listed below are disabled by default with version 10.5.4. You can re-enable the alerts from the Support page of the sensor console, if necessary.
+
+**Anomaly engine alert**
+- Abnormal Number of Parameters in HTTP Header
+- Abnormal HTTP Header Length
+- Illegal HTTP Header Content
+
+**Operational engine alerts**
+- HTTP Client Error
+- RPC Operation Failed
+
+**Policy engine alerts**
+
+Disabling these alerts also disables monitoring of related traffic. Specifically, this traffic won't be reported in Data Mining reports.
+
+- Illegal HTTP Communication alert and HTTP Connections Data Mining traffic
+- Unauthorized HTTP User Agent alert and HTTP User Agents Data Mining traffic
+- Unauthorized HTTP SOAP Action and HTTP SOAP Actions Data Mining traffic
+
+#### Updated alert functionality
+
+**Unauthorized Database Operation alert**
+Previously, this alert covered DDL and DML alerting and Data Mining reporting. Now:
+- DDL traffic: Alerting and monitoring are supported.
+- DML traffic: Monitoring is supported. Alerting isn't supported.
+
+**New Asset Detected alert**
+This alert is disabled for new devices detected in IT subnets. The New Asset Detected alert is still triggered for new devices discovered in OT subnets. OT subnets are detected automatically and can be updated by users if necessary.
+
+### Minimized alerting
+
+Alert triggering for specific scenarios has been minimized to help reduce alert volume and simplify alert investigation. In these scenarios, if a device performs repeated activity on targets, an alert is triggered once. Previously, a new alert was triggered each time the same activity was carried out.
+
+This new functionality is available on the following alerts:
+
+- Port Scan Detected alerts, based on activity of the source device (generated by the Anomaly engine)
+- Malware alerts, based on activity of the source device (generated by the Malware engine)
+- Suspicion of Denial of Service Attack alerts, based on activity of the destination device (generated by the Malware engine)
+
+## Next steps
+
+[Getting started with Defender for IoT](getting-started.md)
dns Dns Private Resolver Get Started Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-private-resolver-get-started-bicep.md
+
+ Title: 'Quickstart: Create an Azure DNS Private Resolver - Bicep'
+
+description: Learn how to create Azure DNS Private Resolver. This article is a step-by-step quickstart to create and manage your first Azure DNS Private Resolver using Bicep.
+Last updated: 10/07/2022
+#Customer intent: As an administrator or developer, I want to learn how to create Azure DNS Private Resolver using Bicep so I can use Azure DNS Private Resolver as a forwarder.
++
+# Quickstart: Create an Azure DNS Private Resolver using Bicep
+
+This quickstart describes how to use Bicep to create an Azure DNS Private Resolver.
++
+## Prerequisites
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+## Review the Bicep file
+
+The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/azure-dns-private-resolver).
+
+This Bicep file is configured to create:
+
+- A virtual network
+- A DNS resolver
+- Inbound and outbound endpoints
+- Forwarding rules and rulesets
++
+Seven resources are defined in this Bicep file:
+
+- [**Microsoft.Network/virtualnetworks**](/azure/templates/microsoft.network/virtualnetworks)
+- [**Microsoft.Network/dnsResolvers**](/azure/templates/microsoft.network/dnsresolvers)
+- [**Microsoft.Network/dnsResolvers/inboundEndpoints**](/azure/templates/microsoft.network/dnsresolvers/inboundendpoints)
+- [**Microsoft.Network/dnsResolvers/outboundEndpoints**](/azure/templates/microsoft.network/dnsresolvers/outboundendpoints)
+- [**Microsoft.Network/dnsForwardingRulesets**](/azure/templates/microsoft.network/dnsforwardingrulesets)
+- [**Microsoft.Network/dnsForwardingRulesets/forwardingRules**](/azure/templates/microsoft.network/dnsforwardingrulesets/forwardingrules)
+- [**Microsoft.Network/dnsForwardingRulesets/virtualNetworkLinks**](/azure/templates/microsoft.network/dnsforwardingrulesets/virtualnetworklinks)
+
+## Deploy the Bicep file
+
+1. Save the Bicep file as **main.bicep** to your local computer.
+2. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
+
+# [CLI](#tab/CLI)
+
+````azurecli
+az group create --name exampleRG --location eastus
+az deployment group create --resource-group exampleRG --template-file main.bicep
+````
+
+# [PowerShell](#tab/PowerShell)
+
+````azurepowershell
+New-AzResourceGroup -Name exampleRG -Location eastus
+New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep
+````
+++
+When the deployment finishes, you should see a message indicating the deployment succeeded.
+
+## Validate the deployment
+
+Use the Azure portal, Azure CLI, or Azure PowerShell to list the deployed resources in the resource group.
+
+# [CLI](#tab/CLI)
+
+```azurecli
+#Show the DNS resolver
+az dns-resolver show --name "sampleDnsResolver" --resource-group "exampleRG"
+
+#List the inbound endpoints
+az dns-resolver inbound-endpoint list --dns-resolver-name "sampleDnsResolver" --resource-group "exampleRG"
+
+#List the outbound endpoints
+az dns-resolver outbound-endpoint list --dns-resolver-name "sampleDnsResolver" --resource-group "exampleRG"
+
+```
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell
+#Show the DNS resolver
+Get-AzDnsResolver -Name "sampleDnsResolver" -ResourceGroupName "exampleRG"
+
+#List the inbound endpoints
+Get-AzDnsResolverInboundEndpoint -DnsResolverName "sampleDnsResolver" -ResourceGroupName "exampleRG"
+
+#List the outbound endpoints
+Get-AzDnsResolverOutboundEndpoint -DnsResolverName "sampleDnsResolver" -ResourceGroupName "exampleRG"
+
+```
++
+## Clean up resources
+
+When no longer needed, use the Azure portal, Azure CLI, or Azure PowerShell to delete the resources in the following order.
+
+### Delete the DNS resolver
+
+# [CLI](#tab/CLI)
+````azurecli
+#Delete the inbound endpoint
+az dns-resolver inbound-endpoint delete --dns-resolver-name "sampleDnsResolver" --name "sampleInboundEndpoint" --resource-group "exampleRG"
+
+#Delete the virtual network link
+az dns-resolver vnet-link delete --ruleset-name "sampleDnsForwardingRuleset" --resource-group "exampleRG" --name "sampleVirtualNetworkLink"
+
+#Delete DNS forwarding ruleset
+az dns-resolver forwarding-ruleset delete --name "sampleDnsForwardingRuleset" --resource-group "exampleRG"
+
+#Delete the outbound endpoint
+az dns-resolver outbound-endpoint delete --dns-resolver-name "sampleDnsResolver" --name "sampleOutboundEndpoint" --resource-group "exampleRG"
+
+#Delete the DNS resolver
+az dns-resolver delete --name "sampleDnsResolver" --resource-group "exampleRG"
+````
+
+# [PowerShell](#tab/PowerShell)
+```azurepowershell
+#Delete the inbound endpoint
+Remove-AzDnsResolverInboundEndpoint -Name "sampleInboundEndpoint" -DnsResolverName "sampleDnsResolver" -ResourceGroupName "exampleRG"
+
+#Delete the virtual network link
+Remove-AzDnsForwardingRulesetVirtualNetworkLink -DnsForwardingRulesetName "sampleDnsForwardingRuleset" -Name "sampleVirtualNetworkLink" -ResourceGroupName "exampleRG"
+
+#Delete the DNS forwarding ruleset
+Remove-AzDnsForwardingRuleset -Name "sampleDnsForwardingRuleset" -ResourceGroupName "exampleRG"
+
+#Delete the outbound endpoint
+Remove-AzDnsResolverOutboundEndpoint -DnsResolverName "sampleDnsResolver" -ResourceGroupName "exampleRG" -Name "sampleOutboundEndpoint"
+
+#Delete the DNS resolver
+Remove-AzDnsResolver -Name "sampleDnsResolver" -ResourceGroupName "exampleRG"
+```
++
+## Next steps
+
+In this quickstart, you created a virtual network and a DNS private resolver. Now configure name resolution for Azure and on-premises domains:
+- [Resolve Azure and on-premises domains](private-resolver-hybrid-dns.md)
dns Dns Private Resolver Get Started Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-private-resolver-get-started-template.md
+
+ Title: 'Quickstart: Create an Azure DNS Private Resolver - Azure Resource Manager template (ARM template)'
+
+description: Learn how to create Azure DNS Private Resolver. This article is a step-by-step quickstart to create and manage your first Azure DNS Private Resolver using Azure Resource Manager template (ARM template).
+Last updated: 10/07/2022
+#Customer intent: As an administrator or developer, I want to learn how to create Azure DNS Private Resolver using an ARM template so I can use Azure DNS Private Resolver as a forwarder.
++
+# Quickstart: Create an Azure DNS Private Resolver using an ARM template
+
+This quickstart describes how to use an Azure Resource Manager template (ARM template) to create an Azure DNS Private Resolver.
++
+If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
+
+[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.network%2Fazure-dns-private-resolver%2Fazuredeploy.json)
+
+## Prerequisites
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+## Review the template
+
+The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/azure-dns-private-resolver).
+
+This template is configured to create:
+
+- A virtual network
+- A DNS resolver
+- Inbound and outbound endpoints
+- Forwarding rules and rulesets
++
+Seven resources have been defined in this template:
+
+- [**Microsoft.Network/virtualnetworks**](/azure/templates/microsoft.network/virtualnetworks)
+- [**Microsoft.Network/dnsResolvers**](/azure/templates/microsoft.network/dnsresolvers)
+- [**Microsoft.Network/dnsResolvers/inboundEndpoints**](/azure/templates/microsoft.network/dnsresolvers/inboundendpoints)
+- [**Microsoft.Network/dnsResolvers/outboundEndpoints**](/azure/templates/microsoft.network/dnsresolvers/outboundendpoints)
+- [**Microsoft.Network/dnsForwardingRulesets**](/azure/templates/microsoft.network/dnsforwardingrulesets)
+- [**Microsoft.Network/dnsForwardingRulesets/forwardingRules**](/azure/templates/microsoft.network/dnsforwardingrulesets/forwardingrules)
+- [**Microsoft.Network/dnsForwardingRulesets/virtualNetworkLinks**](/azure/templates/microsoft.network/dnsforwardingrulesets/virtualnetworklinks)
++
+## Deploy the template
+
+# [CLI](#tab/CLI)
+
+````azurecli-interactive
+read -p "Enter the location: " location
+resourceGroupName="exampleRG"
+templateUri="https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.network/azure-dns-private-resolver/azuredeploy.json"
+
+az group create \
+--name $resourceGroupName \
+--location $location
+
+az deployment group create \
+--resource-group $resourceGroupName \
+--template-uri $templateUri
+````
+
+# [PowerShell](#tab/PowerShell)
+````azurepowershell-interactive
+$location = Read-Host -Prompt "Enter the location: "
+$templateUri = "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.network/azure-dns-private-resolver/azuredeploy.json"
+
+$resourceGroupName = "exampleRG"
+
+New-AzResourceGroup -Name $resourceGroupName -Location $location
+New-AzResourceGroupDeployment -ResourceGroupName $resourceGroupName -TemplateUri $templateUri
+````
++
+## Validate the deployment
+
+1. Sign in to the [Azure portal](https://portal.azure.com)
+
+1. Select **Resource groups** from the left pane.
+
+1. Select the resource group that you created in the previous section.
+
+1. The resource group should contain the following resources:
+
+ [ ![DNS resolver resource group](./media/dns-resolver-getstarted-template/dns-resolver-resource-group.png)](./media/dns-resolver-getstarted-template/dns-resolver-resource-group.png#lightbox)
+
+1. Select the DNS private resolver service to verify the provisioning and current state.
+
+ [ ![DNS resolver page](./media/dns-resolver-getstarted-template/resolver-page.png)](./media/dns-resolver-getstarted-template/resolver-page.png#lightbox)
+
+1. Select the Inbound Endpoints and Outbound Endpoints to verify that the endpoints are created and the outbound endpoint is associated with the forwarding ruleset.
+
+ [ ![DNS resolver inbound endpoint](./media/dns-resolver-getstarted-template/resolver-inbound-endpoint.png)](./media/dns-resolver-getstarted-template/resolver-inbound-endpoint.png#lightbox)
+
+ [ ![DNS resolver outbound endpoint](./media/dns-resolver-getstarted-template/resolver-outbound-endpoint.png)](./media/dns-resolver-getstarted-template/resolver-outbound-endpoint.png#lightbox)
+
+1. Select the **Associated ruleset** from the outbound endpoint page to verify the forwarding ruleset and rules creation.
+
+ [ ![DNS resolver forwarding rule](./media/dns-resolver-getstarted-template/resolver-forwarding-rule.png)](./media/dns-resolver-getstarted-template/resolver-forwarding-rule.png#lightbox)
+
+1. Verify that the resolver virtual network is linked with the forwarding ruleset.
+
+ [ ![DNS resolver VNet link](./media/dns-resolver-getstarted-template/resolver-vnet-link.png)](./media/dns-resolver-getstarted-template/resolver-vnet-link.png#lightbox)
+
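+You can also list the deployed resources from the command line; a minimal check, assuming the `exampleRG` resource group name used during deployment:
+
+```azurecli
+#List the resources deployed to the quickstart resource group
+az resource list --resource-group exampleRG --output table
+```
+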
+## Next steps
+
+In this quickstart, you created a virtual network and a DNS private resolver. Now configure name resolution for Azure and on-premises domains:
+- [Resolve Azure and on-premises domains](private-resolver-hybrid-dns.md)
event-grid Auth0 Log Stream Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/auth0-log-stream-blob-storage.md
This article shows you how to send Auth0 events to Azure Blob Storage via Azure
1. Select the container and verify that your Auth0 logs are being stored. > [!NOTE]
- > You can use steps in the article to handle events from other event sources too. For a generic example of sending Event Grid events to Azure Blob Storage or Azure Monitor's Application Insights, see [this example on GitHub](https://github.com/awkwardindustries/azure-monitor-handler).
+ > You can use steps in the article to handle events from other event sources too. For a generic example of sending Event Grid events to Azure Blob Storage or Azure Monitor Application Insights, see [this example on GitHub](https://github.com/awkwardindustries/azure-monitor-handler).
## Next steps

- [Auth0 Partner Topic](auth0-overview.md)
- [Subscribe to Auth0 events](auth0-how-to.md)
-- [Send Auth0 events to Azure Blob Storage](auth0-log-stream-blob-storage.md)
+- [Send Auth0 events to Azure Blob Storage](auth0-log-stream-blob-storage.md)
hdinsight Apache Spark Analyze Application Insight Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-analyze-application-insight-logs.md
For information on adding storage to an existing cluster, see the [Add additiona
### Data schema
-Application Insights provides [export data model](../../azure-monitor/app/export-data-model.md) information for the telemetry data format exported to blobs. The steps in this document use Spark SQL to work with the data. Spark SQL can automatically generate a schema for the JSON data structure logged by Application Insights.
+Application Insights provides [export data model](../../azure-monitor/app/export-telemetry.md#application-insights-export-data-model) information for the telemetry data format exported to blobs. The steps in this document use Spark SQL to work with the data. Spark SQL can automatically generate a schema for the JSON data structure logged by Application Insights.
## Export telemetry data
healthcare-apis Use Smart On Fhir Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/use-smart-on-fhir-proxy.md
Last updated 06/03/2022
# SMART on FHIR overview
-[SMART on FHIR](https://docs.smarthealthit.org/) is a set of open specifications to integrate partner applications with FHIR servers and electronic medical records systems that have Fast Healthcare Interoperability Resources (FHIR&#174;) interfaces. One of the main purposes of the specifications is to describe how an application should discover authentication endpoints for an FHIR server and start an authentication sequence.
+Substitutable Medical Applications and Reusable Technologies ([SMART on FHIR](https://docs.smarthealthit.org/)) is a healthcare standard through which applications can access clinical information through a data store. It adds a security layer based on open standards, including OAuth2 and OpenID Connect, to FHIR interfaces to enable integration with EHR systems. Using SMART on FHIR provides at least three important benefits:
+- Applications have a known method for obtaining authentication/authorization to a FHIR repository.
+- Users accessing a FHIR repository with SMART on FHIR are restricted to resources associated with the user, rather than having access to all data in the repository.
+- Users have the ability to grant applications access to an even more limited set of their data by using SMART clinical scopes.
-Authentication is based on OAuth2. But because SMART on FHIR uses parameter naming conventions that arenΓÇÖt immediately compatible with Azure Active Directory (Azure AD), the Azure API for FHIR has a built-in Azure AD SMART on FHIR proxy that enables a subset of the SMART on FHIR launch sequences. Specifically, the proxy enables the [EHR launch sequence](https://hl7.org/fhir/smart-app-launch/#ehr-launch-sequence).
+<!-- SMART Implementation Guide v1.0.0 is supported by Azure Health Data Services and Azure API Management (APIM). This is our recommended approach, as it enables Health IT developers to comply with 21st Century Act Criterion §170.315(g)(10) Standardized API for patient and population services.
-Below tutorial describes how to use the proxy to enable SMART on FHIR applications with Azure API for FHIR.
+The sample demonstrates and lists steps that can be referenced to pass ONC G(10) with the Inferno test suite.
-## Tutorial: SMART on FHIR proxy
-**Prerequisites**
+-->
-- An instance of the Azure API for FHIR
-- [.NET Core 2.2](https://dotnet.microsoft.com/download/dotnet-core/2.2)
+One of the main purposes of the specifications is to describe how an application should discover authentication endpoints for a FHIR server and start an authentication sequence. SMART on FHIR uses parameter naming conventions that aren't immediately compatible with Azure Active Directory (Azure AD), so the Azure API for FHIR has a built-in Azure AD SMART on FHIR proxy that enables a subset of the SMART on FHIR launch sequences. Specifically, the proxy enables the [EHR launch sequence](https://hl7.org/fhir/smart-app-launch/#ehr-launch-sequence).
-## Configure Azure AD registrations
+The tutorial below describes the steps to enable SMART on FHIR applications with the FHIR Service.
-SMART on FHIR requires that `Audience` has an identifier URI equal to the URI of the FHIR service. The standard configuration of the Azure API for FHIR uses an `Audience` value of `https://azurehealthcareapis.com`. However, you can also set a value matching the specific URL of your FHIR service (for example `https://MYFHIRAPI.azurehealthcareapis.com`). This is required when working with the SMART on FHIR proxy.
+## Prerequisites
+
+- An instance of the FHIR Service
+- .NET SDK 6.0
+- [Enable cross-origin resource sharing (CORS)](configure-cross-origin-resource-sharing.md)
+- [Register public client application in Azure AD](https://learn.microsoft.com/azure/healthcare-apis/azure-api-for-fhir/register-public-azure-ad-client-app)
+ - After registering the application, make note of the applicationId for client application.
-You'll also need a client application registration. Most SMART on FHIR applications are single-page JavaScript applications. So you should follow the instructions for configuring a [public client application in Azure AD](register-public-azure-ad-client-app.md).
+<!-- Tutorial : To enable SMART on FHIR using APIM, follow below steps
+As a prerequisite, ensure you have access to the Azure subscription of the FHIR service so that you can create resources and add role assignments.
-After you complete these steps, you should have:
+Step 1 : Set up FHIR SMART user role
+Follow the steps listed under section [Manage Users: Assign Users to Role](https://learn.microsoft.com/azure/active-directory/fundamentals/active-directory-users-assign-role-azure-portal). Any user added to the "FHIR SMART User" role will be able to access the FHIR Service if their requests comply with the SMART on FHIR implementation guide, such as requests whose access token includes a fhirUser claim and a clinical scopes claim. The access granted to users in this role is then limited by the resources associated with their fhirUser compartment and the restrictions in the clinical scopes.
-- A FHIR server with the audience set to `https://MYFHIRAPI.azurehealthcareapis.com`, where `MYFHIRAPI` is the name of your Azure API for FHIR instance.
-- A public client application registration. Make a note of the application ID for this client application.
+Step 2 : [Follow the steps](https://github.com/microsoft/fhir-server/tree/feature/smart-onc-g10-sample/samples/smart) for setting up the FHIR server integrated with APIM in production. -->
-### Set admin consent for your app
+Let's go over the individual steps to enable SMART on FHIR.
+## Step 1 : Set admin consent for your client application
To use SMART on FHIR, you must first authenticate and authorize the app. The first time you use SMART on FHIR, you also must get administrative consent to let the app access your FHIR resources.
If you do have administrative privileges, complete the following steps to grant
To add yourself or another user as owner of an app: 1. In the Azure portal, go to Azure Active Directory.
-1. In the left menu, select **App Registration**.
-1. Search for the app registration you created, and then select it.
-1. In the left menu, under **Manage**, select **Owners**.
-1. Select **Add owners**, and then add yourself or the user you want to have admin consent.
-1. Select **Save**.
-
-## Enable the SMART on FHIR proxy
-
-Enable the SMART on FHIR proxy in the **Authentication** settings for your Azure API for FHIR instance by selecting the **SMART on FHIR proxy** check box:
+2. In the left menu, select **App Registration**.
+3. Search for the app registration you created, and then select it.
+4. In the left menu, under **Manage**, select **Owners**.
+5. Select **Add owners**, and then add yourself or the user you want to have admin consent.
+6. Select **Save**.
-![Selections for enabling the SMART on FHIR proxy](media/tutorial-smart-on-fhir/enable-smart-on-fhir-proxy.png)
-
-## Enable CORS
+## Step 2: Enable the SMART on FHIR proxy
-Because most SMART on FHIR applications are single-page JavaScript apps, you need to [enable cross-origin resource sharing (CORS)](configure-cross-origin-resource-sharing.md) for the Azure API for FHIR:
+SMART on FHIR requires that `Audience` has an identifier URI equal to the URI of the FHIR service. The standard configuration of the Azure API for FHIR uses an `Audience` value of `https://azurehealthcareapis.com`. However, you can also set a value matching the specific URL of your FHIR service (for example `https://MYFHIRAPI.azurehealthcareapis.com`). This is required when working with the SMART on FHIR proxy.
-![Selections for enabling CORS](media/tutorial-smart-on-fhir/enable-cors.png)
+To enable the SMART on FHIR proxy in the **Authentication** settings for your Azure API for FHIR instance, select the **SMART on FHIR proxy** check box:
-## Configure the reply URL
+![Selections for enabling the SMART on FHIR proxy](media/tutorial-smart-on-fhir/enable-smart-on-fhir-proxy.png)
The SMART on FHIR proxy acts as an intermediary between the SMART on FHIR app and Azure AD. The authentication reply (the authentication code) must go to the SMART on FHIR proxy instead of the app itself. The proxy then forwards the reply to the app.
Add the reply URL to the public client application that you created earlier for
![Reply URL configured for the public client](media/tutorial-smart-on-fhir/configure-reply-url.png)
-## Get a test patient
+## Step 3: Get a test patient
To test the Azure API for FHIR and the SMART on FHIR proxy, you'll need to have at least one patient in the database. If you've not interacted with the API yet, and you don't have data in the database, see [Access the FHIR service using Postman](./../fhir/use-postman.md) to load a patient. Make a note of the ID of a specific patient.
-## Download the SMART on FHIR app launcher
+## Step 4: Download the SMART on FHIR app launcher
The open-source [FHIR Server for Azure repository](https://github.com/Microsoft/fhir-server) includes a simple SMART on FHIR app launcher and a sample SMART on FHIR app. In this tutorial, use this SMART on FHIR launcher locally to test the setup.
Use this command to run the application:
dotnet run ```
-## Test the SMART on FHIR proxy
+## Step 5: Test the SMART on FHIR proxy
After you start the SMART on FHIR app launcher, you can point your browser to `https://localhost:5001`, where you should see the following screen:
healthcare-apis Smart On Fhir https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/smart-on-fhir.md
Last updated 11/10/2022
# SMART on FHIR
-Substitutable Medical Applications and Reusable Technologies [SMART on FHIR](https://docs.smarthealthit.org/) is a healthcare standard through which applications can access clinical information through a data store. It adds a security layer based on open standards including OAuth2 and OpenID Connect, to FHIR interfaces to enable integration with EHR systems. Using SMART on FHIR provides at least three important benefits:
-- Applications have a known method for obtaining authentication/authorization to a FHIR repository
-- Users accessing a FHIR repository with SMART on FHIR are restricted to resources associated with the user, rather than having access to all data in the repository
-- Users have the ability to grant applications access to a further limited set of their data by using SMART clinical scopes.
+Substitutable Medical Applications and Reusable Technologies ([SMART on FHIR](https://docs.smarthealthit.org/)) is a healthcare standard through which applications can access clinical information through a data store. It adds a security layer based on open standards, including OAuth2 and OpenID Connect, to FHIR interfaces to enable integration with EHR systems. Using SMART on FHIR provides at least three important benefits:
+- Applications have a known method for obtaining authentication/authorization to a FHIR repository.
+- Users accessing a FHIR repository with SMART on FHIR are restricted to resources associated with the user, rather than having access to all data in the repository.
+- Users have the ability to grant applications access to a limited set of their data by using SMART clinical scopes.
<!-- SMART Implementation Guide v1.0.0 is supported by Azure Health Data Services and Azure API Management (APIM). This is our recommended approach, as it enables Health IT developers to comply with 21st Century Act Criterion §170.315(g)(10) Standardized API for patient and population services. The sample demonstrates and lists steps that can be referenced to pass ONC G(10) with the Inferno test suite. -->-
-One of the main purposes of the specifications is to describe how an application should discover authentication endpoints for an FHIR server and start an authentication sequence. SMART on FHIR uses parameter naming conventions that aren't immediately compatible with Azure Active Directory (Azure AD), the Azure Health Data Services (FHIR Service) has a built-in Azure AD SMART on FHIR proxy that enables a subset of the SMART on FHIR launch sequences. Specifically, the proxy enables the [EHR launch sequence](https://hl7.org/fhir/smart-app-launch/#ehr-launch-sequence).
+One of the main purposes of the specification is to describe how an application should discover authentication endpoints for a FHIR server and start an authentication sequence. SMART on FHIR uses parameter naming conventions that aren't immediately compatible with Azure Active Directory (Azure AD). Azure Health Data Services (FHIR service) has a built-in Azure AD SMART on FHIR proxy that enables a subset of the SMART on FHIR launch sequences. Specifically, the proxy enables the [EHR launch sequence](https://hl7.org/fhir/smart-app-launch/#ehr-launch-sequence).
The following tutorial describes the steps to enable SMART on FHIR applications with the FHIR service.
Below tutorial describes steps to enable SMART on FHIR applications with FHIR Se
- An instance of the FHIR Service - .NET SDK 6.0 - [Enable cross-origin resource sharing (CORS)](configure-cross-origin-resource-sharing.md)-- [Register public client application in Azure AD]([https://learn.microsoft.com/azure/healthcare-apis/azure-api-for-fhir/register-public-azure-ad-client-app]
+- [Register public client application in Azure AD](https://learn.microsoft.com/azure/healthcare-apis/azure-api-for-fhir/register-public-azure-ad-client-app)
- After registering the application, make a note of the applicationId for the client application. <!-- Tutorial: To enable SMART on FHIR using APIM, follow the steps below
+As a prerequisite, ensure you have access to the Azure subscription of the FHIR service so that you can create resources and add role assignments.
+ Step 1: Set up the FHIR SMART user role. Follow the steps listed under the section [Manage Users: Assign Users to Role](https://learn.microsoft.com/azure/active-directory/fundamentals/active-directory-users-assign-role-azure-portal). Any user added to this role will be able to access the FHIR service if their requests comply with the SMART on FHIR Implementation Guide, such as a request having an access token that includes a fhirUser claim and a clinical scopes claim. The access granted to the users in this role will then be limited by the resources associated with their fhirUser compartment and the restrictions in the clinical scopes.
-Step 2 : Deploy the necessary components to set up the FHIR server integrated with APIM in production. Follow ReadMe
-Step 3 : Load US Core profiles
-Step 4 : Create Azure AD custom policy using this README >
+Step 2: [Follow the steps](https://github.com/microsoft/fhir-server/tree/feature/smart-onc-g10-sample/samples/smart) for setting up the FHIR server integrated with APIM in production. -->
Let's go over the individual steps to enable SMART on FHIR. ## Step 1: Set admin consent for your client application
To add yourself or another user as owner of an app:
5. Select **Add owners**, and then add yourself or the user you want to have admin consent. 6. Select **Save**.
+## Step 2: Enable the SMART on FHIR proxy
-## Step 2 : Configure Azure AD registrations
-
-SMART on FHIR requires that `Audience` has an identifier URI equal to the URI of the FHIR service. The standard configuration of the FHIR service uses an `Audience` value of `https://azurehealthcareapis.com`. However, you can also set a value matching the specific URL of your FHIR service (for example `https://MYFHIRAPI.fhir.azurehealthcareapis.com`). This is required when working with the SMART on FHIR proxy.
-## Step 3: Enable the SMART on FHIR proxy
+SMART on FHIR requires that `Audience` has an identifier URI equal to the URI of the FHIR service. The standard configuration of the FHIR service uses an `Audience` value of `https://fhir.azurehealthcareapis.com`. However, you can also set a value matching the specific URL of your FHIR service (for example `https://MYFHIRAPI.fhir.azurehealthcareapis.com`). This is required when working with the SMART on FHIR proxy.
-Enable the SMART on FHIR proxy in the **Authentication** settings for your FHIR instance by selecting the **SMART on FHIR proxy** check box.
+To enable the SMART on FHIR proxy in the **Authentication** settings for your FHIR instance, select the **SMART on FHIR proxy** check box.
The SMART on FHIR proxy acts as an intermediary between the SMART on FHIR app and Azure AD. The authentication reply (the authentication code) must go to the SMART on FHIR proxy instead of the app itself. The proxy then forwards the reply to the app.
You can generate the combined reply URL by using a script like this:
```PowerShell
$replyUrl = "https://localhost:5001/sampleapp/index.html"
-$fhirServerUrl = "https://MYFHIRAPI.azurewebsites.net"
+$fhirServerUrl = "https://MYFHIRAPI.fhir.azurehealthcareapis.com"
$bytes = [System.Text.Encoding]::UTF8.GetBytes($replyUrl)
$encodedText = [Convert]::ToBase64String($bytes)
$encodedText = $encodedText.TrimEnd('=');
Add the reply URL to the public client application that you created earlier for
<!![Reply URL configured for the public client](media/tutorial-smart-on-fhir/configure-reply-url.png)>
-## Step 4 : Get a test patient
+## Step 3: Get a test patient
To test the FHIR service and the SMART on FHIR proxy, you'll need to have at least one patient in the database. If you've not interacted with the API yet, and you don't have data in the database, see [Access the FHIR service using Postman](./../fhir/use-postman.md) to load a patient. Make a note of the ID of a specific patient.
-## Step 5 : Download the SMART on FHIR app launcher
+## Step 4: Download the SMART on FHIR app launcher
The open-source [FHIR Server for Azure repository](https://github.com/Microsoft/fhir-server) includes a simple SMART on FHIR app launcher and a sample SMART on FHIR app. In this tutorial, use this SMART on FHIR launcher locally to test the setup.
Use this command to run the application:
```
dotnet run
```
-## Step 6 : Test the SMART on FHIR proxy
+## Step 5: Test the SMART on FHIR proxy
After you start the SMART on FHIR app launcher, you can point your browser to `https://localhost:5001`, where you should see the following screen:
healthcare-apis Deploy 02 New Button https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-02-new-button.md
Previously updated : 11/18/2022 Last updated : 11/22/2022 # Quickstart: Deploy MedTech service with an Azure Resource Manager template
-In this article, you'll learn how to deploy MedTech service in the Azure portal using an Azure Resource Manager (ARM) template. This ARM template will be used with the **Deploy to Azure** button to make it easy to provide the information you need to automatically create the infrastructure and configuration of your deployment. For more information about Azure Resource Manager (ARM) templates, see [What are ARM templates?](../../azure-resource-manager/templates/overview.md).
+In this article, you'll learn how to deploy MedTech service in the Azure portal using an Azure Resource Manager (ARM) template. This ARM template will be used with the **Deploy to Azure** button to make it easy to provide the information you need to automatically create the infrastructure and configuration of your deployment.
-The ARM template used in this article is available from the [Azure Quickstart Templates](/samples/azure/azure-quickstart-templates/iotconnectors-with-iothub/) site using the **azuredeploy.json** file located on [GitHub](https://github.com/azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.healthcareapis/workspaces/iotconnectors/).
+For more information about ARM templates, see [What are ARM templates?](../../azure-resource-manager/templates/overview.md).
+
+The ARM template used in this article is available from the [Azure Quickstart Templates](/samples/azure/azure-quickstart-templates/iotconnectors/) site using the **azuredeploy.json** file located on [GitHub](https://github.com/azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.healthcareapis/workspaces/iotconnectors/).
If you need to see a diagram with information on the MedTech service deployment, there's an architecture overview at [Choose a deployment method](deploy-iot-connector-in-azure.md#deployment-architecture-overview). This diagram shows the data flow steps of deployment and how MedTech service processes data into a Fast Healthcare Interoperability Resources (FHIR&#174;) Observation.
healthcare-apis Deploy 08 New Ps Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-08-new-ps-cli.md
Previously updated : 11/18/2022 Last updated : 11/22/2022 # Quickstart: Using Azure PowerShell and Azure CLI to deploy the MedTech service with Azure Resource Manager templates
-In this article, you'll learn how to use Azure PowerShell and Azure CLI to deploy the MedTech service using an Azure Resource Manager (ARM) template. When you call the template from PowerShell or CLI, it provides automation that enables you to distribute your deployment to large numbers of developers. Using PowerShell or CLI allows for modifiable automation capabilities that will speed up your deployment configuration in enterprise environments. For more information about ARM templates, see [What are ARM templates?](./../../azure-resource-manager/templates/overview.md).
+In this article, you'll learn how to use Azure PowerShell and Azure CLI to deploy the MedTech service using an Azure Resource Manager (ARM) template. When you call the template from PowerShell or CLI, it provides automation that enables you to distribute your deployment to large numbers of developers. Using PowerShell or CLI allows for modifiable automation capabilities that will speed up your deployment configuration in enterprise environments.
-The ARM template used in this article is available from the [Azure Quickstart Templates](/samples/azure/azure-quickstart-templates/iotconnectors-with-iothub/) site using the **azuredeploy.json** file located on [GitHub](https://github.com/azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.healthcareapis/workspaces/iotconnectors/).
+For more information about ARM templates, see [What are ARM templates?](./../../azure-resource-manager/templates/overview.md).
-## Resources provided by the ARM template
+The ARM template used in this article is available from the [Azure Quickstart Templates](/samples/azure/azure-quickstart-templates/iotconnectors/) site using the **azuredeploy.json** file located on [GitHub](https://github.com/azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.healthcareapis/workspaces/iotconnectors/).
+
+## Resources provided by the Azure Resource Manager template
The ARM template will help you automatically configure and deploy the following resources. Each one can be modified to meet your deployment requirements.
Before you can begin, you need to have the following prerequisites if you're usi
- Use [Azure CLI](/cli/azure/install-azure-cli).
-## Deploy MedTech service with the ARM template and Azure PowerShell
+## Deploy MedTech service with the Azure Resource Manager template and Azure PowerShell
Complete the following five steps to deploy the MedTech service using Azure PowerShell:
Complete the following five steps to deploy the MedTech service using Azure Powe
> [!NOTE] > If you want to run the Azure PowerShell commands locally, first enter `Connect-AzAccount` into your PowerShell command-line shell and enter your Azure credentials.
-## Deploy MedTech service with the ARM template and Azure CLI
+## Deploy MedTech service with the Azure Resource Manager template and Azure CLI
Complete the following five steps to deploy the MedTech service using Azure CLI:
iot-central Concepts Iot Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-iot-edge.md
Downstream devices connect to a module in the gateway that provides IoT Central
The IoT Edge device is provisioned in IoT Central along with the downstream devices connected to the IoT Edge device. Currently, IoT Central doesn't have runtime support for a gateway to provide an identity and to provision downstream devices. If you bring your own identity translation module, IoT Central can support this pattern.
-The [Azure IoT Central gateway module for Azure Video Analyzer](https://github.com/iot-for-all/iotc-ava-gateway/blob/main/README.md) on GitHub uses this pattern.
- ### Downstream device relationships with a gateway and modules If the downstream devices connect to an IoT Edge gateway device through the *IoT Edge hub* module, the IoT Edge device is a transparent gateway:
iot-central Howto Manage Dashboards With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-dashboards-with-rest-api.md
The IoT Central REST API lets you:
Use the following request to create a dashboard. ```http
-PUT https://{your app subdomain}.azureiotcentral.com/api/dashboards/{dashboardId}?api-version=2022-06-30-preview
+PUT https://{your app subdomain}.azureiotcentral.com/api/dashboards/{dashboardId}?api-version=2022-10-31-preview
``` `dashboardId` - A unique [DTMI](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md#digital-twin-model-identifier) identifier for the dashboard.
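By way of illustration, a minimal C# sketch of issuing this request might look like the following. The API token, application subdomain, dashboard DTMI, and the request body fields (`displayName`, `tiles`) are placeholder assumptions for this sketch, not the full dashboard schema:
```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;

using var client = new HttpClient();

// Hypothetical IoT Central API token; create one in your application
// (for example, under Permissions > API tokens).
client.DefaultRequestHeaders.Authorization =
    AuthenticationHeaderValue.Parse("SharedAccessSignature sr=...");

// Minimal illustrative body; a real dashboard definition can also
// carry tile layout and group properties.
const string body = @"{ ""displayName"": ""My dashboard"", ""tiles"": [] }";

var response = await client.PutAsync(
    "https://myapp.azureiotcentral.com/api/dashboards/" +
    "dtmi:example:dashboard;1?api-version=2022-10-31-preview",
    new StringContent(body, Encoding.UTF8, "application/json"));

Console.WriteLine($"{(int)response.StatusCode} {await response.Content.ReadAsStringAsync()}");
```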
The response to this request looks like the following example:
Use the following request to retrieve the details of a dashboard by using a dashboard ID. ```http
-GET https://{your app subdomain}.azureiotcentral.com/api/dashboards/{dashboardId}?api-version=2022-06-30-preview
+GET https://{your app subdomain}.azureiotcentral.com/api/dashboards/{dashboardId}?api-version=2022-10-31-preview
``` The response to this request looks like the following example:
The response to this request looks like the following example:
## Update a dashboard ```http
-PATCH https://{your app subdomain}.azureiotcentral.com/api/dashboards/{dashboardId}?api-version=2022-06-30-preview
+PATCH https://{your app subdomain}.azureiotcentral.com/api/dashboards/{dashboardId}?api-version=2022-10-31-preview
``` The following example shows a request body that updates the display name of a dashboard and size of the tile:
The response to this request looks like the following example:
Use the following request to delete a dashboard by using the dashboard ID: ```http
-DELETE https://{your app subdomain}.azureiotcentral.com/api/dashboards/{dashboardId}?api-version=2022-06-30-preview
+DELETE https://{your app subdomain}.azureiotcentral.com/api/dashboards/{dashboardId}?api-version=2022-10-31-preview
``` ## List dashboards
DELETE https://{your app subdomain}.azureiotcentral.com/api/dashboards/{dashboar
Use the following request to retrieve a list of dashboards from your application: ```http
-GET https://{your app subdomain}.azureiotcentral.com/api/dashboards?api-version=2022-06-30-preview
+GET https://{your app subdomain}.azureiotcentral.com/api/dashboards?api-version=2022-10-31-preview
``` The response to this request looks like the following example:
iot-central Howto Manage Data Export With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-data-export-with-rest-api.md
Each data export definition can send data to one or more destinations. Create th
Use the following request to create or update a destination definition: ```http
-PUT https://{subdomain}.{baseDomain}/api/dataExport/destinations/{destinationId}?api-version=2022-06-30-preview
+PUT https://{subdomain}.{baseDomain}/api/dataExport/destinations/{destinationId}?api-version=2022-10-31-preview
``` * destinationId - Unique ID for the destination.
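As a hedged sketch, the same pattern in C# could create a webhook destination; the token, subdomain, destination ID, and target URL are placeholders, and the body shows only a minimal subset of destination properties:
```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;

using var client = new HttpClient();
// Hypothetical IoT Central API token.
client.DefaultRequestHeaders.Authorization =
    AuthenticationHeaderValue.Parse("SharedAccessSignature sr=...");

// A minimal webhook destination; "webhook@v1" is one of the
// supported destination types.
const string body = @"{
  ""displayName"": ""Sample webhook"",
  ""type"": ""webhook@v1"",
  ""url"": ""https://example.com/ingest""
}";

var response = await client.PutAsync(
    "https://myapp.azureiotcentral.com/api/dataExport/destinations/" +
    "sample-webhook?api-version=2022-10-31-preview",
    new StringContent(body, Encoding.UTF8, "application/json"));

Console.WriteLine(await response.Content.ReadAsStringAsync());
```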
The response to this request looks like the following example:
Use the following request to retrieve details of a destination from your application: ```http
-GET https://{subdomain}.{baseDomain}/api/dataExport/destinations/{destinationId}?api-version=2022-06-30-preview
+GET https://{subdomain}.{baseDomain}/api/dataExport/destinations/{destinationId}?api-version=2022-10-31-preview
``` The response to this request looks like the following example:
The response to this request looks like the following example:
Use the following request to retrieve a list of destinations from your application: ```http
-GET https://{subdomain}.{baseDomain}/api/dataExport/destinations?api-version=2022-06-30-preview
+GET https://{subdomain}.{baseDomain}/api/dataExport/destinations?api-version=2022-10-31-preview
``` The response to this request looks like the following example:
The response to this request looks like the following example:
### Patch a destination ```http
-PATCH https://{subdomain}.{baseDomain}/api/dataExport/destinations/{destinationId}?api-version=2022-06-30-preview
+PATCH https://{subdomain}.{baseDomain}/api/dataExport/destinations/{destinationId}?api-version=2022-10-31-preview
``` You can use this to perform an incremental update to a destination. The sample request body looks like the following example, which updates the `displayName` of a destination:
The response to this request looks like the following example:
Use the following request to delete a destination: ```http
-DELETE https://{subdomain}.{baseDomain}/api/dataExport/destinations/{destinationId}?api-version=2022-06-30-preview
+DELETE https://{subdomain}.{baseDomain}/api/dataExport/destinations/{destinationId}?api-version=2022-10-31-preview
``` ### Create or update an export definition
DELETE https://{subdomain}.{baseDomain}/api/dataExport/destinations/{destination
Use the following request to create or update a data export definition: ```http
-PUT https://{subdomain}.{baseDomain}/api/dataExport/exports/{exportId}?api-version=2022-06-30-preview
+PUT https://{subdomain}.{baseDomain}/api/dataExport/exports/{exportId}?api-version=2022-10-31-preview
``` The following example shows a request body that creates an export definition for device telemetry:
The response to this request looks like the following example:
Use the following request to retrieve details of an export definition from your application: ```http
-GET https://{subdomain}.{baseDomain}/api/dataExport/exports/{exportId}?api-version=2022-06-30-preview
+GET https://{subdomain}.{baseDomain}/api/dataExport/exports/{exportId}?api-version=2022-10-31-preview
``` The response to this request looks like the following example:
The response to this request looks like the following example:
Use the following request to retrieve a list of export definitions from your application: ```http
-GET https://{subdomain}.{baseDomain}/api/dataExport/exports?api-version=2022-06-30-preview
+GET https://{subdomain}.{baseDomain}/api/dataExport/exports?api-version=2022-10-31-preview
``` The response to this request looks like the following example:
The response to this request looks like the following example:
### Patch an export definition ```http
-PATCH https://{subdomain}.{baseDomain}/dataExport/exports/{exportId}?api-version=2022-06-30-preview
+PATCH https://{subdomain}.{baseDomain}/api/dataExport/exports/{exportId}?api-version=2022-10-31-preview
``` You can use this to perform an incremental update to an export. The sample request body looks like the following example, which updates the `enrichments` of an export:
The response to this request looks like the following example:
Use the following request to delete an export definition: ```http
-DELETE https://{subdomain}.{baseDomain}/api/dataExport/destinations/{destinationId}?api-version=2022-06-30-preview
+DELETE https://{subdomain}.{baseDomain}/api/dataExport/exports/{exportId}?api-version=2022-10-31-preview
``` ## Next steps
iot-central Howto Query With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-query-with-rest-api.md
To learn how to query devices by using the IoT Central UI, see [How to use data
Use the following request to run a query: ```http
-POST https://{your app subdomain}.azureiotcentral.com/api/query?api-version=2022-06-30-preview
+POST https://{your app subdomain}.azureiotcentral.com/api/query?api-version=2022-10-31-preview
``` The query is in the request body and looks like the following example:
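A hedged C# sketch of running such a query follows; the device template ID (`dtmi:example:thermostat;1`) and the `temperature` telemetry name are hypothetical, as are the token and subdomain:
```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;

using var client = new HttpClient();
// Hypothetical IoT Central API token.
client.DefaultRequestHeaders.Authorization =
    AuthenticationHeaderValue.Parse("SharedAccessSignature sr=...");

// The query targets a hypothetical device template and telemetry name.
const string body =
    @"{ ""query"": ""SELECT $id, temperature FROM dtmi:example:thermostat;1 WHERE temperature > 25"" }";

var response = await client.PostAsync(
    "https://myapp.azureiotcentral.com/api/query?api-version=2022-10-31-preview",
    new StringContent(body, Encoding.UTF8, "application/json"));

Console.WriteLine(await response.Content.ReadAsStringAsync());
```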
iot-central Howto Transform Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-transform-data.md
To configure the device bridge to transform the exported device data:
1. Select **Go &rarr;** to open the **App Service Editor** page. Make the following changes:
- 1. Open the *wwwroot/IoTCIntegration/index.js* file. Replace all the code in this file with the code in [index.js](https://raw.githubusercontent.com/iot-for-all/iot-central-compute/main/Azure_function/index.js).
+ 1. Open the *wwwroot/IoTCIntegration/index.js* file. Replace all the code in this file with the code in [index.js](https://raw.githubusercontent.com/Azure/iot-central-compute/main/Azure_function/index.js).
1. In the new *index.js*, update the `openWeatherAppId` variable with the Open Weather API key you obtained previously.
To configure the device bridge to transform the exported device data:
message.properties.add('computed', true); ```
- For reference, you can view a completed example of the [engine.js](https://raw.githubusercontent.com/iot-for-all/iot-central-compute/main/Azure_function/lib/engine.js) file.
+ For reference, you can view a completed example of the [engine.js](https://raw.githubusercontent.com/Azure/iot-central-compute/main/Azure_function/lib/engine.js) file.
1. In the **App Service Editor**, select **Console** in the left navigation. Run the following commands to install the required packages:
To configure the device bridge to transform the exported device data:
This section describes how to set up the Azure IoT Central application.
-First, save the [device model](https://raw.githubusercontent.com/iot-for-all/iot-central-compute/main/model.json) file to your local machine.
+First, save the [device model](https://raw.githubusercontent.com/Azure/iot-central-compute/main/model.json) file to your local machine.
To add a device template to your IoT Central application, navigate to your IoT Central application and then:
To run a sample device that tests the scenario:
1. Clone the GitHub repository that contains the sample code by running the following command: ```bash
- git clone https://github.com/iot-for-all/iot-central-compute
+ git clone https://github.com/Azure/iot-central-compute
``` 1. To connect the sample device to your IoT Central application, edit the connection settings in the *iot-central-compute/device/device.js* file. Replace the scope ID and group SAS key with the values you made a note of previously:
iot-central Overview Iot Central Solution Builder https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central-solution-builder.md
Scenarios that process IoT data outside of IoT Central to extract business value
For example, use the IoT Central continuous data export feature to continuously ingest your IoT data into an Azure Synapse store. Then use Azure Data Factory to bring data from external systems into the Azure Synapse store. Use the Azure Synapse store with Power BI to generate your business reports.
-To learn more, see [Transform data for IoT Central](howto-transform-data.md). For a complete, end-to-end sample, see the [IoT Central Compute](https://github.com/iot-for-all/iot-central-compute) GitHub repository.
+To learn more, see [Transform data for IoT Central](howto-transform-data.md). For a complete, end-to-end sample, see the [IoT Central Compute](https://github.com/Azure/iot-central-compute) GitHub repository.
## Integrate with other services
You can use the data export and rules capabilities in IoT Central to integrate w
- [Extend Azure IoT Central with custom rules using Stream Analytics, Azure Functions, and SendGrid](howto-create-custom-rules.md) - [Extend Azure IoT Central with custom analytics using Azure Databricks](howto-create-custom-analytics.md)
-You can use IoT Edge devices connected to your IoT Central application to integrate with [Azure Video Analyzer](../../azure-video-analyzer/video-analyzer-docs/overview.md). To learn more, see the [Azure IoT Central gateway module for Azure Video Analyzer](https://github.com/iot-for-all/iotc-ava-gateway/blob/main/README.md) on GitHub.
+You can use IoT Edge devices connected to your IoT Central application to integrate with [Azure Video Analyzer](../../azure-video-analyzer/video-analyzer-docs/overview.md).
## Integrate with companion applications
iot-dps Quick Create Simulated Device Tpm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/quick-create-simulated-device-tpm.md
In this section, you'll prepare a development environment used to build the [Azu
1. Open a Git CMD or Git Bash command-line environment.
-2. Clone the [azure-utpm-c](https://github.com/Azure-Samples/azure-iot-samples-csharp) GitHub repository using the following command:
+2. Clone the [azure-utpm-c](https://github.com/Azure/azure-utpm-c) GitHub repository using the following command:
```cmd/sh git clone https://github.com/Azure/azure-utpm-c.git --recursive
iot-hub Iot Hub Compare Event Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-compare-event-hubs.md
Title: Compare Azure IoT Hub to Azure Event Hubs | Microsoft Docs description: A comparison of the IoT Hub and Event Hubs Azure services highlighting functional differences and use cases. The comparison includes supported protocols, device management, monitoring, and file uploads. - Previously updated : 02/20/2019 Last updated : 11/21/2022 # Connecting IoT Devices to Azure: IoT Hub and Event Hubs
-Azure provides services specifically developed for diverse types of connectivity and communication to help you connect your data to the power of the cloud. Both Azure IoT Hub and Azure Event Hubs are cloud services that can ingest large amounts of data and process or store that data for business insights. The two services are similar in that they both support ingestion of data with low latency and high reliability, but they are designed for different purposes. IoT Hub was developed to address the unique requirements of connecting IoT devices to the Azure cloud while Event Hubs was designed for big data streaming. Microsoft recommends using Azure IoT Hub to connect IoT devices to Azure
+Azure provides services developed for diverse types of connectivity and communication to help you connect your data to the power of the cloud. Both Azure IoT Hub and Azure Event Hubs are cloud services that can ingest large amounts of data and process or store that data for business insights. The two services are similar in that they both support ingestion of data with low latency and high reliability, but they're designed for different purposes. IoT Hub was developed to address the unique requirements of connecting IoT devices to the Azure cloud, while Event Hubs was designed for big data streaming. Microsoft recommends using Azure IoT Hub to connect IoT devices to Azure.
Azure IoT Hub is the cloud gateway that connects IoT devices to gather data and drive business insights and automation. In addition, IoT Hub includes features that enrich the relationship between your devices and your backend systems. Bi-directional communication capabilities mean that while you receive data from devices you can also send commands and policies back to devices. For example, use cloud-to-device messaging to update properties or invoke device management actions. Cloud-to-device communication also enables you to send cloud intelligence to your edge devices with Azure IoT Edge. The unique device-level identity provided by IoT Hub helps better secure your IoT solution from potential attacks.
-[Azure Event Hubs](../event-hubs/event-hubs-about.md) is the big data streaming service of Azure. It is designed for high throughput data streaming scenarios where customers may send billions of requests per day. Event Hubs uses a partitioned consumer model to scale out your stream and is integrated into the big data and analytics services of Azure including Databricks, Stream Analytics, ADLS, and HDInsight. With features like Event Hubs Capture and Auto-Inflate, this service is designed to support your big data apps and solutions. Additionally, IoT Hub uses Event Hubs for its telemetry flow path, so your IoT solution also benefits from the tremendous power of Event Hubs.
+[Azure Event Hubs](../event-hubs/event-hubs-about.md) is the big data streaming service of Azure. It's designed for high throughput data streaming scenarios where customers may send billions of requests per day, and uses a partitioned consumer model to scale out your stream. Event Hubs is integrated into the big data and analytics services of Azure, including Databricks, Stream Analytics, ADLS, and HDInsight. With features like Event Hubs Capture and Auto-Inflate, this service is designed to support your big data apps and solutions. Additionally, IoT Hub uses Event Hubs for its telemetry flow path, so your IoT solution also benefits from the tremendous power of Event Hubs.
-To summarize, both solutions are designed for data ingestion at a massive scale. Only IoT Hub provides the rich IoT-specific capabilities that are designed for you to maximize the business value of connecting your IoT devices to the Azure cloud. If your IoT journey is just beginning, starting with IoT Hub to support your data ingestion scenarios will assure that you have instant access to the full-featured IoT capabilities once your business and technical needs require them.
+To summarize, both solutions are designed for data ingestion at a massive scale. Only IoT Hub provides the rich IoT-specific capabilities that are designed for you to maximize the business value of connecting your IoT devices to the Azure cloud. If your IoT journey is just beginning, starting with IoT Hub to support your data ingestion scenarios assures that you'll have instant access to full-featured IoT capabilities, once your business and technical needs require them.
-The following table provides details about how the two tiers of IoT Hub compare to Event Hubs when you're evaluating them for IoT capabilities. For more information about the standard and basic tiers of IoT Hub, see [How to choose the right IoT Hub tier](iot-hub-scaling.md).
+The following table provides details about how the two tiers of IoT Hub compare to Event Hubs when you're evaluating them for IoT capabilities. For more information about the standard and basic tiers of IoT Hub, see [Choose the right IoT Hub tier for your solution](iot-hub-scaling.md).
-| IoT Capability | IoT Hub standard tier | IoT Hub basic tier | Event Hubs |
+| IoT capability | IoT Hub standard tier | IoT Hub basic tier | Event Hubs |
| | | | | | Device-to-cloud messaging | ![Check][checkmark] | ![Check][checkmark] | ![Check][checkmark] |
-| Protocols: HTTPS, AMQP, AMQP over webSockets | ![Check][checkmark] | ![Check][checkmark] | ![Check][checkmark] |
-| Protocols: MQTT, MQTT over webSockets | ![Check][checkmark] | ![Check][checkmark] | |
+| Protocols: HTTPS, AMQP, AMQP over WebSockets | ![Check][checkmark] | ![Check][checkmark] | ![Check][checkmark] |
+| Protocols: MQTT, MQTT over WebSockets | ![Check][checkmark] | ![Check][checkmark] | |
| Per-device identity | ![Check][checkmark] | ![Check][checkmark] | | | File upload from devices | ![Check][checkmark] | ![Check][checkmark] | | | Device Provisioning Service | ![Check][checkmark] | ![Check][checkmark] | |
iot-hub Iot Hub Devguide Messages D2c https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-messages-d2c.md
Previously updated : 05/14/2021 Last updated : 11/21/2022
Message routing enables you to send messages from your devices to cloud services in an automated, scalable, and reliable manner. Message routing can be used for:
-* **Sending device telemetry messages as well as events** namely, device lifecycle events, device twin change events, digital twin change events, and device connection state events to the built-in-endpoint and custom endpoints. Learn about [routing endpoints](#routing-endpoints). To learn more about the events sent from IoT Plug and Play devices, see [Understand IoT Plug and Play digital twins](../iot-develop/concepts-digital-twin.md).
+* **Sending device telemetry messages as well as events**, namely device lifecycle events, device twin change events, digital twin change events, and device connection state events, to the built-in endpoint and custom endpoints. Learn about [routing endpoints](#routing-endpoints). To learn more about the events sent from IoT Plug and Play devices, see [Understand IoT Plug and Play digital twins](../iot-develop/concepts-digital-twin.md).
* **Filtering data before routing it to various endpoints** by applying rich queries. Message routing allows you to query on the message properties and message body as well as device twin tags and device twin properties. Learn more about using [queries in message routing](iot-hub-devguide-routing-query-syntax.md).
-IoT Hub needs write access to these service endpoints for message routing to work. If you configure your endpoints through the Azure portal, the necessary permissions are added for you. Make sure you configure your services to support the expected throughput. For example, if you are using Event Hubs as a custom endpoint, you must configure the **throughput units** for that event hub so it can handle the ingress of events you plan to send via IoT Hub message routing. Similarly, when using a Service Bus Queue as an endpoint, you must configure the **maximum size** to ensure the queue can hold all the data ingressed, until it is egressed by consumers. When you first configure your IoT solution, you may need to monitor your other endpoints and make any necessary adjustments for the actual load.
+IoT Hub needs write access to these service endpoints for message routing to work. If you configure your endpoints through the Azure portal, the necessary permissions are added for you. Make sure you configure your services to support the expected throughput. For example, if you're using Event Hubs as a custom endpoint, you must configure the **throughput units** for that event hub so it can handle the ingress of events you plan to send via IoT Hub message routing. Similarly, when using a Service Bus Queue as an endpoint, you must configure the **maximum size** to ensure the queue can hold all the data ingressed, until it's egressed by consumers. When you first configure your IoT solution, you may need to monitor your other endpoints and make any necessary adjustments for the actual load.
IoT Hub defines a [common format](iot-hub-devguide-messages-construct.md) for all device-to-cloud messaging for interoperability across protocols. If a message matches multiple routes that point to the same endpoint, IoT Hub delivers the message to that endpoint only once. Therefore, you don't need to configure deduplication on your Service Bus queue or topic. Use this tutorial to learn how to [configure message routing](tutorial-routing.md). ## Routing endpoints
-An IoT hub has a default built-in-endpoint (**messages/events**) that is compatible with Event Hubs. You can create [custom endpoints](iot-hub-devguide-endpoints.md#custom-endpoints) to route messages to by linking other services in your subscription to the IoT Hub.
+An IoT hub has a default built-in endpoint (**messages/events**) that is compatible with Event Hubs. You can create [custom endpoints](iot-hub-devguide-endpoints.md#custom-endpoints) to route messages to by linking other services in your subscription to the IoT hub.
Each message is routed to all endpoints whose routing queries it matches. In other words, a message can be routed to multiple endpoints.
IoT Hub currently supports the following endpoints:
## Built-in endpoint as a routing endpoint
-You can use standard [Event Hubs integration and SDKs](iot-hub-devguide-messages-read-builtin.md) to receive device-to-cloud messages from the built-in endpoint (**messages/events**). Once a Route is created, data stops flowing to the built-in-endpoint unless a Route is created to that endpoint. Even if no routes are created, a fallback route must be enabled to route messages to the built-in endpoint. The fallback is enabled by default if you create your hub using the portal or the CLI.
+You can use standard [Event Hubs integration and SDKs](iot-hub-devguide-messages-read-builtin.md) to receive device-to-cloud messages from the built-in endpoint (**messages/events**). Once a route is created, data stops flowing to the built-in endpoint unless a route is created to that endpoint. Even if no routes are created, a fallback route must be enabled to route messages to the built-in endpoint. The fallback is enabled by default if you create your hub using the portal or the CLI.
## Azure Storage as a routing endpoint There are two storage services IoT Hub can route messages to: [Azure Blob Storage](../storage/blobs/storage-blobs-introduction.md) and [Azure Data Lake Storage Gen2](../storage/blobs/data-lake-storage-introduction.md) (ADLS Gen2) accounts. Azure Data Lake Storage accounts are [hierarchical namespace](../storage/blobs/data-lake-storage-namespace.md)-enabled storage accounts built on top of blob storage. Both of these use blobs for their storage.
-IoT Hub supports writing data to Azure Storage in the [Apache Avro](https://avro.apache.org/) format and the JSON format. The default is AVRO. When using JSON encoding, you must set the contentType to **application/json** and contentEncoding to **UTF-8** in the message [system properties](iot-hub-devguide-routing-query-syntax.md#system-properties). Both of these values are case-insensitive. If the content encoding is not set, then IoT Hub will write the messages in base 64 encoded format.
+IoT Hub supports writing data to Azure Storage in the [Apache Avro](https://avro.apache.org/) format and the JSON format. The default is AVRO. When using JSON encoding, you must set the contentType property to **application/json** and the contentEncoding property to **UTF-8** in the message [system properties](iot-hub-devguide-routing-query-syntax.md#system-properties). Both of these values are case-insensitive. If the content encoding isn't set, then IoT Hub will write the messages in base 64 encoded format.
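For example, with the .NET device SDK (`Microsoft.Azure.Devices.Client`), a device can set these two system properties on a telemetry message so that routed copies land in storage as readable JSON; the connection string and payload below are placeholders in this sketch:
```csharp
using System.Text;
using Microsoft.Azure.Devices.Client;

// Placeholder device connection string.
using var device = DeviceClient.CreateFromConnectionString(
    "HostName=myhub.azure-devices.net;DeviceId=mydevice;SharedAccessKey=...",
    TransportType.Mqtt);

using var message = new Message(Encoding.UTF8.GetBytes(@"{ ""temperature"": 21.5 }"))
{
    // Without these two (case-insensitive) system properties,
    // IoT Hub writes the routed message to storage as Base64.
    ContentType = "application/json",
    ContentEncoding = "utf-8"
};

await device.SendEventAsync(message);
```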
-The encoding format can be only set when the blob storage endpoint is configured; it can't be edited for an existing endpoint. To switch encoding formats for an existing endpoint, you'll need to delete the endpoint and re-create it with the format you want. One helpful strategy might be to create a new custom endpoint with your desired encoding format and add a parallel route to that endpoint. In this way, you can verify your data before deleting the existing endpoint.
+The encoding format can be only set when the blob storage endpoint is configured; it can't be edited for an existing endpoint. To switch encoding formats for an existing endpoint, you'll need to first delete the endpoint, and then re-create it with the format you want. One helpful strategy might be to create a new custom endpoint with your desired encoding format and add a parallel route to that endpoint. In this way, you can verify your data before deleting the existing endpoint.
You can select the encoding format using the IoT Hub Create or Update REST API, specifically the [RoutingStorageContainerProperties](/rest/api/iothub/iothubresource/createorupdate#routingstoragecontainerproperties), the [Azure portal](https://portal.azure.com), [Azure CLI](/cli/azure/iot/hub/routing-endpoint), or [Azure PowerShell](/powershell/module/az.iothub/add-aziothubroutingendpoint). The following image shows how to select the encoding format in the Azure portal.
-![Blob storage endpoint encoding](./media/iot-hub-devguide-messages-d2c/blobencoding.png)
IoT Hub batches messages and writes data to storage whenever the batch reaches a certain size or a certain amount of time has elapsed. IoT Hub defaults to the following file naming convention:
IoT Hub batches messages and writes data to storage whenever the batch reaches a
{iothub}/{partition}/{YYYY}/{MM}/{DD}/{HH}/{mm} ```
-You may use any file naming convention, however you must use all listed tokens. IoT Hub will write to an empty blob if there is no data to write.
+You may use any file naming convention; however, you must use all listed tokens. IoT Hub will write to an empty blob if there's no data to write.
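As a small illustration of the default convention (the hub name and partition below are made up), the blob written for a batch at a given UTC time would be named like this:
```csharp
using System;
using System.Globalization;

// Illustrative only: the default {iothub}/{partition}/{YYYY}/{MM}/{DD}/{HH}/{mm}
// naming convention applied to a batch written at a given UTC time.
static string DefaultBlobName(string iotHubName, int partition, DateTime writtenUtc) =>
    $"{iotHubName}/{partition}/" +
    writtenUtc.ToString("yyyy/MM/dd/HH/mm", CultureInfo.InvariantCulture);

Console.WriteLine(DefaultBlobName("myhub", 0,
    new DateTime(2022, 11, 23, 2, 11, 0, DateTimeKind.Utc)));
// Prints: myhub/0/2022/11/23/02/11
```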
We recommend listing the blobs or files and then iterating over them, to ensure all blobs or files are read without making any assumptions of partition. The partition range could potentially change during a [Microsoft-initiated failover](iot-hub-ha-dr.md#microsoft-initiated-failover) or IoT Hub [manual failover](iot-hub-ha-dr.md#manual-failover). You can use the [List Blobs API](/rest/api/storageservices/list-blobs) to enumerate the list of blobs or [List ADLS Gen2 API](/rest/api/storageservices/datalakestoragegen2/path) for the list of files. See the following sample as guidance.
public void ListBlobsInContainer(string containerName, string iothub)
} ```
-To create an Azure Data Lake Gen2-compatible storage account, create a new V2 storage account and select *enabled* on the *Hierarchical namespace* field on the **Advanced** tab as shown in the following image:
+To create an Azure Data Lake Gen2-compatible storage account, create a new V2 storage account and select **Enable hierarchical namespace** from the **Data Lake Storage Gen2** section of the **Advanced** tab, as shown in the following image:
-![Select Azure Date Lake Gen2 storage](./media/iot-hub-devguide-messages-d2c/selectadls2storage.png)
## Service Bus Queues and Service Bus Topics as a routing endpoint
Service Bus queues and topics used as IoT Hub endpoints must not have **Sessions
Apart from the built-in Event Hubs-compatible endpoint, you can also route data to custom endpoints of type Event Hubs. ## Azure Cosmos DB as a routing endpoint (preview)
-You can send data directly to Azure Cosmos DB from IoT Hub. Cosmos DB is a fully managed hyperscale multi-model database service. It provides very low latency and high availability, making it a great choice for scenarios like connected solutions and manufacturing which require extensive downstream data analysis.
-IoT Hub supports writing to Cosmos DB in JSON (if specified in the message content-type) or as Base64 encoded binary. In order to set up a route to Cosmos DB, you will have to do the following:
+You can send data directly to Azure Cosmos DB from IoT Hub. Cosmos DB is a fully managed hyperscale multi-model database service. It provides low latency and high availability, making it a great choice for scenarios like connected solutions and manufacturing that require extensive downstream data analysis.
-From your provisioned IoT Hub, go to the Hub settings and click on message routing. Go to the Custom endpoints tab, click on Add and select Cosmos DB. The following image shows the endpoint addition:
+IoT Hub supports writing to Cosmos DB in JSON (if specified in the message content-type) or as Base64 encoded binary. You can set up a Cosmos DB endpoint for message routing by performing the following steps in the Azure portal:
-![Screenshot that shows how to add a Cosmos DB endpoint.](media/iot-hub-devguide-messages-d2c/add-cosmos-db-endpoint.png)
+1. Navigate to your provisioned IoT hub.
+1. In the resource menu, select **Message routing** from **Hub settings**.
+1. Select the **Custom endpoints** tab in the working pane, then select **Add** and choose **Cosmos DB (preview)** from the dropdown list.
-Enter your endpoint name. You should be able to choose from a list of Cosmos DB accounts available for selection, along with the Database and collection.
+ The following image shows the endpoint addition options in the working pane of the Azure portal:
-As Cosmos DB is a hyperscale datastore, all data/documents written to it must contain a field that represents a logical partition. The partition key property name is defined at the Container level and cannot be changed once it has been set. Each logical partition has a maximum size of 20GB. To effectively support high-scale scenarios, you can enable [Synthetic Partition Keys](/azure/cosmos-db/nosql/synthetic-partition-keys) for the Cosmos DB endpoint and configure them based on your estimated data volume. For example, in manufacturing scenarios, your logical partition might be expected to approach its max limit of 20 GB within a month. In that case, you can define a Synthetic Partition Key which is a combination of the device id and the month. This key will be automatically added to the partition key field for each new Cosmos DB record, ensuring logical partitions are created each month for each device.
+ :::image type="content" alt-text="Screenshot that shows how to add a Cosmos DB endpoint." source="media/iot-hub-devguide-messages-d2c/add-cosmos-db-endpoint.png":::
+
+1. Type a name for your Cosmos DB endpoint in **Endpoint name**.
+1. In **Cosmos DB account**, choose an existing Cosmos DB account from a list of Cosmos DB accounts available for selection, then select an existing database and collection in **Database** and **Collection**, respectively.
+1. In **Generate a synthetic partition key for messages**, select **Enable** if needed.
+
+ To effectively support high-scale scenarios, you can enable [synthetic partition keys](/azure/cosmos-db/nosql/synthetic-partition-keys) for the Cosmos DB endpoint. As Cosmos DB is a hyperscale data store, all data/documents written to it must contain a field that represents a logical partition. Each logical partition has a maximum size of 20 GB. You can specify the partition key property name in **Partition key name**. The partition key property name is defined at the container level and can't be changed once it has been set.
+
+ You can configure the synthetic partition key value by specifying a template in **Partition key template** based on your estimated data volume. For example, in manufacturing scenarios, your logical partition might be expected to approach its maximum limit of 20 GB within a month. In that case, you can define a synthetic partition key as a combination of the device ID and the month. The generated partition key value is automatically added to the partition key property for each new Cosmos DB record, ensuring logical partitions are created each month for each device (see the sketch after these steps).
- You can choose any of the supported authentication types for accessing the database, based on your system setup.
+1. In **Authentication type**, choose an authentication type for your Cosmos DB endpoint. You can choose any of the supported authentication types for accessing the database, based on your system setup.
-> [!Caution]
-> If you are using the System managed identity for authenticating to CosmosDB, you will need to have a "Cosmos DB Built in Data Contributor" Role assigned via CLI. The role setup is not supported from the portal today. For more details on the various roles, see [Configure role-based access for Azure Cosmos DB](/azure/cosmos-db/how-to-setup-rbac). To understand assigning roles via CLI, see [Manage Azure Cosmos DB SQL role resources.](/cli/azure/cosmosdb/sql/role)
+ > [!CAUTION]
+ > If you're using the system assigned managed identity for authenticating to Cosmos DB, you must use Azure CLI or Azure PowerShell to assign the Cosmos DB Built-in Data Contributor built-in role definition to the identity. Role assignment for Cosmos DB isn't currently supported from the Azure portal. For more information about the various roles, see [Configure role-based access for Azure Cosmos DB](/azure/cosmos-db/how-to-setup-rbac). To understand assigning roles via CLI, see [Manage Azure Cosmos DB SQL role resources.](/cli/azure/cosmosdb/sql/role)
-Once you have selected all the details, click on create and complete the setup of the custom endpoint.
+1. Select **Create** to complete the creation of your custom endpoint.
+
+To learn more about using the Azure portal to create message routes and endpoints for your IoT hub, see [Message routing with IoT Hub - Azure portal](how-to-routing-portal.md).
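The sketch below, referenced in the synthetic partition key step above, shows purely as an illustration how a `{deviceid}-{YYYY-MM}` style key value could be composed; the template format itself is an assumption for this example:
```csharp
using System;
using System.Globalization;

// Illustrative only: compose a synthetic partition key value from the
// device ID and the month, in the spirit of a {deviceid}-{YYYY-MM} template.
static string SyntheticPartitionKey(string deviceId, DateTime timestampUtc) =>
    $"{deviceId}-{timestampUtc.ToString("yyyy-MM", CultureInfo.InvariantCulture)}";

Console.WriteLine(SyntheticPartitionKey("device-001",
    new DateTime(2022, 11, 23, 0, 0, 0, DateTimeKind.Utc)));
// Prints: device-001-2022-11
```
Because the month component changes over time, each device's documents roll into a new logical partition every month, keeping each partition under the 20-GB limit.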
## Reading data that has been routed
You can configure a route by following this [tutorial](tutorial-routing.md).
Use the following tutorials to learn how to read messages from an endpoint.
-* Reading from [Built-in-endpoint](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-nodejs)
+* Reading from a [built-in endpoint](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-nodejs)
* Reading from [Blob storage](../storage/blobs/storage-blob-event-quickstart.md)
Use the following tutorials to learn how to read messages from an endpoint.
## Fallback route
-The fallback route sends all the messages that don't satisfy query conditions on any of the existing routes to the built-in-Event Hubs (**messages/events**), that is compatible with [Event Hubs](../event-hubs/index.yml). If message routing is turned on, you can enable the fallback route capability. Once a route is created, data stops flowing to the built-in-endpoint, unless a route is created to that endpoint. If there are no routes to the built-in-endpoint and a fallback route is enabled, only messages that don't match any query conditions on routes will be sent to the built-in-endpoint. Also, if all existing routes are deleted, fallback route must be enabled to receive all data at the built-in-endpoint.
+The fallback route sends all the messages that don't satisfy query conditions on any of the existing routes to the built-in endpoint (**messages/events**), which is compatible with [Event Hubs](../event-hubs/index.yml). If message routing is enabled, you can enable the fallback route capability. Once a route is created, data stops flowing to the built-in endpoint, unless a route is created to that endpoint. If there are no routes to the built-in endpoint and a fallback route is enabled, only messages that don't match any query conditions on routes will be sent to the built-in endpoint. Also, if all existing routes are deleted, fallback route capability must be enabled to receive all data at the built-in endpoint.
-You can enable/disable the fallback route in the Azure portal->Message Routing blade. You can also use Azure Resource Manager for [FallbackRouteProperties](/rest/api/iothub/iothubresource/createorupdate#fallbackrouteproperties) to use a custom endpoint for fallback route.
+You can enable or disable the fallback route in the Azure portal, from the **Message routing** blade. You can also use Azure Resource Manager for [FallbackRouteProperties](/rest/api/iothub/iothubresource/createorupdate#fallbackrouteproperties) to use a custom endpoint for the fallback route.
## Non-telemetry events
-In addition to device telemetry, message routing also enables sending device twin change events, device lifecycle events, digital twin change events, and device connection state events. For example, if a route is created with data source set to **device twin change events**, IoT Hub sends messages to the endpoint that contain the change in the device twin. Similarly, if a route is created with data source set to **device lifecycle events**, IoT Hub sends a message indicating whether the device or module was deleted or created. For more information about device lifecycle events, see [Device and module lifecycle notifications](./iot-hub-devguide-identity-registry.md#device-and-module-lifecycle-notifications). When using [Azure IoT Plug and Play](../iot-develop/overview-iot-plug-and-play.md), a developer can create routes with data source set to **digital twin change events** and IoT Hub sends messages whenever a digital twin property is set or changed, a digital twin is replaced, or when a change event happens for the underlying device twin. Finally, if a route is created with data source set to **device connection state events**, IoT Hub sends a message indicating whether the device was connected or disconnected.
+In addition to device telemetry, message routing also enables sending non-telemetry events, including:
+
+* Device twin change events
+* Device lifecycle events
+* Device job lifecycle events
+* Digital twin change events
+* Device connection state events
+* MQTT broker messages
+
+For example, if a route is created with the data source set to **Device Twin Change Events**, IoT Hub sends messages to the endpoint that contain the change in the device twin. Similarly, if a route is created with the data source set to **Device Lifecycle Events**, IoT Hub sends a message indicating whether the device or module was deleted or created. For more information about device lifecycle events, see [Device and module lifecycle notifications](./iot-hub-devguide-identity-registry.md#device-and-module-lifecycle-notifications). When using [Azure IoT Plug and Play](../iot-develop/overview-iot-plug-and-play.md), a developer can create routes with the data source set to **Digital Twin Change Events** and IoT Hub sends messages whenever a digital twin property is set or changed, a digital twin is replaced, or when a change event happens for the underlying device twin. Finally, if a route is created with data source set to **Device Connection State Events**, IoT Hub sends a message indicating whether the device was connected or disconnected.
[IoT Hub also integrates with Azure Event Grid](iot-hub-event-grid.md) to publish device events to support real-time integrations and automation of workflows based on these events. See key [differences between message routing and Event Grid](iot-hub-event-grid-routing-comparison.md) to learn which works best for your scenario. ## Limitations for device connection state events
-Device connection state events are available for devices connecting using either the MQTT or AMQP protocol, or using either of these protocols over WebSockets. Requests made only with HTTPS won't trigger device connection state notifications. For IoT Hub to start sending device connection state events, after opening a connection a device must call either the *cloud-to-device receive message* operation or the *device-to-cloud send telemetry* operation. Outside of the Azure IoT SDKs, in MQTT these operations equate to SUBSCRIBE or PUBLISH operations on the appropriate messaging [topics](iot-hub-mqtt-support.md). Over AMQP these equate to attaching or transferring a message on the [appropriate link paths](iot-hub-amqp-support.md).
+Device connection state events are available for devices connecting using either the MQTT or AMQP protocol, or using either of these protocols over WebSockets. Requests made only with HTTPS won't trigger device connection state notifications. For IoT Hub to start sending device connection state events, after opening a connection a device must call either the *cloud-to-device receive message* operation or the *device-to-cloud send telemetry* operation. Outside of the Azure IoT SDKs, in MQTT these operations equate to SUBSCRIBE or PUBLISH operations on the appropriate messaging [topics](iot-hub-mqtt-support.md). Over AMQP these operations equate to attaching or transferring a message on the [appropriate link paths](iot-hub-amqp-support.md).
-IoT Hub does not report each individual device connect and disconnect, but rather publishes the current connection state taken at a periodic 60 second snapshot. Receiving either the same connection state event with different sequence numbers or different connection state events both mean that there was a change in the device connection state during the 60 second window.
+IoT Hub doesn't report each individual device connect and disconnect, but rather publishes the current connection state taken at a periodic, 60-second snapshot. Receiving either the same connection state event with different sequence numbers or different connection state events both mean that there was a change in the device connection state during the 60-second window.
## Testing routes
-When you create a new route or edit an existing route, you should test the route query with a sample message. You can test individual routes or test all routes at once and no messages are routed to the endpoints during the test. Azure portal, Azure Resource Manager, Azure PowerShell, and Azure CLI can be used for testing. Outcomes help identify whether the sample message matched the query, message did not match the query, or test couldn't run because the sample message or query syntax are incorrect. To learn more, see [Test Route](/rest/api/iothub/iothubresource/testroute) and [Test all routes](/rest/api/iothub/iothubresource/testallroutes).
+When you create a new route or edit an existing route, you should test the route query with a sample message. You can test individual routes or test all routes at once and no messages are routed to the endpoints during the test. Azure portal, Azure Resource Manager, Azure PowerShell, and Azure CLI can be used for testing. Outcomes help identify whether the sample message matched or didn't match the query, or if the test couldn't run because the sample message or query syntax are incorrect. To learn more, see [Test Route](/rest/api/iothub/iothubresource/testroute) and [Test All Routes](/rest/api/iothub/iothubresource/testallroutes).
## Latency
-When you route device-to-cloud telemetry messages using built-in endpoints, there is a slight increase in the end-to-end latency after the creation of the first route.
+When you route device-to-cloud telemetry messages using built-in endpoints, there's a slight increase in the end-to-end latency after the creation of the first route.
-In most cases, the average increase in latency is less than 500 ms. However, the latency you experience can vary and can be higher depending on the tier of your IoT hub and your solution architecture. You can monitor the latency using **Routing: message latency for messages/events** or **d2c.endpoints.latency.builtIn.events** IoT Hub metric. Creating or deleting any route after the first one does not impact the end-to-end latency.
+In most cases, the average increase in latency is less than 500 milliseconds. However, the latency you experience can vary and can be higher depending on the tier of your IoT hub and your solution architecture. You can monitor the latency using the **Routing: message latency for messages/events** or **d2c.endpoints.latency.builtIn.events** IoT Hub metrics. Creating or deleting any route after the first one doesn't impact the end-to-end latency.
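For example, you can pull the built-in endpoint latency metric with the Azure CLI. A minimal sketch; the subscription, resource group, and hub names are placeholders:
```azurecli
# Query the average routing latency to the built-in endpoint over one-hour intervals.
az monitor metrics list --resource "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Devices/IotHubs/<hub-name>" --metric "d2c.endpoints.latency.builtIn.events" --aggregation Average --interval PT1H
```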
## Monitoring and troubleshooting
-IoT Hub provides several metrics related to routing and endpoints to give you an overview of the health of your hub and messages sent. For a list of all of the IoT Hub metrics broken out by functional category, see [Metrics in the Monitoring data reference](monitor-iot-hub-reference.md#metrics). You can track errors that occur during evaluation of a routing query and endpoint health as perceived by IoT Hub with the [**routes** category in IoT Hub resource logs](monitor-iot-hub-reference.md#routes). To learn more about using metrics and resource logs with IoT Hub, see [Monitor IoT Hub](monitor-iot-hub.md).
+IoT Hub provides several metrics related to routing and endpoints to give you an overview of the health of your hub and messages sent. For a list of all of the IoT Hub metrics broken out by functional category, see the [Metrics](monitor-iot-hub-reference.md#metrics) section of [Monitoring Azure IoT Hub data reference](monitor-iot-hub-reference.md). You can track errors that occur during evaluation of a routing query and endpoint health as perceived by IoT Hub with the [**routes** category in IoT Hub resource logs](monitor-iot-hub-reference.md#routes). To learn more about using metrics and resource logs with IoT Hub, see [Monitoring Azure IoT Hub](monitor-iot-hub.md).
-You can use the REST API [Get Endpoint Health](/rest/api/iothub/iothubresource/getendpointhealth#iothubresource_getendpointhealth) to get [health status](iot-hub-devguide-endpoints.md#custom-endpoints) of the endpoints.
+You can use the REST API [Get Endpoint Health](/rest/api/iothub/iothubresource/getendpointhealth#iothubresource_getendpointhealth) to get the [health status](iot-hub-devguide-endpoints.md#custom-endpoints) of the endpoints.
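If you'd rather not craft the REST call by hand, `az rest` can issue the same request. A minimal sketch; the IDs in the URL are placeholders, and the `api-version` value is an assumption, so substitute one supported by your subscription:
```azurecli
# Call the Get Endpoint Health operation for routing endpoints.
az rest --method get --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Devices/IotHubs/<hub-name>/routingEndpointsHealth?api-version=2021-07-02"
```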
For more details and support, see the [troubleshooting guide for routing](troubleshoot-message-routing.md). ## Next steps
-* To learn how to create Message Routes, see [Process IoT Hub device-to-cloud messages using routes](tutorial-routing.md).
+* To learn how to create message routes, see [Process IoT Hub device-to-cloud messages using routes](tutorial-routing.md).
* [How to send device-to-cloud messages](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-nodejs)
iot-hub Iot Hub Devguide Protocols https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-protocols.md
Title: Azure IoT Hub communication protocols and ports | Microsoft Docs
-description: This article describes the supported communication protocols for device-to-cloud and cloud-to-device communications and the port numbers that must be open.
+description: This article describes the supported communication protocols for device-to-cloud and cloud-to-device communications and the port numbers that must be open for those protocols.
- Previously updated : 01/29/2018 Last updated : 11/21/2022
IoT Hub allows devices to use the following protocols for device-side communicat
* [MQTT](https://docs.oasis-open.org/mqtt/mqtt/v3.1.1/mqtt-v3.1.1.pdf) * MQTT over WebSockets
-* [AMQP](https://docs.oasis-open.org/amqp/core/v1.0/os/amqp-core-complete-v1.0-os.pdf)
+* [Advanced Message Queuing Protocol (AMQP)](https://docs.oasis-open.org/amqp/core/v1.0/os/amqp-core-complete-v1.0-os.pdf)
* AMQP over WebSockets * HTTPS
The following table provides the high-level recommendations for your choice of p
| Protocol | When you should choose this protocol | | | |
-| MQTT <br> MQTT over WebSocket |Use on all devices that do not require to connect multiple devices (each with its own per-device credentials) over the same TLS connection. |
-| AMQP <br> AMQP over WebSocket |Use on field and cloud gateways to take advantage of connection multiplexing across devices. |
-| HTTPS |Use for devices that cannot support other protocols. |
+| MQTT <br> MQTT over WebSockets | Use on all devices that don't need to connect multiple devices (each with its own per-device credentials) over the same TLS connection. |
+| AMQP <br> AMQP over WebSockets | Use on field and cloud gateways to take advantage of connection multiplexing across devices. |
+| HTTPS | Use for devices that can't support other protocols. |
Consider the following points when you choose your protocol for device-side communications:
-* **Cloud-to-device pattern**. HTTPS does not have an efficient way to implement server push. As such, when you are using HTTPS, devices poll IoT Hub for cloud-to-device messages. This approach is inefficient for both the device and IoT Hub. Under current HTTPS guidelines, each device should poll for messages every 25 minutes or more. Issuing more HTTPS receives results in IoT Hub throttling the requests. MQTT and AMQP support server push when receiving cloud-to-device messages. They enable immediate pushes of messages from IoT Hub to the device. If delivery latency is a concern, MQTT or AMQP are the best protocols to use. For rarely connected devices, HTTPS works as well.
+* **Cloud-to-device pattern**. HTTPS doesn't have an efficient way to implement server push. As such, when you're using HTTPS, devices poll IoT Hub for cloud-to-device messages. This approach is inefficient for both the device and IoT Hub. Under current HTTPS guidelines, each device should poll for messages every 25 minutes or more. Issuing HTTPS receive requests more often results in IoT Hub throttling the requests. MQTT and AMQP support server push when receiving cloud-to-device messages. They enable immediate pushes of messages from IoT Hub to the device. If delivery latency is a concern, MQTT or AMQP are the best protocols to use. For rarely connected devices, HTTPS works as well.
-* **Field gateways**. MQTT and HTTPS support only a single device identity (device ID plus credentials) per TLS connection. For this reason, these protocols are not supported for [field gateway scenarios](iot-hub-devguide-endpoints.md#field-gateways) that require multiplexing messages using multiple device identities across a single or a pool of upstream connections to IoT Hub. Such gateways can use a protocol that supports multiple device identities per connection, like AMQP, for their upstream traffic.
+* **Field gateways**. MQTT and HTTPS support only a single device identity (device ID plus credentials) per TLS connection. For this reason, these protocols aren't supported for [field gateway scenarios](iot-hub-devguide-endpoints.md#field-gateways) that require multiplexing messages, using multiple device identities, across either a single connection or a pool of upstream connections to IoT Hub. Such gateways can use a protocol that supports multiple device identities per connection, like AMQP, for their upstream traffic.
-* **Low resource devices**. The MQTT and HTTPS libraries have a smaller footprint than the AMQP libraries. As such, if the device has limited resources (for example, less than 1-MB RAM), these protocols might be the only protocol implementation available.
+* **Low resource devices**. The MQTT and HTTPS libraries have a smaller footprint than the AMQP libraries. As such, if the device has limited resources (for example, less than 1 MB of RAM), these protocols might be the only protocol implementation available.
* **Network traversal**. The standard AMQP protocol uses port 5671, and MQTT listens on port 8883. Use of these ports could cause problems in networks that are closed to non-HTTPS protocols. Use MQTT over WebSockets, AMQP over WebSockets, or HTTPS in this scenario.
Devices can communicate with IoT Hub in Azure using various protocols. Typically
| Protocol | Port | | | |
-| MQTT |8883 |
-| MQTT over WebSockets |443 |
-| AMQP |5671 |
-| AMQP over WebSockets |443 |
-| HTTPS |443 |
+| MQTT | 8883 |
+| MQTT over WebSockets | 443 |
+| AMQP | 5671 |
+| AMQP over WebSockets | 443 |
+| HTTPS | 443 |
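When you open these ports through a firewall or proxy, you typically also need the hub's fully qualified hostname. One way to retrieve it, assuming a placeholder hub name:
```azurecli
# Print the hub's hostname (for example, contosohub.azure-devices.net).
az iot hub show --name ContosoHub --query properties.hostName --output tsv
```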
-The IP address of an IoT hub is subject to change without notice. To learn how to mitigate the effects of IoT hub IP address changes on your IoT solution and devices, see [IoT Hub IP address best practices](iot-hub-understand-ip-address.md#best-practices).
+The IP address of an IoT hub is subject to change without notice. To learn how to mitigate the effects of IoT hub IP address changes on your IoT solution and devices, see the [Best practices](iot-hub-understand-ip-address.md#best-practices) section of [IoT Hub IP addresses](iot-hub-understand-ip-address.md).
## Next steps
-To learn more about how IoT Hub implements the MQTT protocol, see [Communicate with your IoT hub using the MQTT protocol](iot-hub-mqtt-support.md).
+For more information about how IoT Hub implements the MQTT protocol, see [Communicate with your IoT hub using the MQTT protocol](iot-hub-mqtt-support.md).
iot-hub Iot Hub Devguide Sdks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-sdks.md
Title: Azure IoT Hub SDKs | Microsoft Docs
-description: Links to the Azure IoT Hub SDKs which you can use to build device apps and back-end apps.
+description: Links to the Azure IoT Hub SDKs that you can use to build device apps and back-end apps.
Previously updated : 06/01/2021 Last updated : 11/18/2022
There are three categories of software development kits (SDKs) for working with IoT Hub:
-* [**IoT Hub device SDKs**](#azure-iot-hub-device-sdks) enable you to build apps that run on your IoT devices using device client or module client. These apps send telemetry to your IoT hub, and optionally receive messages, job, method, or twin updates from your IoT hub. You can use these SDKs to build device apps that use [Azure IoT Plug and Play](../iot-develop/overview-iot-plug-and-play.md) conventions and models to advertise their capabilities to IoT Plug and Play-enabled applications. You can also use module client to author [modules](../iot-edge/iot-edge-modules.md) for [Azure IoT Edge runtime](../iot-edge/about-iot-edge.md).
+* [**IoT Hub device SDKs**](#azure-iot-hub-device-sdks) enable you to build apps that run on your IoT devices using the device client or module client. These apps send telemetry to your IoT hub, and optionally receive messages, jobs, methods, or twin updates from your IoT hub. You can use these SDKs to build device apps that use [Azure IoT Plug and Play](../iot-develop/overview-iot-plug-and-play.md) conventions and models to advertise their capabilities to IoT Plug and Play-enabled applications. You can also use the module client to author [modules](../iot-edge/iot-edge-modules.md) for [Azure IoT Edge runtime](../iot-edge/about-iot-edge.md).
* [**IoT Hub service SDKs**](#azure-iot-hub-service-sdks) enable you to build backend applications to manage your IoT hub, and optionally send messages, schedule jobs, invoke direct methods, or send desired property updates to your IoT devices or modules.
The SDKs are available in **multiple languages** providing the flexibility to ch
| **C** | [packages](https://github.com/Azure/azure-iot-sdk-c/blob/master/readme.md#getting-the-sdk) | [GitHub](https://github.com/Azure/azure-iot-sdk-c) | [Quickstart](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-ansi-c) | [Samples](https://github.com/Azure/azure-iot-sdk-c/tree/master/iothub_client/samples) | [Reference](https://github.com/Azure/azure-iot-sdk-c/) | > [!WARNING]
-> The **C SDK** listed above is **not** suitable for embedded applications due to its memory management and threading model. For embedded devices, refer to the [Embedded device SDKs](#embedded-device-sdks).
+> The **C device SDK** listed in the previous table is **not** suitable for embedded applications due to its memory management and threading model. For embedded devices, refer to the [Embedded device SDKs](#embedded-device-sdks).
### Embedded device SDKs
-These SDKs were designed and created to run on devices with limited compute and memory resources and are implemented using the C language.
+These SDKs are designed and created to run on devices with limited compute and memory resources and are implemented using the C language.
The embedded device SDKs are available for **multiple operating systems** providing the flexibility to choose which best suits your team and scenario.
The embedded device SDKs are available for **multiple operating systems** provid
| **FreeRTOS** | FreeRTOS Middleware | [GitHub](https://github.com/Azure/azure-iot-middleware-freertos) | [Samples](https://github.com/Azure-Samples/iot-middleware-freertos-samples) | [Reference](https://azure.github.io/azure-iot-middleware-freertos) | | **Bare Metal** | Azure SDK for Embedded C | [GitHub](https://github.com/Azure/azure-sdk-for-c/tree/master/sdk/docs/iot) | [Samples](https://github.com/Azure/azure-sdk-for-c/blob/master/sdk/samples/iot/README.md) | [Reference](https://azure.github.io/azure-sdk-for-c) |
-Learn more about the IoT Hub device SDKS in the [IoT Device Development Documentation](../iot-develop/about-iot-sdks.md).
+Learn more about the IoT Hub device SDKs in the [IoT device development documentation](../iot-develop/about-iot-sdks.md).
## Azure IoT Hub service SDKs
The Azure IoT service SDKs contain code to facilitate building applications that
## Azure IoT Hub management SDKs
-The Iot Hub management SDKs help you build backend applications that manage the IoT hubs in your Azure subscription.
+The IoT Hub management SDKs help you build backend applications that manage the IoT hubs in your Azure subscription.
| Platform | Package | Code repository | Reference | | --|--|--|--|
The Iot Hub management SDKs help you build backend applications that manage the
## SDK and hardware compatibility
-For more information about device SDK compatibility with specific hardware devices, see the [Azure Certified for IoT device catalog](https://devicecatalog.azure.com/) or individual repository.
+For more information about device SDK compatibility with specific hardware devices, see the [Azure Certified Device catalog](https://devicecatalog.azure.com/) or individual repository.
[!INCLUDE [iot-hub-basic](../../includes/iot-hub-basic-partial.md)]
Azure IoT SDKs are also available for the following
* [Device Update for IoT Hub SDKs](../iot-hub-device-update/understand-device-update.md): To help you deploy over-the-air (OTA) updates for IoT devices.
-* [IoT Plug and Play SDKs](../iot-develop/libraries-sdks.md): To help you build IoT Plug and Play solutions.
+* [Microsoft SDKs for IoT Plug and Play](../iot-develop/libraries-sdks.md): To help you build IoT Plug and Play solutions.
## Next steps
iot-hub Iot Hub Scaling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-scaling.md
Previously updated : 06/28/2019 Last updated : 11/21/2022 # Choose the right IoT Hub tier for your solution
-Every IoT solution is different, so Azure IoT Hub offers several options based on pricing and scale. This article is meant to help you evaluate your IoT Hub needs. For pricing information about IoT Hub tiers, see [IoT Hub pricing](https://azure.microsoft.com/pricing/details/iot-hub).
+Every IoT solution is different, so Azure IoT Hub offers several options based on pricing and scale. This article is meant to help you evaluate your IoT Hub needs. For pricing information about IoT Hub tiers, see [Azure IoT Hub pricing](https://azure.microsoft.com/pricing/details/iot-hub).
To decide which IoT Hub tier is right for your solution, ask yourself two questions: **What features do I plan to use?**
-Azure IoT Hub offers two tiers, basic and standard, that differ in the number of features they support. If your IoT solution is based around collecting data from devices and analyzing it centrally, then the basic tier is probably right for you. If you want to use more advanced configurations to control IoT devices remotely or distribute some of your workloads onto the devices themselves, then you should consider the standard tier. For a detailed breakdown of which features are included in each tier continue to [Basic and standard tiers](#basic-and-standard-tiers).
+Azure IoT Hub offers two tiers, basic and standard, that differ in the number of features they support. If your IoT solution is based around collecting data from devices and analyzing it centrally, then the basic tier is probably right for you. If you want to use more advanced configurations to control IoT devices remotely or distribute some of your workloads onto the devices themselves, then you should consider the standard tier. For a detailed breakdown of which features are included in each tier, continue to [Basic and standard tiers](#basic-and-standard-tiers).
**How much data do I plan to move daily?**
Each IoT Hub tier is available in three sizes, based around how much data throug
The standard tier of IoT Hub enables all features, and is required for any IoT solutions that want to make use of the bi-directional communication capabilities. The basic tier enables a subset of the features and is intended for IoT solutions that only need uni-directional communication from devices to the cloud. Both tiers offer the same security and authentication features.
-Only one type of [edition](https://azure.microsoft.com/pricing/details/iot-hub/) within a tier can be chosen per IoT Hub. For example, you can create an IoT Hub with multiple units of S1, but not with a mix of units from different editions, such as S1 and S2.
+Only one type of [IoT Hub edition](https://azure.microsoft.com/pricing/details/iot-hub/) within a tier can be chosen per IoT hub. For example, you can create an IoT hub with multiple units of S1. However, you can't create an IoT hub with a mix of units from different editions, such as S1 and B3 or S1 and S2.
-| Capability | Basic tier | Free/Standard tier |
+| Capability | Basic tier | Standard/Free tier |
| - | - | - | | [Device-to-cloud telemetry](iot-hub-devguide-messaging.md) | Yes | Yes | | [Per-device identity](iot-hub-devguide-identity-registry.md) | Yes | Yes |
Only one type of [edition](https://azure.microsoft.com/pricing/details/iot-hub/)
| [Device Provisioning Service](../iot-dps/about-iot-dps.md) | Yes | Yes | | [Monitoring and diagnostics](monitor-iot-hub.md) | Yes | Yes | | [Cloud-to-device messaging](iot-hub-devguide-c2d-guidance.md) | | Yes |
-| [Device twins](iot-hub-devguide-device-twins.md), [Module twins](iot-hub-devguide-module-twins.md), and [Device management](iot-hub-device-management-overview.md) | | Yes |
+| [Device twins](iot-hub-devguide-device-twins.md), [module twins](iot-hub-devguide-module-twins.md), and [device management](iot-hub-device-management-overview.md) | | Yes |
| [Device streams (preview)](iot-hub-device-streams-overview.md) | | Yes | | [Azure IoT Edge](../iot-edge/about-iot-edge.md) | | Yes | | [IoT Plug and Play](../iot-develop/overview-iot-plug-and-play.md) | | Yes |
-IoT Hub also offers a free tier that is meant for testing and evaluation. It has all the capabilities of the standard tier, but limited messaging allowances. You cannot upgrade from the free tier to either basic or standard.
+IoT Hub also offers a free tier that is meant for testing and evaluation. It has all the capabilities of the standard tier, but includes limited messaging allowances. You can't upgrade from the free tier to either the basic or standard tier.
## Partitions
-Azure IoT Hubs contain many core components of [Azure Event Hubs](../event-hubs/event-hubs-features.md), including [Partitions](../event-hubs/event-hubs-features.md#partitions). Event streams for IoT Hubs are generally populated with incoming telemetry data that is reported by various IoT devices. The partitioning of the event stream is used to reduce contentions that occur when concurrently reading and writing to event streams.
+Azure IoT hubs contain many core components from [Azure Event Hubs](../event-hubs/event-hubs-features.md), including [partitions](../event-hubs/event-hubs-features.md#partitions). Event streams for IoT hubs are populated with incoming telemetry data that is reported by various IoT devices. The partitioning of the event stream is used to reduce contentions that occur when concurrently reading and writing to event streams.
-The partition limit is chosen when IoT Hub is created, and cannot be changed. The maximum partition limit for basic tier IoT Hub and standard tier IoT Hub is 32. Most IoT hubs only need 4 partitions. For more information on determining the partitions, see the Event Hubs FAQ [How many partitions do I need?](../event-hubs/event-hubs-faq.yml#how-many-partitions-do-i-need-)
+The partition limit is chosen when an IoT hub is created, and can't be changed. The maximum limit of device-to-cloud partitions for basic tier and standard tier IoT hubs is 32. Most IoT hubs only need four partitions. For more information on determining the partitions, see the [How many partitions do I need?](../event-hubs/event-hubs-faq.yml#how-many-partitions-do-i-need-) question in the [FAQ](../event-hubs/event-hubs-faq.yml) for [Azure Event Hubs](../event-hubs/index.yml).
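Because the partition count is fixed for the life of the hub, it can only be set at creation time. A minimal sketch with placeholder names:
```azurecli
# Create a standard tier hub with 8 device-to-cloud partitions instead of the default 4.
az iot hub create --name ContosoHub --resource-group ContosoRG --sku S1 --partition-count 8
```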
## Tier upgrade
Once you create your IoT hub, you can upgrade from the basic tier to the standar
The partition configuration remains unchanged when you migrate from basic tier to standard tier. > [!NOTE]
-> The free tier does not support upgrading to basic or standard.
+> The free tier does not support upgrading to basic or standard tier.
## IoT Hub REST APIs
-The difference in supported capabilities between the basic and standard tiers of IoT Hub means that some API calls do not work with basic tier hubs. The following table shows which APIs are available:
+The difference in supported capabilities between the basic and standard tiers of IoT Hub means that some API calls don't work with basic tier IoT hubs. The following table shows which APIs are available:
-| API | Basic tier | Free/Standard tier |
+| API | Basic tier | Standard/Free tier |
| | - | - | | [Delete device](/javascript/api/azure-iot-digitaltwins-service/registrymanager#azure-iot-digitaltwins-service-registrymanager-deletedevice) | Yes | Yes | | [Get device](/rest/api/iothub/service/devices/get-identity) | Yes | Yes |
The best way to size an IoT Hub solution is to evaluate the traffic on a per-uni
* Cloud-to-device messages * Identity registry operations
-Traffic is measured for your IoT hub on a per-unit basis. When you create an IoT hub, you choose its tier and edition, and set the number of units available. You can purchase up to 200 units for the B1, B2, S1, or S2 edition, or up to 10 units for the B3 or S3 edition. After your IoT hub is created, you can change the number of units available within its edition, upgrade or downgrade between editions within its tier (B1 to B2), or upgrade from the basic to the standard tier (B1 to S1) without interrupting your existing operations. For more information, see [How to upgrade your IoT hub](iot-hub-upgrade.md).
+Traffic is measured for your IoT hub on a per-unit basis. When you create an IoT hub, you choose its tier and edition, and set the number of units available. You can purchase up to 200 units for the B1, B2, S1, or S2 edition, or up to 10 units for the B3 or S3 edition. After you create your IoT hub, without interrupting your existing operations, you can:
+
+- Change the number of units available within its edition (for example, upgrading from one to three units of B1)
+- Upgrade or downgrade between editions within its tier (for example, upgrading from B1 to B2)
+- Upgrade from the basic to the standard tier (for example, upgrading from B1 to S1)
+
+For more information, see [How to upgrade your IoT hub](iot-hub-upgrade.md).
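One way to make these changes is a generic resource update with `az iot hub update` and the `--set` arguments; the hub name is a placeholder:
```azurecli
# Scale out within the same edition: go from one to three units of S1.
az iot hub update --name ContosoHub --set sku.capacity=3

# Upgrade between editions within a tier, or from basic to standard: for example, B1 to S1.
az iot hub update --name ContosoHub --set sku.name=S1
```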
As an example of each tier's traffic capabilities, device-to-cloud messages follow these sustained throughput guidelines:
As an example of each tier's traffic capabilities, device-to-cloud messages foll
| B2, S2 |Up to 16 MB/minute per unit<br/>(22.8 GB/day/unit) |Average of 4,167 messages/minute per unit<br/>(6 million messages/day per unit) | | B3, S3 |Up to 814 MB/minute per unit<br/>(1144.4 GB/day/unit) |Average of 208,333 messages/minute per unit<br/>(300 million messages/day per unit) |
-Device-to-cloud throughput is only one of the metrics you need to consider when designing an IoT solution. For more comprehensive information, see [IoT Hub quotas and throttles](iot-hub-devguide-quotas-throttling.md).
+Device-to-cloud throughput is only one of the metrics you need to consider when designing an IoT solution. For more comprehensive information, see [IoT Hub quotas and throttling](iot-hub-devguide-quotas-throttling.md).
### Identity registry operation throughput
-IoT Hub identity registry operations are not supposed to be run-time operations, as they are mostly related to device provisioning.
+IoT Hub identity registry operations aren't supposed to be run-time operations, as they're mostly related to device provisioning.
-For specific burst performance numbers, see [IoT Hub quotas and throttles](iot-hub-devguide-quotas-throttling.md).
+For more information about specific burst performance numbers, see [IoT Hub quotas and throttling](iot-hub-devguide-quotas-throttling.md).
## Auto-scale
-If you are approaching the allowed message limit on your IoT hub, you can use these [steps to automatically scale](https://azure.microsoft.com/resources/samples/iot-hub-dotnet-autoscale/) to increment an IoT Hub unit in the same IoT Hub tier.
+If you're approaching the allowed message limit on your IoT hub, you can use these [steps to automatically scale](https://azure.microsoft.com/resources/samples/iot-hub-dotnet-autoscale/) to increment an IoT Hub unit in the same IoT Hub tier.
## Next steps
-* For more information about IoT Hub capabilities and performance details, see [IoT Hub pricing](https://azure.microsoft.com/pricing/details/iot-hub) or [IoT Hub quotas and throttles](iot-hub-devguide-quotas-throttling.md).
+* For more information about IoT Hub capabilities and performance details, see [Azure IoT Hub pricing](https://azure.microsoft.com/pricing/details/iot-hub) or [IoT Hub quotas and throttling](iot-hub-devguide-quotas-throttling.md).
-* To change your IoT Hub tier, follow the steps in [Upgrade your IoT hub](iot-hub-upgrade.md).
+* To change your IoT Hub tier, follow the steps in [How to upgrade your IoT hub](iot-hub-upgrade.md).
iot-hub Query Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/query-jobs.md
Here's a sample IoT hub device twin that is part of a job called **myJobId**:
{ "deviceId": "myDeviceId", "jobId": "myJobId",
- "jobType": "scheduleTwinUpdate",
+ "jobType": "scheduleUpdateTwin",
"status": "completed", "startTimeUtc": "2016-09-29T18:18:52.7418462", "endTimeUtc": "2016-09-29T18:20:52.7418462",
Here's a sample IoT hub device twin that is part of a job called **myJobId**:
Currently, this collection is queryable as **devices.jobs** in the IoT Hub query language. > [!IMPORTANT]
-> Currently, the jobs property is never returned when querying device twins. That is, queries that contain `FROM devices`. The jobs property can only be accessed directly with queries using `FROM devices.jobs`.
+> Currently, the jobs property is not returned when querying device twins; that is, it's not returned by queries that contain `FROM devices`. The jobs property can only be accessed directly with queries using `FROM devices.jobs`.
For example, the following query returns all jobs (past and scheduled) that affect a single device:
For example, the following query retrieves all completed device twin update jobs
```sql SELECT * FROM devices.jobs WHERE devices.jobs.deviceId = 'myDeviceId'
- AND devices.jobs.jobType = 'scheduleTwinUpdate'
+ AND devices.jobs.jobType = 'scheduleUpdateTwin'
AND devices.jobs.status = 'completed' AND devices.jobs.createdTimeUtc > '2016-09-01' ```
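You can also run these queries outside the portal. A minimal sketch using the `az iot hub query` command from the `azure-iot` CLI extension, with a placeholder hub name:
```azurecli
# Run a devices.jobs query against the hub and return matching job records as JSON.
az iot hub query --hub-name ContosoHub --query-command "SELECT * FROM devices.jobs WHERE devices.jobs.deviceId = 'myDeviceId' AND devices.jobs.status = 'completed'"
```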
iot-hub Query Twins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/query-twins.md
SELECT * FROM devices.modules
## Twin query limitations > [!IMPORTANT]
-> Query results can have a few minutes of delay with respect to the latest values in device twins. If querying individual device twins by ID, use the [get twin REST API](/jav#azure-iot-hub-service-sdks).
+Twin queries are eventually consistent, and delays of up to 30 minutes should be tolerated. In most instances, a twin query returns results within a few seconds. IoT Hub strives to provide low latency for all operations; however, due to network conditions and other unpredictable factors, it can't guarantee a specific latency.
+
+An alternative option to twin queries is to query individual device twins by ID by using the [get twin REST API](/jav#azure-iot-hub-service-sdks).
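For example, the `azure-iot` CLI extension wraps the get twin operation, which reads the twin directly rather than through the eventually consistent query store. A minimal sketch with placeholder names:
```azurecli
# Fetch the current twin for a single device by ID.
az iot hub device-twin show --hub-name ContosoHub --device-id myDeviceId
```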
Query expressions can have a maximum length of 8192 characters. Currently, comparisons are supported only between primitive types (no objects), for instance `... WHERE properties.desired.config = properties.reported.config` is supported only if those properties have primitive values.
+We recommend that you don't take a dependency on the lastActivityTime property found in device identity properties for twin queries in any scenario. This field doesn't guarantee an accurate gauge of device status. Instead, use IoT device lifecycle events to manage device state and activities. For more information on how to use IoT Hub lifecycle events in your solution, see [React to IoT Hub events by using Event Grid to trigger actions](/azure/iot-hub/iot-hub-event-grid).
+> [!Note]
+> Avoid making any assumptions about the maximum latency of this operation. For more information on how to build your solution with latency in mind, see [Latency Solutions](/azure/iot-hub/iot-hub-devguide-quotas-throttling).
+ ## Next steps * Understand the basics of the [IoT Hub query language](iot-hub-devguide-query-language.md)
key-vault Hsm Protected Keys Byok https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/hsm-protected-keys-byok.md
tags: azure-resource-manager
Previously updated : 02/04/2021 Last updated : 11/21/2022
For more information, and for a tutorial to get started using Key Vault (includi
Here's an overview of the process. Specific steps to complete are described later in the article.
-* In Key Vault, generate a key (referred to as a *Key Exchange Key* (KEK)). The KEK must be an RSA-HSM key that has only the `import` key operation. Only Key Vault Premium SKU supports RSA-HSM keys.
+* In Key Vault, generate a key (referred to as a *Key Exchange Key* (KEK)). The KEK must be an RSA-HSM key that has only the `import` key operation. Only Key Vault Premium and Managed HSM support RSA-HSM keys.
* Download the KEK public key as a .pem file. * Transfer the KEK public key to an offline computer that is connected to an on-premises HSM. * In the offline computer, use the BYOK tool provided by your HSM vendor to create a BYOK file.
The following table lists prerequisites for using BYOK in Azure Key Vault:
| Requirement | More information | | | | | An Azure subscription |To create a key vault in Azure Key Vault, you need an Azure subscription. [Sign up for a free trial](https://azure.microsoft.com/pricing/free-trial/). |
-| A Key Vault Premium SKU to import HSM-protected keys |For more information about the service tiers and capabilities in Azure Key Vault, see [Key Vault Pricing](https://azure.microsoft.com/pricing/details/key-vault/). |
+| A Key Vault Premium or Managed HSM to import HSM-protected keys |For more information about the service tiers and capabilities in Azure Key Vault, see [Key Vault Pricing](https://azure.microsoft.com/pricing/details/key-vault/). |
| An HSM from the supported HSMs list and a BYOK tool and instructions provided by your HSM vendor | You must have permissions for an HSM and basic knowledge of how to use your HSM. See [Supported HSMs](#supported-hsms). | | Azure CLI version 2.1.0 or later | See [Install the Azure CLI](/cli/azure/install-azure-cli).|
The following table lists prerequisites for using BYOK in Azure Key Vault:
||EC|P-256<br />P-384<br />P-521|Vendor HSM|The key to be transferred to the Azure Key Vault HSM| ||||
-## Generate and transfer your key to the Key Vault HSM
+## Generate and transfer your key to Key Vault Premium HSM or Managed HSM
-To generate and transfer your key to a Key Vault HSM:
+To generate and transfer your key to a Key Vault Premium or Managed HSM:
* [Step 1: Generate a KEK](#step-1-generate-a-kek) * [Step 2: Download the KEK public key](#step-2-download-the-kek-public-key)
To generate and transfer your key to a Key Vault HSM:
### Step 1: Generate a KEK
-A KEK is an RSA key that's generated in a Key Vault HSM. The KEK is used to encrypt the key you want to import (the *target* key).
+A KEK is an RSA key that's generated in a Key Vault Premium or Managed HSM. The KEK is used to encrypt the key you want to import (the *target* key).
The KEK must be: - An RSA-HSM key (2,048-bit; 3,072-bit; or 4,096-bit)
Use the [az keyvault key create](/cli/azure/keyvault/key#az-keyvault-key-create)
```azurecli az keyvault key create --kty RSA-HSM --size 4096 --name KEKforBYOK --ops import --vault-name ContosoKeyVaultHSM ```
+or for Managed HSM
+
+```azurecli
+az keyvault key create --kty RSA-HSM --size 4096 --name KEKforBYOK --ops import --hsm-name ContosoKeyVaultHSM
+```
### Step 2: Download the KEK public key
Use [az keyvault key download](/cli/azure/keyvault/key#az-keyvault-key-download)
az keyvault key download --name KEKforBYOK --vault-name ContosoKeyVaultHSM --file KEKforBYOK.publickey.pem ```
+or for Managed HSM
+
+```azurecli
+az keyvault key download --name KEKforBYOK --hsm-name ContosoKeyVaultHSM --file KEKforBYOK.publickey.pem
+```
+ Transfer the KEKforBYOK.publickey.pem file to your offline computer. You will need this file in the next step. ### Step 3: Generate and prepare your key for transfer
To import an RSA key use following command. Parameter --kty is optional and defa
```azurecli az keyvault key import --vault-name ContosoKeyVaultHSM --name ContosoFirstHSMkey --byok-file KeyTransferPackage-ContosoFirstHSMkey.byok ```
+or for Managed HSM
+
+```azurecli
+az keyvault key import --hsm-name ContosoKeyVaultHSM --name ContosoFirstHSMkey --byok-file KeyTransferPackage-ContosoFirstHSMkey.byok
+```
To import an EC key, you must specify key type and the curve name.
To import an EC key, you must specify key type and the curve name.
az keyvault key import --vault-name ContosoKeyVaultHSM --name ContosoFirstHSMkey --kty EC-HSM --curve-name "P-256" --byok-file KeyTransferPackage-ContosoFirstHSMkey.byok ```
+or for Managed HSM
+
+```azurecli
+az keyvault key import --hsm-name ContosoKeyVaultHSM --name ContosoFirstHSMkey --kty EC-HSM --curve-name "P-256" --byok-file KeyTransferPackage-ContosoFirstHSMkey.byok
+```
+ If the upload is successful, Azure CLI displays the properties of the imported key. ## Next steps
logic-apps Set Up Devops Deployment Single Tenant Azure Logic Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/set-up-devops-deployment-single-tenant-azure-logic-apps.md
The following diagram shows the dependencies between your logic app project and
## Deploy logic app resources (zip deploy)
-After you push your logic app project to your source repository, you can set up build and release pipelines that deploy logic apps to infrastructure either inside or outside Azure.
+After you push your logic app project to your source repository, you can set up build and release pipelines, either inside or outside Azure, that deploy your logic app to your infrastructure.
### Build your project
machine-learning How To Mlflow Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/batch-inference/how-to-mlflow-batch.md
Follow these steps to deploy an MLflow model to a batch endpoint for running bat
1. First, let's connect to Azure Machine Learning workspace where we are going to work on.
- # [Azure ML CLI](#tab/cli)
+ # [Azure CLI](#tab/cli)
- ```bash
+ ```azurecli
az account set --subscription <subscription> az configure --defaults workspace=<workspace> group=<resource-group> location=<location> ```
- # [Azure ML SDK for Python](#tab/sdk)
+ # [Python](#tab/sdk)
The workspace is the top-level resource for Azure Machine Learning, providing a centralized place to work with all the artifacts you create when you use Azure Machine Learning. In this section, we'll connect to the workspace in which you'll perform deployment tasks.
Follow these steps to deploy an MLflow model to a batch endpoint for running bat
2. Batch Endpoint can only deploy registered models. In this case, we already have a local copy of the model in the repository, so we only need to publish the model to the registry in the workspace. You can skip this step if the model you are trying to deploy is already registered.
- # [Azure ML CLI](#tab/cli)
+ # [Azure CLI](#tab/cli)
- ```bash
+ ```azurecli
MODEL_NAME='heart-classifier' az ml model create --name $MODEL_NAME --type "mlflow_model" --path "heart-classifier-mlflow/model" ```
- # [Azure ML SDK for Python](#tab/sdk)
+ # [Python](#tab/sdk)
```python model_name = 'heart-classifier'
Follow these steps to deploy an MLflow model to a batch endpoint for running bat
3. Before moving forward, we need to make sure the batch deployments we are about to create can run on some infrastructure (compute). Batch deployments can run on any Azure ML compute that already exists in the workspace. That means that multiple batch deployments can share the same compute infrastructure. In this example, we are going to work on an AzureML compute cluster called `cpu-cluster`. Let's verify that the compute exists on the workspace, or create it otherwise.
- # [Azure ML CLI](#tab/cli)
+ # [Azure CLI](#tab/cli)
Create a compute definition `YAML` like the following one: __cpu-cluster.yml__
+
```yaml $schema: https://azuremlschemas.azureedge.net/latest/amlCompute.schema.json name: cluster-cpu
Follow these steps to deploy an MLflow model to a batch endpoint for running bat
Create the compute using the following command:
- ```bash
+ ```azurecli
az ml compute create -f cpu-cluster.yml ```
- # [Azure ML SDK for Python](#tab/sdk)
+ # [Python](#tab/sdk)
To create a new compute cluster where to create the deployment, use the following script:
Follow these steps to deploy an MLflow model to a batch endpoint for running bat
4. Now it is time to create the batch endpoint and deployment. Let's start with the endpoint first. Endpoints only require a name and a description to be created:
- # [Azure ML CLI](#tab/cli)
+ # [Azure CLI](#tab/cli)
To create a new endpoint, create a `YAML` configuration like the following:
Follow these steps to deploy an MLflow model to a batch endpoint for running bat
Then, create the endpoint with the following command:
- ```bash
+ ```azurecli
ENDPOINT_NAME='heart-classifier-batch'
- az ml batch-endpoint create -f endpoint.yml
+ az ml batch-endpoint create -n $ENDPOINT_NAME -f endpoint.yml
```
- # [Azure ML SDK for Python](#tab/sdk)
+ # [Python](#tab/sdk)
To create a new endpoint, use the following script:
Follow these steps to deploy an MLflow model to a batch endpoint for running bat
name="heart-classifier-batch", description="A heart condition classifier for batch inference", )
+ ```
+
+ Then, create the endpoint with the following command:
+
+ ```python
ml_client.batch_endpoints.begin_create_or_update(endpoint) ``` 5. Now, let's create the deployment. MLflow models don't require you to indicate an environment or a scoring script when creating the deployments, as they're created for you. However, you can specify them if you want to customize how the deployment does inference.
- # [Azure ML CLI](#tab/cli)
+ # [Azure CLI](#tab/cli)
To create a new deployment under the created endpoint, create a `YAML` configuration like the following:
Follow these steps to deploy an MLflow model to a batch endpoint for running bat
Then, create the deployment with the following command:
- ```bash
+ ```azurecli
DEPLOYMENT_NAME="classifier-xgboost-mlflow"
- az ml batch-endpoint create -f endpoint.yml
+ az ml batch-deployment create -n $DEPLOYMENT_NAME -f deployment.yml
```
- # [Azure ML SDK for Python](#tab/sdk)
+ # [Python](#tab/sdk)
- To create a new deployment under the created endpoint, use the following script:
+ To create a new deployment under the created endpoint, first define the deployment:
```python deployment = BatchDeployment(
Follow these steps to deploy an MLflow model to a batch endpoint for running bat
retry_settings=BatchRetrySettings(max_retries=3, timeout=300), logging_level="info", )
+ ```
+
+ Then, create the deployment with the following command:
+
+ ```python
ml_client.batch_deployments.begin_create_or_update(deployment) ```
Follow these steps to deploy an MLflow model to a batch endpoint for running bat
6. Although you can invoke a specific deployment inside of an endpoint, you'll usually want to invoke the endpoint itself and let the endpoint decide which deployment to use. Such a deployment is called the "default" deployment. This approach lets you change the default deployment, and hence the model serving the endpoint, without changing the contract with the user invoking the endpoint. Use the following instruction to update the default deployment:
- # [Azure ML CLI](#tab/cli)
+ # [Azure CLI](#tab/cli)
- ```bash
+ ```azurecli
az ml batch-endpoint update --name $ENDPOINT_NAME --set defaults.deployment_name=$DEPLOYMENT_NAME ```
- # [Azure ML SDK for Python](#tab/sdk)
+ # [Python](#tab/sdk)
```python
+ endpoint = ml_client.batch_endpoints.get(endpoint.name)
endpoint.defaults.deployment_name = deployment.name ml_client.batch_endpoints.begin_create_or_update(endpoint) ```
For testing our endpoint, we are going to use a sample of unlabeled data located
1. Let's create the data asset first. This data asset consists of a folder with multiple CSV files that we want to process in parallel using batch endpoints. You can skip this step if your data is already registered as a data asset or you want to use a different input type.
- # [Azure ML CLI](#tab/cli)
+ # [Azure CLI](#tab/cli)
- Create a data asset definition in `YAML`:
+ a. Create a data asset definition in `YAML`:
__heart-dataset-unlabeled.yml__ ```yaml
For testing our endpoint, we are going to use a sample of unlabeled data located
path: heart-classifier-mlflow/data ```
- Then, create the data asset:
+ b. Create the data asset:
- ```bash
+ ```azurecli
az ml data create -f heart-dataset-unlabeled.yml ```
- # [Azure ML SDK for Python](#tab/sdk)
+ # [Python](#tab/sdk)
+
+ a. Create a data asset definition:
```python data_path = "heart-classifier-mlflow/data"
For testing our endpoint, we are going to use a sample of unlabeled data located
description="An unlabeled dataset for heart classification", name=dataset_name, )
+ ```
+
+ b. Create the data asset:
+
+ ```python
ml_client.data.create_or_update(heart_dataset_unlabeled) ```
+ c. Refresh the object to reflect the changes:
+
+ ```python
+ heart_dataset_unlabeled = ml_client.data.get(name=dataset_name)
+ ```
+
2. Now that the data is uploaded and ready to be used, let's invoke the endpoint:
- # [Azure ML CLI](#tab/cli)
+ # [Azure CLI](#tab/cli)
- ```bash
+ ```azurecli
JOB_NAME=$(az ml batch-endpoint invoke --name $ENDPOINT_NAME --input azureml:heart-dataset-unlabeled@latest | jq -r '.name') ``` > [!NOTE] > The utility `jq` might not be installed on every system. You can find installation instructions at [this link](https://stedolan.github.io/jq/download/).
- # [Azure ML SDK for Python](#tab/sdk)
+ # [Python](#tab/sdk)
```python input = Input(type=AssetTypes.URI_FOLDER, path=heart_dataset_unlabeled.id)
For testing our endpoint, we are going to use a sample of unlabeled data located
3. A batch job is started as soon as the command returns. You can monitor the status of the job until it finishes:
- # [Azure ML CLI](#tab/cli)
+ # [Azure CLI](#tab/cli)
- ```bash
+ ```azurecli
az ml job show --name $JOB_NAME ```
- # [Azure ML SDK for Python](#tab/sdk)
+ # [Python](#tab/sdk)
```python ml_client.jobs.get(job.name)
The file is structured as follows:
You can download the results of the job by using the job name:
-# [Azure ML CLI](#tab/cli)
+# [Azure CLI](#tab/cli)
To download the predictions, use the following command:
-```bash
+```azurecli
az ml job download --name $JOB_NAME --output-name score --download-path ./ ```
-# [Azure ML SDK for Python](#tab/sdk)
+# [Python](#tab/sdk)
```python ml_client.jobs.download(name=job.name, output_name='score', download_path='./')
Use the following steps to deploy an MLflow model with a custom scoring script.
> [!IMPORTANT] > This example uses a conda environment specified at `/heart-classifier-mlflow/environment/conda.yaml`. This file was created by combining the original MLflow conda dependencies file and adding the package `azureml-core`. __You can't use the `conda.yml` file from the model directly__.
- # [Azure ML CLI](#tab/cli)
+ # [Azure CLI](#tab/cli)
No extra step is required for the Azure ML CLI. The environment definition will be included in the deployment file.
- # [Azure ML SDK for Python](#tab/sdk)
+ # [Python](#tab/sdk)
Let's get a reference to the environment:
Use the following steps to deploy an MLflow model with a custom scoring script.
1. Let's create the deployment now:
- # [Azure ML CLI](#tab/cli)
+ # [Azure CLI](#tab/cli)
To create a new deployment under the created endpoint, create a `YAML` configuration like the following:
Use the following steps to deploy an MLflow model with a custom scoring script.
Then, create the deployment with the following command:
- ```bash
- az ml batch-endpoint create -f endpoint.yml
+ ```azurecli
+ az ml batch-deployment create -f deployment.yml
```
- # [Azure ML SDK for Python](#tab/sdk)
+ # [Python](#tab/sdk)
To create a new deployment under the created endpoint, use the following script:
machine-learning Concept Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-endpoints.md
Traffic to one deployment can also be mirrored (copied) to another deployment. M
:::image type="content" source="media/concept-endpoints/endpoint-concept-mirror.png" alt-text="Diagram showing an endpoint mirroring traffic to a deployment.":::
-Learn how to [safely rollout to online endpoints](how-to-safely-rollout-managed-endpoints.md).
+Learn how to [safely rollout to online endpoints](how-to-safely-rollout-online-endpoints.md).
### Application Insights integration
The following table highlights the key differences between managed online endpoi
| **Managed identity** | [Supported](how-to-access-resources-from-endpoints-managed-identities.md) | Supported | | **Virtual Network (VNET)** | [Supported](how-to-secure-online-endpoint.md) (preview) | Supported | | **View costs** | [Endpoint and deployment level](how-to-view-online-endpoints-costs.md) | Cluster level |
-| **Mirrored traffic** | [Supported](how-to-safely-rollout-managed-endpoints.md#test-the-deployment-with-mirrored-traffic-preview) | Unsupported |
+| **Mirrored traffic** | [Supported](how-to-safely-rollout-online-endpoints.md#test-the-deployment-with-mirrored-traffic-preview) | Unsupported |
| **No-code deployment** | Supported ([MLflow](how-to-deploy-mlflow-models-online-endpoints.md) and [Triton](how-to-deploy-with-triton.md) models) | Supported ([MLflow](how-to-deploy-mlflow-models-online-endpoints.md) and [Triton](how-to-deploy-with-triton.md) models) | ### Managed online endpoints
machine-learning Concept Model Management And Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-model-management-and-deployment.md
When deploying to an online endpoint, you can use controlled rollout to enable t
* Perform A/B testing by routing traffic to different deployments within the endpoint. * Switch between endpoint deployments by updating the traffic percentage in endpoint configuration.
-For more information, see [Controlled rollout of machine learning models](./how-to-safely-rollout-managed-endpoints.md).
+For more information, see [Controlled rollout of machine learning models](./how-to-safely-rollout-online-endpoints.md).
### Analytics
machine-learning Concept Vulnerability Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-vulnerability-management.md
For code-based training experiences, you control which Azure Machine Learning en
* [Azure Machine Learning Base Images Repository](https://github.com/Azure/AzureML-Containers) * [Data Science Virtual Machine release notes](./data-science-virtual-machine/release-notes.md)
-* [AzureML Python SDK Release Notes](./azure-machine-learning-release-notes.md)
+* [AzureML Python SDK Release Notes](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/ml/azure-ai-ml/CHANGELOG.md)
* [Machine learning enterprise security](/azure/cloud-adoption-framework/ready/azure-best-practices/ai-machine-learning-enterprise-security)
machine-learning How To Access Resources From Endpoints Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-resources-from-endpoints-managed-identities.md
msi_client.user_assigned_identities.delete(
## Next steps * [Deploy and score a machine learning model by using a online endpoint](how-to-deploy-managed-online-endpoints.md).
-* For more on deployment, see [Safe rollout for online endpoints](how-to-safely-rollout-managed-endpoints.md).
+* For more on deployment, see [Safe rollout for online endpoints](how-to-safely-rollout-online-endpoints.md).
* For more information on using the CLI, see [Use the CLI extension for Azure Machine Learning](reference-azure-machine-learning-cli.md). * To see which compute resources you can use, see [Managed online endpoints SKU list](reference-managed-online-endpoints-vm-sku-list.md). * For more on costs, see [View costs for an Azure Machine Learning managed online endpoint](how-to-view-online-endpoints-costs.md).
machine-learning How To Create Data Assets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-data-assets.md
When you create a data asset in Azure Machine Learning, you'll need to specify a
> [!NOTE] > When you create a data asset from a local path, it will be automatically uploaded to the default Azure Machine Learning datastore in the cloud.
+> [!IMPORTANT]
+> The studio only supports browsing of credential-less ADLS Gen 2 datastores.
## Data asset types - [**URIs**](#create-a-uri_folder-data-asset) - A **U**niform **R**esource **I**dentifier that is a reference to a storage location on your local computer or in the cloud that makes it easy to access data in your jobs. Azure Machine Learning distinguishes two types of URIs:`uri_file` and `uri_folder`.
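As a sketch of the difference between the two URI types, the same v2 CLI command registers either one by switching the `--type` flag; the names and paths here are placeholders:
```azurecli
# Register a single file as a uri_file data asset.
az ml data create --name my-file-asset --version 1 --type uri_file --path ./data/sample.csv

# Register a folder of files as a uri_folder data asset.
az ml data create --name my-folder-asset --version 1 --type uri_folder --path ./data
```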
machine-learning How To Deploy Automl Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-automl-endpoint.md
You can learn to deploy to managed online endpoints with SDK more in [Deploy mac
## Next steps - [Troubleshooting online endpoints deployment](how-to-troubleshoot-managed-online-endpoints.md)-- [Safe rollout for online endpoints](how-to-safely-rollout-managed-endpoints.md)
+- [Safe rollout for online endpoints](how-to-safely-rollout-online-endpoints.md)
machine-learning How To Deploy Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-custom-container.md
ml_client.online_endpoints.begin_delete(name=online_endpoint_name)
## Next steps -- [Safe rollout for online endpoints](how-to-safely-rollout-managed-endpoints.md)
+- [Safe rollout for online endpoints](how-to-safely-rollout-online-endpoints.md)
- [Troubleshooting online endpoints deployment](./how-to-troubleshoot-online-endpoints.md) - [Torch serve sample](https://github.com/Azure/azureml-examples/blob/main/cli/deploy-custom-container-torchserve-densenet.sh)
machine-learning How To Deploy Managed Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-managed-online-endpoints.md
The `begin_create_or_update` method also works with local deployments. Use the s
> The above is an example of inplace rolling update. > * For managed online endpoint, the same deployment is updated with the new configuration, with 20% nodes at a time, i.e. if the deployment has 10 nodes, 2 nodes at a time will be updated. > * For Kubernetes online endpoint, the system will iterately create a new deployment instance with the new configuration and delete the old one.
-> * For production usage, you might want to consider [blue-green deployment](how-to-safely-rollout-managed-endpoints.md), which offers a safer alternative.
+> * For production usage, you might want to consider [blue-green deployment](how-to-safely-rollout-online-endpoints.md), which offers a safer alternative.
### (Optional) Configure autoscaling
machine-learning How To Deploy Mlflow Models Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-mlflow-models-online-endpoints.md
To learn more, review these articles:
- [Deploy models with REST](how-to-deploy-with-rest.md) - [Create and use online endpoints in the studio](how-to-use-managed-online-endpoint-studio.md)-- [Safe rollout for online endpoints](how-to-safely-rollout-managed-endpoints.md)
+- [Safe rollout for online endpoints](how-to-safely-rollout-online-endpoints.md)
- [How to autoscale managed online endpoints](how-to-autoscale-endpoints.md) - [Use batch endpoints for batch scoring](batch-inference/how-to-use-batch-endpoint.md) - [View costs for an Azure Machine Learning managed online endpoint](how-to-view-online-endpoints-costs.md)
machine-learning How To Deploy Mlflow Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-mlflow-models.md
To learn more, review these articles:
- [Deploy models with REST](how-to-deploy-with-rest.md) - [Create and use online endpoints in the studio](how-to-use-managed-online-endpoint-studio.md)-- [Safe rollout for online endpoints](how-to-safely-rollout-managed-endpoints.md)
+- [Safe rollout for online endpoints](how-to-safely-rollout-online-endpoints.md)
- [How to autoscale managed online endpoints](how-to-autoscale-endpoints.md) - [Use batch endpoints for batch scoring](batch-inference/how-to-use-batch-endpoint.md) - [View costs for an Azure Machine Learning managed online endpoint](how-to-view-online-endpoints-costs.md)
machine-learning How To Deploy With Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-with-rest.md
If you aren't going use the deployment, you should delete it with the below comm
* Learn to [Troubleshoot online endpoints deployment and scoring](how-to-troubleshoot-managed-online-endpoints.md) * Learn how to [Access Azure resources with a online endpoint and managed identity](how-to-access-resources-from-endpoints-managed-identities.md) * Learn how to [monitor online endpoints](how-to-monitor-online-endpoints.md).
-* Learn [safe rollout for online endpoints](how-to-safely-rollout-managed-endpoints.md).
+* Learn [safe rollout for online endpoints](how-to-safely-rollout-online-endpoints.md).
* [View costs for an Azure Machine Learning managed online endpoint](how-to-view-online-endpoints-costs.md). * [Managed online endpoints SKU list](reference-managed-online-endpoints-vm-sku-list.md). * Learn about limits on managed online endpoints in [Manage and increase quotas for resources with Azure Machine Learning](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints).
machine-learning How To Deploy With Triton https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-with-triton.md
To learn more, review these articles:
- [Deploy models with REST](how-to-deploy-with-rest.md) - [Create and use managed online endpoints in the studio](how-to-use-managed-online-endpoint-studio.md)-- [Safe rollout for online endpoints ](how-to-safely-rollout-managed-endpoints.md)
+- [Safe rollout for online endpoints ](how-to-safely-rollout-online-endpoints.md)
- [How to autoscale managed online endpoints](how-to-autoscale-endpoints.md) - [View costs for an Azure Machine Learning managed online endpoint](how-to-view-online-endpoints-costs.md) - [Access Azure resources with a managed online endpoint and managed identity](how-to-access-resources-from-endpoints-managed-identities.md)
machine-learning How To Secure Online Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-online-endpoint.md
az group delete --resource-group <resource-group-name>
## Next steps -- [Safe rollout for online endpoints](how-to-safely-rollout-managed-endpoints.md)
+- [Safe rollout for online endpoints](how-to-safely-rollout-online-endpoints.md)
- [How to autoscale managed online endpoints](how-to-autoscale-endpoints.md) - [View costs for an Azure Machine Learning managed online endpoint](how-to-view-online-endpoints-costs.md) - [Access Azure resources with an online endpoint and managed identity](how-to-access-resources-from-endpoints-managed-identities.md)
machine-learning How To Troubleshoot Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-online-endpoints.md
instance_count = ceil(concurrent_requests / max_concurrent_requests_per_instance
## Next steps - [Deploy and score a machine learning model with a managed online endpoint](how-to-deploy-managed-online-endpoints.md)-- [Safe rollout for online endpoints](how-to-safely-rollout-managed-endpoints.md)
+- [Safe rollout for online endpoints](how-to-safely-rollout-online-endpoints.md)
- [Online endpoint YAML reference](reference-yaml-endpoint-online.md)
machine-learning Reference Yaml Endpoint Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-endpoint-online.md
The source JSON schema can be found at https://azuremlschemas.azureedge.net/late
| `identity` | object | The managed identity configuration for accessing Azure resources for endpoint provisioning and inference. | | | | `identity.type` | string | The type of managed identity. If the type is `user_assigned`, the `identity.user_assigned_identities` property must also be specified. | `system_assigned`, `user_assigned` | | | `identity.user_assigned_identities` | array | List of fully qualified resource IDs of the user-assigned identities. | | |
-| `traffic` | object | Traffic represents the percentage of requests to be served by different deployments. It's represented by a dictionary of key-value pairs, where keys represent the deployment name and value represent the percentage of traffic to that deployment. For example, `blue: 90 green: 10` means 90% requests are sent to the deployment named `blue` and 10% is sent to deployment `green`. Total traffic has to either be 0 or sum up to 100. See [Safe rollout for online endpoints](how-to-safely-rollout-managed-endpoints.md) to see the traffic configuration in action. <br><br> Note: you can't set this field during online endpoint creation, as the deployments under that endpoint must be created before traffic can be set. You can update the traffic for an online endpoint after the deployments have been created using `az ml online-endpoint update`; for example, `az ml online-endpoint update --name <endpoint_name> --traffic "blue=90 green=10"`. | | |
+| `traffic` | object | Traffic represents the percentage of requests to be served by different deployments. It's represented by a dictionary of key-value pairs, where each key is a deployment name and each value is the percentage of traffic to that deployment. For example, `blue: 90 green: 10` means 90% of requests are sent to the deployment named `blue` and 10% are sent to the deployment named `green`. Total traffic has to either be 0 or sum up to 100. See [Safe rollout for online endpoints](how-to-safely-rollout-online-endpoints.md) to see the traffic configuration in action. <br><br> Note: you can't set this field during online endpoint creation, as the deployments under that endpoint must be created before traffic can be set. You can update the traffic for an online endpoint after the deployments have been created using `az ml online-endpoint update`; for example, `az ml online-endpoint update --name <endpoint_name> --traffic "blue=90 green=10"`. | | |
| `public_network_access` | string | This flag controls the visibility of the managed endpoint. When `disabled`, inbound scoring requests are received using the [private endpoint of the Azure Machine Learning workspace](how-to-configure-private-link.md) and the endpoint can't be reached from public networks. This flag is applicable only for managed endpoints | `enabled`, `disabled` | `enabled` |
-| `mirror_traffic` | string | Percentage of live traffic to mirror to a deployment. Mirroring traffic doesn't change the results returned to clients. The mirrored percentage of traffic is copied and submitted to the specified deployment so you can gather metrics and logging without impacting clients. For example, to check if latency is within acceptable bounds and that there are no HTTP errors. It's represented by a dictionary with a single key-value pair, where the key represents the deployment name and the value represents the percentage of traffic to mirror to the deployment. For more information, see [Test a deployment with mirrored traffic](how-to-safely-rollout-managed-endpoints.md#test-the-deployment-with-mirrored-traffic-preview).
+| `mirror_traffic` | string | Percentage of live traffic to mirror to a deployment. Mirroring traffic doesn't change the results returned to clients. The mirrored percentage of traffic is copied and submitted to the specified deployment so you can gather metrics and logging without impacting clients. For example, to check if latency is within acceptable bounds and that there are no HTTP errors. It's represented by a dictionary with a single key-value pair, where the key represents the deployment name and the value represents the percentage of traffic to mirror to the deployment. For more information, see [Test a deployment with mirrored traffic](how-to-safely-rollout-online-endpoints.md#test-the-deployment-with-mirrored-traffic-preview).
## Remarks
machine-learning Azure Machine Learning Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/azure-machine-learning-release-notes.md
+
+ Title: Python SDK release notes
+
+description: Learn about the latest updates to Azure Machine Learning Python SDK.
+ Last updated : 10/25/2022
+# Azure Machine Learning Python SDK release notes
+
+In this article, learn about Azure Machine Learning Python SDK releases. For the full SDK reference content, visit the Azure Machine Learning's [**main SDK for Python**](/python/api/overview/azure/ml/intro) reference page.
+
+__RSS feed__: Get notified when this page is updated by copying and pasting the following URL into your feed reader:
+`https://learn.microsoft.com/api/search/rss?search=%22Azure+machine+learning+release+notes%22&locale=en-us`
++
+## 2022-10-25
+
+### Azure Machine Learning SDK for Python v1.47.0
+ + **azureml-automl-dnn-nlp**
+ + Runtime changes for AutoML NLP to account for fixed training parameters, as part of the newly introduced model sweeping and hyperparameter tuning.
+ + **azureml-mlflow**
+ + The AZUREML_ARTIFACTS_DEFAULT_TIMEOUT environment variable can now be used to control the timeout for artifact uploads (see the sketch after this list).
+ + **azureml-train-automl-runtime**
+ + Many Models and Hierarchical Time Series training now enforce a check on timeout parameters to detect conflicts before the experiment is submitted. This prevents experiment failures during the run by raising an exception before submission.
+ + Customers can now control the step size while using rolling forecast in Many Models inference.
+ + ManyModels inference with unpartitioned tabular data now supports forecast_quantiles.
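+
+A sketch of the `AZUREML_ARTIFACTS_DEFAULT_TIMEOUT` override mentioned in the azureml-mlflow note above; the file names are placeholders, and the value is assumed to be interpreted as seconds:
+
+```python
+import os
+
+# Assumption: the timeout value is interpreted as seconds.
+os.environ["AZUREML_ARTIFACTS_DEFAULT_TIMEOUT"] = "600"
+
+from azureml.core import Run
+
+# Upload an artifact from inside a job; the longer timeout now applies.
+run = Run.get_context()
+run.upload_file(name="outputs/model.pkl", path_or_stream="model.pkl")
+```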
+
+## 2022-09-26
+
+### Azure Machine Learning SDK for Python v1.46.0
+ + **azureml-automl-dnn-nlp**
+ + Customers will no longer be allowed to specify a line in CoNLL that consists of only a token. The line must always be either an empty newline or one with exactly one token followed by exactly one space followed by exactly one label.
+ + **azureml-contrib-automl-dnn-forecasting**
+ + Fixed a corner case where samples are reduced to 1 after the cross-validation split while sample_size still points to the count before the split, so batch_size could end up larger than the sample count. The fix initializes sample_size after the split.
+ + **azureml-core**
+ + Added a deprecation warning when inference customers use CLI/SDK v1 model deployment APIs to deploy models, and also when the Python version is 3.6 or lower.
+ + The following values of `AZUREML_LOG_DEPRECATION_WARNING_ENABLED` change the behavior as follows:
+ + Default - displays the warning when the customer uses Python 3.6 or lower, and for CLI/SDK v1.
+ + `True` - displays the sdk v1 deprecation warning on azureml-sdk packages.
+ + `False` - disables the sdk v1 deprecation warning on azureml-sdk packages.
+ + Command to be executed to set the environment variable to disable the deprecation message:
+ + Windows - `setx AZUREML_LOG_DEPRECATION_WARNING_ENABLED "False"`
+ + Linux - `export AZUREML_LOG_DEPRECATION_WARNING_ENABLED="False"`
+ + **azureml-interpret**
+ + update azureml-interpret package to interpret-community 0.27.*
+ + **azureml-pipeline-core**
+ + Fix schedule default time zone to UTC.
+ + Fix incorrect reuse when using SqlDataReference in DataTransfer step.
+ + **azureml-responsibleai**
+ + update azureml-responsibleai package and curated images to raiwidgets and responsibleai v0.22.0
+ + **azureml-train-automl-runtime**
+ + Fixed a bug in generated scripts that caused certain metrics to not render correctly in the UI.
+ + Many Models now supports rolling forecast for inferencing.
+ + Support for returning the top `N` models in the Many Models scenario.
+
+## 2022-08-29
+
+### Azure Machine Learning SDK for Python v1.45.0
+ + **azureml-automl-runtime**
+ + Fixed a bug where the sample_weight column was not properly validated.
+ + Added a public rolling_forecast() method to the forecasting pipeline wrappers for all supported forecasting models, replacing the deprecated rolling_evaluation() method (see the sketch after this list).
+ + Fixed an issue where AutoML Regression tasks may fall back to train-valid split for model evaluation, when CV would have been a more appropriate choice.
+ + **azureml-core**
+ + New cloud configuration suffix added, "aml_discovery_endpoint".
+ + Updated the vendored azure-storage package from version 2 to version 12.
+ + **azureml-mlflow**
+ + New cloud configuration suffix added, "aml_discovery_endpoint".
+ + **azureml-responsibleai**
+ + update azureml-responsibleai package and curated images to raiwidgets and responsibleai 0.21.0
+ + **azureml-sdk**
+ + The azureml-sdk package now allows Python 3.9.
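+
+A sketch of the new `rolling_forecast()` method named above; the experiment name, run ID, and test data are placeholders, and the exact signature may differ by SDK version:
+
+```python
+from azureml.core import Experiment, Workspace
+from azureml.train.automl.run import AutoMLRun
+
+ws = Workspace.from_config()
+remote_run = AutoMLRun(Experiment(ws, "forecasting-exp"), run_id="AutoML_<guid>")  # placeholders
+best_run, fitted_model = remote_run.get_output()
+
+# X_test (DataFrame) and y_test (array) hold the test window.
+# rolling_forecast() advances the forecast origin through the test set,
+# replacing the deprecated rolling_evaluation().
+rolling_predictions = fitted_model.rolling_forecast(X_test, y_test, step=1)
+```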
++
+## 2022-08-01
+
+### Azure Machine Learning SDK for Python v1.44.0
+
+ + **azureml-automl-dnn-nlp**
+ + Weighted accuracy and Matthews correlation coefficient (MCC) will no longer be a metric displayed on calculated metrics for NLP Multilabel classification.
+ + **azureml-automl-dnn-vision**
+ + Raise a user error when an invalid annotation format is provided
+ + **azureml-cli-common**
+ + Updated the v1 CLI description
+ + **azureml-contrib-automl-dnn-forecasting**
+ + Fixed the "Failed to calculate TCN metrics." issues caused for TCNForecaster when different timeseries in the validation dataset have different lengths.
+ + Added auto timeseries ID detection for DNN forecasting models like TCNForecaster.
+ + Fixed a bug with the Forecast TCN model where validation data could be corrupted in some circumstances when the user provided the validation set.
+ + **azureml-core**
+ + Allow setting a timeout_seconds parameter when downloading artifacts from a Run (see the sketch after this list).
+ + Warning message added - Azure ML CLI v1 will be retired on 30 Sep 2025. Users are advised to adopt CLI v2.
+ + Fix submission to non-AmlComputes throwing exceptions.
+ + Added docker context support for environments
+ + **azureml-interpret**
+ + Increase numpy version for AutoML packages
+ + **azureml-pipeline-core**
+ + Fix regenerate_outputs=True not taking effect when submitting a pipeline.
+ + **azureml-train-automl-runtime**
+ + Increase numpy version for AutoML packages
+ + Enable code generation for vision and nlp
+ + Original columns on which grains are created are added as part of predictions.csv
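+
+A sketch of the `timeout_seconds` download override named in the azureml-core note above; the workspace, experiment, run ID, and artifact names are placeholders:
+
+```python
+from azureml.core import Experiment, Run, Workspace
+
+ws = Workspace.from_config()
+run = Run(Experiment(ws, "my-experiment"), run_id="<run-id>")  # placeholders
+
+# Override the default artifact download timeout (in seconds).
+run.download_file(
+    name="outputs/model.pkl",
+    output_file_path="model.pkl",
+    timeout_seconds=600,
+)
+```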
+
+## 2022-07-21
+
+### Announcing end of support for Python 3.6 in AzureML SDK v1 packages
++ **Feature deprecation**
+ + **Deprecate Python 3.6 as a supported runtime for SDK v1 packages**
+ + On December 05, 2022, AzureML will deprecate Python 3.6 as a supported runtime, formally ending our Python 3.6 support for SDK v1 packages.
+ + From the deprecation date of December 05, 2022, AzureML will no longer apply security patches and other updates to the Python 3.6 runtime used by AzureML SDK v1 packages.
+ + The existing AzureML SDK v1 packages with Python 3.6 will continue to run. However, AzureML strongly recommends that you migrate your scripts and dependencies to a supported Python runtime version so that you continue to receive security patches and remain eligible for technical support.
+ + We recommend Python 3.8 as the runtime for AzureML SDK v1 packages.
+ + In addition, AzureML SDK v1 packages using Python 3.6 will no longer be eligible for technical support.
+ + If you have any questions, contact us through AML Support.
+
+## 2022-06-27
+
+ + **azureml-automl-dnn-nlp**
+ + Remove duplicate labels column from multi-label predictions
+ + **azureml-contrib-automl-pipeline-steps**
+ + Many Models now provides the capability to generate prediction output in csv format as well.
+ + Many Models prediction will now include column names in the output file when the **csv** file format is used.
+ + **azureml-core**
+ + ADAL authentication is now deprecated and all authentication classes now use MSAL authentication. Please install azure-cli>=2.30.0 to utilize MSAL-based authentication when using the AzureCliAuthentication class.
+ + Added a fix to force environment registration when calling `Environment.build(workspace)`. The fix resolves confusion where the most recently built environment was used instead of the requested one when an environment is cloned or inherited from another instance.
+ + SDK warning message to restart Compute Instance before May 31, 2022, if it was created before September 19, 2021
+ + **azureml-interpret**
+ + Updated azureml-interpret package to interpret-community 0.26.*
+ + In the azureml-interpret package, add ability to get raw and engineered feature names from scoring explainer. Also, add example to the scoring notebook to get feature names from the scoring explainer and add documentation about raw and engineered feature names.
+ + **azureml-mlflow**
+ + azureml-core as a dependency of azureml-mlflow has been removed.
+ + MLflow projects and local deployments still require azureml-core, which needs to be installed separately.
+ + Adding support for creating endpoints and deploying to them via the MLflow client plugin.
+ + **azureml-responsibleai**
+ + Updated azureml-responsibleai package and environment images to latest responsibleai and raiwidgets 0.19.0 release
+ + **azureml-train-automl-client**
+ + Now OutputDatasetConfig is supported as the input of the MM/HTS pipeline builder. The mappings are: 1) OutputTabularDatasetConfig -> treated as unpartitioned tabular dataset. 2) OutputFileDatasetConfig -> treated as file dataset.
+ + **azureml-train-automl-runtime**
+ + Added data validation that requires the number of minority class samples in the dataset to be at least as much as the number of CV folds requested.
+ + Automatic cross-validation parameter configuration is now available for AutoML forecasting tasks. Users can now specify "auto" for n_cross_validations and cv_step_size or leave them empty, and AutoML will provide those configurations based on your data (see the sketch after this list). However, this feature is currently not supported when TCN is enabled.
+ + Forecasting Parameters in Many Models and Hierarchical Time Series can now be passed via object rather than using individual parameters in dictionary.
+ + Enabled forecasting model endpoints with quantiles support to be consumed in Power BI.
+ + Updated AutoML scipy dependency upper bound to 1.5.3 from 1.5.2
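+
+A sketch of the automatic cross-validation configuration described above; the dataset, column names, and horizon are placeholders, and the placement of cv_step_size may vary by SDK version:
+
+```python
+from azureml.core import Dataset, Workspace
+from azureml.train.automl import AutoMLConfig
+from azureml.automl.core.forecasting_parameters import ForecastingParameters
+
+ws = Workspace.from_config()
+train_dataset = Dataset.get_by_name(ws, "demand-training")  # placeholder dataset
+
+forecasting_params = ForecastingParameters(
+    time_column_name="date",     # placeholder column
+    forecast_horizon=14,
+    cv_step_size="auto",         # let AutoML choose the CV step size
+)
+
+automl_config = AutoMLConfig(
+    task="forecasting",
+    training_data=train_dataset,
+    label_column_name="demand",  # placeholder column
+    n_cross_validations="auto",  # let AutoML choose the number of folds
+    forecasting_parameters=forecasting_params,
+)
+```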
+
+## 2022-04-25
+
+### Azure Machine Learning SDK for Python v1.41.0
+
+**Breaking change warning**
+
+This breaking change comes from the June release of `azureml-inference-server-http`. In the `azureml-inference-server-http` June release (v0.9.0), Python 3.6 support will be dropped. Since `azureml-defaults` depends on `azureml-inference-server-http`, this change will be propagated to `azureml-defaults`. If you are not using `azureml-defaults` for inference, feel free to use `azureml-core` or any other AzureML SDK packages directly instead of installing `azureml-defaults`.
+
+ + **azureml-automl-dnn-nlp**
+ + Turning on long range text feature by default.
+ + **azureml-automl-dnn-vision**
+ + Changing the ObjectAnnotation Class type from object to "dataobject".
+ + **azureml-core**
+ + This release updates the Keyvault class used by customers to enable them to provide the keyvault content type when creating a secret using the SDK. This release also updates the SDK to include a new function that enables customers to retrieve the value of the content type from a specific secret.
+ + **azureml-interpret**
+ + updated azureml-interpret package to interpret-community 0.25.0
+ + **azureml-pipeline-core**
+ + Run details are no longer printed when `pipeline_run.wait_for_completion` is called with `show_output=False`
+ + **azureml-train-automl-runtime**
+ + Fixes a bug that would cause code generation to fail when the azureml-contrib-automl-dnn-forecasting package is present in the training environment.
+ + Fix error when using a test dataset without a label column with AutoML Model Testing.
+
+## 2022-03-28
+
+### Azure Machine Learning SDK for Python v1.40.0
+ + **azureml-automl-dnn-nlp**
+ + The Long Range Text feature is now optional and is enabled only if customers explicitly opt in for it, using the kwarg "enable_long_range_text"
+ + Adding data validation layer for multi-class classification scenario which leverages the same base class as multilabel for common validations, and a derived class for additional task specific data validation checks.
+ + **azureml-automl-dnn-vision**
+ + Fixing KeyError while computing class weights.
+ + **azureml-contrib-reinforcementlearning**
+ + SDK warning message for upcoming deprecation of RL service
+ + **azureml-core**
+ + Return logs for runs that went through our new runtime when calling any of the get-logs functions on the run object, including `run.get_details`, `run.get_all_logs`, etc.
+ + Added experimental method Datastore.register_onpremises_hdfs to allow users to create datastores pointing to on-premises HDFS resources.
+ + Updating the CLI documentation in the help command
+ + **azureml-interpret**
+ + For azureml-interpret package, remove shap pin with packaging update. Remove numba and numpy pin after CE env update.
+ + **azureml-mlflow**
+ + Bugfix for MLflow deployment client run_local failing when config object wasn't provided.
+ + **azureml-pipeline-steps**
+ + Remove broken link of deprecated pipeline EstimatorStep
+ + **azureml-responsibleai**
+ + update azureml-responsibleai package to raiwidgets and responsibleai 0.17.0 release
+ + **azureml-train-automl-runtime**
+ + Code generation for automated ML now supports ForecastTCN models (experimental).
+ + Models created via code generation will now have all metrics calculated by default (except normalized mean absolute error, normalized median absolute error, normalized RMSE, and normalized RMSLE in the case of forecasting models). The list of metrics to be calculated can be changed by editing the return value of `get_metrics_names()`. Cross validation will now be used by default for forecasting models created via code generation.
+ + **azureml-training-tabular**
+ + The list of metrics to be calculated can be changed by editing the return value of `get_metrics_names()`. Cross validation will now be used by default for forecasting models created via code generation.
+ + Converting decimal type y-test into float to allow for metrics computation to proceed without errors.
+
+## 2022-02-28
+
+### Azure Machine Learning SDK for Python v1.39.0
+ + **azureml-automl-core**
+ + Fix incorrect form displayed in PBI for integration with AutoML regression models
+ + Adding min-label-classes check for both classification tasks (multi-class and multi-label). It will throw an error for the customer's run if the unique number of classes in the input training dataset is fewer than 2. It is meaningless to run classification on fewer than two classes.
+ + **azureml-automl-runtime**
+ + Converting decimal type y-test into float to allow for metrics computation to proceed without errors.
+ + AutoML training now supports numpy version 1.8.
+ + **azureml-contrib-automl-dnn-forecasting**
+ + Fixed a bug in the TCNForecaster model where not all training data would be used when cross-validation settings were provided.
+ + Fixed a bug in the TCNForecaster wrapper's forecast method that was corrupting inference-time predictions. Also fixed an issue where the forecast method would not use the most recent context data in train-valid scenarios.
+ + **azureml-interpret**
+ + For azureml-interpret package, remove shap pin with packaging update. Remove numba and numpy pin after CE env update.
+ + **azureml-responsibleai**
+ + Updated azureml-responsibleai package to raiwidgets and responsibleai 0.17.0 release
+ + **azureml-synapse**
+ + Fixed an issue where the magic widget disappeared.
+ + **azureml-train-automl-runtime**
+ + Updating AutoML dependencies to support Python 3.8. This change will break compatibility with models trained with SDK 1.37 or below due to newer Pandas interfaces being saved in the model.
+ + AutoML training now supports numpy version 1.19
+ + Fix AutoML reset index logic for ensemble models in automl_setup_model_explanations API
+ + In AutoML, use lightgbm surrogate model instead of linear surrogate model for sparse case after latest lightgbm version upgrade
+ + All internal intermediate artifacts that are produced by AutoML are now stored transparently on the parent run (instead of being sent to the default workspace blob store). Users should be able to see the artifacts that AutoML generates under the `outputs/` directory on the parent run.
+
+
+## 2022-01-24
+
+### Azure Machine Learning SDK for Python v1.38.0
+ + **azureml-automl-core**
+ + Tabnet Regressor and Tabnet Classifier support in AutoML
+ + Saving the data transformer in the parent run's outputs, where it can be reused to produce the same featurized dataset that was used during the experiment run
+ + Supporting getting primary metrics for Forecasting task in get_primary_metrics API.
+ + Renamed second optional parameter in v2 scoring scripts as GlobalParameters
+ + **azureml-automl-dnn-vision**
+ + Added the scoring metrics in the metrics UI
+ + **azureml-automl-runtime**
+ + Bug fix for cases where the algorithm name for NimbusML models may show up as an empty string, either in ML Studio or in console outputs.
+ + **azureml-core**
+ + Added parameter blobfuse_enabled in azureml.core.webservice.aks.AksWebservice.deploy_configuration. When this parameter is true, models and scoring files will be downloaded with blobfuse instead of the blob storage API (see the sketch after this list).
+ + **azureml-interpret**
+ + Updated azureml-interpret to interpret-community 0.24.0
+ + In azureml-interpret update scoring explainer to support latest version of lightgbm with sparse TreeExplainer
+ + Update azureml-interpret to interpret-community 0.23.*
+ + **azureml-pipeline-core**
+ + Added a note in PipelineData recommending that users use pipeline output datasets instead.
+ + **azureml-pipeline-steps**
+ + Add `environment_variables` to ParallelRunConfig, runtime environment variables can be passed by this parameter and will be set on the process where the user script is executed.
+ + **azureml-train-automl-client**
+ + Tabnet Regressor and Tabnet Classifier support in AutoML
+ + **azureml-train-automl-runtime**
+ + Saving the data transformer in the parent run's outputs, where it can be reused to produce the same featurized dataset that was used during the experiment run
+ + **azureml-train-core**
+ + Enable support for early termination for Bayesian Optimization in Hyperdrive
+ + Bayesian and GridParameterSampling objects can now pass on properties
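+
+A sketch of the new `blobfuse_enabled` parameter described above; the resource sizes are illustrative:
+
+```python
+from azureml.core.webservice import AksWebservice
+
+# blobfuse_enabled=True downloads models and scoring files with blobfuse
+# instead of the blob storage API.
+aks_config = AksWebservice.deploy_configuration(
+    cpu_cores=1,
+    memory_gb=2,
+    blobfuse_enabled=True,
+)
+```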
++
+## 2021-12-13
+
+### Azure Machine Learning SDK for Python v1.37.0
++ **Breaking changes**
+ + **azureml-core**
+ + Starting in version 1.37.0, AzureML SDK uses MSAL as the underlying authentication library. MSAL uses Azure Active Directory (Azure AD) v2.0 authentication flow to provide more functionality and increases security for token cache. For more details, see [Overview of the Microsoft Authentication Library (MSAL)](../../active-directory/develop/msal-overview.md).
+ + Update AML SDK dependencies to the latest version of Azure Resource Management Client Library for Python (azure-mgmt-resource>=15.0.0,<20.0.0) & adopt track2 SDK.
+ + Starting in version 1.37.0, azure-ml-cli extension should be compatible with the latest version of Azure CLI >=2.30.0.
+ + When using Azure CLI in a pipeline, such as Azure DevOps, ensure all tasks/stages use Azure CLI versions of v2.30.0 or above for MSAL-based Azure CLI. Azure CLI 2.30.0 is not backward compatible with prior versions and throws an error when incompatible versions are used. To use Azure CLI credentials with the AzureML SDK, the Azure CLI should be installed as a pip package (see the sketch below).
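+
+A sketch of using Azure CLI credentials with the SDK after the MSAL change; the workspace names are placeholders, and `az login` must have been run beforehand:
+
+```python
+from azureml.core import Workspace
+from azureml.core.authentication import AzureCliAuthentication
+
+# Requires the Azure CLI installed as a pip package (azure-cli>=2.30.0 for MSAL).
+cli_auth = AzureCliAuthentication()
+ws = Workspace.get(
+    name="my-workspace",                  # placeholder names
+    subscription_id="<subscription-id>",
+    resource_group="my-resource-group",
+    auth=cli_auth,
+)
+```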
+
++ **Bug fixes and improvements**
+ + **azureml-core**
+ + Removed instance types from the attach workflow for Kubernetes compute. Instance types can now directly be set up in the Kubernetes cluster. For more details, please visit aka.ms/amlarc/doc.
+ + **azureml-interpret**
+ + updated azureml-interpret to interpret-community 0.22.*
+ + **azureml-pipeline-steps**
+ + Fixed a bug where the experiment "placeholder" might be created on submission of a Pipeline with an AutoMLStep.
+ + **azureml-responsibleai**
+ + update azureml-responsibleai and compute instance environment to responsibleai and raiwidgets 0.15.0 release
+ + update azureml-responsibleai package to latest responsibleai 0.14.0.
+ + **azureml-tensorboard**
+ + You can now use `Tensorboard(runs, use_display_name=True)` to mount the TensorBoard logs to folders named after the `run.display_name/run.id` instead of `run.id` (see the sketch after this list).
+ + **azureml-train-automl-client**
+ + Fixed a bug where the experiment "placeholder" might be created on submission of a Pipeline with an AutoMLStep.
+ + Update AutoMLConfig test_data and test_size docs to reflect preview status.
+ + **azureml-train-automl-runtime**
+ + Added new feature that allows users to pass time series grains with one unique value.
+ + In certain scenarios, an AutoML model can predict NaNs. The rows that correspond to these NaN predictions will be removed from test datasets and predictions before computing metrics in test runs.
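+
+A sketch of the `use_display_name` option described above; `run` stands in for any Run object that wrote TensorBoard logs:
+
+```python
+from azureml.tensorboard import Tensorboard
+
+# Mount logs under folders named run.display_name/run.id instead of run.id.
+tb = Tensorboard([run], use_display_name=True)
+tb.start()
+# ... inspect the TensorBoard UI ...
+tb.stop()
+```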
++
+## 2021-11-08
+
+### Azure Machine Learning SDK for Python v1.36.0
++ **Bug fixes and improvements**
+ + **azureml-automl-dnn-vision**
+ + Cleaned up minor typos on some error messages.
+ + **azureml-contrib-reinforcementlearning**
+ + Submitting Reinforcement Learning runs that use simulators is no longer supported.
+ + **azureml-core**
+ + Added support for partitioned premium blob.
+ + Specifying non-public clouds for Managed Identity authentication is no longer supported.
+ + User can migrate AKS web service to online endpoint and deployment which is managed by CLI (v2).
+ + The instance type for training jobs on Kubernetes compute targets can now be set via a RunConfiguration property: run_config.kubernetescompute.instance_type (see the sketch after this list).
+ + **azureml-defaults**
+ + Removed redundant dependencies like gunicorn and werkzeug
+ + **azureml-interpret**
+ + azureml-interpret package updated to 0.21.* version of interpret-community
+ + **azureml-pipeline-steps**
+ + Deprecate MpiStep in favor of using CommandStep for running ML training (including distributed training) in pipelines.
+ + **azureml-train-automl-runtime**
+ + Update the AutoML model test predictions output format docs.
+ + Added docstring descriptions for Naive, SeasonalNaive, Average, and SeasonalAverage forecasting model.
+ + Featurization summary is now stored as an artifact on the run (check for a file named 'featurization_summary.json' under the outputs folder)
+ + Enable categorical indicators support for Tabnet Learner.
+ + Add downsample parameter to automl_setup_model_explanations to allow users to get explanations on all data without downsampling by setting this parameter to be false.
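+
+A sketch of setting the Kubernetes instance type via the RunConfiguration property named above; the compute, instance type, and script names are placeholders:
+
+```python
+from azureml.core import Experiment, ScriptRunConfig, Workspace
+from azureml.core.runconfig import RunConfiguration
+
+ws = Workspace.from_config()
+
+run_config = RunConfiguration()
+run_config.target = "my-k8s-compute"                          # placeholder compute name
+run_config.kubernetescompute.instance_type = "smallinstance"  # placeholder instance type
+
+src = ScriptRunConfig(source_directory=".", script="train.py", run_config=run_config)
+run = Experiment(ws, "k8s-training").submit(src)
+```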
+
+
+## 2021-10-11
+
+### Azure Machine Learning SDK for Python v1.35.0
++ **Bug fixes and improvements**
+ + **azureml-automl-core**
+ + Enable binary metrics calculation
+ + **azureml-contrib-fairness**
+ + Improve error message on failed dashboard download
+ + **azureml-core**
+ + Bug in specifying non-public clouds for Managed Identity authentication has been resolved.
+ + Dataset.File.upload_directory() and Dataset.Tabular.register_pandas_dataframe() experimental flags are now removed.
+ + Experimental flags are now removed in partition_by() method of TabularDataset class.
+ + **azureml-pipeline-steps**
+ + Experimental flags are now removed for the `partition_keys` parameter of the ParallelRunConfig class.
+ + **azureml-interpret**
+ + azureml-interpret package updated to interpret-community 0.20.*
+ + **azureml-mlflow**
+ + Made it possible to log artifacts and images with MLflow using subdirectories
+ + **azureml-responsibleai**
+ + Improve error message on failed dashboard download
+ + **azureml-train-automl-client**
+ + Added support for computer vision tasks such as Image Classification, Object Detection and Instance Segmentation. Detailed documentation can be found at: [Set up AutoML to train computer vision models with Python (v1)](how-to-auto-train-image-models-v1.md).
+ + Enable binary metrics calculation
+ + **azureml-train-automl-runtime**
+ + Add TCNForecaster support to model test runs.
+ + Update the model test predictions.csv output format. The output columns now include the original target values and the features which were passed in to the test run. This can be turned off by setting `test_include_predictions_only=True` in `AutoMLConfig` or by setting `include_predictions_only=True` in `ModelProxy.test()`. If the user has requested to only include predictions then the output format looks like (forecasting is the same as regression): Classification => [predicted values] [probabilities] Regression => [predicted values] else (default): Classification => [original test data labels] [predicted values] [probabilities] [features] Regression => [original test data labels] [predicted values] [features] The `[predicted values]` column name = `[label column name] + "_predicted"`. The `[probabilities]` column names = `[class name] + "_predicted_proba"`. If no target column was passed in as input to the test run, then `[original test data labels]` will not be in the output.
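+
+A sketch of requesting predictions-only output through the `ModelProxy.test()` flag named above; the best run and dataset are placeholders:
+
+```python
+from azureml.core import Dataset, Workspace
+from azureml.train.automl.model_proxy import ModelProxy
+
+ws = Workspace.from_config()
+test_dataset = Dataset.get_by_name(ws, "my-test-data")  # placeholder dataset
+
+# `best_run` is an illustrative best child run from an AutoML experiment.
+model_proxy = ModelProxy(best_run)
+predictions, metrics = model_proxy.test(test_dataset, include_predictions_only=True)
+```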
+
+## 2021-09-07
+
+### Azure Machine Learning SDK for Python v1.34.0
++ **Bug fixes and improvements**
+ + **azureml-automl-core**
+ + Added support for re-fitting a previously trained forecasting pipeline.
+ + Added ability to get predictions on the training data (in-sample prediction) for forecasting.
+ + **azureml-automl-runtime**
+ + Add support to return predicted probabilities from a deployed endpoint of an AutoML classifier model.
+ + Added a forecasting option for users to specify that all predictions should be integers.
+ + Removed the target column name from being part of model explanation feature names for local experiments with training_data_label_column_name as dataset inputs.
+ + Added support for re-fitting a previously trained forecasting pipeline.
+ + Added ability to get predictions on the training data (in-sample prediction) for forecasting.
+ + **azureml-core**
+ + Added support to set stream column type, mount and download stream columns in tabular dataset.
+ + New optional fields added to Kubernetes.attach_configuration(identity_type=None, identity_ids=None) which allow attaching KubernetesCompute with either SystemAssigned or UserAssigned identity. New identity fields will be included when calling print(compute_target) or compute_target.serialize(): identity_type, identity_id, principal_id, and tenant_id/client_id.
+ + **azureml-dataprep**
+ + Added support to set stream column type for tabular dataset. Added support to mount and download stream columns in tabular dataset.
+ + **azureml-defaults**
+ + The dependency `azureml-inference-server-http==0.3.1` has been added to `azureml-defaults`.
+ + **azureml-mlflow**
+ + Allow pagination of list_experiments API by adding `max_results` and `page_token` optional params. For documentation, see MLflow official docs.
+ + **azureml-sdk**
+ + Replaced dependency on deprecated package(azureml-train) inside azureml-sdk.
+ + Add azureml-responsibleai to azureml-sdk extras
+ + **azureml-train-automl-client**
+ + Expose the `test_data` and `test_size` parameters in `AutoMLConfig`. These parameters can be used to automatically start a test run after the model training phase has been completed. The test run computes predictions using the best model and generates metrics given these predictions (see the sketch after this list).
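+
+A sketch of the `test_size` parameter described above; the dataset and column names are placeholders, and these parameters are in preview:
+
+```python
+from azureml.core import Dataset, Workspace
+from azureml.train.automl import AutoMLConfig
+
+ws = Workspace.from_config()
+train_dataset = Dataset.get_by_name(ws, "my-training-data")  # placeholder dataset
+
+# Hold out 20% of the data for an automatic test run after training completes.
+automl_config = AutoMLConfig(
+    task="classification",
+    training_data=train_dataset,
+    label_column_name="label",  # placeholder column
+    test_size=0.2,              # or pass test_data=<a separate Dataset>
+)
+```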
+
+## 2021-08-24
+
+### Azure Machine Learning Experimentation User Interface
+ + **Run Delete**
+ + Run Delete is a new functionality that allows users to delete one or multiple runs from their workspace.
+ + This functionality can help users reduce storage costs and manage storage capacity by regularly deleting runs and experiments from the UI directly.
+ + **Batch Cancel Run**
+ + Batch Cancel Run is new functionality that allows users to select one or multiple runs to cancel from their run list.
+ + This functionality can help users cancel multiple queued runs and free up space on their cluster.
+
+## 2021-08-18
+
+### Azure Machine Learning Experimentation User Interface
+ + **Run Display Name**
+ + The Run Display Name is a new, editable and optional display name that can be assigned to a run.
+ + This name can help with more effectively tracking, organizing and discovering the runs.
+ + The Run Display Name is defaulted to an adjective_noun_guid format (Example: awesome_watch_2i3uns).
+ + This default name can be changed to a more descriptive one from the Run details page in the Azure Machine Learning studio user interface.
+
+## 2021-08-02
+
+### Azure Machine Learning SDK for Python v1.33.0
++ **Bug fixes and improvements**
+ + **azureml-automl-core**
+ + Improved error handling around XGBoost model retrieval.
+ + Added possibility to convert the predictions from float to integers for forecasting and regression tasks.
+ + Updated default value for enable_early_stopping in AutoMLConfig to True.
+ + **azureml-automl-runtime**
+ + Added possibility to convert the predictions from float to integers for forecasting and regression tasks.
+ + Updated default value for enable_early_stopping in AutoMLConfig to True.
+ + **azureml-contrib-automl-pipeline-steps**
+ + Hierarchical timeseries (HTS) is enabled for forecasting tasks through pipelines.
+ + Add Tabular dataset support for inferencing
+ + Custom path can be specified for the inference data
+ + **azureml-contrib-reinforcementlearning**
+ + Some properties in `azureml.core.environment.DockerSection` are deprecated, such as `shm_size` property used by Ray workers in reinforcement learning jobs. This property can now be specified in `azureml.contrib.train.rl.WorkerConfiguration` instead.
+ + **azureml-core**
+ + Fixed a hyperlink in `ScriptRunConfig.distributed_job_config` documentation
+ + Azure Machine Learning compute clusters can now be created in a location different from the location of the workspace. This is useful for maximizing idle capacity allocation and managing quota utilization across different locations without having to create more workspaces just to use quota and create a compute cluster in a particular location. For more information, see [Create an Azure Machine Learning compute cluster](how-to-create-attach-compute-cluster.md?tabs=python).
+ + Added display_name as a mutable name field of Run object.
+ + Dataset from_files now supports skipping of data extensions for large input data
+ + **azureml-dataprep**
+ + Fixed a bug where to_dask_dataframe would fail because of a race condition.
+ + Dataset from_files now supports skipping of data extensions for large input data
+ + **azureml-defaults**
+ + We are removing the dependency azureml-model-management-sdk==1.0.1b6.post1 from azureml-defaults.
+ + **azureml-interpret**
+ + updated azureml-interpret to interpret-community 0.19.*
+ + **azureml-pipeline-core**
+ + Hierarchical timeseries (HTS) is enabled for forecasting tasks through pipelines.
+ + **azureml-train-automl-client**
+ + Switch to using blob store for caching in Automated ML.
+ + Hierarchical timeseries (HTS) is enabled for forecasting tasks through pipelines.
+ + Improved error handling around XGBoost model retrieval.
+ + Updated default value for enable_early_stopping in AutoMLConfig to True.
+ + **azureml-train-automl-runtime**
+ + Switch to using blob store for caching in Automated ML.
+ + Hierarchical timeseries (HTS) is enabled for forecasting tasks through pipelines.
+ + Updated default value for enable_early_stopping in AutoMLConfig to True.
++
+## 2021-07-06
+
+### Azure Machine Learning SDK for Python v1.32.0
++ **Bug fixes and improvements**
+ + **azureml-core**
+ + Expose diagnose workspace health in SDK/CLI
+ + **azureml-defaults**
+ + Added `opencensus-ext-azure==1.0.8` dependency to azureml-defaults
+ + **azureml-pipeline-core**
+ + Updated the AutoMLStep to use prebuilt images when the environment for job submission matches the default environment
+ + **azureml-responsibleai**
+ + New error analysis client added to upload, download and list error analysis reports
+ + Ensure `raiwidgets` and `responsibleai` packages are version synchronized
+ + **azureml-train-automl-runtime**
+ + Set the time allocated to dynamically search across various featurization strategies to a maximum of one-fourth of the overall experiment timeout
++
+## 2021-06-21
+
+### Azure Machine Learning SDK for Python v1.31.0
++ **Bug fixes and improvements**
+ + **azureml-core**
+ + Improved documentation for platform property on Environment class
+ + Changed default AML Compute node scale down time from 120 seconds to 1800 seconds
+ + Updated default troubleshooting link displayed on the portal for troubleshooting failed runs to: https://aka.ms/azureml-run-troubleshooting
+ + **azureml-automl-runtime**
+ + Data Cleaning: Samples with target values in [None, "", "nan", np.nan] will be dropped prior to featurization and/or model training
+ + **azureml-interpret**
+ + Prevent flush task queue error on remote AzureML runs that use ExplanationClient by increasing timeout
+ + **azureml-pipeline-core**
+ + Add jar parameter to synapse step
+ + **azureml-train-automl-runtime**
+ + Fix high cardinality guardrails to be more aligned with docs
+
+## 2021-06-07
+
+### Azure Machine Learning SDK for Python v1.30.0
++ **Bug fixes and improvements**
+ + **azureml-core**
+ + Pin dependency `ruamel-yaml` to < 0.17.5 as a breaking change was released in 0.17.5.
+ + `aml_k8s_config` property is being replaced with `namespace`, `default_instance_type`, and `instance_types` parameters for `KubernetesCompute` attach.
+ + The workspace sync keys operation was changed to a long-running operation.
+ + **azureml-automl-runtime**
+ + Fixed problems where runs with big data may fail with `Elements of y_test cannot be NaN`.
+ + **azureml-mlflow**
+ + MLFlow deployment plugin bugfix for models with no signature.
+ + **azureml-pipeline-steps**
+ + ParallelRunConfig: update doc for process_count_per_node.
+ + **azureml-train-automl-runtime**
+ + Support for custom defined quantiles during MM inference
+ + Support for forecast_quantiles during batch inference.
+ + **azureml-contrib-automl-pipeline-steps**
+ + Support for custom defined quantiles during MM inference
+ + Support for forecast_quantiles during batch inference.
+
+## 2021-05-25
+
+### Announcing the CLI (v2) for Azure Machine Learning
+
+The `ml` extension to the Azure CLI is the next-generation interface for Azure Machine Learning. It enables you to train and deploy models from the command line, with features that accelerate scaling data science up and out while tracking the model lifecycle. [Install and set up the CLI (v2)](../how-to-configure-cli.md).
+
+### Azure Machine Learning SDK for Python v1.29.0
++ **Bug fixes and improvements**
+ + **Breaking changes**
+ + Dropped support for Python 3.5.
+ + **azureml-automl-runtime**
+ + Fixed a bug where the STLFeaturizer failed if the time-series length was shorter than the seasonality. This error manifested as an IndexError. The case is handled now without error, though the seasonal component of the STL will just consist of zeros in this case.
+ + **azureml-contrib-automl-dnn-vision**
+ + Added a method for batch inferencing with file paths.
+ + **azureml-contrib-gbdt**
+ + The azureml-contrib-gbdt package has been deprecated, might not receive future updates, and will eventually be removed from the distribution altogether.
+ + **azureml-core**
+ + Corrected explanation of parameter create_if_not_exists in Datastore.register_azure_blob_container.
+ + Added sample code to DatasetConsumptionConfig class.
+ + Added support for step as an alternative axis for scalar metric values in run.log()
+ + **azureml-dataprep**
+ + Limit partition size accepted in `_with_partition_size()` to 2GB
+ + **azureml-interpret**
+ + update azureml-interpret to the latest interpret-core package version
+ + Dropped support for SHAP DenseData, which has been deprecated in SHAP 0.36.0.
+ + Enable `ExplanationClient` to upload to a user specified datastore.
+ + **azureml-mlflow**
+ + Move azureml-mlflow to mlflow-skinny to reduce the dependency footprint while maintaining full plugin support
+ + **azureml-pipeline-core**
+ + PipelineParameter code sample is updated in the reference doc to use correct parameter.
++
+## 2021-05-10
+
+### Azure Machine Learning SDK for Python v1.28.0
++ **Bug fixes and improvements**
+ + **azureml-automl-runtime**
+ + Improved AutoML Scoring script to make it consistent with designer
+ + Patch bug where forecasting with the Prophet model would throw a "missing column" error if trained on an earlier version of the SDK.
+ + Added the ARIMAX model to the public-facing, forecasting-supported model lists of the AutoML SDK. Here, ARIMAX is a regression with ARIMA errors and a special case of the transfer function models developed by Box and Jenkins. For a discussion of how the two approaches are different, see [The ARIMAX model muddle](https://robjhyndman.com/hyndsight/arimax/). Unlike the rest of the multivariate models that use auto-generated, time-dependent features (hour of the day, day of the year, and so on) in AutoML, this model uses only features that are provided by the user, and it makes interpreting coefficients easy.
+ + **azureml-contrib-dataset**
+ + Updated documentation description with indication that libfuse should be installed while using mount.
+ + **azureml-core**
+ + Default CPU curated image is now mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04. Default GPU image is now mcr.microsoft.com/azureml/openmpi3.1.2-cuda10.2-cudnn8-ubuntu18.04
+ + Run.fail() is now deprecated; use Run.tag() to mark the run as failed, or use Run.cancel() to mark the run as canceled (see the sketch after this list).
+ + Updated documentation with a note that libfuse should be installed when mounting a file dataset.
+ + Add experimental register_dask_dataframe() support to tabular dataset.
+ + Support DatabricksStep with Azure Blob/ADL-S as inputs/outputs, and expose the parameter permit_cluster_restart to let customers decide whether AML can restart the cluster when the I/O access configuration needs to be added to the cluster
+ + **azureml-dataset-runtime**
+ + azureml-dataset-runtime now supports versions of pyarrow < 4.0.0
+ + **azureml-mlflow**
+ + Added support for deploying to AzureML via our MLFlow plugin.
+ + **azureml-pipeline-steps**
+ + Support DatabricksStep with Azure Blob/ADL-S as inputs/outputs, and expose the parameter permit_cluster_restart to let customers decide whether AML can restart the cluster when the I/O access configuration needs to be added to the cluster
+ + **azureml-synapse**
+ + Enable audience in msi authentication
+ + **azureml-train-automl-client**
+ + Added changed link for compute target doc
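+
+A sketch of the recommended replacements for the deprecated `Run.fail()`; the tag key and value are illustrative:
+
+```python
+from azureml.core import Run
+
+run = Run.get_context()
+
+# Instead of the deprecated run.fail():
+run.tag("failed", "data validation error")  # mark the run as failed
+# ...or cancel the run outright:
+# run.cancel()
+```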
++
+## 2021-04-19
+
+### Azure Machine Learning SDK for Python v1.27.0
++ **Bug fixes and improvements**
+ + **azureml-core**
+ + Added the ability to override the default timeout value for artifact uploading via the "AZUREML_ARTIFACTS_DEFAULT_TIMEOUT" environment variable.
+ + Fixed a bug where docker settings in Environment object on ScriptRunConfig are not respected.
+ + Allow partitioning a dataset when copying it to a destination.
+ + Added a custom mode to the OutputDatasetConfig to enable passing created Datasets in pipelines through a link function. These support enhancements made to enable Tabular Partitioning for PRS.
+ + Added a new KubernetesCompute compute type to azureml-core.
+ + **azureml-pipeline-core**
+ + Adding a custom mode to the OutputDatasetConfig and enabling a user to pass through created Datasets in pipelines through a link function. File path destinations support placeholders. These support the enhancements made to enable Tabular Partitioning for PRS.
+ + Addition of new KubernetesCompute compute type to azureml-core.
+ + **azureml-pipeline-steps**
+ + Addition of new KubernetesCompute compute type to azureml-core.
+ + **azureml-synapse**
+ + Update spark UI url in widget of azureml synapse
+ + **azureml-train-automl-client**
+ + The STL featurizer for the forecasting task now uses a more robust seasonality detection based on the frequency of the time series.
+ + **azureml-train-core**
+ + Fixed bug where docker settings in Environment object are not respected.
+ + Addition of new KubernetesCompute compute type to azureml-core.
++
+## 2021-04-05
+
+### Azure Machine Learning SDK for Python v1.26.0
++ **Bug fixes and improvements**
+ + **azureml-automl-core**
+ + Fixed an issue where Naive models would be recommended in AutoMLStep runs and fail with lag or rolling window features. These models will not be recommended when target lags or target rolling window size are set.
+ + Changed console output when submitting an AutoML run to show a portal link to the run.
+ + **azureml-core**
+ + Added HDFS mode in documentation.
+ + Added support to understand File Dataset partitions based on glob structure.
+ + Added support for update container registry associated with AzureML Workspace.
+ + Deprecated Environment attributes under the DockerSection - "enabled", "shared_volume" and "arguments" are a part of DockerConfiguration in RunConfiguration now.
+ + Updated Pipeline CLI clone documentation
+ + Updated portal URIs to include tenant for authentication
+ + Removed experiment name from run URIs to avoid redirects
+ + Updated experiment URI to use experiment ID.
+ + Bug fixes for attaching remote compute with AzureML CLI.
+ + **azureml-interpret**
+ + azureml-interpret updated to use interpret-community 0.17.0
+ + **azureml-opendatasets**
+ + Added type validation for the input start date and end date, with an error indication if they're not of datetime type.
+ + **azureml-parallel-run**
+ + [Experimental feature] Add `partition_keys` parameter to ParallelRunConfig; if specified, the input dataset(s) will be partitioned into mini-batches by the keys it specifies. It requires all input datasets to be partitioned datasets.
+ + **azureml-pipeline-steps**
+ + Bugfix - supporting path_on_compute while passing dataset configuration as download.
+ + Deprecate RScriptStep in favor of using CommandStep for running R scripts in pipelines.
+ + Deprecate EstimatorStep in favor of using CommandStep for running ML training (including distributed training) in pipelines.
+ + **azureml-sdk**
+ + Update python_requires to < 3.9 for azureml-sdk
+ + **azureml-train-automl-client**
+ + Changed console output when submitting an AutoML run to show a portal link to the run.
+ + **azureml-train-core**
+ + Deprecated DockerSection's 'enabled', 'shared_volume', and 'arguments' attributes in favor of using DockerConfiguration with ScriptRunConfig.
+ + Use Azure Open Datasets for MNIST dataset
+ + Hyperdrive error messages have been updated.
++
+## 2021-03-22
+
+### Azure Machine Learning SDK for Python v1.25.0
++ **Bug fixes and improvements**
+ + **azureml-automl-core**
+ + Changed console output when submitting an AutoML run to show a portal link to the run.
+ + **azureml-core**
+ + Started to support updating the container registry for a workspace in SDK and CLI
+ + Deprecated DockerSection's 'enabled', 'shared_volume', and 'arguments' attributes in favor of using DockerConfiguration with ScriptRunConfig.
+ + Updated Pipeline CLI clone documentation
+ + Updated portal URIs to include tenant for authentication
+ + Removed experiment name from run URIs to avoid redirects
+ + Updated experiment URI to use experiment ID.
+ + Bug fixes for attaching remote compute using az CLI
+ + Added support to understand File Dataset partitions based on glob structure.
+ + **azureml-interpret**
+ + azureml-interpret updated to use interpret-community 0.17.0
+ + **azureml-opendatasets**
+ + Added type validation for the input start date and end date, with an error indication if they're not of datetime type.
+ + **azureml-pipeline-core**
+ + Bugfix - supporting path_on_compute while passing dataset configuration as download.
+ + **azureml-pipeline-steps**
+ + Bugfix - supporting path_on_compute while passing dataset configuration as download.
+ + Deprecate RScriptStep in favor of using CommandStep for running R scripts in pipelines.
+ + Deprecate EstimatorStep in favor of using CommandStep for running ML training (including distributed training) in pipelines.
+ + **azureml-train-automl-runtime**
+ + Changed console output when submitting an AutoML run to show a portal link to the run.
+ + **azureml-train-core**
+ + Deprecated DockerSection's 'enabled', 'shared_volume', and 'arguments' attributes in favor of using DockerConfiguration with ScriptRunConfig.
+ + Use Azure Open Datasets for MNIST dataset
+ + Hyperdrive error messages have been updated.
++
+## 2021-03-31
+### Azure Machine Learning studio Notebooks Experience (March Update)
++ **New features**
+ + Render CSV/TSV. Users will be able to render a TSV/CSV file in a grid format for easier data analysis.
+ + SSO Authentication for Compute Instance. Users can now easily authenticate any new compute instances directly in the Notebook UI, making it easier to authenticate and use Azure SDKs directly in AzureML.
+ + Compute Instance Metrics. Users will be able to view compute metrics like CPU usage and memory via terminal.
+ + File Details. Users can now see file details including the last modified time, and file size by clicking the 3 dots beside a file.
++ **Bug fixes and improvements**
+ + Improved page load times.
+ + Improved performance.
+ + Improved speed and kernel reliability.
+ + Gain vertical real estate by permanently moving Notebook file pane up
+ + Links are now clickable in Terminal
+ + Improved Intellisense performance
++
+## 2021-03-08
+
+### Azure Machine Learning SDK for Python v1.24.0
++ **Bug fixes and improvements**
+ + **azureml-automl-core**
+ + Removed backwards compatible imports from `azureml.automl.core.shared`. Module not found errors in the `azureml.automl.core.shared` namespace can be resolved by importing from `azureml.automl.runtime.shared`.
+ + **azureml-contrib-automl-dnn-vision**
+ + Exposed object detection yolo model.
+ + **azureml-contrib-dataset**
+ + Added functionality to filter Tabular Datasets by column values and File Datasets by metadata.
+ + **azureml-contrib-fairness**
+ + Include JSON schema in wheel for `azureml-contrib-fairness`
+ + **azureml-contrib-mir**
+ + When show_output is set to True while deploying models, the inference configuration and deployment configuration are replayed before the request is sent to the server.
+ + **azureml-core**
+ + Added functionality to filter Tabular Datasets by column values and File Datasets by metadata.
+ + Previously, it was possible for users to create provisioning configurations for ComputeTargets that did not satisfy the password strength requirements for the `admin_user_password` field (i.e., that they must contain at least 3 of the following: 1 lowercase letter, 1 uppercase letter, 1 digit, and 1 special character from the following set: ``\`~!@#$%^&*()=+_[]{}|;:./'",<>?``). If the user created a configuration with a weak password and ran a job using that configuration, the job would fail at runtime. Now, the call to `AmlCompute.provisioning_configuration` will throw a `ComputeTargetException` with an accompanying error message explaining the password strength requirements.
+ + Additionally, it was also possible in some cases to specify a configuration with a negative number of maximum nodes. It is no longer possible to do this. Now, `AmlCompute.provisioning_configuration` will throw a `ComputeTargetException` if the `max_nodes` argument is a negative integer.
+ + When show_output is set to True while deploying models, the inference configuration and deployment configuration are displayed.
+ + When show_output is set to True while waiting for the completion of model deployment, the progress of the deployment operation is displayed.
+ + Allow customer specified AzureML auth config directory through environment variable: AZUREML_AUTH_CONFIG_DIR
+ + Previously, it was possible to create a provisioning configuration with the minimum node count greater than the maximum node count. The job would run but fail at runtime. This bug has now been fixed. If you now try to create a provisioning configuration with `min_nodes > max_nodes`, the SDK will raise a `ComputeTargetException` (see the sketch after this list).
+ + **azureml-interpret**
+ + fix explanation dashboard not showing aggregate feature importances for sparse engineered explanations
+ + optimized memory usage of ExplanationClient in azureml-interpret package
+ + **azureml-train-automl-client**
+ + Fixed show_output=False to return control to the user when running using spark.
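+
+A sketch of a provisioning configuration that satisfies the node-count validation described above; the VM size is illustrative:
+
+```python
+from azureml.core.compute import AmlCompute
+from azureml.exceptions import ComputeTargetException
+
+try:
+    # min_nodes must not exceed max_nodes, and max_nodes must be non-negative;
+    # otherwise provisioning_configuration raises ComputeTargetException.
+    config = AmlCompute.provisioning_configuration(
+        vm_size="STANDARD_DS3_V2",  # illustrative VM size
+        min_nodes=0,
+        max_nodes=4,
+    )
+except ComputeTargetException as ex:
+    print(ex)
+```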
+
+## 2021-02-28
+### Azure Machine Learning studio Notebooks Experience (February Update)
++ **New features**
+ + [Native Terminal (GA)](../how-to-access-terminal.md). Users will now have access to an integrated terminal as well as Git operation via the integrated terminal.
+ + Notebook Snippets (preview). Common Azure ML code excerpts are now available at your fingertips. Navigate to the code snippets panel, accessible via the toolbar, or activate the in-code snippets menu using Ctrl + Space.
+ + [Keyboard Shortcuts](../how-to-run-jupyter-notebooks.md#useful-keyboard-shortcuts). Full parity with keyboard shortcuts available in Jupyter.
+ + Indicate Cell parameters. Shows users which cells in a notebook are parameter cells and can run parameterized notebooks via [Papermill](https://github.com/nteract/papermill) on the Compute Instance.
+ + Terminal and Kernel session
+ + Sharing Button. Users can now share any file in the Notebook file explorer by right-clicking the file and using the share button.
++ **Bug fixes and improvements**
+ + Improved page load times
+ + Improved performance
+ + Improved speed and kernel reliability
+ + Added spinning wheel to show progress for all ongoing [Compute Instance operations](../how-to-run-jupyter-notebooks.md#status-indicators).
+ + Right click in File Explorer. Right-clicking any file will now open file operations.
++
+## 2021-02-16
+
+### Azure Machine Learning SDK for Python v1.23.0
++ **Bug fixes and improvements**
+ + **azureml-core**
+ + [Experimental feature] Add support to link synapse workspace into AML as a linked service
+ + [Experimental feature] Add support to attach synapse spark pool into AML as a compute
+ + [Experimental feature] Add support for identity based data access. Users can register a datastore or datasets without providing credentials. In that case, the user's Azure AD token or the managed identity of the compute target is used for authentication. To learn more, see [Connect to storage by using identity-based data access](./how-to-identity-based-data-access.md).
+ + **azureml-pipeline-steps**
+ + [Experimental feature] Add support for [SynapseSparkStep](/python/api/azureml-pipeline-steps/azureml.pipeline.steps.synapsesparkstep)
+ + **azureml-synapse**
+ + [Experimental feature] Added support for Spark magic to run an interactive session in a Synapse Spark pool.
++ **Bug fixes and improvements**
+ + **azureml-automl-runtime**
+ + In this update, we added Holt-Winters exponential smoothing to the forecasting toolbox of the AutoML SDK. Given a time series, the best model is selected by [AICc (Corrected Akaike's Information Criterion)](https://otexts.com/fpp3/selecting-predictors.html#selecting-predictors) and returned.
+ + AutoML will now generate two log files instead of one. Log statements will go to one or the other depending on which process the log statement was generated in.
+ + Remove unnecessary in-sample prediction during model training with cross-validations. This may decrease model training time in some cases, especially for time-series forecasting models.
+ + **azureml-contrib-fairness**
+ + Add a JSON schema for the dashboardDictionary uploads.
+ + **azureml-contrib-interpret**
+ + The azureml-contrib-interpret README was updated to reflect that the package will be removed in the next update, after being deprecated since October; use the azureml-interpret package instead.
+ + **azureml-core**
+ + Previously, it was possible to create a provisioning configuration with the minimum node count greater than the maximum node count. This has now been fixed. If you try to create a provisioning configuration with `min_nodes > max_nodes`, the SDK raises a `ComputeTargetException`.
+ + Fixed a bug in wait_for_completion in AmlCompute that caused the function to return control flow before the operation was actually complete.
+ + Run.fail() is now deprecated; use Run.tag() to mark a run as failed, or use Run.cancel() to mark the run as canceled.
+ + Show the error message 'Environment name expected str, {} found' when the provided environment name is not a string.
+ + **azureml-train-automl-client**
+ + Fixed a bug that prevented AutoML experiments performed on Azure Databricks clusters from being canceled.
++
+## 2021-02-09
+
+### Azure Machine Learning SDK for Python v1.22.0
++ **Bug fixes and improvements**
+ + **azureml-automl-core**
+ + Fixed bug where an extra pip dependency was added to the conda yml file for vision models.
+ + **azureml-automl-runtime**
+ + Fixed a bug where classical forecasting models (e.g. AutoArima) could receive training data wherein rows with imputed target values were not present. This violated the data contract of these models.
+ + Fixed various bugs with lag-by-occurrence behavior in the time-series lagging operator. Previously, the lag-by-occurrence operation did not mark all imputed rows correctly and so would not always generate the correct occurrence lag values. Also fixed some compatibility issues between the lag operator and the rolling window operator with lag-by-occurrence behavior. This previously resulted in the rolling window operator dropping some rows from the training data that it should otherwise use.
+ + **azureml-core**
+ + Adding support for Token Authentication by audience.
+ + Add `process_count` to [PyTorchConfiguration](/python/api/azureml-core/azureml.core.runconfig.pytorchconfiguration) to support multi-process multi-node PyTorch jobs.
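+ A minimal sketch (the compute target name, script, and process/node counts are assumptions) of configuring a multi-process, multi-node PyTorch job:
+ ```python
+ from azureml.core import Experiment, ScriptRunConfig, Workspace
+ from azureml.core.runconfig import PyTorchConfiguration
+
+ ws = Workspace.from_config()
+
+ # 2 nodes with 8 total worker processes (e.g., 4 GPUs per node).
+ distr_config = PyTorchConfiguration(process_count=8, node_count=2)
+
+ src = ScriptRunConfig(source_directory="./src",
+                       script="train.py",
+                       compute_target="gpu-cluster",        # assumed cluster name
+                       distributed_job_config=distr_config)
+ run = Experiment(ws, "pytorch-distributed").submit(src)
+ ```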
+ + **azureml-pipeline-steps**
+ + [CommandStep](/python/api/azureml-pipeline-steps/azureml.pipeline.steps.commandstep) now GA and no longer experimental.
+ + [ParallelRunConfig](/python/api/azureml-pipeline-steps/azureml.pipeline.steps.parallelrunconfig): added the arguments allowed_failed_count and allowed_failed_percent to check the error threshold at the mini-batch level. The error threshold now has 3 flavors:
+ + error_threshold - the number of allowed failed mini batch items;
+ + allowed_failed_count - the number of allowed failed mini batches;
+ + allowed_failed_percent - the percentage of allowed failed mini batches.
+
+ A job stops if it exceeds any of them. error_threshold is still required for backward compatibility; set the value to -1 to ignore it (a sketch follows below).
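+ A hedged sketch of the three thresholds together; the environment name, scripts, and compute target are illustrative:
+ ```python
+ from azureml.core import Environment, Workspace
+ from azureml.pipeline.steps import ParallelRunConfig
+
+ ws = Workspace.from_config()
+
+ parallel_run_config = ParallelRunConfig(
+     source_directory="./scripts",
+     entry_script="batch_score.py",          # assumed entry script
+     mini_batch_size="10",
+     error_threshold=-1,                     # -1 ignores the item-level threshold
+     allowed_failed_count=5,                 # at most 5 failed mini batches
+     allowed_failed_percent=10,              # or at most 10% failed mini batches
+     output_action="append_row",
+     environment=Environment.get(ws, "AzureML-Minimal"),  # assumed curated env
+     compute_target="cpu-cluster",           # assumed cluster name
+     node_count=2)
+ ```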
+ + Fixed whitespace handling in AutoMLStep name.
+ + ScriptRunConfig is now supported by HyperDriveStep
+ + **azureml-train-core**
+ + HyperDrive runs invoked from a ScriptRun will now be considered a child run.
+ + Add `process_count` to [PyTorchConfiguration](/python/api/azureml-core/azureml.core.runconfig.pytorchconfiguration) to support multi-process multi-node PyTorch jobs.
+ + **azureml-widgets**
+ + Add widget ParallelRunStepDetails to visualize status of a ParallelRunStep.
+ + Allows hyperdrive users to see an additional axis on the parallel coordinates chart that shows the metric value corresponding to each set of hyperparameters for each child run.
+
+## 2021-01-31
+### Azure Machine Learning studio Notebooks Experience (January Update)
++ **New features**
+ + Native Markdown Editor in AzureML. Users can now render and edit markdown files natively in AzureML Studio.
+ + [Run Button for Scripts (.py, .R and .sh)](../how-to-run-jupyter-notebooks.md#run-a-notebook-or-python-script). Users can now easily run Python, R, and Bash scripts in AzureML.
+ + [Variable Explorer](../how-to-run-jupyter-notebooks.md#explore-variables-in-the-notebook). Explore the contents of variables and data frames in a pop-up panel. Users can easily check data type, size, and contents.
+ + [Table of Contents](../how-to-run-jupyter-notebooks.md#navigate-with-a-toc). Navigate to sections of your notebook, indicated by Markdown headers.
+ + Export your Notebook as LaTeX/HTML/Py. Create easy-to-share notebook files by exporting to LaTeX, HTML, or .py.
+ + IntelliCode. ML-powered results provide an enhanced [intelligent autocompletion experience](/visualstudio/intellicode/overview).
++ **Bug fixes and improvements**
+ + Improved page load times
+ + Improved performance
+ + Improved speed and kernel reliability
+
+
+## 2021-01-25
+
+### Azure Machine Learning SDK for Python v1.21.0
++ **Bug fixes and improvements**
+ + **azure-cli-ml**
+ + Fixed CLI help text when using AmlCompute with UserAssigned Identity
+ + **azureml-contrib-automl-dnn-vision**
+ + Deploy and download buttons will become visible for AutoML vision runs, and models can be deployed or downloaded similar to other AutoML runs. There are two new files (scoring_file_v_1_0_0.py and conda_env_v_1_0_0.yml) which contain a script to run inferencing and a yml file to recreate the conda environment. The 'model.pth' file has also been renamed to use the '.pt' extension.
+ + **azureml-core**
+ + MSI support for azure-cli-ml
+ + User Assigned Managed Identity Support.
+ + With this change, customers can provide a user-assigned identity that is used to fetch the key from the customer key vault for encryption at rest.
+ + Fixed row_count=0 for the profile of very large files.
+ + Fixed an error in double conversion for delimited values with white space padding.
+ + Remove experimental flag for Output dataset GA
+ + Update documentation on how to fetch specific version of a Model
+ + Allow updating workspace for mixed mode access in case of private link
+ + Fix to remove additional registration on datastore for resume run feature
+ + Added CLI/SDK support for updating primary user assigned identity of workspace
+ + **azureml-interpret**
+ + Updated azureml-interpret to interpret-community 0.16.0.
+ + Memory optimizations for the explanation client in azureml-interpret.
+ + **azureml-train-automl-runtime**
+ + Enabled streaming for ADB runs
+ + **azureml-train-core**
+ + Fix to remove additional registration on datastore for resume run feature
+ + **azureml-widgets**
+ + Customers won't see changes to existing run data visualization using the widget, and the widget now supports conditional hyperparameters.
+ + The user run widget now includes a detailed explanation for why a run is in the queued state.
+
+## 2021-01-11
+
+### Azure Machine Learning SDK for Python v1.20.0
++ **Bug fixes and improvements**
+ + **azure-cli-ml**
+ + Added framework_version in OptimizationConfig. It's used when the model is registered with framework MULTI.
+ + **azureml-contrib-optimization**
+ + Added framework_version in OptimizationConfig. It's used when the model is registered with framework MULTI.
+ + **azureml-pipeline-steps**
+ + Introduced CommandStep, which takes a command to process. The command can include executables, shell commands, scripts, and so on (a sketch follows below).
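+ A minimal sketch, assuming a CommandStep that runs a shell command on a named compute target (the names are illustrative):
+ ```python
+ from azureml.core import Experiment, Workspace
+ from azureml.pipeline.core import Pipeline
+ from azureml.pipeline.steps import CommandStep
+
+ ws = Workspace.from_config()
+
+ # Run an arbitrary command as a pipeline step.
+ step = CommandStep(name="list-inputs",
+                    command="ls -l",                # any executable or shell command
+                    compute_target="cpu-cluster")   # assumed cluster name
+
+ pipeline = Pipeline(workspace=ws, steps=[step])
+ run = Experiment(ws, "commandstep-demo").submit(pipeline)
+ ```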
+ + **azureml-core**
+ + Workspace creation now supports user-assigned identity, adding UAI support to the SDK/CLI.
+ + Fixed issue on service.reload() to pick up changes on score.py in local deployment.
+ + `run.get_details()` has an extra field named "submittedBy" which displays the author's name for this run.
+ + Edited Model.register method documentation to mention how to register model from run directly
+ + Fixed IOT-Server connection status change handling issue.
+
+
+## 2020-12-31
+### Azure Machine Learning studio Notebooks Experience (December Update)
++ **New features**
+ + User Filename search. Users are now able to search all the files saved in a workspace.
+ + Markdown Side by Side support per Notebook Cell. In a notebook cell, users can now have the option to view rendered markdown and markdown syntax side-by-side.
+ + Cell Status Bar. The status bar indicates what state a code cell is in, whether a cell run was successful, and how long it took to run.
+
++ **Bug fixes and improvements**
+ + Improved page load times
+ + Improved performance
+ + Improved speed and kernel reliability
+
+
+## 2020-12-07
+
+### Azure Machine Learning SDK for Python v1.19.0
++ **Bug fixes and improvements**
+ + **azureml-automl-core**
+ + Added experimental support for test data to AutoMLStep.
+ + Added the initial core implementation of test set ingestion feature.
+ + Moved references to sklearn.externals.joblib to depend directly on joblib.
+ + Introduced a new AutoML task type of "image-instance-segmentation".
+ + **azureml-automl-runtime**
+ + Added the initial core implementation of test set ingestion feature.
+ + When all the strings in a text column have a length of exactly 1 character, the TfIdf word-gram featurizer won't work because its tokenizer ignores the strings with fewer than 2 characters. The current code change will allow AutoML to handle this use case.
+ + Introduced a new AutoML task type of "image-instance-segmentation".
+ + **azureml-contrib-automl-dnn-nlp**
+ + Initial PR for new dnn-nlp package
+ + **azureml-contrib-automl-dnn-vision**
+ + Introduced a new AutoML task type of "image-instance-segmentation".
+ + **azureml-contrib-automl-pipeline-steps**
+ + This new package is responsible for creating the steps required for the many models train/inference scenario.
+ + It also moves the train/inference code into the azureml.train.automl.runtime package so any future fixes are automatically available through curated environment releases.
+ + **azureml-contrib-dataset**
+ + Introduced a new AutoML task type of "image-instance-segmentation".
+ + **azureml-core**
+ + Added the initial core implementation of test set ingestion feature.
+ + Fixed the xref warnings for documentation in the azureml-core package.
+ + Doc string fixes for Command support feature in SDK
+ + Added the command property to RunConfiguration. The feature enables users to run an actual command or executables on the compute through the AzureML SDK (a hedged sketch follows below).
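+ A hedged sketch, assuming the command is surfaced through ScriptRunConfig's command argument (which populates the underlying RunConfiguration); the command and compute target are illustrative:
+ ```python
+ from azureml.core import Experiment, ScriptRunConfig, Workspace
+
+ ws = Workspace.from_config()
+
+ # Submit a command instead of a script plus arguments.
+ src = ScriptRunConfig(source_directory="./src",
+                       command=["python", "train.py", "--epochs", "10"],
+                       compute_target="cpu-cluster")
+ run = Experiment(ws, "command-run").submit(src)
+ ```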
+ + Users can delete an empty experiment given the ID of that experiment.
+ + **azureml-dataprep**
+ + Added dataset support for Spark built with Scala 2.12. This adds to the existing 2.11 support.
+ + **azureml-mlflow**
+ + AzureML-MLflow adds safeguards in remote scripts to avoid early termination of submitted runs.
+ + **azureml-pipeline-core**
+ + Fixed a bug in setting a default pipeline for pipeline endpoint created via UI
+ + **azureml-pipeline-steps**
+ + Added experimental support for test data to AutoMLStep.
+ + **azureml-tensorboard**
+ + Fixed the xref warnings for documentation in the azureml-core package.
+ + **azureml-train-automl-client**
+ + Added experimental support for test data to AutoMLStep.
+ + Added the initial core implementation of test set ingestion feature.
+ + Introduced a new AutoML task type of "image-instance-segmentation".
+ + **azureml-train-automl-runtime**
+ + Added the initial core implementation of test set ingestion feature.
+ + Fixed the computation of raw explanations for the best AutoML model when AutoML models are trained using the validation_size setting.
+ + Moved references to sklearn.externals.joblib to depend directly on joblib.
+ + **azureml-train-core**
+ + HyperDriveRun.get_children_sorted_by_primary_metric() should complete faster now
+ + Improved error handling in HyperDrive SDK.
+ + Deprecated all estimator classes in favor of using ScriptRunConfig to configure experiment runs (a migration sketch follows after this list). Deprecated classes include:
+ + MMLBaseEstimator
+ + Estimator
+ + PyTorch
+ + TensorFlow
+ + Chainer
+ + SKLearn
+ + Deprecated the use of Nccl and Gloo as valid input types for Estimator classes in favor of using PyTorchConfiguration with ScriptRunConfig.
+ + Deprecated the use of Mpi as a valid input type for Estimator classes in favor of using MpiConfiguration with ScriptRunConfig.
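+ A minimal migration sketch (the environment and compute names are assumptions), replacing a deprecated Estimator with ScriptRunConfig:
+ ```python
+ from azureml.core import Environment, Experiment, ScriptRunConfig, Workspace
+
+ ws = Workspace.from_config()
+
+ # Equivalent of Estimator(source_directory=..., entry_script=..., compute_target=...)
+ src = ScriptRunConfig(source_directory="./src",
+                       script="train.py",
+                       compute_target="cpu-cluster",                   # illustrative
+                       environment=Environment.get(ws, "AzureML-Minimal"))
+ run = Experiment(ws, "migrated-estimator").submit(src)
+ ```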
+ + Added the command property to RunConfiguration. The feature enables users to run an actual command or executables on the compute through the AzureML SDK.
+
+
+## 2020-11-30
+### Azure Machine Learning studio Notebooks Experience (November Update)
++ **New features**
+ + Native Terminal. Users will now have access to an integrated terminal as well as Git operations via the [integrated terminal](../how-to-access-terminal.md).
+ + Duplicate Folder
+ + Costing for Compute Drop Down
+ + Offline Compute Pylance
++ **Bug fixes and improvements**
+ + Improved page load times
+ + Improved performance
+ + Improved speed and kernel reliability
+ + Large File Upload. You can now upload files larger than 95 MB.
+
+## 2020-11-09
+
+### Azure Machine Learning SDK for Python v1.18.0
++ **Bug fixes and improvements**
+ + **azureml-automl-core**
+ + Improved handling of short time series by allowing them to be padded with Gaussian noise.
+ + **azureml-automl-runtime**
+ + Throw ConfigException if a DateTime column has OutOfBoundsDatetime value
+ + Improved handling of short time series by allowing them to be padded with Gaussian noise.
+ + Ensured that each text column can leverage char-gram transforms with the n-gram range based on the length of the strings in that text column.
+ + Provided raw feature explanations for the best model for AutoML experiments running on the user's local compute.
+ + **azureml-core**
+ + Pin the package: pyjwt to avoid pulling in breaking versions in upcoming releases.
+ + Creating an experiment returns the active or last archived experiment with that same given name if such an experiment exists, or a new experiment otherwise.
+ + Calling get_experiment by name returns the active or last archived experiment with that given name.
+ + Users can't rename an experiment while reactivating it.
+ + Improved error message to include potential fixes when a dataset is incorrectly passed to an experiment (e.g. ScriptRunConfig).
+ + Improved documentation for `OutputDatasetConfig.register_on_complete` to include the behavior of what will happen when the name already exists.
+ + Specifying dataset input and output names that have the potential to collide with common environment variables will now result in a warning
+ + Repurposed `grant_workspace_access` parameter when registering datastores. Set it to `True` to access data behind virtual network from Machine Learning studio.
+ [Learn more](../how-to-enable-studio-virtual-network.md)
+ + The linked service API is refined. Instead of providing a resource ID, there are 3 separate parameters defined in the configuration: sub_id, rg, and name.
+ + To enable customers to self-resolve token corruption issues, workspace token synchronization is now a public method.
+ + This change allows an empty string to be used as a value for a script_param
+ + **azureml-train-automl-client**
+ + Improved handling of short time series by allowing them to be padded with Gaussian noise.
+ + **azureml-train-automl-runtime**
+ + Throw ConfigException if a DateTime column has OutOfBoundsDatetime value
+ + Added support for providing raw feature explanations for best model for AutoML experiments running on user's local compute
+ + Improved handling of short time series by allowing them to be padded with Gaussian noise.
+ + **azureml-train-core**
+ + This change allows an empty string to be used as a value for a script_param
+ + **azureml-train-restclients-hyperdrive**
+ + README has been changed to offer more context
+ + **azureml-widgets**
+ + Add string support to charts/parallel-coordinates library for widget.
+
+## 2020-11-05
+
+### Data Labeling for image instance segmentation (polygon annotation) (preview)
+
+The image instance segmentation (polygon annotations) project type in data labeling is available now, so users can draw and annotate polygons around the contours of objects in images. Users can assign a class and a polygon to each object of interest within an image.
+
+Learn more about [image instance segmentation labeling](../how-to-label-data.md).
+
+## 2020-10-26
+
+### Azure Machine Learning SDK for Python v1.17.0
++ **New examples**
+ + A new community-driven repository of examples is available at https://github.com/Azure/azureml-examples
++ **Bug fixes and improvements**
+ + **azureml-automl-core**
+ + Fixed an issue where get_output may raise an XGBoostError.
+ + **azureml-automl-runtime**
+ + Time/calendar based features created by AutoML will now have the prefix.
+ + Fixed an IndexError occurring during training of StackEnsemble for classification datasets with large number of classes and subsampling enabled.
+ + Fixed an issue where VotingRegressor predictions may be inaccurate after refitting the model.
+ + **azureml-core**
+ + Additional detail added about relationship between AKS deployment configuration and Azure Kubernetes Service concepts.
+ + Environment client labels support. Users can label Environments and reference them by label.
+ + **azureml-dataprep**
+ + Better error message when using currently unsupported Spark with Scala 2.12.
+ + **azureml-explain-model**
+ + The azureml-explain-model package is officially deprecated
+ + **azureml-mlflow**
+ + Resolved a bug in mlflow.projects.run against azureml backend where Finalizing state was not handled properly.
+ + **azureml-pipeline-core**
+ + Added support to create, list, and get pipeline schedules based on a pipeline endpoint.
+ + Improved the documentation of PipelineData.as_dataset with an invalid usage example. Using PipelineData.as_dataset improperly now results in a ValueException being thrown.
+ + Changed the HyperDriveStep pipelines notebook to register the best model within a PipelineStep directly after the HyperDriveStep run.
+ + **azureml-pipeline-steps**
+ + Changed the HyperDriveStep pipelines notebook to register the best model within a PipelineStep directly after the HyperDriveStep run.
+ + **azureml-train-automl-client**
+ + Fixed an issue where get_output may raise an XGBoostError.
+
+### Azure Machine Learning studio Notebooks Experience (October Update)
++ **New features**
+ + [Full virtual network support](../how-to-enable-studio-virtual-network.md)
+ + [Focus Mode](../how-to-run-jupyter-notebooks.md#focus-mode)
+ + Save notebooks with Ctrl-S
+ + Line Numbers
++ **Bug fixes and improvements**
+ + Improvement in speed and kernel reliability
+ + Jupyter Widget UI updates
+
+## 2020-10-12
+
+### Azure Machine Learning SDK for Python v1.16.0
++ **Bug fixes and improvements**
+ + **azure-cli-ml**
+ + AKSWebservice and AKSEndpoints now support pod-level CPU and Memory resource limits. These optional limits can be used by setting `--cpu-cores-limit` and `--memory-gb-limit` flags in applicable CLI calls
+ + **azureml-core**
+ + Pin major versions of direct dependencies of azureml-core
+ + AKSWebservice and AKSEndpoints now support pod-level CPU and Memory resource limits. More information on [Kubernetes Resources and Limits](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#requests-and-limits)
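+ A hedged sketch of the SDK counterparts of these limits, assuming cpu_cores_limit and memory_gb_limit parameters on AksWebservice.deploy_configuration:
+ ```python
+ from azureml.core.webservice import AksWebservice
+
+ # Request 1 core / 1 GB, but cap the pod at 2 cores / 4 GB.
+ deploy_config = AksWebservice.deploy_configuration(
+     cpu_cores=1,
+     memory_gb=1,
+     cpu_cores_limit=2,    # pod-level CPU limit (assumed parameter name)
+     memory_gb_limit=4)    # pod-level memory limit (assumed parameter name)
+ ```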
+ + Updated run.log_table to allow individual rows to be logged.
+ + Added static method `Run.get(workspace, run_id)` to retrieve a run only using a workspace
+ + Added instance method `Workspace.get_run(run_id)` to retrieve a run within the workspace
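+ A minimal sketch of the two new retrieval methods; the run ID is a placeholder:
+ ```python
+ from azureml.core import Run, Workspace
+
+ ws = Workspace.from_config()
+ run_id = "example_run_id_123"   # placeholder run ID
+
+ run = Run.get(ws, run_id)       # static method, no experiment object needed
+ same_run = ws.get_run(run_id)   # instance method on the workspace
+ ```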
+ + Introduced the command property in run configuration, which enables users to submit a command instead of script & arguments.
+ + **azureml-interpret**
+ + Fixed the explanation client is_raw flag behavior in azureml-interpret.
+ + **azureml-sdk**
+ + `azureml-sdk` officially supports Python 3.8.
+ + **azureml-train-core**
+ + Adding TensorFlow 2.3 curated environment
+ + Introduced the command property in run configuration, which enables users to submit a command instead of script & arguments.
+ + **azureml-widgets**
+ + Redesigned interface for script run widget.
+
+## 2020-09-28
+
+### Azure Machine Learning SDK for Python v1.15.0
++ **Bug fixes and improvements**
+ + **azureml-contrib-interpret**
+ + The LIME explainer moved from azureml-contrib-interpret to the interpret-community package, and the image explainer was removed from the azureml-contrib-interpret package.
+ + The visualization dashboard was removed from the azureml-contrib-interpret package; the explanation client moved to the azureml-interpret package (and is deprecated in azureml-contrib-interpret), and notebooks were updated to reflect the improved API.
+ + Fixed PyPI package descriptions for azureml-interpret, azureml-explain-model, azureml-contrib-interpret, and azureml-tensorboard.
+ + **azureml-contrib-notebook**
+ + Pin the nbconvert dependency to < 6 so that papermill 1.x continues to work.
+ + **azureml-core**
+ + Added parameters to the TensorflowConfiguration and MpiConfiguration constructor to enable a more streamlined initialization of the class attributes without requiring the user to set each individual attribute. Added a PyTorchConfiguration class for configuring distributed PyTorch jobs in ScriptRunConfig.
+ + Pin the version of azure-mgmt-resource to fix the authentication error.
+ + Support Triton No Code Deploy
+ + Output directories specified in Run.start_logging() are now tracked when using the run in interactive scenarios. The tracked files are visible in ML Studio upon calling Run.complete().
+ + File encoding can now be specified during dataset creation with `Dataset.Tabular.from_delimited_files` and `Dataset.Tabular.from_json_lines_files` by passing the `encoding` argument. The supported encodings are 'utf8', 'iso88591', 'latin1', 'ascii', 'utf16', 'utf32', 'utf8bom' and 'windows1252'.
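+ A short sketch of the new argument; the file URL is illustrative:
+ ```python
+ from azureml.core import Dataset
+
+ # Read a Latin-1 encoded CSV file into a tabular dataset.
+ ds = Dataset.Tabular.from_delimited_files(
+     path="https://example.com/data/sales.csv",   # illustrative URL
+     encoding="iso88591")
+ ```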
+ + Bug fix when environment object is not passed to ScriptRunConfig constructor.
+ + Updated Run.cancel() to allow cancel of a local run from another machine.
+ + **azureml-dataprep**
+ + Fixed dataset mount timeout issues.
+ + **azureml-explain-model**
+ + Fixed PyPI package descriptions for azureml-interpret, azureml-explain-model, azureml-contrib-interpret, and azureml-tensorboard.
+ + **azureml-interpret**
+ + The visualization dashboard was removed from the azureml-contrib-interpret package; the explanation client moved to the azureml-interpret package (and is deprecated in azureml-contrib-interpret), and notebooks were updated to reflect the improved API.
+ + The azureml-interpret package was updated to depend on interpret-community 0.15.0.
+ + Fixed PyPI package descriptions for azureml-interpret, azureml-explain-model, azureml-contrib-interpret, and azureml-tensorboard.
+ + **azureml-pipeline-core**
+ + Fixed a pipeline issue with `OutputFileDatasetConfig` where the system may stop responding when `register_on_complete` is called with the `name` parameter set to a pre-existing dataset name.
+ + **azureml-pipeline-steps**
+ + Removed stale databricks notebooks.
+ + **azureml-tensorboard**
+ + Fixed PyPI package descriptions for azureml-interpret, azureml-explain-model, azureml-contrib-interpret, and azureml-tensorboard.
+ + **azureml-train-automl-runtime**
+ + The visualization dashboard was removed from the azureml-contrib-interpret package; the explanation client moved to the azureml-interpret package (and is deprecated in azureml-contrib-interpret), and notebooks were updated to reflect the improved API.
+ + **azureml-widgets**
+ + The visualization dashboard was removed from the azureml-contrib-interpret package; the explanation client moved to the azureml-interpret package (and is deprecated in azureml-contrib-interpret), and notebooks were updated to reflect the improved API.
+
+## 2020-09-21
+
+### Azure Machine Learning SDK for Python v1.14.0
++ **Bug fixes and improvements**
+ + **azure-cli-ml**
+ + Grid Profiling removed from the SDK and is no longer supported.
+ + **azureml-accel-models**
+ + azureml-accel-models package now supports TensorFlow 2.x
+ + **azureml-automl-core**
+ + Added error handling in get_output for cases when local versions of pandas/sklearn don't match the ones used during training
+ + **azureml-automl-runtime**
+ + Fixed a bug where AutoArima iterations would fail with a PredictionException and the message: "Silent failure occurred during prediction."
+ + **azureml-cli-common**
+ + Grid Profiling removed from the SDK and is no longer supported.
+ + **azureml-contrib-server**
+ + Update description of the package for pypi overview page.
+ + **azureml-core**
+ + Grid Profiling removed from the SDK and is no longer supported.
+ + Reduce number of error messages when workspace retrieval fails.
+ + Don't show warning when fetching metadata fails
+ + New Kusto Step and Kusto Compute Target.
+ + Update document for sku parameter. Remove sku in workspace update functionality in CLI and SDK.
+ + Update description of the package for pypi overview page.
+ + Updated documentation for AzureML Environments.
+ + Expose service managed resources settings for AML workspace in SDK.
+ + **azureml-dataprep**
+ + Enable execute permission on files for Dataset mount.
+ + **azureml-mlflow**
+ + Updated AzureML MLflow documentation and notebook samples
+ + New support for MLflow projects with AzureML backend
+ + MLflow model registry support
+ + Added Azure RBAC support for AzureML-MLflow operations
+
+ + **azureml-pipeline-core**
+ + Improved the documentation of the PipelineOutputFileDataset.parse_* methods.
+ + New Kusto Step and Kusto Compute Target.
+ + Provided the SwaggerUrl property for the pipeline-endpoint entity, via which users can see the schema definition for a published pipeline endpoint.
+ + **azureml-pipeline-steps**
+ + New Kusto Step and Kusto Compute Target.
+ + **azureml-telemetry**
+ + Update description of the package for pypi overview page.
+ + **azureml-train**
+ + Update description of the package for pypi overview page.
+ + **azureml-train-automl-client**
+ + Added error handling in get_output for cases when local versions of pandas/sklearn don't match the ones used during training
+ + **azureml-train-core**
+ + Update description of the package for pypi overview page.
+
+## 2020-08-31
+
+### Azure Machine Learning SDK for Python v1.13.0
++ **Preview features**
+ + **azureml-core**
+ With the new output datasets capability, you can write back to cloud storage, including Blob, ADLS Gen 1, ADLS Gen 2, and FileShare. You can configure where to output data, how to output data (via mount or upload), whether to register the output data for future reuse and sharing, and pass intermediate data between pipeline steps seamlessly. This enables reproducibility and sharing, prevents duplication of data, and results in cost efficiency and productivity gains. [Learn how to use it](/python/api/azureml-core/azureml.data.output_dataset_config.outputfiledatasetconfig)
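+ A hedged sketch of writing step output to a datastore and registering it; the datastore, names, and paths are illustrative:
+ ```python
+ from azureml.core import Datastore, Workspace
+ from azureml.data import OutputFileDatasetConfig
+
+ ws = Workspace.from_config()
+ datastore = Datastore.get(ws, "workspaceblobstore")
+
+ # Write the step's output folder to blob storage and register it for reuse.
+ output = OutputFileDatasetConfig(
+     name="processed_data",
+     destination=(datastore, "outputs/{run-id}")).register_on_complete(
+         name="processed_data")
+ ```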
+
++ **Bug fixes and improvements**
+ + **azureml-automl-core**
+ + Added validated_{platform}_requirements.txt file for pinning all pip dependencies for AutoML.
+ + This release supports models greater than 4 Gb.
+ + Upgraded AutoML dependencies: `scikit-learn` (now 0.22.1), `pandas` (now 0.25.1), `numpy` (now 1.18.2).
+ + **azureml-automl-runtime**
+ + Set horovod for text DNN to always use fp16 compression.
+ + This release supports models greater than 4 Gb.
+ + Fixed issue where AutoML fails with ImportError: cannot import name `RollingOriginValidator`.
+ + Upgraded AutoML dependencies: `scikit-learn` (now 0.22.1), `pandas` (now 0.25.1), `numpy` (now 1.18.2).
+ + **azureml-contrib-automl-dnn-forecasting**
+ + Upgraded AutoML dependencies: `scikit-learn` (now 0.22.1), `pandas` (now 0.25.1), `numpy` (now 1.18.2).
+ + **azureml-contrib-fairness**
+ + Provide a short description for azureml-contrib-fairness.
+ + **azureml-contrib-pipeline-steps**
+ + Added message indicating this package is deprecated and user should use azureml-pipeline-steps instead.
+ + **azureml-core**
+ + Added list key command for workspace.
+ + Add tags parameter in Workspace SDK and CLI.
+ + Fixed the bug where submitting a child run with Dataset will fail due to `TypeError: can't pickle _thread.RLock objects`.
+ + Adding page_count default/documentation for Model list().
+ + Modified the CLI & SDK to take the adbworkspace parameter and added the workspace ADB link/unlink runner.
+ + Fixed a bug in Dataset.update that caused the newest Dataset version to be updated instead of the version the update was called on.
+ + Fixed a bug in Dataset.get_by_name that showed the tags for the newest Dataset version even when a specific older version was retrieved.
+ + **azureml-interpret**
+ + Added probability outputs to shap scoring explainers in azureml-interpret based on shap_values_output parameter from original explainer.
+ + **azureml-pipeline-core**
+ + Improved `PipelineOutputAbstractDataset.register`'s documentation.
+ + **azureml-train-automl-client**
+ + Upgraded AutoML dependencies: `scikit-learn` (now 0.22.1), `pandas` (now 0.25.1), `numpy` (now 1.18.2).
+ + **azureml-train-automl-runtime**
+ + Upgraded AutoML dependencies: `scikit-learn` (now 0.22.1), `pandas` (now 0.25.1), `numpy` (now 1.18.2).
+ + **azureml-train-core**
+ + Users must now provide a valid hyperparameter_sampling arg when creating a HyperDriveConfig. In addition, the documentation for HyperDriveRunConfig has been edited to inform users of the deprecation of HyperDriveRunConfig.
+ + Reverting PyTorch Default Version to 1.4.
+ + Adding PyTorch 1.6 & TensorFlow 2.2 images and curated environment.
+
+### Azure Machine Learning studio Notebooks Experience (August Update)
++ **New features**
+ + New Getting started landing Page
+
++ **Preview features**
+ + Gather feature in Notebooks. With the [Gather](../how-to-run-jupyter-notebooks.md#clean-your-notebook-preview) feature, users can now easily clean up notebooks: Gather uses automated dependency analysis of your notebook to ensure the essential code is kept while any irrelevant pieces are removed.
++ **Bug fixes and improvements**
+ + Improvement in speed and reliability
+ + Dark mode bugs fixed
+ + Output Scroll Bugs fixed
+ + Sample Search now searches all the content of all the files in the Azure Machine Learning sample notebooks repo
+ + Multi-line R cells can now run
+ + "I trust contents of this file" is now auto checked after first time
+ + Improved Conflict resolution dialog, with new "Make a copy" option
+
+## 2020-08-17
+
+### Azure Machine Learning SDK for Python v1.12.0
++ **Bug fixes and improvements**
+ + **azure-cli-ml**
+ + Add image_name and image_label parameters to Model.package() to enable renaming the built package image.
+ + **azureml-automl-core**
+ + AutoML raises a new error code from dataprep when content is modified while being read.
+ + **azureml-automl-runtime**
+ + Added alerts for the user when data contains missing values but featurization is turned off.
+ + Fixed child run failures when data contains nan and featurization is turned off.
+ + AutoML raises a new error code from dataprep when content is modified while being read.
+ + Updated normalization for forecasting metrics to occur by grain.
+ + Improved calculation of forecast quantiles when lookback features are disabled.
+ + Fixed bool sparse matrix handling when computing explanations after AutoML.
+ + **azureml-core**
+ + A new method `run.get_detailed_status()` now shows the detailed explanation of current run status. It is currently only showing explanation for `Queued` status.
+ + Add image_name and image_label parameters to Model.package() to enable renaming the built package image.
+ + New method `set_pip_requirements()` to set the entire pip section in [`CondaDependencies`](/python/api/azureml-core/azureml.core.conda_dependencies.condadependencies) at once.
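+ A hedged sketch (the package pins are illustrative) of setting the whole pip section at once:
+ ```python
+ from azureml.core import Environment
+ from azureml.core.conda_dependencies import CondaDependencies
+
+ cd = CondaDependencies()
+ # Replace the entire pip section in one call.
+ cd.set_pip_requirements(["azureml-defaults", "scikit-learn==0.22.1", "pandas==0.25.1"])
+
+ env = Environment(name="my-pinned-env")     # hypothetical environment name
+ env.python.conda_dependencies = cd
+ ```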
+ + Enable registering credential-less ADLS Gen2 datastore.
+ + Improved error message when trying to download or mount an incorrect dataset type.
+ + Update time series dataset filter sample notebook with more examples of partition_timestamp that provides filter optimization.
+ + Change the sdk and CLI to accept subscriptionId, resourceGroup, workspaceName, peConnectionName as parameters instead of ArmResourceId when deleting private endpoint connection.
+ + Experimental Decorator shows class name for easier identification.
+ + Descriptions for the Assets inside of Models are no longer automatically generated based on a Run.
+ + **azureml-datadrift**
+ + Mark create_from_model API in DataDriftDetector as to be deprecated.
+ + **azureml-dataprep**
+ + Improved error message when trying to download or mount an incorrect dataset type.
+ + **azureml-pipeline-core**
+ + Fixed bug when deserializing pipeline graph that contains registered datasets.
+ + **azureml-pipeline-steps**
+ + RScriptStep supports RSection from azureml.core.environment.
+ + Removed the passthru_automl_config parameter from the `AutoMLStep` public API and converted it to an internal only parameter.
+ + **azureml-train-automl-client**
+ + Removed local asynchronous, managed environment runs from AutoML. All local runs will run in the environment the run was launched from.
+ + Fixed snapshot issues when submitting AutoML runs with no user-provided scripts.
+ + Fixed child run failures when data contains nan and featurization is turned off.
+ + **azureml-train-automl-runtime**
+ + AutoML raises a new error code from dataprep when content is modified while being read.
+ + Fixed snapshot issues when submitting AutoML runs with no user-provided scripts.
+ + Fixed child run failures when data contains nan and featurization is turned off.
+ + **azureml-train-core**
+ + Added support for specifying pip options (for example --extra-index-url) in the pip requirements file passed to an [`Estimator`](/python/api/azureml-train-core/azureml.train.estimator.estimator) through `pip_requirements_file` parameter.
+
+## 2020-08-03
+
+### Azure Machine Learning SDK for Python v1.11.0
++ **Bug fixes and improvements**
+ + **azure-cli-ml**
+ + Fixed model framework and model framework version not being passed in the run object in the CLI model registration path.
+ + Fixed the CLI amlcompute identity show command to show tenant ID and principal ID.
+ + **azureml-train-automl-client**
+ + Added get_best_child() to AutoMLRun for fetching the best child run for an AutoML run without downloading the associated model.
+ + Added a ModelProxy object that allows predict or forecast to be run on a remote training environment without downloading the model locally.
+ + Unhandled exceptions in AutoML now point to a known-issues HTTP page, where more information about the errors can be found.
+ + **azureml-core**
+ + Model names can now be up to 255 characters long.
+ + Environment.get_image_details() return object type changed. `DockerImageDetails` class replaced `dict`, image details are available from the new class properties. Changes are backward compatible.
+ + Fix bug for Environment.from_pip_requirements() to preserve dependencies structure
+ + Fixed a bug where log_list would fail if an int and double were included in the same list.
+ + While enabling private link on an existing workspace, note that if there are compute targets associated with the workspace, those targets won't work if they are not behind the same virtual network as the workspace private endpoint.
+ + Made `as_named_input` optional when using datasets in experiments and added `as_mount` and `as_download` to `FileDataset`. The input name is automatically generated if `as_mount` or `as_download` is called.
+ + **azureml-automl-core**
+ + Unhandled exceptions in AutoML now point to a known issues HTTP page, where more information about the errors can be found.
+ + Added get_best_child() to AutoMLRun for fetching the best child run for an AutoML run without downloading the associated model.
+ + Added ModelProxy object that allows predict or forecast to be run on a remote training environment without downloading the model locally.
+ + **azureml-pipeline-steps**
+ + Added `enable_default_model_output` and `enable_default_metrics_output` flags to `AutoMLStep`. These flags can be used to enable/disable the default outputs.
+
+## 2020-07-20
+
+### Azure Machine Learning SDK for Python v1.10.0
++ **Bug fixes and improvements**
+ + **azureml-automl-core**
+ + When using AutoML, if a path is passed into the AutoMLConfig object and it does not already exist, it will be automatically created.
+ + Users can now specify a time series frequency for forecasting tasks by using the `freq` parameter.
+ + **azureml-automl-runtime**
+ + When using AutoML, if a path is passed into the AutoMLConfig object and it does not already exist, it will be automatically created.
+ + Users can now specify a time series frequency for forecasting tasks by using the `freq` parameter.
+ + AutoML Forecasting now supports rolling evaluation, which applies to the use case where the length of a test or validation set is longer than the input horizon, and the known y_pred value is used as forecasting context.
+ + **azureml-core**
+ + Warning messages will be printed if no files were downloaded from the datastore in a run.
+ + Added documentation for `skip_validation` to the `Datastore.register_azure_sql_database method`.
+ + Users are required to upgrade to sdk v1.10.0 or above to create an auto approved private endpoint. This includes the Notebook resource that is usable behind the VNet.
+ + Expose NotebookInfo in the response of get workspace.
+ + Changes to have calls to list compute targets and get a compute target succeed on a remote run. SDK functions to get a compute target and list workspace compute targets now work in remote runs.
+ + Add deprecation messages to the class descriptions for azureml.core.image classes.
+ + Throw exception and clean up workspace and dependent resources if workspace private endpoint creation fails.
+ + Support workspace sku upgrade in workspace update method.
+ + **azureml-datadrift**
+ + Update matplotlib version from 3.0.2 to 3.2.1 to support Python 3.8.
+ + **azureml-dataprep**
+ + Added support of web url data sources with `Range` or `Head` request.
+ + Improved stability for file dataset mount and download.
+ + **azureml-train-automl-client**
+ + Fixed issues related to removal of `RequirementParseError` from setuptools.
+ + Use docker instead of conda for local runs submitted using "compute_target='local'"
+ + The iteration duration printed to the console has been corrected. Previously, the iteration duration was sometimes printed as run end time minus run creation time. It has been corrected to equal run end time minus run start time.
+ + When using AutoML, if a path is passed into the AutoMLConfig object and it does not already exist, it will be automatically created.
+ + Users can now specify a time series frequency for forecasting tasks by using the `freq` parameter.
+ + **azureml-train-automl-runtime**
+ + Improved console output when best model explanations fail.
+ + Renamed input parameter to "blocked_models" to remove a sensitive term.
+ + Renamed input parameter to "allowed_models" to remove a sensitive term.
+ + Users can now specify a time series frequency for forecasting tasks by using the `freq` parameter.
+
+
+## 2020-07-06
+
+### Azure Machine Learning SDK for Python v1.9.0
++ **Bug fixes and improvements**
+ + **azureml-automl-core**
+ + Replaced get_model_path() with AZUREML_MODEL_DIR environment variable in AutoML autogenerated scoring script. Also added telemetry to track failures during init().
+ + Removed the ability to specify `enable_cache` as part of AutoMLConfig
+ + Fixed a bug where runs may fail with service errors during specific forecasting runs
+ + Improved error handling around specific models during `get_output`
+ + Fixed call to fitted_model.fit(X, y) for classification with y transformer
+ + Enabled customized forward fill imputer for forecasting tasks
+ + A new ForecastingParameters class will be used instead of forecasting parameters in a dict format
+ + Improved target lag autodetection
+ + Added limited availability of multi-noded, multi-gpu distributed featurization with BERT
+ + **azureml-automl-runtime**
+ + Prophet now does additive seasonality modeling instead of multiplicative.
+ + Fixed the issue where short grains with frequencies different from those of the long grains would result in failed runs.
+ + **azureml-contrib-automl-dnn-vision**
+ + Collect system/gpu stats and log averages for training and scoring
+ + **azureml-contrib-mir**
+ + Added support for enable-app-insights flag in ManagedInferencing
+ + **azureml-core**
+ + Added a validate parameter to these APIs, allowing validation to be skipped when the data source is not accessible from the current compute (a sketch follows after this list):
+ + TabularDataset.time_before(end_time, include_boundary=True, validate=True)
+ + TabularDataset.time_after(start_time, include_boundary=True, validate=True)
+ + TabularDataset.time_recent(time_delta, include_boundary=True, validate=True)
+ + TabularDataset.time_between(start_time, end_time, include_boundary=True, validate=True)
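+ A hedged sketch, assuming a registered tabular dataset with a timestamp column already assigned; the dataset name and date are illustrative:
+ ```python
+ from datetime import datetime
+ from azureml.core import Dataset, Workspace
+
+ ws = Workspace.from_config()
+ ds = Dataset.get_by_name(ws, "sensor-readings")   # illustrative dataset name
+
+ # Skip validation when the data source isn't reachable from the current compute.
+ recent = ds.time_after(datetime(2020, 6, 1), include_boundary=True, validate=False)
+ ```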
+ + Added framework filtering support for model list, and added NCD AutoML sample in notebook back
+ + For Datastore.register_azure_blob_container and Datastore.register_azure_file_share (only options that support SAS token), we have updated the doc strings for the `sas_token` field to include minimum permissions requirements for typical read and write scenarios.
+ + Deprecating _with_auth param in ws.get_mlflow_tracking_uri()
+ + **azureml-mlflow**
+ + Add support for deploying local file:// models with AzureML-MLflow
+ + Deprecating _with_auth param in ws.get_mlflow_tracking_uri()
+ + **azureml-opendatasets**
+ + Recently published Covid-19 tracking datasets are now available with the SDK
+ + **azureml-pipeline-core**
+ + Log a warning when "azureml-defaults" is not included as part of the pip dependencies.
+ + Improve Note rendering.
+ + Added support for quoted line breaks when parsing delimited files to PipelineOutputFileDataset.
+ + The PipelineDataset class is deprecated. For more information, see https://aka.ms/dataset-deprecation. Learn how to use dataset with pipeline, see https://aka.ms/pipeline-with-dataset.
+ + **azureml-pipeline-steps**
+ + Doc updates to azureml-pipeline-steps.
+ + Added support in ParallelRunConfig's `load_yaml()` for users to define Environments inline with the rest of the config or in a separate file
+ + **azureml-train-automl-client**
+ + Removed the ability to specify `enable_cache` as part of AutoMLConfig
+ + **azureml-train-automl-runtime**
+ + Added limited availability of multi-noded, multi-gpu distributed featurization with BERT.
+ + Added error handling for incompatible packages in ADB based automated machine learning runs.
+ + **azureml-widgets**
+ + Doc updates to azureml-widgets.
+
+
+## 2020-06-22
+
+### Azure Machine Learning SDK for Python v1.8.0
+
+ + **Preview features**
+ + **azureml-contrib-fairness**
+ The `azureml-contrib-fairness` package provides integration between the open-source fairness assessment and unfairness mitigation package [Fairlearn](https://fairlearn.github.io) and Azure Machine Learning studio. In particular, the package enables model fairness evaluation dashboards to be uploaded as part of an AzureML Run and appear in Azure Machine Learning studio.
++ **Bug fixes and improvements**
+ + **azure-cli-ml**
+ + Support getting logs of init container.
+ + Added new CLI commands to manage ComputeInstance
+ + **azureml-automl-core**
+ + Users are now able to enable stack ensemble iteration for Time series tasks with a warning that it could potentially overfit.
+ + Added a new type of user exception that is raised if the cache store contents have been tampered with
+ + **azureml-automl-runtime**
+ + Class balancing sweeping will no longer be enabled if the user disables featurization.
+ + **azureml-contrib-notebook**
+ + Doc improvements to azureml-contrib-notebook package.
+ + **azureml-contrib-pipeline-steps**
+ + Doc improvements to the azureml-contrib-pipeline-steps package.
+ + **azureml-core**
+ + Add set_connection, get_connection, list_connections, delete_connection functions for customer to operate on workspace connection resource
+ + Documentation updates to the azureml-core/azureml.exceptions package.
+ + Documentation updates to azureml-core package.
+ + Doc updates to ComputeInstance class.
+ + Doc improvements to azureml-core/azureml.core.compute package.
+ + Doc improvements for webservice-related classes in azureml-core.
+ + Support user-selected datastore to store profiling data
+ + Added expand and page_count property for model list API
+ + Fixed a bug where removing the overwrite property caused the submitted run to fail with a deserialization error.
+ + Fixed inconsistent folder structure when downloading or mounting a FileDataset referencing a single file.
+ + Loading a dataset of Parquet files with to_spark_dataframe is now faster and supports all Parquet and Spark SQL data types.
+ + Support getting logs of init container.
+ + AutoML runs are now marked as child run of Parallel Run Step.
+ + **azureml-datadrift**
+ + Doc improvements to azureml-contrib-notebook package.
+ + **azureml-dataprep**
+ + Loading a dataset of Parquet files with to_spark_dataframe is now faster and supports all Parquet and Spark SQL data types.
+ + Better memory handling for the OutOfMemory issue in to_pandas_dataframe.
+ + **azureml-interpret**
+ + Upgraded azureml-interpret to use interpret-community version 0.12.*
+ + **azureml-mlflow**
+ + Doc improvements to azureml-mlflow.
+ + Adds support for AML model registry with MLFlow.
+ + **azureml-opendatasets**
+ + Added support for Python 3.8
+ + **azureml-pipeline-core**
+ + Updated `PipelineDataset`'s documentation to make it clear it is an internal class.
+ + ParallelRunStep updates to accept multiple values for one argument, for example: "--group_column_names", "Col1", "Col2", "Col3"
+ + Removed the passthru_automl_config requirement for intermediate data usage with AutoMLStep in Pipelines.
+ + **azureml-pipeline-steps**
+ + Doc improvements to azureml-pipeline-steps package.
+ + Removed the passthru_automl_config requirement for intermediate data usage with AutoMLStep in Pipelines.
+ + **azureml-telemetry**
+ + Doc improvements to azureml-telemetry.
+ + **azureml-train-automl-client**
+ + Fixed a bug where `experiment.submit()` called twice on an `AutoMLConfig` object resulted in different behavior.
+ + Users are now able to enable stack ensemble iteration for Time series tasks with a warning that it could potentially overfit.
+ + Changed AutoML run behavior to raise UserErrorException if service throws user error
+ + Fixes a bug that caused azureml_automl.log to not get generated or be missing logs when performing an AutoML experiment on a remote compute target.
+ + For Classification data sets with imbalanced classes, we will apply Weight Balancing, if the feature sweeper determines that for subsampled data, Weight Balancing improves the performance of the classification task by a certain threshold.
+ + AutoML runs are now marked as child run of Parallel Run Step.
+ + **azureml-train-automl-runtime**
+ + Changed AutoML run behavior to raise UserErrorException if service throws user error
+ + AutoML runs are now marked as child run of Parallel Run Step.
+
+
+## 2020-06-08
+
+### Azure Machine Learning SDK for Python v1.7.0
++ **Bug fixes and improvements**
+ + **azure-cli-ml**
+ + Completed the removal of model profiling from mir contrib by cleaning up CLI commands and package dependencies. Model profiling is available in core.
+ + Upgrades the min Azure CLI version to 2.3.0
+ + **azureml-automl-core**
+ + Better exception message on featurization step fit_transform() failures due to custom transformer parameters.
+ + Add support for multiple languages for deep learning transformer models such as BERT in automated ML.
+ + Remove deprecated lag_length parameter from documentation.
+ + The forecasting parameters documentation was improved. The lag_length parameter was deprecated.
+ + **azureml-automl-runtime**
+ + Fixed the error raised when one of the categorical columns is empty at forecast/test time.
+ + Fixed run failures that happened when lookback features were enabled and the data contained short grains.
+ + Fixed the issue with duplicated time index error message when lags or rolling windows were set to 'auto'.
+ + Fixed the issue with Prophet and Arima models on data sets containing the lookback features.
+ + Added support of dates before 1677-09-21 or after 2262-04-11 in columns other than date time in the forecasting tasks. Improved error messages.
+ + The forecasting parameters documentation was improved. The lag_length parameter was deprecated.
+ + Better exception message on featurization step fit_transform() due to custom transformer parameters.
+ + Add support for multiple languages for deep learning transformer models such as BERT in automated ML.
+ + Cache operations that result in some OSErrors will raise user error.
+ + Added checks to ensure training and validation data have the same number and set of columns
+ + Fixed issue with the autogenerated AutoML scoring script when the data contains quotation marks
+ + Enabling explanations for AutoML Prophet and ensembled models that contain Prophet model.
+ + A recent customer issue revealed a live-site bug wherein messages about class-balancing sweeping were logged even when the class balancing logic wasn't properly enabled. Those logs/messages have been removed.
+ + **azureml-cli-common**
+ + Completed the removal of model profiling from mir contrib by cleaning up CLI commands and package dependencies. Model profiling is available in core.
+ + **azureml-contrib-reinforcementlearning**
+ + Load testing tool
+ + **azureml-core**
+ + Documentation changes on Script_run_config.py
+ + Fixes a bug with printing the output of run submit-pipeline CLI
+ + Documentation improvements to azureml-core/azureml.data
+ + Fixes issue retrieving storage account using hdfs getconf command
+ + Improved register_azure_blob_container and register_azure_file_share documentation
+ + **azureml-datadrift**
+ + Improved implementation for disabling and enabling dataset drift monitors
+ + **azureml-interpret**
+ + In explanation client, remove NaNs or Infs prior to json serialization on upload from artifacts
+ + Update to latest version of interpret-community to improve out of memory errors for global explanations with many features and classes
+ + Add true_ys optional parameter to explanation upload to enable additional features in the studio UI
+ + Improve download_model_explanations() and list_model_explanations() performance
+ + Small tweaks to notebooks, to aid with debugging
+ + **azureml-opendatasets**
+ + azureml-opendatasets needs azureml-dataprep version 1.4.0 or higher. Added warning if lower version is detected
+ + **azureml-pipeline-core**
+ + This change allows the user to provide an optional runconfig to the moduleVersion when calling module.Publish_python_script.
+ + Enabled node count to be a pipeline parameter in ParallelRunStep in azureml.pipeline.steps.
+ + **azureml-pipeline-steps**
+ + This change allows the user to provide an optional runconfig to the moduleVersion when calling module.Publish_python_script.
+ + **azureml-train-automl-client**
+ + Add support for multiple languages for deep learning transformer models such as BERT in automated ML.
+ + Remove deprecated lag_length parameter from documentation.
+ + The forecasting parameters documentation was improved. The lag_length parameter was deprecated.
+ + **azureml-train-automl-runtime**
+ + Enabling explanations for AutoML Prophet and ensembled models that contain Prophet model.
+ + Documentation updates to azureml-train-automl-* packages.
+ + **azureml-train-core**
+ + Supporting TensorFlow version 2.1 in the PyTorch Estimator
+ + Improvements to azureml-train-core package.
+
+## 2020-05-26
+
+### Azure Machine Learning SDK for Python v1.6.0
++ **New features**
+ + **azureml-automl-runtime**
+ + AutoML Forecasting now supports forecasts beyond the pre-specified max horizon without retraining the model. When the forecast destination is farther into the future than the specified maximum horizon, the forecast() function still makes point predictions out to the later date using a recursive operation mode. For an illustration of the new feature, see the "Forecasting farther than the maximum horizon" section of the "forecasting-forecast-function" notebook in this [folder](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning).
+
+ + **azureml-pipeline-steps**
+ + ParallelRunStep is now released and is part of **azureml-pipeline-steps** package. Existing ParallelRunStep in **azureml-contrib-pipeline-steps** package is deprecated. Changes from public preview version:
+ + Added `run_max_try` optional configurable parameter to control max call to run method for any given batch, default value is 3.
+ + No PipelineParameters are autogenerated anymore. The following configurable values can be set as PipelineParameter explicitly:
+ + mini_batch_size
+ + node_count
+ + process_count_per_node
+ + logging_level
+ + run_invocation_timeout
+ + run_max_try
+ + The default value for process_count_per_node is changed to 1. Users should tune this value for better performance; the best practice is to set it to the number of GPUs or CPUs the node has.
+ + ParallelRunStep does not inject any packages; users need to include the **azureml-core** and **azureml-dataprep[pandas, fuse]** packages in the environment definition. If a custom Docker image is used with user_managed_dependencies, then conda needs to be installed on the image.
+
++ **Breaking changes**
+ + **azureml-pipeline-steps**
+ + Deprecated the use of azureml.dprep.Dataflow as a valid type of input for AutoMLConfig
+ + **azureml-train-automl-client**
+ + Deprecated the use of azureml.dprep.Dataflow as a valid type of input for AutoMLConfig
++ **Bug fixes and improvements**
+ + **azureml-automl-core**
+ + Fixed the bug where a warning may be printed during `get_output` that asked user to downgrade client.
+ + Updated Mac to rely on cudatoolkit=9.0 as it is not available at version 10 yet.
+ + Removing restrictions on prophet and xgboost models when trained on remote compute.
+ + Improved logging in AutoML
+ + The error handling for custom featurization in forecasting tasks was improved.
+ + Added functionality to allow users to include lagged features to generate forecasts.
+ + Updates to error message to correctly display user error.
+ + Support for cv_split_column_names to be used with training_data
+ + Update logging the exception message and traceback.
+ + **azureml-automl-runtime**
+ + Enable guardrails for forecasting missing value imputations.
+ + Improved logging in AutoML
+ + Added fine grained error handling for data prep exceptions
+ + Removing restrictions on prophet and xgboost models when trained on remote compute.
+ + `azureml-train-automl-runtime` and `azureml-automl-runtime` have updated dependencies for `pytorch`, `scipy`, and `cudatoolkit`. We now support `pytorch==1.4.0`, `scipy>=1.0.0,<=1.3.1`, and `cudatoolkit==10.1.243`.
+ + The error handling for custom featurization in forecasting tasks was improved.
+ + The forecasting data set frequency detection mechanism was improved.
+ + Fixed issue with Prophet model training on some data sets.
+ + The auto detection of max horizon during the forecasting was improved.
+ + Added functionality to allow users to include lagged features to generate forecasts.
+ + Adds functionality in the forecast function to enable providing forecasts beyond the trained horizon without retraining the forecasting model.
+ + Support for cv_split_column_names to be used with training_data
+ + **azureml-contrib-automl-dnn-forecasting**
+ + Improved logging in AutoML
+ + **azureml-contrib-mir**
+ + Added support for Windows services in ManagedInferencing
+ + Removed old MIR workflows such as attach MIR compute and the SingleModelMirWebservice class, and cleaned out model profiling placed in the contrib-mir package.
+ + **azureml-contrib-pipeline-steps**
+ + Minor fix for YAML support
+ + ParallelRunStep is released to General Availability; azureml.contrib.pipeline.steps has a deprecation notice and is moved to azureml.pipeline.steps.
+ + **azureml-contrib-reinforcementlearning**
+ + RL Load testing tool
+ + RL estimator has smart defaults
+ + **azureml-core**
+ + Removed old MIR workflows such as attach MIR compute and the SingleModelMirWebservice class, and cleaned out model profiling placed in the contrib-mir package.
+ + Fixed the information provided to the user in case of profiling failure: included request ID and reworded the message to be more meaningful. Added new profiling workflow to profiling runners
+ + Improved error text in case of Dataset execution failures.
+ + Workspace private link CLI support added.
+ + Added an optional parameter `invalid_lines` to `Dataset.Tabular.from_json_lines_files` that allows for specifying how to handle lines that contain invalid JSON.
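+ A hedged sketch of the new parameter; the file URL and the 'drop' value are assumptions based on the description above:
+ ```python
+ from azureml.core import Dataset
+
+ # Skip lines that contain invalid JSON instead of failing the load.
+ ds = Dataset.Tabular.from_json_lines_files(
+     path="https://example.com/data/records.jsonl",   # illustrative URL
+     invalid_lines="drop")                            # assumed option value
+ ```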
+ + We will be deprecating the run-based creation of compute in the next release. We recommend creating an actual AmlCompute cluster as a persistent compute target, and using the cluster name as the compute target in your run configuration. See the example notebook here: aka.ms/amlcomputenb
+ + Improved error messages in case of Dataset execution failures.
+ + **azureml-dataprep**
+ + Made warning to upgrade pyarrow version more explicit.
+ + Improved error hand